AXE
Paw Prints: Writings of the maddog
I was at a Linux Meetup a couple of months ago and ran into another former employee of Digital Equipment Corporation (“They are everywhere, everywhere!”) named Larry Camilli. We started talking, and somehow the conversation came around to a program called “AXE” (vaX Architecture Exerciser). Larry looked at me strangely and said, “That was my program! That is what I did for Digital!” Larry went on to say that few people he met, even those who had worked for Digital, knew about the AXE (or its MicroVAX equivalent, the “MAX”) program. I replied that I had worked in an operating system group, so we often dealt with new CPUs, and we knew the *AXE programs and their capabilities very well.
AXE was a program for testing the instruction set and hardware of VAX computer systems. The VAX was a complex instruction set machine, with over 256 operation codes. As an example, a single machine language instruction could calculate the Cyclic Redundancy Check for a block of data. One instruction might also have multiple operands, with 64 different addressing modes for each operand to retrieve data from or deposit data to memory and other registers. Instructions could also operate in privileged or non-privileged mode, or trigger exceptions such as arithmetic traps and interrupts. Instructions, and even operation codes, could span page boundaries. It does not take high-order mathematics to realize that, with all the different op codes, addressing modes, operands, data types, and ramifications of putting data into and through cache memory, even one run testing all of this would take a significant amount of time at instruction speeds rated at one million instructions per second (one “VAX Unit of Processing,” or “VUP”).
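A back-of-the-envelope calculation shows how quickly this test space explodes. The opcode and addressing-mode counts below come from the description above; the operand count and the per-case instruction cost are purely illustrative assumptions, and this sketch ignores data types, privilege modes, traps, and page-boundary cases entirely:

```python
# Rough estimate of the VAX test space described above.
# 256 opcodes and 64 addressing modes are from the article;
# the 3-operand shape and 1,000-instruction cost per test case
# are illustrative assumptions only.

OPCODES = 256      # the VAX had over 256 operation codes
ADDR_MODES = 64    # addressing modes available per operand

def combinations(operands: int) -> int:
    """Addressing-mode combinations for one opcode with N operands."""
    return ADDR_MODES ** operands

per_opcode = combinations(3)       # 64^3 = 262,144 combinations
total = OPCODES * per_opcode       # about 67 million cases

# At one VUP (one million instructions per second), assuming each
# case needs ~1,000 instructions of setup, execution, and checking:
seconds = total * 1_000 / 1_000_000
print(f"{total:,} cases, roughly {seconds / 3600:.0f} hours")
```

Even this deliberately narrow slice of the problem runs to tens of millions of cases; folding in data types, exceptions, and instruction sequences is what pushed real runs into months.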
However, even that was not enough, as some combinations of instructions could also cause difficulties. To really test a new design for a VAX CPU, it might take several MONTHS of testing (24 hours a day, seven days a week) at one-VUP speed just to complete one “run” of the AXE program.
AXE created “test cases” composed of a VAX instruction, operand specifiers, and operands appropriate to that instruction, then stored the results in a database. Over many years of running the program, an appropriate set of results was formulated, which could be compared against new processors as they were developed.
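The record-and-compare idea behind AXE can be sketched in a few lines of Python. Everything here is hypothetical stand-in code: AXE generated real VAX instructions and captured real hardware state, while this sketch just derives deterministic pseudo-results from seeded test cases and checks a new run against a stored “golden” database:

```python
# Minimal sketch of a record-and-compare exerciser, assuming a
# deterministic stand-in for "run this case on the hardware".
import json
import random

def generate_case(seed: int) -> dict:
    """Build a pseudo test case (opcode, modes, operands) from a seed."""
    rng = random.Random(seed)
    return {
        "opcode": rng.randrange(256),
        "modes": [rng.randrange(64) for _ in range(2)],
        "operands": [rng.randrange(2**16) for _ in range(2)],
    }

def execute(case: dict) -> int:
    """Stand-in for executing the case on a CPU. A real exerciser
    would capture registers, condition codes, memory effects, and
    any exceptions raised, not a single synthetic value."""
    return (case["opcode"] * 31 + sum(case["operands"])) & 0xFFFF

def record_golden(seeds, path="golden.json"):
    """Run every case on the reference machine; store the results."""
    golden = {str(s): execute(generate_case(s)) for s in seeds}
    with open(path, "w") as f:
        json.dump(golden, f)

def check_new_cpu(seeds, path="golden.json"):
    """Re-run the same cases on a new design; report mismatching seeds."""
    with open(path) as f:
        golden = json.load(f)
    return [s for s in seeds if execute(generate_case(s)) != golden[str(s)]]

record_golden(range(1000))
mismatches = check_new_cpu(range(1000))
```

Because the cases are derived from seeds, only the seeds and results need storing, and the same cases can be regenerated years later against a new processor, which is what made the accumulated database valuable.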
Once the new CPU had passed AXE, it was considered compliant with “DEC Standard 32”, which was the VAX Architecture Standard, defining how a VAX machine worked. If an operating system was DEC Standard 32 compliant, then it could run on a VAX.
At Digital, new CPUs were a very precious product. The layouts of the printed circuit boards were often done partially by hand. The components were very expensive, so prototype boards cost a lot of money to create. Later, as the boards went into early manufacture, the prices would drop considerably, and when they went into mass manufacture, the cost of the boards would drop once again. Software groups and peripheral manufacturing groups that wanted an early prototype would not only have to justify their need but would have to pay up to nine times the expected list price of the component to buy one, and that was for a board that would not be repaired if it stopped working. As a major operating system group inside of Digital, we would typically purchase one new CPU prototype, then wait for “seed units” (early manufacturing, which typically “sold” for three times projected list price), and finally production units. Looking at the results of AXE runs was one way of determining that the CPU design was reasonably ready.
It was because of this high expense and the requirement that engineering units purchase their own equipment that a particular new VAX CPU design escaped from Digital without the AXE program having been run on it. The engineering manager of the CPU development unit did not provide the AXE people with the new CPU design, nor did the hardware development group run the AXE program on their CPU themselves. In fact, the AXE engineering staff had to purchase a production unit with their own funds to run the AXE program. Of course, by the time they could purchase the unit, several thousand of these new CPUs had been delivered to customers, and about halfway through the first run of AXE, the CPU halted with an error.
The AXE management contacted the hardware development group's managerial team to tell them the bad news. While no operating system that Digital supported would have seen the problem, nor would any compiler that Digital supported generate the series of instructions and results that would create the issue, there was no guarantee that an operating system or compiler from a third party, or a future system, would not experience the problem.
From that point on, damage control had to be executed. Digital's legal department was called in, and a carefully crafted letter went out to the new owners of the CPUs that were already shipped that said something along the lines of “your new CPU will probably never have this problem, but if it does....”
This was a shame, since it was a blemish on the VAX architecture that would not have occurred but for the arrogance of that particular engineering manager and group, who did not see the need for running the test.
Of course, we did not have blogs back in those days, WikiLeaks was something that would happen in the future, and the customers' systems did exactly what they had purchased the systems for, so most of those letters were just filed “for future use”. It is doubtful that those CPUs even exist today other than in some museum, but they were never “patched”, so even though newer CPUs had the problem fixed, that bug still exists in those preliminary units.
I sometimes ponder this extent of testing of hardware versus what we do for software. Some people argue that software is much more complex (therefore hard to create adequate tests for) and easier to update with “patches” (therefore not worth the cost of developing comprehensive tests). However, the effort saved and the issues avoided through better testing, and through the creation of test harnesses from the beginning of the software design, might help reduce the number of “letters” sent out from “the legal department” and the loss of customer satisfaction that made those letters necessary.
Larry provided me with some AXE documentation and historic letters on AXE. He says that he still has a couple of CD-ROMs with the AXE program on them. I have encouraged him to contribute copies of this material to the Computer History Museum in Mountain View, California, as an example of “Engineering done right”.