Like most programmers, I have spent my entire career working with general-purpose “do anything” computers. Every machine I’ve touched, from $3 microcontrollers to $3,000 servers, has been based around the same basic design: a processor that supports universal math and logic combined with a chunk of flexible memory that can hold whatever code and data you want to put in it.
Need your universal computer to do something new? Just load different code into the memory and suddenly your spreadsheet machine is a web server or a fluid simulator.
But what if your computer only ever had to do one very specific thing? What if your only goal was to get a spaceship to the moon with near-perfect reliability? And what if you had an entire team of engineers willing to build you a custom computer from scratch? How would your one-thing-only computer be different from a generic off-the-shelf processor-and-memory combo?
That is the question answered by The Apollo Guidance Computer: Architecture and Operation, a four-hundred-page book dedicated entirely to the design and programming of the computer that sent the first man to the moon.
As you’ve probably guessed, the aspect of the book I found most interesting was how profoundly weird the guidance computer was compared to the generic, mass-produced machines we use for 99.999% of computation.
For example, normal computers have flexible memory that lets the user program whatever they want. After all, the manufacturer has no idea whether their customers are going to be building stop lights, washing machines, or air conditioners. But the guidance computer was only ever going to be used for flight calculations. It didn’t need to be flexible. What it did need to be was reliable in the harsh, slightly radioactive void of space. So instead of programmable memory it stored its code in the form of wires woven around and through hundreds and hundreds of tiny magnets, creating an extremely reliable form of read-only memory.
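The encoding idea behind that woven “rope” memory is simple enough to sketch: a sense wire threaded *through* a magnetic core reads as a 1, and a wire routed *around* the core reads as a 0, so every word of the program is physically fixed at manufacture time. Here is a toy illustration of that idea in Python (a hypothetical sketch for intuition only; the real hardware packed many wires per core and read words out electrically):

```python
def weave_rope(words, bits=16):
    """'Manufacture' a rope: for each word, record which cores the sense
    wire passes through (bit = 1) versus around (bit = 0), MSB first."""
    return [[(word >> i) & 1 for i in reversed(range(bits))] for word in words]

def read_word(rope, address):
    """Reading just senses the fixed weave -- there is no way to write."""
    value = 0
    for bit in rope[address]:
        value = (value << 1) | bit
    return value

# Two 16-bit program words, permanently woven in:
rope = weave_rope([0x1234, 0xBEEF])
assert read_word(rope, 0) == 0x1234
assert read_word(rope, 1) == 0xBEEF
```

Note what’s missing: there is no store operation at all. A cosmic ray can flip a bit in electronic memory, but it can’t re-thread a wire, which is exactly the reliability property the designers were after.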
Or what about mathematics? In normal computers a lot of silicon and logic is spent on making sure processors can gracefully and accurately work with numbers both extremely large and extremely small. But implementing that sort of mathematical power in a machine with as many constraints as the guidance computer would have been difficult.
The solution? Don’t solve the problem. After carefully studying the physics problems they needed to solve, the designers realized they never needed to multiply a very very large number by a very very small number. That meant they didn’t need a full range of flexible numbers. They could instead save processor complexity and instruction space by doing relatively simple math and leaving it up to the programmers to remember which equations were working in dozens of kilometers rather than fractions of seconds. Magnitude is relative, and as long as the computer fires the thrusters at the right moment it doesn’t really matter whether it was doing 10 + 3 instead of 10,000,000 + 3,000,000.
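This trick is essentially programmer-managed fixed-point scaling: the hardware only ever sees small integers, and the unit each count represents lives in the programmer’s head (and comments). A minimal sketch of the idea, with an entirely made-up scale factor rather than anything from the actual AGC:

```python
# Illustrative sketch, not real AGC code. The programmer's convention
# (here, one count = 1,000,000 km) carries the magnitude; the machine
# only performs small-integer arithmetic.
KM_PER_COUNT = 1_000_000  # assumed convention for this particular equation

def add(a, b):
    """All the hardware ever does: add two small fixed-point values."""
    return a + b

# 10,000,000 km + 3,000,000 km, expressed in counts:
total_counts = add(10, 3)              # the computer literally computes 10 + 3
total_km = total_counts * KM_PER_COUNT
assert total_km == 13_000_000          # same answer, tiny hardware
```

The cost, of course, is that nothing stops a programmer from adding a number scaled in kilometers to one scaled in seconds; the machine has no idea the result is nonsense. Keeping those conventions straight was part of the job.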
These design decisions, and hundreds of others, are meticulously laid out over hundreds of pages of thorough technical description. Technical enough that following along might be difficult if you’ve never taken a class or done some reading on the fundamentals of computer architecture.
So overall a very niche book, but for someone with the right background it’s a true pleasure to read. It’s always interesting to learn about the technical accomplishments of NASA, and thinking through the design decisions of a very specific computer really helps highlight that modern computer architecture, while very powerful, isn’t actually mandatory or universal.