Liftoff
Sputnik, Apollo, and the explosion of military and space demand: how government money paid the upfront cost of building the industry.
A radio operator at Columbia University tuned a shortwave receiver to 20.005 megahertz on the night of October 4, 1957, and pulled in a sound that within hours would be played on every American television network. It was a thin, mechanical chirp, three-tenths of a second on, three-tenths off, repeating without variation as a polished aluminum sphere the size of a beach ball passed over the eastern seaboard at five miles a second. Sputnik weighed 184 pounds and carried nothing but two radio transmitters, a battery that would last three weeks, and four trailing whip antennas. It was almost a toy. It was also a demonstration that the Soviet Union now possessed a rocket large enough to lob a hydrogen bomb to Washington, and that the United States did not.
Eisenhower tried to be calm about it. In a press conference five days later he called the satellite “one small ball in the air” and said it did not raise his apprehensions “one iota.” The country did not believe him. Senator Lyndon Johnson, then the Senate majority leader, opened hearings within weeks and told colleagues that whoever controlled the high ground of space would control the world. Edward Teller, father of the hydrogen bomb, told a national audience that the United States had lost a battle more important than Pearl Harbor. The shock acquired a name, the “Sputnik moment,” that Americans would still be reaching for half a century later.
What Sputnik actually triggered was a budget. Within a year Eisenhower had founded the Advanced Research Projects Agency that would become DARPA, accelerated the Air Force’s intercontinental ballistic missile programs, and signed the National Defense Education Act. In July 1958 he signed the bill creating the National Aeronautics and Space Administration. The federal research and development line in the budget, already a few billion dollars a year when Sputnik flew, roughly tripled over the following decade. Behind every line of that budget was a question the engineers were going to have to answer: how do you build a guidance system small enough and reliable enough to ride a rocket and still know where it is to within a mile after a flight of thousands of miles?
For the missile men, the answer was the device Patrick Haggerty’s company in Dallas and Robert Noyce’s company in Mountain View would each, in their own way, invent in the two years after Sputnik flew: the integrated circuit.
In 1957, an inertial guidance computer for an ICBM weighed several hundred pounds and consumed enough power to dim the lights on the test stand. Its logic was built from thousands of discrete transistors and resistors soldered into modules that filled a refrigerator-sized cabinet. Every soldered joint was a place where the missile could fail. The Air Force’s first deployed ICBM, the Atlas, used analog electronics and ground-based radio guidance because a self-contained digital computer small enough to fly was, in the late 1950s, just barely a research dream. The competition between the integrated circuit and the discrete transistor module would be settled, in the end, not in laboratories but in two specific procurement decisions made within a few months of each other in Washington and Cambridge.
The first decision belonged to the Air Force. Boeing’s solid-fuel ICBM, the Minuteman, had been deployed in 1962 in its original configuration with a guidance computer called the D-17B, designed by Autonetics, the missile electronics arm of North American Aviation. The D-17B used discrete components and weighed about 62 pounds. It was, by the standards of 1962, a marvel. By the standards of the men who built it, it was almost finished as a design. The Air Force had already commissioned the D-37, a smaller, faster successor for the upgraded Minuteman II, and the D-37’s logic could not be built from discrete parts and meet its weight, range, and reliability targets. Autonetics needed integrated circuits.
Patrick Haggerty had been waiting for this contract. Haggerty had become president of Texas Instruments in 1958, the same year his employee Jack Kilby had built the first working integrated circuit. He had spent the four years since pushing the technology against deep customer skepticism. The Defense Department’s procurement officers, conditioned by a decade of struggle to make discrete transistors meet military reliability specifications, regarded a chip with multiple components on it as a single point of failure with multiple ways to fail. Haggerty’s pitch was the inversion: fewer interconnects meant fewer failures, and TI would prove it on a contract that put the chips inside a nuclear missile.
In the fall of 1962, Texas Instruments won the contract to design and produce twenty-two custom integrated circuits for the D-37. It was, as the Computer History Museum’s records describe it, the first major custom IC design program in the industry. The chips ranged from simple NAND gates and flip-flops to specialized linear amplifiers and a demodulator for the missile’s gyros. The price was breathtaking by commercial standards. A single flip-flop went for around fifty-five dollars in 1962 money, the equivalent of several hundred dollars today, but the Air Force was not buying logic by the gate. It was buying the ability to put a self-contained inertial computer aboard a missile that would sit in a silo for years and then, on command, navigate to a city seven thousand miles away.
Autonetics shrank the D-37 to about 26 pounds. Within three years of the contract award, Minuteman II was rolling onto strategic alert with a digital computer in its guidance section built almost entirely from TI integrated circuits. By 1965, the Computer History Museum’s procurement records show, the Minuteman program had become the single largest consumer of integrated circuits in the United States. The Air Force was not subsidizing the IC industry as a matter of policy. It was buying the only product that met its specifications. But the effect was the same: a guaranteed customer at premium prices, tolerant of yields no commercial market would accept, for years on end.
The second decision belonged to a soft-spoken hardware engineer in a Cambridge basement. Eldon C. Hall had joined the MIT Instrumentation Laboratory in 1952, after Eastern Nazarene College and Harvard, and he had spent the late 1950s talking the Navy into letting digital computers fly inside Polaris submarine-launched ballistic missiles. When NASA awarded the Instrumentation Lab the contract to build the Apollo Guidance Computer in August 1961, the lab’s director, Charles Stark Draper, the man who had invented inertial navigation as a discipline, put Hall in charge of the hardware.
The original AGC design used core-transistor logic, the same kind of discrete-component circuitry that had flown on Polaris. By the spring of 1962, Hall and his colleagues could see that it would not be enough. Apollo’s specification called for a computer that could perform real-time orbital navigation, attitude control, and rendezvous calculations from inside a 70-pound box that drew 55 watts and never failed. The math did not close. The discrete-component design Hall had inherited was too slow, too heavy, and too dense with solder joints to satisfy the reliability budget.
In November 1962, Hall walked into a meeting with NASA’s Apollo program managers and proposed that the entire computer be redesigned around a single integrated circuit: the Fairchild Type G NOR gate. He had two arguments. The first was that a chip with a few transistors etched into it had fewer joints, fewer wires, and fewer ways to break than the equivalent module of discrete parts. The second was simpler. If the entire computer were built from one device, the Instrumentation Lab could pour every dollar of its quality program into qualifying that one device, and the volume of the order would force its supplier to drive yields up and prices down.
The bet was real. Integrated circuits had been on the market for less than two years. They had not flown in space. Several of NASA’s contractor companies, which would have preferred to keep selling discrete-component computers, lobbied actively against the change. NASA’s reliability staff at Houston demanded a qualification program that subjected every batch of chips to thermal cycling, vibration, and centrifuge tests that would have been considered abuse for any other electronic component. Hall accepted the conditions. The lab inspected its suppliers’ clean rooms, walked their production lines, and rejected lots that other customers would have shipped. Hall later wrote in his memoir, Journey to the Moon, that the goal was a computer that would work right the first time, every time, because there would be no opportunity to debug it during a translunar coast.
NASA approved the redesign in late 1962. The Block I prototype, frozen in early 1963, used about 4,100 Fairchild Type G NOR gates per machine. Each Type G chip contained one three-input NOR, the simplest piece of combinational logic anyone could imagine putting on silicon. By choosing a single, deliberately humble component and ordering it by the hundred thousand, Hall had turned the AGC contract into the largest single procurement of integrated circuits in the world.
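There was sound logic beneath the humility. A NOR gate is functionally complete: NOT, OR, and AND can each be wired from NORs alone, so any combinational circuit, and ultimately the entire machine, reduces to a single part number. A minimal Python sketch of those compositions (illustrative only; it is not drawn from the AGC’s actual schematics):

```python
# NOR is functionally complete: every Boolean function can be built
# from it alone. These compositions mirror the reduction that let the
# AGC's designers standardize on one three-input part.

def nor(*inputs: int) -> int:
    """Three-input NOR (unused inputs tied low, as on the Type G)."""
    return 0 if any(inputs) else 1

def not_(a):          # NOT: a NOR with its inputs tied together
    return nor(a)

def or_(a, b):        # OR: NOR followed by an inverter
    return not_(nor(a, b))

def and_(a, b):       # AND: invert both inputs, then NOR (De Morgan)
    return nor(not_(a), not_(b))

# Sanity check: exhaustive truth tables for the derived gates.
for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
    assert not_(a) == 1 - a
```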
In 1962, the U.S. government bought essentially all integrated circuit production in the country. In 1963, federal procurement still accounted for around 85 percent of IC sales, and the Apollo program alone, by Computer History Museum estimates, consumed roughly 60 percent of the year’s IC output. MIT’s first purchase order, one hundred Type G NOR gates for evaluation, placed with Fairchild on February 27, 1962, came to $43.50 per chip. By the time the AGC was in volume production, the unit price had fallen into the $20 to $30 range. By the late 1960s, NOR gates of similar complexity sold to commercial customers for less than a dollar.
The price collapse was the predictable consequence of NASA and the Air Force ordering the same parts in tens and then hundreds of thousands of units, with reliability requirements that forced manufacturers into clean process technology and statistical quality control they would never have funded for a discrete-transistor business. Fairchild’s Mountain View plant ran around the clock. Texas Instruments built dedicated lines for the Minuteman program in Dallas. The two companies were learning, on the government’s dime, how to make integrated circuits that worked.
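The shape of that collapse follows what later came to be called the learning curve: unit cost falls by a roughly constant fraction with each doubling of cumulative volume. A back-of-the-envelope sketch, taking only the two endpoint prices from the procurement record above; the cumulative volumes are assumptions for illustration:

```python
import math

# Price points quoted above: $43.50 per gate on the first 100-unit
# order in early 1962, under $1.00 per gate commercially by the late
# 1960s. The cumulative volumes are illustrative assumptions.
p0, p1 = 43.50, 1.00
v0, v1 = 100, 1_000_000

# Wright's law: price(v) = p0 * (v / v0) ** (-b).
# Solve for the exponent implied by the two endpoints.
b = math.log(p0 / p1) / math.log(v1 / v0)
learning_rate = 1 - 2 ** (-b)   # fractional price drop per doubling

print(f"implied exponent b = {b:.2f}")
print(f"price falls ~{learning_rate:.0%} with each doubling of volume")
```

Under these assumptions the implied learning rate comes out near 25 percent per doubling, which is in the range later measured across the semiconductor industry.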
The supply chain for the AGC tells its own story. Fairchild was the original supplier, but by the mid-1960s the company’s commercial leadership had grown bored with the simple resistor-transistor logic devices the AGC used and was pushing toward more profitable product lines. Hall qualified second sources at Motorola, Signetics, Texas Instruments, Transitron, and Westinghouse, but only Transitron met his delivery schedules. The bulk of Block II production, which switched to a slightly more capable dual NOR gate package and used roughly 2,800 ICs per computer, came from Philco-Ford’s plant in Lansdale, Pennsylvania. Ford had bought Philco in 1961, in part to acquire its electronics capability for the auto industry, and Lansdale negotiated a license from Fairchild in 1964 to produce the AGC’s chips from Fairchild’s own masks. Across the lifetime of the program, the Apollo computer’s two blocks consumed roughly a million flat-pack integrated circuits.
A softer version of this history holds that Apollo and Minuteman were merely the first big customers, and that the chip industry would have arrived on schedule without them. The procurement records say otherwise. In 1962, no commercial market existed for the integrated circuit at any price a chip company could afford to sell it for. Computers in 1962 were built by IBM and DEC and Honeywell from discrete transistors that cost a few dollars apiece and worked. The closest thing to a commercial buyer for ICs was a hearing aid manufacturer that wanted them for their size, and a calculator maker that wanted them for their reliability. Neither would have funded the kind of process investment that turned silicon planar fabrication from a laboratory technique into a manufacturing science. Apollo and Minuteman did. They paid the prices that justified the yields, they ordered the volumes that justified the lines, and they accepted the failures along the way.
The two programs were also each other’s hedge. Apollo’s procurement peaked in the mid-1960s, around the time Minuteman II’s hit its stride. As the Apollo flight schedule wound down after 1969, Minuteman III’s D-37D variant kept TI’s lines busy and pushed the Air Force back into being the dominant IC customer. By the late 1960s, with the technology proven and prices falling, commercial customers began to appear in numbers. Calculators, mainframe peripherals, and the first wave of digital instruments started buying chips at prices the military programs had brought into existence. The federal share of the IC market, which had been near 100 percent in 1962, fell to roughly 72 percent in 1965 and to under half by the end of the decade.
Hall used to tell a story about a NASA reviewer who came through the Instrumentation Lab in 1963 and asked why the AGC prototype seemed to be made of nothing but the same little metal cans, soldered to thin printed circuit boards in row after identical row. Hall explained that they were all NOR gates. The reviewer asked how the computer added two numbers, and Hall walked him through the design at the level of a few gates, then a flip-flop, then an adder, then a register, then the entire arithmetic unit. The reviewer is supposed to have shaken his head and said it looked like a way of building a computer out of nothing. That was the point. The AGC was a computer built from the smallest possible piece of silicon logic, repeated tens of thousands of times, because that was the only design any human being could be confident would still be working when the lunar module fired its descent engine over the Sea of Tranquility.
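The reviewer’s ladder can be reconstructed in miniature. The sketch below, in the same illustrative spirit (it is not the AGC’s actual circuit topology), climbs the first two rungs: cross-coupled NORs make a set-reset latch, the storage element, and a handful more make a one-bit full adder, the seed of an arithmetic unit.

```python
def nor(*inputs: int) -> int:
    return 0 if any(inputs) else 1

def inv(x):
    return nor(x)

def rs_latch(s: int, r: int, q_prev: int) -> int:
    """Two cross-coupled NORs: one bit of storage.
    The feedback loop is iterated until it settles."""
    q = q_prev
    for _ in range(3):
        qbar = nor(s, q)
        q = nor(r, qbar)
    return q

def full_adder(a: int, b: int, cin: int):
    """One bit of an arithmetic unit, built from NOR gates only."""
    xor = lambda x, y: inv(nor(nor(x, inv(y)), nor(inv(x), y)))
    and_ = lambda x, y: nor(inv(x), inv(y))
    or_ = lambda x, y: inv(nor(x, y))
    half = xor(a, b)
    total = xor(half, cin)
    carry = or_(and_(a, b), and_(cin, half))
    return total, carry

# Exhaustive check against ordinary binary addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin

# Latch check: set, hold, reset.
q = rs_latch(1, 0, 0)   # set   -> 1
assert q == 1
q = rs_latch(0, 0, q)   # hold  -> 1
assert q == 1
q = rs_latch(0, 1, q)   # reset -> 0
assert q == 0
```

Everything above this level, registers, the arithmetic unit, the machine itself, is these two structures repeated.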
It worked. On July 20, 1969, the AGC absorbed a string of executive overflow alarms during powered descent, shed its lowest-priority tasks as it had been designed to do, and kept guiding Eagle all the way to the surface. Six successful lunar landings followed. The Block II computer flew on every crewed Apollo mission, on Skylab, and on the Apollo-Soyuz Test Project, and not one of its 2,800 integrated circuits is recorded as having failed in flight. Minuteman II stood alert into the 1990s. Their guidance computers stayed in service longer than most of the engineers who designed them.
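What those alarms exercised was the executive’s priority scheduling: when more jobs arrived than the machine had room for, it alarmed, restarted, and re-established only the work that mattered. A toy sketch of that discipline follows, with made-up job names and a simplified restart; the real executive, written in AGC assembly, survives in the released mission software listings.

```python
import heapq

class ToyExecutive:
    """Toy priority executive, loosely inspired by the AGC's: a small
    fixed pool of job slots; on overflow, raise an alarm, restart, and
    re-establish only the highest-priority jobs. Illustrative only."""

    SLOTS = 7   # the flight computer likewise had a small fixed pool

    def __init__(self):
        self.jobs = []   # heap of (-priority, name): highest first

    def schedule(self, priority: int, name: str) -> None:
        heapq.heappush(self.jobs, (-priority, name))
        if len(self.jobs) > self.SLOTS:
            self.restart()

    def restart(self) -> None:
        print("ALARM: executive overflow -- restarting")
        # Keep the most important work; shed the rest.
        self.jobs = heapq.nsmallest(self.SLOTS, self.jobs)
        heapq.heapify(self.jobs)

exec_ = ToyExecutive()
for prio, job in [(30, "guidance"), (20, "throttle"),
                  (10, "display update")] * 3:
    exec_.schedule(prio, job)
# After each overflow, guidance and throttle jobs all survive;
# the low-priority display updates are the first to be shed.
```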
When historians of the chip industry later tried to identify what made Silicon Valley possible, they kept coming back to one fact. The federal government, in the years between Sputnik and the first lunar landing, paid for an industry that no commercial market would have supported. It paid premium prices for parts that did not yet exist at scale. It paid the engineering cost of qualifying a new manufacturing technology against the most punishing reliability standards anyone had ever written. And then, having paid that cost, it stepped back and let the surplus capacity find commercial customers. Hall’s NOR gate, ordered by the hundred thousand to fly to the Moon, became the building block of the calculator that sat on a desk in a Dallas office in 1971 and the digital watch on a wrist in 1975. The yields that NASA had demanded made those products possible at prices ordinary people could pay.
That handoff did not happen by itself. Somebody had to figure out how to make these things in the millions, not the thousands, and at a unit cost low enough to put a chip on a circuit board next to a transistor radio rather than inside a missile. The man who would crack that problem was already at Texas Instruments in 1958, drawing pictures of photographic stencils on a chalkboard and arguing that you could pattern silicon the way you patterned a printed page.