Part III · Chapter 18

The Crude Oil of the 1980s

US industry seeks government intervention. → Chips reframed as a strategic resource, not just a product.

Jerry Sanders had a line he liked to use, and by the early 1980s he had used it often enough that other people in the chip industry began to use it too. Standing in front of audiences at industry banquets, in green-room interviews before television cameras, in the small conference rooms where Wall Street analysts gathered to hear from the chairman of Advanced Micro Devices, Sanders would lean into the phrase as though he had only just thought of it. Semiconductors, he would say, are the crude oil of the 1980s. He delivered the line with the showman’s timing he had carried out of his Chicago boyhood and his early years on the Motorola sales floor. The metaphor stuck. By the middle of the decade it had escaped his speeches and was being repeated in Pentagon briefing rooms and on op-ed pages by people who had never met him.

It was not the kind of metaphor American chip executives had used before. For most of the postwar period, the men who ran Silicon Valley had described their product the way they thought about it, as a manufactured good in a competitive market. Chips were a category like ball bearings or jet engines, more sophisticated than most, but a category nonetheless. The country that made the best ones at the lowest cost would sell the most of them. The Pentagon was a customer. The market was the market. The government’s job, in Sanders’s own youthful telling, was to stay out of the way and let the engineers compete.

By the middle of the 1980s that frame had begun to feel inadequate to the men inside it. The crude-oil metaphor carried a specific historical memory. Americans of a certain age remembered viscerally the OPEC embargo of 1973 and the gas lines that returned with the second oil shock of 1979. They remembered the discovery, made in the worst way, that a commodity an industrial economy depended on could be shut off by foreign decision, and that a country whose factories ground to a stop in a fortnight was a country that no longer made its own choices. Crude oil was not just an input. It was a question of sovereignty. To call semiconductors the new crude oil was to argue that the United States, by letting its chip industry be hollowed out by Japan, was sleepwalking back into the kind of strategic dependency it had spent the previous decade trying to claw its way out of.

The argument did not begin in Sanders’s speeches. By 1981 it was already loose in policy Washington. Robert Reich, who had run the policy planning staff at the Federal Trade Commission under Carter and moved to a lectureship at Harvard’s Kennedy School, published a long Foreign Affairs essay in 1982 called “Making Industrial Policy,” arguing that the United States was already running an industrial policy and that pretending otherwise was costing it the high-value sectors it could least afford to lose. Reich and his coauthor Ira Magaziner followed the essay with the 1982 book Minding America’s Business, which catalogued the categories in which Japanese and German firms were beating American ones and proposed that the federal government identify strategic industries and steer credit, research, and education toward them. The argument was contested at every step by orthodox economists at the Heritage Foundation and the American Enterprise Institute, who saw in it a return to the dirigisme the Reagan administration had been elected to dismantle. By 1983 it was clear the new president would not adopt anything called industrial policy. The phrase became a campaign liability. The substantive argument did not go away. It migrated, by mid-decade, to a place where the libertarian objection had less purchase.

That place was the Pentagon. The Department of Defense had been quietly building, since the late 1970s, a strategic posture that depended on integrated circuits. The Long Range Research and Development Plan that William Perry had set in motion in 1978, the Very High Speed Integrated Circuits program the DoD launched in March 1980 as a joint Army-Navy-Air Force effort, and the Strategic Computing Initiative DARPA stood up in 1983 all rested on a tacit assumption: that the United States would always be able to source the underlying chips from companies inside its own borders. By 1984 the assumption was visibly cracking, and the people responsible for the Pentagon’s bet had begun to ask what would happen to American conventional deterrence in Europe if the precision-strike weapons the offset depended on had to be assembled out of chips bought in Tokyo.

The first attempt to answer the question came from outside the chain of command. Robert Galvin, Motorola's chairman, sat at an unusual junction of the chip industry and the defense base; his company sold processors and microcontrollers to commercial markets and rugged radios to the U.S. Army. By 1984 he had begun to argue, in private letters and board meetings, that the Pentagon's procurement bureaucracy was obscuring something the senior services already knew but had not articulated. The systems they were buying were filling up with foreign components. Procurement officers tracking content on a part-by-part basis did not see an alarming pattern, because each individual chip had been competed and specified and bought through the rules. The pattern was visible only when one stepped back and looked at the industrial base as a whole. The merchant memory market was gone. The merchant logic market was thinning. The supplier base for chip-making equipment, the masks and steppers and inspection tools that determined what could be made at all, was migrating to firms in Tokyo, Yokohama, and Kawasaki. Within a few years, by Galvin's count, the Pentagon would be unable to specify a leading-edge weapons electronics system without naming a Japanese supplier somewhere in the bill of materials.

The argument acquired weight from an older man working inside the federal government. Erich Bloch had spent thirty years at IBM, where in the early 1960s he had run the Solid Logic Technology program that gave the System/360 mainframe its competitive advantage and where, by his own account, he had been the IBM executive who took chips most seriously as a strategic capability. In the early 1980s he had helped Noyce stand up the Semiconductor Research Corporation, an industry-funded vehicle for university research that pooled American chipmakers’ contributions to keep doctoral programs alive in process technology. In 1984 Reagan appointed Bloch to run the National Science Foundation, where he became the most senior pro-industrial-policy voice inside the executive branch — careful never to use the phrase, comfortable arguing that federal research investment should flow toward sectors with strategic externalities, and willing to write memos to OMB explaining why economic-textbook market-failure arguments were inadequate to the case. Through 1984 and 1985 Bloch worked the inside of the administration the way Sanders worked microphones. By the end of 1985 he had a quiet alignment with Galvin, with Sporck at National Semiconductor, with the senior staff at the Office of Net Assessment, and with the two undersecretaries of defense most engaged with the procurement industrial base, on the proposition that the chip industry was no longer a normal commercial sector and could no longer be treated as one.

What still did not exist, in late 1985, was a document. Reagan-era Washington moved on documents. The administration would not act on industry lobbying alone, and Congress, for all its restiveness, needed something to point at. The vehicle for producing that document turned out to be the Defense Science Board, the rotating panel of senior industry, academic, and former-government technologists the secretary of defense convened to write rigorous reports on subjects the Pentagon's own bureaucracy could not investigate without conflict of interest. The DSB's standing chairman in early 1986 was Charles A. "Bert" Fowler, who had himself been thinking about the chip problem. He raised the question with Norman Augustine, the president of Martin Marietta and a veteran of the board in his own right. Augustine had spent the previous year listening to industry executives tell him quietly that the Pentagon's procurement machinery was not yet seeing what its own planners were already alarmed about.

In February 1986 the secretary of defense chartered the Task Force on Defense Semiconductor Dependency. The chartering memo named Augustine as chairman. Its remit, in language that would be quoted back at it for the next decade, was to assess the impact on U.S. national security if leading-edge technologies were no longer in this country. The phrasing was deliberate: were no longer, not might one day cease to be. The condition was framed as already in progress, not as a future hypothetical. The task force's executive secretary, E. D. "Sonny" Maynard, was the director of the Pentagon's VHSIC program, which meant the staff work would be done by the people who knew most precisely what the Pentagon was building its precision-strike weapons out of.

The task force’s composition, which Augustine spent weeks negotiating, drew from the integrated firms that made chips for their own use, from defense primes, from a former undersecretary of commerce and a former undersecretary of defense, and from Bloch as the director of the National Science Foundation. The fifty witnesses the task force would interrogate over the next ten months ranged from chairmen of the surviving merchant chip companies to mid-level procurement officers responsible for specific weapons programs. Several of the briefings were so specific about which weapons systems already depended on Japanese parts that the resulting reports were classified and would remain so for years.

The hearings ran through the spring and summer of 1986. Augustine put witnesses in front of the panel in long sessions in which executives, retired officers, and procurement officials were asked to describe what they had seen, and were then questioned by board members who had read every preceding day’s transcript. The chairmen of equipment makers and small specialty suppliers walked the panel through what would happen to American fabs if particular Japanese vendors chose to delay shipments of resists, masks, or steppers. Each individual data point, on its own, was something a Pentagon procurement officer might have shrugged off. Stacked together, they began to look like a portrait of a country that had been quietly losing something it had not known it was losing.

The men running the task force were not unsophisticated about Japanese motives. Several had spent careers either competing with Japanese firms or partnering with them, and they did not believe that NEC or Hitachi planned to pull state-of-the-art chips off the American market in some future crisis. The argument they were building was not about a shut-off scenario. It was subtler. A lost industry pulled its supplier base down with it, and a lost supplier base dispersed the engineers and institutional memory that had grown up inside it. By the time an emergency drove Washington to want chips back, the workforce capable of making them had retired or moved on. The crude-oil metaphor Sanders had been firing off in his speeches captured something the Pentagon's own analysts had been struggling to phrase with the same compactness. A strategic resource was not just something a country needed in wartime. It was something the country had to be able to make in peacetime, on a routine commercial basis, because the workforce and the supply chain that produced it could not be summoned by emergency procurement once it had been allowed to disperse.

This was the conceptual move. The rhetoric of chips as crude oil sounded, on the surface, like a complaint about Japanese pricing or a recycled OPEC analogy. Underneath it sat a more interesting argument about the persistence of industrial capacity. American economic orthodoxy, as taught at Chicago and at the Reagan-era Council of Economic Advisers, held that production sites were fungible. If American consumers could buy DRAMs more cheaply from Hitachi, the displaced American DRAM workforce would migrate to other industries in which the United States had comparative advantage, and the country as a whole would be richer. The chip executives and the Pentagon strategists were now arguing that the comparative-advantage logic broke down at the upper end of the technology stack. Some industries, once lost, did not regenerate, because the workforces and supplier ecosystems that supported them could not be reassembled at will, and the foreign firms that had captured them became monopolies the lost country could no longer dictate terms to. If that was true, the standard economic case for non-intervention failed at exactly the place where the security case began.

The argument had backers in the academic literature. Charles Ferguson, a young political scientist at MIT finishing a doctorate on the politics of high technology, was preparing a series of articles for the Harvard Business Review and Foreign Policy that would, when they appeared in 1988 and 1989, become the most widely cited theoretical defense of the strategic-industries framing. Ferguson argued that Silicon Valley's atomized merchant model was structurally inadequate to compete with the deeply capitalized, vertically integrated Japanese conglomerates, and that absent an American restructuring around large keiretsu-style alliances, the chip industry would simply be ground down. The argument drew scrutiny from economists, among them the Brookings economist Kenneth Flamm, who had been working on the U.S.-Japan semiconductor relationship since the early 1980s. Flamm was sympathetic to the security framing but more cautious about policy conclusions; he thought industry alliances might capture rents without producing the manufacturing improvements they promised. The two positions nonetheless converged on a single empirical conclusion. Markets, left alone, were not going to restore the position the United States had lost. If the country wanted that position back, somebody was going to have to spend money getting it.

By the middle of 1986, while Augustine's task force was still receiving witnesses, the political ground had shifted in ways the chip executives had been working toward for two years. The U.S.-Japan Semiconductor Trade Agreement had been negotiated and would be signed that September. Reagan had, against the instincts of most of his advisers, conceded that the Japanese chip market was structurally closed and that managed-trade machinery was warranted. In Congress, the Omnibus Trade and Competitiveness Act was being drafted with provisions for a formal advisory committee on semiconductors and language that would, the following year, instruct the Pentagon to assist the chip industry. In the trade press and the general newsweeklies, the framing had migrated from a story about pricing disputes to a story about whether the United States was losing, in slow motion, the technological asset the rest of its economy now sat on top of.

Inside the Reagan White House the residual ideological resistance came from places that would not yield to industry lobbying alone. The Council of Economic Advisers under the monetarist Beryl Sprinkel, OMB under James Miller, and the White House counsel’s office all held positions that translated, roughly, to the proposition that government-funded industry consortia were a category mistake and that the right response to the Japanese was for American firms to compete harder. The argument that broke the resistance was not the trade argument. It was the offset argument. The Pentagon’s deterrent posture in Europe, the bet that American precision-guided weapons could counter Soviet armor mass, was held up as the strategic context in which the chip industry had to be understood. Defense officials briefed individual cabinet members through the spring and summer of 1986 on what would happen to the offset if the leading-edge chips it ran on had to be sourced from Japanese suppliers who had declined, repeatedly, to disclose the export licensing they would apply to dual-use technologies. The CoCom export-control regime had been built precisely to keep American electronics out of Soviet hands. The United States was about to lose the industry that produced what CoCom had been built to control.

The reframing was an act of language as much as policy. Calling chips a strategic resource was not the same as calling them a national-security industry; that older phrase carried a tradition of regulation, procurement preference, and industrial-base maintenance that the United States had applied to shipyards and aircraft engines and never to commercial silicon. To say that chips deserved that status was to assert a continuity. An integrated circuit was a successor to a propeller blade, and a chip fab was a successor to the destroyer yard that machined it. None of those analogies had been part of the intellectual furniture of Silicon Valley five years earlier. By mid-1986 they were the furniture, arranged in a way that made the policy conclusions feel obvious rather than radical.

Sanders, off in his California office, had been thinking about something simpler. He had watched AMD’s microprocessor business survive the Japanese onslaught only because Intel had pulled it through a second-source license. He had watched his memory line evaporate. He believed, in the way founders believe things, that chips were as essential to the postindustrial economy as petroleum had been to the industrial one, and he wanted American policy to reflect that belief. He had not known, in the early 1980s, that his throwaway line would still be in circulation a decade later, attached now to Pentagon reports and to congressional hearings he had not been invited to. The line had a life of its own. It had become the shortest available answer to the longest available question: what kind of thing was a chip, exactly, and what did the country owe an industry that made one. By the time Augustine’s task force began drafting its report in the fall of 1986, the country had decided the answer was not the answer it would have given five years earlier, and the Pentagon and the Capitol and the press were simply catching up to a change in vocabulary that Sanders, Galvin, Sporck, and the small group around them had been pushing for since the start of the decade.

The task force was still hearing witnesses. The administration’s free-market holdouts had not yet folded. The 1986 trade agreement was already failing, in ways that would take another year to surface. And the industry the rhetoric had been mobilized to save was, by every quantitative measure available in the fall of 1986, sliding faster than the rhetoric could keep up with. The conceptual reframing had succeeded. The country had come around to the proposition that semiconductors were a strategic resource. What the country was about to discover, in the months ahead, was how much was already gone.