The Fab
Part III · Chapter 14

The Pentagon's Offset Strategy

DoD's tech-over-numbers doctrine to counter Soviet conventional superiority. → Why the Pentagon bet so heavily on chips — context for Chapter 48.

William Perry was forty-nine when he walked into the Pentagon for his first weeks as Under Secretary of Defense for Research and Engineering, in the early spring of 1977. He had asked for a quiet office. The job came with a corner suite on the third floor of the E-ring, but Perry preferred the inner offices, where windowless walls lent themselves to whiteboards and where you did not have to nod at tourists craning over the river. He carried a thin briefcase, a habit from his years running a signals-intelligence company in Sunnyvale. Inside it were a blue ballpoint pen, a clean legal pad, and the morning’s intelligence digest, which he had been reading on the way in.

The digest was about tanks.

For most of the previous decade, the standing Pentagon estimate of Warsaw Pact armored strength in Central Europe had hovered in a range any serving officer could recite from memory. Roughly nineteen to twenty thousand main battle tanks on the eastern side of the inter-German border, against fewer than ten thousand on the western side, with several thousand more Soviet tanks held in the western military districts that could be moved forward in days. The numbers fluctuated a little year to year. The asymmetry did not. By the briefing Perry was reading that morning, the gap had widened. The Soviet General Staff was replacing older T-62s with T-64s and the new T-72s and adding tube artillery and bridging units at a pace NATO's analysts described in staff papers as alarming.

If the Warsaw Pact attacked westward through the Fulda Gap or across the North German Plain, NATO's standing forces could expect to be outnumbered roughly three to one in armor at the point of decision. If reinforcement from the United States arrived on schedule, which depended on a long Atlantic supply line and on ports the Soviets would attempt to neutralize, the ratio improved. If reinforcement was delayed, NATO's forward divisions had perhaps a few days before they would be either overrun or forced to ask Washington for nuclear release. None of this was secret. The chairman of the Joint Chiefs had said most of it in open testimony. The classified version sitting in Perry's lap that morning was simply more specific about which units were where.

Perry put the digest down. He had been thinking about this problem, in one form or another, for the better part of two decades.

He had come to the Pentagon by an unusual route. Trained as a mathematician at Stanford and Penn State, he had spent the 1950s at Sylvania’s Electronic Defense Laboratories in Mountain View, working on the algorithms that turned the noise coming off Soviet radio emitters into intelligence. In 1964 he had broken away with several colleagues to found a company called ESL Incorporated in Palo Alto, on a single insight: that the Soviet Union’s vast network of military and civilian transmitters bled signals into the upper atmosphere that, with sufficient processing, could be read like an open book. ESL built the boxes that did the reading. Its biggest customers were the National Security Agency and the National Reconnaissance Office. By the early 1970s ESL was supplying the signal-processing core of the Rhyolite reconnaissance satellites, which sat in geostationary orbit over Eurasia and listened to telemetry from Soviet missile tests.

Perry had spent thirteen years watching what good silicon could do to a hard problem. Each new generation of integrated circuits, half the size and twice as fast as the last, made tractable an analytical problem that had been considered impossible the year before. He had a Silicon Valley engineer’s instinct for the slope of the curve, and he believed it would keep going. He had arrived at the Pentagon convinced the curve was the most important strategic asset the United States possessed.

His new boss was waiting for him down the corridor.

Harold Brown was a child prodigy who had finished a Columbia physics doctorate at twenty-one, run Lawrence Livermore by his early thirties, served as Director of Defense Research and Engineering under Robert McNamara, and held the Air Force secretaryship through the worst of the Vietnam bombing campaigns. Jimmy Carter had named him Secretary of Defense in December 1976, weeks before the new administration took office. He was the first scientist ever to run the Pentagon. He brought to the job a physicist's preference for first-principles arguments and a thermonuclear weapons designer's tolerance for grim arithmetic. Friends said he listened more than he talked. Subordinates said the silence was usually him doing math in his head.

By the time he interviewed Perry in December, Brown had decided that the United States was at the wrong end of a worsening conventional balance and could no longer pretend its way out. The Eisenhower-era doctrine, what historians later named the First Offset, had answered Soviet manpower with American nukes. The arithmetic of massive retaliation had assumed, plausibly enough in 1953, that the United States could threaten any Soviet thrust into Western Europe with strategic nuclear strikes the Soviets could not survive. That assumption had decayed throughout the 1960s as the Soviets built their own strategic forces. By the mid-1970s the two arsenals were roughly at parity. By 1978, the Soviet stockpile would surpass the American one for the first time. Threatening nuclear war over a Soviet armored corps crossing the inter-German border was no longer credible to anyone, including the Soviets. American doctrine still rested on it, but by now the threat was bluff.

Brown wanted a different answer. He had watched, from inside the bombing campaigns over North Vietnam, what laser-guided bombs had done to the Thanh Hoa Bridge in 1972. He had read the post-strike imagery and asked, in private, what it would mean to apply the same logic to a Soviet tank column. The answer his own analysts gave him, and the small RAND-trained cadre Carter’s transition team had handed him, was that the implications were enormous if the United States chose to take them seriously, and modest if it did not. Brown had decided, in the weeks before his confirmation, to take them seriously. He had picked Perry to make it happen.

Farther down the E-ring corridor, in a windowless suite visitors usually walked past, sat a third man whose name almost never appeared in the newspapers but who had quietly assembled the intellectual framework Brown and Perry would now operationalize. Andrew Marshall was fifty-five in 1977, a wiry, soft-spoken former RAND analyst who had spent the early 1970s on Henry Kissinger's National Security Council staff and had moved to the Pentagon in October 1973 to direct the Office of Net Assessment, an entity Richard Nixon had ordered and Defense Secretary James Schlesinger had stood up. The office was small by Pentagon standards, never more than a few dozen analysts. Its remit was deliberately vague: compare American and Soviet military trajectories over the long run, twenty and thirty years out, and identify where the United States could shift the competition onto terms its rival could not match.

Marshall hated the word strategy. He thought it was misused by people who meant tactics. He preferred long-term competition. At RAND in the 1950s and 1960s, working alongside Albert Wohlstetter and Herman Kahn, he had absorbed the idea that two large bureaucracies locked into a multi-decade rivalry could be analyzed the way a chess player analyzes an opening: not as a contest of immediate moves but as a structural problem about which positions compounded and which decayed. In a 1972 RAND paper titled Long-Term Competition with the Soviets, written for the Office of the Secretary of Defense, Marshall had laid out the framework that would underpin everything ONA did for the next four decades. The American advantage, he argued, was not in mass. It was in the rate at which American institutions could absorb new technologies and translate them into operational capability. The Soviet system was good at making more of what it already knew how to make and much worse at integrating anything new.

The implication, in Marshall’s reading, was that the United States should pick competitions in which integrating new technology was decisive, and avoid competitions in which counting things was decisive. Counting tanks favored the Soviets. Integrating sensors, processors, and weapons favored the Americans. Choose the right ground.

Marshall had been watching Soviet open-source military journals for years. By 1977 he was tracking, with quiet excitement, a stream of articles in Voennaia Mysl’ and other Soviet General Staff publications about what their authors were calling the reconnaissance-strike complex: an integrated system of long-range sensors, automated command and control, and precision conventional weapons that the Soviets believed would, when fully matured, change the character of war. The Soviet writers were intellectualizing about something the Americans had not yet named. Marshall noted, in memos that circulated narrowly inside ONA, that the Soviets were doing the United States the favor of describing the future of warfare. The smart move, he wrote, was to build it.

This is the constellation of people, in the spring of 1977, who decided to bet the post-Vietnam American force structure on chips.

The decision did not arrive as a single document. It accreted over a few months, in classified meetings Brown chaired and Perry implemented, and in the analytical underbrush Marshall’s office kept feeding into the policy machine. By summer, the outline was clear enough that Perry could brief his deputies on it. The United States would stop trying to match the Warsaw Pact tank for tank. It would invest, on a scale and at a pace that defied the post-Vietnam budget mood, in three interlocking technology baskets that together would produce what Perry, in congressional testimony the following year, described with characteristic crispness as the ability “to see all high-value targets on the battlefield at any time, to make a direct hit on any target we can see, and to destroy any target we can hit.”

The first basket was sensors: airborne radar capable of tracking moving ground vehicles at long range, space-based reconnaissance with resolution good enough to count vehicles, electro-optical and infrared imaging that could see through night and weather, signals platforms that could pull a Soviet division’s command radios out of the ambient noise. The second was precision delivery: cruise missiles, guided artillery rockets, terminally guided submunitions, new generations of laser-guided bombs beyond the Vietnam-era Paveways. The third was stealth, the still-classified set of radar-defeating shapes and materials that would let American aircraft penetrate the dense Soviet air-defense net the way no Western fighter could in 1977.

Behind all three baskets sat the same enabler. Each required computation. Sensors that could not process their own data faster than the targets moved were useless; missiles that could not run inertial-and-terminal guidance algorithms in real time were useless; stealth aircraft that could not fuse their own sensor returns into a coherent battlespace picture were merely small. The decisive ingredient was not the airframe or the warhead. It was the silicon riding inside.

Perry knew where that silicon came from. It came, with very few exceptions, from a stretch of orchards south of San Francisco that had been transformed in the previous decade and a half into a chip-manufacturing economy unlike anything in the world. He had run a company there. He still owned a house there. He had sat across boardroom tables from Robert Noyce, Andy Grove, Jerry Sanders, Charles Sporck, and Wilfred Corrigan. He understood, in a way few of his Pentagon predecessors had, that the curve of integrated circuit performance was not a fact of nature but a property of an industrial ecosystem. If the United States kept that ecosystem healthy, the curve continued. If it lost the ecosystem, the curve would belong to whoever held it next.

The plan he and Brown laid down across 1977 and 1978 took some of its institutional shape from a study that had been working its way through the Pentagon's research bureaucracy since 1975. The Long Range Research and Development Planning Program, run by DARPA under Perry's predecessor Malcolm Currie, had spent two years asking a deceptively simple question: what would American conventional forces have to look like in the late 1980s and early 1990s if the United States chose not to escalate to nuclear weapons in a major war in Europe? The study's authors had identified a set of technologies that could plausibly close the gap. Long-range precision strike against rear-echelon armor before it reached the front. Wide-area surveillance that could find Soviet second-echelon divisions while they were still moving forward. Stealth aircraft that could degrade Soviet command and air defenses on the first night of any war. Computer networking that could fuse all of it together. The study had not produced a strategy. It had produced a parts list. Brown and Perry's contribution was to commit to the list, fund it, and accept the political risk of the choice.

The clearest expression of what the choice meant in practice was a DARPA program that Perry pushed into existence in 1978 and championed for the next three years. Its name was Assault Breaker.

The concept underneath Assault Breaker was the closest thing the Second Offset had to a doctrinal centerpiece. Soviet armored divisions in Central Europe were arrayed in two echelons: a first echelon expected to engage NATO’s forward defenders within hours of the start of any war, and a much larger second echelon held two or three days back, intended to pour through any gap the first echelon punched. Active Defense, the Army doctrine that General William DePuy had codified in the 1976 edition of Field Manual 100-5, focused on stopping the first echelon. The trouble, as DePuy himself recognized, was that even a successful Active Defense left NATO too depleted to handle the second echelon. The second echelon would arrive with fresh tanks, fresh artillery, and fresh men, and would roll over what remained.

Assault Breaker proposed, with a straight face, to destroy the second echelon before it ever reached the front.

The mechanism was a system of systems, in the language DARPA and Perry both began using around this time. An airborne radar, capable of looking down across hundreds of kilometers of European terrain and picking out columns of moving vehicles, would feed targeting data through a digital downlink to ground stations. Those ground stations would assign targets to long-range tactical missiles, fired from launchers behind NATO lines, that would fly toward the designated coordinates and dispense submunitions at altitude. The submunitions, each carrying its own millimeter-wave seeker, would acquire individual armored vehicles in the column below and steer themselves down. A single missile, in principle, was supposed to be capable of killing five to ten tanks at a range of two or three hundred kilometers. The whole engagement, from radar detection to submunition impact, would take minutes.

In December 1982, on the floor of the White Sands Missile Range in New Mexico, the program reached its public proof-of-concept. A Pave Mover radar aircraft tracked a column of mock Soviet vehicles. The targeting data fed a Lance-derived missile launched from a forward position. The missile arrived over the column on the predicted timeline and dispensed five terminal seekers, four of which acquired and struck their assigned targets. The fifth missed because of a guidance fault later traced to an inertial component. Robert Cooper, the DARPA director, would later describe the demonstration in oral history interviews as the moment the offset stopped being a thesis.

By then Perry had been out of the Pentagon for nearly two years. The Carter administration had lost the 1980 election. Reagan’s incoming defense team, suspicious of anything tagged with the previous administration’s name, had nevertheless quietly absorbed the program structure and budget commitments. Stealth, which Perry had nursed through Carter’s term as a black program, kept its black status and accelerated. Cruise missiles went into volume production. The Joint Surveillance and Target Attack Radar System, descendant of the airborne radar on Pave Mover, became the E-8 JSTARS that would define wide-area battlefield surveillance for the next thirty years. The Army’s tactical missile, a direct outgrowth of Assault Breaker, became the ATACMS that would be fired in anger for the first time against Iraqi armor in 1991.

Marshall, in his ONA office, kept watching the Soviets.

The Soviet response, in his reading, was the most telling vindication of the offset’s logic. Marshal Nikolai Ogarkov, the chief of the Soviet General Staff, published an article in the Soviet army newspaper Krasnaya Zvezda in May 1984 in which he described the emerging Western capability with unusual specificity. The combination of automated reconnaissance, real-time command and control, and precision conventional munitions, Ogarkov said, would soon make it possible for non-nuclear weapons to achieve effects approaching those of tactical nuclear weapons. The Soviet General Staff understood, well enough to write it down in a public newspaper, that the Americans were building something that could shred a Soviet armored thrust without a single warhead being released. Ogarkov also understood, though he could not say so as plainly, that the Soviet electronics industry could not produce the chips required to build the same capability. The Soviets had Zelenograd, their planned chip city outside Moscow, and Zelenograd by the mid-1980s was at least a generation behind. The reconnaissance-strike complex they had themselves theorized about in the 1970s was being built on the wrong side of the inter-German border.

This was the conceptual moment Marshall, in later writings collected in volumes like Reflections on Net Assessment, kept returning to. The American bet had not been on any single weapon, or even on any single technology. It had been on a curve. The integrated circuit’s performance per dollar would, the bet assumed, keep doubling every eighteen to twenty-four months. Stealth aircraft impossible to build with 1977 silicon would be possible with 1985 silicon. Real-time wide-area sensor fusion impossible with 1980 processors would be tractable with 1990 processors. Precision out to hundreds of kilometers, an entire armored corps disabled before it reached the line of contact: each had been a fantasy in the year Perry took the job. Each was a near-term engineering project by the time Reagan left office. The curve had carried the strategy.
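The arithmetic behind the bet is worth making explicit. If performance per dollar doubles every $T$ years, it compounds as a power of two over any interval; the worked figures below are an illustrative check against the chapter's eighteen-to-twenty-four-month range, not numbers from the source.

```latex
% Compounding of integrated-circuit performance per dollar,
% assuming a doubling time T (the chapter cites 18-24 months):
%
%   P(t) = P_0 \cdot 2^{t/T}
%
% From 1977 to 1985 (t = 8 years):
%   T = 2.0 years:  2^{8/2.0} = 2^{4}          = 16x
%   T = 1.5 years:  2^{8/1.5} \approx 2^{5.33} \approx 40x
%
% From 1980 to 1990 (t = 10 years):
%   T = 2.0 years:  2^{10/2.0} = 2^{5}         = 32x
```

On that slope, a processing task sized for 1977 hardware becomes one to two orders of magnitude cheaper within a decade, which is the sense in which the curve, rather than any single device, carried the strategy.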

It had also tied American military power to American chip leadership in a way nobody would untangle for the rest of the century. A nation that could no longer manufacture leading-edge silicon could no longer build the weapons the Pentagon now considered essential. The offset strategy, in its quietest implication, had made American national security a function of American semiconductor competitiveness. For the moment, in 1980, this was a tautology. The United States made the world’s best chips. The world bought them. Perry, leaving the Pentagon for a brief return to private life, did not seem to consider that any of this might change.

It would change soon. Within five years of Perry's departure, Japanese DRAM makers would have driven most of the American memory industry out of the business. Within ten, the question of whether the United States could still hold the bleeding edge of chip manufacturing would be the dominant anxiety in Washington's industrial-policy debates. The offset, designed to compensate for Soviet conventional mass with American technological superiority, had produced an unintended dependency. The dependency would shape every subsequent debate about chips for decades, in ways Brown, Perry, and Marshall could not yet see and would have struggled to believe.

For now, in the late spring of 1977, the bet was still fresh. Perry walked the Pentagon's corridors during the long afternoons when the building began to empty, looking at posted unit charts and intelligence summaries. Brown worked through nuclear-balance briefings and SALT II positions and, between them, reviewed the research-and-engineering portfolio Perry brought him every Friday. Marshall sat in his windowless suite with a yellow pad, sketching what the Soviet posture would look like in 1990, in 1995, in 2000, the curves and counter-curves stacking on the page in his cramped left-handed script. None of them spoke publicly about the strategy. There was no name for it yet. The phrase Second Offset would be applied retroactively, decades later, by analysts trying to make sense of how the United States had gotten from the disaster of Vietnam to the parade-ground victory of the Gulf War in a single span.

What they had decided was simpler than any later phrase could capture. The United States would not match the Soviet Union in mass. It would change the game. It would pour a generation’s worth of defense research budget into a single bet: that the integrated circuit and the technologies it enabled, deployed coherently across sensors, weapons, and platforms, would render Soviet quantitative advantages obsolete before the Soviets could find an answer.

The bet was right. It would be vindicated, less than fifteen years later, on a stretch of desert between Kuwait City and Basra, on televisions watched by every general staff in the world. The vindication would come at a cost the architects had not anticipated, a cost paid not in casualties but in industrial dependence, and the cost would compound across the four decades the offset’s children would be in service. The question of whether chips were essential to American military power had been settled in the spring of 1977 by three men working in adjacent corridors of the same building, and the answer, as a generation of strategists would later put it, was that everything ran on silicon.