Part III · Chapter 13

Intel's Revolutionaries

Noyce, Moore, and Grove found Intel; the microprocessor is born. The pivot that defined American computing dominance.

In the early summer of 1968, Robert Noyce phoned Arthur Rock and told him he was leaving Fairchild. Rock, a slight, severe New Yorker who had relocated to San Francisco a few years earlier, did not need to ask why. He had watched Fairchild Semiconductor become the most important chip company in the world, watched its parent in Syosset, Long Island, treat the California operation as a cash machine for the camera business that gave the company its name, and watched the talent leak out, defection by defection, until the local joke was that you could draw a family tree of Silicon Valley by tracing where the Fairchild alumni had landed. Now Noyce, the most prominent Fairchild man of all, was joining them.

Rock asked how much money he would need. Noyce said he and Gordon Moore had not decided exactly what the company would do beyond build large-scale integrated circuits, that they had no products, no employees, and no name yet, but that they wanted to do it on roughly the scale of two and a half million dollars. Rock, who had helped Noyce raise the money to start Fairchild a decade earlier and had since financed Scientific Data Systems and Teledyne, said he could get it done. He drafted a private placement memorandum, sometimes described later as a one-page document and sometimes as a page and a half, and began calling investors. The round closed in something close to forty-eight hours. Rock had to turn money away. Noyce and Moore each put in $250,000 of their own; Rock put in $300,000 and took the chairmanship of the board. The first outside investors converted debentures at five dollars a share. The new company was incorporated on July 18, 1968, under the placeholder name NM Electronics, after its founders. Within weeks they settled on a better name, a contraction of integrated electronics, bought the rights to it from a firm already trading as Intelco, and became Intel.

That, at least, is the version that has hardened into legend, retold in oral histories at the Computer History Museum and in Leslie Berlin’s biography of Noyce. The legend obscures something more interesting, which is how thin the founding really was. Noyce and Moore had no product roadmap, no customers, and no office. They wrote a famously vague pitch that promised, in essence, to do for memory chips what Fairchild had done for logic chips, and they let the reputation of the two of them carry the rest. Noyce had co-invented the planar integrated circuit. Moore had run Fairchild’s research. The market trusted them on those credentials alone. It was a pure bet on people, made at a moment when the institution of the venture-backed semiconductor startup barely existed. Most of the money went into a converted Union Carbide building at 365 Middlefield Road in Mountain View, where Carbide was still moving out as Intel was moving in. For the first weeks the engineers worked out of the only finished room in the building, which happened to be the conference room.

Noyce was forty when he signed the incorporation papers. Photographs from that summer show him in an open-collared shirt, hair already silvering at the temples, the easy smile everyone in the valley by then knew. He had grown up in Grinnell, Iowa, the son of a Congregational minister, with the mid-century Protestant midwestern assumption that authority should be questioned and hierarchy was vaguely embarrassing. He brought the assumption with him to Fairchild and now to Intel. No executive parking at the new company, no private dining room, no corner offices. Noyce had a desk in a sea of identical desks, and the conference rooms were named by number, not by occupant. The visual style was deliberate, and it spread. By the late 1970s every ambitious chip startup in northern California would copy some version of it.

Moore was a year younger and could not have been less like his partner if the casting had been deliberate. Where Noyce was tall and easy in a room, Moore was rumpled and quiet, a chemist by training who had grown up in Pescadero on the San Mateo coast and gone to Caltech and then east to Johns Hopkins before drifting back. He had a chemist’s patience with experiments that took months to fail. He had also, three years before, written the most consequential piece of writing in the history of the industry. In April 1965, while still running R&D at Fairchild, Moore had published a short article in the trade journal Electronics under the title “Cramming More Components onto Integrated Circuits.” The piece predicted, on the basis of a handful of data points, that the number of components economically placeable on a single chip would roughly double every year for the next decade, taking the industry from a few dozen elements per chip to something like sixty-five thousand by 1975. Moore later refined the doubling time to roughly two years. The Caltech professor Carver Mead would, years afterward, give the prediction the name by which it would govern the rest of the century: Moore’s Law. By the time Intel was founded, the law had been quietly reshaping engineering plans across the industry for three years. It would now become Intel’s operating thesis.
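
Stated as arithmetic, a restatement of the figures above rather than Moore’s own notation, the 1965 projection is simple compound doubling:

\[
N(t) \approx N_{1965}\cdot 2^{\,t-1965}, \qquad N(1975) \approx 64 \times 2^{10} = 65{,}536,
\]

ten annual doublings carrying a chip of a few dozen components to the sixty-five thousand or so Moore projected for 1975; the revised two-year cadence simply replaces the exponent with \((t-1965)/2\).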

The third figure of the founding troika was, technically, not a founder at all. Andy Grove, born András István Gróf in Budapest in 1936, had survived the German occupation and then the Soviet one, and escaped during the Hungarian uprising of 1956 with the help of a smuggler and a foreign coat. He had landed in New York speaking almost no English, finished an undergraduate degree at City College, taken a doctorate in chemical engineering at Berkeley, and joined Fairchild’s R&D group under Moore in 1963. He was, by then, married, methodical, perpetually slightly anxious, with the immigrant’s conviction that comfort was a temporary condition. When Moore told him, in the spring of 1968, that he and Noyce were leaving to start something, Grove asked to come along. Grove later said the decision was instant; Richard Tedlow’s biography portrays it as nearly so. He showed up on the day of incorporation as Intel’s first employee, customarily numbered as the third person on the payroll behind Noyce and Moore. The Hungarian engineer Leslie Vadász, also defecting from Fairchild, was the fourth.

Grove disliked Noyce. The two had clashed at Fairchild, where Noyce’s hands-off charm struck Grove as managerial cowardice; Grove believed, and would later say plainly, that Noyce avoided difficult decisions. Grove’s reverence was reserved for Moore. He had not signed up to work for Noyce so much as to follow Moore into a cleaner version of the Fairchild experiment. The triangle that resulted — Noyce the public face, Moore the conscience, Grove the executor — was uneasy from the first day. It would also turn out to be one of the most productive divisions of labor in industrial history.

The new company’s actual technical mandate, once the founders sat down to articulate it, was almost prosaic. They believed semiconductor memory was about to displace magnetic core, the tiny ferrite donuts that had stored bits in mainframes since the 1950s. Core memory was reliable but slow, expensive, and built by hand by women in low-wage countries threading wires through magnetic rings. A chip-based memory made by photolithography, the way Fairchild made logic, would be faster, cheaper at volume, and would compound as Moore’s Law took hold. That was the bet. Intel would build memory chips. The first product, shipped in April 1969 under Vadász’s supervision, was the 3101, a 64-bit Schottky bipolar static RAM. Sixty-four bits. Modest by any later standard, but demonstrably faster than anything competing, and it gave Intel its first revenue. Burroughs put it into a machine for the Air Force. A young Xerox project called the Alto, the eventual prototype of the personal computer, used 3101s in its memory.

Bipolar SRAM was a starter product. Intel’s real ambition was dynamic RAM in MOS, a different and harder fabrication process that traded speed for density. Density was where Moore’s Law had its leverage. In 1969 William Regitz at Honeywell invented a three-transistor dynamic memory cell and went shopping for someone to manufacture it. Intel responded. Joel Karp, working with Regitz, designed a chip around the cell, and in October 1970 Intel introduced the 1103, a 1024-bit MOS DRAM in an 18-pin package built on an eight-micron process. Yields in the early months were terrible. The fifth attempt at the production masks finally produced a chip Intel could manufacture in quantity. By the end of 1971 the 1103 was the best-selling semiconductor in the world. By 1972 fourteen of the eighteen mainframe makers in the United States, Europe, and Japan were buying it. Magnetic core, which had stored almost every bit of working memory in computing for two decades, began its long decline. Intel had, in three years, gone from a conference-room startup to the company that had killed core. Inside Intel, the 1103 was the product that made everything else possible. It paid for the next decade.

Among the engineers Intel was hiring in those early months was a young Italian named Federico Faggin, who had arrived at Fairchild in 1968 from Olivetti and SGS-Fairchild in Milan and had spent his Fairchild months developing the silicon-gate process. Silicon-gate was a deceptively quiet innovation. The metal-gate MOS process the industry had been using laid down aluminum gates over the silicon channel; the silicon-gate process used polysilicon instead, self-aligned to the channel through a diffusion sequence Faggin had worked out with the physicist Tom Klein. The result was MOS transistors that were faster, denser, and far more manufacturable than their metal-gate predecessors. Faggin’s first commercial silicon-gate IC, the Fairchild 3708, was an unglamorous analog multiplexer he had used essentially as a vehicle to prove the process worked. It was five times faster than the metal-gate part it replaced, with a hundredth the leakage. By 1970, when Faggin defected to Intel, silicon-gate was on its way to becoming the dominant MOS process for the next half-century. Without it, the chip the world remembers him for would not have been physically possible.

The chip in question started life as a side project. In April 1969, the same month Intel shipped its first SRAM, a Japanese desktop calculator company called Nippon Calculating Machine, known by the brand name Busicom, approached Intel for a custom chipset for a new printing calculator. Busicom’s engineers had drawn up a design that called for a dozen different custom chips, each one performing some specific arithmetic or control function. The company assigned a young engineer named Masatoshi Shima to coordinate with Intel. On the Intel side, the project landed on the desk of Marcian “Ted” Hoff, a Stanford-trained engineer hired in 1968 as one of Intel’s first applications engineers. Hoff looked at the Busicom proposal and concluded the Japanese had it backwards. A dozen specialized chips would be a manufacturing nightmare for Intel and a maintenance nightmare for Busicom, and any individual chip would be too obscure to resell if Busicom’s calculator failed. Hoff proposed, instead, a general-purpose architecture. A single chip would contain a small central processor and a tiny instruction set. Other chips would hold the program in read-only memory and the working data in RAM, and would handle input and output. The CPU would be told, at boot, what kind of calculator to be. The same chipset, with different ROMs, could become a different product entirely.
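
The logic of Hoff’s proposal can be made concrete with a toy sketch. The C program below is illustrative only and assumes nothing about the 4004’s actual instruction set, register structure, or bus protocol, all of which differ; it shows just the architectural point, that a fixed fetch-decode-execute core takes its product identity entirely from whichever ROM image it is handed.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy illustration of the general-purpose idea: a fixed CPU core whose
 * behavior is defined entirely by its ROM. Opcodes are invented for this
 * sketch; they are not the 4004's. */
enum { OP_LOAD, OP_ADD, OP_PRINT, OP_HALT };

static void run(const uint8_t *rom, size_t rom_len) {
    uint8_t acc = 0;                       /* accumulator: stand-in for working RAM */
    size_t pc = 0;                         /* program counter into ROM */
    while (pc < rom_len) {                 /* fetch, decode, execute */
        switch (rom[pc++]) {
        case OP_LOAD:  acc  = rom[pc++]; break;
        case OP_ADD:   acc += rom[pc++]; break;
        case OP_PRINT: printf("%u\n", (unsigned)acc); break;  /* stand-in for I/O */
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* Two different "products" from the same CPU: only the ROM changes. */
    const uint8_t adder_rom[]   = { OP_LOAD, 2, OP_ADD, 3, OP_PRINT, OP_HALT };
    const uint8_t doubler_rom[] = { OP_LOAD, 7, OP_ADD, 7, OP_PRINT, OP_HALT };

    run(adder_rom,   sizeof adder_rom);    /* prints 5  */
    run(doubler_rom, sizeof doubler_rom);  /* prints 14 */
    return 0;
}
```

Swap the ROM array and the same core becomes a different machine, which is the property that later made the chipset worth reselling outside the calculator market.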

Hoff’s proposal arrived in the late summer of 1969, was approved by Noyce and the Busicom team, and then sat. Stan Mazor, another applications engineer, helped flesh out the instruction set. But there was no one at Intel who could actually do the chip. Designing a CPU on a single piece of silicon required pushing density well beyond what Intel had ever attempted, and it required someone who knew the silicon-gate process intimately. When Faggin walked into the Mountain View building on April 3, 1970, his first day as an Intel employee, Shima was already there waiting for him, having flown in from Japan to begin a design review that, on paper, was supposed to be near completion. Shima opened the project files and discovered that the chip he had come to review effectively did not exist. The architecture was sketched. The actual transistor-level design, the layout, the masks, the test patterns, none of it had been done. Faggin, on his second day at Intel, took over the project.

What followed has become one of the most argued-over episodes in the history of the industry. Faggin worked, by his own later account, twelve to sixteen hours a day for the next nine months. He designed the random-logic CPU using silicon-gate techniques he had brought from Fairchild, including a buried-contact step that pushed transistor density well beyond what was conventionally achievable. Shima checked his logic. Mazor refined the instruction set. By October 1970 the first set of masks was ready for the support chips, the 4001 ROM and the 4002 RAM. The 4004 itself came back from fabrication at the end of December and proved completely non-functional. Probing the silicon, Faggin discovered that one mask layer, the buried-contact step, had been omitted entirely from the run. A second pass was scheduled. The new wafers came back in January 1971. The chip booted. Faggin sent samples to Shima in February. By March, a fully functional 4004 was running Busicom’s printing calculator in a working prototype.

The chip Faggin had built was called the Intel 4004. It contained 2,300 transistors on a die roughly twelve square millimeters, fabricated on a ten-micron process and clocked at up to 740 kilohertz. It could execute about 92,000 instructions per second. By the standards of any minicomputer of the era, this was nothing; the PDP-8 in the corner of any decent university lab would run rings around it. But the 4004 fit on a fingernail. It cost, in volume, less than two hundred dollars. And it was programmable. The same chip that ran the Busicom printer could, with a different ROM, run a traffic-light controller, a gas pump, a pinball machine, an instrument panel. Hoff had seen the architecture in 1969. Faggin had built it. The chip carried, etched into a corner of its die where only a microscope could find them, the initials F.F.
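
The instructions-per-second figure follows from the clock. Assuming the commonly cited eight clock periods per basic instruction cycle, about 10.8 microseconds at full speed, the arithmetic is:

\[
\frac{740{,}000\ \text{cycles per second}}{8\ \text{cycles per instruction}} \approx 92{,}500\ \text{instructions per second},
\]

with two-word instructions taking twice as many cycles, which is why lower average figures are also quoted.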

Intel almost did not sell the chip to anyone but Busicom. The 1969 contract gave Busicom exclusive rights to the 4004 family. By the spring of 1971 the calculator market had become brutally competitive. Busicom asked for a price cut. Faggin and Hoff went to Noyce and argued, with some heat, that Intel should propose a swap: the price cut in exchange for the right to sell the chipset to anyone outside the calculator market. Noyce agreed. Busicom, in deepening trouble and unaware of what it was giving up, agreed too. The legal release was signed in May 1971. From that moment, the 4004 belonged to Intel and to whomever Intel chose to sell it.

What happened next was almost an anticlimax. Intel’s marketing department was unenthusiastic. The chip was hard to explain, the market for it in 1971 did not visibly exist, and the dominant theory inside Intel was that microprocessors were a curiosity, a way to extract a little more value from the silicon-gate manufacturing line, while the company’s real business was and would remain memory. Faggin and his ally Ed Gelbach, the marketing executive who had recently arrived from Texas Instruments, had to push hard for a public launch. The launch ad, when it ran in Electronic News on November 15, 1971, was unusually grandiose for a parts catalog. “Announcing a new era of integrated electronics,” it read. “A micro-programmable computer on a chip.” It was, as later commentators have noted, almost the only product launch announcement in industry history that turned out to undersell the product.

The 4004 sold well enough. The 8008, an 8-bit follow-on Faggin completed in April 1972 after taking over a stalled internal project, sold better. The real breakthrough came with the 8080, which Faggin and Shima, by then reunited at Intel, designed together and shipped in March 1974. The 8080 was the chip that mattered, the one that put the microprocessor into industrial controllers, scientific instruments, the first arcade games, and the Altair 8800 hobbyist computer that would in turn launch Microsoft and seed the eventual lineage of x86. The patent on the 8080 lists three names: Faggin, Shima, Mazor. By the time it shipped, Faggin had already decided to leave. Intel, in his telling, was unwilling to credit his contribution publicly, preferring to let Hoff, who was personally closer to Noyce and Moore, become the public face of the microprocessor invention. Faggin departed in 1974 to co-found Zilog, taking Shima with him. The Z80 microprocessor that the new company shipped in 1976 would, for a decade, outsell the Intel parts it was based on.

The credit dispute between Faggin and Hoff over who invented the microprocessor has never been fully resolved and is unlikely ever to be. In the strict engineering sense of “invented,” Hoff conceived the architecture, Mazor refined the instruction set, Shima specified the calculator-side requirements that drove the design, and Faggin did the physical realization that made it manufacturable. Intel, for its first decades, told the story as Hoff’s. Faggin told it as his own. The 1988 Marconi Award, given to Faggin, became the first major external recognition of his role. Most modern histories now list the four names together, with the qualification that without silicon-gate and without Faggin’s nine-month sprint in 1970, there would have been no chip to argue over. The episode foreshadowed, in miniature, a problem Intel would have for decades: crediting executives at the top and underweighting engineers two or three layers down. It was a curiously durable habit for a firm that started in a conference room.

By the time the 8080 shipped in 1974, Intel had stopped being a startup. It had a thousand employees, a profitable memory business, an emerging microprocessor business, and an internal culture that had begun to feel less like Noyce’s egalitarian Fairchild reboot and more like something Andy Grove was actively constructing. Grove had become director of operations in 1969 and executive vice president by 1975. The discipline he imposed, “constructive confrontation” in the term that became synonymous with him, was not a slogan but a specific set of meeting protocols. Decisions had to be made with data. Arguments had to be conducted to their conclusion in the room, not in hallways afterward. Anyone in the company could challenge anyone else. Subordinates were expected to push back on superiors and superiors were expected to take it. Grove ran his meetings on a stopwatch and posted late-arrival lists on the wall. He fired people whose performance dropped, including engineers who had been at the company since the conference-room days. Noyce found it all faintly distasteful and largely stayed out of it. Moore, who knew Grove was doing work neither he nor Noyce had the temperament for, backed him completely.

The division of labor that emerged would last roughly fifteen years. Noyce was Intel’s external face, the lobbyist and industry statesman Washington wanted to talk to. Moore was the strategist, the long-horizon planner who had to decide which fabs to build five years before they would be needed. Grove was the operator who made sure whatever Noyce promised and Moore committed to actually got built and shipped. Visitors to Intel in the early 1970s often left convinced Noyce ran the place. Engineers who had been there longer knew Grove did. Noyce became chairman in 1975 and gradually withdrew from operational decisions. Moore became CEO. Grove became president and chief operating officer, and a decade later the CEO who would define Intel for the world.

The pivot Intel had executed by the mid-1970s was already visible to anyone paying close attention. The company had been founded to make memory. The 1103 had succeeded so completely that for several years memory was almost the only thing Intel was known for. But the microprocessor, conceived as a side project to keep a Japanese calculator customer happy, was beginning to look like a different kind of business. Memory was a commodity. Anyone with a fab could make a 1K DRAM, and as more competitors entered, prices would fall as predictably as Moore’s Law itself. The microprocessor was something else. It was an architecture. It carried with it a software ecosystem, programmers who learned its instruction set, customers who built products that depended on it, lock-in that compounded year over year. A company that owned the dominant architecture would have a moat no commodity DRAM maker could ever build.

Noyce, Moore, and Grove understood this only partially in 1974. Through the rest of the decade, Intel would continue to derive most of its revenue from memory. The full implications of having created the microprocessor would not be visible until the early 1980s, when IBM, looking for a chip to put inside a small business computer it was rushing to market, picked the 8086 family that descended directly from Faggin’s 8080. The decade after that would belong to Intel almost completely, on the strength of an architecture an Italian engineer had designed in nine months in 1970 because a Japanese calculator company needed something cheap.

What Intel had become, by the time the 4004 was a few years old, was the company that would define what American computing meant for the next forty years. The Pentagon was beginning to draft a doctrine that would treat semiconductors as the core American military advantage, and the same chips that ran Busicom calculators were starting to appear in cruise missile guidance computers. The Japanese, who had given Intel the 4004 contract in the first place, were watching very closely indeed, and would within a decade present American chip-makers with their first existential challenge. Noyce, Moore, and Grove did not yet see most of this. They saw a company that had paid back its $2.5 million many times over, that owned the world’s bestselling memory chip, and that had, almost by accident, invented a new kind of product. The accidents would compound for a long time.