"This Is the Future"
Carver Mead and Lynn Conway revolutionize chip design with VLSI. → The conceptual split between design and fabrication — the seed of fabless.
In late January 1979, a courier package arrived in a hallway of MIT’s Building 36, bearing the unromantic mailing label of any ordinary Xerox PARC shipment. Inside the bubble wrap were small ceramic packages, and inside the packages were silicon dies, and on the dies were chips that twelve weeks earlier had not existed even on paper. Roughly two dozen graduate students and faculty had spent the autumn term taking a brand-new course called 6.978, taught by a visiting associate professor from Xerox whom no one in the room had previously heard of. They had drawn rectangles on a layout grid. They had learned the method from draft textbook chapters that did not yet have a binding. They had submitted the resulting geometry, over a network link to a research lab in California, for fabrication. And now Lynn Conway was handing them back working pieces of silicon, projection-printed integrated circuits that, when probed, did the small computations the students had asked for.
For most of the semiconductor industry’s existence, that sequence of events had been unthinkable. Designing a chip in 1978 was a craft practiced by perhaps a thousand people in the world and guarded like a trade secret, almost all of them inside Intel, Texas Instruments, IBM, Motorola, Fairchild, and a half-dozen Japanese firms. The work demanded simultaneous fluency in solid-state physics, photolithography, geometric layout, circuit analysis, and the specific personality of whichever fab line would carry the design. Wafers cost a fortune; mask sets cost a smaller fortune; a single botched diffusion step or a missed contact alignment turned six months of effort into expensive sand. Universities did not design real chips. Hobbyists did not design real chips. The idea that a graduate student in a lecture hall could go home with a working LSI device after a single term was, in Cambridge that January, a brand-new fact about the world.
The course had been a deliberate experiment, run in cooperation between Conway, then leading the LSI Systems group at Xerox PARC, and a Caltech professor named Carver Mead who, for almost a decade, had been quietly converting his own students into chip designers by methods the rest of the field would have called reckless. The MIT term was the first time their joint method had been tried in a classroom outside Caltech and PARC. The chips coming back in those packages were not the experiment’s result. They were proof that the experiment had worked.
Mead and Conway had reached the experiment from different directions and through very different histories.
Mead had been born in Bakersfield in 1934 and had grown up in the small Sierra power-plant towns where his father worked. He arrived at Caltech as an undergraduate in 1952 and had never, in any meaningful sense, left. By the mid-1960s he was a tenured electrical engineering professor with a long-running friendship with Gordon Moore, who had begun, around 1959, sending him cosmetic-reject transistors from Fairchild for the students in his classes. Through the 1960s, Mead made regular trips up to Fairchild’s labs to talk with Moore and his colleagues. Moore would later credit Mead with coining the phrase “Moore’s Law” sometime in the early 1970s, when Mead used the term in a lecture and it stuck. By then Mead had also done the physics that gave the law its physical foundation. Prompted by a 1965 question from Moore about whether quantum tunneling would finally call a halt to miniaturization, Mead and his graduate student Bruce Hoeneisen worked out, in results first presented in 1968 and published in Solid-State Electronics in 1972, that MOS transistors could in principle be scaled down to roughly 150 nanometers, two orders of magnitude below the feature sizes of the day and far smaller than anyone in the industry had then assumed possible. Moore, by then settling Intel into its first product line, took the result and quietly redrew his roadmap. Mead, for his part, came away with a conviction that the physics of integration was nowhere near finished, and that the bottleneck was no longer fabrication. It was design.
In the fall of 1970, a Caltech graduate student named Richard Pashley pestered Mead about why no one at the institute taught a course on metal-oxide-semiconductor design, and Mead agreed to do it on one condition: every student would design a real chip, Intel would fabricate it as a favor, and the grade would be binary. If your chip worked, you passed. If it didn’t, you failed. Two-thirds of the class quit by the second meeting. The nine who stayed produced eight designs that Mead consolidated, by hand, onto a single multi-project chip. By January 1972, every one of those designs had come back from Intel in working order. Mead’s students had each spent the equivalent of a few thousand dollars of fab time on designs that, at industrial rates, would have cost millions apiece. None of them had any prior experience as chip designers. All of them now did.
Mead’s Caltech course evolved through the rest of the decade into a kind of cottage industry. Each spring, a small cohort of Caltech students designed a chip; each summer, the chip came back; each fall, Mead added what he had learned to a stack of unbound notes. The notes were the seed of what would become the textbook. They were also the part of the work he could not, on his own, complete. By 1976 Mead had grown convinced that the problem he was solving was not Caltech’s problem but the world’s, and that what was missing was not another paper but a method, a way of describing chip design at a level of abstraction that anyone competent in computer science could absorb in a semester, the way a student in a programming course absorbed a language. He was looking for a collaborator who could think about chips the way software people thought about programs.
Lynn Conway had spent the previous decade earning, losing, and rebuilding exactly that capacity, under conditions her later colleagues would only learn about much later.
Born in 1938, Conway had attended Columbia in the early 1960s after leaving MIT, and had joined IBM Research at Yorktown Heights in 1964. The lab was then home to a project called Advanced Computing Systems, an effort by John Cocke, Brian Randell, Herb Schorr, Fran Allen, and Ed Sussenguth to design the most ambitious mainframe of the era, a machine intended to outrun the CDC 6600 that had embarrassed IBM the year before. In September 1965, Conway, then twenty-seven and working through the simulator for the ACS instruction-issue logic, sketched out a mechanism by which a processor could decode several instructions at once, hold them in a queue, and dispatch them out of program order as their operands became available. The Computer History Museum would later identify the resulting architecture, documented in an internal IBM memo dated February 23, 1966 and authored by Conway with three colleagues, as the first superscalar processor design in the literature. The technique she had invented, multiple-issue out-of-order dynamic instruction scheduling, would become, two decades later, the operating principle of every high-performance CPU on Earth.
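Restated in modern terms, the mechanism is compact enough to sketch. The toy Python below issues up to two instructions per cycle, out of program order, the moment their source registers exist; the instruction mix, the issue width, and the starting register state are illustrative assumptions, and none of it reproduces the actual ACS logic or the 1966 memo.

```python
# A toy sketch of multiple-issue, out-of-order dispatch in the spirit of
# Conway's dynamic instruction scheduling. All specifics here are
# illustrative assumptions, not the ACS design.
# (Anti- and output-dependences are ignored for brevity.)

from dataclasses import dataclass

@dataclass
class Instr:
    name: str
    srcs: tuple   # registers this instruction reads
    dst: str      # register it writes

def schedule(program, issue_width=2):
    """Issue up to issue_width instructions per cycle, out of program
    order, as soon as their source registers have been produced."""
    ready = {"r0", "r1"}            # registers valid at start (assumed)
    window = list(program)          # the pending-instruction queue
    cycles = []
    while window:
        issued = [i for i in window if all(s in ready for s in i.srcs)]
        issued = issued[:issue_width]
        if not issued:
            raise RuntimeError("deadlock: no instruction is ready")
        for i in issued:
            window.remove(i)
        for i in issued:            # results visible from the next cycle
            ready.add(i.dst)
        cycles.append([i.name for i in issued])
    return cycles

prog = [
    Instr("load A", ("r0",), "r2"),
    Instr("add",    ("r2", "r1"), "r3"),  # waits on the load of A
    Instr("load B", ("r0",), "r4"),       # independent: issues early
    Instr("mul",    ("r4", "r1"), "r5"),  # waits on the load of B
]
print(schedule(prog))  # [['load A', 'load B'], ['add', 'mul']]
```

The independent load of B slips ahead of the stalled add, which is the whole trick: the hardware, not the programmer, finds the parallelism.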
She did not get to finish the work. In 1967, Conway told IBM management that she intended to undergo gender transition. The company terminated her in 1968. Its written justification, as she would later document on her own website, was that her presence on the team would cause “extreme emotional distress in fellow employees.” She was thirty years old, had no employment record under her new name, and had every professional contact she had built standing on the wrong side of a wall. Her response was to vanish. She took a job in late 1969 as a contract programmer at a firm called Computer Applications, then moved to Memorex, where managers, not knowing her history, quickly figured out that the new hire could design digital systems better than most of their senior people, and put her on the architecture of the company’s 7100 mainframe. By 1972 she was back to doing serious computer architecture under a name no one yet associated with the ACS project. In 1973, the same year Xerox decided that its new Palo Alto Research Center needed people who could think simultaneously about software and silicon, Conway joined PARC. She was thirty-five.
PARC in the mid-1970s was the rare lab that took the relationship between computers and microelectronics seriously enough to put designers and physicists in adjacent offices. Conway settled into a group working on what was then called LSI, Large Scale Integration, the regime of tens of thousands of transistors per chip. The group’s senior figure was Bert Sutherland, brother of the computer-graphics pioneer Ivan Sutherland; Ivan himself, a friend and recurring visitor, was running the Caltech computer science effort with Carver Mead. In the spring of 1976, on one of those visits, Mead came up to PARC to give a talk on his Caltech course and on the underlying observation that drove it: that the physics of MOS scaling was now permissive enough that the limiting factor on chip complexity was not the transistor but the human designer. Conway was in the audience. By the end of the talk she had recognized, with the unsentimental clarity of someone who had once invented out-of-order instruction dispatch in a single weekend, that Mead was right and that the missing piece of his program was a treatment of design as an engineering discipline, parallel to software, taught from a textbook.
They began collaborating that year. Mead’s contribution was the physics, the rigor, the long Caltech experience of forcing students through real silicon. Conway’s contribution was the structuring of the methodology, the abstractions, the layered hierarchy, the part of the project that resembled, in spirit, the layered abstractions of an operating system. Two ideas in particular came out of their joint work that would matter for decades.
The first was a system of geometric design rules expressed in terms of a single scalable parameter the authors called lambda. The chip-fabrication processes of the late 1970s differed in absolute dimensions. Intel’s NMOS line might be running at one set of feature sizes, Motorola’s CMOS line at another, and a research fab at MIT at something else again. Anyone designing a chip had previously needed to know the specific tolerances of the specific line. Mead and Conway proposed a different approach. They defined lambda as the minimum scale unit on a given process, roughly half the minimum feature size, and then expressed every layout rule (transistor width, contact spacing, metal pitch) as a small integer multiple of lambda. A design drawn in lambda units could, in principle, be ported across processes and even across vendors by re-scaling. It was a deeply software-like idea: a portable representation of a chip that could be compiled to whichever silicon happened to be available. It also had an ulterior effect that mattered more than the portability. By radically simplifying the rules, reducing the dozens of process-specific constraints to a small number of integer relations, it made design teachable to people who were not solid-state physicists.
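The scheme is concrete enough to render in a few lines of code. The sketch below shows a λ-parameterized rule table being “compiled” to absolute microns for different processes; the rule values echo the flavor of the book’s NMOS rules, but the table and the fab line widths here are illustrative assumptions rather than a verbatim rule set.

```python
# A minimal sketch of lambda-parameterized design rules. Rule values
# echo the flavor of the Mead-Conway NMOS rules (e.g. minimum poly
# width 2λ, minimum metal width 3λ), but this table and the fab
# figures below are illustrative, not a verbatim rule set.

LAMBDA_RULES = {            # every rule is a small integer multiple of λ
    "min_poly_width":   2,
    "min_diff_width":   2,
    "min_metal_width":  3,
    "contact_size":     2,  # 2λ x 2λ
    "metal_pitch":      6,  # 3λ width + 3λ space
}

def compile_rules(lambda_um):
    """'Compile' the portable λ rules to microns for one process,
    given its λ (roughly half the minimum feature size)."""
    return {rule: n * lambda_um for rule, n in LAMBDA_RULES.items()}

# The same design, re-scaled across three hypothetical fab lines:
for fab, lam in [("5 µm line", 2.5), ("3 µm line", 1.5), ("1.5 µm line", 0.75)]:
    print(fab, compile_rules(lam))
```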
The second idea was the multi-project chip itself. A photolithographic mask cost so much, and the fab time on a wafer was so expensive, that no university or small company could afford to amortize either against a single experimental design. Mead’s Caltech course had hand-tiled multiple student designs onto a shared mask precisely to spread the cost. Conway took the principle, generalized it, and built it into a service. By 1978 she had organized a procedure at PARC by which student layouts from outside universities could be sent in by mail or modem, combined onto shared masks, fabricated on a piggybacked production run, and returned as packaged dies a few weeks later. The MIT course of fall 1978 was the first external test of that pipeline. The chips that arrived in Building 36 at the end of January 1979 were the proof.
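The arithmetic that made the pipeline work is worth making explicit. In the back-of-envelope sketch below, the dollar figures and project count are illustrative assumptions, not PARC’s actual costs; what matters is the division.

```python
# Back-of-envelope amortization behind the multi-project chip. All
# figures are illustrative assumptions, not PARC's actual 1978 costs.

mask_set  = 30_000   # one shared mask set, in dollars
wafer_run = 20_000   # one piggybacked fabrication run, in dollars
projects  = 50       # student designs tiled onto the shared masks

per_design = (mask_set + wafer_run) / projects
print(f"${per_design:,.0f} per design")  # $1,000 instead of ~$50,000
```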
The textbook the course had been built around appeared in print the following year as Introduction to VLSI Systems, published by Addison-Wesley with a 1980 copyright. By the end of 1983 it was being used in roughly 120 universities around the world. It read less like the engineering manuals of the previous era, dense slabs of process physics, often illustrated with cross-sections of diffusion profiles, than like a software textbook. The opening chapters introduced MOS transistors as switches with a small set of behavioral rules. Subsequent chapters built up from gates and registers to memory, datapaths, and complete processors, each level introducing new abstractions that the lower levels were no longer required to expose. Lambda rules anchored the geometry. Structured layout, the discipline of building chips out of rectangular blocks that connected on a regular grid, anchored the floor plan. The book taught the student to think of a chip as a hierarchical program, written in geometry, that happened to compile to silicon.
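The phrase “hierarchical program, written in geometry” can be taken almost literally, as in the toy sketch below: cells hold rectangles plus placed copies of subcells, and flattening the hierarchy “compiles” it to flat mask geometry. The cell names, layers, and dimensions are invented for illustration.

```python
# A toy rendering of a chip as a hierarchical program written in
# geometry. Cell names, layers, and sizes are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str
    rects: list = field(default_factory=list)     # (layer, x, y, w, h) in λ
    subcells: list = field(default_factory=list)  # (Cell, dx, dy) placements

    def place(self, child, dx, dy):
        self.subcells.append((child, dx, dy))

    def flatten(self, ox=0, oy=0):
        """Recursively expand the hierarchy into flat mask rectangles."""
        for layer, x, y, w, h in self.rects:
            yield (layer, x + ox, y + oy, w, h)
        for child, dx, dy in self.subcells:
            yield from child.flatten(ox + dx, oy + dy)

# A leaf cell: an inverter drawn once...
inv = Cell("inv", rects=[("poly", 0, 0, 2, 10), ("diff", -2, 2, 6, 4)])

# ...then reused as a component, the way software reuses a function.
row = Cell("inv_row")
for i in range(4):
    row.place(inv, dx=i * 8, dy=0)   # 8λ pitch, an assumed spacing

print(len(list(row.flatten())))      # 8 rectangles from one drawn cell
```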
The conceptual move, chip design as something formally analogous to software design, organized around abstraction rather than around process secrets, was what later writers would call the Mead-Conway Revolution. The hyphenated phrase compressed a real division of labor. The fundamental physics of MOS scaling had been Mead’s; the structured methodology had been Conway’s; the multi-project pipeline had been Conway’s; the textbook had been theirs together. Conway, in interviews much later, was careful about the credit. She said the chips were not the invention. The invention, she said, was the living system of people and technology, the textbook, the courses, the multi-project service, the cohort of newly trained designers, that the chips were a symptom of.
DARPA noticed. In 1980, the agency funded a VLSI research initiative across more than a dozen universities; by 1981, it had bankrolled the institutionalization of Conway’s PARC pipeline as a permanent service called MOSIS, the Metal Oxide Semiconductor Implementation Service, run out of USC’s Information Sciences Institute by Danny Cohen. MOSIS aggregated chip designs from universities, government labs, and small companies; sent them to commercial foundries that had spare production capacity; and returned packaged parts, on a turnaround of about a month, at unit costs that small organizations could afford. By the end of the 1980s, MOSIS had fabricated more than twelve thousand designs. Among the early batches were the prototype chips for the Stanford MIPS project and Berkeley’s RISC, both of which would shortly become commercial architectures. A graduate student or a startup with no fab now had access to silicon roughly the way an author with no printing press had access to the printed page.
What this meant, in the longer view, was that a wall the industry had treated as load-bearing turned out to be a partition.
The wall in question was the one between people who designed integrated circuits and people who manufactured them. Throughout the 1960s and 1970s, the two activities had been treated as a single craft. A chip company designed its own products and built its own fabs and ran them in a tight loop, because the designers needed to know the fab’s quirks and the fab needed to be tuned to the designs. Fairchild, Intel, TI, Motorola, IBM, the Japanese majors, every leading chipmaker was a vertically integrated firm that could not have been disaggregated without losing its competence. The Mead-Conway methodology, supported by lambda rules and by services like MOSIS, asserted that the seam between design and fab could in fact be cleanly cut. A designer who worked in lambda units and structured layout could submit a design to any fab that could meet the rules. A fab that could meet the rules could accept designs from anyone. The two halves of the business could, in principle, become separate industries. They could trade across an interface rather than fold into a single hierarchy.
This was a quiet claim in 1980. It had not yet acquired a label. It would not acquire one until the second half of the decade, when the first companies that designed chips without owning fabs, the early generation of what would come to be called fabless firms, started to appear in California and elsewhere. The decade after that would belong to the foundries that grew up to serve them, and the decade after that to the largest of those foundries in particular. None of those things had happened yet. What had happened was that, in fall 1978, a class of MIT students with no industrial experience had drawn rectangles on paper, sent them through a mail-and-modem pipeline to PARC, and received back, twelve weeks later, working chips that ran code. That fact, once it could be reproduced, did not have to be argued for. It had only to be repeated.
It was repeated. The MIT course was duplicated at Stanford, Berkeley, Carnegie Mellon, and a wave of other schools through 1979 and 1980. The textbook went through reprintings. By the time the American memory industry was collapsing under Japanese pressure in the early 1980s, an entire generation of newly trained chip designers was being educated in a methodology that no Japanese rival had institutionalized at anything like the same depth. The Mead-Conway revolution did not save US DRAM. It built something next to DRAM that would matter more in the long run: a labor market of chip designers who could work without owning a fab, and an infrastructure that let them.
Lynn Conway left PARC in 1983 to spend two years at DARPA, then took a faculty appointment at the University of Michigan, where she would teach until 1998. She did not, in those years, tell her colleagues about IBM in 1968 or about the years between Computer Applications and PARC. She came out publicly only in 1999, after a writer working on the history of the ACS project began assembling fragments. In 2020, fifty-two years after firing her, IBM apologized; the apology was delivered alongside a Lifetime Achievement Award by the head of the company’s human resources function. Conway, by then in her eighties, accepted it with the wry acknowledgment of someone who had outlasted most of the people who had written her termination letter. She died in June 2024.
Carver Mead stayed at Caltech, drifted into a career-long second act in neuromorphic engineering, analog circuits that imitated the wiring of the retina and the cochlea, and watched, from his Pasadena office, as the methodology he had built with Conway became the substrate of an industry he had not predicted. When asked, in his later years, what he had thought he was doing in the 1970s, he tended to give short answers. They had been trying to make it possible, he would say, for a small team of people to design powerful chips. Whatever had happened after that had happened because that small thing had turned out to be true.
In Building 36, that January, the students opened the packages.