The Innovator's Dilemma
Intel begins to slip: where the company's current troubles trace their roots.
On the morning of June 6, 2005, Paul Otellini walked across the stage of the Moscone Center in San Francisco to receive an embrace from Steve Jobs. He was three weeks into the chief executive’s office at Intel, the first non-engineer ever to hold the job. Jobs had spent the prior forty minutes telling a roomful of Apple developers that the Macintosh was leaving the PowerPC architecture it had used for a decade and would, beginning the following year, run on Intel processors. The embrace was the punctuation. Otellini stepped to the podium and walked the audience through what he framed as a long Silicon Valley parallel: Intel founded in 1968, Apple in 1976, two companies whose timelines had finally folded together. He acknowledged, with the indulgent half-smile of a man enjoying a victor’s joke, that for years Apple’s marketing had set fire to a man in an Intel-blue bunny suit; Intel, he said, did not hold grudges. The audience laughed. Jobs grinned. The slide behind them read “Apple and Intel: a powerful combination.”
It was the high-water mark of Otellini’s first year. Intel’s revenues for 2005 would clear thirty-eight billion dollars, the most the company had ever generated, and the Apple deal handed Intel the last great holdout among PC architectures. The microprocessor monopoly that Andy Grove had constructed in the late 1980s, the one Otellini had inherited as its third-generation custodian, looked from a certain angle as if it had reached a kind of geological permanence. PCs ran on Intel chips. Servers ran on Intel chips. Now, finally, even Macs would run on Intel chips.
Eighteen months later, in January 2007, Jobs would return to a different stage at the same Moscone Center and pull a glass-faced rectangle from his pocket. Inside the rectangle, where Intel’s logo would have looked entirely natural, there was no Intel logo. The first iPhone ran on a Samsung-fabricated, ARM-designed, Apple-customized application processor. Intel had been offered the contract. Intel had said no.
The fullest account of how Intel said no did not surface for another six years. Otellini, by then preparing to step down, sat for a long interview with the Atlantic’s Alexis Madrigal in the spring of 2013. Most of it was the practiced retrospective of a chief executive on his way out. Then, near the end, Madrigal asked about the iPhone, and Otellini’s voice changed. “We ended up not winning it or passing on it, depending on how you want to view it,” he said. “And the world would have been a lot different if we’d done it.” The negotiation, he explained, had come down to a price. Apple had wanted a chip at a particular number “and not a nickel more,” and that number sat below what Intel’s finance organization had forecast as the chip’s manufacturing cost. He had run the math, and the math had been clear. “I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.” Then came the line that would be quoted in business-school case studies for the next decade. “The lesson I took away from that was, while we like to speak with data around here, so many times in my career I’ve ended up making decisions with my gut, and I should have followed my gut. My gut told me to say yes.”
It was, Madrigal noted, the only place in two hours of conversation where he heard regret. The rest of Intel’s mistakes during the same period would be told as decisions; this one was told as a confession. It felt different because the iPhone refusal sat at the center of a pattern. It was not an outlier but one instance of a way of thinking that had been governing the company since well before Otellini took the chief executive’s office, and that, judged by the standards of Clayton Christensen’s The Innovator’s Dilemma, was textbook to the point of embarrassment. Christensen had written in 1997 that great companies fell because they did the thing that had made them great too well. They listened to their best customers, optimized for their highest-margin products, and rationally refused the small, low-margin opportunities that, in time, ate them alive. Intel in the Otellini decade was the case study Christensen could not have written more clearly if he had tried.
The pattern had a name inside the company. The “fifty percent rule” was the unwritten rubric Intel used to evaluate any new product. To survive an internal review, a chip had to project gross margins north of fifty percent across its lifecycle, ideally well north. The rule existed for reasons that, in Intel’s history, had been load-bearing. The microprocessor business was capital-intensive on a scale almost no other industry matched: a leading-edge fabrication plant cost two to three billion dollars in the early 2000s and would soon cost more. Each new process node required billions more in research and development to perfect. The cash flow that funded the next fab came from the margin on the current chip. Drop margins, the logic went, and the engine seized. Grove had taught the company that vertical integration, sole-sourcing, and high gross margin were not luxuries but the necessary conditions for staying alive on the leading edge. By the time Otellini inherited the chair from Craig Barrett, the rule had hardened into a cultural reflex. Anything below fifty percent margin failed the smell test before it reached the engineering review.
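The screen itself was simple arithmetic. A minimal sketch of how such a rule filters products, using invented prices and costs rather than any of Intel’s actual figures, looks like this:

```python
# Illustrative sketch of a fifty-percent gross-margin screen.
# Every price and unit cost here is hypothetical, chosen only to show
# the shape of the arithmetic; none are Intel's real numbers.

def gross_margin(price, unit_cost):
    """Gross margin as a fraction of the selling price."""
    return (price - unit_cost) / price

def passes_screen(price, unit_cost, threshold=0.50):
    """True if a product clears the margin bar, regardless of volume."""
    return gross_margin(price, unit_cost) >= threshold

# A desktop processor sold at $250 against $60 of manufacturing cost
# clears the bar comfortably: 76 percent margin.
print(passes_screen(250, 60))   # True

# A phone application processor sold at $10 against $6 of cost fails:
# 40 percent margin, and the screen never asks how many might ship.
print(passes_screen(10, 6))     # False
```

The last comment is the important one: volume never enters the test, which is exactly the property that made the rule safe inside the PC business and blinding outside it.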
That reflex had already cost Intel one large bet by the time the iPhone arrived, though Intel was telling itself a different story about the bet. The story was Itanium, and to understand the iPhone refusal it helps to start there.
In November 1993, Hewlett-Packard’s chief processor architects walked into Intel’s Santa Clara headquarters with an idea. HP had been running its enterprise servers on its proprietary PA-RISC architecture and could see, the way IBM and Sun and DEC could also see, that the cost of staying on the leading edge of microprocessor design was about to outrun the volumes any single workstation maker could field. HP wanted a partner. It was bringing a research project called Wide Word, in which long instruction packets would let the compiler, rather than the chip, schedule parallel operations. The technique had a textbook name, very long instruction word, and a marketing name, EPIC, for explicitly parallel instruction computing. Intel’s engineers looked at the HP designs and recognized ideas they had been circling themselves. In June 1994, Intel and HP announced a joint development effort. The first chip, codenamed Merced, would ship in 1998.
It did not ship in 1998. By the time the first Itanium processor reached customers on May 29, 2001, the architecture was almost three years late, and the Usenet wits had already given it a nickname that the British technology press would popularize: Itanic. The performance, when the chip finally arrived, was a disappointment by every metric the industry cared about. On x86 software, the lifeblood of the existing server installed base, Itanium emulation ran at speeds equivalent to a 100-megahertz Pentium, even though the chip itself was clocked at 800 megahertz. Compilers struggled with EPIC’s premise: the static scheduling it demanded meant the compiler had to predict, at build time, the cache misses and branch outcomes that only reveal themselves at run time. The second-generation McKinley parts that arrived in 2002 ran respectably on recompiled code, but by then they were being measured against a market that had moved.
The market had moved because of AMD. In Sunnyvale, a smaller company that Intel had spent the prior decade trying to suffocate had taken a different bet: rather than abandoning x86 for a new architecture, it would extend x86 to sixty-four bits. The internal codename was Hammer. The product, the Opteron, shipped in April 2003 with a feature set that was, in retrospect, devastatingly pragmatic. Existing thirty-two-bit code ran natively at full speed. Sixty-four-bit code ran natively at full speed. Customers who wanted to migrate could do so application by application without rewriting their stack. By 2004, Intel had capitulated and announced that its own Xeon server processors would adopt AMD’s sixty-four-bit extensions, which it relabeled EM64T and, later, Intel 64. By 2008, Itanium held fourth place in the enterprise market behind x86-64, IBM’s Power, and Sun’s SPARC; roughly fifty-five thousand Itanium servers shipped that year against more than eight million x86 servers, and about eighty percent of the Itanium revenue came from HP, the partner that had brought the architecture to Intel in the first place. The total cost of the program was estimated by industry analysts at five billion dollars by the time the first chip shipped and well above ten billion by the end. Pat Gelsinger, who had run Intel’s server group during the Itanium years and would later return as chief executive, told the New York Times after he left Intel that the program had eaten “ten billion dollars of investment” before any of it began to pay back. By the time Intel discontinued the last Itanium part, the architecture had become a cautionary story about what happened when a chip company let its strongest customer drive its product roadmap into a place its existing customers refused to follow.
The lesson Intel took from the Itanic, internally, was not the lesson Christensen would have prescribed. It was not that betting against compatibility was dangerous, or that customers preferred incremental improvement to architectural revolution. It was a narrower, more comforting moral: stick to x86. The architecture Intel had introduced in the late 1970s and refined through the Pentium era was the company’s actual asset. Bet against it and you got Itanic. Bet on it, double down on it, and you stayed Intel.
That lesson was being applied with particular vigor to a small business inside Intel that, by 2005, was already orphaned in the company’s strategic imagination. In 1998, as part of a settlement that resolved a sprawling lawsuit with Digital Equipment Corporation, Intel had picked up DEC’s StrongARM design team for roughly seven hundred million dollars. StrongARM was a low-power processor based on the British ARM architecture, built by the DEC engineers who had designed the Alpha and who turned the same high-speed circuit techniques toward handheld-class power budgets; the chip had found its market in personal digital assistants and early smartphones. Intel kept the StrongARM parts shipping, developed a successor line it called XScale, and, under Craig Barrett, treated the business as a serious bet. By 2002, Intel was shipping the PXA series ARM processors into the Palm and BlackBerry market. By 2004, the PXA270 was running in handhelds and prototype mobile phones across the industry, with a SIMD multimedia extension Intel called Wireless MMX that no other ARM licensee had implemented. The XScale team had a roadmap into the gigahertz range. They had, by mid-decade, perhaps fourteen hundred engineers working on it.
The XScale team also had a problem. ARM processors did not earn fifty percent gross margins. Mobile phones in 2005 retailed for a couple of hundred dollars, and the application processor inside one cost the manufacturer a single-digit number of dollars. The arithmetic that made every Pentium a margin gusher made every PXA chip a margin sinkhole, by the standards Intel applied to itself. And because XScale implemented someone else’s instruction set, every PXA design carried a royalty to ARM Holdings, a Cambridge-based licensing house with no fab and no army of process engineers, at a moment when Intel’s process technology was, on its own benchmark, the best in the world.
Otellini, three months into the chief executive’s chair, presided over the strategic review that concluded Intel should not be in this business. The decision was announced on June 27, 2006: the XScale PXA mobile processor business would be sold to Marvell Technology Group, a Bermuda-registered fabless designer, for approximately six hundred million dollars. About fourteen hundred Intel engineers, including most of the senior XScale architects, transferred to Marvell with the assets. The sale closed on November 9, 2006. In Apple’s Cupertino offices, where engineers under Tony Fadell and Steve Jobs had been quietly working on a project codenamed Purple for the better part of two years, the news landed with a particular kind of awkwardness. The chip Apple had been considering as the spine of the device was now somebody else’s product.
Whether what happened next constituted a formal Intel-Apple negotiation or an extended series of conversations that never quite became one is the kind of question on which the participants have given different answers. In Walter Isaacson’s authorized biography, Jobs said that Apple had wanted Intel to do “this big joint project to do chips for future iPhones,” and that the deal had failed for two reasons. “One,” Jobs said, “was that they are just really slow. They’re like a steamship, not very flexible. We’re used to going pretty fast. Second is that we just didn’t want to teach them everything, which they could go and sell to our competitors.” Otellini, interviewed by the same biographer, framed the failure as a price disagreement combined with an unresolved question of who would own the design. In his Atlantic interview two years later, he reduced it to the cost projection: Apple had asked for a number, the number had been below his finance team’s forecast, he could not see the volume that would justify going below cost, and he had said no.
The likeliest reconstruction, drawing on the careful work of analysts like Ben Thompson at Stratechery and the Chip Letter’s later examination of the XScale era, is that the conversations between Otellini and Jobs in late 2005 and through 2006 were never quite a formal bid. Apple was secretive about Purple to a degree that even seasoned Intel executives found disconcerting. The original iPhone’s eventual application processor, Samsung’s S5L8900, was a relatively modest part: an ARM1176 core clocked down to four hundred and twelve megahertz to spare the battery, with a die small enough that teardown analysts priced it at only a small fraction of the phone’s bill of materials. Intel’s chip teams, looking at the customer’s request through the fifty-percent-margin lens, had no way to model how a part priced that aggressively could ever pay back the engineering effort, especially when Apple was simultaneously refusing to share what its actual sales projections looked like. The math, from where Otellini was sitting, did not pencil. The math, from where Jobs was sitting, did not need to pencil yet, because what he was building did not exist yet, and would only exist if a chip company was willing to invest ahead of its own data.
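A toy payback model makes both readings of that math visible. The figures below are invented for illustration, since neither Apple’s asking price nor Intel’s internal cost forecast was ever disclosed; only the structure of the calculation is the point:

```python
# Hypothetical payback model for the iPhone chip decision. Every number
# here is invented for illustration; the real price, cost forecast, and
# volumes were never made public.

def program_profit(price, unit_cost, units, engineering_cost):
    """Lifetime profit of a chip program: per-unit margin times volume,
    minus the up-front engineering investment."""
    return (price - unit_cost) * units - engineering_cost

price          = 10              # dollars per chip, "and not a nickel more"
forecast_cost  = 12              # finance's projected unit cost, above the price
actual_cost    = 8               # what the unit cost might have been in hindsight
forecast_units = 10_000_000      # the volume anyone thought plausible in 2006
actual_units   = 1_000_000_000   # "100x what anyone thought"
engineering    = 500_000_000     # up-front design investment

# With a forecast cost above the price, every unit loses money, so more
# volume only deepens the loss -- not something you can make up on volume.
print(program_profit(price, forecast_cost, forecast_units, engineering))  # about -$520M
print(program_profit(price, forecast_cost, actual_units, engineering))    # about -$2.5B

# If the cost forecast was wrong and the volume was 100x, the identical
# decision flips from obviously bad to obviously good.
print(program_profit(price, actual_cost, actual_units, engineering))      # about +$1.5B
```

Under the forecast Otellini was shown, the program loses money at any volume; under the numbers that actually materialized, the same arithmetic flips sign, which is the whole content of his regret.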
Christensen’s framework explained the disagreement almost too cleanly. Intel was a company built for sustaining innovation, optimized around its core business. Its customers, the PC original-equipment manufacturers and the server builders, asked for faster x86 chips at higher margins, and Intel delivered. A new market with low margins, unproven volumes, a different instruction set, and a customer demanding deep design control was, by every metric Intel applied, a bad investment. The decision Otellini made was the decision the rule book demanded he make. The rule book happened to be the wrong rule book. It was always going to be the wrong rule book the moment the volumes flipped.
The volumes flipped fast. By 2010, three years after the first iPhone, Apple was selling tens of millions of them, plus tens of millions of iPod Touches and the new iPad, all running ARM-based Apple processors that Apple had begun designing in-house after acquiring P.A. Semi in 2008. In 2011, global smartphone shipments passed PC shipments for the first time. Each one of those smartphones contained a processor not made by Intel. The mobile market that Intel had measured against the wrong yardstick had, in five years, become roughly the same size as the PC market it had spent a generation dominating, and Intel’s share of it was approximately zero.
The company tried, intermittently, to fix the problem. In 2008, Intel introduced the Atom, a low-power x86 chip aimed at the netbook category and, eventually, at smartphones and tablets. The first smartphone-targeted variant, codenamed Moorestown, debuted in 2010 and proved too power-hungry for any major phone maker to ship. The second iteration, Medfield, finally appeared in a commercial smartphone in April 2012, India’s Lava Xolo X900, followed by handsets from Orange and Lenovo. None of them sold meaningfully. The architecture mismatch with the Android software ecosystem, by then standardized on ARM, was a wall against which Intel could only batter itself. By 2014, under Otellini’s successor Brian Krzanich, Intel adopted a strategy internally called contra-revenue: rather than charge tablet manufacturers full price for Atom chips, Intel effectively paid them to use the chips, booking subsidies against its own revenue in exchange for design wins it hoped would compound into a real business. In its peak year, 2014, Intel’s mobile and communications group posted an operating loss of $4.21 billion, a number Bloomberg’s analysts at the time noted would have bankrupted any of Intel’s mobile competitors outright. In April 2016, Intel announced it was canceling its next-generation smartphone chips, the Broxton parts and the SoFIA family, and effectively retiring from the smartphone processor market altogether. Total mobile losses across the contra-revenue years ran to roughly twelve billion dollars, depending on which line items one chose to include in the accounting.
Through all of it, the core Intel business kept printing money. Otellini’s tenure, by the metrics he was hired to deliver, was a triumph. Annual revenue grew from $38.8 billion in 2005 to $53.3 billion in 2012. The company generated more sales during his eight years as chief executive than it had in the prior thirty-seven combined. The board, which had elected him on the assumption that the next phase of Intel’s life would be about extracting maximum operational performance from the existing franchise rather than reinventing a new one, got exactly what it had asked for. When Otellini announced his retirement in November 2012, he was leaving behind a balance sheet that almost no other technology company on earth could match.
He was also leaving behind a company that had ceded the future of computing to a different architecture, made by different people, in different fabs. Apple’s A-series chips were being designed in Cupertino and manufactured first by Samsung and then, from 2014, by TSMC in Hsinchu, a Taiwanese foundry whose entire business model Intel had spent two decades dismissing as low-margin contract work. Qualcomm, a fabless designer in San Diego, had become the dominant supplier of mobile modems and applications processors to the rest of the smartphone industry. ARM Holdings, the British architecture house Intel had refused to keep paying royalties to, now licensed the instruction set inside nearly every consumer-facing computing device on earth. The microprocessor monopoly that Grove had built and Otellini had operated had not been challenged from within the PC market. It had been routed around by a market the PC incumbents had refused to take seriously when it was small.
Otellini died in his sleep at his Sonoma County home on October 2, 2017, at sixty-six. The obituaries recited his revenue numbers and the WWDC moment with Jobs, and quoted from the Atlantic interview. By the time they ran, the strategic situation Otellini had described to Madrigal had hardened past anything any single executive could undo. Intel was struggling to stay on the leading edge of process technology. Its biggest customers were beginning to design their own chips. The company that, in 2005, had looked like the most permanent installation in Silicon Valley was beginning to look more like an old industrial firm in a town the highway had bypassed.
The exact arithmetic of the bypass would take a decade to play out, and the manufacturing failures behind it had their own characters and their own years. The seed of the trouble, though, was already in the rule book Otellini had inherited and dutifully enforced. The fifty percent margin, the x86 loyalty, the suspicion of someone else’s instruction set, the conviction that a chip priced below its forecast cost was a chip that should not be made: every one of those rules had been written into Intel’s DNA by people who had been right at the time they wrote them, and every one of those rules had been the wrong rule for the decision Otellini was actually facing in 2006. He had read the data correctly. He had read the lesson incorrectly. And he had told Madrigal, on his way out, the truest thing any chief executive of his generation had said about why Intel did what it did, which was that he had known better and had not trusted himself to act on it.
Inside the company, the Atlantic interview was passed around for years afterward as a kind of reverse training document, the thing not to do, said out loud by the man who had done it. By then the question was no longer whether Intel had blown a category. The category had been blown for half a decade. The question was whether the same instincts that had cost Intel mobile would cost it the next thing, too.