Neoclouds and Circular Money
The financialization of compute: CoreWeave (founded 2017 as a crypto miner; pivot to GPU-as-a-service; ~60% Microsoft revenue concentration; March 2025 IPO; market cap above $70B at peak); Lambda Labs and the Series E ramps; Crusoe and the Stargate Abilene build; Nebius (the Yandex Europe spinout); Together / Vultr / Voltage Park / Fluidstack as the second tier. The circular financing concerns: Nvidia invests in CoreWeave; CoreWeave buys Nvidia GPUs; CoreWeave leases them to Microsoft and OpenAI; Microsoft funds OpenAI; OpenAI books capacity at CoreWeave; the same dollars round-trip through the same balance sheets. The 2000 telecom-build parallel. Bain's "$800B question." Jim Chanos, Michael Burry, the bear case. → Why the chip war's commercial layer became its own systemic risk story.
On the morning of Friday, March 28, 2025, three former commodities traders stood on a balcony at the Nasdaq MarketSite in Times Square and rang a bell. Michael Intrator, at the microphone, was the chief executive of a company most of Wall Street had not heard of three years earlier. The two flanking him, Brian Venturo and Brannin McBee, were his cofounders. None of the three had a computer-science degree. None had run a cloud business before 2020. The shares behind them, ticker CRWV, were about to begin trading at forty dollars apiece, the bottom of a range the underwriters had cut twice in the preceding ten days. By the close the stock would sit exactly where it had opened, a flat-line debut the financial press would call disappointing. The valuation, even at the cut price, was twenty-three billion dollars.
The most consequential single buyer at that price was not a mutual fund. Nvidia, on the eve of the deal, had committed to take roughly two hundred and fifty million dollars of stock at the offering price. Two years earlier the same chipmaker had been an early Series B investor at a two-billion-dollar pre-money valuation. The customers who together supplied seventy-seven percent of CoreWeave’s 2024 revenue were Microsoft, the company whose Azure cloud was supposed to make this kind of niche provider unnecessary, and an undisclosed second party the financial press would later confirm was OpenAI. Microsoft had spent thirteen billion dollars on OpenAI. OpenAI had committed to spend tens of billions on Microsoft Azure. Microsoft was paying CoreWeave to host that compute because Microsoft itself could not build it fast enough. CoreWeave was paying Nvidia for the GPUs. Nvidia was funding CoreWeave so that CoreWeave could keep buying Nvidia GPUs. Standing on the balcony, Intrator was the smiling middle of a circle that, viewed from a certain angle, looked more like a closed loop than a market.
To understand how the loop had formed, it helped to start where Intrator had started, which was nowhere near a data center. In 2013 he and Venturo had cofounded a small natural-gas hedge fund in Manhattan called Hudson Ridge Asset Management. Venturo’s edge was a machine-learning rig he had built to ingest pipeline-flow and weather data, a hand-rolled GPU box a quant team in 2014 would have considered exotic. By 2017 the fund was struggling, ether was rising, and the GPUs that had been crunching gas curves looked, on a price-per-hash basis, like a better trade. Intrator, Venturo, and McBee registered Atlantic Crypto Corp. in Weehawken, plugged a cluster of Nvidia GPUs into a corner of their Manhattan office, and started mining ether. The first rigs sat on a pool table. When the pool table filled they moved to a closet. When the closet filled they leased a small site in New Jersey and added a CTO, Peter Salanki, who actually understood data-center networking. By the end of 2018 they had a few thousand GPUs and enough operating cash flow to keep the lights on, which a hedge-fund team that had stopped trading commodities regarded with gratitude.
The pivot came in halves. The 2018-2019 crypto crash destroyed the unit economics of GPU mining and forced anyone who had built a hardware base to find a second use for it. By 2020, academic and small-commercial customers were willing to rent unused GPU time by the hour for machine-learning workloads, and the rental rates implied gross margins better than any number Atlantic Crypto had ever printed in mining. The company renamed itself CoreWeave in October 2021 and began signing contracts with anyone who would buy: Stable Diffusion shops, indie game studios, academic labs. Then ChatGPT detonated, the H100 became the most rationed commodity on earth, and Nvidia, looking for a way to seed the market for its data-center silicon outside the established cloud trinity of AWS, Azure, and Google Cloud, decided to bet on a partner that would buy whatever it could ship. That partner was CoreWeave.
The decisive vote of confidence came in April 2023, when Nvidia led a hundred-million-dollar slice of a two-hundred-and-twenty-one-million-dollar Series B at a two-billion-dollar pre-money valuation. The check was unusual. Nvidia did not normally take large positions in cloud customers, certainly not in cloud customers whose only differentiation was that they bought a lot of Nvidia chips. The check came with something more valuable than money. It came with allocation. When Nvidia produced H100s, CoreWeave received units ahead of larger and better-capitalized buyers. Microsoft, watching from a campus that had spent two decades as Nvidia’s largest single customer, found itself in 2023 unable to take delivery of enough Hopper-generation chips to feed OpenAI’s training and inference demand. Microsoft’s solution was to write CoreWeave the kind of multi-billion-dollar contract that, in conventional cloud economics, made no sense. The justification, repeated across Satya Nadella’s earnings calls, was simple and embarrassing. Microsoft’s own data centers could not be built fast enough. CoreWeave had H100s that could be plugged in and serving inference inside ninety days. The premium Microsoft paid was the cost of buying time.
This was the central misframing in most public coverage. CoreWeave was not, in the way that AWS or Azure or Google Cloud were, a cloud-computing company. It was a financial vehicle for moving Nvidia chips into operational service faster than the hyperscalers could digest their own backlogs. The business depended on three arbitrages: Nvidia’s allocation preference, which let CoreWeave acquire chips other buyers could not; CoreWeave’s willingness to pledge those chips as loan collateral at terms a hyperscaler would not accept; and the hyperscalers’ willingness, stuck in their own procurement queues, to lease back capacity from a vehicle redundant with their own. Strip away the cloud-native vocabulary and CoreWeave was, on its balance sheet, a leveraged warehouse of Nvidia silicon, financed by asset-backed debt collateralized against H100s and H200s and repaid out of multi-year contracts written by customers who all needed the silicon faster than they could acquire it themselves.
The financing was the part the trade press paid least attention to and that most plainly described what kind of company this had become. In August 2023, less than four months after the Series B, CoreWeave announced a 2.3-billion-dollar debt facility led by Magnetar Capital and Blackstone. The collateral was, for the first time in industry memory, a pool of Nvidia H100s themselves. Rating agencies and bank syndicates had to invent the discipline of GPU-backed financing in the moment, treating the chips as depreciating industrial assets whose residual values could be modeled the way a leasing shop modeled airframes or rail cars. In May 2024, Blackstone and Magnetar led a follow-on of seven and a half billion. By 2026 the cumulative GPU-backed debt had crossed ten billion. In late March 2026 the company closed an investment-grade-rated 8.5-billion-dollar facility, the first GPU-backed deal of that quality the public markets had ever seen. By April 2026 it was tapping the junk bond market for another 1.25 billion alongside a 3.1-billion-dollar leveraged loan. Banks that two years earlier had refused to underwrite GPU collateral were now competing for tranches.
The S-1 CoreWeave filed on March 3, 2025, made the dependency arithmetic public. Microsoft accounted for sixty-two percent of the company’s 1.92 billion dollars of 2024 revenue and sixty-seven percent of early-2025 revenue. Two customers supplied seventy-seven percent of 2024 revenue. CoreWeave reported a 1.9-billion-dollar loss for the year and carried about eight billion of debt against four billion of equity. The auditors flagged a material weakness in internal controls. The risk-factor section warned that the loss of either of the two top customers would have a “material adverse effect” on the business. In any prior cycle, a filing like this would have required the underwriters to pull the IPO. The underwriters pushed the price down to forty dollars to clear the book, took an extra fee for the uncertainty, and brought it to market. By June, the founders, who had each cashed out roughly a hundred and fifty million dollars in pre-IPO secondary sales, were worth several billion on paper. By summer the stock had crossed one hundred dollars. By late autumn it touched one hundred and eighty-seven, putting market capitalization north of seventy billion at its peak. The trajectory had taken the company from a pool table in Manhattan to a junk-bond-issuing infrastructure giant in less than a decade.
CoreWeave was the largest of a category SemiAnalysis had begun calling neoclouds. A neocloud was a GPU-rental business optimized for AI workloads, run leaner than a hyperscaler, less diversified, more concerned with raw racks of accelerators, a competent network fabric, and a steady stream of vendor financing. The four giants were CoreWeave, Crusoe, Lambda, and Nebius. Beneath them sat a second tier including Together AI, Vultr, Voltage Park, and Britain’s Fluidstack. None had existed in their current form before 2020. All depended on a single supplier for the input that defined their cost base. All had structured their balance sheets around the assumption that the supplier would keep allocating, that customers would keep contracting, and that last year’s chip would not collapse in residual value before this year’s paid for itself.
Lambda was the closest thing to a sober cousin. Founded in 2012 by twin brothers Stephen and Michael Balaban, it had originally sold deep-learning workstations to research labs, then drifted through API products, and eventually pivoted to a public GPU cloud as the demand wave from late 2022 reached down through the academic tier and pulled in startups that could not get hyperscaler allocation. Lambda’s Series D, in February 2025, brought in 480 million at a 2.5-billion valuation, with Nvidia participating. Its Series E, in November 2025, raised more than 1.5 billion led by TWG Global and the U.S. Innovative Technology Fund at a 5.9-billion valuation. Almost simultaneously, Nvidia signed a 1.5-billion-dollar agreement to lease back from Lambda eighteen thousand GPUs Lambda had previously bought from Nvidia, an inversion the trade press treated as a curiosity and SemiAnalysis treated as a structural feature. By early 2026 Lambda had also announced a multi-billion-dollar arrangement with Microsoft to deploy GB300 NVL72 systems in liquid-cooled U.S. data centers, repeating CoreWeave’s pattern in miniature.
Crusoe arrived at the same place by a different route. Cully Cavness and Chase Lochmiller had cofounded the company in Denver in 2018 with an unusual hardware story. Oil wells in the Bakken, the Powder River, and the DJ Basin flared gas as a byproduct of crude production because the gas was uneconomic to capture and ship. Crusoe built containerized generators that ran on the flared gas and used the electricity to power Bitcoin mining at the wellhead. The fuel was nearly free, the carbon footprint was lower than burning the gas in open atmosphere, and the data centers were modular and movable as the wells declined. By 2022 Crusoe was running forty-plus mobile sites and had a partnership with ExxonMobil. As crypto economics deteriorated and AI demand surged, Crusoe pivoted the same skill set into AI infrastructure. By 2024, AI cloud services were already forty-five percent of revenue. In December 2024 the company closed a 600-million-dollar Series D at a 2.8-billion valuation, with Nvidia participating. In March 2025 it sold its entire Bitcoin mining operation to NYDIG, exiting crypto outright. In October 2025 a 1.4-billion-dollar Series E led by Mubadala and Valor Equity valued the company at just over ten billion. In ten months it had more than tripled in valuation by ceasing to be a crypto miner and becoming the build partner for Stargate.
Stargate’s flagship campus, the 1.2-gigawatt complex on the Lancium Clean Campus in Abilene, Texas, was Crusoe’s biggest physical artifact. Crusoe broke ground in June 2024 on a project initially scoped at 200 megawatts and code-named Project Ludicrous. The campus was conceived as eight 100-megawatt buildings, lashed into a single integrated network fabric capable of running training jobs across the entire site as a single logical machine. By June 2025 Oracle had begun delivering Nvidia GB200 racks to the first two buildings. By September 2025 the first phase was live and serving OpenAI inference and training workloads through Oracle Cloud Infrastructure. Crusoe topped out the eighth and final building in early 2026. The structure was the standard neocloud refrain in larger letters. Crusoe built. Oracle leased. OpenAI consumed. The chips were Nvidia’s. The financing flowed through Crusoe’s debt facilities, Oracle’s balance sheet, and the equity stakes that SoftBank, MGX, and OpenAI itself had taken in Stargate’s holding entity. Each participant booked the same compute on different lines of different statements.
Nebius rounded out the four. Its origin was the strangest. In 2022, sanctions following Russia’s invasion of Ukraine forced Yandex N.V., the Dutch-domiciled holding company over the Russian search giant, to divest its Russian operations. The remaining international assets, including a Finnish data-center campus, an autonomous-driving program, and a small commercial AI cloud, became Nebius Group N.V. under Arkady Volozh, the Russian-Israeli founder who had spent two decades building Yandex and now had to recreate a global technology company in the West, in a hurry, with no consumer brand and no Russian revenue. Nebius resumed Nasdaq trading in October 2024. In December 2024 it raised seven hundred million dollars in a private placement led by Nvidia and Accel. By the end of 2025 it was guiding to roughly five hundred and fifty million in revenue and a 2026 outlook between three and three and a half billion. In February 2026 it announced a 27-billion-dollar multi-year infrastructure deal with Meta to power AI workloads out of a new Lappeenranta, Finland data center. The stock, which had traded around twenty-one in late 2024, peaked above eighty-four in March 2026; by early May the company was valued at thirty-six billion. A spin-off of a sanctioned Russian search engine had become, in eighteen months, one of Europe’s most consequential AI infrastructure companies, and Nvidia owned a piece of it.
The pattern, in aggregate, was the part the analysts kept circling back to. Nvidia invested in the neoclouds. The neoclouds bought Nvidia chips. The neoclouds borrowed against the chips. The hyperscalers and frontier labs leased the chips back. The chip company funded with equity what it booked as revenue. The hyperscalers funded with equity stakes the labs whose compute spend they then booked as revenue of their own. The labs ran on commitments to spend they could only meet by raising more equity, often from the same chip company or hyperscalers whose backlogs they were already paying. The September 2025 announcement that Nvidia would invest up to a hundred billion dollars in OpenAI to underwrite a ten-gigawatt buildout, conditional on OpenAI deploying that capacity on Nvidia silicon, made the structure unambiguous. The chip company was bankrolling the customer who was already its largest indirect end-user, in exchange for a commitment to keep being the largest indirect end-user. Two days later, OpenAI announced a similar warrant arrangement with AMD, sized in the tens of billions, conditional on MI450 deployments. By the end of 2025, OpenAI’s cumulative compute commitments across Microsoft, Oracle, AWS, CoreWeave, Nvidia direct, AMD direct, and Google Cloud had passed a trillion dollars on the most aggressive analyst counts. Sam Altman’s lab, which had lost roughly five billion dollars in 2024 on revenue that would remain under twenty billion even in 2025, had committed to spend roughly fifty times its revenue on infrastructure that other companies were, in turn, committing equity to build for it.
The arrangement that made the structure most legible to skeptics was the deal Nvidia signed with CoreWeave in September 2025. In an SEC filing dated September 9, Nvidia agreed to purchase up to 6.3 billion dollars of CoreWeave’s unsold cloud capacity through April 13, 2032. The plain reading was that Nvidia, the supplier, had agreed to buy any of CoreWeave’s GPU time that CoreWeave failed to sell to other customers. The hardware Nvidia had sold to CoreWeave was now revenue Nvidia had partly guaranteed by promising to repurchase the unused output of the same hardware. The same dollar, on the more critical reading, started its life as Nvidia GPU revenue, became CoreWeave capital expenditure, was repaid out of leases booked as Microsoft and OpenAI cost of goods sold, was funded on Microsoft’s side by hyperscaler capex and on OpenAI’s side by Microsoft and Nvidia equity, and ended its life, if no other end customer materialized, as Nvidia repurchasing the GPU time it had originally sold. The architecture had a name the financial press began to attach to it from late 2025 onward. They called it circular financing.
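The round-trip described above can be made concrete with a toy ledger. This is a minimal sketch, and everything in it is illustrative: the entities are real, but the hundred-dollar flows and the booking labels are stand-ins, not amounts or line items from any filing.

```python
# Toy trace of the "same dollar" round-trip described in the text.
# All amounts and booking labels are illustrative, not from any filing.

ledger = []  # (payer, payee, amount, how payer books it, how payee books it)

def flow(payer, payee, amount, payer_line, payee_line):
    ledger.append((payer, payee, amount, payer_line, payee_line))

d = 100  # an illustrative hundred dollars of capital

flow("Nvidia",    "CoreWeave", d, "equity investment",       "equity raised")
flow("CoreWeave", "Nvidia",    d, "capex (GPU purchase)",    "GPU revenue")
flow("Microsoft", "CoreWeave", d, "cost of revenue (lease)", "contract revenue")
flow("Nvidia",    "CoreWeave", d, "capacity backstop",       "revenue on unsold GPU time")

# Net cash position per entity once the loop completes.
net = {}
for payer, payee, amt, _, _ in ledger:
    net[payer] = net.get(payer, 0) - amt
    net[payee] = net.get(payee, 0) + amt

print(net)  # CoreWeave nets +200, Nvidia -100, Microsoft -100
```

In the sketch CoreWeave books two hundred dollars of revenue, half of it paid by its own supplier-investor, while only Microsoft's hundred dollars entered the loop from outside. The zero-sum of the net positions is just conservation of cash, but it shows how gross revenue inside the loop can run well ahead of external money entering it.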
The skeptics had a precedent in mind. Between 1996 and 2001, the United States telecommunications industry built the long-haul fiber boom that followed deregulation under the 1996 Telecommunications Act. The carriers, WorldCom, Global Crossing, Qwest, Williams Communications, and dozens of regional players, financed the buildout with debt. The equipment vendors, principally Lucent Technologies and Nortel Networks, financed the carriers’ purchases with vendor financing of their own. Lucent at its peak had committed roughly eight billion dollars in vendor financing against revenue of around thirty-three billion, a quarter of its top line directly underwritten by loans from itself to its customers. Money lent to WorldCom to buy Lucent gear arrived back at Lucent as Lucent revenue, while the loan sat as a notionally good asset on Lucent’s balance sheet. The structure worked as long as the carriers could service the debt out of telecom revenue. When usage assumptions cracked in 2000 and 2001, the carriers stopped paying, the loans went bad, and the equipment vendors imploded. Lucent’s stock fell from above forty dollars in late 2000 to roughly six dollars by September 2001. Nortel laid off two-thirds of its workforce in 2001. WorldCom went bankrupt in 2002 in what was, at the time, the largest accounting fraud in American history. Global Crossing went bankrupt the same year, leaving thousands of miles of dark fiber as the cycle’s defining symbol.
The bears of 2025 and 2026 saw the same shape. The chipmaker as Lucent. The neoclouds as WorldCom and Global Crossing. The hyperscaler capex commitments as the carriers’ debt-financed buildouts. The closed loop now wired through equity stakes and warrants instead of straight loans, but functionally equivalent in that the supplier was underwriting demand for the supplier’s own product. Jim Chanos, the short seller who had identified Enron’s accounting fictions in 2001, made the comparison directly. In a series of podcast appearances and notes through autumn 2025, Chanos argued that CoreWeave and Oracle were both depreciating their AI hardware over a roughly six-year schedule that the market for the underlying chips would not support. The H100 had seen rental rates fall from roughly eight dollars an hour in early 2024 to two or three by late 2025, a sixty-to-seventy percent decline in eighteen months as Hopper aged into the shadow of Blackwell. AWS had cut H100 and H200 prices by forty-four percent in a single move in June 2025. If the chips were depreciating that fast, Chanos argued, the six-year depreciation schedules were aspirational, and the reported “earnings” were partly an artifact of the schedule. The marginal customers were AI labs themselves burning cash, which meant the entire chain depended on continued venture and hyperscaler equity for several more years. If sentiment shifted, the order books could “evaporate quickly.”
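The schedule-versus-spot argument reduces to arithmetic one can sketch directly. The purchase price, utilization, and decay rate below are illustrative assumptions chosen only to match the rental path quoted above, eight dollars an hour falling to two or three inside eighteen months; none of them comes from a filing.

```python
# Back-of-the-envelope sketch of the depreciation argument.
# All inputs are illustrative assumptions, not figures from any filing.

H100_COST = 30_000      # assumed purchase price per GPU, dollars
HOURS_PER_YEAR = 8760
UTILIZATION = 0.7       # assumed fraction of hours actually rented

def book_value(years, life=6.0):
    """Straight-line depreciation over an assumed six-year schedule."""
    return max(H100_COST * (1 - years / life), 0.0)

def rental_rate(years, start=8.0, decline=0.55):
    """Hourly rate decaying ~55% a year, matching $8 -> $2-3 in 18 months."""
    return start * (1 - decline) ** years

# Compare book value with the rental revenue the GPU can still earn
# over the remainder of the six-year schedule (crude numerical integral).
for t in [0.0, 1.5, 3.0]:
    steps = 100
    remaining = sum(
        rental_rate(t + (6 - t) * i / steps)
        * HOURS_PER_YEAR * UTILIZATION * (6 - t) / steps
        for i in range(steps)
    )
    print(f"year {t:3.1f}: book value ${book_value(t):9,.0f}, "
          f"remaining rental revenue ${remaining:9,.0f}")
```

Under these assumptions the machine earns back its cost early; the year-zero line shows remaining revenue roughly double the purchase price, which is why the trade was put on in the first place. But by the eighteen-month mark the straight-line book value already exceeds the GPU's remaining earning power, and the gap widens from there. That overhang, scaled across millions of chips, is what the six-year schedules were accused of papering over.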
Michael Burry, the investor who had famously shorted the subprime mortgage market before 2008, made his concerns visible in a different register. Burry’s third-quarter 2025 13F filing disclosed put-option positions on roughly one million Nvidia shares and five million Palantir shares, with notional values of about 187 million and 912 million dollars respectively. In November 2025, weeks after the filing became public, Burry voluntarily de-registered his hedge fund, Scion Asset Management, from the SEC, ending his obligation to file future 13Fs. The signal was its own form of commentary. Burry had spent his career betting against extended speculative cycles. He had concluded that the AI cycle had become extended enough to merit a position, and had then chosen to stop disclosing what he was doing.
The most rigorous version of the bear case came from Bain & Company. In its annual technology report published in late 2025, Bain projected that meeting AI’s compute demand by 2030 would require roughly 500 billion dollars a year in incremental capex on data centers. Sustaining that level implied, on Bain’s modeling of historical capex-to-revenue ratios, somewhere around two trillion dollars a year in revenue from AI services. The actual revenue trajectory the same report estimated, even after generous assumptions, fell short of that target by about eight hundred billion dollars a year. The shortfall could close if AI revenue accelerated faster than Bain’s base case, if compute efficiency improved faster than the projection allowed, or if the industry simply spent for several more years at unsustainable ratios while waiting for revenue to catch up. It would not close if revenue grew at the pace of any prior software category. The gap had become known as the eight-hundred-billion-dollar question.
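The Bain figures quoted above reduce to two lines of arithmetic. The five hundred billion of capex, the two trillion of required revenue, and the eight-hundred-billion shortfall are the report's numbers as summarized here; the 2025 revenue base used for the growth-rate line is an illustrative assumption of this sketch, not a Bain figure.

```python
# The Bain arithmetic restated as a sanity check.
# Capex, required revenue, and shortfall are the report's figures as
# quoted in the text; the 2025 base is an illustrative assumption.

capex_2030 = 500e9           # annual incremental data-center capex by 2030
required_revenue = 2_000e9   # AI services revenue needed to sustain it
shortfall = 800e9            # Bain's projected annual gap
projected_revenue = required_revenue - shortfall

# Implied ratio: one dollar of capex per four dollars of service revenue.
ratio = capex_2030 / required_revenue

# Growth needed to reach the target by 2030 from an assumed ~$60B 2025 base.
base_2025 = 60e9
cagr = (required_revenue / base_2025) ** (1 / 5) - 1

print(f"implied capex-to-revenue ratio: {ratio:.0%}")
print(f"revenue Bain actually projects: ${projected_revenue / 1e9:,.0f}B per year")
print(f"growth rate needed to close the gap: {cagr:.0%} per year")
```

On these inputs AI services revenue would have to roughly double every year for five straight years to make the 2030 capex level self-sustaining, which is the quantitative content of the eight-hundred-billion-dollar question.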
The bull case, in fairness, was not weak. Microsoft’s commercial cloud revenue had crossed an annualized run rate of more than a hundred billion dollars by 2025, with the AI line growing fast enough that analysts believed it would be a third of the cloud business by 2027. Google’s cloud-AI revenue, broken out for the first time in 2025 disclosures, was running at multi-billion-dollar annualized rates and accelerating. Anthropic was on track to roughly seven billion of revenue in 2025. OpenAI’s revenue had grown more than four-fold from 2023 to 2025. The argument that the marginal user was paying for AI in a real way, that inference workloads were sticky, that productivity gains were appearing in white-collar work, was substantive. CoreWeave’s remaining performance obligations exceeded thirty billion dollars by late 2025. Nebius’s signed contracts approached fifty billion across the next several years. Crusoe’s bookings were measured in tens of gigawatts of pre-leased capacity. The neoclouds were not selling into a vacuum. They were selling into multi-year commitments from the largest enterprise IT spenders in the world.
The bear case did not deny any of this. It turned on a different argument, about the marginal dollar rather than the average one. The hyperscalers were generating real cash flow and could plausibly fund AI capex out of their own cash earnings for years. The labs, by contrast, were not, and their share of the marginal dollar of capex had grown. AI labs as a category were committing to compute spending an order of magnitude greater than current revenue, financing the gap through equity rounds whose investors were, increasingly, the chip company and the cloud companies whose products they would buy with the new equity. The neoclouds in turn were funding their share through GPU-backed debt whose residual values depended on assumptions about chip half-life that were sharply contradicted by spot rental prices. Nvidia’s top customers were, in many cases, recipients of its own equity, beneficiaries of its capacity backstops, or counterparties to warrants that conditioned future delivery on future orders. The system was internally coherent only as long as the cycle continued. Stop the cycle and the components could not be marked at anything close to their reported values, because their reported values were partly a function of the cycle continuing.
This was the disagreement surrounding the chip war’s commercial layer by the spring of 2026. The data did not yet decide it. The optimists could point to real revenue, real productivity, real customer commitments, and a chip company that had grown into the most valuable enterprise on earth not by accounting magic but by selling more of a thing the world had concluded it needed than any company had ever sold of any thing. The pessimists could point to depreciation schedules that did not match secondary market prices, customer concentrations that would have terminated public companies in any prior cycle, and a closed loop of investments and contracts that looked uncomfortably like the apparatus that had collapsed Lucent and WorldCom a generation earlier. The outcome would depend on whether AI revenue continued its 2023-to-2025 trajectory, or merely a softer version of it, for long enough to validate capex commitments that, by 2026, had gotten ahead of revenue by a margin no prior infrastructure cycle had attempted.
The structure had become its own systemic risk story. The chip war had begun, in 2018 and 2019, as a question about which countries could fabricate the world’s most advanced silicon. By 2026 it had grown an additional question about the financial structures the silicon was being sold through. A trillion-dollar sequence of commitments had been written between Nvidia, the hyperscalers, the labs, and the neoclouds, much of it cross-collateralized, much of it contingent on the continued willingness of equity markets to fund the labs and debt markets to fund the clouds. In the telecom build of the late 1990s, the contagion vector had been the carriers. In 2026, it was less obvious which node was the equivalent. Some analysts pointed to the labs, whose cash burn made them most exposed to a sentiment shift. Others pointed to the neoclouds, whose debt loads were tied to the residual values of the underlying silicon. Others still pointed to Nvidia itself, whose role as supplier, investor, capacity backstop, and warrant counterparty had concentrated enough of the system’s circular flows on a single balance sheet that an unexpected weakness anywhere would, eventually, surface there.
Through all of it, the wafers kept coming off the line. CoreWeave kept signing deals. In April 2026 it expanded its agreement with Meta and tapped the junk-bond market for another 1.25 billion. Nebius kept lighting up Finnish capacity for Meta and Microsoft. Crusoe kept pouring concrete in Abilene. Lambda kept signing GB300 commitments. Nvidia kept taking orders. The Federal Reserve’s financial stability reports had begun mentioning AI-related capex and asset valuations in their discussions of macro risk. The Treasury’s Office of Financial Research had circulated internal memos on cross-collateralization in the GPU-backed debt market. The SEC had begun asking questions about depreciation policies and customer concentration disclosures. None of these inquiries had reached the threshold of action. All of them existed.
The strangest feature of the moment was that the systemic risk question had developed not in spite of America’s success but because of it. The country that had built the chips, won the export-control fight on its own terms, marshaled half a trillion dollars of combined CHIPS Act funding and induced private capex, and put Nvidia at the top of the global market capitalization rankings, had also built a financial apparatus around the chips whose internal coherence depended on the cycle continuing. The vulnerability was not adversarial. It was endogenous. While Wall Street argued about whether the circular flows constituted economic substance, a different question was forming on the other side of the Pacific, where a Hangzhou hedge-fund subsidiary and a Shenzhen-based national champion had been working, on a different financial logic and with a different set of constraints, on whether any of the apparatus the Americans had built was the only path forward at all.