NVIDIA: From $375B collapse to AI infrastructure dominance

February 13, 2026
6 min

A crash, a write-down, and a timing mismatch

In August 2022, Nvidia reported a $1.34B charge, largely tied to inventory and revised demand expectations in Data Center and Gaming. The company was writing down excess supply. Demand had slowed. Growth stocks were being repriced. Its market value was collapsing.

At almost the same time, ChatGPT was preparing for its public launch.

The paradox is striking in hindsight: just as Nvidia was absorbing one of the sharpest drawdowns in modern corporate history, with $375B in market value erased between late 2021 and the end of 2022, the infrastructure it had been building for years was about to become indispensable.

Once that indispensability became clear, the market re-rated the company with unusual speed. Less than a year after losing more than half its market value, Nvidia crossed the $1T mark. By mid-2025, it had climbed past $4T.

How does a company go from “finished success story” to historic collapse — and then to becoming the backbone of a technological revolution?

A story that seemed complete

By early 2022, Nvidia seemed to have already won. It dominated gaming GPUs and had steadily expanded into data centers. It had positioned itself as a leader in accelerated computing for machine learning. The arc looked complete.

Then macro conditions flipped. Rising rates compressed growth valuations. Gaming demand slowed sharply. Crypto-mining demand, which had indirectly supported GPU sales, collapsed. Inventory accumulated across channels.

By the end of 2022, Nvidia’s market capitalization had fallen from roughly $736B to about $361B. More than half its value had vanished.
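The scale of that drawdown can be checked with simple arithmetic. A minimal sketch, using the approximate market-cap figures cited above (in billions of dollars):

```python
# Rough drawdown math from the market-cap figures quoted in the text (in $B).
peak = 736    # approximate market cap, late 2021
trough = 361  # approximate market cap, end of 2022

erased = peak - trough    # market value destroyed
drawdown = erased / peak  # fraction of the peak that was lost

print(f"Erased: ${erased}B")        # → Erased: $375B
print(f"Drawdown: {drawdown:.1%}")  # → Drawdown: 51.0%
```

A 51% decline is what "more than half its value had vanished" looks like in numbers.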

In the second quarter of fiscal 2023, revenue declined 19% sequentially. Gaming revenue dropped 44%. Gross margin was hit by a $1.34B inventory charge. This was not a minor correction; it was a demand shock.

And yet, even inside that turbulence, a different signal was visible: data center revenue remained strong, reaching $3.81B and growing 61% year-over-year.

The foundation was intact. The narrative was not.

The demand shock

ChatGPT’s public release in November 2022 did not invent artificial intelligence. It changed the urgency around it.

Generative AI moved from “research progress” to “competitive priority” almost overnight. Boards demanded AI strategies. Startups raised capital around AI-native products. Hyperscalers accelerated infrastructure buildouts.

Nvidia did not need to pivot. It was already positioned for that shift.

For more than a decade, the company had invested in accelerated computing (GPUs optimized for parallel workloads) as well as in the software stack that made those GPUs usable at scale. CUDA, under development for over 15 years, was long perceived as a niche tool for researchers. When generative AI demand surged, it became critical infrastructure.

Training large language models required more than powerful chips. It required optimized libraries, developer tooling, integration paths — an ecosystem. Nvidia had built that ecosystem early and at scale. Today, more than 5.9 million developers use CUDA and related tools.

As AI projects moved from experimentation to deployment, infrastructure budgets followed. Large-scale GPU clusters became a priority, and Nvidia’s data center platforms were already the default choice.

Revenue composition shows how quickly that shift materialized:

  • Q Jan 2023: Data center revenue — $3.62B

  • Q Jan 2024: Data center revenue — $18.4B

  • Q Jan 2025: Data center revenue — $35.6B
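
A quick sanity check on the growth implied by those quarterly figures (a sketch using only the revenue numbers above, in $B):

```python
# Year-over-year growth multiples implied by the quarterly data center
# revenue figures quoted above (in $B).
revenue_b = {"Jan 2023": 3.62, "Jan 2024": 18.4, "Jan 2025": 35.6}

quarters = list(revenue_b)
for prev, curr in zip(quarters, quarters[1:]):
    multiple = revenue_b[curr] / revenue_b[prev]
    print(f"{prev} -> {curr}: {multiple:.1f}x")  # roughly 5.1x, then 1.9x
```

Roughly a 5x jump in the first year, nearly doubling again in the second: close to a tenfold increase over two years.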

This was not a rebound in gaming demand. It was a structural acceleration in AI infrastructure spending.

The system that was built too early

Nvidia’s resurgence was not accidental. It reflected strategic choices made long before generative AI became mainstream.

Long-term thinking that looked irrational

Investments in CUDA and full-stack development began long before clear commercial applications existed. For years, these efforts looked like overinvestment. Why build deep software layers around hardware products?

Because once a platform scales, switching becomes exponentially more expensive.

By the time generative AI demand exploded, millions of developers were already using Nvidia’s tools. Switching away was not just a procurement decision — it was an ecosystem migration problem.

This is the core insight: the moat was never just silicon performance. It was accumulated developer capital.

Speed built on preparedness

When AI demand surged, Nvidia did not need to redesign its entire strategy. It needed to scale.

Its product roadmap (successive GPU generations, systems integration, networking enhancements) moved on predictable cycles. Competitors could build chips. Replicating years of integrated R&D cadence was harder.

Speed, in this case, was not improvisation. It was execution on a long-prepared pipeline.

Vertical integration across the stack

Nvidia’s approach has consistently extended beyond chips. Its filings describe a full-stack model spanning architecture, processors, systems, interconnect, algorithms, and software.

The acquisition of Mellanox in 2020 strengthened its networking capabilities — a detail that became decisive as AI training clusters scaled. In large-scale model training, networking throughput determines efficiency as much as compute performance.

Owning more of the stack meant controlling more of the bottlenecks.

A culture of calculated risk

Periods of downturn reveal whether strategy is structural or opportunistic. In 2022, Nvidia absorbed inventory pain and acknowledged limited visibility, but it did not abandon its platform investments.

The company moved quickly operationally while preserving its long-term architecture. That combination of preparation and adaptability reflects a culture of fast execution built on deep technical conviction.

From cyclical supplier to structural platform

By 2023, Nvidia was back in the trillion-dollar club. By 2025, it had crossed $4T, and it even briefly surpassed $5T amid peak AI enthusiasm in late 2025.

What changed was the center of gravity of Nvidia’s business. In just two years, quarterly data center revenue grew from $3.62B to $35.6B, and by early 2025, that segment had become the company’s dominant source of earnings.

Market-share estimates tell the same story from another angle. Reuters has reported that Nvidia controls about 80% of the high-end AI chip market, and IDC’s “AI Accelerator Vendor Share” analysis puts the figure at 85.2%.

This dominance isn’t just about faster chips. It reflects switching costs embedded in software, developer workflows, and production ecosystems built over years. 

Nvidia didn’t merely benefit from the AI wave. It became the platform the wave runs on.

Compounding beats timing

The most important insight is counterintuitive: success rarely comes from a single brilliant decision. It comes from decades of investment in capabilities that look unnecessary in the moment.

When Nvidia invested heavily in CUDA, few predicted trillion-dollar AI infrastructure markets. When it integrated across hardware and networking, the demand for AI “factories” was not obvious.

But when the environment shifted, those “excess” investments turned into inevitability.

Three strategic lessons stand out:

  1. Invest in the future before it is obvious. The most valuable capabilities often look excessive during stable periods.

  2. Control the bottlenecks before they bind. Full-stack thinking reduces the risk that a single constraint caps upside when demand spikes.

  3. Build ecosystems, not products. Products compete on features. Ecosystems compete on lock-in, developer capital, and integration depth.

Nvidia did not predict the exact timing of generative AI’s explosion. It built the conditions to benefit from it. And when the world changed, readiness compounded into dominance.

If you want more breakdowns on how infrastructure advantages are built and defended, follow Mezen on X — we announce every new article and share ongoing strategic research there.
