
Nvidia’s Cloud Play: A Growing Challenge to Amazon, Microsoft, and Google

As Nvidia pushes deeper into cloud services, Amazon, Microsoft, and Google are navigating a more complicated relationship.

As Nvidia pushes deeper into cloud services, Amazon, Microsoft, and Google are navigating a more complicated relationship, with their longtime chip supplier now becoming a potential competitor. DGX Cloud, which Nvidia promotes as “your AI factory in the cloud,” signals the company’s intent to offer not just infrastructure—but a full-stack production environment for enterprise AI.


Nvidia Levels Up: From Chipmaker to Cloud Platform

In Wednesday's article “Nvidia Ruffles Tech Giants With Move Into Cloud Computing,” the Wall Street Journal highlights rising tensions in the AI world: Nvidia, which provides the key hardware and software powering most major AI models, is now launching a cloud service that could compete with the very platforms that depend on its technology.


Nvidia’s DGX Cloud, launched in 2023, offers enterprises direct access to high-end GPU infrastructure bundled with software and engineering expertise. What makes it unusual is where that infrastructure comes from.

“Under DGX Cloud’s unusual arrangement,” the WSJ reports, “the cloud giants (Google, Microsoft and Amazon) buy and manage equipment—including Nvidia’s chips—that forms the backbone of the service. Then Nvidia leases back that equipment from them and rents it out to corporate clients.”

In short: Amazon and Microsoft are hosting the infrastructure… and helping Nvidia grow a cloud business that could one day compete with them.

An Nvidia GPU server rack for AI. Photo: Annabelle Chih/Bloomberg News

DGX Cloud Is Scaling Quickly

Though Nvidia doesn’t break out DGX Cloud revenue directly, the company reported that:

“It had $10.9 billion in multiyear cloud service agreements, up from $3.5 billion the year before, in large part to support DGX Cloud.”

UBS analysts estimate that DGX Cloud alone could become a $10+ billion annual business. Meanwhile, Nvidia’s ecosystem is expanding as it backs AI-native cloud providers like CoreWeave and Lambda—extending its influence well beyond DGX.

As the WSJ notes: “CoreWeave… is forecasting around $5 billion of revenue this year.”

(CoreWeave started as a cryptocurrency mining company and pivoted to become a specialized AI-focused cloud service provider. It runs Nvidia GPUs in its cloud, meaning it still depends on Nvidia as a supplier.)


These AI-first players are built specifically for model training and inference at scale—offering a growing alternative to general-purpose hyperscalers.

Separately, Nvidia has announced what it calls the world’s first industrial AI cloud, to be built in Germany, to help Europe’s manufacturers simulate, automate, and optimize at scale.

Why Is Nvidia Moving Into the Cloud? A Platocom Perspective


Nvidia’s move into the cloud isn’t just opportunistic; it’s deeply strategic. Here’s why:


1. Defense Against Custom Silicon (Future-Proofing)

Hyperscalers like Amazon, Google, and Microsoft are all developing custom AI chips, aiming to reduce their reliance on Nvidia’s GPUs over time. That poses a direct threat to Nvidia’s core business.

>>> By launching DGX Cloud, Nvidia creates a direct channel to enterprise customers, owning more of the value chain and reducing its dependence on third-party platforms. This isn't just a growth strategy; it’s a long-term hedge against disintermediation.

>>> DGX Cloud is Nvidia’s insurance policy in a world of rising platform risk.


2. Ecosystem Control

Rather than remaining a parts supplier, Nvidia is positioning itself as a full-stack AI platform, shaping standards, workflows, and the economics of AI development.

DGX Cloud allows Nvidia to bundle its hardware with proprietary software frameworks like CUDA and NeMo, plus enterprise-grade support. This deepens customer lock-in, strengthens the developer ecosystem, and gives Nvidia more control over how AI is built and deployed.


  • [Nvidia NeMo is a cloud-native AI framework designed for building, customizing, and deploying large language models (LLMs) at scale, optimized to run on Nvidia’s infrastructure.]


3. AI Workloads Need Custom Infrastructure

Generative AI demands far more than off-the-shelf compute. It requires specialized, high-performance systems: advanced GPUs, high-speed networking, and optimized frameworks.

DGX Cloud provides enterprises with turnkey access to these systems, engineered by Nvidia itself. For many organizations, it’s a faster, more reliable path to deploying advanced AI workloads than waiting for hyperscaler offerings to catch up.


4. Cloud = Recurring, High-Margin Revenue

Cloud platforms are highly profitable. In Amazon’s case, AWS generated over 60% of the company’s operating income last quarter.

>>> By offering cloud-based access to its AI infrastructure, Nvidia shifts from one-time hardware sales to recurring revenue, capturing more value from the AI boom it helped ignite. DGX Cloud gives Nvidia a seat at the cloud revenue table, not just a place in the supply chain.
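The shift described above can be sketched with a toy model. All figures below are hypothetical placeholders chosen for illustration, not Nvidia’s actual pricing or margins:

```python
# Hypothetical illustration: why recurring cloud revenue can exceed a
# one-time hardware sale over a multiyear contract. Numbers are invented.

def one_time_sale(price: float) -> list[float]:
    """Revenue recognized once, in year 0 (sell the hardware outright)."""
    return [price]

def recurring_lease(annual_fee: float, years: int) -> list[float]:
    """Revenue recognized every year of the lease term (rent the capacity)."""
    return [annual_fee] * years

# e.g. one AI system sold outright vs. the same capacity rented for 4 years
hardware = sum(one_time_sale(200_000))
cloud = sum(recurring_lease(90_000, years=4))

print(f"one-time sale total: ${hardware:,.0f}")   # $200,000
print(f"4-year lease total:  ${cloud:,.0f}")      # $360,000
```

Under these made-up numbers, the lease overtakes the one-time sale in its third year, which is the basic logic behind trading hardware margins for recurring cloud revenue.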


Cloud Cooperation or Competition? The Nvidia Relationship at a Crossroads

Nvidia is now one of the cloud providers!

This dynamic—cooperation with underlying tension—is unsustainable long-term.

As the WSJ puts it: “It would be naive to think Nvidia doesn’t have any further designs.”

Cloud giants like Amazon, Microsoft, and Google rely heavily on Nvidia’s GPUs to power their AI workloads, yet Nvidia’s DGX Cloud directly competes with their own cloud services. While these providers host and manage Nvidia’s equipment for DGX Cloud, they are effectively helping build a rival platform that could capture their customers.


Some, like Google, have been cautious about fully participating in Nvidia’s cloud ventures, highlighting the fragile nature of the partnership. As Nvidia expands its cloud footprint and invests in AI-native cloud startups, the cloud incumbents face a growing dilemma: continue enabling their supplier’s rise as a competitor, or risk losing their AI edge.


This cooperation-competition balance creates a strategic tension that will be difficult to sustain as both sides vie for leadership in the fast-evolving AI cloud market.


How the Nvidia-Cloud Tension Could Open Doors for TSMC

Platocom Analysis: The growing tension between Nvidia and major cloud providers like Amazon, Microsoft, and Google may create strategic opportunities for Taiwan Semiconductor Manufacturing Company (TSMC) to deepen its role in the AI infrastructure ecosystem. As hyperscalers invest in developing their own custom AI chips to reduce reliance on Nvidia, TSMC’s position as the world’s leading semiconductor foundry makes it a natural and neutral partner.

Cloud providers could leverage closer collaboration with TSMC to accelerate chip development, secure manufacturing capacity, and push innovative architectures, potentially shifting some power away from Nvidia. This dynamic suggests TSMC could become a critical enabler for multiple players in the AI cloud race, helping to balance the competitive landscape and foster a more diverse chip ecosystem.


What This Means for Enterprises and Governments

At Platocom, we’ve had both positive and challenging experiences working with rural counties, municipalities, and economic development authorities. Too often, we encounter decision-makers who lack the infrastructure knowledge, technology training, and future-focused mindset that our engineering teams bring to the table. Despite offering affordable, future-proof telecom solutions to help bridge the digital divide, we have sometimes struggled to gain traction in these local environments, mostly because the people tasked with making decisions do not understand the full scope, or have other motives.

Who at the Local Level Will Make The AI Decision?

As artificial intelligence now accelerates across every layer of infrastructure, we are genuinely curious — and concerned — about who, at the local government level, will be in a position to make these critical decisions. Will there be consistent, nationwide standards and protocols for AI infrastructure and procurement? Or will public servants, who may not yet have the technical resources or training, be forced to make choices in the same fragmented way we have observed with broadband, edge data centers and telecom?


In our view, these questions deserve urgent attention because the stakes are higher than ever. As Nvidia’s move into cloud rewrites the rules of AI infrastructure, we think that local governments might be under growing pressure to choose between platforms, chip providers, and entire ecosystems that are more complex than ever. Without adequate technical knowledge and training, they risk being locked out of the benefits — or worse, locked into costly mistakes.


Platocom's Suggestion to the Federal Government

Platocom suggests that the federal government establish a neutral reporting body where technology companies can confidentially share observations about critical skills or knowledge gaps at the local level. While some may see this as federal overreach, technology is agnostic to geography and politics, and its consequences touch every community.


When a tech provider encounters worrying gaps in technical capacity, there should be a trusted federal agency to receive those signals and coordinate support. A simple, structured, confidential reporting framework could help protect communities from poor technology choices, close the urban-rural skills divide, and ensure that everyone benefits from resilient, future-proof AI and cloud infrastructure.


In short, local governments don’t have to become Silicon Valley overnight — but they do need to lean on shared resources, coordinated standards, and independent expert advisors to protect the public interest.


We remain optimistic. If Nvidia can transform itself into a cloud player, then U.S. government employees can absolutely become technologically savvy enough to tell the difference between smart and risky procurement decisions. But that will require serious, sustained investment in training.



P L A T O C O M



About Us

PLATOCOM is a Digital Infrastructure Company. We focus on the planning and execution of data center migration, data center audits & compliance, data center decommissioning, colocation and cloud hosting.

 

We are driven by a dynamic Public-Private Partnership model and operate in all US states. 

Services

Colocation

Cloud Hosting

Data Center Audits & Compliance

Data Center Deployment

Data Center Decommissioning

Data Center Migration

Broadband

"Be kind, for everyone you meet is fighting a harder battle.”

- Plato

 

Contact Us

© 2024 PLATOCOM LLC. All Rights Reserved. Website is created by Plato's Media.
