OpenAI signed a new deal to run some of its workloads on Google Cloud. Since 2019, the AI company had used Microsoft Azure as its main provider for model training and deployment. This new partnership shows that OpenAI is changing how it operates, and it’s also a win for Google’s cloud unit.
This deal gives OpenAI access to Google’s infrastructure and its custom-built AI processors, called TPUs (Tensor Processing Units). It also reflects a wider move by tech companies to avoid depending on a single provider for large computing jobs.
The new deal lets OpenAI run workloads on Google Cloud in addition to Azure. One main reason is access to Google’s TPUs — special chips built for running large AI models.
These chips offer strong performance and help speed up training. Having access to this hardware gives OpenAI more flexibility, especially during times when demand might outstrip what one provider can handle.
From 2019 through early 2025, Microsoft Azure had exclusive rights to host OpenAI’s workloads. That changed in January 2025 when the exclusive part of the agreement ended. Microsoft still has the option to host OpenAI’s future work if it wants to, but the deal with Google means OpenAI is no longer relying on just one company. It shows that OpenAI wants a setup that’s more flexible and less dependent on a single provider.
This is important because Microsoft has invested at least $13 billion in OpenAI. Its funding helped accelerate product launches like ChatGPT. For years, the companies were seen as strongly linked, but now OpenAI is clearly working with others as well.
The shift to Google — and to a multi-provider setup — reflects OpenAI’s rapid growth. The company has more users, more revenue, and bigger training jobs than ever before.
By mid-2025, OpenAI said it was making about $10 billion in annual revenue. That’s nearly double what it reported just six months earlier. Its tools now serve over 500 million people per week.
Running AI at that scale uses huge amounts of power and infrastructure. OpenAI CEO Sam Altman noted that an average query consumes about 0.34 watt-hours of energy and a small amount of water. Across hundreds of millions of weekly users, that adds up quickly. Some reports say that AI could use over 300 terawatt-hours of power per year by 2028. Even the world’s largest data centers will need support to meet that kind of demand.
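To put the per-query figure in context, here is a quick back-of-the-envelope calculation. The 0.34 Wh/query value is the one cited above; the daily query volume is a purely hypothetical assumption for illustration, not a reported number.

```python
# Rough estimate of AI query energy use at scale.
WH_PER_QUERY = 0.34              # watt-hours per average query (figure cited above)
QUERIES_PER_DAY = 1_000_000_000  # hypothetical: 1 billion queries per day

daily_wh = WH_PER_QUERY * QUERIES_PER_DAY
daily_mwh = daily_wh / 1e6            # megawatt-hours per day
yearly_gwh = daily_mwh * 365 / 1e3    # gigawatt-hours per year

print(f"{daily_mwh:.0f} MWh/day, about {yearly_gwh:.0f} GWh/year")
# At these assumptions: 340 MWh/day, about 124 GWh/year
```

Even under this modest assumption, the annual total runs to hundreds of gigawatt-hours — still well short of the 300 TWh projections, which assume far heavier workloads than simple queries.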
Until recently, OpenAI mostly ran on Azure. That single-provider model had limits. For one, the company could run into delays during times of high demand. It also meant sharing sensitive operations with a company that is building its own competing AI tools.
Working with multiple vendors reduces risk. It gives OpenAI more control over costs, speed, and availability. This is a growing trend in the AI industry — one we’re now seeing across many major players.
Google Cloud isn’t the only new partner. OpenAI is also using Oracle, and it secured significant computing power from CoreWeave — two other cloud companies with hardware tuned for AI workloads.
It also signed a major deal with SoftBank. The two companies are working on a long-term project to build out more data center capacity. The project is reported to be worth around $500 billion and is meant to prepare for even more growth in global AI use.
This deal gives Google a high-profile new customer. It also sends a signal to the wider AI industry: Google Cloud is open for business — even to rivals.
OpenAI is not Google’s only partner in this area. Google is also working with companies like Anthropic and Apple. By welcoming other AI firms onto its platforms, Google is positioning its cloud service as a neutral provider. It wants to show that it offers reliable, high-performance tools — even if those customers are building competing AI products.
Access to Google’s TPUs was a key part of this deal. These chips are built specifically for AI, and OpenAI’s decision to use them is a public vote of confidence. It’s a clear sign that Google’s hardware is capable of supporting large-scale model training — something other companies will likely take note of.
This deal could push other AI companies to rethink how they use cloud infrastructure. The largest providers — Microsoft, Amazon, and Google — have all invested in custom chips and data centers. But with OpenAI now spreading its workloads more widely, it may become harder for any one provider to lock in exclusive access to major customers.
OpenAI’s move to Google Cloud is part of a wider shift toward using more than one provider. It shows how quickly its needs are growing and how important flexibility has become. For Google, it’s a big win. And for the AI world, it’s a sign that even the biggest players don’t want to rely on just one cloud.
At Revolgy, we help companies figure out how to build infrastructure that supports the tools they actually use — whether that’s AI model development or day-to-day cloud workloads. If you’re sorting through what’s possible with a multi-cloud setup, or how new partners like Google fit in, we’re happy to help make sense of it. Just drop us a message.