The new AI calculus: Google's 80% cost edge vs. OpenAI's ecosystem





The relentless pace of AI innovation shows no signs of slowing. In the past two weeks alone, OpenAI dropped its powerful new reasoning models, o3 and o4-mini, alongside the GPT-4.1 series, while Google countered with Gemini 2.5 Flash, iterating rapidly on its groundbreaking Gemini 2.5 Pro released shortly before. For enterprise technical leaders navigating this dizzying landscape, choosing an AI platform requires looking well beyond the latest benchmark scores.

While model benchmarks grab the headlines, the real decision for technical leaders runs far deeper. Choosing an AI platform is a commitment to an ecosystem, one that affects everything from core compute costs and agent-development strategy to model reliability and enterprise integration.

Perhaps the most consequential differentiator, one humming below the surface but with profound long-term implications, lies in the economics of the hardware powering these AI giants. Google wields a massive cost advantage thanks to its custom silicon, potentially running its AI workloads at a fraction of what OpenAI pays for Nvidia's market-dominant (and high-margin) GPUs.

This analysis goes beyond the benchmarks to compare the Google and OpenAI/Microsoft AI ecosystems across the critical factors enterprises should weigh today: the dramatic gap in compute economics, diverging strategies for building AI agents, the crucial trade-offs in model capability and reliability, and the realities of enterprise fit and distribution. It draws on an in-depth video discussion of these tectonic shifts between me and AI developer Sam Witteveen earlier this week.

1. Compute economics: Google's TPU "secret weapon" vs. OpenAI's Nvidia tax

Google's most significant, yet often understated, advantage is its "secret weapon": a decade-long investment in custom Tensor Processing Units (TPUs). OpenAI and the broader market depend heavily on Nvidia's powerful but expensive GPUs (such as the H100 and A100). Google, by contrast, designs and deploys its own TPUs, such as the recently unveiled Ironwood generation, for its core AI workloads, including training and serving the Gemini models.

Why does this matter? It makes for a massive cost difference.

Nvidia's data-center GPUs carry staggering gross margins, estimated by analysts to be in the 80% range for chips like the H100 and the upcoming B100. This means OpenAI (via Microsoft Azure) pays a hefty premium, an effective "Nvidia tax," for its compute power. Google, by manufacturing TPUs in-house, largely sidesteps this markup.

While a GPU may cost Nvidia $3,000-$5,000 to manufacture, hyperscalers like Microsoft (which supplies OpenAI's compute) reportedly pay $20,000-$35,000+ per unit at volume. Industry conversations and analyses suggest Google may be obtaining its AI compute capacity at roughly 20% of the cost incurred by buyers of high-end Nvidia GPUs. Although the exact figures are internal, the implication is a 4x-6x cost-efficiency advantage per unit of compute for Google at the hardware level.
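As a rough back-of-envelope check, the figures above (all of them reported estimates, not confirmed vendor pricing) imply the following arithmetic:

```python
# Back-of-envelope check on the hardware cost gap described above.
# All figures are the reported estimates cited in this article,
# not confirmed vendor pricing.

gpu_build_cost_usd = (3_000, 5_000)     # estimated Nvidia manufacturing cost
gpu_buyer_price_usd = (20_000, 35_000)  # reported hyperscaler purchase price

# Implied gross markup range on a data-center GPU:
markup_low = gpu_buyer_price_usd[0] / gpu_build_cost_usd[1]   # 20k / 5k = 4.0x
markup_high = gpu_buyer_price_usd[1] / gpu_build_cost_usd[0]  # 35k / 3k ~= 11.7x

# If Google's in-house TPUs deliver comparable compute at ~20% of the
# price a GPU buyer pays, the implied per-unit efficiency advantage is:
tpu_cost_fraction = 0.20
efficiency_multiple = 1 / tpu_cost_fraction  # 5x, inside the cited 4x-6x range

print(f"GPU markup: {markup_low:.1f}x-{markup_high:.1f}x")
print(f"Implied TPU cost-efficiency advantage: ~{efficiency_multiple:.0f}x")
```

The 20% cost fraction is the article's reported estimate; the computation simply shows that it is consistent with the 4x-6x advantage the article cites.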

This structural advantage is reflected in API pricing. Comparing the flagship models, OpenAI's o3 is roughly eight times more expensive for input tokens, and more expensive for output tokens, than Google's Gemini 2.5 Pro (at standard context lengths).
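To see how such per-token ratios translate into a monthly bill, here is a minimal sketch. The dollar figures below are illustrative assumptions chosen to match the ratios described above; list prices vary by context length and change frequently, so check the vendors' current pricing pages before relying on them.

```python
# Illustrative per-token API cost comparison. Prices (USD per 1M tokens)
# are assumptions for demonstration only, not authoritative pricing.
o3_input, o3_output = 10.00, 40.00          # assumed o3 rates
gemini_input, gemini_output = 1.25, 10.00   # assumed Gemini 2.5 Pro rates
                                            # (standard context tier)

input_ratio = o3_input / gemini_input       # 8.0x on input tokens
output_ratio = o3_output / gemini_output    # 4.0x on output tokens

def workload_cost(in_tok_m: float, out_tok_m: float,
                  p_in: float, p_out: float) -> float:
    """Cost in USD for a workload measured in millions of tokens."""
    return in_tok_m * p_in + out_tok_m * p_out

# Hypothetical workload: 100M input + 20M output tokens per month.
o3_monthly = workload_cost(100, 20, o3_input, o3_output)
gemini_monthly = workload_cost(100, 20, gemini_input, gemini_output)
print(f"o3: ${o3_monthly:,.0f}/mo  Gemini 2.5 Pro: ${gemini_monthly:,.0f}/mo")
```

Under these assumed rates, the same workload costs several times more on o3, which is why the per-token ratio, not the headline benchmark score, often dominates enterprise TCO discussions.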

This cost differential is not academic; it has profound strategic implications. Google can likely sustain lower prices and offer more "intelligence per dollar," giving enterprises a better long-term total cost of ownership (TCO), which is exactly what it appears to be doing in practice.

OpenAI's costs, meanwhile, are fundamentally tied to Nvidia's pricing power and the terms of its Azure deal. Compute is estimated to account for 55-60% of OpenAI's roughly $9B in 2024 operating expenses, according to some reports, and is projected to exceed 80% in 2025 as the company scales. While OpenAI's projected revenue growth is astronomical, potentially reaching $125 billion by 2029 according to reported internal forecasts, managing this compute spend remains a critical challenge, and it is driving the company's pursuit of custom silicon.

2. Agent frameworks: Google's open ecosystem vs. OpenAI's integrated stack

Beyond hardware, the two giants are pursuing divergent strategies for building and deploying the AI agents poised to automate enterprise workflows.

Google is making a clear push for interoperability and a more open ecosystem. At Cloud Next two weeks ago, it unveiled the Agent-to-Agent (A2A) protocol, designed to let agents built on different platforms communicate, alongside its Agent Development Kit (ADK) and the AgentSpace hub for discovering and managing agents. A2A adoption faces hurdles: key players such as Anthropic have not signed on (VentureBeat reached out to Anthropic on this topic, but the company declined to comment), and some developers debate whether it is needed alongside Anthropic's existing Model Context Protocol (MCP). Still, Google's intent is clear: to foster a multi-vendor agent marketplace, potentially hosted in its Agent Garden or through a rumored agent app store.
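To make the interoperability idea concrete, here is a minimal sketch of what discovering another vendor's agent might look like under an A2A-style protocol. The "agent card" convention (a small JSON document reportedly published at /.well-known/agent.json) and the field names below follow early public descriptions of A2A, but treat the exact schema as an assumption and consult the official specification before building against it.

```python
import json

# Hypothetical A2A-style "agent card": a JSON document an agent publishes
# so agents on other platforms can discover its skills. Field names are
# assumptions based on early public descriptions, not the normative schema.
agent_card_json = """
{
  "name": "inventory-agent",
  "description": "Answers stock-level questions for the retail catalog",
  "url": "https://agents.example.com/inventory",
  "version": "0.1.0",
  "capabilities": {"streaming": true},
  "skills": [
    {"id": "check-stock", "name": "Check stock",
     "description": "Look up on-hand quantity for a SKU"}
  ]
}
"""

def find_skill(card: dict, skill_id: str):
    """Return the advertised skill with the given id, or None."""
    return next((s for s in card.get("skills", []) if s["id"] == skill_id), None)

card = json.loads(agent_card_json)
skill = find_skill(card, "check-stock")
print(skill["name"] if skill else "skill not offered")
```

The point of the design is that a Vertex AI agent, a Salesforce agent, or any third-party agent could publish and consume the same card format, which is what makes a multi-vendor marketplace plausible.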

OpenAI, by contrast, appears focused on creating powerful, tool-using agents tightly integrated within its own stack. The new o3 model exemplifies this, capable of making hundreds of tool calls within a single reasoning chain. Developers leverage the Responses API and Agents SDK, along with tools like the new Codex CLI, to build sophisticated agents within the OpenAI/Azure trust boundary. While frameworks like Microsoft's AutoGen offer some flexibility, OpenAI's core strategy seems less about cross-platform communication and more about maximizing agent capabilities vertically within its controlled environment.

  • Enterprise takeaway: Companies that prioritize flexibility and the ability to mix and match agents from different vendors (for example, plugging a Salesforce agent into Vertex AI) may find Google's open approach appealing. Those deeply invested in the Azure/Microsoft ecosystem, or who prefer a tightly managed, vertically integrated, high-performance agent stack, may lean toward OpenAI.

3. Model capabilities: parity, performance and pain points

The relentless release cycle means model leadership is fleeting. While OpenAI's o3 currently edges out Gemini 2.5 Pro on some coding benchmarks such as SWE-Bench Verified and Aider, Gemini 2.5 Pro matches or leads it on others such as GPQA and AIME, and also holds the overall top spot on the popular LLM Arena leaderboard. For many enterprise use cases, however, the models have reached rough parity in core capabilities.

The true differentiation lies in their distinct trade-offs:

  • Context vs. reasoning depth: Gemini 2.5 Pro boasts a massive one-million-token context window (with 2M planned), ideal for processing entire codebases or large document sets. OpenAI's o3 offers a 200K window but emphasizes deep, tool-assisted reasoning within a single turn, a capability enabled by its reinforcement-learning approach.
  • Reliability vs. risk: This is emerging as a crucial differentiator. While o3 displays impressive reasoning, OpenAI's own model card for o3 revealed that it hallucinates significantly more (twice the rate of o1 on the PersonQA benchmark). Some analyses suggest this may stem from its complex reasoning and tool-use mechanisms. Gemini 2.5 Pro, even if sometimes seen as less novel in how it structures its output, is frequently described by users as more dependable and predictable for enterprise tasks. Enterprises must weigh o3's cutting-edge capabilities against this documented increase in hallucination risk.
  • Enterprise takeaway: The "best" model depends on the task. For analyzing vast amounts of context or for workloads that prioritize predictable outputs, Gemini 2.5 Pro holds the edge. For tasks demanding the deepest multi-tool reasoning, where hallucination risk can be carefully managed, o3 is a strong contender. As Sam Witteveen noted in our in-depth podcast on this, rigorous testing within your specific enterprise use cases is essential.
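One practical consequence of the context-window gap discussed above: before routing a large document set or codebase to a model, it is worth estimating whether it fits at all. The sketch below uses the common rough heuristic of about four characters per token; this is an approximation (real token counts vary by model and content), so use the model's actual tokenizer for production decisions.

```python
# Rough context-window fit check. The chars-per-token figure is a common
# approximation for English text and code, not an exact tokenizer count.
CHARS_PER_TOKEN = 4

# Window sizes as described in this article (tokens).
WINDOWS = {
    "gemini-2.5-pro": 1_000_000,  # 1M, with 2M reportedly planned
    "o3": 200_000,                # 200K
}

def estimated_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def models_that_fit(text: str, reserve_for_output: int = 8_000):
    """Models whose window can hold the input plus a reserved output budget."""
    need = estimated_tokens(text) + reserve_for_output
    return [m for m, window in WINDOWS.items() if need <= window]

# Example: a ~2MB codebase dump (~500K estimated tokens) only fits
# the 1M-token window.
big_input = "x" * 2_000_000
print(models_that_fit(big_input))
```

A check like this is also where the reliability trade-off shows up in practice: a task that fits both windows can still be routed by other criteria, such as hallucination tolerance or tool-use depth.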

4. Enterprise fit and distribution: integration depth vs. market reach

Ultimately, adoption often hinges on how easily a platform slots into an enterprise's existing infrastructure and workflows.

Google's strength lies in deep integration for existing Google Cloud and Workspace customers. Gemini models, Vertex AI, AgentSpace and tools like BigQuery are designed to work together seamlessly, offering a unified control plane, data governance, and potentially faster time-to-value for companies already invested in Google's ecosystem. Google is courting large enterprises aggressively, showcasing deployments with companies such as Wendy's, Wayfair and Wells Fargo.

OpenAI, via Microsoft, boasts unmatched market reach and accessibility. ChatGPT's enormous user base (roughly 800 million monthly active users) creates broad familiarity. More importantly, Microsoft is embedding OpenAI models (including the latest o-series) into Microsoft 365 Copilot and Azure services everywhere, putting powerful AI capabilities within easy reach of hundreds of millions of enterprise users, often inside tools they already use daily. For organizations standardized on Azure and Microsoft 365, adopting OpenAI can feel like a natural extension. Moreover, developers' heavy use of OpenAI's APIs means many enterprise prompts and workflows are already tuned for OpenAI models.

  • Strategic decision: The choice often comes down to existing vendor relationships. Google offers a compelling, integrated story for its current customers. OpenAI, backed by Microsoft's distribution engine, offers broad accessibility and a potentially easier on-ramp for the vast number of Microsoft-centric enterprises.

Google vs. OpenAI/Microsoft: the trade-offs for enterprises

The AI platform contest between Google and OpenAI/Microsoft has moved well beyond simple model comparisons. While both offer state-of-the-art capabilities, they represent different strategic bets and present distinct advantages and trade-offs for the enterprise.

Enterprises must weigh the two camps' differing approaches to agent frameworks, the precise trade-offs between model capabilities such as context length and advanced reasoning, and the practicalities of enterprise integration and distribution.

Looming over all of these factors, however, is the stark reality of compute economics, which emerges as perhaps the most significant and durable long-term differentiator, especially if OpenAI cannot address it quickly. Google's vertically integrated TPU strategy, which lets it sidestep the roughly 80% margin "Nvidia tax" embedded in the GPU pricing that burdens OpenAI, represents a fundamental economic advantage, and potentially a game-changing one.

This is more than a minor price difference. It affects everything from API affordability and long-term TCO predictability to the sheer scalability of AI deployments. As AI workloads grow exponentially, the platform with the more sustainable economic engine, fueled by hardware cost efficiency, holds a powerful strategic edge. Google is pressing that advantage while also pushing an open vision for agent interoperability.

OpenAI, backed by Microsoft's scale, counters with deeply integrated, tool-using models and unmatched market reach, though questions remain about its cost structure and model reliability.

To make the right call, enterprise technical leaders must look past the benchmarks and evaluate these ecosystems on their long-term compute economics, their preferred approach to agent strategy and openness, their tolerance for model reliability risk versus raw reasoning power, and their existing technology stack and application needs.

Watch the full video where Sam Witteveen and I break it all down:

https://www.youtube.com/watch?



