OpenAI launches a pair of AI reasoning models, o3 and o4-mini



OpenAI announced on Wednesday the launch of o3 and o4-mini, new AI reasoning models designed to pause and work through questions before responding.

The company calls o3 its most advanced reasoning model yet, outperforming its previous models on tests measuring math, coding, reasoning, science, and visual understanding. Meanwhile, o4-mini offers what OpenAI says is a competitive trade-off between price, speed, and performance, factors developers often weigh when choosing an AI model to power their applications.

Unlike OpenAI's previous reasoning models, o3 and o4-mini can generate responses using tools in ChatGPT such as web browsing, Python code execution, image processing, and image generation. Starting today, the models, along with a variant of o4-mini called "o4-mini-high" that spends more time crafting answers to improve their reliability, are available to subscribers of OpenAI's Pro, Plus, and Team plans.

The new models are part of OpenAI's effort to beat out Google, Meta, xAI, Anthropic, and DeepSeek in the cutthroat global AI race. While OpenAI was first to release an AI reasoning model, o1, competitors quickly followed with versions of their own that match or exceed the performance of OpenAI's lineup. In fact, reasoning models have begun to dominate the field as AI labs look to squeeze more performance out of their systems.

o3 nearly didn't launch in ChatGPT. OpenAI CEO Sam Altman signaled in February that the company intended to devote more resources to a sophisticated alternative that incorporated o3's technology. But competitive pressure seemingly pushed OpenAI to reverse course in the end.

OpenAI says o3 achieves state-of-the-art performance on SWE-bench Verified (without custom scaffolding), a test measuring coding abilities, scoring 69.1%. The o4-mini model achieves similar performance, scoring 68.1%. OpenAI's previous best model, o3-mini, scored 49.3% on the test, while Claude 3.7 Sonnet scored 62.3%.

OpenAI claims that o3 and o4-mini are its first models that can "think with images." In practice, users can upload images to ChatGPT, such as whiteboard sketches or diagrams from PDFs, and the models will analyze the images during their "chain-of-thought" phase before answering. Thanks to this newfound ability, o3 and o4-mini can understand blurry and low-quality images, and can perform tasks such as zooming in on or rotating images as they reason.
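For developers who want to try the image capability once the models are reachable over the API, a request could attach an image in the same way as with OpenAI's other vision-capable chat models. The sketch below is a rough illustration using the official OpenAI Python SDK; it assumes the new models accept the same multimodal message format, the "o3" model identifier is taken from this article rather than API documentation, and the image URL is a placeholder.

```python
# Hypothetical sketch: sending an image to o3 via the Chat Completions API.
# Assumes the o-series models accept the same multimodal message format as
# OpenAI's other vision-capable models; model name and URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this whiteboard sketch describe?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/whiteboard.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```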

Beyond image processing, o3 and o4-mini can run and execute Python code directly in the browser via ChatGPT's Canvas feature, and can search the web when asked about current events.

In addition to ChatGPT, all three models, o3, o4-mini, and o4-mini-high, will be available through OpenAI's developer-facing endpoints, the Chat Completions API and the Responses API, allowing engineers to build applications with the company's models at usage-based rates.
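For illustration, a minimal call to one of the new models through the Responses API might look like the sketch below, built on the official OpenAI Python SDK. The model identifier string "o4-mini" is taken from the article and may differ from what the API actually exposes.

```python
# Minimal sketch of calling one of the new reasoning models through
# OpenAI's developer-facing Responses API using the official Python SDK.
# The "o4-mini" identifier comes from the article, not API documentation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.responses.create(
    model="o4-mini",
    input="Walk through a proof that the square root of 2 is irrational.",
)

print(response.output_text)
```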

OpenAI is charging a relatively low price for o3 given its improved performance: $10 per million input tokens (roughly 750,000 words, longer than The Lord of the Rings series) and $40 per million output tokens. For o4-mini, OpenAI is charging the same as it did for o3-mini: $1.10 per million input tokens and $4.40 per million output tokens.
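To put those figures in perspective, here is a small back-of-the-envelope calculation using only the per-token prices quoted above; the token counts in the example are illustrative, not real usage data.

```python
# Rough cost estimate based on the per-million-token prices quoted in the
# article: $10 / $40 for o3, $1.10 / $4.40 for o4-mini (input / output).

PRICES_PER_MILLION = {
    "o3": {"input": 10.00, "output": 40.00},
    "o4-mini": {"input": 1.10, "output": 4.40},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single request."""
    p = PRICES_PER_MILLION[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 5,000 input tokens and 1,000 output tokens.
for model in PRICES_PER_MILLION:
    print(f"{model}: ${estimate_cost(model, 5_000, 1_000):.4f}")
# o3 works out to about $0.09, o4-mini to about $0.01 for this request.
```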

In the coming weeks, OpenAI says it plans to release o3-pro, a version of o3 that uses more computing resources to produce its answers, exclusively for ChatGPT Pro subscribers.

OpenAI CEO Sam Altman has indicated that o3 and o4-mini may be the last standalone AI reasoning models in ChatGPT before GPT-5, a model the company has said will unify traditional models such as GPT-4.1 with its reasoning models.


