Anthropic CEO wants to open the black box of AI models by 2027

By [email protected]


Anthropic CEO Dario Amodei published an essay on Thursday highlighting how little researchers understand about the inner workings of the world's leading AI models. To address this, Amodei set an ambitious goal for Anthropic: to reliably detect most AI model problems by 2027.

Amodei acknowledges the challenge ahead. In "The Urgency of Interpretability," the CEO says Anthropic has made early breakthroughs in tracing how models arrive at their answers, but stresses that far more research is needed to decode these systems as they grow more powerful.

"I am very concerned about deploying such systems without a better handle on interpretability," Amodei wrote in the essay. "These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work."

Anthropic is one of the pioneering companies in mechanistic interpretability, a field that aims to open the black box of AI models and understand why they make the decisions they do. Despite the rapid performance improvements of the tech industry's AI models, we still know relatively little about how these systems arrive at decisions.

For example, OpenAI recently launched new reasoning models, o3 and o4-mini, that perform better on some tasks but also hallucinate more than its other models. The company doesn't know why that happens.

"When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does, why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," Amodei wrote in the essay.

Anthropic co-founder Chris Olah says that AI models are "grown more than they are built," Amodei notes in the essay. In other words, AI researchers have found ways to improve AI model intelligence, but they don't entirely know why it works.

In the essay, Amodei says it could be dangerous to reach AGI, or as he calls it, "a country of geniuses in a data center," without understanding how these models work. In a previous essay, Amodei claimed the tech industry could reach such a milestone by 2026 or 2027, but he believes we're much further out from fully understanding these AI models.

In the long term, Amodei says Anthropic would like to, essentially, conduct "brain scans" or "MRIs" of state-of-the-art AI models. These checkups would help identify a wide range of issues in AI models, including their tendencies to lie, seek power, or other weaknesses, he says. This could take five to ten years to achieve, but these measures will be necessary to test and deploy Anthropic's future AI models, he added.

Anthropic has made a few research breakthroughs that have allowed it to better understand how its AI models work. For example, the company recently found ways to trace an AI model's thinking pathways through what it calls circuits. Anthropic identified one circuit that helps AI models understand which U.S. cities are located in which U.S. states. The company has only found a few of these circuits so far but estimates there are millions within AI models.

Anthropic recently made its first investment in a startup working on interpretability. In the essay, Amodei called on OpenAI and Google DeepMind to increase their research efforts in the field.

Amodei calls on governments to impose "light-touch" regulations to encourage interpretability research, such as requirements for companies to disclose their safety and security practices. In the essay, Amodei also says the U.S. should put export controls on chips to China, in order to limit the likelihood of an out-of-control global AI race.

Anthropic has always stood out from OpenAI and Google for its focus on safety. While other tech companies pushed back against California's controversial AI safety bill, SB 1047, Anthropic issued modest support and recommendations for the bill, which would have set safety reporting standards for frontier AI model developers.

In this case, Anthropic appears to be pushing for an industry-wide effort to better understand AI models, not just to increase their capabilities.


