OpenAI research lead Noam Brown thinks some AI “reasoning” models could have arrived decades ago

Noam Brown, who leads AI reasoning research at OpenAI, says certain forms of “reasoning” AI models could have arrived 20 years earlier had researchers known the right approach and algorithms.

Brown spoke during a panel at Nvidia’s GTC conference in San Jose on Wednesday. “I noticed throughout my research that, OK, there’s something missing. Humans spend a lot of time thinking before they act in a difficult situation. Maybe this would be very useful [in AI].”

Brown was referring to his work on game-playing AI at Carnegie Mellon University, including Pluribus, which defeated elite human professionals at poker. The AI Brown helped create was unique at the time because it “reasoned” through problems rather than attempting a more brute-force approach.

Brown is one of the architects behind o1, an OpenAI model that uses a technique called test-time inference to “think” before responding to queries. Test-time inference entails applying additional computing while running a model to drive a form of “reasoning.” In general, so-called reasoning models are more accurate and reliable than traditional models, particularly in domains like mathematics and science.
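OpenAI has not published o1’s exact mechanism, but one simple, well-known flavor of test-time compute is self-consistency voting: sample many candidate answers from a stochastic solver and return the most common one. The sketch below is purely illustrative; `noisy_solver` is a hypothetical stand-in for a model, not anything from OpenAI.

```python
import random
from collections import Counter

def noisy_solver(question: str, rng: random.Random) -> int:
    # Hypothetical stand-in for a stochastic model: it returns the
    # correct answer (42) 60% of the time and a random wrong answer
    # otherwise.
    return 42 if rng.random() < 0.6 else rng.randrange(100)

def answer_with_test_time_compute(question: str, samples: int = 101, seed: int = 0) -> int:
    # Spend extra compute at inference time: draw many candidate
    # answers and return the majority vote (self-consistency).
    rng = random.Random(seed)
    votes = Counter(noisy_solver(question, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(answer_with_test_time_compute("What is 6 x 7?"))  # majority vote recovers 42
```

The point of the sketch is only that accuracy can be bought with more computing at inference time rather than with a bigger model, which is the trade-off test-time inference makes.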

Brown was asked during the panel whether academia could ever hope to run experiments on the scale of AI labs like OpenAI, given institutions’ general lack of access to computing resources. He admitted that it has become harder in recent years as models have become more computing-intensive, but said academics can still have an impact by exploring areas that require less compute, such as model architecture design.

“[T]here is an opportunity to collaborate between the frontier labs [and academia],” Brown said. “Certainly, the frontier labs are looking at academic publications and thinking carefully about, OK, does this make a convincing argument that, if this were scaled up further, it would be very effective. If there is a compelling argument from the paper, you know, we will investigate that in these labs.”

Brown’s comments come at a time when the Trump administration is making deep cuts to scientific grant-making. AI experts, including Geoffrey Hinton, have criticized these cuts, saying they may threaten AI research efforts both domestically and abroad.

Brown called out AI benchmarking as an area where academia could have a significant impact. “The state of benchmarks in AI is really bad, and that doesn’t require a lot of compute to do,” he said.

As we’ve written before, popular AI benchmarks today tend to test for esoteric knowledge, and give scores that correlate poorly with proficiency on the tasks most people care about. That has led to widespread confusion about models’ capabilities and improvements.

Updated 4:06 p.m. Pacific: A previous version of this piece implied that Brown was referring to reasoning models like o1 in his initial remarks. In fact, he was referring to his work on game-playing AI prior to his time at OpenAI. We regret the error.


