Researchers say they have discovered a new way to "scale" AI, but there is reason to be skeptical


By [email protected]


Have researchers discovered a new AI "scaling law"? That's what some buzz on social media suggests, but experts are skeptical.

AI scaling laws, a somewhat informal concept, describe how AI models improve as the size of the datasets and the computing resources used to train them increase. Until roughly a year ago, scaling up "pre-training" (training ever-larger models on ever-larger datasets) was by far the dominant law, at least in the sense that most frontier AI labs embraced it.

Pre-training hasn't gone away, but two additional scaling laws, post-training scaling and test-time scaling, have emerged to complement it. Post-training scaling is essentially tuning a model's behavior, while test-time scaling entails applying more computing to inference, that is, to running models, in order to drive a form of "reasoning" (see: models like R1).

Google and UC Berkeley researchers recently proposed in a paper what some commentators online have described as a fourth law: "inference-time search."

Inference-time search has a model generate many possible answers to a query in parallel, then select the "best" of the bunch. The researchers claim it can boost the performance of a year-old model, such as Google's Gemini 1.5 Pro, to a level that surpasses OpenAI's o1-preview "reasoning" model on science and math benchmarks.

"By just randomly sampling 200 responses and self-verifying, Gemini 1.5, an ancient early 2024 model, beats o1-preview and approaches o1," one of the paper's co-authors wrote in a series of posts on X. "The magic is that self-verification naturally becomes easier at scale!"
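The sample-and-verify loop described above can be sketched as follows. This is a toy illustration, not code from the paper: `sample_responses` and `self_verify` are hypothetical stand-ins for real model calls (a real system would sample an LLM and ask it to critique each candidate), and the arithmetic query is chosen only so the sketch is self-contained.

```python
import random

def sample_responses(query: str, n: int, rng: random.Random) -> list[int]:
    # Stand-in for sampling an LLM n times: returns n noisy candidate
    # answers to "What is 17 * 23?", often slightly off.
    true_answer = 17 * 23
    return [true_answer + rng.choice([-3, -1, 0, 0, 2]) for _ in range(n)]

def self_verify(query: str, answer: int) -> float:
    # Stand-in for self-verification: a real system would prompt the model
    # to score each candidate; here we just check the arithmetic directly.
    return 1.0 if answer == 17 * 23 else 0.0

def inference_time_search(query: str, n: int = 200, seed: int = 0) -> int:
    # Generate many candidates "in parallel", score each one, keep the best.
    rng = random.Random(seed)
    candidates = sample_responses(query, n, rng)
    scored = [(self_verify(query, c), c) for c in candidates]
    return max(scored)[1]  # highest-scoring candidate wins

print(inference_time_search("What is 17 * 23?"))
```

The design point the paper's co-authors emphasize is the second step: with enough samples, picking out a correct answer via verification becomes the easy part, which is why sampling 200 responses can lift an older model past a dedicated reasoning model on checkable tasks.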

Several experts say the results aren't surprising, however, and that inference-time search may not be useful in many scenarios.

Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch that the approach works best when there's a good "evaluation function", in other words, when the best answer to a question can be easily checked. But most queries aren't that cut-and-dried.

"[I]f we can't write code to define what we want, we can't use [inference-time] search," he said. "For something like general language interaction, we can't do this (…) It's generally not a great approach to actually solving most problems."

Mike Cook, a research fellow at King's College London specializing in AI, agreed with Guzdial's assessment, adding that the work highlights the gap between "reasoning" in the AI sense of the word and our own thinking processes.

"[Inference-time search] doesn't 'elevate the reasoning process' of the model," Cook said.

That inference-time search may have limitations is surely unwelcome news to an AI industry eager to scale up model "reasoning" compute-efficiently. As the paper's co-authors note, reasoning models today can rack up thousands of dollars of computing on a single math problem.

It seems that the search for new scaling techniques will continue.




