Startups and academics clash over whether superhuman AI is really imminent




Leaders of the major artificial intelligence companies are growing ever louder in claiming that "strong" computer intelligence will outperform human beings, but many researchers in the field see the claims as marketing.

The belief that human-level or better machine intelligence, often called "artificial general intelligence" (AGI), will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," Sam Altman of OpenAI wrote in a blog post last month. Dario Amodei of Anthropic has said AGI "could come as early as 2026."

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

"We are not going to get to human-level AI by just scaling up LLMs," the large language models behind current systems such as ChatGPT or Claude, Yann LeCun, chief AI scientist at Meta, told Agence France-Presse last month.

LeCun's view appears to be shared by a majority of academics in the field.

More than three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

"Genie out of the bottle"

Some academics believe that many companies' claims, sometimes accompanied by executives' warnings about AGI's dangers for humanity, are a strategy to capture attention.

Companies "have made these massive investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and an AAAI fellow honoured for his achievements in the field.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid, but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf, and then you depend on me.'"

Scepticism among academic researchers is not total, however, with prominent figures such as Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about the dangers of powerful AI.

"It's a bit like 'The Sorcerer's Apprentice': you have something you suddenly can't control any more," Kersting said.

A similar, more recent thought experiment is the "paperclip maximiser."

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn the Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines, having first got rid of the human beings who might hinder its progress by switching it off.

Though not "evil" in this way, the maximiser falls fatally short on what thinkers in the field call "alignment" of AI with human goals and values.

Kersting said he "can understand" such fears, while suggesting that "human intelligence, its diversity and quality, is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from AI that already exists, such as discrimination in cases where it interacts with humans.

“The biggest thing ever”

Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University, suggested the yawning gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they choose a career path.

"If you are very optimistic about how powerful present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resources into trying to make it happen," he said.

Even if Altman and Amodei are "quite optimistic" about rapid timescales and AGI arrives later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else … a chance that aliens would arrive by 2030, or that there'd be another giant pandemic or something, we'd put some time into planning for it."

The challenge may lie in communicating these ideas to politicians and the public.

Talk of super-AI, he said, "instantly creates this sort of immune reaction … it sounds like science fiction."

This story was originally featured on Fortune.com


