Google DeepMind on Wednesday published a comprehensive paper on its safety approach to AGI, roughly defined as AI that can accomplish any task a human can.
AGI is a controversial topic in the AI field, with naysayers suggesting it is little more than a pipe dream. Others, including major AI labs such as Anthropic, warn that it is just around the corner and could result in catastrophic harms if steps are not taken to implement appropriate safeguards.
The 145-page DeepMind document, co-authored by DeepMind co-founder Shane Legg, predicts that AGI could arrive by 2030 and that it may result in what the authors call "severe harm." The paper does not define this concretely, but it gives the example of "existential risks" that "permanently destroy humanity."
"We expect the development of an Exceptional AGI before the end of the current decade," the authors wrote. "An Exceptional AGI is a system that has a capability matching at least the 99th percentile of skilled adults on a wide range of non-physical tasks, including metacognitive tasks such as learning new skills."
Right off the bat, the paper contrasts DeepMind's treatment of AGI risk mitigation with Anthropic's and OpenAI's. Anthropic, it says, places less emphasis on "robust training, monitoring, and security," while OpenAI is overly optimistic about "automating" a form of AI safety research known as alignment research.
The paper also casts doubt on the viability of superintelligent AI, meaning AI that can perform jobs better than any human. (OpenAI recently claimed it is shifting its aim from AGI to superintelligence.) Absent "significant architectural innovation," the DeepMind authors are not convinced that superintelligent systems will emerge soon, if ever.
The paper does find it plausible, however, that current paradigms will enable "recursive AI improvement": a positive feedback loop in which AI conducts its own AI research to create more sophisticated AI systems. This could be incredibly dangerous, the authors assert.
At a high level, the paper proposes and advocates developing techniques to block bad actors' access to hypothetical AGI, to improve understanding of AI systems' actions, and to "harden" the environments in which AI can act. It acknowledges that many of these techniques are nascent and have "open research problems," but it cautions against ignoring the safety challenges that may be on the horizon.
"The transformative nature of AGI has the potential for both incredible benefits as well as severe harms," the authors write. "As a result, to build AGI responsibly, it is critical for frontier AI developers to proactively plan to mitigate severe harms."
Some experts, however, disagree with the paper's premises.
Heidy Khlaaf, chief AI scientist at the AI Now Institute, told TechCrunch that she believes the concept of AGI is too ill-defined to be rigorously evaluated scientifically.
"[Recursive improvement] is the basis for the intelligence singularity arguments," AI researcher Matthew Guzdial told TechCrunch, "but we have never seen any evidence of it working."
Sandra Wachter, a researcher studying technology and regulation at Oxford, says a more realistic concern is AI reinforcing itself with "inaccurate outputs."
"With the proliferation of generative AI outputs online and the gradual replacement of authentic data, models are now learning from their own outputs, which are riddled with mistruths, or hallucinations," she told TechCrunch. "At this point, chatbots are predominantly used for search and truth-finding. That means we are constantly at risk of being fed mistruths and believing them, because they are presented in very convincing ways."
Comprehensive as it may be, DeepMind's paper seems unlikely to settle the debates over just how realistic AGI is, or which areas of AI safety most urgently need attention.