OpenAI said it will stop evaluating its artificial intelligence models prior to release for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.
The company said it will now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people use the models after release for signs of violations.
OpenAI also said it will consider releasing AI models it has judged to be "high risk" as long as it has taken appropriate steps to reduce those risks, and that it will even consider releasing a model that poses what it calls "critical risk" if a rival AI lab has already released a similar model. Previously, OpenAI had said it would not release any AI model that posed more than a "medium risk."
The policy changes were laid out in an update to OpenAI's "Preparedness Framework" yesterday. That framework details how the company monitors the AI models it builds for potentially catastrophic risks, everything from the possibility that a model could help someone create a biological weapon, to its ability to assist hackers, to the possibility that models could escape human control.
The policy changes have divided AI safety and security experts. Several took to social media to praise OpenAI for voluntarily releasing the updated framework, pointing to improvements such as clearer risk categories and a stronger focus on emerging threats like autonomous replication and the undermining of safeguards.
Others expressed concern, however, including Steven Adler, a former OpenAI safety researcher, who criticized the fact that the updated framework no longer requires safety tests of fine-tuned models. "OpenAI is quietly reducing its safety commitments," he wrote on X. Still, he credited OpenAI's efforts: "I'm overall happy to see the Preparedness Framework updated," he said. "This was likely a lot of work, and it wasn't strictly required."
Some critics highlighted the removal of persuasion from the risks the Preparedness Framework addresses.
"It seems OpenAI is shifting its approach," said Shyam Krishna, a research leader in AI policy and governance at RAND Europe. "Instead of treating persuasion as a core risk category, it may now be addressed either as a societal and regulatory issue or as something built into OpenAI's existing guidelines on model development and use." He added that it remains to be seen how this will play out in areas such as politics, where AI's persuasive capabilities are still "a contested issue."
Courtney Radsch, a senior fellow at Brookings, the Centre for International Governance Innovation, and the Center for Democracy and Technology who works on AI ethics, called the framework, in a message to Fortune, another example of the technology sector's hubris. She emphasized that the decision to downgrade "persuasion" ignores context: persuasion may be dangerous, for example, to individuals such as children or people with low AI literacy, or in authoritarian states and societies.
Oren Etzioni, the former CEO of the Allen Institute for AI and founder of TrueMedia, which provides tools to combat deceptive AI-generated content, also expressed concern. "Downplaying deception strikes me as a mistake given the increasing persuasive power of LLMs," he said in an email. "One has to wonder if OpenAI is simply focused on chasing revenues with minimal regard for societal impact."
However, one AI researcher told Fortune that it seems reasonable to address any risks of disinformation or other malicious uses of persuasion through OpenAI's terms of service. The researcher, who asked not to be identified because he is not permitted to speak publicly without authorization from his current employer, added that persuasion and manipulation risks are difficult to evaluate in pre-deployment testing. He also noted that this category of risk is more ambiguous and contested than other critical risks, such as the risk that a model will help someone carry out a chemical or biological weapons attack or assist in a cyberattack.
It is worth noting that some members of the European Parliament have also expressed concern that the latest draft of the proposed code of practice for complying with the EU AI Act downgraded the mandatory testing of AI models for the risk of spreading disinformation and undermining democracy to a voluntary consideration.
Studies have found that AI chatbots can be highly persuasive, although that capability is not in itself necessarily dangerous. Researchers at Cornell University and the Massachusetts Institute of Technology, for example, found that dialogues with chatbots were effective at getting people to question their belief in conspiracy theories.
Another criticism of OpenAI's updated framework focuses on a line in which OpenAI states: "If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements."
Max Tegmark, president of the Future of Life Institute, a nonprofit that seeks to address existential risks, including threats from advanced AI systems, said in a statement to Fortune: "The race to the bottom is accelerating. These companies are openly racing to build artificial general intelligence, AI systems designed to replace humans, despite acknowledging the massive dangers this poses to our workers, our families, our national security, and even our continued existence."
Gary Marcus, a longtime OpenAI critic, said in a LinkedIn message that the line signals a race to the bottom. "What really governs their decisions is competitive pressure, not safety. Little by little, they have been eroding everything they previously promised. And with their proposed new social media platform, they are signaling a shift toward becoming a for-profit surveillance company that sells private data, rather than a nonprofit focused on benefiting humanity."
In general, it is useful for companies like OpenAI to share their thinking about their risk management practices, she told Fortune in an email.
However, she added that she was concerned about shifting goalposts. "It would be a worrying trend if, just as AI systems appear to be approaching certain risks, those very risks get watered down in the guidelines companies set for themselves," she said.
She also criticized the framework's focus on "frontier" models, given that OpenAI and other companies have used technical definitions of that term as an excuse not to publish safety evaluations of recent, powerful models. (For example, OpenAI released its GPT-4.1 model yesterday without a safety report, saying it was not a frontier model.) In other cases, companies have either failed to publish safety reports or have been slow to do so, releasing them months after a model comes out.
"Between these kinds of issues and an emerging pattern among AI developers, where new models are launched before, or entirely without, the documentation the companies themselves promised to issue, it is clear that voluntary commitments only go so far," she said.
Update, April 16: This story was updated to include comments from Future of Life Institute president Max Tegmark.
This story was originally featured on Fortune.com