OpenAI may “adjust” its safeguards if rivals release “high-risk” AI



In an update to its Preparedness Framework, the internal framework OpenAI uses to decide whether its AI models are safe and what safeguards, if any, are needed during development and release, OpenAI said it may “adjust” its requirements if a rival AI lab releases a “high-risk” system without comparable safeguards.

The change reflects the mounting competitive pressure on commercial AI developers to deploy models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases, and of failing to deliver timely reports detailing its safety testing.

Perhaps anticipating criticism, OpenAI says it would not make these policy adjustments lightly, and that it would keep its safeguards at “a level more protective.”

“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” OpenAI wrote in a blog post published Tuesday afternoon. “However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.”

The refreshed Preparedness Framework also makes clear that OpenAI is relying more heavily on automated evaluations to speed up product development. The company says that while it has not abandoned human-led testing entirely, it has built “a growing suite of automated evaluations” that can supposedly “keep up with [a] faster [release] cadence.”

According to the Financial Times, OpenAI gave testers less than a week for safety checks on an upcoming major model, a compressed timeline compared with previous releases. The publication’s sources also alleged that many of OpenAI’s safety tests are now conducted on earlier versions of models rather than on the versions released to the public.

Other changes to OpenAI’s framework pertain to how the company categorizes models according to risk, including models that can conceal their capabilities, evade safeguards, prevent their own shutdown, and even self-replicate. OpenAI says it will now focus on whether models meet one of two thresholds: “high” capability or “critical” capability.

OpenAI defines the former as a model that could “amplify existing pathways to severe harm.” The latter are models that “introduce unprecedented new pathways to severe harm,” according to the company.

“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” OpenAI wrote in the blog post. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”

The changes are the first OpenAI has made to the Preparedness Framework since 2023.


