In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, argues that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.
The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark last year, he acknowledged the need for a more comprehensive assessment of AI risks to inform legislators.
In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio, as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may require laws that force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also calls for increased standards around third-party evaluations of these metrics and corporate policies, as well as expanded whistleblower protections for AI company employees and contractors.
Li and her co-authors write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. They also argue, however, that AI policy should not only address current risks but also anticipate future consequences that might occur without sufficient safeguards.
"For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could cause extensive harm," the report states. "If those who speculate about the most extreme risks are right – and we are uncertain if they will be – then the stakes and costs for inaction on frontier AI at this moment are extremely high."
The report recommends a two-pronged strategy to increase transparency into AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit their testing claims for third-party verification.
While the report, whose final version is due in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on the "urgent conversations around AI governance we began in the legislature [in 2024]."
The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taken more broadly, it seems to be a much-needed win for AI safety advocates, whose agenda lost ground last year.