
Legislators who helped shape the European Union's AI Act are concerned that the 27-member bloc is considering watering down aspects of its artificial intelligence rules in the face of pressure from American technology companies and the Trump administration.
The AI Act was approved more than a year ago, but its rules for general-purpose AI models, such as OpenAI's GPT-4o, will not come into effect until August. Ahead of that deadline, the European Commission (the EU's executive arm) tasked its new AI Office with preparing a code of practice for major AI companies, spelling out exactly how they will need to comply with the legislation.
But now a group of European lawmakers, who helped refine the law's language as it moved through the legislative process, is expressing concern that the AI Office will dilute the impact of the EU AI Act in ways that are "dangerous, undemocratic." Prominent AI vendors have recently pushed back against parts of the EU AI Act, and the lawmakers also suggest the Commission may be looking to curry favor with the Trump administration, which has already made clear that it views the AI Act as anti-innovation and anti-American.
The EU lawmakers say that the third draft of the code, which the AI Office published earlier this month, takes obligations that are mandatory under the AI Act and inaccurately presents them as "entirely voluntary." These obligations include testing models to see whether they could enable things like widespread discrimination and the spread of disinformation.
In a letter sent Tuesday to the European Commission's tech chief, Executive Vice President Henna Virkkunen, first reported by the Financial Times but published in full for the first time below, the current and former lawmakers said that making these model tests voluntary could allow AI providers who "adopt more extreme political positions" to distort European elections, restrict freedom of information, and disrupt the EU economy.
"In the current geopolitical situation, it is more important than ever that the EU rises to the challenge and stands strong on fundamental rights and democracy," they wrote.
Brando Benifei, who was one of the European Parliament's lead negotiators on the AI Act's text and the first signatory of this week's letter, told Fortune on Wednesday that the political climate may have something to do with the watering-down of the code of practice. The second Trump administration is hostile toward European tech regulation; Vice President JD Vance warned in a fiery speech at the Paris AI summit in February that "tightening the screws on American tech companies" would be a "terrible mistake" for European countries.
"I think there is pressure coming from the United States, but it would be very naive to think that going in this direction will make the Trump administration happy, because it will never be enough," said Benifei, who currently chairs the European Parliament's delegation for relations with the United States.
Benifei said that he and other former AI Act negotiators met on Tuesday with experts from the Commission's AI Office, which drafted the code of practice. On the basis of that meeting, he was optimistic that the offending changes could be walked back before the code is finalized.
"I think the issues we have raised have been considered, and so there is room for improvement," he said. "We will see that in the coming weeks."
Virkkunen had not responded to the letter, nor to Benifei's comment about U.S. pressure, at the time of publication. However, she has previously insisted that the EU's tech rules are applied fairly and consistently to companies from any country. Competition Commissioner Teresa Ribera has also maintained that the EU "cannot compromise on human rights (or) democracy and values" to placate the United States.
Changing obligations
The key part of the AI Act here is Article 55, which places significant obligations on providers of general-purpose AI models that come with "systemic risk," a term the law defines as meaning the model could have a major impact on the EU economy or have "actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale."
The law says a model can be presumed to have systemic risk if the computing power used in its training, "measured in floating point operations (FLOPs)," is greater than 10²⁵. This likely includes many of today's most powerful AI models, although the European Commission can also designate any general-purpose model as posing systemic risk if its scientific advisers recommend it.
Under the law, the providers of these models must evaluate them "with a view to identifying and mitigating" any systemic risks. This evaluation has to include adversarial testing, in other words, trying to get the model to do bad things in order to figure out what needs to be guarded against. The providers must then tell the Commission's AI Office about the evaluation and what it found.
This is where the third version of the draft code of practice becomes a problem.
The first version of the code made clear that AI companies needed to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of their threat to democratic values and their potential for election interference. The second version no longer specifically mentioned disinformation or misinformation, but it still said that "large-scale manipulation with risks to fundamental rights or democratic values," such as election interference, was a systemic risk.
Both the first and second versions were also clear that model providers should consider the possibility of large-scale discrimination as a systemic risk.
But the third version lists risks to democratic processes, and to fundamental European rights such as non-discrimination, only "for potential consideration in the selection of systemic risks." The official summary of changes in the third draft confirms that these are now "additional risks that providers may choose to consider in the future."
In this week's letter, the lawmakers, who negotiated the final text of the law with the Commission, insisted that "this was never the intention" of the agreement they struck.
"Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate," the letter said. "It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed upon, through a code of practice."
This story was originally featured on Fortune.com