Almost all of the big AI news this year has been about how quickly the technology is advancing, the damage it’s causing, and speculation about how soon it will grow past the point where humans can control it. But 2024 also saw governments make significant progress in regulating algorithmic systems. Below is a breakdown of the most important AI-related legislation and regulatory efforts of the past year at the state, federal, and international levels.
State
US state lawmakers took the lead on regulating artificial intelligence in 2024, introducing hundreds of bills. Some had modest goals, such as establishing study committees, while others would have imposed serious civil liability on AI developers should their creations cause catastrophic harm to society. The vast majority of the bills failed to pass, but several states enacted meaningful legislation that could serve as a model for other states or for Congress (assuming Congress ever gets to work again).
As AI slop flooded social media ahead of the election, politicians in both parties got behind anti-deepfake laws. More than 20 states now ban deceptive AI-generated political ads in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, especially images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.
Not surprisingly, given that it’s the tech industry’s backyard, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damage caused by their systems. The bill passed both legislative chambers amid fierce lobbying, but it was ultimately vetoed by Governor Gavin Newsom.
Newsom did, however, sign more than a dozen other bills aimed at less catastrophic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage decisions are fair and equitable. Another requires generative AI developers to provide tools that label content as AI-generated. Two more prohibit distributing an AI-generated likeness of a deceased person without prior consent and stipulate that agreements covering AI-generated likenesses of living people must clearly specify how the content will be used.
Colorado passed a first-of-its-kind law in the US requiring companies that develop and use AI systems to take reasonable steps to ensure the tools aren’t discriminatory. Consumer advocates called the legislation an important baseline. Similar bills are likely to be hotly debated in other states in 2025.
And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law that prohibits any government entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.
Federal
Congress talked a lot about artificial intelligence in 2024, and the House of Representatives ended the year by publishing a 273-page bipartisan report laying out guidelines and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers did very little.
Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden’s 2023 executive order on artificial intelligence. Several regulators, most notably the Federal Trade Commission and the Department of Justice, cracked down on misleading and harmful AI systems.
The agencies’ work to comply with the AI executive order wasn’t particularly sexy or headline-grabbing, but it laid important foundations for governing public and private AI systems in the future. For example, federal agencies went on an AI-talent hiring spree and created standards for responsible model development and harm mitigation.
In a major step toward increasing public understanding of how the government uses AI, the Office of Management and Budget wrangled (most of) its fellow agencies into disclosing critical information about the AI systems they use that may affect people’s rights and safety.
On the enforcement side, the Federal Trade Commission’s Operation AI Comply targeted companies using AI in deceptive ways, such as writing fake reviews or dispensing legal advice, and the agency sanctioned AI weapons-detection company Evolv for making misleading claims about what its product could do. The agency also settled an investigation into facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of racial and gender bias, and it banned pharmacy chain Rite Aid from using facial recognition for five years after an investigation found the company had used the tools to discriminate against shoppers.
Meanwhile, the Justice Department joined state attorneys general in a lawsuit accusing real estate software company RealPage of running a massive price-fixing algorithm that raised rents nationwide. It also won several antitrust lawsuits against Google, including one finding that the company holds a monopoly on internet search, which could significantly shift the balance of power in the burgeoning AI search industry.
International
In August, the EU’s AI Act entered into force. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards for training-data quality and human oversight. It also bans other AI systems outright, such as algorithms that could be used to assign social scores to a country’s population that are then used to deny rights and privileges.
In September, China issued a major AI safety governance framework. Like similar frameworks published by the US National Institute of Standards and Technology, it is non-binding but creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.
One of the most interesting pieces of AI legislation comes from Brazil. In late 2024, the country’s Senate passed a comprehensive AI safety bill. It faces a challenging road ahead, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted works were included in their training data, and creators would have the power to block the use of their work for training AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the materials are used.
Like the EU’s AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.