OpenAI CEO Sam Altman expects AGI, or artificial general intelligence – AI that outperforms humans at most tasks – around 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has claimed that he "loses sleep over the threat of AI danger." Such predictions are wrong. As the limitations of today's AI become increasingly evident, most AI researchers have come around to the view that simply building bigger and more powerful chatbots will not lead to AGI.
However, in 2025, AI will still pose a huge risk: not from superintelligent AI, but from human misuse.
Some of this is inadvertent misuse, such as lawyers over-relying on AI. After the launch of ChatGPT, for example, a number of lawyers were sanctioned for using AI to generate erroneous court briefs, apparently unaware of chatbots' tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay costs for opposing counsel after she included fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for using fictitious court cases generated with ChatGPT and blaming a "legal intern" for the errors. The list is growing quickly.
Other misuse is intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's "Designer" AI tool. While the company had guardrails to avoid generating images of real people, a misspelling of Swift's name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is just the tip of the iceberg, and non-consensual deepfakes are spreading widely, in part because open-source tools for creating them are publicly available. Legislation under way around the world seeks to combat deepfakes in the hope of limiting the harm. Whether it will be effective remains to be seen.
In 2025, it will become even harder to distinguish what is real from what is made up. AI-generated audio, text, and images are already remarkably convincing, and video will be next. This could lead to the "liar's dividend": those in positions of power repudiating evidence of their misbehavior by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla Autopilot, leading to an accident. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio in at least one of the clips was verified as authentic by a news outlet). Two defendants in the January 6 riots claimed that the videos in which they appeared were deepfakes. Both were found guilty.
Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products under the label of "AI." This can go badly wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts job candidates' suitability from video interviews, but a study found that the system can be tricked simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.
There are also now dozens of applications in healthcare, education, finance, criminal justice, and insurance where AI is being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify people who committed child care benefits fraud. It wrongly accused thousands of parents, often demanding repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.
In 2025, we expect AI risks to arise not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); where it works well and is misused (non-consensual deepfakes and the liar's dividend); and where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a huge task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi scares.