Eric Schmidt says AI has become dangerously powerful as he develops his AI defense startup

When a technology leader publicly warns about the potential dangers of artificial intelligence — or perhaps "superintelligence" — it's worth remembering that they are often also selling the solution. We have already seen this with OpenAI's Sam Altman, who lobbied Washington on the need for AI safety regulations while simultaneously selling expensive ChatGPT enterprise subscriptions. In essence, these leaders are saying: "AI is so powerful it could be dangerous — just imagine what it could do for your company!"

We have another example of this with Eric Schmidt, the 69-year-old former CEO of Google, who has lately been known for dating women less than half his age and showering them with money to start their own tech investment funds. Schmidt has been making the rounds on weekend news shows warning of the potential, unforeseen dangers posed by artificial intelligence as it advances to the point where, in his words, computers will soon be able to run "on their own, deciding what they want to do," and "everyone will have the equivalent of an encyclopedist in their pocket."

Schmidt made his comments on ABC's This Week. Appearing on the show last Friday, he spoke about how the future of warfare will involve more AI-powered drones, while warning that humans must stay in the loop and maintain "meaningful" control. Drones have become increasingly common in the Ukraine-Russia war, where they are used for surveillance and for dropping explosives without humans needing to get close to the front line.

"The right model — and war is obviously horrific — is for people to be in the back and weapons to be in the front, networked and controlled by artificial intelligence," Schmidt said. "The future of warfare is artificial intelligence, and networked drones of many different types."

Conveniently enough, Schmidt is developing a new company of his own called White Stork, which has supplied Ukraine with drones that use artificial intelligence "in complex and powerful ways."

Although generative AI is deeply flawed and certainly not close to surpassing humans, Schmidt may be right in one sense: artificial intelligence tends to behave in ways its creators do not understand or cannot predict. Social media provides an ideal case study. When algorithms only know how to optimize for maximum engagement and have no regard for ethics, they end up encouraging anti-social behavior, such as extremist posts designed to spark outrage and attract attention. As companies like Google roll out "agent" bots that can navigate a web browser on their own, those agents are likely to behave in unethical or harmful ways nobody anticipated.

Schmidt is also talking up his book in these interviews. In his ABC interview, he says that when AI systems start "self-improving," it might be worth considering unplugging them. But, he continues, "In theory, it's better to have someone have their hand on the plug." Schmidt has poured a lot of money into AI startups while at the same time lobbying Washington on artificial intelligence laws. He certainly hopes the companies he has invested in stick around.
