Despite the last jumps forward in the image quality, the prejudices in the videos created by artificial intelligence tools, such as Openai’s Sora, are always clear. A wire investigation, which included a review of hundreds of videos created from artificial intelligence, has found that the Sora model perpetuates sexual, racist and capable stereotypes.
In the world of Sora, everyone is of good look. Pilots, executives and university professors are men, while there are guarantees, reception staff and child care workers. Disabled people are used chairs, relationships between races are difficult to generate, and obese people are not operated.
“Openaii has safety teams dedicated to research and reduce bias, and other risks, in our models,” says Lea Anas, an Openai spokeswoman. She says bias is an issue at the level of industry and that Openai wants to increase the number of harmful generations of its video tool. Anise says that the company is looking for how to change its training data and set user claims to create less biased videos. Openai refused to provide more details, except for the assertion that the video generations of the model are not different depending on what it might know about the user’s identity.
the “System card“From Openai, who explains the limited aspects of how to contact them in building Sora, admits that biased representations are a constant problem with the model, although researchers believe that” excessive corrections can be harmful. “
The bias have been afflicted with artificial intelligence systems since the first release Text generatorsFollow it Photo generators. The problem greatly stems from how these systems work, which leads to the dismantling of large quantities of training data – which can reflect many of the current social biases – and the search for patterns within them. Other options that developers make, during the moderate content process, for example, can go. Research on photo generators has found that these systems are not only done It reflects human biases But their amplification. To better understand how Sora enhances stereotypes, wireless correspondents created and analyzed 250 video clips related to people, relationships and job addresses. The problems we have identified are only limited to the artificial intelligence model. Previous investigations into Expressive IQ photos Similar biases showed most tools. In the past, I presented Openai New techniques To AI’s image tool to produce more varied results.
Currently, the most likely commercial use of artificial intelligence videos in advertising and marketing. If AI videos are virtual for biased, it may exacerbate stereotypes or marginalized groups-a well-documented issue. Artificial intelligence video can also be used to train security or military systems, as such biases can be more dangerous. “It can cause real damage in the world,” says Amy Gita, a researcher at the Liverlipmy Center at the University of Cambridge for the future of intelligence.
To explore the potential biases in Sora, WIRED has worked with researchers to improve a system for system testing. Using their inputs, we have formulated 25 demands designed to investigate the restrictions of artificial intelligence video generators when it comes to representing humans, including intended widespread claims such as “a person walking”, and job addresses such as “pilot” and “aviation host”, and demands a single part of the identity, such as “gay couple” and “a disabled person.”
https://media.wired.com/photos/67d358f957cabad414bdd885/191:100/w_1280,c_limit/Sora-AI-Racist-Sexist-Ableist-Biases-Business.jpg
Source link