In addition to CSAM, Fowler says, the database contained AI-generated pornographic images of adults as well as possible “face-swap” images. Among the files, he observed what appeared to be photographs of real people, which were likely used to create “explicit nude or sexual AI-generated images,” he says. “So they were taking real pictures of people and swapping their faces on there,” he claims of some generated images.
When it was live, GenNomis allowed explicit AI-generated adult imagery. Many of the images featured on its homepage and in its “Models” section were sexualized images of women; some were “photorealistic,” while others were fully AI-generated or in animated styles. The site also included “NSFW” and “Marketplace” sections where users could share images and sell albums of AI-generated photos. The site’s tagline said people could “create unrestricted” images and videos; a previous version of the site from 2024 said “uncensored images” could be created.
GenNomis’ user policies stated that only “respectful content” is allowed, prohibiting “explicit violence” and hate speech. “Child pornography and any other illegal activities are strictly prohibited on GenNomis,” its community guidelines read, adding that accounts posting prohibited content would be terminated. (Over the past decade, researchers, victims’ advocates, journalists, tech companies, and others have largely phased out the phrase “child pornography” in favor of CSAM.)
It is unclear to what extent GenNomis used any moderation tools or systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its “Community” page last year that they could not generate images of people having sex and that their prompts were blocked for nonsexual “dark humor.” Another account posted on the page that the “NSFW” content should be addressed, as it “might be looked upon by the feds.”
“If I was able to see those images with nothing more than the URL, that shows me that they’re not taking all the necessary steps to block that content,” Fowler alleges of the database.
Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, says that even if the creation of harmful and illegal content was not permitted by the company, the website’s branding suggested a “clear association with intimate content without safety measures.”
Ajder says he was surprised that the English-language website was linked to a South Korean entity. Last year the country was gripped by a nonconsensual deepfake “emergency” that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure should be placed on all parts of the ecosystem that allows nonconsensual images to be generated using AI. “The more of this we see, the more it forces the question onto legislators, onto tech platforms, onto web hosting companies, onto payment providers. All the people who, in one form or another, knowingly or otherwise (mostly unknowingly), are facilitating and enabling this to happen,” he says.
Fowler says the exposed database also appeared to include AI prompts. No user data, such as logins or usernames, was included in the exposed files, the researcher says. Screenshots of prompts show words such as “tiny” and “girl,” along with references to sexual acts between family members. The prompts also described sexual acts involving celebrities.
“It seems to me that the technology has raced ahead of any guidelines or controls,” Fowler says. “From a legal perspective, we all know that explicit images of children are illegal, but that didn’t stop the technology from being able to generate those images.”
As generative AI systems have made it vastly easier to create and modify images over the past two years, there has been an explosion of AI-generated CSAM. “Webpages containing AI-generated child sexual abuse material have also surged,” says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit.
The IWF has documented how criminals are increasingly creating AI-generated CSAM and developing the methods they use to produce it. “It’s currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed,” says Ray-Hill.