If you’re going to call the police on someone because they expressed an interest in child sexual abuse material (CSAM), it’s probably not the best idea to have the same material on your own devices, or to consent to a search so law enforcement can collect more information. But that’s allegedly what an Alaskan man did, and it landed him in police custody.
404 Media reported earlier this week that Anthani O’Connor was arrested after a police search of his devices turned up artificial intelligence-generated child sexual abuse material (CSAM).
From 404 Media:
According to newly filed charging documents, Anthani O’Connor contacted law enforcement in August to alert them to an unidentified pilot who had shared child sexual abuse material (CSAM) with O’Connor. While investigating the crime, and with O’Connor’s consent, federal authorities searched his phone for additional information. A review of the electronics revealed that O’Connor allegedly offered to make virtual reality CSAM for the pilot, according to the criminal complaint.
According to police, the unidentified pilot shared with O’Connor a photo he had taken of a child in a grocery store, and the two discussed how they could put the minor into an explicit virtual reality world.
Law enforcement claims to have found at least six explicit AI-generated CSAM images on O’Connor’s devices, which he said had been intentionally downloaded, along with several “real” images that had been unintentionally mixed in. At O’Connor’s home, law enforcement uncovered a computer as well as several hard drives hidden in a vent; a review of the computer allegedly revealed a 41-second video of a child being raped.
In an interview with authorities, O’Connor said he regularly reported CSAM to internet service providers “but still received sexual satisfaction from the images and videos.” It is unclear why he decided to report the pilot to law enforcement. Perhaps he had a guilty conscience, or perhaps he genuinely believed that his AI-generated CSAM did not violate the law.
AI image generators are typically trained on real photos, which means that images of children “generated” by artificial intelligence are fundamentally based on real images. There is no way to separate the two. AI-generated CSAM is not a victimless crime in that sense.
The first such arrest of a person for possession of AI-generated CSAM occurred back in May, when the FBI arrested a man for using Stable Diffusion to create “thousands of photorealistic images of prepubescent minors.”
AI proponents will say that it has always been possible to create explicit images of minors using Photoshop, but AI tools make it significantly easier for anyone to do so. A recent report found that one in six congresswomen have been targeted by AI-generated deepfake porn. Many products have guardrails to prevent the worst uses, similar to the way printers refuse to copy currency. Implementing such roadblocks at least prevents some of this behavior.