An Ohio man has been convicted of cybercrimes, including the publication of AI-generated images depicting abusive sexual activity, in a historic first under the federal 2025 Take It Down Act. But experts warn that prosecuting these cases is increasingly difficult.
James Strahler, 37, pleaded guilty to cyberstalking, producing obscene visual representations of child sexual abuse and publication of digital forgeries – crimes that included both real and AI-generated images, according to the U.S. Attorney's Office for the Southern District of Ohio.
The Take It Down Act makes it illegal to publish nonconsensual, intimate digital content.
Strahler used dozens of AI platforms and over 100 AI web-based models on his phone to create more than 700 illicit images to post to a website dedicated to child sexual abuse material, according to the Justice Department.
According to court records, Strahler was caught when one of his adult victims reported receiving threatening and harassing messages.
Court records also state that Strahler admitted to being the one behind the violent calls and texts. Information extracted from his seized phone revealed additional victims and the extent of his AI abuse.
Small risks for big payouts
Kolina Koltai is a senior researcher at Bellingcat — an investigative journalism group — who specializes in AI technology.
She said the sheer volume of the content Strahler created is not unusual for these sorts of offenders, and that is part of what makes it so difficult for law enforcement to manage.
"Even when we think about early, early days of AI technology, people would have to learn how to maybe install or host something locally on their own devices," Koltai said.
"But nowadays, you can even go to a web domain and put in a prompt, and you have to have very little technical knowledge to be able to start creating the content. This poses a huge challenge because there's just an overwhelming amount of content."
Koltai contrasted today's tools with earlier editing programs like Photoshop, pricey graphic design software that pioneered amateur image manipulation and required some degree of skill to make realistic edits.
"Nowadays," however, she said, "with a dollar or sometimes even cheaper, you can take a photo of anyone on the internet and put it into a 'nudifier' or some sort of AI-generation platform and create a convincing new image even based on that person's face."
The overwhelming number of platforms dedicated to creating deepfake material also adds to law enforcement's difficulty in tracking down these cybercriminals.
"Oftentimes it's incredibly, incredibly difficult to know what technology, what service, what platform the person is using … unless we get access to their devices or their browser history," Koltai said.
"It's not like there's only just two or three providers. Everyone's trying to get into the game because it's a multimillion-dollar industry," she said, adding that sites will often buy multiple domains under different extensions (dot com, dot io, etc.) to avoid being taken offline.
"Even for our investigative site, when we shut down a site, which is great, unfortunately, it's a bit of a hydra, where there's still many other services willing to take the place of that other one," she said. "It's a difficult problem to solve until we make it harder for these platforms to be used."
Deepfakes and young people
AI's transition from obscure to mainstream technology came faster than the law's ability to adapt, said Matthew Faranda-Diedrich, an attorney who has handled cases dealing with deepfaked nudes.
"We went from two years ago, never having heard of this, never having seen a case like this, to right now having at any one time, five or six of these cases, unfortunately," he said.
Faranda-Diedrich said he works closely alongside police to help them understand the rapidly evolving technology and support them throughout their investigations into potentially illegal behavior.
But for police and civilians alike, he said, there is often a learning curve in understanding just how sophisticated many of these apps can be at manipulating images inappropriately.
"They'll think back to when they were younger or other generations and say, 'Oh, this is like Photoshop,' and have this idea in their head that you can easily tell that the doctored image is fake. But in fact the images produced by the 'nudify' apps look very real and are nothing like a Photoshopped image."
"Let's call it what it is"
The distribution of nonconsensual deepfakes is a multigenerational problem, but it is particularly rampant among young people, research shows.
And women and girls are especially at risk, representing an estimated 90% of the victims of these crimes.
Faranda-Diedrich said that in most cases with which he has been involved, both the victims and perpetrators of the crimes have been children, ranging in age from 14 to 16 years old.
"You want to try to educate them about the dangers of this and the harms it can cause so that kids don't make 'dumb' decisions that actually end up hurting people so disastrously," he said.
And schools, he said, have a major responsibility to get involved at the first signs of these technologies being abused.
"Let's call it what it is: it's child pornography," Faranda-Diedrich said. "And I don't think any school administrator would ever say, 'Oh, if I knew of [child sexual abuse material], I would not call the police.' Of course they would. And they need to make that same call here and get it into law enforcement's hands quicker."
Copyright 2026 NPR