Denmark introduces legislation to protect its citizens from AI deepfakes

AYESHA RASCOE, HOST:

You know when you're scrolling through social media and you see a picture or a video and think, hang on, is that real? With rapidly improving technology, it's getting harder and harder to spot deepfakes. And while artificially generated images, videos or audio can be harmless fun, deepfakes can also be used to create nonconsensual pornography or to spread political misinformation.

Now Denmark is introducing a new bill that aims to protect citizens from deepfakes by giving them copyright over their own likeness. If the law passes, then, in theory, people could demand that online platforms remove content that uses their image without consent. Henry Ajder is an expert on AI and deepfakes and joins us now from Cambridge, England. Welcome to the show.

HENRY AJDER: Thank you so much for having me.

RASCOE: If you could start by giving us a brief history of deepfakes and why they've now become such an issue.

AJDER: Yeah, of course. I mean, a brief history is always a challenge with this technology because it has moved so, so quickly in such a short period of time. But deepfakes as a term emerged in late 2017 on the social media platform Reddit, where it was being used to describe face-swapping content, so women celebrities having their faces swapped into pornographic footage without consent. And at that time, that's all deepfake meant.

But if we fast-forward a few years, it started to be used to refer to really any kind of AI-generated content, and the realism of the outputs has massively improved. The technology has just got much better. It's also that the kinds of media or content that can now be generated using AI is also much more diverse, so you can generate entirely new videos from a prompt, not just, for example, swapping a face in a video.

RASCOE: So how is Denmark's proposed legislation to tackle deepfakes different from attempts we've seen elsewhere?

AJDER: They are looking at, what is the unique thread that runs throughout all of the different kinds of harmful deepfakes that are being put out there? And they're saying, OK, we can identify one of the things that is common throughout all of these cases, which is someone's likeness, the way they look, the way they sound. Those are things that are unique to them which are being weaponized, which are being hijacked by bad actors.

So that's one interesting part of it. The other is the way that they've gone to copyright as the way to secure this, right. So copyright is something that is transferable, so you can transfer the copyright of, let's say, a song or a book to someone else. But it also applies to the likeness not just necessarily when it comes to AI-generated content but also photos that are taken by a photographer. So Dua Lipa, the famous pop star - she posted, I believe, a photo of herself online that someone else had taken, and she got sued because the photographer owned the rights to the image of her. Whereas this legislation might change the way that we think about rights of ownership to our - both our AI-generated and synthetic likenesses but also our likeness in our organic photos, so to speak, that were taken of us.

RASCOE: Do you think Denmark's proposed legislation will work?

AJDER: It's hard to say at this stage because it's a whole new way of understanding how we think about our likeness. And I think that is where it's really impressive and really powerful, is it has recognized that, look, a world where our synthetic likeness is incredibly malleable and all it takes is a few photos of you or a few seconds of your voice audio for someone to clone your likeness with increasing realism and ease, is one where we need to be reevaluating what it means to be safe as yourself, to be able to represent yourself as you online.

RASCOE: How do you think the big tech companies will view this legislation?

AJDER: What I think we're going to see is platforms potentially being inundated with requests for takedowns, and they're going to have to try and find a way to process those requests and also make sure that the process isn't weaponized by people asking for perhaps more legitimate content to actually be taken down.

There are going to be lots of different cases and lots of sticky cases. Satire and critical speech are protected under this Danish legislation, for example, for politicians and for public figures, perhaps. But where is that line drawn when it comes to private individuals? Enforcement is always a challenge with legislation. This isn't unique to this particular proposed legislation, but I think this has some particular challenges given - the key stakeholders, they're saying, will need to really up their game in terms of the platforms and in terms of some of the gray areas around how it should be enforced and what is seen as kind of fair use or protected use.

RASCOE: That's Henry Ajder, an expert on AI and deepfakes. Thank you for joining us.

AJDER: Thanks so much. Pleasure.

(SOUNDBITE OF THE LUMINEERS' "PATIENCE") Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR's programming is the audio record.