Deep Dive Episode 168 – Deepfakes: What, If Anything, Should Policymakers Do?

“Deepfakes” are one of the latest technologies to prompt debate about online media. Using deepfake techniques, users can make realistic-looking fake media in which people say and/or do things they never, in fact, said or did. Although artists, documentarians, filmmakers, and many others have used deepfakes to produce creative, and potentially life-saving, content, deepfakes can also be used for harm, including assaults on people’s dignity and political stability. The technology, like many other innovations before it, presents risks and opportunities.

Lawmakers and academics have proposed laws to mitigate such harms. How should lawmakers approach the abusive use of deepfakes? Can lawmakers craft legislation that limits the worst uses of deepfakes without hampering the creation of valuable and creative deepfake media? In this live podcast, leading experts discuss these and other questions related to this emerging technology, using Matthew Feeney’s new paper on the topic, “Deepfake Laws Risk Creating More Problems Than They Solve,” as a jumping-off point.

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

[Music and narration]

 

Introduction:  Welcome to the Regulatory Transparency Project’s Fourth Branch podcast series. All expressions of opinion are those of the speaker. 

 

Jack Derwin:  Hello and welcome to The Federalist Society’s Fourth Branch Podcast for the Regulatory Transparency Project. My name is Jack Derwin, and I’m Assistant Director of RTP. As always, please note that all expressions of opinion are those of the guest speakers joining us today.

 

To learn more about any of the speakers and their work, you can visit www.regproject.org to view their full bios. After opening remarks and discussion between our panelists, we’ll go to audience Q&A. Please enter any questions into the Q&A or chat functions, and we’ll address them as time allows. 

 

Today, we’re pleased to host a conversation titled “Deepfakes: What, If Anything, Should Policymakers Do?” To discuss this interesting topic today, we have Josh Abbott, Bobby Chesney, Matthew Feeney, and our moderator, Kathryn Ciano Mauler.

 

Kat, who will introduce our other speakers before we get started, is currently a Product Counsel at Google. Previously, Kat was Senior Regulatory Counsel at Uber Technologies and General Counsel at i360 LLC. She has also spent time at Mark Barnes and Associates, the Institute for Justice, and the Competitive Enterprise Institute. And we’re grateful to have her as our host today. With that, Kat, the floor is yours.

 

Kathryn Ciano Mauler:  Hi. Thanks so much for being with us today. Deepfakes are one of the new hot topics in technology and the use of technology. And we have some interesting experts to chat through the use and implications and, of course, the abuse of this new technology. It is a new technology but not a new problem, and so it’ll be interesting to hear the discussion around what could or should be done, and what we think about the good use and abuse of this new tech.

 

To discuss today, we have Matthew Feeney, who is a Director of Cato’s Project on Emerging Technologies where he works on issues concerning the intersection of new technologies and civil liberties. Before coming to Cato, Matthew worked at Reason magazine as assistant editor of Reason.com. He has also worked at The American Conservative, the Liberal Democrats, and the Institute of Economic Affairs. His writing has appeared in The New York Times, The Washington Post, HuffPost, The Hill, the San Francisco Chronicle, the Washington Examiner, City A.M., and others. He also contributed a chapter to Libertarianism.org’s Visions of Liberty. Matthew received both his B.A. and M.A. in philosophy from the University of Reading. 

 

Bobby Chesney is the James A. Baker III Chair in the Rule of Law and World Affairs at the University of Texas School of Law, where he also serves as the law school’s Associate Dean and Director of the Robert Strauss Center for International Security and Law. He is a co-founder of the site Lawfare and co-hosts the National Security Law Podcast with Steve Vladeck. He and co-author Danielle Citron wrote groundbreaking articles on deepfakes in 2018, helping to introduce the legal and national security communities to the topic, including “Deepfakes: A Looming Challenge for Privacy, Democracy, and National Security” in California Law Review and “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics” in Foreign Affairs.

 

Finally, Josh Abbott serves as Executive Director of the Center for Law, Science & Innovation at Arizona State University’s Sandra Day O’Connor College of Law. His research interests focus on the ethical, legal, and social issues surrounding new and emerging digital technologies. He is currently collaborating on a grant project on Soft Law Governance of AI Technologies. He also organizes ASU’s annual conference on Governance of Emerging Technologies and Science as well as the ASU-Arkfeld eDiscovery, Law, and Technology Conference. Prior to joining the College of Law, Josh worked as an attorney in Washington, D.C., where his practice focused on international telecom regulation and antitrust litigation.

 

As always, the opinions expressed today will be those of the authors and the discussion panelists and not of anybody’s places of employment.

 

To start off, I’d love to hear an overview of the technology itself. And Matthew, if you would kick us off with the definition of what we’re talking about when we’re talking about deepfakes.

 

Matthew Feeney:  Yeah, sure, happy to. And thank you, Kat, for moderating. And thanks to Josh and Bobby for joining me for this discussion. And thanks everyone listening on the live podcast here.

 

As Kat mentioned, we’re here to discuss deepfakes, which is a word that applies to a category of content that’s developed using certain adversarial deep learning techniques. And what that means, basically, is that you have a set of tools that includes generators and discriminators, so counterfeiters and detectives. One part of the technology is trying to develop material while the other part is trying to detect that it’s fake. And the result is very realistic-looking fake media that may be familiar to some listeners.
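
[For readers who want to see the counterfeiter/detective dynamic Feeney describes in concrete terms, the following is a minimal sketch of a generative adversarial network training loop, written in PyTorch. The network sizes, learning rates, and the data loader name (real_image_loader) are illustrative assumptions, not a description of any particular deepfake system.]

# Minimal GAN sketch (illustrative only): a generator learns to produce fakes
# while a discriminator learns to flag them, and each improves against the other.
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64            # assumed sizes for illustration

generator = nn.Sequential(                      # the "counterfeiter"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh())

discriminator = nn.Sequential(                  # the "detective"
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for real_batch in real_image_loader:            # assumed: yields tensors of shape (batch, image_dim)
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the detective: reward it for telling real images from generated ones.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = (loss(discriminator(real_batch), real_labels)
              + loss(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the counterfeiter: reward it for fooling the detective.
    g_loss = loss(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

[The result of this back-and-forth is the property the panel keeps returning to: the generator is, by construction, optimized to defeat the best detector it has seen.]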

 

For those who haven’t seen a deepfake, they are readily available on YouTube and other websites. But the result is a very realistic-looking but nonetheless fake image of someone. And it has numerous applications. Perhaps the most harmless is one of satire, for example. In recent weeks, I saw on social media, for example, someone—I believe it was on Instagram—using deepfakes to impersonate Tom Cruise. There have been numerous instances of deepfake technology being used to impersonate celebrities. 

 

But there are, of course, other concerns associated with this. While it has been used for satirical effect, it has also been used to superimpose people’s images onto pornography in which they did not appear. There are widespread concerns about the impact this could have on political interference. But there are benefits beyond just satire, of course. I don’t think I need to go into a huge, long catalogue of all of the benefits, but it’s safe to say it’s manipulated, realistic-looking media that has positive and negative applications. 

 

Kathryn Ciano Mauler:  No question about it. Anytime you have as your homework to read about Sassy Justice from South Park and watch the 30 Rock episode about the technical cloning of Seinfeld, it’s not a bad way to start off.

 

So we’ve talked a little bit on preparation about the pros and cons, who’s driving this use, and what the various concerns and legal pushes are in terms of regulating or quashing it. Bobby, would you mind giving a brief synopsis of your research in terms of the legal responses to this technology?

 

Prof. Robert Chesney:  Sure. And thanks, everybody, for letting me join you today. There is obviously a great deal of attention at this point. By 2021, political figures in particular can see the use cases against them, all of them. It’s a bipartisan issue. Everyone can understand how this could harm them. And of course, they also appreciate the harms to individuals and institutions to which malicious use of this not inherently bad technology, as Matt points out, could be put. And so there’s a great hunger in the land for something to be done.

 

And when Danielle and I first wrote about this in 2018, after Danielle had observed the emergence of sexually explicit deepfakes — and of course, the term itself is a play on deep throat, originating from that period where there was a Reddit community that was beginning to use an early version of the technology to create pornography.

 

And we sat down to map out the full spectrum of both benefits and harms this technology probably would be able to yield as it began to become more accessible. And then we tried to match that with an assessment of all the tools to respond to the harmful aspects and wrestled with the question of, for each of these possible interventions, how likely is it to work at all vis-à-vis the malicious aspects, and what would the spillover effects be on the beneficial aspects? And as Matt said, there are pros and cons here, not just harms, as with any sort of creative technology.

 

We broke it down into things like possible criminal law innovations, possible tort law innovations, possible regulatory interventions, educational interventions, marketplace interventions, and so on around the horn. As a result, Danielle and I have been in these conversations for many years. And one thing that always comes up is: never mind trying to suppress it; we just need to educate everybody. We’ve got to make people understand that they can’t necessarily trust what their eyes are seeing, what their ears are hearing.

 

And that all sounds great. It also sounds like the type of need for media literacy and consumption literacy that’s been true for a very long time, well before there were deepfakes. And we all know how well that’s going for us, and so we need to be realistic about how far we’ll ever be able to systematically leverage society’s understanding of these things. 

 

But, critically, and I think this was one of the key points that Danielle and I introduced back when, this idea that we called the liar’s dividend, which is that if you actually succeed in educating lots of people about the dangers here, you are perhaps chipping away at people being misled. But at the same time, you could get the mirror image effect where people who are, in fact, busted on tape having said certain things or doing certain things will have a greater ability to get away with it by simply denying it. And instead of crying fake news, they could cry deepfake news. And we’re beginning now to see examples around the world of exactly this phenomenon taking place.

 

But as for what legislators are trying to do about it now, there is an array of interventions, very often in the nature of piggy-backing on top of existing criminal law and tort law concepts that could, even without modification, be brought to bear on these types of technologies and their uses, because the smarter efforts tend to focus on things that already are malicious to the point of being criminalized or tortious. There are also some IP aspects of that where you’re dealing with somebody’s name, image, or likeness.

 

And these haven’t really caught fire yet, in part because, in my opinion, and as Danielle and I wrote, this really isn’t a problem set that exists or is going to exist and grow because we just failed to realize we could criminalize it or attach civil liability to it. Even if you could do those things, the idea that you’re then going to suppress the emergence of malicious uses is unrealistic. Low-tech fakery, shallow fakes, cheap fakes, whatever you want to call them, exist and are used in all sorts of criminal and tortious ways, and we haven’t yet found a way to completely stop that, so deepfakes won’t be different.

 

Perhaps the bottom line, then—and I’ll stop here—is to say that this is a problem set that needs attention across every mechanism or tool of intervention one can come up with, but we should never view any one of them as a silver bullet.

 

Kathryn Ciano Mauler:  That makes sense. Josh, can you speak a little bit to this technological advancement and whether it is different in category or kind from other media susceptible to falsification that we’ve seen in the past? Is this different in a way that we need to treat it or think about it differently, or is it just the next step along a path of information that can be used and abused by every person who has access to it?

 

Joshua Abbott:  Yeah, thank you, Kat. I appreciate that question, and it’s a good one. It’s an important one, I think, a threshold question, whenever we’re talking about potential abuses of new technologies, and that’s really to kind of look at what is it that we’re actually concerned about? Is it the underlying conduct, the behavior, or what you’re trying to do with the technology, or is it the technology itself?

 

And in the vast majority of cases, and I think in most instances with the use of deepfakes as well, what we’re really concerned about is the use of deepfakes as a powerful tool. And so if you can make an analogy back to almost any other powerful new technology, firearms, for example, they’re a powerful tool, more powerful than a bow and arrow. You could still harm people in ways that we find inappropriate. Of course, then there are ways that harm people that are clearly intentional, such as in wars and battles.

 

But the point is that when more powerful tools come along, how does the law respond to their use in things that we want to prevent anyway? So for example, the use of certain kinds of weapons in the commission of crimes like assault or something could be an aggravating factor. And the purpose of that is not so much that — it’s not that it’s a separate crime. It’s that what we’re really trying to do is prevent the assault. We’re trying to prevent the harm.

 

The fact that somebody used a firearm, or in this case, the fact that somebody used a deepfake to defame somebody or to otherwise spread some kind of malicious rumor or perpetrate some kind of fraud really goes to what is the interest of the law and society in preventing the use of these tools in those bad acts? And so it may be that we look at deepfakes as being something that is increasingly cheap and easy for non-experts to use in a way that they’re able to engage in some of the criminal or otherwise bad behavior more easily and do it more effectively.

 

And I think in the case of deepfakes, what is it that makes it such a powerful tool? Like Bobby was saying, there are plenty of other ways to falsify some image or video. And these things have been around for a long, long time. For as long as there’s been evidence, you can make it seem like somebody is doing something that they didn’t or saying something that they didn’t say.

 

But I think what deepfakes reveal to us is something deeper about human psychology. And I’m not trying to pretend to be any kind of expert in the remotest sense on those kinds of things, but I think everyone can grasp the idea that as people, when we see video and images, our brains are really designed or primed to accept them on a certain visceral, even unconscious, level, often without examining it and thinking about it and considering, okay, now, what is this really saying?

 

And it reminds me of a quote from someone who was showing optical illusions. And he says, “Why can’t we see what’s actually there?” And the answer is that it’s impossible to see what’s actually there. So what is it that we actually see? We see what we’ve learned is useful to us. And so in the case of deepfakes, if they’re powerful images or videos and it affects people in a strong way and they have a strong impact, it’s because we’ve learned through experience that our brains are designed to accept it at a certain level as being real. And really, that’s what it’s about, is what’s real.

 

We were talking a little bit about how on platforms like this you have to be careful about if your microphone is on, if you’re muted, or if you say something that you didn’t intend for a broader audience, and facing the death of privacy. But with the advent of deepfakes and similar technologies, I think it’s really worth considering whether we are facing the death of objective reality as we know it, or at least any kind of agreed upon objective reality, and especially at a time when you have very strongly held competing narratives leading to political and social upheaval. So this is, I think, an important time, an important discussion to have about deepfakes and how we want to deal with them. 

 

Kathryn Ciano Mauler:  Josh, thanks for setting it up that way. That’s a helpful way to think about it because we’ve talked a little bit about the criminal and civil — or to put it another way, the public and private legal ways of addressing this technology. But isn’t it a benefit if we have a world in which everybody begins again to bring a little personal skepticism to the table? Sometimes we see an eagerness to believe whole cloth whatever it is that is out there. Is it a good thing if there’s a little bit of skepticism, that each person now has to put on their skeptical hat when they’re looking at the news or at social media? Matt, can I start with you on this? I know that you have some developed thoughts on this topic.

 

Matthew Feeney:  I think that whether media is created by deepfake or photoshop or more traditional photo manipulation, I don’t think that people’s skepticism is universally applied to all media at all times. I think oftentimes it depends on who the creator of the media is. People are more likely to vet sources they feel are untrustworthy, whether that’s for political reasons or other reasons found in personal bias. So I think it’s fair to say that you want there to be some degree of skepticism in a society, but as the Australian comedian Tim Minchin once said, “You don’t want to be so open minded your brain falls out,” and so willing to examine everything with equal scrutiny.

 

I will just say I think in the past, people predicted that media manipulation would lead to an insufferable degree of skepticism. For example, with the advent of Photoshop, there was at least one article that said, well, because of this kind of photo manipulation, no one’s going to believe photos of real atrocities that come from authoritarian governments. And I don’t think that’s been borne out.

 

I think the wider concern relates to something Bobby said earlier, that we suffer, unfortunately, from a lot of very partisan media. If the now very famous Billy Bush video featuring Donald Trump had emerged years later, I think it’s very likely that many of Trump’s supporters would say, “This is a deepfake.” I don’t think the timing was quite right for the release of that video, but that’s the sort of thing we should expect. So the ideal amount of skepticism in a society is not zero, and it’s not 100 percent. But I don’t think it’s got anything to do with deepfakes per se, I suppose, but with the kind of venue where people go to be informed.

 

Prof. Robert Chesney:  Kat, can I jump in to expand and pile on to something Matt was just talking about, and the theme that really emerges both from Josh’s comments and his?

 

Kathryn Ciano Mauler:  Please.

 

Prof. Robert Chesney:  Great. So I think one of the questions we all need to wrestle with, and we’ve been somewhat talking about it here already, is whether there’s a meaningful difference in kind as opposed to in evolution of technology. That is, are deepfakes and the products of adversarial learning networks, is this a revolution or an evolution?

 

And if it’s just a granular enhancement of credibility and ubiquity, then we might be able to trust in the ability that we’ve seen in the past to learn to live with digital manipulation and analog manipulation techniques, and maybe the same will be true here. But it’s possible that it is more of an inflection point, and that it is a little bit more revolutionary. And so let me make the case for it not definitely being different, but at least arguably being different, to the extent that maybe it won’t go so smoothly this time. And it may push us a bit further toward nihilism about objective reality, as Josh was putting it, which is obviously already a significant problem.

 

So here’s the case. Part one is the idea that, in contrast to most of the preexisting technologies, this one, with some exceptions, is relatively capable of getting above the eyes-and-ears test, the human ability simply to detect that something’s uncanny, something’s inappropriate, something’s altered, something’s off. A true generative adversarial network product, not something that’s maybe described colloquially as a deepfake but isn’t really the product of dueling neural networks, but something that really is, is practically defined by being above that threshold. So that’s one piece. 

 

But, of course, you could say the same thing is true for the fruits of Industrial Light & Magic and other Hollywood outfits that have provided that sort of deeply plausible imagery and audio for a long time. That leads to the second thing that’s different. The technology that yields a deepfake is ultimately going to be scalable in a way that’s dramatically different, and diffusible, and democratizable, you might say, in a way that’s dramatically different from what’s true about Hollywood’s special effects studios, where the barriers to doing this are through the roof. That’s why they’re Hollywood studios.

 

This is the sort of thing that is disseminating rapidly and taking shape as commercially viable products, or perhaps viable products, free academic research, and free tools made available. And increasingly, deepfakes as a service, just as botnets as a service and other forms of malicious online activity as a service on the dark web, become available for those who want to cause harm and aren’t sure how to do it themselves but have the idea that they could do something like this. So there’s that democratization of the ability to cause harm that’s associated with this as well, which means that more people can play the propaganda and information warfare game.

 

And then the third piece — and this is something that came to my mind as I thought about what Matt was saying about how we did manage to deal with Photoshop, it’s true. But as Matt also points out, we are now in a social media driven information environment where it’s already quite clear that there are deep pathologies having everything to do with what Josh was talking about with our cognitive biases and our inclinations.

 

So in our original paper, Danielle and I mapped out a lot of the key cognitive bias concepts that interact so powerfully with social media to contribute to the general problem of misinformation. And what Danielle and I concluded was that at least to some extent, deepfakes ultimately are going to be a problem of truth manipulation that is different in kind, is more meaningfully problematic by perhaps a large amount because of the confluence of those factors.

 

Kathryn Ciano Mauler:  That makes sense. So is the solution, then, just to have this detection technology be improved? Would that solve the problem here?

 

Joshua Abbott:  If I could jump in—and feel free to disagree with me, anyone—but unfortunately, I don’t think that is a solution. Because of the very nature of how these videos and images and sounds are created, they are, as Bobby said, almost by definition outpacing the ability of detection, not just by people but even by machines that are designed specifically for the purpose of detecting fake content or deepfake content.

 

And listening to some of these other comments, I think, again, hopefully not to broaden out the conversation too much, but if, in fact, this is, as Bobby said, some kind of an inflection point or maybe even a revolution in these technologies that really challenge some of our traditional notions of what and how we should regulate these activities, I think — if that is the case, then I think it’s fair to say that this is potentially a challenge to notions as fundamental as ideas around free speech and what the First Amendment means anymore.

 

A model of speech that I’ve had in my mind recently helps tease out some of the issues around speech and the competing interests around speech. I think we can mostly agree, in many cases at least, that the production of a deepfake video, or video or audio content, would be viewed as an expression of speech. Not to say that it may not also be malicious or false or many kinds of things, but that’s okay because free speech doctrines also deal with those kinds of things. But they are viewed in the context of speech in how you assess them.

 

A model that I’ve found useful recently is to think of speech in terms of imagining somebody who has something to say. And if it’s in the olden times or something like that, they’re watching their flock of sheep, and so they get up and they shout it out. And they have ultimate free speech, absolutely no constraints on what they can say or do or express, but the sheep aren’t that interested.

 

What they really want is to find an audience, and so where do you find audiences? Well, you find them in towns and gatherings. And so you can imagine going to a city state or a walled city and wanting to go in, and going to the markets, the marketplace or the public square, and standing up on a platform. And now you’ve got a crowd of people that are all engaging in commerce and expressing themselves that way.

 

And the reason this simple model helps me think about this is, first of all, who do you exclude from these kinds of spaces? So if we’re talking about deepfakes, and we identify people who are engaging in deepfakes, is this the kind of thing that we are interested in excluding from certain kinds of spaces? There’s a U.S. Supreme Court case from 2017 called Packingham v. North Carolina that discusses inclusion or exclusion from public spaces, especially online.

 

But assuming it is allowed to be included because of the potential benefits of these uses of this technology, then still, if the state wants to interfere, and we want to regulate it in some way, we have to cross certain lines. This is in the context of commerce, this is in the context of communication networks, of technology platforms, all these things that are necessary for the exchange of ideas.

 

And if there really is some serious harm on the horizon here, where we’re talking about cognitive biases and the ability to cause harm, such that the state has a valid interest in curtailing speech to prevent some of these harms, then I think we are starting to talk about a challenge to traditional notions of speech, and we need to start thinking about it very differently than we have. 

 

Matthew Feeney:  There’s a lot to — oh, go ahead, Kat. Sorry.

 

Kathryn Ciano Mauler:  No, I was going to call you in, Matt, because I think that’s a really helpful outline of, or at least an introduction to, the various types of speech protections. I think there are major First Amendment issues when it comes to the use or regulation of this technology, as well as private rights of action that may or may not apply. I was going to ask you, Matthew, since you spelled that out well in the paper that kicked off this discussion, if you wouldn’t mind laying some of that out?

 

Matthew Feeney:  I do think that the issue of deepfakes raises difficult First Amendment questions. You’ve seen some state lawmakers seeking to regulate specific uses of deepfakes. And in at least two states, there’s legislation governing the use of the technology before elections. And I do think it’s fair to say that if the First Amendment protects anything, it’s political speech. And that includes the kind of content that we’re used to seeing around elections, content that criticizes a political opponent or perhaps even seeks to embarrass them. It’s not hard to see why lawmakers are concerned about that kind of material. But the First Amendment is a pretty significant barrier there.

 

I do think, though, that when it comes to other contexts, you’ve seen very narrowly tailored attempts to regulate, especially in the nonconsensual pornography context. I think there you have the kind of legislation that certainly isn’t without its concerns, but I think even many civil libertarians will be happy to acknowledge that, look, it’s targeting a very specific case that’s relatively easy to identify. And that is, I would say, slightly different, perhaps, from the political context. 

 

When it comes to the detection, though, and the First Amendment, your query made me think about journalism because I think that journalists are in a really unenviable position here because they often depend on sources sending them images or video. And I don’t think it’s a surprise that many journalism outlets are thinking hard about deepfakes because they don’t want to be caught out reporting on fake content.

 

Now, that could, at least in journalism, this particular industry, provide incentives for journalist outlets to err on the side of caution, especially because of the potential legal implications of publishing something that’s potentially defamatory. But journalism is just one industry. There are many others where people, perhaps, won’t be nearly as careful. And I think we all can think of examples of outlets on the left and the right that may call themselves journalism outlets but hold themselves, perhaps, to different standards. 

 

Kathryn Ciano Mauler:  We won’t be naming names today. Bobby, when it comes to detection and how that interacts, one thought that I can’t get past in thinking about the way that regulation would interact with deepfakes and with this technology is that, without sufficient detection, there would be so much confusion. For example, if on the eve of an election some video comes out with explosive information, or claimed information, created using this technology, I don’t know that there would be time to dig in or prosecute or address that harm. 

 

And so it seems like detection is a big part of what the approach needs to be. I know that you’ve looked into this a lot. Would you lay out some of your thoughts around the detection technology and around where you think the limitations may lie?

 

Prof. Robert Chesney:  Sure. You’re quite right that we all understand that the truth doesn’t catch up with the lie, even if you have a really capable, rapid reaction. One can document the circulation of social media posts that have manipulated imagery from the past couple of years. There are, sadly, lots of cheap fake examples of this that spread really, really widely. And then the similar content describing how things were modified and tweaked don’t spread nearly as widely. So you’re right. It suggests that the dream outcome would be that we could keep this stuff from getting uploaded to begin with. So that raises a technical question, but it also raises business model and market questions.

 

It’s probably good to step back and help people to think of the technical solutions as having an ex ante set and an ex post set. Ex ante technical solutions are usually talked about with the phrase digital provenance or content capture provenance. There are companies like Truepic, as well as academics and NGOs, in the business of trying to develop and disseminate and make ubiquitous technologies for photo capture, video capture, and sound capture that in various ways encrypt, and watermark, and authenticate, and create the technical foundation for detecting when the original has been modified.
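
[As a rough illustration of the capture-time provenance idea Chesney describes, a device could sign a cryptographic hash of each file the moment it is captured, so that any later modification breaks verification. The sketch below uses the Python cryptography library’s Ed25519 signatures; the file names and the two-step workflow are assumptions for illustration, not a description of Truepic’s or any other vendor’s actual product.]

# Sketch of capture-time provenance: sign a hash of the media file when it is
# captured, then verify later that the bytes have not been altered.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()     # would live in the camera's secure hardware
public_key = device_key.public_key()          # published so platforms can verify uploads

def sign_capture(path: str) -> bytes:
    """Run by the capture device: hash the file and sign the digest."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return device_key.sign(digest)

def verify_capture(path: str, signature: bytes) -> bool:
    """Run by a platform at upload time: re-hash the file and check the signature."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True                           # bytes match the original capture
    except InvalidSignature:
        return False                          # file was modified after capture

sig = sign_capture("clip.mp4")                # hypothetical file name
print(verify_capture("clip.mp4", sig))        # True only if the file is unmodified

[Note that, as Chesney goes on to say, a failed check only shows the bytes changed; deciding whether the change was an innocent filter or a malicious edit still requires judgment.]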

 

In theory—and I put a big emphasis on the quotes around that—in theory, if you had that such that every camera and microphone had this hardwired into it, and every platform to which content then got uploaded decided to integrate that as a filter into their system, either to stop things from being uploaded when there’s any variance at all or to cue things up for closer review when a variance is detected, in theory you could exercise a lot of control here.

 

But, of course, even if you had such technology, which it’s getting there, why would we think that the vast market, which is way beyond our smartphones, the vast market of products that contain cameras and microphones all will incorporate this technology into their devices? Some have, and thank goodness some have. But the idea that that’d become ubiquitous to the point where it would be weird and something you could discriminate against in the market if someone had content that wasn’t watermarked, if you will, in that way, it’s hard to see how the market gets there on its own.

 

So does that mean, well, maybe we need the government to push that and to try to do it? Well, I guess the government could. I think there would be tremendous constitutional questions trying to compel the platforms all to have this sort of filtering. And again, think about how much modification we all engage in with the digital content that we capture. I have teenagers. It seems like half their time is spent digitally modifying the content they capture. 

 

It’s not enough to show that something’s changed from the original capture. There’s an element of judgement, and that judgement extends not from just is this an aesthetic modulation or is there a content modulation that’s more substantive? But if so, is it problematic? Is it malicious? Is it tortious? Is it criminal? Is it satire? Is it just for fun? Is it just a TikTok? Calm down, it’s Tom Cruise, no big deal. Or is it something that we should actually be alarmed about? There’s a huge scalability problem with that.

 

So this goes back to my theme that this is not a silver bullet. It will help. It will help in certain settings more than others to have digital provenance popularized. That’s something we should all be able to get behind. But we shouldn’t think it can become the silver bullet.

 

Now, what about ex post? What about detection after the fact? As I think Josh said a moment ago, it’s in the nature of the GAN model, the generative adversarial network model, that if you have a deepfake that’s generated with a particular GAN algorithm, and you’ve got the detector piece of that algorithm, well, it will be pretty good at detecting it, but not if you don’t. And they’re not all the same. There’s a variety of different capabilities. 
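
[To make the ex post point concrete: if you already have a trained detector network, scoring a suspect image is straightforward, but the score is only meaningful for fakes similar to those the detector was trained against. A minimal sketch follows, again assuming PyTorch plus torchvision and a hypothetical saved model file; nothing here is a reference to a specific real detector.]

# Sketch of ex post detection: load a previously trained detector and score a
# suspect image. A detector trained against one family of GANs may perform
# poorly against fakes produced by a different generator, which is Chesney's point.
import torch
from PIL import Image
from torchvision import transforms

detector = torch.load("deepfake_detector.pt")   # hypothetical trained model file
detector.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),               # assumed input size
    transforms.ToTensor(),
])

image = preprocess(Image.open("suspect_frame.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    fake_probability = torch.sigmoid(detector(image)).item()

print(f"Estimated probability the frame is synthetic: {fake_probability:.2f}")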

 

So at the end of the day, yes, we want to popularize these things. If all our phones had Truepic or something like that on them, great. That would help a lot in a lot of settings. It would be very useful in litigation proceedings, for example, where authentication of evidence is going to start getting trickier. But it’s not going to make the larger problem go away.

 

Kathryn Ciano Mauler:  That’s helpful. I would love to hear especially from Josh on your thoughts about any privacy implications of having additional eyes on our smartphones and additional ears paying attention to what’s going out and what’s coming in. 

 

Joshua Abbott:  Yeah, thank you. The — goodness. The privacy concerns are so enormous. I look at privacy as the other side of the coin of data security. And some of the tools or techniques or approaches to securing our data, to ensuring our privacy, really overlap with a lot of these same concerns that we’ve been talking about already, like when Bobby was talking about the digital provenance of things.

 

And something, for example, that I just recently learned about, I’m probably late to the game, but I’m still trying to stretch my mind around, is something that shows the enormous value of being able to authenticate something, anything. And the example I’m thinking about is the recent rise, apparently very recent rise or popularization of these things called NFTs. You’ve heard of these, these non-fungible tokens, where people are paying millions of dollars, almost like collectors, for these digital images or short video clips.

 

Not that that gives them any kind of intellectual property right or ability to exclude others from viewing them or looking at them or using them; it’s just an entry on a blockchain somewhere that verifies that, yes, you are now the owner of this piece of digital property. And I think what that shows, not to go too far into that, is that the ability to authenticate the digital provenance of something is itself something with enormous value, and it also shows how the inability to properly authenticate the source of something can be thought of as an enormous cost or enormous risk.

 

And so I think when we’re talking about things like deepfakes, again, the problem with regulation in traditional forms is the — well, we talked about this pacing problem where the reality on the ground of what the technology can do and how it’s being used is always going to outstrip the ability of regulators and lawmaking bodies to keep up with it in any kind of meaningful way.

 

In fact, when I was practicing, that’s what most of my practice entailed: dealing with network architectures that were ostensibly regulated by some code in the CFR, but the code didn’t make any sense anymore because it was written for generations of the technology that came several generations before and simply didn’t apply anymore. The underlying assumptions of those laws and regulations were just no longer valid. And so how do you apply them to what’s going on right now?

 

This is where the project you mentioned that I’m working on, relating to soft law, comes in. Soft law is not a widespread term, but, essentially, these are rules that are agreed upon but don’t have the force of law, substantive expectations that aren’t directly enforceable by governments. In the gaps, or while we’re waiting for governments to catch up, if they ever can, we’ve got to come up with some kind of rules of the road or some kind of solution. And I think that people are opening up their minds to considering some of these alternative approaches. And the need for those, I think, is becoming more and more apparent, especially in the context of deepfakes and the need for security and privacy in these different contexts. 

 

Kathryn Ciano Mauler:  I think that that’s a helpful overview. What about defamation law? And I know that Matt and Bobby especially have gone back and forth on this question. And I’ll start with you, Matt. Is that a solution that might be one way to address the extent to which there’s abuse or any civil harms around the use of this technology?

 

Matthew Feeney:  It potentially is, although it’s not without its issues. I do recall that, I think it was a few years ago, Senator Warner issued a white paper where he did mention Section 230 of the Communications Decency Act. And I believe it was in that paper where he explored the idea of, well, if a court has found that deepfake content is defamatory, then an interactive computer service will not enjoy 230 protection for that content.

 

That’s an interesting approach. I think like a lot of others that you see in the deepfake space, it’s, fortunately, narrowly tailored. I think anyone who’s seeking or thinking about legislation on this needs to be aware that it has to be very narrowly tailored because as we’ve discussed, it has a huge number of applications, many of which are beneficial.

 

So I do think there are so many people out there thinking about deepfakes and defamation. I don’t know if you need — I might be the only person on the call who doesn’t have a J.D., but I don’t think you need to have a deepfake specific law, necessarily, to handle the defamation or to handle the harm caused by deepfakes there. But, certainly, I think it’s an area where I think the calls for legislation or regulation are probably the strongest, just because of the kind of content we’re talking about. 

 

Prof. Robert Chesney:  I’ll jump in and say that, of course, I think Matt and I agree that the places where you do want there to be legal pressure are places that cause the sorts of harms that almost certainly the law already recognizes in tort liability and sometimes criminal liability. And with tort, let’s include defamation, of course. And you don’t need a deepfake specific intervention because those laws exist, but what you might need is enhanced resources devoted to it, or as Josh was saying earlier, in some contexts, perhaps, it’s an aggravating factor that should show up in terms of how the consequences of violations are expressed.

 

There’s also — there’s sometimes a public educational value in new legislation that is actually formally superfluous on top of existing liabilities, but by singling out particular problem sets, there’s some value there that the law can conventionally serve. So there is some of that. Yeah, so I’ll just stop there. 

 

Kathryn Ciano Mauler:  That is helpful. It seems like one of the ways to address how this might work would be just to give, for example, social media companies additional bandwidth for or additional legal support for simply taking it upon themselves to do more editing and searching. Might that be a solution, and would Section 230 permit that?

 

Matthew Feeney:  Sorry, would Section 230 permit them to be more [inaudible 46:43]?

 

Kathryn Ciano Mauler:  Would there be any additional liability to those companies if they were to engage in closer review of potential deepfakes?

 

Matthew Feeney:  Of course, you can have a large company like Facebook saying to the public, “Look, we view this as a really serious issue, and we have a team dedicated to tackling all this. We’re on it.” But we shouldn’t forget, unfortunately, there will be people out there who will build and already have built websites where people will share some of the more revolting content that we’ve discussed already on this podcast.

 

You have seen, for example, in the wake of the Christchurch shooting in New Zealand that some social media companies took a very aggressive approach to that and sought to take down anything that even looked like it. And that’s certainly something that a lot of social media companies can do, given their resources, but it might be harder for smaller websites.

 

And smaller doesn’t necessarily mean less visited. So for example, a site like Reddit or Wikipedia are very — these are widely visited, but they don’t have nearly as many lawyers and engineers as the Googles and the Facebooks of the world. Maybe, perhaps, they would find it a little harder. I’m not an expert on the technology here.

 

But, certainly, I don’t think 230 will prevent any company that seeks to root out deepfake content from doing that. The worry, I think, is that Section 230 critics will say that the law doesn’t provide, or at least removes, incentives for some websites to police their platforms more diligently for this kind of stuff. And that’s a whole — well, we could do a whole separate podcast just on that, but those are my thoughts on the 230 question.

 

Prof. Robert Chesney:  This is Bobby. I totally agree with that. 230 originates with the idea of making it clear that companies can engage in the moderation they want to, the particular concern at the time being abusive child imagery, and making clear that if they want to police themselves and the content that users upload in that way, they won’t somehow become liable for, or make themselves more exposed to liability for, doing that. That was the original idea.

 

In terms of what their incentives are and how we might help to leverage platforms of various kinds so that they don’t unintentionally become part of this problem, which is going to exist to some extent no matter what, there is market pressure, and we should always look first to that to see whether the market is resolving this over time. In lots of contexts, I think a lot of people, with good reason, say that a lot of the costs are externalized. They’re not internalized by the company, perhaps in part because of Section 230.

 

And so the market pressure alone isn’t enough to move the needle, although I do think that over the past — really, since the 2016 election, certainly if you look at the big platforms, they’ve experienced a lot of political and market pressure, and there’s been some movement of the needle, despite the fact that they’ve got liability protection under Section 230.

 

Ultimately, what are we trying to achieve here? I think the goal should be to make sure, to a reasonable extent, that companies are doing things that are reasonable in light of—it’s not one size fits all—in light of their own particular resources and capacities to act when they are hosting content that is illegal under existing law, that is defamatory, that is criminal, etc. Think about how, when it comes to IP, to protected musical content and video content, companies like Google, YouTube, and others are pretty quick to swoop in and enforce those IP rights.

 

We want to see some amount of the same thing happening as much as it can be, whether it’s with deepfakes or cheap fakes, when someone is posting, say, a malicious, sexually explicit image that they’ve generated to harm their ex, that sort of thing. How exactly you tailor that if you decide, yes, we’re going to intervene with law to try to tweak the incentives there, clearly extremely difficult to do. Maybe it’s not even possible to do it, I don’t know. But it’s a conversation worth having.

 

So when we first wrote about this in 2018, Danielle and I, to be provocative, put on the table the idea that you should lose your Section 230 immunity to the extent that you don’t have some reasonable degree of process, suited to whatever the nature of your organization is and your resources, to at least learn that someone on the outside is trying to tell you that there is illegal content on your site. You’ve got to have some amount of process; you can’t just ignore that and bury your head in the sand. I know Matt has made a strong argument that that’s a lot easier said than done, and I might even be persuaded by him. But I do think that’s an important conversation to have.

 

Kathryn Ciano Mauler:  That’s — sorry, Josh. I see you coming off mute.

 

Joshua Abbott:  Well, just to jump in real quick here, I think we’ve all been hinting at this, but I think it bears repeating expressly how important Section 230 is to the entire digital technological world that we live in. It’s been called by others the 26 words that created the internet. The ability to innovate and provide new kinds of services online under the liability protection of 230 is, I think, really hard to overestimate in terms of its importance.

 

And I think that when we start to see calls, especially from dominant firms, for greater government intervention and regulation or amendments to 230, I think that it’s worth viewing those with some heightened level of skepticism. There’s a — like Bobby was talking about, just the differentials in resources and the abilities of different companies depending on their size and their market dominance, there’s, I think, a legitimate danger of freezing in place market dynamics that have evolved and should continue to evolve to bring out more innovation, and the risks of freezing things in place and giving an advantage to certain firms.

 

In some ways, this actually reminds me of defamation law and how — again, not that I’m an expert in that particular area at all, but there are differentiated rules depending on whether the person who’s been allegedly defamed is considered a public figure with the idea that public figures have greater access to undo the harm of the defamatory content because they have access to their own channels to refute it. And so you have to show actual malice in saying it was defamatory. And if it’s not public figure or — and it depends. We can have limited purpose public figures or if they’ve inserted themselves into the controversy.

 

When we’re now dealing with online and thinking about what kind of access do people have to undo the harm caused by deepfakes, in some ways, we have to really rethink where those standards lie and whether it makes sense to have differentiated standards depending on the different kinds of companies and actors and players in this whole thing. Anyway, I think it’s just something to keep in mind about the importance of 230 and being careful about how we approach it. 

 

Kathryn Ciano Mauler:  We have just about five minutes left, and so I’d like to use the last couple of minutes just to discuss what each person thinks will likely be the landscape here, cutting to the end of the tape or fast-forwarding a few years. It seems like the vast majority of the use cases so far for deepfakes have been more of a civil or private harm. But, obviously, the potential for national security or large public harm is one issue that could hurt or affect a lot more people.

 

So I’m curious to hear whether you think that the end result winds up being sort of a patchwork of state ways of addressing this, or if there’s a federal law, or if there’s no regulatory change that we anticipate coming at all. I’ll start again, Matt, with you, please. 

 

Matthew Feeney:  Well, I think I’ve been wrong with every political and legislative prediction I’ve made in the last few years, but with that throat clearing, I’m happy to give this a go. I think that we should expect that the states will probably take the lead. I do think that the laws that target specifically nonconsensual deepfake pornography are the ones to keep an eye on because they, I think, have probably the most support from the public and also are narrowly tailored. I expect that the deepfake laws that target election speech will not last too long once they eventually get challenged. But I think it’s unlikely we’ll see a comprehensive federal approach to this. I do think it will be the states that lead the way on deepfake regulation and law.

 

Kathryn Ciano Mauler:  Sounds like a safe prediction. And also, it’s not a bad thing to be wrong on these kinds of predictions. Bobby, I’ll move over to you.

 

Prof. Robert Chesney:  I’m impressed with Matt’s predictive capabilities. That sounds right to me. We’re going to see laboratory of the states. We’re going to see lots and lots of experiments that will in some cases be niche, some of which will be more sweeping. The too sweeping ones will fall by the wayside, but some of the niche ones will stand.

 

I think we will continually see interest in it at the federal level. I agree with Matt’s observation about the stuff that tries to regulate campaign speech — the brass ring for many a politician is to come up with something clever that would help prevent a final-few-days sabotage instance, the dropping of a deepfake when there won’t be enough time to walk it back, even if you could ever entirely walk it back. And I think that’s going to be very tricky.

 

I don’t think it’s impossible to do that in a constitutional way, but I’m doubtful that anything that gets enacted will actually do the trick. And so I think that over the next four or five years, particularly in response to what will be a growing array of known instances where real, identifiable, societal harm is inflicted by various things of this kind, you’ll see people pick up the banner and wave it for a while and try to get stuff passed.

 

But my main takeaway has been all along there is no silver bullet solution. There’s no law to be passed that makes this problem go away and recede entirely. 

 

Kathryn Ciano Mauler:  Makes sense. Josh, what about you? What do you think?

 

Joshua Abbott:  Yeah, I won’t make any predictions. They’re always tricky, especially when they’re about the future, as it’s been said. But I agree with what they’ve said generally. To the extent that societal issues and concerns are the aggregations of those at individual levels, I would just say when it comes to these issues around deepfakes, never underestimate the power of an individual’s confirmation bias. The narratives that we hold in our heads and whether we give credit to images, real or faked, is so powerfully connected to our own cognitive biases that if we just keep that in mind, then I think whatever we come up with that respects or acknowledges that reality will be better.

 

Kathryn Ciano Mauler:  Thank you so much. Thanks to our panelists, and thanks so much to our listeners. We really appreciate you and the Regulatory Transparency Project making this possible today.

 

Prof. Robert Chesney:  Thanks so much.

 

Jack Derwin:  Thank you, Kat. And a big thank you to all of our panelists as well. I think it was a great discussion, and I wish we had more time. And thank you so much to our audience for tuning in today. You can follow us on any of the major social media platforms to keep up with our content. And we appreciate you joining us. With that, we are adjourned.

 

[Music]

 

Conclusion:  On behalf of The Federalist Society’s Regulatory Transparency Project, thanks for tuning in to the Fourth Branch podcast. To catch every new episode when it’s released, you can subscribe on Apple Podcasts, Google Play, and Spreaker. For the latest from RTP, please visit our website at www.regproject.org.

 

[Music]

 

This has been a FedSoc audio production.

Joshua Abbott

Executive Director

Center for Law, Science and Innovation, Sandra Day O'Connor College of Law


Robert Chesney

James A. Baker III Chair in the Rule of Law and World Affairs and Associate Dean for Academic Affairs

University of Texas at Austin School of Law


Matthew Feeney

Head of Tech & Innovation

Centre for Policy Studies


Kathryn Ciano Mauler

Corporate Counsel

Google


The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
