Deep Dive Episode 41 – General Data Protection Regulation & California Consumer Privacy Act

This Deep Dive episode brings you the recording of the first panel from the Pepperdine Law Review’s 2019 Symposium “Regulating Tech: Present Challenges and Possible Solutions”.

In this panel, the speakers discuss the implications of internet privacy legislation in both California and Europe on innovation, small businesses, and consumer protection.

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

Operator:  Welcome to Free Lunch, the podcast of The Federalist Society’s Regulatory Transparency Project. All expressions of opinion are those of the speakers. On March 1st, RTP co-sponsored a symposium at Pepperdine Law School with the Pepperdine Law Review titled “Regulating Tech: Present Challenges and Possible Solutions.” Today, we bring you Panel One, which was titled “General Data Protection Regulation & California Consumer Privacy Act.” Please send us your feedback.

Anna Hsia:  Hi. Good morning, everybody. My name is Anna Hsia. I am the Head of the West Coast Offices of ZwillGen. We are a law firm that helps companies navigate the legal issues of doing business online, so a lot of our work is on privacy and data security, which is essentially what we’re going to talk about here today. The chief topics are the GDPR, the General Data Protection Regulation in the EU, and the forthcoming California law, the California Consumer Privacy Act, otherwise known as the CCPA. So we have four panelists here.

To my immediate left is Tom Hazlett. He is a professor of economics at Clemson University. He previously taught at George Mason, UC Davis, and the Wharton School, and he was Chief Economist of the FCC. He is a noted expert in regulatory economics and information markets with research appearing in academic forums such as The Journal of Law and Economics, The University of Pennsylvania Law Review, and The Columbia Law Review. He’s also written for a number of periodicals, including the Wall Street Journal, The Economist, Slate, and The New York Times. And his most recent book was featured as one of the top tech books of the year at CES 2018.

And to his left is Matthew Heiman. He is a Senior Fellow and Associate Director for Global Security at the National Security Institute at the George Mason Law School. He is the Chairman of the Cyber & Privacy Working Group of the Regulatory Transparency Project, and he is the Chairman of the International & National Security Law Executive Committee Practice Group of the Federalist Society. He previously held roles as the Vice President, Corporate Secretary and AGC at Johnson Controls. He was a lawyer at the National Security Division of the DOJ, and he was a trial lawyer with the law firm of McGuireWoods. He has a B.A. and a J.D. from Indiana University, and he’s a member of the International Institute for Strategic Studies.

And then to his left is Gus Hurwitz. He’s a law professor at the University of Nebraska College of Law where he co-directs the Space, Cyber, and Telecom Law Program. His work focuses on the regulation of technology, in particular in telecom and emerging technology industries. And he has appeared in many law reviews and other journals. He’s been cited by courts, Congress, and federal regulatory agencies, and he has degrees in both law and economics, along with a background in computer science.

And then, finally, we have Chris Riley at the end. He’s the Director of Public Policy at Mozilla. He works to advance the open internet through public policy analysis and advocacy, strategic planning, coalition building, and community engagement. He manages the global Mozilla public policy team and its active engagements in Washington and around the world. Before Mozilla, Chris was a program manager at the U.S. Department of State working on Internet Freedom. He was a policy counsel with the non-profit public interest organization Free Press, and he was an attorney-advisor at the FCC. Chris holds a PhD in Computer Science from Johns Hopkins and a J.D. from Yale Law School. He’s published scholarship on topics including innovation policy, cognitive framing, graph drawing, and distributed load balancing.

So I’m going to ask all the panelists to give kind of a high-level overview of their thoughts. But before I do that, I just want to give a brief summary of what the GDPR and the CCPA are. I’m sure all of you have some sense as to what the GDPR is because, last year, you were surely bombarded with inbox messages about “We’ve updated our privacy policy.” So basically, the GDPR became effective last May, and it’s a pretty landmark piece of legislation that effectively applies to all personal data of EU residents. And personal data is defined very broadly in the EU to be anything that relates to an identified or an identifiable individual. Typically in the U.S., when we think of personal data, we think of something specific like a name, an address, or an email address. The EU takes a much broader view, so an IP address would be considered personal data because it’s identifiable to someone. The device identifier on your phone would be similar.

So one key difference between how Europeans see personal data versus the U.S. is that in the U.S. you can typically do whatever you want with personal data, as long as there’s not a law that says you can’t do it or a law that says “This is how you have to do it.” In the EU, you can’t collect it, store it, or process it in any way unless you have a legal basis to do so. So that’s a huge difference between the U.S. and the EU.
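To make that difference concrete: under the GDPR, every act of collection, storage, or processing needs one of the six legal bases enumerated in Article 6(1). Below is a minimal sketch of how an application might gate processing on a recorded basis. The basis names track Article 6(1), but the types and function names are hypothetical, and this is an illustration, not legal advice.

```python
# Minimal sketch of a GDPR-style "legal basis" gate. Illustrative only;
# the six bases track GDPR Article 6(1), everything else is hypothetical.
from dataclasses import dataclass
from typing import Optional

ARTICLE_6_BASES = {
    "consent",               # Art. 6(1)(a)
    "contract",              # Art. 6(1)(b)
    "legal_obligation",      # Art. 6(1)(c)
    "vital_interests",       # Art. 6(1)(d)
    "public_task",           # Art. 6(1)(e)
    "legitimate_interests",  # Art. 6(1)(f)
}

@dataclass
class PersonalData:
    subject_id: str
    field: str    # e.g. "ip_address" is personal data under the broad EU view
    value: str
    legal_basis: Optional[str] = None  # must be recorded before any processing

def process(record: PersonalData) -> None:
    # EU-style rule: no collection, storage, or processing without a basis.
    if record.legal_basis not in ARTICLE_6_BASES:
        raise PermissionError(f"no legal basis recorded for {record.field!r}")
    print(f"processing {record.field} for subject {record.subject_id}")

process(PersonalData("u123", "ip_address", "203.0.113.7", "consent"))  # allowed
try:
    process(PersonalData("u123", "device_id", "a1b2c3"))  # no basis: refused
except PermissionError as err:
    print(err)
```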

So the CCPA, the California law that’s going into effect in 2020, is similar to the GDPR, but it’s different in a number of material respects. The CCPA’s focus is not so much on requiring a legal basis (again, in the U.S., you don’t need one); it’s more on giving consumers notice as to what you’re collecting about them and control over it.

So one of the main things about the CCPA is that it requires companies that sell personal data — again, no one really knows exactly what “sell” means, but those who sell personal data of California consumers — to, for example, include a conspicuous opt-out link and basically allow consumers to opt out of that sale. Both the GDPR and the CCPA also afford individuals rights to access their data and, in some instances, to delete data that companies have about them. And both statutes carry some type of private right of action, and they have fines for violations. So if each of the panelists could just give some high-level remarks on these, that’d be great. Tom?

Thomas Hazlett:  Thanks very much. Thanks for having me. As has been said, it’s a great topic, a great time, a great place to have this. And speaking of tech and the latest thing, I’m delighted to be at the Malibu Beach Inn where I was recently, in the last few hours, introduced to the concept of a smart bathroom. So this was highly impressive, slightly creepy, a little unnerving, but I’m not going to go into detail here. But it does have to do with motion detectors and heated seating.

Anna Hsia:  This was not on the agenda, so I’m not really sure how to ask questions.

Justin “Gus” Hurwitz:  This sounds like a privacy topic.

Thomas Hazlett:  So yes, we can actually spend our time on what people want to talk about now or go back to the agenda. Okay. So recently, I saw Senator Wyden from Oregon introducing legislation on the issue of privacy. And the argument was that Facebook can’t be trusted to protect users’ data on its own; it’s time for Congress to step in. And there are a number of features in the legislation: more resources for the Federal Trade Commission to monitor certain privacy breaches, an increase in potential fines on social media platforms like Facebook and Google, up to 4 percent of annual revenues. And there’s a lot in there. Some of it, I think, is arguably helpful, and I also think the basic approach is extremely dangerous, extremely problematic. The idea that Facebook has been trusted to do this on its own, that that doesn’t work, and that Congress will now hold its feet to the fire puts us almost automatically on the wrong path. We have to see a lot more of the problem, and see it much more clear-headedly, than that.

First of all, go back to the Cambridge Analytica scandal — you can call it a breach; Facebook originally tried to deny it was a breach — in March of 2018. Since that time, while Congress has been putting together legislation, talking, and holding hearings, the stock markets have been busy figuring out solutions of their own. Facebook is down about $40 billion in capital value. That’s as of February 1, and it includes the very positive January quarterly earnings report that increased Facebook’s capital value. In fact, Facebook was down over $100 billion in capital value earlier in that period. That is to say, this is market-adjusted and looks to be associated with the privacy problems. At Facebook, many changes have been made; getting the right mix, the optimal configuration of privacy rights, user rights, and so forth, is very, very difficult.

That’s why we’re here talking about it. It’s very problematic to accept accounts on which consumers are going to be protected by rules that do not allow them to trade their privacy or information for free services, because we know, across a wide range of behavior, that’s what consumers want to do. By the way, a recent laboratory experiment asked human subjects, “How much would we have to pay you to get you off Facebook for a week?” So this was sort of a delisting of Facebook accounts that was enforced by the experiment. And it turns out that the bids that came in constituted, on average, a $1,000-a-year payment to take Facebook users off Facebook. The value that’s being generated by Facebook for shareholders is a small fraction of the value that’s being created for users.

Now, better disclosure, better information, better consumer adjustments can certainly be thought about: maybe improvements in things like monitoring and posting requirements, better transparency and information, even rules for additional causes of action in litigation. All of that is obviously on the table. But in trusting government to step in, we have to remember the other side of this. In 2014 and 2015, perhaps the most serious privacy breach in American history took place. That’s when about 22 million personal records kept by the United States Office of Personnel Management on Department of the Interior computers were hacked by what U.S. intelligence authorities think were Chinese government bots, which actually lurked around for about a year because they found so little security.

And by the way, these 22 million personal records were not encrypted by the United States government. The bots sort of moved around for a year, finding the lay of the land, and then, after one year, began encrypting the U.S. files. This is the Chinese hacker encrypting U.S. files: they wanted to be secretive in withdrawing the information and not allow U.S. security forces, if they were monitoring the situation, to see what the Chinese were doing. They took these records, this treasure chest. Four million Americans working for the federal government, or who had previously worked for it, had their medical histories, their entire employment histories, their Social Security numbers, and a raft of information about family affairs and so forth taken and held by the Chinese government.

Now, that’s a bad breach. The United States federal government has never admitted the breach. They have never told the victims what happened. The only compensation given was a notification to some millions of federal workers that they would get one year of free credit monitoring, without a description of what the potential problems were. The trick there, by the way, is that credit monitoring is the last thing they need. The reason U.S. intelligence authorities know this was a state actor is that none of the information has turned up on black markets, which is the first thing hackers who are going to breach your credit or commit identity theft would do. There’s some other purpose that the Chinese government has in mind for this personal information.

In fact, James Clapper, the Director of National Intelligence, did testify on this a couple of years ago and would not explicitly describe the breach. But he actually congratulated the Chinese on their excellent success and, using the term “glass house,” infuriated the Senate panel and the late Senator McCain by saying it really couldn’t be called an attack. The reason for that position is that the United States is of two minds on cyber warfare: we’re on offense, and we’re on defense. And there’s a lot that goes on that official U.S. government policy will not call a crime or criminal activity, even when it involves breaches of the sort that we think should be identified, communicated to the customers who are victims, and, in many cases, remedied by civil and criminal laws. This is something that is very controversial within government policy.

I bring that up to give a flavor for, and a context for, the idea that the market is not doing its job and the government needs to step in. We have to be a little more fulsome in our analysis of the complexity of the problem: the ability of markets to quickly impose penalties on firms that actually do engage in practices that are not pro-consumer, and the alternative ability of governments to step in and set rules in ways that are very favorable to some.

Anna Hsia:  Matthew?

Matthew R.A. Heiman:  So as one of the victims of that breach, I stand before you with clean hands. Keep that in mind when you hear my comments about privacy. In my estimation, one of the reasons that we’re all sitting here, and we’re all here at Pepperdine, is the incredible growth of the internet industry, the innovation economy, whatever you want to call it. Thirty years ago, we weren’t talking about these platforms, these business models. And the reason we’ve experienced this tremendous innovation, particularly in the United States, is that entrepreneurs, thinkers, and dreamers were operating in an environment with fairly light to no regulation. They were allowed to create new business models. They were allowed to use new technologies, and they created tremendous wealth and jobs and industry as a result.

And because of that lightly regulated space, we have the tech titans that we all talk about, such as Alphabet, Amazon, Apple, Facebook, and others, and they all face the rigors of the marketplace. And so when it comes to the GDPR and the CCPA, I’m extremely skeptical about those laws’ and regulations’ ability to protect privacy or to level the playing field. And I’m extremely concerned that those laws will do what most regulations tend to do, particularly when you have large industry players, which is entrench the large players at the expense of the small players. And I can give you an example, having worked for large corporations.

In general, large corporations can manage regulatory impositions. They have staff. They have lawyers. They have public policy people, essentially, lobbyists on staff. And they can manage that regulation. It becomes what’s considered a cost of doing business. For a small innovator, maybe starting in a dorm room, maybe starting in a garage, the barrier to entry is quite high the more you regulate the space. And so the question that I always put to people that are talking about this is where do you want that trade to be? Do you want to have the next version of Amazon? Do you want to have the next version of Apple? Or do you just want to have Amazon and Apple for the next 30 or 40 years because they’re the only ones that can manage this regulatory regime?

One might think about the automobile industry. At the beginning of the automobile industry, the regulatory footprint of the government was fairly light, and it grew and grew over time to the point where, for about 30 to 40 years, when people talked about the automobile industry in the U.S. (which at one point was the world leader and no longer is), they talked about the Big Three, because there were only three. It was almost impossible for an entrepreneurial, innovative company to enter that space and compete on an equal footing with what was then GM, Chrysler, and Ford.

And so my opening salvo is to ask that we think very carefully about regulating a space that’s generated such tremendous growth, tremendous innovation, and tremendous opportunity for the country, and that we do so with a very light hand. We also have to think very carefully about what we mean when we talk about privacy, because everyone’s got a slightly different view of what privacy is. One quick example: my European colleagues and friends often say Americans have no appreciation for privacy, which I know is false.

But they point to the GDPR and the values they hold very dear. And I, having lived in Europe for a couple of years, am reminded of the fact that in the first village I moved to in Europe, one of the first things I had to do, within 72 hours of arriving, was go to the police department and register my name and all my personal details and all the details of my family. And if I moved a mile to the next village, I’d have to register at that police department. So when we talk about privacy, we need to be very careful about what we mean, what privacy is, and also what our ability is to actually defend privacy versus the benefits we’re getting by giving up a little privacy.

Justin “Gus” Hurwitz:  Okay. So first, thanks for having me out here. Interesting, fun, timely, important, and hard topic. I actually want to start by talking a bit about Qualcomm and CDMA, which has nothing to do with this panel, but I’m going to tie it in. It’s one of my favorite stories. For those who are not familiar with it, if you Google “Qualcomm CDMA demonstration,” you’ll quickly find a video about it, I’m sure. The first live public demonstration of CDMA technology is an incredible story of the challenges of bringing a new technology to market.

The entire system basically fell apart. It couldn’t get the GPS signal that it used for timing during the demonstration, or in the prep for the demonstration, so they had to stretch out the intro remarks for something like 45 minutes while the engineers were back in the van trying to resync with the GPS satellites. And they managed to pull it off. It’s one of the greatest stories, I think, in the tech industry, and you should go watch it.

Unidentified Male:  What year?

Justin “Gus” Hurwitz:  ’88, I think. One of the comments that we heard in the rule of law discussion I want to build on for my opening remarks for this panel. The GDPR is a European law. It applies to American companies that do business in Europe. It potentially applies to American citizens working — doing business with European firms. It’s a European law that affects the whole world. The CCPA is a California law. It is a California law that affects every business in the United States that touches the state of California above a fairly modest size. These are both laws that are very difficult to understand and comply with in an interconnected world. We have a California privacy statute. We could very easily end up with 4, 5, 6, 50 other state privacy statutes, all imposing obligations, compliance requirements on companies operating throughout the country.
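To put a rough number on “a fairly modest size”: as enacted in 2018, the CCPA is commonly described as reaching for-profit businesses that do business in California and meet any one of three thresholds: roughly $25 million in annual gross revenue; personal information on 50,000 or more consumers, households, or devices; or 50 percent or more of annual revenue coming from selling personal information. Here is a minimal sketch of that test, with the thresholds hedged as commonly reported and the function name hypothetical:

```python
# Sketch of the commonly cited CCPA applicability test (as enacted in 2018).
# Thresholds are as widely reported at the time; verify against the statute.
def ccpa_likely_applies(
    does_business_in_california: bool,
    annual_gross_revenue_usd: float,
    consumers_households_devices: int,
    share_of_revenue_from_selling_pi: float,  # 0.0 to 1.0
) -> bool:
    if not does_business_in_california:
        return False
    # Meeting any one of the three thresholds is enough.
    return (
        annual_gross_revenue_usd > 25_000_000
        or consumers_households_devices >= 50_000
        or share_of_revenue_from_selling_pi >= 0.5
    )

# A $50 million firm that touches California is covered even if it sells no data:
print(ccpa_likely_applies(True, 50_000_000, 10_000, 0.0))  # True
```

That breadth is why a single state statute can function as a de facto national rule.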

This very quickly becomes a regulatory thicket and quagmire. Each state, each country, each region of the world imposing its own views on others. This is exceptionally difficult for any form of innovation or startup environment. Matthew has already commented on some of these issues, and I’m sure we will get into them a bit more.

What’s going on with these statutes? Privacy means very different things to different people. Again, building on Matthew’s final remarks, there’s a huge difference between concerns about my medical information and my banking information. Those are very different from biometric information about me, very different from information about what posts I share, or things that I tweet, or posts that I like, or who my friends are, or what websites I visit.

These are all vastly different things, yet we are sweeping them all up into this concept of privacy. And in these statutes, we’re treating them with a very, very broad brush. That’s one exceptionally difficult problem with these statutes and a challenge that we need to face. More generally, what’s going on with these statutes? Why do we have them? Why do we care about privacy? I would say my greatest concern with these statutes is that the concerns they’re responding to have nothing to do with privacy. They’re not concerns about “I don’t like Facebook having my information.” Rather, they’re about a generalized loss of control, or feeling of a loss of control, that citizens have in the modern era.

And we’re striking back against companies when we don’t understand what they’re doing with our information, how they’re collecting it, and how they’re using it. And even if these statutes are incredibly effective, I expect that this generalized concern about lack of control and loss of autonomy will continue to exist. So we’re going to be responding to the wrong problem and, in so doing, hampering our innovative economy, and we’ll end up, 10 or 15 years from now, still having the same set of concerns. The last thing I want to say: the reason we’re having this discussion is that Facebook has just made such a mess on the PR side for themselves. The entire industry has had problem after problem after problem. They need to hire some better PR people, I’ve got to say.

Generally speaking, in my writing on these issues, I’m one of those people saying, “Yeah, I don’t like the GDPR. I don’t like the CCPA. We don’t need federal privacy regulation. We have good basic tools that are great.” And with friends like Facebook, it’s really hard to defend that sort of position, because, at some point, regulation is the backstop for when the market pisses enough people off. And yeah, the market has managed to piss enough people off that regulation is inevitable. So that leaves us, I think, in an uncomfortable place for how we’re going to approach these issues.

Chris Riley:  Thanks. I’m Chris Riley from Mozilla. Thanks for having me here, as well, and thanks for setting me up so well, Gus. You set up a lot of good things there, and the point about regulation being inevitable is one I’m fond of in contexts like this. So I’m here on both the virtual and, in fact, actual left flank of this conversation to be the voice of Mozilla, the company that’s quite supportive of having greater privacy regulations. We’ve supported the GDPR in Europe. We supported the California Privacy Bill, and we’re now actively working on federal privacy legislation across the United States.

We do this because we try to lead the tech industry in privacy. We practiced data minimization before it was cool. We try to do lots of good things with how we handle data to show the industry how data can be used to good business benefit but in ways that enable users to feel more control, to get to that underlying lack of control problem that Gus mentioned, which I do think is a very real problem here.

I think that it’s clear to any observer that the honeymoon with the tech industry is over. For a long time, there was very little detailed scrutiny of what the tech industry did with data and what kinds of practices it ran, because it was generating just so much economic value for this country. The rest of the world didn’t have that same honeymoon, because so much of that economic revenue accrued to the U.S., and so Europe, India, Kenya, and Latin America have always been more skeptical of tech industry practices than we have been in the U.S. But the tech clash of the past couple of years has really helped the U.S. catch up in its skepticism and concern about what technology companies are doing. As people learn more and more about what’s happening to their data in these systems, it feeds this feeling of lack of control in a way that properly designed regulation actually can help address.

I was going to make a big deal of this, but you kind of already stole my thunder. I do believe greater privacy regulation in the U.S. is a matter of not if but when, and of what, and of how we design it so as not to undermine the innovation and economic growth that we care about while still establishing the baseline that people want. I do think more awareness is needed across the industry, not just in big companies, of the sensitivities and the harms that can result from how you handle end users’ data. I’m thinking of a story from Mike Masnick on Twitter a few days ago. He was walking around — I think it was CES — looking at all these technology vendors whose tools and apps were designed for and targeted at kids. And he asked them if they’d ever heard of COPPA, the Children’s Online Privacy Protection Act. And he got blank stares. That worries me.

I think that we need to figure out how to be more thoughtful and more understanding about data and the importance of protecting it. And I feel the same way about federal baseline privacy legislation. I’m worried about the future of technology without effective, enforceable baseline privacy regulation. There’s still plenty of room for sector-specific laws on top of that, like HIPAA and others, but without that baseline privacy legislation, without getting the industry as a whole to be more thoughtful, I’m worried about what will happen to Americans. I’m worried about what will happen to trust in our internet economy. And I’m worried about how separate we will become from Europe, from India, from the rest of the world that has already gotten such a big head start on us.

Anna Hsia:  So Chris, just following up, as an advocate of federal privacy legislation and more regulation over privacy issues in general, what kind of effect do you think that has on innovation? A number of the other panelists have talked about the potential negative effects. And if it does have a negative effect, do you think it matters?

Chris Riley:  I think that you have to look as well at how it creates opportunities and incentives to innovate in ways that are pro-privacy. The California privacy bill has a very specific section on this, trying to incentivize the creation of a business model for privacy protection. It has all of this stuff in it about empowering users to make effective choices, including an explicit setup to delegate your choices to another company. So I think that, if it does come into force and is not amended in a significant way or preempted by federal law, we will see innovation in how you can help users ensure and realize their privacy online. And of course, I have to give a shout-out to Fast Company, which named Mozilla among its 50 most innovative companies for 2019 and number three on the security chart —

Justin “Gus” Hurwitz:  That sounds like it’s a shout out to yourself.

Chris Riley:  It was a shout-out to myself, but via Fast Company, let’s say. In part, it was because of the work that we’ve tried to do in innovating in ways that help users realize their privacy goals in an effective way. I do think there’s one point of sympathy that I want to sound here. Privacy is a very individual, very cultural, very different kind of thing. Not everyone will choose to express their privacy preferences in the same way, and I think it’s very important for regulation to acknowledge and be built around that — be built around the idea of giving users control over their experience.

And if users want their data to be able to be used for all of these purposes, that needs to be part of this ecosystem as well. So I really want to see privacy become pro-innovation policy and regulation by creating more options, more choices, and more different kinds of business models, not viewed so narrowly as to say it just takes away the things that we have today. I think if it’s designed well, it doesn’t need to do that.

Justin “Gus” Hurwitz:  If I may offer a couple of thoughts as well. First, on that last point, I want to echo it wholeheartedly. More business models and more consumer-facing options, I think, are great. I think we might be jumping the gun a little bit, for instance, on a transition to, or an addition of, subscription-based options for a lot of services. Over the last several years, we’ve been seeing more pay-to-use sorts of services coming online, and it’s been a slow, high-transaction-cost process getting consumers comfortable with these approaches to consuming services online.

Two issues that I don’t think have been mentioned yet on the innovation front. First, we need to recognize and acknowledge that many companies use information they gather from their users for innovative purposes. And very frequently, you don’t know what you’re going to be innovating five years down the road until you come up with the idea. And you might not come up with the idea if you don’t have that information. So to the extent that regulation curtails the collection of data (you can only collect data for specific uses, or only with clear consent to use it in specific ways), that can dramatically hamper some types of innovation.

Another really thorny issue in this area is: what is information? Who owns the actual information? If Facebook creates a service that didn’t exist before and users engage with that service, is the fact that a user is engaging with that service a fact about the user, which the user owns, or is it a fact about Facebook? Facebook owns the fact that, “Hey, my service is being used by such and such a user.” And in fact, the user wouldn’t have been able to create that new piece of information but for Facebook having innovated that platform.

Putting my economist hat on, this is a joint production problem, and this industry is rife with complex joint production problems. Understanding ownership and control issues in a joint production environment is really challenging. And if we’re focusing solely on the user side of the equation, we are setting ourselves up for really difficult questions down the road.

Anna Hsia:  So if you think about some of the everyday services that we use nowadays — I’m going to turn to Matthew on this because your concern was that there would be an impact on innovation. So the Airbnbs of the world, the Ubers of the world, what do you think would happen if companies like that were required to first overcome the hurdles of GDPR and CCPA before launch? Do you think those companies would make it? If not, why?

Matthew R.A. Heiman:  I think they’re less likely to make it. So I take some issue with Chris’s point that more regulation creates more opportunities for more business models. If that were the case, then, boy, why don’t we lay regulation across every industry and watch the business models blossom? I know that’s not exactly what you’re saying, but that seems to be what I drew from your comment: that more privacy regulation will create more opportunities for more business models. I think that’s true if you’re a law firm. I think it’s true if you’re a privacy consultant advising big businesses about how to negotiate these regulations. I think it’s true if you’re a lobby shop on Capitol Hill trying to figure out how to get your client through the next round of regulatory scrutiny.

I’m not interested in creating more business opportunities for lawyers and lobbyists and consultants, even though that would be in my self-interest. I’m interested in creating more Airbnbs. My big concern, particularly for small businesses, is that there are certain areas small businesses just won’t touch because they don’t have the budget to hire lawyers and consultants. So I’m not really worried about Airbnb or Google or anything that’s truly established at this point. What I am worried about are the new businesses we’ll never see, the new businesses that will pop up instead in places with a lighter regulatory hand.

So right now, if I was advising someone that wanted to do a tech startup or wanted to create a business model involving data, I would say, “For goodness sake, don’t do it in Europe. Don’t do it in California. And I question, with the direction of travel, whether you want to do it in the U.S. at all.” You know, a lot of these businesses are now popping up in places like Singapore because it’s viewed as a lighter regulatory state where you can innovate and take chances and take risks and let the market respond to what you’re doing and tell you if you’re being successful or not.

The other thing I’d note is that, while Facebook has certainly done itself no favors over the last 18 months, I haven’t seen scores of people deleting their accounts. So as much as people might be annoyed about Facebook using their data in unexpected ways, people aren’t so upset that they’ve stopped posting family photos, or that they’re deleting their accounts, or that they’re walking away from Twitter or any of these platforms. Everyone has a choice to do this, but clearly the market’s view is, “This trade I’m making, some of my personal data in exchange for free or heavily subsidized services, is a pretty good trade.” I mean, does anyone here want to pay for their Facebook account or their email account or their LinkedIn account?

I’ve never had anyone say, “Boy, if I could get more privacy, I’d pay a monthly fee for all these services.” So I think the market has pretty loudly said, “We’re okay with a certain trade.” And I think to the extent that companies abuse that privilege or abuse that trust, they suffer for it. And other companies take advantage of it. One of Apple’s big selling points is “We really care about your privacy. Use our products and services.” So in my view, I’d like to let the market sort this out, because I think it’ll do it in a more efficient way, and I think it’ll do it in a way that pleases more people than Senator Wyden’s approach would.

Anna Hsia:  So is there an argument that the market does, in some way, sort this out with startups? For example, I counsel a number of companies, and one of the considerations I have when counseling them is that, if you are a startup with less cash, you are less likely to be targeted because you just have fewer users, not deep pockets. So does the market kind of already account for that, such that, if you are a new player, you can take some more liberties with various laws and innovate? I’ll open that up to anybody.

Thomas Hazlett:  Well, I think we’re slipping into some kind of context where there’s not a market, and we’re going to create one by having new rules that really incentivize business models that respect privacy. But this game is very well developed already. The reason these hundred-billion-dollar-plus platforms exist is that there’s a constant dynamic churn of alternative models, with breaches and some outrage and some mistakes and public relations nightmares and crises, and so forth, and $40 billion or $100 billion losses by certain firms. We’re already in the game. We’re down the road. Business models have been shaped severely by privacy concerns. I mean, there were screw-ups at Google. There was one that lasted just about 30 days where, on a simple click, you could allow all your friends to know who your other friends were, in terms of your addresses.

And Larry Page thought, why would anybody object to sending out an email where every correspondent sees everybody else you corresponded with? And there was outrage, and Google quickly readjusted that. There was a service called Beacon at Facebook that started taking Facebook users’ previous activity — you know better than I about this kind of thing. It took things like, “You know, Joe was looking for sex toys on Amazon last week,” just to tell you that your friend Joe forgot to post this on Facebook and you might be interested to know what he was looking for online. So you know, that didn’t fly too well.

So these adjustments get made. We laugh, because you do think: who was the engineer who came up with that? And who’s the exec who signed off? “Good idea. We’ll get more information that way.” So these adjustments are made. You also have Tim Cook, of the church of Apple, descending from on high to tell everybody how Apple takes care of your privacy and will never do what Facebook does. And I actually caught CNBC; they couldn’t help but smirk when that first came over. The obvious implication being: of course Apple won’t sell your information. They’ve taken all your money. There’s nothing left. And you really do have a different business model there.

Some of us like the Apple model for exactly what it’s been trashed for: the insular, vertically integrated, high-priced entry fee. And now it looks a little better. These kinds of concepts are shaping themselves all the time and are very subtly adjusted, with all kinds of considerations, to reflect the constraints of providing value to customers without outraging them with privacy problems. So the idea that new law is going to improve that is not absurd, and I take Chris’s comments to say that transparency can be improved. I don’t know if I’m interpreting correctly; I say it so you can correct me if that’s not where you’re aiming. But yes, we should see where transparency can be improved, always having cost-benefit in mind. And the opening comments correctly note that it’s not free to do these things.

These platforms will be imperfect. Privacy will occasionally be less than we would like. The whole reason we have this ecosystem is that we’re waiving certain quality dimensions. And with all due respect to Qualcomm, the only reason we have these is that regulators allowed five-nines reliability, carrier-class telecommunications, to be waived as a requirement for public switched networks. That is to say, when you talk on a cellphone, you accept that it’s a different-quality phone call than a fixed-line connection. Now, that may seem like a trivial point. You’re all used to somebody saying, “I’m on my cell. I’m in a bad spot. I’ll call you later.” Or if you do a radio interview: “Make sure you get on a fixed line for this call.” That’s just the way these products developed.

We don’t consider this an inferior product just because the quality is different from some other quality. In fact, the lower quality, combined with the additional benefits of mobility, is now dominating fixed lines. I don’t have to explain that to any audience. This runs all through the economy. By the way, agricultural markets are routinely cartelized on quality standards, where we don’t allow, quote/unquote, “lower quality products” to compete. They’re called marketing orders. There are restrictions of output and lower consumer welfare because we say the standards have to be higher. If the standards always have to be higher, in privacy or anything else, you’re going to give up something. Matt’s talking about the innovation, the dynamic elements of that. But you’re also giving up a lot of consumer choice, where consumers might like to have something that is priced by Facebook and not by Apple and is readily available, because of that low price, that low entry fee, to hundreds of millions of other customers who are trading information and paying for their access by allowing others to productively use that information.

So yes, ratcheting up those protections can create new business models. In a single-entry bookkeeping world, ratcheting up those protections is all good, but it’s a double-entry world. You’re going to be incurring a cost there that really has to be reflected if you protect consumers. Consumers want to be able to use the whole raft of choices, whether it’s a fixed-line choice versus a mobile choice or, in this case, more privacy versus a little less privacy.

Chris Riley:  I’d love to riff on that, if you don’t mind me just continuing. I do think transparency is a big component of this, but the biggest thought I want to share, after what Gus and Tom in particular said, is that I feel we are simultaneously both closer to and further away from the outcome that privacy advocates like myself want from legislation, in the world we have today, than I think has been realized.

So let me explain that. Looking at the principles about what we want out of federal privacy legislation in particular: we want rights for users, like access to data about you and the ability to request deletion; clear rules on data handlers and processors; purposeful collection of personal data, making sure that you’re collecting it for a purpose; granular and revocable consent over what information is collected about you and how it is used; limits on secondary use; and effective enforcement authority.
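Several of those principles (granular consent, revocability, and limits on secondary use in particular) reduce to bookkeeping about who agreed to what, for which purpose, and when. A minimal sketch of such a consent record follows; every name in it is hypothetical.

```python
# Minimal sketch of granular, revocable, per-purpose consent. Illustrative only.
from datetime import datetime, timezone

class ConsentLedger:
    def __init__(self) -> None:
        # (subject_id, data_field, purpose) -> timestamp of the grant
        self._grants: dict[tuple[str, str, str], datetime] = {}

    def grant(self, subject_id: str, field: str, purpose: str) -> None:
        self._grants[(subject_id, field, purpose)] = datetime.now(timezone.utc)

    def revoke(self, subject_id: str, field: str, purpose: str) -> None:
        self._grants.pop((subject_id, field, purpose), None)

    def allowed(self, subject_id: str, field: str, purpose: str) -> bool:
        # Granular: consent to one purpose says nothing about any other purpose.
        return (subject_id, field, purpose) in self._grants

ledger = ConsentLedger()
ledger.grant("u123", "email", "newsletter")
print(ledger.allowed("u123", "email", "newsletter"))    # True
print(ledger.allowed("u123", "email", "ad_targeting"))  # False: a secondary use
ledger.revoke("u123", "email", "newsletter")
print(ledger.allowed("u123", "email", "newsletter"))    # False: revocable
```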

I actually don’t think that’s that radical a change from the way the market is heading. I think Tom’s point is right that limits to some of the most egregious practices already emerge naturally. We already see things like — I have an Android phone — the amount of information you get about the apps on this phone, what they can do with it, and what kinds of things they have access to; a lot of progress has been made in recent years to help users understand that. So on some level, I feel like we’re already heading in a good direction. And so some will argue, “Then why do we need law?” I would argue instead that law protects the progress that we’ve made, establishing a foundation and a protection for the future. So there are different ideological points there.

At the same time, though, I think we’re also further away in how we think about this and why. I’m fond of saying the original culture of Silicon Valley was a culture of collect all the data, keep it all forever, and figure out how to monetize it later. And that’s the cultural shift we actually want to encourage: moving away from that, toward being more thoughtful and purposeful about what you collect data for and what you use it for.

So back to the market point: I think we’re already seeing a lot of pushback from the public when data is used for purposes that surprise users, when it’s being used in ways they don’t expect. But there are still some major changes that I think we need to make.

So my favorite illustrative example of this is Facebook and phone numbers. So Facebook collects your phone number for two-factor authentication. Let’s say you’ve been resisting giving Facebook your phone number because you don’t want them to know that about you. You don’t want them to use that for other purposes, cross-correlate that into other databases of information and so forth. You give it to them for two-factor authentication or security purposes. You give it to them with the expectation that the only thing Facebook will do with that number is, if you’re trying to log into Facebook from a new computer, they’ll send you a text message with a little code, like many of us in the room probably have set up for our banking services and others and so forth. Facebook then turned around and used that phone number for marketing and for identification purposes, and people started seeing cross-correlation of behavior based on their phone number and how that was being used.

So that kind of secondary use feels wrong to me, and it came about almost by accident, because I think the companies weren’t thinking about it. They have this record of a person that says, “Oh, what’s the person’s phone number?” They take the number in for one purpose, they slot it into the person’s record, and then everything else that’s been built on how you use a person’s phone number starts to kick in. So it’s a relatively minor change, but also a relatively major one, to put a person in power to say, “No, use my phone number for only one purpose.” I’m sure I’ve set up a lot of other things, but that’s all I’ll say.
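The failure mode just described is mechanical: the phone number is stored as a bare field on the user record, so every downstream system that knows what to do with a phone number picks it up. One hedged sketch of the “use my phone number for only one purpose” change is to tag the field with its permitted purposes at the storage layer. All names below are hypothetical; this illustrates the idea, not any company’s actual system.

```python
# Sketch: a 2FA phone number stored with a purpose tag, so marketing code
# paths cannot read it. Illustrative only; all names are hypothetical.
class PurposeTaggedField:
    def __init__(self, value: str, allowed_purposes: set[str]) -> None:
        self._value = value
        self._allowed = allowed_purposes

    def read(self, purpose: str) -> str:
        if purpose not in self._allowed:
            raise PermissionError(f"{purpose!r} is not a permitted purpose")
        return self._value

# Collected for security only, matching the user's expectation in the example.
phone = PurposeTaggedField("+1-555-0100", {"two_factor_auth"})

print(phone.read("two_factor_auth"))  # ok: used to send the login code
try:
    phone.read("ad_targeting")        # the secondary use gets refused
except PermissionError as err:
    print(err)
```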

Justin “Gus” Hurwitz:  I’d like to riff on this and continue responding to the question as well. The question of how to advise startups in this area, I think, is really hard. And I’m going to say something possibly surprising, which is that the area where we need perhaps the most regulation, or at least awareness in thinking about these concerns, is the startup environment. It’s not the Facebooks; it’s the folks in the garage. The reason is that every startup aspires to be the monopolist. Google, one day, was a startup. Facebook, one day, was a startup. And there was a period, and I still hear stories of this even today, when venture capitalists, if they heard that a startup was thinking early on not about privacy but about security concerns in its products, would back out.

They didn’t want companies wasting time and money incorporating security features into a product that was probably going to fail. The thinking was, “Don’t do this now. If you blow up, if you’re a billion-dollar company, then we can put all this stuff back in.” Well, the problem is that refactoring is hard. It’s really hard to add security, really hard to add privacy features, to a product once it is live, just as a technical matter, never mind as a user expectation and experience matter. So the story about COPPA is a really important one. We want startups, the small companies, to be thinking, “Okay, if I’m really successful, do I want to be Mark Zuckerberg being grilled by Congress? Or do I want to be recognizing, at the ground floor, hopefully not by way of regulation from my perspective, the mines that I’m laying for myself in the future?”

Another concept that I’ve been borrowing and starting to throw around in these discussions: I think a lot of folks are familiar with the idea of the uncanny valley. If a machine is clearly a machine, we’re comfortable with it. If it’s a perfect android that is indistinguishable from a person, we’re comfortable with it. But if it’s obviously trying to be human-like and not quite succeeding, it’s really creepy. I think privacy has an uncanny valley problem. Targeted ads and user preferences that track you and make products better, more convenient, doing predictive stuff for you: when it works, it is awesome and seamless, and we think it’s great. When the products aren’t doing anything with this information, it’s also fine; we’re not worried about it. But when we’re getting ads that are clearly trying to be targeted and they’re misfiring, or something is being targeted that really shouldn’t be, we’re in an uncanny valley sort of situation.

And I think an unrecognized part of the discussion is to what extent is the market adapting to this, responding to this. Is the technology continuing to improve to get us out of the uncanny valley, either by saying, “Okay. We can get out on the side that lets us use these technologies,” or “There’s no way to get out of it, so we need to stop using these technologies”? A lot of what’s driving the privacy discussion, I think, isn’t that people are concerned that their information is being used. It’s that they are uncomfortable with the perception of how it’s being used. And that’s largely an uncanniness effect dynamic.

Anna Hsia:  So panelists, you’ve talked a lot about giving consumers choice, and notice, transparency, and choice seem to be motivating factors behind these statutes. If we talk about a lot of different services in the U.S., a lot of them are free because of advertising. So, you know, I have small kids. I go on Amazon, I look up Elmo, and suddenly Elmo is basically creepily following me throughout the rest of the internet. But what if we had a different type of system? For example, Facebook is free if you’re fine with being tracked for advertising purposes, but there’s an alternative where Facebook is $10 a month on a subscription model. What do you think the effects of that would be, in terms of potentially being discriminatory? Chris, I’ll start with you.

Chris Riley:  So I’ll wear more of a European hat than an American one for a second and just say there is something fundamentally anathema to many people about having to pay for privacy. In Europe in particular, privacy is a fundamental right and you should never have to pay for it. So I think that we need to be thinking about options to users in a broader lens than just Facebook as we have it now or a pro-privacy Facebook that’s for money. I think there is a lot more creativity and diversity that could go into a pro-privacy business model than just that.

Anna Hsia:  Matthew, do you have any thoughts?

Matthew R.A. Heiman:  I fundamentally disagree with Chris. You know, I find it odd that everyone wants all the benefits and all the goodies that these platforms provide, but nobody wants to provide any information about themselves, which is the lifeblood of these platforms. In other words, I want my free LinkedIn. I want my free Facebook. I want my free YouTube. I want to be able to see every episode of Gilligan’s Island. I want free social networks. I want free Twitter. I don’t want to give any information about myself away. So what are these companies to do? How do they have a revenue stream? How do they have a business model? If the answer is, well, the business model can’t be advertising anymore, then we destroy a lot of tremendous economic value that’s already been ginned up. And as Tom said, we’re already pretty far down the track. And so I’m not troubled. If you want to have an advertising-free experience, then you can pay for that. Or if you don’t want to be bombarded with ads and you don’t want to pay for it, you can choose not to participate in that platform.

I mean, I believe YouTube offers the opportunity to pay and be free of the four-second ad before that clip of someone falling down a flight of stairs, or you can choose not to watch the video on YouTube. Consumers have choices, and the companies are responding to those choices all the time. I just don’t see how introducing very rigid regulatory regimes is going to advance the cause and make it a more free-flowing environment with greater privacy protections, because I think the market is already pretty responsive to consumer complaints. That’s why Facebook is constantly tripping all over itself to change things and adjust things: people get upset, and it responds.

So I’d rather see it play out that way than through regulations, which wind up being on the books for 20, 30, 40 years, don’t change, and stifle a lot of innovation, and not just innovation but competition. I think competition is really important. If Alphabet and Facebook have these moats around them and they’re not challenged, they’re not going to innovate. So you know, I’m not troubled by setting up tiered systems where you pay for the kind of service you want.

Chris Riley:  Can I respond very briefly to that? I think it’s extremely misleading to confuse privacy and advertising. There are different models of advertising that are not built on the collection and the building of behavioral profiles of individuals and actions. I’m fully comfortable with having a pay for an ad-free experience world. But that’s different from having an “advertising supported but friendlier on privacy” business model. So to conflate and confuse those two is not how the advertising world is looking at this situation today and not how we should be looking at it when we’re talking about privacy law. It’s not saying get rid of all the advertising, by any stretch of the imagination.

Thomas Hazlett:  Well, I was recently writing something on Senator Wyden’s proposed legislation, and I went to read his essay posted on NBC’s website. And as I was reading about the outrage of using information without the express consent of consumers, I noticed that the ads adjacent to Senator Wyden’s essay were telling me about automobiles I had recently scanned on the internet and popping up with real estate ads in some of my favorite real estate markets. And I smiled to myself that, instead of posting this on his own website, advertising-free, Wyden wanted to take advantage of the commercial platform provided by NBC, using my personal information, which I certainly don’t recall giving to NBC. But it may be buried down there in some contractual daisy chain.

And in fact, people, including Senator Wyden, are taking advantage in so many complex and multitudinous ways that we can’t even create the chart for the system we have. Now, that does scare some people, and there can be violations of privacy that are problematic, that we would reasonably want to fix. But the idea that there are tremendous advantages being generated here seems to be lost in the mix. Go back to the beginning of the Google search algorithm. Search was a commodity in the mid-to-late 1990s. It was thought to be a problem that had been solved. It didn’t work that well, as we now know. But it was just keyword search, and that was fine.

And these two young punk-a-loos in the computer science department at Stanford, who never finished school by the way, had an algorithm. They had PageRank, and then they had this crawler. And they put them together and started doing this stuff. And they had to think about a business model to pay for it. There was an entrepreneur who desperately tried to talk Page and Brin into intention-based advertising, using the private information from search users to sell them ads. And that was evil. In ’98 and ’99 (you can read their ’98 article on this), they explicitly say you can’t do the search engine on an advertising-supported, for-profit basis; it has to be nested in a non-profit, like a university. And Gross was the name of the guy who ended up selling them this model a few years later, and he ended up, in an intellectual property dispute, getting about a quarter of a billion dollars’ worth of stock when the company went public.

But he had intention-based advertising worked out, and he finally sold it to the people who had explicitly said this would be evil and then switched on it. They had seen it as a violation of privacy, a corporate play to create a commercial platform at the expense of the rights of the users. And they were wrong. This has been a tremendously important breakthrough for society. Of course, Page and Brin now see the error of their ways, and they have, what, $30 billion of capital value each urging them on, 30 billion good reasons to understand that they were wrong, plus or minus depending on the market today. But that is the force of the marketplace. That is the expression of consumer demand, and the margins of this are important.

But the infra-marginal explosion that we’re dealing with here, this real transformation of information markets, is enormously productive. And we have to understand how that comes about. So when Matt talks about the next Google, the next breakthrough, we do have to be very careful that dealing with the increment does not destroy the process of disruptive technological innovation that has been extremely powerful. And the last thing I’ll say is that I really don’t understand the idea that people won’t or can’t pay for privacy, particularly Europeans. I learned in Europe that privacy was costly when the sign said “private bath.”

Now, you’ll think that my mind runs in these directions generally, but as soon as you said the Europeans don’t want to pay for privacy, I thought, “Well, if you get a private bathroom, you pay for that, right? That’s all about privacy.” And it’s completely on a continuum with essentially every other product. More privacy is more costly. In real estate, privacy is costly. It’s everywhere, and this is just the latest generation of it. It presents challenges, but I don’t think it’s sui generis.

Justin “Gus” Hurwitz:  I want to quickly add something. I don’t think we’ve really responded to Anna’s question about discrimination and advantaging or disadvantaging marginalized communities. And this is an important element of the discussion that we need to highlight, because it’s one of the hardest and, from a policy perspective, most important things to recognize. And I don’t know that there are good answers here. One of the great things about the current internet is that you don’t need to pay in order to use most services, which means that if you don’t have much money, you can still use most services. That’s incredible. Another important thing about internet business models, and this isn’t just internet business models but business models generally: it’s hard to have a blended business model.

That is to say, for both technological reasons and from a consumer perspective, it would be hard to have a Facebook where you could either pay for the service, pay for the service and have some information collected, or pay nothing and have all of your information collected. Maybe it could be done. Maybe it would be successful. More likely, the successful platforms are going to have a single, simple revenue model. If we transition to a payment-based one, or if, through regulation or legislation, we prevent companies from monetizing consumer information, or we make it so complicated to monetize that information that companies just say, “Okay, I’m not going to bother collecting information; I’m going to go to a subscription model instead,” then there are a lot of marginalized communities that might lose access to these services.

There are, I think, potentially problematic, and also very normal, paternalistic overtones to a fair amount of the discussion about marginalized communities getting access by exchanging their private information. By marginalized communities, I sometimes mean communities that have information they might not want made public or shared about their characteristics — gender, sexual orientation, those sorts of concerns — that they don’t want these internet giants gobbling up, and sometimes just the economically disadvantaged, lower-income individuals who can’t afford to pay for these services. My general view, and I think there are a lot of studies to back this up, is that marginalized communities tend to understand these issues a lot better than folks who are not members of those communities.

Lower-income individuals tend to be much savvier about deciding how and where to spend their relatively few dollars than the people in this room, on this panel, or sitting in the halls of Congress either give them credit for or are themselves. So I think we need to be thinking about and understanding these concerns, and we also need to be thinking about how well we actually understand these concerns.

Anna Hsia:  I think we’re running a little low on time, so does anyone have any questions, at all? Yes. There are microphones, too, if you want to…

Questioner 1:  This is a question for Mr. Riley. From the small-to-medium-size firm perspective — say, $50 million to $100 million, still covered by the California statute but not affected as much as some of the major players in the area — why is it preferable to have regulation versus, say, the market correction that Professor Hurwitz was mentioning? Does that make sense?

Chris Riley:  Well, clarify for me, from the perspective of these companies, do you mean imagine that I am at one of these companies and I say that it is preferable?

Questioner 1:  Yeah. Would it be preferable? Or you can also base this on [Inaudible 01:07:35].

Chris Riley:  So I won’t try to put myself into the perspective of a company like that. I think about this much more at the societal level. As to the Mozilla mission: we’re a mission-driven organization, and we set the policies and positions we do based on what we believe will make the internet better, first and foremost, before anything else. So I do believe that having baseline regulation on privacy will make the internet better than strictly letting market forces operate. And I believe that’s true when federal privacy regulation is applied not just to the largest companies, but also to mid-sized companies and, I would say, to startups. Some of that follows from the arguments that some of my fellow panelists made, Gus in particular, that it’s hard to refactor privacy in.

You have to think about how you design your data collection and your data policies and so forth early in your business cycle; just bolting them on after the fact doesn’t work. But Mozilla is also, in particular, a big believer that being lean about your data reduces your risk as a company. Now, that doesn’t directly answer your question, but we also try to do direct advocacy. We have a toolkit built around our lean data principles, and we directly socialize it with smaller and mid-sized companies to try to help teach them what we’ve learned about how to be more purposeful and more limited with data collection and data processing. So that’s not an answer to market forces versus regulation. It’s more a statement that we believe this is an unequivocal good and a good thing for businesses of all sizes to be doing.

Frankly, we’re supportive of regulation because we’re not seeing that happen yet. Right? We don’t see the market delivering on the kinds of purposeful and limited collection and use of data that we believe the technology industry should offer. We don’t see the kinds of granular and revocable consent that we call for and support in legislation being offered by the market today. And so we’d like to help shape a better internet of the future. And we see regulation as a very effective way to do that.

Anna Hsia:  Yes.

Questioner 2:  So my question is mostly for Matthew, but also for anybody else who has insight on it. I think a lot of people argue that consumers don’t really understand exactly what they’re giving up when they hand over their privacy or their data. So some argue that the regulations and laws are trying to equalize that transaction, to educate consumers about what they’re giving up, or at least give them more bargaining power. So how do you respond to the idea that the regulation is necessary to do that, and that consumers maybe don’t realize the value of what they’re giving up for the services they’re getting?

Justin “Gus” Hurwitz:  I can jump in on a response, if you want.

Matthew R.A. Heiman:  Oh, I’m happy to take a crack at it. I think at this stage of the game, consumers have a pretty good understanding of what they’re giving away. My running assumption on any of these platforms is that if I’m putting information out there and it can be attached to my name, that company is trying to figure out a way to monetize it. Now, could companies be better at making the disclosures more detailed or putting them into more layman’s language? Of course they could. And perhaps the companies that figure out how to set up those feature sets, in terms of making the consumer aware of exactly what they’re trading away, will be advantaged in the marketplace. I’d just much rather let the marketplace sort those issues out, or let consumers choose the companies they think are most vigilant about their privacy, than have Congressmen on Capitol Hill who barely understand these technologies try to regulate them.

I feel much more comfortable with the marketplace responding to the needs of consumers than with regulators doing so. And again, I go back to the point I made earlier: we don’t see masses of people fleeing these platforms because they’re that outraged about the privacy practices. Now, privacy advocates are really outraged about the privacy practices, but I think there’s a significant gap between where the privacy advocates are in their response to these issues and where the marketplace is. If the marketplace were as upset about it as the privacy advocates are, we would have had regulation long ago, and we would have had many more companies providing services more in line with what Chris might think are best practices.

So I think at this stage of the game, consumers get the idea that, you know, “When I give away my information and I’m getting this free service, that’s the trade that’s being made.” I don’t know, Gus, if you want to chime in.

Justin “Gus” Hurwitz:  So in the background of your question is one of my least favorite arguments that folks very frequently make when we’re talking about this stuff, which is that no one reads these privacy policies. No one reads the contracts, so they’re totally ineffective; if that’s your form of transparency or disclosure, it’s garbage. My response to that is: all hail the marginal user. Which is to say, we don’t need everyone reading these contracts. We don’t need everyone reading these agreements. We just need one person reading them, or a small cohort of folks reading and understanding them. In all markets, the marginal user protects the inframarginal users or customers.

And this is a market where every change to a privacy policy is scrutinized by an entire industry of crazy consumer advocates like Chris. So if there are problematic things going into these disclosures, problematic policies, they’re not going to stay hidden for long. They’re going to be front-page New York Times, the FTC is opening an investigation, because such-and-such a consumer group saw the change and is mad as hell and not going to take it anymore. To the extent that underlying your question is a concern about whether these disclosures are really effective: yeah, they are.

Chris Riley:  If I could very briefly add to that, I do think we’ve surfaced one of the most fundamental areas of disagreement here, because I don’t believe that consumers have enough understanding. I wish they all listened to Mozilla and how we interpret these things and try to be that intermediary, but we don’t have that kind of visibility or awareness. And, at the risk of overly segueing into what I think is the next panel, I also don’t believe that competitive forces work well enough to allow for the rapid creation of new competitors and new business models in the sticky technology markets we have today. And I can completely understand, intellectually, why people who believe differently than I do about customers’ level of awareness and the effectiveness of competition in the technology sector come out in a different place than I do on the need for regulation.

Thomas Hazlett:  Well, I was going to say that this is the nirvana fallacy: the idea that there’s some pristine state of the world and we’re violating it. People sign click-throughs all the time: “Get me access to the platform. Why are all these click-throughs coming up? Let’s get on with it.” And the point is not that consumers are ill-informed and need to be better informed because, in some kind of perfectly functioning market, they would know everything there is to know. That’s not true. You walk into Chipotle, and there are all kinds of agreements and contracts and all kinds of precedential law that govern that relationship with Chipotle.

I use this example because it was given incorrectly at a law conference I attended some time back: “Well, with Chipotle, you don’t have any of this click-through stuff.” Yeah, but it’s just as complicated a relationship, one the customer doesn’t care about and doesn’t want to take the time to invest in. It’s really a market transaction: if we don’t like it, we won’t come back. Or take ibuprofen. Every time you buy it, you get that little insert out of the box that you throw away. Well, read it sometime. It’s got chemical compound reactions on there. It’s got language from Supreme Court cases. It’s got extensive information on there, and that’s just the tip of the iceberg; they tell you to go to the website to get the real story. Nobody reads that stuff. Okay?

You know, we have other mechanisms for figuring out costs and benefits, and that’s efficient. We don’t have complete contracts. We know that incomplete contracts are efficient. So there’s no nirvana state here; it really is a cost-benefit world. So if we have time, I’d like to know: GDPR, is it working? Do we like the direction it’s going? I’ve noticed two effects. I have more click-throughs now, and the big firms, the big platforms, seem to have no problem with this at all. And I do know that there are a lot of missing sites in Europe. If you go to Europe, you can’t get a lot of American sites.

About 1,000 American sites have taken down their European operations because they don’t want the expense. They don’t have large audiences there, and it’s just too costly. So we see that there is an innovation cost, a competition cost, to that kind of extra overhead. The question is: is it worth it? Is there a benefit coming through GDPR that we can see? I honestly don’t know the answer to that. So if anybody wants to comment on that, if we have time, I’d love to hear it.

Anna Hsia:  I think we’re running a little low on time. I don’t want to completely roll over into our lunch.

Thomas Hazlett:  Okay. We’ll click through.

Anna Hsia:  We’ll click through. All of us are available at lunch, if you’d like to chat about any of these topics with us. So I’d like to thank our panelists and Pepperdine for having us. Thank you.

Thomas Hazlett

H.H. Macaulay Endowed Professor of Economics

Clemson College of Business


Matthew R. A. Heiman

Chief Legal + Administrative Officer

Waystar Health


Justin “Gus” Hurwitz

Professor of Law and the Menard Director of the Nebraska Governance and Technology Center

University of Nebraska College of Law


Chris Riley

Director, Public Policy

Mozilla


Anna Hsia

Head of West Coast Office

ZwillGen


