Should the Government Regulate Artificial Intelligence?
Artificial intelligence brings tremendous opportunity for business and society. But it has also stirred fears that letting computers make decisions could create serious problems, ones that may need to be addressed sooner rather than later.
Broadly speaking, AI refers to computers mimicking intelligent behavior, crunching big data to make judgments on everything from how to avoid car accidents to where the next crime might happen. Yet algorithms don’t always reveal their decision-making logic. If a computer consistently denies a loan to members of a certain sex or race, is that discrimination? Will regulators have the right to examine the algorithm that made the decision?
Some big technology companies are seeking to set ethical standards through alliances with futurists, civil-rights activists and social scientists—which critics see as an effort to prevent regulation by government. Some experts are calling for regulations to define the boundaries of the technology while it is still new; others worry about quashing innovation just as it is getting started.
To get a sense of the options and potential pitfalls, The Wall Street Journal reached out to three experts in artificial-intelligence policy: Julia Powles, a researcher in law and technology at New York University School of Law and Cornell Tech; Adam Thierer, a researcher with the Technology Policy Program at George Mason University’s Mercatus Center; and Ryan Calo, an associate law professor at the University of Washington and a co-director of the school’s Tech Policy Lab.
Here are edited excerpts.
The rules of the road
WSJ: Should there be any government regulation of artificial intelligence? What form could it take?
MR. THIERER: AI applications already are regulated by a host of existing legal policies. If someone does something stupid or dangerous with AI systems, the Federal Trade Commission has the power to address unfair and deceptive practices. State attorneys general and consumer-protection agencies also routinely address unfair practices and advance their own privacy and data-security policies.
There are other issues that deserve policy consideration and perhaps new rules. But before we resort to heavy-handed, legalistic solutions, we should exhaust all other potential remedies. When innovators have to seek permission before they offer a new product or service, it raises the cost of starting a new venture and discourages activities that benefit society.
MS. POWLES: Whether regulation is heavy-handed is going to be entirely case-specific. If AI-powered systems are going to be carrying our bodies [with driverless cars], surely we should have as much capacity to independently assess AI as we do for cars, food and drugs? Asking for some proof to support these claims of societal benefit doesn’t stop innovation; it saves it.
MR. CALO: What’s fascinating about Adam’s typically thoughtful reply is that, were it followed 20 years ago, it would have harmed the development of the commercial internet. If we had decided that no change was needed, Congress would never have passed the Communications Decency Act, a law that immunizes platforms such as Google or Facebook for unlawful content users post there.
WSJ: Adam, you mention issues that could warrant some new rules. Can you give an example?
MR. THIERER: Probably the thorniest issue comes down to the transparency of decision making. Innovators using AI to accomplish important tasks will be challenged at every juncture to explain how and why certain decisions were made. More profoundly, we’re going to be debating the fairness of many AI-enabled outcomes for years to come. If algorithmic accountability becomes a regulatory straitjacket, we will lose out on many socially and economically enriching innovations.
MS. POWLES: I agree that we don’t want to introduce overly complex explanations. But that doesn’t mean decision-making systems shouldn’t be required to provide explanations that allow people to understand why they are treated in certain ways and to have recourse to vindicate their rights. The demand to explain how and why decisions are made is essential.