Should We Trust Artificial Intelligence Regulation by Congress If Facebook Supports It?

Try to imagine for a moment a declaration from Congress to the effect that safeguarding the environment is important, that the effects of pollution on the environment ought to be monitored, and that special care should be taken to protect particularly vulnerable and marginalized communities from toxic waste. So far, so good! Now imagine this resolution is enthusiastically endorsed by ExxonMobil and the American Coal Council. You would have good reason to be suspicious. Keep that in mind while you consider the newly announced House Resolution 153.

Last week, several members of Congress began pushing the resolution with the aim of “supporting the development of guidelines for ethical development of artificial intelligence.” It was introduced by Reps. Brenda Lawrence and Ro Khanna — the latter of whom, crucially, represents Silicon Valley, which is to the ethical development of software what West Virginia is to the rollout of clean energy. Representing that district has helped make Khanna a national figure, in part because, far from being a tech industry cheerleader, he has publicly supported cracking down on the data Wild West his home district helped create. He has, for example, criticized the wrist-slaps Google and Facebook receive in the wake of their regular privacy scandals and called for congressional action against Amazon’s labor practices.

The resolution, co-sponsored by seven other representatives, has some strange fans. Its starting premises are unimpeachable: “Whereas the far-reaching societal impacts of AI necessitates its safe, responsible, and democratic development,” the resolution “supports the development of guidelines for the ethical development of artificial intelligence (AI), in consultation with diverse stakeholders.” It also supports adherence to a list of crucial values in the development of any kind of machine or algorithmic intelligence, including “[i]nformation privacy and the protection of one’s personal data”; “[a]ccountability and oversight for all automated decision making”; and “[s]afety, security, and control of AI systems now and in the future.”

These are laudable goals, if a little inexact: Key terms like “control” and “oversight” are left entirely undefined. Are we talking about self-regulation here — which algorithmic software companies want because of its ineffectiveness — or real, governmental regulation? When the resolution mentions accountability, are Khanna and company envisioning harsh penalties for AI mishaps, or is this a call for more public relations mea culpas after the fact?

Read the rest of this article by Sam Biddle at The Intercept.
