By Peter Eckersley
Published on 2018-06-07 on the EFF blog.
Today Google released a new set of AI ethics principles, prompted, at least in part, by the controversy over the company's work on the US military's Project Maven. This post contains some quick preliminary analysis of the strengths and weaknesses of those principles.
On many fronts, the principles are well thought-out and promising. With some caveats, and recognizing that the proof will be in their application by Google, we recommend that other tech companies consider adopting similar guidelines for their AI work. But we also have some concerns, which we recommend Google and other tech companies address:
- One concern is that Google hasn't committed to the kind of independent, informed, and transparent review that would be ideal for ensuring the principles are always applied, and applied well. Without that, the public will have to rely on the company's internal, secret processes to ensure that these guidelines are followed. That's a common (and generally unfortunate) pattern in corporate governance and social accountability, but there's an argument that AI ethics is so important, and the stakes so high, that there should be independent review as well, with at least some public accountability.
- Another concern is that by relying on “widely accepted principles of international law and human rights” to define the purposes it will not pursue, Google is potentially sidestepping some harder questions. It is not at all settled, at least in terms of international agreements and similar law, how many key principles of international law and human rights should be applied to various AI technologies and applications. This lack of clarity is one of the key reasons that we and others have called on companies like Google to think so hard about their role in developing and deploying AI technologies, especially in military contexts. Google and other companies developing and deploying AI need not only to follow “widely accepted principles” but to take the lead in articulating where, how, and why their work is consistent with principles of international law and human rights.
On surveillance, however, we do have some specific recommendations for Google and other companies to follow. Google has so far committed only to avoiding AI surveillance projects that violate internationally accepted norms. We want to hear clearly that those norms include the Necessary and Proportionate Principles, and not merely the prevailing practice of many countries spying on the citizens of almost every other country. In fact, in light of this practice, it would be better if Google avoided building AI-assisted surveillance systems altogether.
We hope Google will consider addressing these issues with its principles. Other issues may come to light with further analysis. Beyond those concerns, though, we think this is a good first step by the company, one that, with some improvements on these fronts, could become an excellent model for AI ethics guidelines across the tech industry. And we're ready to hear from the rest of that industry that they, too, are stepping up.