Google Should Not Help the U.S. Military Build Unaccountable AI Systems
By Peter Eckersley and Cindy Cohn
Thousands of Google staff have been speaking out against the company’s work for “Project Maven,” according to a New York Times report this week. The program is a U.S. Department of Defense (DoD) initiative to deploy machine learning for military purposes. There was a small amount of public reporting last month that Google had become a contractor for that project, but those stories had not captured how extensive Google’s involvement was, nor how controversial it has become within the company.
Outcry from Google’s own staff is reportedly ongoing, and a letter signed by employees asks Google to commit publicly to not assisting with warfare technology. We are sure this is a difficult decision for Google’s leadership; we hope they weigh it carefully.
This post outlines some of the questions that people inside and outside the company should be asking about whether it’s a good idea for companies with deep machine learning expertise to assist with military deployments of artificial intelligence (AI).
What we don’t know about Google’s work on Project Maven
According to Google’s statement last month, the company provided “open source TensorFlow APIs” to the DoD. But it appears that this controversy was not just about the company giving the DoD a regular Google cloud account on which to train TensorFlow models. A letter signed by Google employees implies that the company also provided access to its state-of-the-art machine learning expertise, as well as engineering staff to assist or work directly on the DoD’s efforts. The company has said that it is doing object recognition “for non-offensive uses only,” though reading some of the published documents and discussions about the project suggests that the situation is murkier. The New York Times says that “the Pentagon’s video analysis is routinely used in counterinsurgency and counterterrorism operations, and Defense Department publications make clear that the project supports those operations.”
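For readers unfamiliar with the terminology, “open source TensorFlow APIs” refers to machine learning tools that Google has released publicly and that anyone can use without the company’s assistance. As a purely illustrative sketch (not code we know to be part of Project Maven, and with a placeholder image path), running a pretrained image classifier through those public APIs looks roughly like this:

    # Illustrative only: classify one image with TensorFlow's public Keras API.
    # This is not Project Maven code; "frame.jpg" is a placeholder path.
    import numpy as np
    import tensorflow as tf

    # Load an off-the-shelf model pretrained on ImageNet.
    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    # Read and preprocess a single image.
    img = tf.keras.preprocessing.image.load_img("frame.jpg", target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

    # Print the model's top three guesses.
    preds = model.predict(x)
    for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
        print(label, round(float(score), 2))

The point is that tools like these are freely available; the controversy is over the state-of-the-art expertise and engineering assistance that, according to the employees’ letter, went along with them.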
If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects. Those are hefty ethical stakes, even with humans in the loop further along the “kill chain.”
We’re glad that Google is now debating the project internally. While there aren’t enough published details for us to comment definitively, we share many of the concerns we’ve heard from colleagues within Google, and we have a few suggestions for any AI company that’s considering becoming a defense contractor.
What should AI companies ask themselves before accepting military contracts?
We’ll start with the obvious: it’s incredibly risky to use AI systems in military situations, where even seemingly small problems can result in fatalities, in the escalation of conflicts, or in wider instability. AI systems can often be difficult to control and may fail in surprising ways. In military situations, failure of AI could be grave, subtle, and hard to address. The boundaries of what is and isn’t dangerous can be difficult to see. More importantly, society has not yet agreed upon necessary rules and standards for transparency, risk, and accountability for non-military uses of AI, much less for military uses.
Companies, and the individuals who work inside them, should be extremely cautious about working with any military agency where the application involves potential harm to humans or could contribute to arms races or geopolitical instability. Those risks are substantial and difficult to predict, let alone mitigate.
If a company nevertheless is determined to use its AI expertise to aid some nation’s military, it must start by recognizing that there are no settled public standards for safety and ethics in this sector yet. It cannot simply assume that the contracting military agency has fully assessed the risks, or that the company itself has no responsibility to assess them independently.
At a minimum, any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking:
- Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
- Could a project review process incorporate sufficient expertise to address subtle and complex technical problems? And would those leading that process have sufficient independence and authority to ensure that it can check companies' and military agencies' decisions?
- Are the contracting agencies willing to commit to not using AI for autonomous offensive weapons? Or to ensuring that any defensive autonomous systems are carefully engineered to avoid risks of accidental harm or conflict escalation? Are present testing and formal verification methods adequate for that task?
- Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate? For example, while Alphabet’s AI-focused subsidiary DeepMind has committed to independent ethics review, we are not aware of similar commitments from Google itself. Given the employees’ letter, we are concerned that the internal transparency, review, and discussion of Project Maven inside Google was inadequate. Any project review process must be transparent, informed, and independent. Ensuring that is difficult, but without such independent oversight, a project runs a real risk of harm.
These are just starting points. Other specific questions will surely need answering, both for future proposals and for this one, since many details of the Project Maven collaboration are not public. Nevertheless, even with the limited information available, EFF is deeply worried that Google’s collaboration with the Department of Defense does not have these kinds of safeguards. It certainly does not have them in a public, transparent, or accountable way.
The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety. Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behavior from the military agencies that seek their expertise—and from themselves.
Update 2018-04-08: Citations have been added and improved.