The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
By Peter Eckersley
In the coming decades, artificial intelligence (AI) and machine learning technologies are going to transform many aspects of our world. Much of this change will be positive; the potential for benefits in areas as diverse as health, transportation and urban planning, art, science, and cross-cultural understanding is enormous. We've already seen things go horribly wrong with simple machine learning systems, but increasingly sophisticated AI will usher in a world that is strange and different from the one we're used to, and there are serious risks if this technology is used for the wrong ends.
Today EFF is co-releasing a report with a number of academic and civil society organizations [1] on the risks from malicious uses of AI and the steps that should be taken to mitigate them in advance.
At EFF, one area of particular concern has been the potential interactions between computer insecurity and AI. At present, computers are inherently insecure, and this makes them a poor platform for deploying important, high-stakes machine learning systems. AI itself may also have implications for computer [in]security that we need to think through carefully in advance. The report looks closely at these questions, as well as the implications of AI for physical and political security. You can read the full document here.
1. Other institutions releasing the report include the Universities of Cambridge and Oxford, the Centre for the Study of Existential Risk, the Future of Humanity Institute, OpenAI, and the Center for a New American Security.