About Peter Eckersley
Peter Eckersley does research, policy, and leadership work on AI ethics, safety, cybersecurity, privacy, and other topics. Currently, he is a co-founder and Chief Scientist at the AI Objectives Institute, a new non-profit organization working on artificial intelligence and transformations of capitalism. Previously, Peter spent many years as Chief Computer Scientist at the Electronic Frontier Foundation, served as the first Director of Research at the Partnership on AI, and was a Visiting Senior Fellow at OpenAI.
Peter's AI policy work has mostly focused on setting sound policies around high-stakes machine learning applications such as recidivism prediction, self-driving vehicles, cybersecurity, and military uses of AI. He also has an interest in measuring progress in the field as a whole. His technical projects have included SafeLife, a benchmark environment for reinforcement learning safety; studying the need for and role of uncertainty in the ethical objectives of powerful optimising systems; and evaluating calibration and overconfidence in large language models.
Peter has also co-founded or co-created many impactful privacy and cybersecurity projects, including Let's Encrypt, Certbot, Privacy Badger, HTTPS Everywhere, and Panopticlick. During the COVID-19 pandemic he convened the stop-covid.tech group, advising many teams working on privacy-preserving digital contact tracing and exposure notification, and assisting with several strategic plans for COVID mitigation.
Peter holds a PhD in Computer Science and Law from the University of Melbourne.