Important Dates

Paper Submission Deadline: May 15th, 2020
Notification of Acceptance: June 5th, 2020
Camera-Ready Deadline: June 15th, 2020
Workshop: August 29th, 2020
Registration: through ECAI 2020

Scope

In recent years, Safety, Security and Fairness have emerged as central topics in Machine Learning (ML), mainly because ML has become an important and inseparable part of our daily lives. ML is everywhere, ranging from traffic prediction, recommendation systems, marketing analysis and medical diagnosis to autonomous driving, robot control and decision-making support for businesses or even governments. ML systems have produced a disruptive change in society, enabling the automation of many tasks by leveraging the huge amount of information available in the Big Data era. In some applications, ML systems have shown impressive capabilities, even outperforming humans. Despite these achievements, the penetration of ML into many real-world applications has brought new challenges related to the trustworthiness of these systems. The potential of these algorithms to cause undesirable behaviors is a growing concern in the ML community, especially when they are integrated into real-world systems. Deploying ML in the real world has real-world consequences: it has been shown that ML can delay medical diagnoses, cause environmental damage, harm humans, adopt racist, sexist and other discriminatory behaviors, or even provoke traffic accidents. Moreover, learning algorithms are vulnerable and can be compromised by smart attackers, who can gain a significant advantage by exploiting the weaknesses of ML systems.


How can we trust ML systems? Can we design learning algorithms that never fail catastrophically, even at training time? Can a robot learn and understand safe controllers so as to avoid putting at risk the integrity of people or the robot itself? Can we design algorithms that are robust against adversarial attacks? Can we make sure that prejudices, demographic inequalities and biases contained in the data are not reflected in ML-based systems? Ultimately, the main question this workshop tries to answer is: can we avoid undesirable behaviors and design ML algorithms that behave safely and fairly?


Researchers, industry and society recognize the need for approaches that ensure the safe, beneficial and fair use of ML technologies. This workshop aims to bring together papers outlining the safety and fairness implications (from a legal, ethical, psychological or technical point of view) of the use of ML in real-world systems; papers proposing methods to detect, prevent and/or alleviate undesired behaviors that ML-based systems might exhibit; papers analyzing the vulnerability of ML systems to adversarial attacks and possible defense mechanisms; and, more generally, any paper that stimulates discussion among researchers on topics related to safe and fair ML.

Call for Papers

This workshop aims to bring together researchers from diverse areas such as Safe Reinforcement Learning, Safe and Fair Machine Learning and Adversarial Machine Learning, as well as researchers analyzing the impact of ML systems in the real world from a legal, psychological and/or ethical point of view. The objective is to advance the field of safety and fairness in Machine Learning from as many perspectives as possible.


TOPICS

Contributions are sought in (but are not limited to) the following topics:
Bias in Machine Learning
Fairness and/or Safety in Machine Learning
Safe Reinforcement Learning
Safe Exploration for Optimization
Safe Robot Control
Adversarial Machine Learning and AI/ML robustness
Adversarial examples and evasion attacks
Data poisoning
Backdoors in Machine Learning
Reward Hacking
Ethical and legal consequences of using Machine Learning in real-world systems
Transparency in Machine Learning


SUBMISSION AND FORMAT

SafeML2020 welcomes original, unpublished papers. Papers must be written in English and should be between 12 and 15 pages in length. All submissions must follow Springer's guidelines for authors. Authors are encouraged to use the provided LaTeX or Word templates to prepare their papers. Submissions are accepted only via the EasyChair conference management system at https://easychair.org/conferences/?conf=safeml2020. Only the PDF of the manuscript is required for the initial submission. Each submission will undergo peer review by three reviewers.


PUBLICATION

Proceedings: The proceedings of SafeML 2020 will be published in Springer's Communications in Computer and Information Science (CCIS) book series. CCIS is indexed by various abstracting and indexing services, including Scopus, EI-Compendex and DBLP. We also plan a special issue on the topic of Safe Machine Learning in a ranked AI journal; authors of selected papers will be invited to submit extended versions of their papers to this special issue.

Event Schedule

We plan a full-day workshop, opening with a presentation by one of the organizers to set the stage. The rest of the day will consist of paper sessions with ample time for questions and breaks for discussion. The goal is to bring participants up to speed on the issues and solutions in these fields, outline key research problems, and encourage collaborations to address them.


The schedule will be announced before the workshop takes place.

Organization

Workshop Chairs

Javier García, Universidad Carlos III de Madrid, Madrid, Spain
Moisés Martínez, Universidad Internacional de La Rioja, Logroño, Spain
Nerea Luis, Sngular, Madrid, Spain
Luis Muñoz-González, Imperial College London, UK

Contact us: safeml2020@gmail.com


Programme Committee

Peter Stone, University of Texas at Austin, USA
Ibrahim Habli, University of York, UK
Stefanos Kollias, University of Lincoln, UK
Wray Buntine, Monash University, Australia
Marco Wiering, University of Groningen, The Netherlands
Fernando Fernández, Universidad Carlos III de Madrid, Spain
Philip S. Thomas, University of Massachusetts Amherst, USA
Albert Bifet, Télécom ParisTech, France
Kenneth T. Co, Imperial College London, UK
Adrià Garriga-Alonso, University of Cambridge, UK
Mathieu Sinn, IBM Research, Ireland
Alessandro Abate, University of Oxford, UK
Andrea Aler Tubella, Umeå University, Sweden
Eleni Vasilaki, The University of Sheffield, UK
Fabio Pierazzi, King's College London, UK
Giulio Zizzo, Imperial College London, UK
Chris Hankin, Imperial College London, UK
Rohin Shah, University of California - Berkeley, USA
Liwei Song, Princeton University, USA
Tomas Svoboda, Czech Technical University in Prague, Czech Republic
Adam Gleave, University of California - Berkeley, USA
Theo Araujo, University of Amsterdam, The Netherlands
Victoria Krakovna, DeepMind, UK
Ann Nowé, Vrije Universiteit Brussel, Belgium
Arjun Bhagoji, Princeton University, USA
Jean-Michel Loubes, Institut de Mathématiques de Toulouse, France
Paul Miller, Queen's University Belfast, UK
Roderick Bloem, Graz University of Technology, Austria
Alejandro Pazos Sierra, Universidade da Coruña, Spain