Track A – Trustworthy AI

Tue, July 20

8:30-9:00 CET – Opening (joint with Track B)
9:00-10:00 CET – Keynote 1 (joint with Track B)
Mihaela van der Schaar (University of Cambridge)
Mihaela van der Schaar is a Professor of Machine Learning, AI and Medicine at the University of Cambridge, a Fellow at The Alan Turing Institute, and a Chancellor’s Professor at UCLA. She has received numerous awards, including the Oon Prize on Preventative Medicine, a National Science Foundation CAREER Award, the IBM Exploratory Stream Analytics Innovation Award, and the Philips Make a Difference Award. Her work has led to 35 US patents and 45+ contributions to international standards, for which she received 3 ISO Awards. In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected a 2019 “Star in Computer Networking and Communications” by N²Women. Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine. She is founder and director of the Cambridge Centre for AI in Medicine.

Why medicine is creating exciting new frontiers for machine learning and AI

Medicine stands apart from other areas where machine learning and AI can be applied. While we have seen advances in other fields with lots of data, it is not the volume of data that makes medicine so hard; it is the challenges arising from extracting actionable information from the complexity of the data. It is these challenges that make medicine the most exciting area for anyone who is really interested in the frontiers of machine learning – giving us real-world problems where the solutions are societally important and potentially impact us all. Think COVID-19! In this talk I will show how machine learning is transforming medicine and how medicine is driving new advances in machine learning, including new methodologies in automated machine learning, interpretable and explainable machine learning, causal inference, and reinforcement and inverse reinforcement learning.
10:00-13:00 CET – Course 1
Michèle Sebag (CNRS - LISN)
Michèle Sebag is a senior researcher at CNRS and Paris-Saclay University. Her research interests include causal modeling, preference learning, surrogate optimization and machine learning applications to social sciences. She received the GECCO Test of Time Award in 2018. She was the President of the French Association for AI and a member of the French National Digital Council. She is an EurAI Fellow, a member of the French Academy of Technologies, and the head of the Steering Committee of ECML-PKDD.

Why and how to learn causal models

A main concern in Machine Learning is to build accurate predictive models, and an effective way to do so is to exploit correlations. Depending on how the models are used, however, correlation-based models might have varied effects, from ineffective to harmful. The tutorial will describe how to learn causal models and discuss the underlying scientific and algorithmic challenges. The course will be illustrated with examples taken from the social sciences, economics, and health.
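For readers who want a concrete preview, the minimal sketch below (illustrative only, not part of the course material) contrasts a purely correlation-based regression with a causally adjusted one on simulated data with a hidden confounder; it assumes only NumPy, and all coefficients are invented.

```python
# Toy illustration: a hidden confounder Z drives both X and Y, so the naive
# regression of Y on X overstates the causal effect of X, while adjusting
# for Z (back-door adjustment) recovers it.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # hidden confounder
x = 0.8 * z + rng.normal(size=n)              # X is partly caused by Z
y = 0.5 * x + 1.0 * z + rng.normal(size=n)    # true causal effect of X on Y is 0.5

# Correlation-based model: regress Y on X only (biased by the confounder).
naive = np.linalg.lstsq(np.c_[x, np.ones(n)], y, rcond=None)[0][0]

# Causal adjustment: regress Y on X and Z.
adjusted = np.linalg.lstsq(np.c_[x, z, np.ones(n)], y, rcond=None)[0][0]

print(f"naive coefficient    ~ {naive:.2f}  (biased upwards, close to 1.0)")
print(f"adjusted coefficient ~ {adjusted:.2f}  (close to the true effect 0.5)")
```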
14:30-17:00 CET – Course 2
Isabel Valera (Saarland University) and André Meyer-Vitali (TNO)
Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is also a scholar of the European Laboratory for Learning and Intelligent Systems (ELLIS). Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship and a “Minerva fast track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK). Her research focuses on developing machine learning methods that are flexible, robust, interpretable and fair to analyze real-world data.

Dr. André Meyer-Vitali is a senior scientist at TNO, The Netherlands, and obtained his PhD at the University of Zürich, Switzerland. His interests are in software engineering, patterns and architectures, distributed systems and AI, social simulation, multi-agent systems, knowledge representation and reasoning, and their combinations with learning and self-organisation. He guides research on trustworthy AI in the context of European collaborations, such as TAILOR.

Robust learning of generative models for mixed and multimodal data

17:30-19:00 CET – Participant posters and demos (joint with Track B)

Wed, July 21

9:00-10:00 CET – Keynote 2
Serge Abiteboul (Inria)
Serge Abiteboul is a senior researcher at Inria and ENS, and a member of the board of the French telecommunications regulator ARCEP. His research work focuses mainly on data, information and knowledge management, particularly on the Web. He has received the ACM SIGMOD Innovation Award, the EADS Award from the French Academy of Sciences, the Milner Award from the Royal Society, and an Advanced Grant from the European Research Council. He is a member of the French Academy of Sciences and a member of the Academy of Europe. He also writes novels and essays, and is founder and editor of the blog Binaire.

Responsible data analysis algorithms: a realistic goal?

As algorithms invade our lives, we are increasingly concerned with their effects on society and expect them to behave in a responsible manner. Through examples, we will consider the technical difficulties this raises and the lessons that have been learnt.
10:00-13:00 CET – Course 3
Patrick Gallinari (Sorbonne University and Criteo AI Lab)
Patrick Gallinari is a professor at Sorbonne University and distinguished researcher at Criteo AI Lab – Paris. His research focuses on statistical learning and deep learning, with applications in different fields such as semantic data processing or complex data analysis. A recent research topic involves physico-statistical systems combining the model-based approaches of physics and the data-processing approaches of statistical learning. He leads a team whose central theme is Statistical Learning and Deep Learning (https://mlia.lip6.fr). He was director of the Sorbonne computer science laboratory (LIP6) from 2005 to 2013.

Deep generative models & Deep learning and differential equations

We provide an introduction to two topics. The first targets the modeling of potentially high-dimensional probability distributions from a large number of samples using Deep Neural Networks (DNN). Once trained, these networks can generate new realistic patterns from the learned distributions. This has been a very active research field in recent years, and recent advances have even reached the public sphere, such as the so-called deep fakes. We briefly introduce the topic and review three popular approaches: Generative Adversarial Networks, Variational Autoencoders and Normalizing Flows. The second topic concerns the use of DNNs for modeling dynamical systems and helping to solve partial differential equations (PDE). PDEs are the main mathematical tool for simulating complex dynamical phenomena in domains as diverse as climate, graphics, or aeronautics. The increasing availability of data makes it possible to consider hybridizing numerical solvers, which encode prior physics knowledge, with machine learning, which extracts complementary knowledge from the data. This recent topic has rapidly gained popularity because of the huge stakes involved. We will provide a rapid tour of the problems and of the recent developments.
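As a hedged preview of the first topic, the sketch below shows the core of a variational autoencoder, one of the three generative approaches mentioned above: an encoder producing the parameters of q(z|x), the reparameterization trick, and the ELBO loss. It assumes PyTorch; the architecture, hyper-parameters and random batch are placeholders, not course material.

```python
# Minimal VAE sketch on flat vectors (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients w.r.t. mu, logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(x, x_logits, mu, logvar):
    # Reconstruction term + KL divergence between q(z|x) and the N(0, I) prior.
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# One hypothetical training step on a random batch, just to show the loop.
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                         # placeholder batch in [0, 1]
x_logits, mu, logvar = model(x)
loss = elbo_loss(x, x_logits, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```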
14:30-17:00 CET – Course 4
Freddy Lecue (Thales and Inria) and Christian Müller (DFKI)
Freddy Lecue is the Chief AI Scientist at CortAIx (Centre of Research & Technology in AI eXpertise) at Thales in Montreal – Canada. He is also a research associate at Inria, in WIMMICS, Sophia Antipolis – France. Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008. His research area is at the frontier of learning and reasoning systems. He has a strong interest in Explainable AI, i.e., AI systems, models and results that can be explained to human and business experts.

Principal Researcher and DFKI Research Fellow Christian Müller began working in the automotive domain in 2007, when he came into contact with Silicon Valley OEM think tanks during his postdoc in Berkeley. At DFKI, he built up the automotive Intelligent User Interfaces group, gaining international recognition for the first eye-gaze controls in the car and early contributions to gesture recognition systems. He is considered the father of OpenDS, the world’s most popular open-source driving simulator. In 2017, he became Head of the Competence Center for Autonomous Driving and initiated OpenGenesis together with TÜV Süd as the first open platform for AI validation in highly automated driving. Since 2019, he has led the research and engineering “Think Tank” iMotion Germany, with a fast-growing number of PhD students in the areas of robust and explainable AI for environment perception and trajectory planning of self-driving cars.

Explaining AI with narratives

17:30-19:00 CET – Open discussion with industry
Juliette Mattioli (Thales) and Frédéric Jurie (Safran)
Juliette Mattioli began her industrial career in 1990 with a PhD on pattern recognition by mathematical morphology and neural networks at Thomson-CSF. In the course of her career, through the various R&D laboratories she has directed, she has extended her spectrum of skills from image processing to information fusion, decision support, and combinatorial optimization. Her involvement in national bodies (#FranceIA mission, Ile-de-France’s AI 2021 plan, Systematic Paris-Region’s Data Science & AI hub) and international ones (G7 Innovators) also reflects her commitment to sharing her knowledge and to advancing industrial research. Since 2010, she has been attached to the technical department of Thales to help define the research and innovation strategy for the algorithmic field, with a particular focus on trusted AI but also on algorithmic engineering, in order to accelerate the industrial deployment of AI-based solutions.

Frédéric Jurie is a Scientific expert with Safran, specialized in Computer Vision and Deep Learning. Before joining Safran, he was a professor at the University of Caen Normandie and the director of its Computer Science, Imaging, Automation and Instrumentation lab. His research interests lie predominantly in the area of Computer Vision and Machine Learning, particularly with respect to object recognition, image classification and object detection. He leads one of the forty Research Chairs in AI awarded by the French National Research Agency.

Industry use cases involving trusted AI

Thu, July 22

9:00-10:00 CET – Keynote 3 (joint with Track B)
Joanna Bryson (Hertie School)
Joanna Bryson is recognised for broad expertise on intelligence and its impacts, advising governments, transnational agencies, and NGOs globally. She holds two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT). From 2002 to 2019 she was on the Computer Science faculty at Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, Mannheim Social Science Research, and the Princeton Center for Information Technology Policy. During her PhD she observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact”, in 1998. In 2010 she coauthored the first national-level AI ethics policy, the UK’s Principles of Robotics. She presently researches the impact of technology on human cooperation, and AI/ICT governance.

AI ethics

10:00-13:00 CET – Course 5
Catuscia Palamidessi (Inria) and Miguel Couceiro (University of Lorraine)
Catuscia Palamidessi is Director of Research at Inria Saclay. She has previously been a Full Professor at the University of Genova and at Penn State University. Palamidessi’s research interests include Machine Learning, Privacy, Fairness, Secure Information Flow, and Concurrency. In 2019 she obtained an Advanced Grant from the European Research Council for the project “Hypatia”, which focuses on identifying methods for local differential privacy offering an optimal trade-off with quality of service and statistical utility. She is on the editorial board of various journals, including IEEE Transactions on Dependable and Secure Computing, the Journal of Computer Security, Mathematical Structures in Computer Science, and Acta Informatica. She is a member of the advisory committee of the French National Information Systems Security Agency (ANSSI).

Miguel Couceiro is a Professor of Computer Science at the University of Lorraine in Nancy, and head of the ORPAILLEUR team. His research focuses on knowledge discovery and multicriteria decision making, recently with a particular emphasis on fair and explainable models. He has (co-)authored more than 180 papers and book chapters. He was an elected member (2018-2020) of the IEEE Computer Society Technical Committee on Multiple-Valued Logic, and has served as a PC member of several conferences. He is the local coordinator of the European Erasmus Mundus Masters program LCT (Language and Communication Technologies) and the head of the 2nd-year Masters program in NLP at the University of Lorraine.

Addressing algorithmic fairness through metrics and explanations

Algorithmic decisions are now being used on a daily basis in a wide range of human scenarios (e.g., criminal justice, stop-and-frisk, job applications, credit scoring, decision support in health care), and are based on Machine Learning (ML) processes that may be opaque and biased. This raises several concerns given the critical impact that biased decisions may have on minorities or on society as a whole. Not only may unfair outcomes affect human rights, they also undermine public trust in ML and AI. This tutorial aims to provide the audience with an understanding of a wide range of notions for addressing fairness issues of ML models based on decision outcomes. It will also survey several state-of-the-art approaches, e.g., based on fairness metrics and explanations, for tackling different algorithmic biases. These frameworks will be illustrated with different ML models on various use-case scenarios such as bank loans, home mortgages, and school admissions.
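As a concrete, illustrative complement (not taken from the tutorial), the sketch below computes two common group-fairness metrics, the demographic parity gap and the equal opportunity gap, on synthetic predictions; it assumes NumPy, and the data and the biased classifier are entirely hypothetical.

```python
# Two simple group-fairness metrics from predictions, labels and a
# binary sensitive attribute (all synthetic here).
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_pred, y_true, group):
    # Difference in true-positive rates between the two groups.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=10_000)        # hypothetical sensitive attribute
y_true = rng.integers(0, 2, size=10_000)       # hypothetical ground truth
# A deliberately biased classifier that favours group 1:
y_pred = ((y_true + 0.3 * group + rng.normal(0, 0.5, 10_000)) > 0.8).astype(int)

print("demographic parity gap:", demographic_parity_diff(y_pred, group))
print("equal opportunity gap :", equal_opportunity_diff(y_pred, y_true, group))
```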
14:30-17:00 CET – Course 6
Jessica Schroers (KU Leuven) and Benjamin Nguyen (INSA Centre Val de Loire)
Jessica Schroers joined the KU Leuven Centre for IT & IP Law (CiTiP) as a legal researcher in 2013. Alongside her work on research projects, she started her doctoral research in 2018. She has worked on various national and European research projects (FP7 and H2020), and is currently involved in the H2020 project KRAKEN. She holds two master’s degrees in law from Tilburg University. Her research focuses on data protection law and the legal issues related to electronic signatures and identity management.

Privacy and data protection

17:30-19:00 CET – Participant posters and demos (joint with Track B)

Fri, July 23

9:00-10:00 CET – Keynote 4
Simon Burton (Fraunhofer IKS)
Simon Burton has spent the last two decades in the automotive sector, working in research and development projects as well as leading consulting, engineering service and product organisations. Most recently, he held the role of Director of Vehicle Systems Safety at Robert Bosch GmbH where, amongst other things, his efforts were focused on developing strategies for ensuring the safety of automated driving systems. In September 2020, he joined the leadership of Fraunhofer IKS in the role of research division director where he steers research strategy into “safe intelligence”. His own personal research interests include the safety assurance of complex, autonomous systems, and the safety of machine learning. In addition to his role within Fraunhofer IKS, he has the role of honorary visiting professor at the University of York where he supports a number of research activities and interdisciplinary collaborations.

Safety, complexity, AI and automated driving – holistic perspectives on safety assurance

Assuring the safety of autonomous driving is a complex endeavour. It is not only a technically difficult and resource-intensive task; autonomous vehicles and their wider sociotechnical context also demonstrate characteristics of complex systems in the stricter sense of the term. That is, they exhibit emergent behaviour, coupled feedback, non-linearity and semi-permeable system boundaries. These drivers of complexity are further exacerbated by the introduction of AI and machine learning techniques. All these factors severely limit our ability to apply traditional safety measures at both design and operation time. In this presentation, I present how considering AI-based autonomous vehicles as a complex system could lead us towards better arguments for their overall safety. In doing so, I address the issue from two different perspectives. Firstly, I consider the topic of safety within the wider system context, including technical, management, and regulatory considerations. I then discuss how these viewpoints lead to specific requirements on AI components within the system. Residual inadequacies of machine learning techniques are an inevitable side effect of the technology. I explain how an understanding of the root causes of such insufficiencies, as well as of the effectiveness of various measures during design and operation, is key to the construction of a convincing safety assurance argument for the system. I will finish the talk with a summary of our current research in this area as well as some directions for future work.
10:00-13:00 CET – Course 7
Guillaume Charpiat (Inria), Zakaria Chihani (CEA), and Julien Girard-Satabin (CEA)
Guillaume Charpiat is a researcher at Inria / Paris-Saclay University. He has worked in computer vision, optimization and machine learning, and now focuses on deep learning, both on theoretical aspects (automatic architecture design, guarantees, …) and practical ones thanks to collaborations with experts in various application fields (satellite imagery, population genetics, molecular conformations…). Previously, he did a post-doc at the MPI for Biological Cybernetics, after a PhD thesis with Olivier Faugeras and Renaud Keriven and having graduated from École Normale Supérieure.

Zakaria Chihani is a researcher in the Software Safety and Security Laboratory at the Commissariat à l’énergie atomique et aux énergies alternatives (CEA) in Saclay, France, where he currently heads the effort on AI safety, particularly using formal methods, seeking to harness the potential of AI without falling victim to its dangers, especially in the field of critical systems. He previously worked on formal methods after receiving a PhD in formal proof certification at École Polytechnique.

Julien Girard-Satabin is a PhD student at CEA LIST / Inria. His research focuses on making AI software safer by bridging the gap between the machine learning and formal methods communities. He received an MSc from Université Paris-Sud and an MEng from ENSTA Paris.

Formal verification of deep neural networks: theory and practice

Deep neural networks are becoming ubiquitous in the software industry, especially on so-called perceptual tasks (analysing high-dimensional data such as images, text and sound). On the downside, they exhibit vulnerabilities endangering the privacy of the data, the robustness of the program against small perturbations or targeted attacks, and their ability to predict as specified. In this tutorial we propose to explore how we can prove a neural network correct according to a specification, that is, prove that it will never make a mistake. We will rely on formal methods, a field that has delivered tools and techniques for software safety over the last 40 years. Formal methods aim to provide mathematically sound answers to verification queries. After an introduction to AI certification and to formal methods, we will show how they can be used to soundly check that a deep neural network is robust and reliable in its predictions. This will be demonstrated with a practical session.
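As a hedged illustration of the flavour of such verification (not the tutorial’s actual tooling), the sketch below uses interval bound propagation, one simple formal technique, to bound the outputs of a tiny ReLU network over an L-infinity ball around an input and check a robustness property; the network weights and the perturbation budget are hypothetical, and only NumPy is assumed.

```python
# Interval bound propagation (IBP) through a tiny ReLU network.
import numpy as np

def linear_bounds(W, b, lo, hi):
    # Exact interval bounds of W @ x + b when x lies in [lo, hi] element-wise.
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny hypothetical 2-layer network with 3 inputs and 2 output classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.2, -0.1, 0.5])
eps = 0.05                                    # perturbation budget
lo, hi = x - eps, x + eps

lo, hi = linear_bounds(W1, b1, lo, hi)
lo, hi = relu_bounds(lo, hi)
lo, hi = linear_bounds(W2, b2, lo, hi)

# The network is provably robust (for this input and eps) if the lower bound
# of the predicted class exceeds the upper bound of every other class.
pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
others = [j for j in range(2) if j != pred]
print("certified robust:", all(lo[pred] > hi[j] for j in others))
```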
14:30-17:00 CET – Course 8
Hatem Hajri (IRT SystemX)
Hatem Hajri earned his MS and PhD degrees in applied mathematics at Paris Sud University, France, in 2008 and 2011, respectively. Between 2011 and 2015, he held teaching and research positions at University Paris 10, Luxembourg University, and the University of Bordeaux, working on probability theory and information geometry. In 2015, he moved to industry and spent three years at Institut Vedecom working in the field of autonomous driving. In 2018, he joined IRT SystemX and has since been working on adversarial examples, trustworthiness and robustness of machine learning in industrial applications. Currently, he is project technical leader in the confiance.ai program led by IRT SystemX.

Adversarial examples and robustness of neural networks

Deep learning classifiers are used in a wide variety of situations, such as vision, speech recognition, financial fraud detection, malware detection, autonomous driving, defense, and more. The ubiquity of deep learning algorithms in many applications, especially critical ones such as autonomous driving or those pertaining to security and privacy, makes attacking them particularly useful. Indeed, attacks make it possible, first, to identify possible flaws in the model and, second, to set up a defense strategy to improve its reliability. In this lecture, we first present a unified introduction to adversarial examples in machine learning. We outline the development of this field from the earliest works up to recent papers. For instance, we discuss the Fast Gradient Sign Method, Basic Iterative Method, Projected Gradient Descent, JSMA, DeepFool, Universal Adversarial Perturbations and Carlini-Wagner attacks, among others. In a second part, we show how to exploit adversarial examples to improve the robustness of neural networks through adversarial training. Finally, as a second approach to improving the robustness of these models, we discuss more recent and emergent techniques such as randomised smoothing and the game-of-noise approach. Code illustrations of some methods will be provided during the lecture.
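As an illustrative preview (not the lecture’s code), the sketch below implements the Fast Gradient Sign Method mentioned above for an arbitrary PyTorch classifier; the stand-in model, batch and epsilon are hypothetical.

```python
# Fast Gradient Sign Method (FGSM): one gradient-sign step that increases the loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Return an adversarial version of x within an L-infinity ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss the most, then keep pixels valid.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a toy stand-in classifier and a random batch:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)                  # hypothetical image batch in [0, 1]
y = torch.randint(0, 10, (8,))                # hypothetical labels
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())                # perturbation is at most eps
```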
17:30-19:00 CET – Collaborative wrap-up (joint with Track B)
