Track A – Trustworthy AI
Tue, July 20
|8:30-9:00 CET – Opening (joint with Track B)|
|9:00-10:00 CET – Keynote 1 (joint with Track B)|
Mihaela van der Schaar (University of Cambridge)
Mihaela van der Schaar is a Professor of Machine Learning, AI and Medicine at the University of Cambridge, a Fellow at The Alan Turing Institute, and a Chancellor’s Professor at UCLA. She has received numerous awards, including the Oon Prize on Preventative Medicine, a National Science Foundation CAREER Award, the IBM Exploratory Stream Analytics Innovation Award, and the Philips Make a Difference Award. Her work has led to 35 USA patents and 45+ contributions to international standards, for which she received 3 ISO Awards. In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected as a 2019 “Star in Computer Networking and Communications” by N²Women. Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine. She is founder and director of the Cambridge Centre for AI in Medicine.
Why medicine is creating exciting new frontiers for machine learning and AI
|10:00-13:00 CET – Course 1|
Michèle Sebag (CNRS - LISN)
Michèle Sebag is a senior researcher at CNRS and Paris-Saclay University. Her research interests include causal modeling, preference learning, surrogate optimization and machine learning applications to social sciences. She received the GECCO Test of Time Award in 2018. She was the President of the French Association for AI, and a member of the French National Digital Council. She is an EurAI Fellow, a member of the French Academy of Technologies, and the Head of the Steering Committee for ECML-PKDD.
Why and how to learn causal models
|14:30-17:00 CET – Course 2|
Isabel Valera (Saarland University) and André Meyer-Vitali (TNO)
Isabel Valera is a Full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is also a scholar of the European Laboratory for Learning and Intelligent Systems (ELLIS). Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship, and a “Minerva fast track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK). Her research focuses on developing machine learning methods that are flexible, robust, interpretable and fair for analyzing real-world data.
Dr. André Meyer-Vitali is a senior scientist at TNO, The Netherlands, and obtained his PhD at the University of Zürich, Switzerland. His interests are in software engineering, patterns and architectures, distributed systems/AI, social simulation, multi-agent systems, knowledge representation and reasoning, and their combinations with learning and self-organisation. He guides research on trustworthy AI in the context of European collaborations, such as TAILOR.
Trustworthy AI: The next wave of learning and hybrid AI
|17:30-19:00 CET – Participant posters and demos (joint with Track B)|
Wed, July 21
|9:00-10:00 CET – Keynote 2|
Serge Abiteboul (Inria)
Serge Abiteboul is a senior researcher at Inria and ENS, and a member of the board of the French telecommunications regulator ARCEP. His research work focuses mainly on data, information and knowledge management, particularly on the Web. He has received the ACM SIGMOD Innovation Award, the EADS Award from the French Academy of Sciences, the Milner Award from the Royal Society, and an Advanced Grant from the European Research Council. He is a member of the French Academy of Sciences, and a member of the Academy of Europe. He also writes novels and essays, and is editor and founder of the Blog Binaire.
Responsible data analysis algorithms: a realistic goal?
|10:00-13:00 CET – Course 3|
Patrick Gallinari (Sorbonne University and Criteo AI Lab)
Patrick Gallinari is a professor at Sorbonne University and distinguished researcher at Criteo AI Lab – Paris. His research focuses on statistical learning and deep learning, with applications in fields such as semantic data processing and complex data analysis. A recent research topic involves physico-statistical systems combining the model-based approaches of physics and the data-processing approaches of statistical learning. He leads a team whose central theme is Statistical Learning and Deep Learning (https://mlia.lip6.fr). He was director of the Sorbonne computer science lab (LIP6) from 2005 to 2013.
Deep generative models & Deep learning and differential equations
|14:30-17:00 CET – Course 4|
Freddy Lecue (Thales and Inria) and Christian Müller (DFKI)
Freddy Lecue is the Chief AI Scientist at CortAIx (Centre of Research & Technology in AI eXpertise) at Thales in Montreal – Canada. He is also a research associate at Inria, in WIMMICS, Sophia Antipolis – France. Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008. His research area is at the frontier of intelligent/learning and reasoning systems. He has a strong interest in Explainable AI, i.e., AI systems, models and results that can be explained to human and business experts.
Principal Researcher and DFKI Research Fellow Christian Müller began working in automotive in 2007, when he came into contact with Silicon Valley OEM think tanks during his postdoc in Berkeley. At DFKI, he built up the automotive Intelligent User Interfaces group, gaining international recognition for the first eye-gaze controls in the car and early contributions to gesture recognition systems. He is considered the father of OpenDS, the world’s most popular open-source driving simulator. In 2017, he became Head of the Competence Center for Autonomous Driving and, together with TÜV Süd, initiated OpenGenesis as the first open platform for AI validation in highly automated driving. Since 2019, he has led the research and engineering “Think Tank” iMotion Germany, with a fast-growing number of PhD students working on robust and explainable AI for environment perception and trajectory planning of self-driving cars.
Explainable AI: a focus on narrative, machine learning and knowledge graph-based approaches
|17:30-19:00 CET – Open discussion with industry|
Juliette Mattioli (Thales) and Frédéric Jurie (Safran)
Juliette Mattioli began her industrial career in 1990 with a PhD on pattern recognition by mathematical morphology and neural networks at Thomson-CSF. In the course of her career, through the various R&D laboratories she has directed, she has extended her spectrum of skills from image processing to information fusion, decision support, and combinatorial optimization. Her presence in national (#FranceIA mission, Ile-de-France’s AI 2021 plan, Systematic Paris-Region’s Data Science & AI hub) or international bodies (G7 Innovators) also shows her intention to share her knowledge and participate in the emancipation of business research. Since 2010, she has been attached to the technical department of Thales to help define the research and innovation strategy for the algorithmic field with a particular focus on trusted AI but also on algorithmic engineering in order to accelerate the industrial deployment of AI-based solutions.
Frédéric Jurie is a Scientific expert with Safran, specialized in Computer Vision and Deep Learning. Before joining Safran, he was a professor at the University of Caen Normandie and the director of its Computer Science, Imaging, Automation and Instrumentation lab. His research interests lie predominantly in the area of Computer Vision and Machine Learning, particularly with respect to object recognition, image classification and object detection. He leads one of the forty Research Chairs in AI awarded by the French National Research Agency.
Industry use cases involving trusted AI
Thu, July 22
|9:00-10:00 CET – Keynote 3 (joint with Track B)|
Joanna Bryson (Hertie School)
Joanna Bryson is recognised for broad expertise on intelligence and its impacts, advising governments, transnational agencies, and NGOs globally. She holds two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT). From 2002 to 2019 she was Computer Science faculty at Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, Mannheim Social Science Research, and the Princeton Center for Information Technology Policy. During her PhD she observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact”, in 1998. In 2010 she coauthored the first national-level AI ethics policy, the UK’s Principles of Robotics. She presently researches the impact of technology on human cooperation, and AI/ICT governance.
|10:00-13:00 CET – Course 5|
Catuscia Palamidessi (Inria) and Miguel Couceiro (University of Lorraine)
Catuscia Palamidessi is Director of Research at Inria Saclay. She has been Full Professor at the University of Genova and Penn State University. Palamidessi’s research interests include Machine Learning, Privacy, Fairness, Secure Information Flow, and Concurrency. In 2019 she obtained an Advanced Grant from the European Research Council for the project “Hypatia”, which focuses on identifying methods for local differential privacy offering an optimal trade-off with quality of service and statistical utility. She is on the editorial boards of various journals, including the IEEE Transactions on Dependable and Secure Computing, the Journal of Computer Security, Mathematical Structures in Computer Science, and Acta Informatica. She is a member of the advisory committee of the French National Information Systems Security Agency (ANSSI).
Miguel Couceiro is a Professor of Computer Science at the University of Lorraine in Nancy, and head of the ORPAILLEUR team. His research focuses on knowledge discovery and multicriteria decision making, recently with a particular emphasis on fair and explainable models. He has (co-)authored more than 180 papers and book chapters. He was an elected member (2018-2020) of the IEEE Computer Society Technical Committee on Multiple-Valued Logic, and a PC member of several conferences. He is the local coordinator of the European Erasmus Mundus Masters program LCT (Language and Communication Technologies) and the head of the 2nd-year Masters program in NLP at the University of Lorraine.
Addressing algorithmic fairness through metrics and explanations
|14:30-17:00 CET – Course 6|
Jessica Schroers (KU Leuven) and Benjamin Nguyen (INSA Centre Val de Loire)
Jessica Schroers joined the KU Leuven Centre for IT & IP Law (CiTiP) as a legal researcher in 2013. Alongside her work on research projects, she started her doctoral research in 2018. She has worked on various national and European research projects (FP7 and H2020), and is currently involved in the H2020 project KRAKEN. She holds two master’s degrees in law from Tilburg University. Her research focuses on data protection law and the legal issues related to electronic signatures and identity management.
Benjamin Nguyen is a Full Professor at INSA Centre Val de Loire and the Head of the Fundamental Computer Science Laboratory of Orléans (LIFO). His current research focuses on Privacy & Security in Information Management Systems and Applications. More specifically, he is interested in anonymization techniques; models to represent, quantify and enforce limited data collection; methods to enforce existing privacy models using secure hardware devices or cryptographic techniques (e.g., blockchain); and the design and implementation of large-scale privacy-by-design personal information management applications (in general interdisciplinary research).
Privacy and data protection
|17:30-19:00 CET – Participant posters and demos (joint with Track B)|
Fri, July 23
|9:00-10:00 CET – Keynote 4|
Simon Burton (Fraunhofer IKS)
Simon Burton has spent the last two decades in the automotive sector, working in research and development projects as well as leading consulting, engineering service and product organisations. Most recently, he held the role of Director of Vehicle Systems Safety at Robert Bosch GmbH where, amongst other things, his efforts were focused on developing strategies for ensuring the safety of automated driving systems. In September 2020, he joined the leadership of Fraunhofer IKS in the role of research division director where he steers research strategy into “safe intelligence”. His own personal research interests include the safety assurance of complex, autonomous systems, and the safety of machine learning. In addition to his role within Fraunhofer IKS, he has the role of honorary visiting professor at the University of York where he supports a number of research activities and interdisciplinary collaborations.
Safety, complexity, AI and automated driving - holistic perspectives on safety assurance
|10:00-13:00 CET – Course 7|
Guillaume Charpiat (Inria), Zakaria Chihani (CEA), and Julien Girard-Satabin (CEA)
Guillaume Charpiat is a researcher at Inria / Paris-Saclay University. He has worked in computer vision, optimization and machine learning, and now focuses on deep learning, both on theoretical aspects (automatic architecture design, guarantees, …) and practical ones thanks to collaborations with experts in various application fields (satellite imagery, population genetics, molecular conformations…). Previously, he did a post-doc at the MPI for Biological Cybernetics, after a PhD thesis with Olivier Faugeras and Renaud Keriven and having graduated from École Normale Supérieure.
Zakaria Chihani is a researcher in the Software Safety and Security Laboratory at the Commissariat à l’énergie atomique et aux énergies alternatives (CEA) in Saclay, France, where he currently heads the effort on AI safety, particularly using formal methods, seeking to harvest the potential of AI without falling victim to its dangers, especially in the field of critical systems. He previously worked on formal methods, after receiving a PhD in formal proof certification from École Polytechnique.
Julien Girard-Satabin is a PhD student at CEA LIST / Inria. His research focus is on making AI software safer, bridging the gap between machine learning and formal methods communities. He received a MSc from Université Paris-Sud and a MEng from ENSTA Paris.
Formal verification of deep neural networks: theory and practice
|14:30-17:00 CET – Course 8|
Hatem Hajri (IRT SystemX)
Hatem Hajri earned his MSc and PhD degrees in applied mathematics at Paris-Sud University, France, in 2008 and 2011, respectively. Between 2011 and 2015, he held teaching and research positions at University Paris 10, Luxembourg University, and the University of Bordeaux, working on probability theory and information geometry. In 2015, he moved to industry and spent three years at Institut Vedecom in the field of autonomous driving. In 2018, he joined IRT SystemX, where he has since been working on adversarial examples, trustworthiness and robustness of machine learning in industrial applications. He is currently a project technical leader in the confiance.ai program led by IRT SystemX.
Adversarial examples and robustness of neural networks
|17:30-19:00 CET – Collaborative wrap-up (joint with Track B)|