
Workshop on Trustworthy Artificial Intelligence

In conjunction with ECML/PKDD 22
Grenoble, France
September 19, 2022 (9:00-13:00)


Program

9:00-9:10 Introduction to the workshop

9:10-9:50 First Invited Talk: "Detecting and Explaining Privacy Risks on Temporal Data", Marie-Christine Rousset

9:50-10:30 Session on XAI and Non-discrimination & Fairness (8 short presentations)

10:30-11:00 Coffee break

11:00-11:40 Second Invited Talk: "Trustworthy AI – From Reliable Algorithms to Certified Systems", Stefan Wrobel

11:40-12:10 Session on Safety & Robustness and Trustworthy Artificial Intelligence (6 short presentations)

12:10-13:00 Poster session


List of accepted papers


XAI

XAI and geographic information: application to paleoenvironmental reconstructions
Matthieu Boussard, Bastien Zimmermann, Sophie Grégoire, Nicolas Boulbes

Fooling Perturbation-Based Explainability Methods,
Rahel Wilking, Matthias Jakobs, Katharina Morik

Explaining autoencoders with local impact scores, 
Clément Picard, Hoel Le Capitaine

Explaining object detectors: the case of transformer architectures,
Stéphane Herbin, Baptiste Abeloos

PDD-SHAP: Fast Approximations for Shapley Values using Functional Decomposition,
Arne Gevaert, Yvan Saeys

Robustness of Explanation Methods for NLP Models,
Shriya Atmakuri, Tejas Chheda, Dinesh Kandula, Nishant Yadav, Taesung Lee, Hessel Tuinhof

 

Non-discrimination & Fairness

Addressing Underestimation Bias in Machine Learning Through Multi-Objective Optimization
William Blanzeisky, Padraig Cunningham

Mitigating Gender Bias of Pre-Trained Face Recognition Models with an Ethical Module, 
Jean-Rémy Conti, Nathan Noiry, Vincent Despiegel, Stéphane Gentric, Stephan Clémençon

 

Safety & Robustness

Concept-level Debugging of Part-Prototype Networks, 
Andrea Bontempelli, Stefano Teso, Fausto Giunchiglia, Andrea Passerini

Test-Time Adaptation with Principal Component Analysis, 
Thomas Cordier, Victor Bouvier, Gilles Hénaff, Céline Hudelot

Towards an Evaluation of Lipschitz Constant Estimation Algorithms by building Models with a Known Lipschitz Constant, 
William Piat, Jalal Fadili, Frédéric Jurie, Sébastien Da Veiga

Narrowing the Gap: Towards Analyzable and Realistic Simulators for Safety Analysis of Neural Network Control Systems,
Vivian Lin, James Weimer, Insup Lee

 

Trustworthy Artificial Intelligence

Crystal Ball: Prediction-based Real-time Evasion Against Deep Reinforcement Learning, 
Ziyan Wang, Hsiao-Ying Lin, Chengfang Fang

Transparency and Reliability Assurance Methods for Safeguarding Deep Neural Networks - A Survey,
Elena Haedecke, Maximilian Alexander Pintz

 


Invited talks


Detecting and Explaining Privacy Risks on Temporal Data, Marie-Christine Rousset 

Personal data are increasingly disseminated over the Web through mobile devices and smart environments, and are exploited to develop ever more sophisticated services and applications. All these advances come with serious risks of privacy breaches that may reveal private information that data producers want to keep undisclosed. It is therefore of utmost importance to help them identify the privacy risks raised by the requests that service providers make for utility purposes. In this talk, I will focus on the temporal aspect of privacy protection, since many applications handle dynamic data (e.g., electrical consumption, time series, mobility data) for which temporal data are considered sensitive while aggregates over time are important for data analytics. I will present a formal approach for detecting incompatibility between privacy and utility queries expressed as temporal aggregate conjunctive queries. The distinguishing point of our approach is that it is data-independent and comes with an explanation based on the query expressions only. This explanation is intended to help data producers understand the detected privacy breaches and guide their choice of the appropriate technique to correct them.
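
To give a flavour of this kind of query-level analysis, here is a minimal, hypothetical sketch (it is not the formal approach presented in the talk; the names AggregateQuery and check_compatibility are illustrative). It flags a utility query as incompatible with a privacy query when both aggregate the same attribute but the utility query uses a strictly finer time window, and it builds the explanation from the query expressions alone, without looking at any data.

from dataclasses import dataclass

@dataclass(frozen=True)
class AggregateQuery:
    """A highly simplified temporal aggregate query: one attribute
    aggregated over fixed-length, non-overlapping time windows."""
    attribute: str        # e.g. "electrical_consumption"
    window_length: int    # number of time steps aggregated together
    aggregate: str = "SUM"

def check_compatibility(privacy_q: AggregateQuery, utility_q: AggregateQuery):
    """Data-independent check of one (privacy, utility) query pair.

    The privacy query states that aggregates of its attribute at a
    granularity finer than privacy_q.window_length must stay undisclosed.
    Returns (is_compatible, explanation), both derived from the query
    expressions only.
    """
    if privacy_q.attribute != utility_q.attribute:
        return True, "The queries aggregate different attributes."
    if utility_q.window_length >= privacy_q.window_length:
        return True, (f"{utility_q.aggregate} over windows of length "
                      f"{utility_q.window_length} is at least as coarse as the "
                      f"protected granularity {privacy_q.window_length}.")
    return False, (f"Privacy breach: {utility_q.aggregate}({utility_q.attribute}) "
                   f"over windows of length {utility_q.window_length} is finer than "
                   f"the protected granularity {privacy_q.window_length}.")

if __name__ == "__main__":
    privacy = AggregateQuery("electrical_consumption", window_length=24)  # protect anything finer than daily
    utility = AggregateQuery("electrical_consumption", window_length=1)   # hourly sums requested by a provider
    ok, why = check_compatibility(privacy, utility)
    print(ok, "-", why)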

 

Trustworthy AI – From Reliable Algorithms to Certified Systems, Stefan Wrobel

The trustworthiness of AI systems is a basic prerequisite for their practical usability and the subject of manifold research activities. At the latest with the publication of the recommendations of the EU High-Level Expert Group on AI, a consensus emerged about what should constitute AI trustworthiness, and it has since been further sharpened by activities in standardization and regulation. At the same time, for many critical contexts of use, such as the safety-critical domain, it is still unclear how the requirements for AI systems can be met and systematically demonstrated. One reason is that the "requirements world" is dominated by concepts such as "static" or "explainable", which the space of potential AI solutions tends to counter with diametrically opposed concepts such as "dynamic" and "evolving".

This talk draws a map of current research activities in the area of trustworthy AI and gives a brief overview of the requirements arising from regulation and standardization. The second part of the talk discusses how, via the concept of assurance cases, attempts are being made to demonstrably meet the high requirements for deploying AI systems through a combination of methods such as uncertainty assessment of neural networks and systematic testing. An interesting question here is how this concept, which originally comes from the field of functional safety, can be transferred to the demonstration of other trustworthiness dimensions such as fairness. The talk ends with an outlook on still open research questions.
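
As a rough illustration of the assurance-case idea mentioned above, the following toy sketch (purely illustrative; the names Claim and unsupported_claims are assumptions, not from the talk) models a case as a tree of claims, where each leaf claim should be backed by evidence such as an uncertainty evaluation or a test report, and reports which claims still lack support.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """One node of a much-simplified assurance case: a claim that is either
    supported directly by evidence or decomposed into sub-claims."""
    text: str
    evidence: List[str] = field(default_factory=list)    # e.g. test reports, uncertainty evaluations
    subclaims: List["Claim"] = field(default_factory=list)

def unsupported_claims(claim: Claim) -> List[str]:
    """Return the leaf claims that are not backed by any evidence item."""
    if not claim.subclaims:
        return [] if claim.evidence else [claim.text]
    missing: List[str] = []
    for sub in claim.subclaims:
        missing.extend(unsupported_claims(sub))
    return missing

if __name__ == "__main__":
    case = Claim(
        "The perception model is acceptably reliable in its operational domain",
        subclaims=[
            Claim("Predictive uncertainty is calibrated on held-out data",
                  evidence=["uncertainty_evaluation_report_v1"]),
            Claim("Systematic tests cover the specified operational scenarios"),  # no evidence yet
        ],
    )
    print("Claims still lacking evidence:", unsupported_claims(case))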
