Labels:
knowledge-graphs, graph networks, knowledge modelling, reasoning, bias, human-AI interaction, decision-support systems, contrastive-explanations, explainable-artificial-intelligence

Description

The FATE flagship develops AI capabilities for a digital assistant that acquires and extends its expertise through continuous learning from multiple, potentially confidential and biased (subject) data sources and from human experts who add to and reflect on the AI outcomes. The system provides decision support for multiple user roles, such as a researcher, consultant, and subject.

Problem Context

The idea behind the FATE flagship is to implement responsible human-machine teaming across a variety of use cases. Such responsible human-machine teaming is characterised by four core values, which align with the key research topics of FATE: Fair AI, Explainable AI, Co-learning, and Secure learning. Fair AI makes it possible to detect, mitigate, and evaluate bias, both in the data an AI system uses and in the system itself. Explainable AI is capable of explaining the reasons behind its advice to the various user roles. Co-learning implies that an AI can learn from users at both the system and the individual level; at the individual level the task is to learn from the user (and adapt) and to verify that the user has learned from the given advice. Secure learning means an AI can handle (distributed) sensitive data by identifying and assessing potential information leaks and proposing secure-by-design alternatives.
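
To make the Fair AI value more concrete, the sketch below shows one way a bias check on decision-support outcomes could look. It is only an illustration, not FATE code: the demographic-parity metric, the group labels, and the 0.2 tolerance are assumptions chosen for the example.

    from collections import defaultdict

    def demographic_parity_gap(decisions, groups):
        """Largest difference in positive-decision rate between any two groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += int(decision)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Toy data: advice given (1) or withheld (0) per subject, plus group membership.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(f"positive-decision rates per group: {rates}, parity gap: {gap:.2f}")
    if gap > 0.2:  # hypothetical tolerance for this example
        print("Potential bias: decision rates differ substantially between groups.")

In practice, a check like this would only flag a potential issue; evaluating and mitigating bias, as described above, requires further analysis of the data sources and the system as a whole.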

Solution

By creating a flagship architecture that offers reusable functional components, FATE aims to make the architecture easy to adopt for linked use cases. The overarching aim of the FATE flagship is to develop an expert assistant in which both the system and the user learn from each other through iterative interaction. The resulting classifications, predictions, and advice comply with the applicable fairness principles and are communicated in an understandable and trustworthy way to the direct stakeholders.
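
As an illustration of what a reusable functional component could look like, the sketch below defines a minimal interface that a use-case-specific decision-support component might implement. The names (Advice, DecisionSupportComponent) and their structure are assumptions made for illustration and are not taken from the actual FATE architecture.

    from dataclasses import dataclass, field
    from typing import Protocol

    @dataclass
    class Advice:
        prediction: str        # e.g. "refer to consultant"
        explanation: str       # rationale phrased for the target user role
        fairness_report: dict = field(default_factory=dict)  # e.g. per-group decision rates

    class DecisionSupportComponent(Protocol):
        """Hypothetical contract a use-case component would implement, so the
        same assistant shell can be reused across linked use cases."""

        def advise(self, case: dict, user_role: str) -> Advice:
            """Return role-specific advice with an explanation for one case."""
            ...

        def learn_from_feedback(self, case: dict, feedback: str) -> None:
            """Update the component based on expert feedback (co-learning)."""
            ...

Agreeing on an interface of this kind would let the surrounding assistant (interaction, explanation delivery, fairness monitoring) stay the same, while only the domain-specific component changes per use case.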

Results

FATE has been running for three years. In year one we adopted a healthcare use case (decision support for diabetes), in year two a juridical use case (AI4Justice), and in year three a skills-matching use case, inspired by a scenario in which job seekers are matched to vacancies based on the skills in their CVs.

Demonstrators

Contact

  • Milena Kooij-Janic, Sr Project Leader, TNO, e-mail: milena.kooij@tno.nl
  • Joachim de Greeff, Sr Consultant, TNO, e-mail: joachim.degreeff@tno.nl