Causality in Machine Learning

  • Matwin, Stan S. (PI)

Project: Research project

Project details

Description

Causality in Machine Learning is often understood as the ability to understand the decisions made by a machine-learning model in terms of knowledge of the domain in which the model operates, and the ability to reason about such decisions. A causal model has an "introspective" ability to reason about itself. Learning a causal model is a much more difficult task than the one performed by current Machine Learning methods, including Deep Learning, which determine a "correlational" or "pattern-matching" relationship between the inputs of the model and its decision. I propose here a research program in causality in Machine Learning. Causality is one of the main challenges facing the field of Machine Learning. Moreover, having a causal representation of a model will allow Machine Learning to progress towards abilities of human intelligence, such as learning from a few examples.

I propose to connect with the rich body of existing Artificial Intelligence work exploring the use of logic to reason about the causes of changing states of the world and of the variables describing the world. The proposed research program is founded on my previous work; in particular, my active participation in a sub-area of Machine Learning known as Inductive Logic Programming will be useful. I propose to interpret models obtained with Deep Learning using logic. Inductive Logic Programming will enable us to build models that behave similarly to models obtained by Deep Learning. These ILP models will be "distilled" from the Deep Learning models and will be expressed as rules in first-order logic. This will make them interpretable by humans. It will also facilitate integrating previous knowledge expressed in logic with the learned models.

Even partial success of research on causality is likely to have significant impact. Causality is necessary for broader social acceptance of models developed using Machine Learning for decision-making concerning humans. For instance, the European Union GDPR directive stipulates that any such model should be explainable, i.e., a person about whom the model has made a decision should be able to obtain an explanation of the model's decision that is understandable to them. Understanding models will eventually allow us to avoid models that make decisions about humans based on gender, ethnicity, etc. For example, the group in Pisa with which I collaborate has access to claim processing data of one of the leading Italian insurance companies. We will look at the explainability of decisions taken by their automated insurance claim processing systems.

Addressing causality is a huge challenge. In this program I propose to make inroads into the distillation of Deep Learning models into understandable models that also make causality explicit, and into assigning multiple factors as combined causes of a given effect predicted by a model. I will also train young researchers who will continue the important work on causality for Machine Learning.
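
The distillation step described above can be illustrated with a minimal sketch: an opaque model is trained, its predictions are used to relabel the data, and an interpretable learner is fitted to those predictions. The sketch below is only an assumed illustration of this general pattern; it uses a random forest as a stand-in for the Deep Learning model and a decision tree as a stand-in for the ILP rule learner the project actually proposes, with synthetic data in place of the insurance claim records.

```python
# Minimal sketch of surrogate-model distillation (illustrative assumptions:
# a random forest stands in for the Deep Learning model, a decision tree
# stands in for the ILP system that would learn first-order rules).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data; in the project this would be, e.g., insurance claim records.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 1. Train the opaque model (placeholder for a Deep Learning model).
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Relabel the data with the opaque model's own predictions.
y_distill = black_box.predict(X)

# 3. Fit an interpretable surrogate on those predictions ("distillation").
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_distill)

# 4. Read the surrogate as human-inspectable rules and check its fidelity
#    to the opaque model it was distilled from.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
fidelity = (surrogate.predict(X) == y_distill).mean()
print(f"fidelity to the black-box model: {fidelity:.2%}")
```

In the proposed research the surrogate would instead be a set of first-order logic rules learned by an ILP system, which also allows prior domain knowledge expressed in logic to be integrated into the distilled model.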

Status: Active
Actual start/end date: 1/1/23 → …

Funding

  • Natural Sciences and Engineering Research Council of Canada: US$21,491.00

ASJC Scopus Subject Areas

  • Artificial Intelligence
  • Decision Sciences (all)
  • Physics and Astronomy (all)
  • Chemistry (all)
  • Agricultural and Biological Sciences (all)
  • Engineering (all)
  • Management of Technology and Innovation