Machine learning has enabled significant improvements in many areas. Most ML methods are based on inferring statistical correlations, so they can become unreliable when spurious correlations present in the training data do not hold at test time. One way of tackling this problem is to learn the causal structure of the data-generating process (a causal model). In general, causal discovery requires performing all possible interventions on the system. However, this may be too expensive or infeasible in real environments: understanding how to intervene most efficiently, so that each experiment uncovers as much causal information as possible, is therefore a prerequisite for causal discovery in real-world applications. In this workshop, we investigate several key questions and topics:
- What is the role of an underlying causal model in decision making?
- What is the difference between a prediction that is made with a causal model and one made with a non-causal model?
- What is the role of causal models in decision-making in real-world settings, for example in relation to fairness, transparency, and safety?
- The way current RL agents explore environments appears less intelligent than the way human learners explore. One reason for this disparity may be that, when faced with a novel environment, humans do not merely observe but actively interact with the world, affecting it through their actions. Furthermore, maintaining a causal model of the world allows the learner to keep a set of plausible hypotheses and design experiments to test them.
- Can we use a distributional belief about the agent's model of the world as a tool for exploration (minimize entropy, maximize knowledge acquisition)?
- Can we learn an incomplete causal model that is sufficient for good decision making, given that only parts of the model may be relevant to the tasks at hand? How can we efficiently learn these causal sub-models?
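The themes above — interventions revealing structure that observation cannot, and an entropy-reducing belief over hypotheses — can be made concrete with a minimal sketch. In this hypothetical setup, the learner knows the world is one of two linear-Gaussian models over (X, Y), either X → Y or Y → X, and holds a uniform belief over the two. Observational data cannot distinguish them (both induce the same correlation), but a single intervention do(X = 3) can: Y shifts only if X causes Y. All the specifics (the coefficient 2, the intervention value, the sample size) are illustrative assumptions, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate hypotheses (illustrative assumption):
#   H1: X -> Y with Y = 2X + noise;  H2: Y -> X with X = 2Y + noise.
# The learner starts with a uniform belief over them.
belief = np.array([0.5, 0.5])  # [P(H1), P(H2)]

def entropy(p):
    """Shannon entropy (bits) of a discrete belief."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def gauss_logpdf(y, mean):
    """Log-density of N(mean, 1) at y."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - mean) ** 2

# The true world is H1. Design an experiment: do(X = 3), then observe Y.
y = 2.0 * 3.0 + rng.normal(size=20)

# Likelihood of the observed Y under each hypothesis:
#   under H1, do(X=3) forces Y ~ N(6, 1);
#   under H2, the intervention leaves Y unaffected, so Y ~ N(0, 1).
loglik = np.array([gauss_logpdf(y, 6.0).sum(), gauss_logpdf(y, 0.0).sum()])
posterior = belief * np.exp(loglik - loglik.max())
posterior /= posterior.sum()

print("entropy before:", entropy(belief))     # 1 bit: maximally uncertain
print("entropy after: ", entropy(posterior))  # near 0: intervention resolved it
```

The entropy drop of the belief is exactly the kind of knowledge-acquisition signal the exploration question above alludes to: an agent could score candidate interventions by their expected entropy reduction and perform the most informative one.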