

Poster in Workshop: Multimodal Representation Learning (MRL): Perks and Pitfalls

Impossibility of Collective Intelligence

Krikamol Muandet

Keywords: [ OOD Generalization ] [ Multi-Modal Learning ] [ Algorithmic Fairness ] [ Social Choice ] [ Democratic AI ] [ Federated Learning ]


Abstract:

This work establishes a minimal set of intuitive and reasonable axioms under which empirical risk minimization (ERM) is the only rational learning algorithm for learning in heterogeneous environments. We axiomatize any learning rule as a choice correspondence over a hypothesis space together with seemingly primitive properties, and then show that the only algorithm compatible with these properties is standard ERM that learns arbitrarily from a single environment. This impossibility result implies that Collective Intelligence (CI), the ability of an algorithm to learn successfully across heterogeneous environments, cannot be achieved without sacrificing at least one of these basic properties. More importantly, the result reveals the incomparability of performance metrics across environments as a fundamental limit in critical areas of machine learning such as out-of-distribution generalization, federated learning, algorithmic fairness, and multi-modal learning.
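The paper's formal framework is not reproduced on this page. As a rough, non-authoritative sketch of the setting described in the abstract, one may write environment-specific risks and single-environment ERM as follows; the notation below is assumed for illustration and is not necessarily the paper's own.

% Hedged sketch (assumed notation): each environment e in a set E induces a
% data distribution P_e, and each hypothesis h in H incurs an environment-specific risk
\[
  R_e(h) \;=\; \mathbb{E}_{(x,y)\sim P_e}\big[\ell(h(x), y)\big], \qquad e \in \mathcal{E}.
\]
% Standard ERM selects a hypothesis using a single environment e, chosen arbitrarily:
\[
  \hat{h} \;\in\; \operatorname*{arg\,min}_{h \in \mathcal{H}} \widehat{R}_e(h).
\]
% Collective intelligence, by contrast, would require a learning rule that ranks
% hypotheses using the whole risk profile (R_e(h))_{e \in \mathcal{E}}. The abstract's
% impossibility claim is that no such rule satisfies all of the stated basic properties
% at once, because risks measured in different environments are not directly comparable.

For instance, if one hypothesis has lower risk in environment A while another has lower risk in environment B, any aggregation across environments must break the tie in a way that violates at least one of the axioms; this is the incomparability the abstract points to.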
