

Poster in Workshop: New Frontiers in Associative Memories

Associative Memory Learning Through Redundancy Maximization

Mark Blümel · David Ehrlich · Andreas C. Schneider · Abdullah Makkeh · Marcel Graetz · Valentin Neuhaus · Viola Priesemann · Michael Wibral


Abstract:

Hopfield networks mark an important milestone in the development of modern artificial intelligence architectures. In this work, we argue that a foundational principle for solving such associative memory problems at the neuron scale is to promote redundancy between the input pattern and the network's internal state in the neurons' activity. We demonstrate how to quantify this redundancy in classical Hebbian Hopfield networks using Partial Information Decomposition (PID), and reveal that redundancy plays a dominant role compared to synergy or uniqueness when operating below capacity. Beyond analysis, we show that redundancy can be used as a learning goal for Hopfield networks by constructing associative memory networks from neurons that directly optimize PID-based goal functions. In experiments, we find that these "infomorphic" Hopfield networks greatly outperform the original Hebbian networks and achieve comparable performance to state-of-the-art associative memory learning rules. This work offers novel insights into how associative memory functions at an information-theoretic level of abstraction and opens pathways to improve existing learning rules by optimizing their alignment with the goal of maximizing redundancy.
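For readers unfamiliar with PID, the decomposition referenced in the abstract is, in its standard Williams-and-Beer form, a split of the joint mutual information between a neuron's output and its two input sources into redundant, unique, and synergistic parts. Identifying the two sources with the input pattern and the network's internal (recurrent) state follows the abstract's wording; the specific redundancy measure and goal function the authors optimize are not stated here.

$$I(Y : X_{\mathrm{in}}, X_{\mathrm{rec}}) \;=\; I_{\mathrm{red}} \;+\; I_{\mathrm{unq}}^{\mathrm{in}} \;+\; I_{\mathrm{unq}}^{\mathrm{rec}} \;+\; I_{\mathrm{syn}}$$

For context, the classical Hebbian Hopfield baseline the abstract compares against can be sketched in a few lines of NumPy. The snippet below is a minimal illustration of Hebbian storage and iterative recall only; the network size, pattern count, noise level, and synchronous update scheme are arbitrary choices for the example, and this is not the paper's infomorphic, PID-based learning rule.

```python
# Minimal sketch of a classical Hebbian Hopfield network (illustration only;
# not the infomorphic/PID-based networks proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)

N = 100  # number of bipolar (+1/-1) neurons
P = 5    # patterns to store, well below the ~0.138*N Hebbian capacity

# Random bipolar patterns to memorize.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(probe, steps=10):
    """Synchronous recall: iterate s <- sign(W s) starting from a corrupted probe."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties consistently
    return s

# Corrupt a stored pattern by flipping 10% of its entries, then try to recall it.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1

retrieved = recall(probe)
overlap = np.mean(retrieved == patterns[0])
print(f"fraction of neurons matching the stored pattern: {overlap:.2f}")
```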
