

Poster

Concept Bottleneck Language Models For Protein Design

Aya Ismail · Tuomas Oikarinen · Amy Wang · Julius Adebayo · Samuel Stanton · Hector Corrada Bravo · Kyunghyun Cho · Nathan Frey

Hall 3 + Hall 2B #504
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: We introduce Concept Bottleneck Protein Language Models (CB-pLM), generative masked language models with a layer where each neuron corresponds to an interpretable concept. Our architecture offers three key benefits: i) Control: We can intervene on concept values to precisely control the properties of generated proteins, achieving a 3× larger change in desired concept values compared to baselines. ii) Interpretability: A linear mapping between concept values and predicted tokens allows transparent analysis of the model's decision-making process. iii) Debugging: This transparency facilitates easy debugging of trained models. Our models achieve pre-training perplexity and downstream task performance comparable to traditional masked protein language models, demonstrating that interpretability does not compromise performance. While adaptable to any language model, we focus on masked protein language models due to their importance in drug discovery and the ability to validate our model's capabilities through real-world experiments and expert knowledge. We scale our CB-pLM from 24 million to 3 billion parameters, making them the largest Concept Bottleneck Models trained and the first capable of generative language modeling.
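To make the described architecture concrete, the sketch below shows one plausible way a concept bottleneck could sit between a transformer trunk and the token prediction head: hidden states are projected onto a small set of concept neurons, a single linear map decodes those concepts into token logits (giving the transparent concept-to-token relationship), and intervention amounts to overwriting selected concept values before decoding. This is a minimal illustration, not the authors' implementation; the class and parameter names (ConceptBottleneckHead, to_concepts, num_concepts, the intervention dictionary) are assumptions for exposition.

```python
import torch
import torch.nn as nn


class ConceptBottleneckHead(nn.Module):
    """Hypothetical concept bottleneck placed before the LM output layer.

    Hidden states are compressed onto named concept neurons, and a single
    linear map decodes concept values into token logits, so each concept's
    contribution to each predicted token can be read off the weight matrix.
    """

    def __init__(self, hidden_dim: int, num_concepts: int, vocab_size: int):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_dim, num_concepts)  # concept encoder
        self.to_logits = nn.Linear(num_concepts, vocab_size)    # linear concept -> token map

    def forward(
        self,
        hidden_states: torch.Tensor,
        interventions: dict[int, float] | None = None,
    ) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from the transformer trunk
        concepts = self.to_concepts(hidden_states)  # (batch, seq_len, num_concepts)
        if interventions:
            # Control: overwrite selected concept neurons to steer generation.
            for idx, value in interventions.items():
                concepts[..., idx] = value
        return self.to_logits(concepts)  # (batch, seq_len, vocab_size)


if __name__ == "__main__":
    head = ConceptBottleneckHead(hidden_dim=64, num_concepts=8, vocab_size=33)
    h = torch.randn(2, 10, 64)
    # Intervene on concept 3 (e.g. a property such as solubility) before decoding.
    logits = head(h, interventions={3: 2.0})
    print(logits.shape)  # torch.Size([2, 10, 33])
```

Under this kind of design, interpretability comes from the single linear layer mapping concepts to tokens, and controllable generation comes from setting concept values at inference time; the actual CB-pLM training objective and concept supervision are described in the paper.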
