

Poster

MAESTRO: Masked Encoding Set Transformer with Self-Distillation

Matthew Lee · Jaesik Kim · Matei Ionita · Jonghyun Lee · Michelle McKeague · Yonghyun Nam · Irene Khavin · Yidi Huang · Victoria Fang · Sokratis Apostolidis · Divij Mathew · Shwetank · Ajinkya Pattekar · Zahabia Rangwala · Amit Bar-Or · Benjamin Fensterheim · Benjamin Abramoff · Rennie Rhee · Damian Maseda · Allison Greenplate · John Wherry · Dokyoon Kim

Hall 3 + Hall 2B #12
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

The interrogation of cellular states and interactions in immunology research is an ever-evolving task, requiring adaptation to increasingly high-dimensional data. Cytometry enables high-dimensional profiling of immune cells, but its analysis is hindered by the complexity and variability of the data. We present MAESTRO, a self-supervised set representation learning model that generates vector representations of set-structured data, which we apply to learn immune profiles from cytometry data. Unlike previous studies that learn only cell-level representations, MAESTRO uses all of a sample's cells to learn a single set-level representation. MAESTRO leverages specialized attention mechanisms to handle sets with variable numbers of cells and ensure permutation invariance, coupled with an online tokenizer trained by self-distillation. We benchmarked our model against existing cytometry approaches and other machine learning methods that have not previously been applied to cytometry. Our model outperforms existing approaches in recovering cell-type proportions and capturing clinically relevant features for downstream tasks such as disease diagnosis and immune cell profiling.
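To make the core architectural idea concrete, below is a minimal sketch (not the authors' code) of attention-based pooling over a variable-size cell set, the kind of permutation-invariant set encoding the abstract describes. All names here (`SetEncoder`, `n_markers`, `pool_query`) are illustrative assumptions, written in PyTorch; padding plus a key-padding mask handles the variable number of cells per sample.

```python
# Hypothetical sketch of a permutation-invariant set encoder for cytometry
# samples. Each sample is a set of cells, each cell a vector of marker
# intensities; samples differ in cell count, so batches are padded.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    def __init__(self, n_markers: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_markers, d_model)       # per-cell embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # A learned query "seed" that attends over the whole set to pool it.
        self.pool_query = nn.Parameter(torch.randn(1, 1, d_model))

    def forward(self, cells: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # cells: (batch, max_cells, n_markers)
        # pad_mask: (batch, max_cells), True where a slot is padding
        h = self.embed(cells)
        q = self.pool_query.expand(h.size(0), -1, -1)
        # Attention pooling: no positional encodings are used, so the output
        # depends only on cell content, not cell order (permutation invariant);
        # key_padding_mask drops the padded slots from the attention.
        pooled, _ = self.attn(q, h, h, key_padding_mask=pad_mask)
        return pooled.squeeze(1)                          # (batch, d_model)

# Toy usage: two samples padded to 500 cells; sample 2 has only 300 real cells.
enc = SetEncoder(n_markers=10)
x = torch.randn(2, 500, 10)
mask = torch.zeros(2, 500, dtype=torch.bool)
mask[1, 300:] = True
z = enc(x, mask)
print(z.shape)  # torch.Size([2, 64])
```

In a DINO/iBOT-style self-distillation setup, which the abstract's "online tokenizer" phrasing suggests but does not fully specify, a momentum (EMA) copy of such an encoder would typically serve as the teacher that produces targets for masked cells; the exact MAESTRO formulation should be taken from the paper itself.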
