37 Results

Workshop | Perplexed by Perplexity: Perplexity-Based Pruning with Small Reference Models | Zachary Ankner · Cody Blakeney · Kartik Sreenivasan · Max M Marion · Matthew Leavitt · Mansheej Paul

Workshop | Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning | Wuyang Chen · Jialin Song · Pu Ren · Shashank Subramanian · Dmitriy Morozov · Michael W Mahoney

Workshop | Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals | Ran Liu · Ellen Zippi · Hadi Pouransari · Christopher Sandino · Jingping Nie · Hanlin Goh · Erdrin Azemi · Ali Moin

Workshop | Prompting a Pretrained Transformer Can Be a Universal Approximator | Aleksandar Petrov · Adel Bibi · Philip Torr

Workshop | Equivariant Pretrained Transformer for Unified Geometric Learning on Multi-Domain 3D Molecules | Rui Jiao · Xiangzhe Kong · Ziyang Yu · Wenbing Huang · Yang Liu

Workshop | Autonomous Data Selection with Language Models for Mathematical Texts | Yifan Zhang · Yifan Luo · Yang Yuan · Andrew Yao

Workshop | Pretraining Sleep Staging Models without Patient Data | Niklas Grieger · Siamak Mehrkanoon · Stephan Bialonski

Poster | Tue 7:30 | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | Sihan Chen · Xingjian He · Handong Li · Xiaojie Jin · Jiashi Feng · Jing Liu

Affinity Workshop | Wed 1:45 | Backtracking Mathematical Reasoning of Language Models to the Pretraining Data | Yasaman Razeghi

Workshop | Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling | Pratyush Maini · Skyler Seto · He Bai · David Grangier · Yizhe Zhang · Navdeep Jaitly