Poster
in
Workshop: Advances in Financial AI: Opportunities, Innovations, and Responsible AI

Decoding Federal Reserve Communications with LLM Embeddings for Enhanced Market Return Predictions

Mingjun Sun · Beier Liu · Haiyun Zhu


Abstract:

Federal Reserve communications, particularly those released by the Federal Open Market Committee (FOMC), play a central role in shaping market expectations and asset pricing. In this paper, we advance a novel approach that employs large language model (LLM) embeddings, derived from models in the GPT and LLaMA families, to extract contextualized representations of FOMC communications. By converting these texts into high-dimensional numerical features, our framework identifies subtle linguistic cues often overlooked by conventional analyses. Through extensive out-of-sample evaluations, we demonstrate that LLM-generated embeddings significantly outperform conventional benchmarks in predicting short- and medium-term equity market returns, yielding both statistically and economically meaningful gains.
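The pipeline the abstract describes (embed FOMC text, then regress future returns on the embedding features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function is a deterministic stand-in for a real LLM embedding API, the ridge regressor is one simple choice of predictor, and the statements and return figures are fabricated toy data.

```python
import zlib
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical stand-in for an LLM embedding call (e.g. a GPT or
    LLaMA encoder). Deterministically maps text to a vector so the
    example is self-contained; a real system would call the model."""
    seed = zlib.crc32(text.encode("utf-8"))
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)


def ridge_fit(X: np.ndarray, y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Closed-form ridge regression: beta = (X'X + lam I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


# Toy FOMC-style statements paired with illustrative next-period
# equity returns (fabricated numbers, for demonstration only).
statements = [
    "The Committee decided to raise the target range for the federal funds rate.",
    "The Committee decided to maintain the target range for the federal funds rate.",
    "The Committee decided to lower the target range for the federal funds rate.",
    "Inflation remains elevated and the Committee is attentive to inflation risks.",
]
returns = np.array([-0.012, 0.003, 0.015, -0.008])

# Convert each communication into a high-dimensional feature vector,
# then fit a linear predictor of returns on those features.
X = np.vstack([embed(s) for s in statements])
beta = ridge_fit(X, returns, lam=10.0)
preds = X @ beta
```

In practice the regression would be fit on a training window and evaluated strictly out of sample, as the paper's evaluation protocol requires.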