

Poster in Workshop: I Can't Believe It's Not Better: Challenges in Applied Deep Learning

Impact of Task Phrasing on Presumptions in Large Language Models

Kenneth Ong


Abstract:

Concerns about the safety and reliability of large language models (LLMs) in unpredictable real-world applications motivate this study, which examines how task phrasing can instill presumptions in LLMs that make it difficult for them to adapt when the task deviates from those assumptions. We investigate the impact of these presumptions on LLM performance using the iterated prisoner's dilemma as a case study. Our experiments reveal that LLMs are susceptible to presumptions when making decisions, even when prompted with reasoning steps. When the task phrasing is neutral, however, the models demonstrate logical reasoning with few presumptions. These findings highlight the importance of careful task phrasing to reduce the risk of presumptions in LLMs.
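To make the experimental setup concrete, below is a minimal sketch of an iterated prisoner's dilemma harness in which the same game is presented under two phrasings. The payoff values are the standard textbook ones, and `query_model`, `NEUTRAL_PROMPT`, and `LOADED_PROMPT` are hypothetical stand-ins, not the study's actual prompts or API; the point is only to illustrate how phrasing can be varied while the underlying game stays fixed.

```python
# Sketch: iterated prisoner's dilemma with interchangeable task phrasings.
# Standard payoff matrix: (my move, opponent move) -> (my score, opp score).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Illustrative phrasings (assumed, not the paper's actual prompts):
# a neutral description versus one loaded with a presumption about the opponent.
NEUTRAL_PROMPT = (
    "You repeatedly choose C or D against another player. "
    "History so far: {history}. Reply with a single letter, C or D."
)
LOADED_PROMPT = (
    "You face a rational opponent who always maximizes its own payoff. "
    "History so far: {history}. Reply with a single letter, C or D."
)

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError

def play(prompt_template: str, opponent_moves: list[str]) -> int:
    """Play one iterated game against a scripted opponent; return total score."""
    history, total = [], 0
    for opp_move in opponent_moves:
        move = query_model(prompt_template.format(history=history))
        total += PAYOFFS[(move, opp_move)][0]
        history.append((move, opp_move))
    return total
```

Comparing `play(NEUTRAL_PROMPT, moves)` against `play(LOADED_PROMPT, moves)` on an opponent script that deviates from the loaded description (e.g., one that cooperates unconditionally) would expose whether the model's choices track the actual history or the presumption baked into the phrasing.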
