8 Results
Workshop | Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation | Yixin Wan · Fanyou Wu · Weijie Xu · Srinivasan Sengamedu
Workshop | Hallucination Augmented Recitations for Language Models | Abdullatif Köksal · Renat Aksitov · Chung-Ching Chang
Poster | Tue 7:30 | Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation | Niels Mündler · Jingxuan He · Slobodan Jenko · Martin Vechev
Poster | Tue 1:45 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Fuxiao Liu · Kevin Lin · Linjie Li · Jianfeng Wang · Yaser Yacoob · Lijuan Wang
Poster | Fri 7:30 | INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Chao Chen · Kai Liu · Ze Chen · Yi Gu · Yue Wu · Mingyuan Tao · Zhihang Fu · Jieping Ye
Poster | Fri 1:45 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Yiyang Zhou · Chenhang Cui · Jaehong Yoon · Linjun Zhang · Zhun Deng · Chelsea Finn · Mohit Bansal · Huaxiu Yao
Poster | Tue 1:45 | Teaching Language Models to Hallucinate Less with Synthetic Tasks | Erik Jones · Hamid Palangi · Clarisse Ribeiro · Varun Chandrasekaran · Subhabrata Mukherjee · Arindam Mitra · Ahmed H Awadallah · Ece Kamar
Affinity Workshop | Fri 7:30 | Hallucination Benchmark in Medical Visual Question Answering | Jinge Wu · Yunsoo Kim · Honghan Wu