Contributed Talk
in
Workshop: 5th Workshop on practical ML for limited/low resource settings (PML4LRS) @ ICLR 2024

A Low-Resource Framework for Detection of Large Language Model Contents

Linh Le

Sat 11 May 7:25 a.m. PDT — 7:35 a.m. PDT

Abstract:

Current Large Language Models (LLMs) are able to generate texts that are seemingly indistinguishable from those written by human experts. While offering great opportunities, such technologies also pose new challenges in education, science, information security, and a multitude of other areas. Moreover, current approaches to LLM text detection are either computationally expensive or require access to the LLMs' internal computational states, both of which hinder their public accessibility. To better serve users, especially in lower-resource settings, this paper presents a new paradigm of metric-based detection of LLM content that balances computational cost, accessibility, and performance. Specifically, detection is performed by using a metric framework to evaluate the similarity between a given text and an equivalent example generated by an LLM, and thereby determining the former's origin. Additionally, we develop and publish five datasets totalling over 95,000 prompts and responses from humans and from GPT-3.5 Turbo or GPT-4 Turbo for benchmarking. Experimental results show that our best architectures maintain F1 scores between 0.87 and 0.96 across the tested corpora in both same-corpus and out-of-corpus settings, with or without paraphrasing. The metric framework also requires significantly less time for training and inference than a supervised RoBERTa baseline in multiple tests.
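The abstract describes the core idea only at a high level: obtain an LLM-generated counterpart for the text under examination, measure how similar the two are, and decide the text's origin from that score. A minimal illustrative sketch of this comparison step is given below, assuming a simple TF-IDF cosine similarity and a fixed decision threshold; both are stand-ins for the paper's actual learned metric framework, and the function and parameter names are hypothetical.

# Illustrative sketch of metric-based detection: compare a candidate text
# against an LLM-generated "equivalent" response to the same prompt and
# classify by similarity. The TF-IDF cosine metric and the 0.5 threshold
# are placeholder assumptions, not the framework proposed in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def detect_llm_text(candidate: str, llm_equivalent: str, threshold: float = 0.5) -> bool:
    """Return True if `candidate` is judged to be LLM-generated.

    `llm_equivalent` is a response produced by an LLM from the same prompt;
    high similarity to it is taken as evidence of LLM origin.
    """
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform([candidate, llm_equivalent])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return similarity >= threshold


if __name__ == "__main__":
    llm_response = "Photosynthesis converts light energy into chemical energy stored in sugars."
    unknown_text = "Plants use sunlight to synthesize sugars from carbon dioxide and water."
    print(detect_llm_text(unknown_text, llm_response))

In practice, the metric that scores the pair would be trained, and the LLM-generated equivalent would be produced from the same prompt as the candidate text; the sketch only shows where those components plug into the decision.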
