LLM output is a selection of words from billions of possibilities, picked one by one until they arrive at a destination that aligns with what we ask of the model. But at no point does the model understand the words it is writing, in the same way the Macrocilix maia moth has no conception of looking like bird shit or of how that benefits it. The mimicry simply happens, much like LLM-generated sentences! No awareness needed.

@kconrad@mastodon.social
Professor of English @ University of Kansas. Exploring science, technology, education, literature & culture. Particularly interested in generative AI.
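
The word-by-word selection described above can be sketched in a few lines of Python. This is a minimal toy, not a real model: the vocabulary and probabilities are made up, standing in for the learned distribution a real LLM computes over tens of thousands of tokens at every step. Nothing in the loop represents meaning; it only follows the probabilities.

    import random

    # Toy stand-in for a language model: a made-up table mapping the previous
    # word to a probability distribution over possible next words. A real LLM
    # computes these probabilities with a neural network over a huge vocabulary,
    # but the generation loop has the same shape.
    NEXT_WORD_PROBS = {
        "<start>": {"The": 0.6, "A": 0.4},
        "The": {"moth": 0.5, "model": 0.5},
        "A": {"moth": 0.7, "model": 0.3},
        "moth": {"mimics": 1.0},
        "model": {"predicts": 1.0},
        "mimics": {"droppings.": 1.0},
        "predicts": {"words.": 1.0},
    }

    def generate(max_words=10):
        """Pick words one at a time by sampling from the distribution."""
        word = "<start>"
        output = []
        for _ in range(max_words):
            probs = NEXT_WORD_PROBS.get(word)
            if probs is None:  # no known continuation: stop
                break
            words, weights = zip(*probs.items())
            word = random.choices(words, weights=weights, k=1)[0]
            output.append(word)
            if word.endswith("."):
                break
        return " ".join(output)

    print(generate())  # e.g. "The moth mimics droppings."

At no step does the loop check whether the sentence is true or what it means; it only samples the next word and moves on.
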
Hallucination is Inevitable: An Innate Limitation of Large Language Models
https://arxiv.org/abs/2401.11817
"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. [...] By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. "