Ideally you could adjust a dial and set the degree of hallucination in advance. For fact-checking you would choose zero hallucination; for poetry composition, life advice, and inspiration you might want more hallucination, to varying degrees of course. After all, you don’t choose friends with zero hallucination, do you? And you do read fiction, don’t you?
(Do note that you can ask the current version for references and follow-up — GPT is hardly as epistemically crippled as some people allege.)
In the meantime, I do not want an LLM with less hallucination. The hallucinations are part of what I learn from. I learn what the world would look like, if it were most in tune with the statistical model provided by text. That to me is intrinsically interesting. Does the matrix algebra version of the world not interest you as well?
What is an optimum degree of LLM hallucination?
from Tyler Cowen
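The dial Cowen imagines does not exist as a first-class hallucination control, but the closest knob in current APIs is the sampling temperature: low values keep the model near its highest-probability completions, high values let it roam. Below is a minimal sketch, assuming the OpenAI Python client with an API key in the environment; the model name and prompts are illustrative, and temperature governs sampling randomness rather than factuality, so it is only a rough proxy for the dial.

```python
# Rough sketch of a "hallucination dial" via sampling temperature.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, dial: float) -> str:
    """dial in [0, 2]: 0 ~ stick to the most likely tokens, 2 ~ roam freely."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=dial,     # randomness knob, not a factuality guarantee
    )
    return response.choices[0].message.content

# "Fact-checking" mode vs. "poetry" mode
print(ask("Summarize the causes of the 2008 financial crisis.", dial=0.0))
print(ask("Write a short poem about matrix algebra.", dial=1.5))
```

Note that temperature 0 does not mean zero hallucination; it only reduces sampling randomness, which is part of why the dial remains wishful rather than real.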
Related Notes
- "The upshot for the industry at large, is: the LLM-as-Moat model h..." (from Steve Yegge)
- "We don't quite know what to do with language models yet. But we..." (from Maggie Appleton)
- "These are wonderful non-chat interfaces for LLMs that I would total..." (from Maggie Appleton)
- "combining search and AI chat is actually the wrong way to go and I ..." (from Garbage Day)
- "the tech am I digging recently is a software framework called Lan..." (from Interconnected)
- "Part of what makes LoRA so effective is that - like other forms of ..." (from Dylan Patel)
- "In many ways, this shouldn’t be a surprise to anyone. The current r..." (from Dylan Patel)
- "We’re building apps to surround and harness AI, but we need microsc..." (from Interconnected)