Artificial Intelligence
The Human Blind Spot Around Non-Deterministic Machines
Why LLMs will always make mistakes and why we shouldn’t call them hallucinations

I saw a tweet from Paul Graham a while back about how, as LLMs become better, their hallucinations will become more convincing. And it makes sense: a smart, confident person saying something wrong often sounds more reliable than…