Are bad incentives to blame for AI hallucinations?
How can a chatbot be so wrong — and sound so confident in its wrongness?