Extrinsic Hallucinations in LLMs

📰 Lilian Weng's Blog

Extrinsic hallucinations in LLMs are fabricated outputs that cannot be verified against world knowledge, in contrast to in-context hallucinations, which contradict the provided source content

Advanced · Published 7 Jul 2024
Action Steps
  1. Identify cases of hallucination in LLM output
  2. Distinguish between in-context and extrinsic hallucinations
  3. Analyze the role of context and world knowledge in mitigating hallucinations
  4. Develop strategies to mitigate extrinsic hallucinations in LLMs
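One simple detection strategy from the literature (e.g., SelfCheckGPT-style sampling consistency) is to sample several independent responses to the same prompt and check whether a claim is supported across samples: facts the model actually knows tend to recur, while extrinsic hallucinations vary between samples. The sketch below is illustrative only; it uses naive word overlap as a stand-in for a proper entailment or NLI scorer, and the `consistency_score` function, its 0.5 overlap threshold, and its inputs are all assumptions for demonstration, not part of the original post.

```python
def consistency_score(claim: str, samples: list[str]) -> float:
    """Fraction of independently sampled responses that share most of the
    claim's words. Low agreement across samples is a (rough) signal that the
    claim may be an extrinsic hallucination.

    Illustrative stand-in: a real implementation would use an entailment
    model rather than bag-of-words overlap.
    """
    claim_words = set(claim.lower().split())
    if not claim_words or not samples:
        return 0.0
    hits = 0
    for sample in samples:
        overlap = claim_words & set(sample.lower().split())
        # Count the sample as supporting the claim if it covers at least
        # half of the claim's words (arbitrary threshold for illustration).
        if len(overlap) / len(claim_words) >= 0.5:
            hits += 1
    return hits / len(samples)
```

A claim that is consistently restated across samples scores near 1.0; a claim the samples do not support scores near 0.0, flagging it for review.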
Who Needs to Know This

NLP researchers and AI engineers working with LLMs should understand how hallucinations arise in order to improve the factuality and reliability of their models

Key Insight

💡 Extrinsic hallucinations are a specific type of hallucination that can be mitigated by grounding model outputs in retrieved evidence and verifying generated claims against world knowledge
