Extrinsic Hallucinations in LLMs
📰 Lilian Weng's Blog
Extrinsic hallucination occurs when an LLM fabricates output that cannot be verified against world knowledge, in contrast to in-context hallucination, where the output contradicts the source content supplied in the prompt
Action Steps
- Identify cases of hallucination in LLM output (see the detection sketch after this list)
- Distinguish between in-context and extrinsic hallucinations
- Analyze the role of context and world knowledge in mitigating hallucinations
- Develop strategies to mitigate extrinsic hallucinations in LLMs (a grounding sketch follows the Key Insight below)
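One practical way to identify likely extrinsic hallucinations is sampling-based consistency checking, in the spirit of SelfCheckGPT: resample the model several times and flag sentences that the other samples fail to support. The sketch below is illustrative rather than taken from the post; `generate` is a hypothetical stand-in for your model call, and the word-overlap scorer is a deliberately crude placeholder for the NLI or LLM-judge scorers used in practice.

```python
import re
from typing import Callable, List

def split_sentences(text: str) -> List[str]:
    """Naive sentence splitter; a real pipeline would use a proper tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def support_score(sentence: str, samples: List[str]) -> float:
    """Average word overlap between a sentence and each resampled answer.
    A crude stand-in for NLI- or LLM-based support scoring."""
    words = set(sentence.lower().split())
    if not words or not samples:
        return 1.0
    return sum(len(words & set(s.lower().split())) / len(words)
               for s in samples) / len(samples)

def flag_hallucinations(generate: Callable[[str], str], prompt: str,
                        n_samples: int = 5, threshold: float = 0.3) -> List[str]:
    """Flag sentences in the main answer that independently sampled answers
    fail to support; low consistency suggests extrinsic hallucination."""
    answer = generate(prompt)
    samples = [generate(prompt) for _ in range(n_samples)]
    return [s for s in split_sentences(answer)
            if support_score(s, samples) < threshold]
```

With temperature sampling, factual sentences tend to recur across samples while fabricated details drift, and that drift is the signal the threshold cuts on.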
Who Needs to Know This
NLP researchers and AI engineers working with LLMs benefit from understanding how hallucinations arise, and how to detect and mitigate them, in order to improve model reliability and factual accuracy
Key Insight
💡 Extrinsic hallucinations are distinct from in-context ones: they can be reduced by grounding generation in verifiable world knowledge and by letting the model abstain when it does not know
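That insight maps onto a common mitigation pattern: retrieval-augmented generation with an explicit abstention instruction, so the model answers from verifiable evidence or says it does not know. A minimal sketch, assuming caller-supplied `generate` and `retrieve` functions (both hypothetical stand-ins for your model and search index):

```python
from typing import Callable, List

GROUNDED_TEMPLATE = """Answer using ONLY the evidence below.
If the evidence is insufficient, reply exactly: I don't know.

Evidence:
{evidence}

Question: {question}
Answer:"""

def grounded_answer(generate: Callable[[str], str],
                    retrieve: Callable[[str], List[str]],
                    question: str, k: int = 3) -> str:
    """Constrain generation to retrieved evidence and give the model an
    explicit abstention path, reducing extrinsic hallucination."""
    evidence = "\n".join(f"- {p}" for p in retrieve(question)[:k])
    return generate(GROUNDED_TEMPLATE.format(evidence=evidence, question=question))
```

Pinning an exact abstention phrase in the prompt makes refusals easy to detect downstream, e.g., to trigger a broader retrieval pass instead of surfacing a guess.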
Share This
🤖 LLMs can hallucinate! Extrinsic hallucinations occur when output is fabricated & not verifiable against world knowledge
DeepCamp AI