What Are “Hallucinations” in AI – and Why Are They Dangerous in Legal Work?

AI tools like ChatGPT sound very confident. But they have a serious weakness: hallucinations.
The model invents facts, laws, or court rulings – and presents them as if it were completely certain.
Why is this so risky in legal matters?
1. Wrong legal references
The AI may cite articles or laws that do not exist or do not apply.
2. Costly decisions
HR managers or SME leaders may make wrong decisions about dismissals, salaries, or contracts based on fabricated answers.
3. Liability risks
Acting on invented information can lead to legal disputes or financial loss.
4. Restrictions by providers
The usage policies of ChatGPT and similar tools restrict legal advice that has not been reviewed by a qualified professional – precisely because of these issues.
Conclusion:
Legal work requires 0% hallucinations and 100% verified Swiss law.
Safe legal bots use expert-verified content, strict rule-based checks, and Swiss legal sources – not improvisation.
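To make that last point concrete, here is a minimal sketch of what a rule-based check against verified sources can look like, written in Python. Everything in it is illustrative – the VERIFIED_SOURCES list, the check_citations function, and the sample articles stand in for a curated database maintained and verified by legal experts; this is not the implementation of any specific product.

```python
import re

# Minimal sketch of a rule-based citation check.
# All names and entries are illustrative; a real legal bot would rely on a
# curated database of Swiss legal sources maintained by legal experts.
VERIFIED_SOURCES = {
    "OR Art. 335c": "Code of Obligations – notice periods for employment contracts",
    "OR Art. 324a": "Code of Obligations – continued salary payment during illness",
}

# Matches references like "OR Art. 335c" or "ZGB Art. 8".
CITATION_PATTERN = re.compile(r"\b[A-Z]{2,4} Art\. \d+[a-z]?\b")

def check_citations(draft_answer: str) -> str:
    """Release a draft answer only if every cited article is on the verified list."""
    citations = CITATION_PATTERN.findall(draft_answer)
    if not citations:
        return "No verifiable legal reference found – please consult a qualified lawyer."
    unknown = [c for c in citations if c not in VERIFIED_SOURCES]
    if unknown:
        return "Answer withheld – unverified references: " + ", ".join(unknown)
    return draft_answer

# Usage: the first draft passes, the second is blocked.
print(check_citations("The notice period is governed by OR Art. 335c."))
print(check_citations("Your case falls under OR Art. 999."))
```

The key design choice in this sketch: the bot never improvises a reference. If a draft answer cites anything outside the verified list, or cites nothing at all, it is withheld and the user is pointed to a human expert instead.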
