Why Generic AI Models Fail, and Why This Is Especially Dangerous in the Legal Domain (Watson.ch)

Based on a recent Swiss analysis (watson.ch):
https://www.watson.ch/!669651522
⸻
A new analysis from Switzerland makes one thing clear: many AI models draw their knowledge from opaque and sometimes questionable sources. Users cannot tell where statements come from, or whether they are even correct. The investigation highlights several core problems that are particularly relevant for legal applications:
⸻
1. AI invents sources or uses unreliable data
The tested models delivered answers based on false, fabricated, or poorly traceable sources.
→ In the legal world, this would be fatal: one wrong source = one wrong legal answer = a potential liability case.
⸻
2. Models contradict themselves
Depending on the phrasing of the question, the AI systems produced different and contradictory answers.
→ In law, this creates complete uncertainty: a ruling or statute does not change depending on wording.
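This failure mode is easy to make concrete with a paraphrase-consistency probe: ask the same legal question in two phrasings, one of which negates the other, and check whether the answers flip accordingly. The sketch below is only a toy illustration; ask() and its canned replies are hypothetical stand-ins for a real chat-model call, not any specific product's API.

```python
# Toy paraphrase-consistency probe. ask() is a hypothetical stand-in for a
# generic chat-model call; the canned replies only illustrate the failure
# mode described above and are not real model output.

PHRASINGS = [
    "Is an oral contract generally valid under Swiss law?",
    "Under Swiss law, must a contract generally be in writing to be valid?",
]

def ask(question: str) -> str:
    # Stand-in for a real model call; replies chosen to show the problem.
    canned = {
        PHRASINGS[0]: "yes",  # oral contracts are generally valid ...
        PHRASINGS[1]: "yes",  # ... yet here writing is suddenly required
    }
    return canned[question]

# The second phrasing negates the first, so a consistent model must give
# opposite answers. Identical answers therefore signal a contradiction.
if ask(PHRASINGS[0]) == ask(PHRASINGS[1]):
    print("Contradiction: the model's legal position depends on the phrasing.")
```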
⸻
3. Lack of transparency: users cannot know what is true
The analysis shows that users have no way to verify the origin or quality of an answer.
→ This is the biggest risk: without clear references, no one can determine whether a legal conclusion is valid.
⸻
4. With more complex questions, the AI "hallucinates"
As questions became more complex, the models began inventing facts or oversimplifying.
→ Yet complex cases are precisely where legal accuracy is critical: an invented answer can lead to costly mistakes.
⸻
What does this mean for legal work?
This analysis demonstrates why general-purpose AI is unsuitable for legal questions:
✗ No reliable sources
✗ No guarantee of up-to-date information
✗ No transparency
✗ High error and hallucination rates
In law, "almost correct" is not enough. It must be correct.
⸻
Why Lawise / Jurilo is different
✓ Legally verified answers based on Swiss laws, commentaries, and Federal Supreme Court rulings
✓ 0% hallucinations: every answer is grounded in real legal sources
✓ Transparent references, always traceable and verifiable (see the sketch after this list)
✓ A specialised model instead of a black-box chatbot
While general AI models often provide entertaining or approximate responses, Lawise delivers correct answers, and in law that is what counts.
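To make "grounded" and "traceable" concrete, here is a minimal sketch of what a source-grounded answer object and a traceability check could look like. All names in it (Citation, GroundedAnswer, KNOWN_SOURCES, is_traceable) are hypothetical illustrations for this post, not Lawise's actual API.

```python
# Minimal sketch of "grounded answers with transparent references".
# All names here (Citation, GroundedAnswer, KNOWN_SOURCES, is_traceable)
# are hypothetical illustrations, not Lawise's actual API.
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str   # e.g. a statute such as "Art. 41 OR"
    passage: str  # the passage the answer relies on

@dataclass
class GroundedAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)

# Stand-in for a verified corpus of statutes, commentaries, and rulings.
KNOWN_SOURCES = {"Art. 41 OR", "Art. 8 ZGB"}

def is_traceable(answer: GroundedAnswer) -> bool:
    """Accept an answer only if every claim cites a known, verifiable source."""
    return bool(answer.citations) and all(
        c.source in KNOWN_SOURCES for c in answer.citations
    )

answer = GroundedAnswer(
    text="Whoever unlawfully causes damage to another is liable for it (Art. 41 OR).",
    citations=[Citation("Art. 41 OR", "Wer einem andern widerrechtlich Schaden zufügt ...")],
)
print(is_traceable(answer))  # True; an answer with no known source would be rejected
```

The design point is simply that traceability is enforced structurally: an answer that cannot point at a known source never reaches the user.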
⸻
Conclusion
The Swiss analysis shows that even for simple everyday questions, AI models deliver inaccurate or unclear information.
In legal practice, this would be irresponsible.
→ This is why we need specialised, verified legal AI like Lawise, built on real legal sources and designed for maximum reliability.
⸻
Reference to the Watson article (image caption):
"When we ask ChatGPT, the AI also explains complex topics in Swiss politics. But its research is not always broadly supported or balanced."
