10 reasons: Why ChatGPT & Claude cannot reliably answer legal questions

ChatGPT, Claude, and other foundation models on their own will never be able to answer your legal questions reliably.

The reasons for this are:

  1. No legal verification
    Answers are not reviewed by lawyers and may be substantively incorrect or misleading.
  2. No access to current legal texts & rulings
    Models work from static training data, not from continuously updated official sources such as the Swiss Code of Obligations (OR), the Civil Code (ZGB), or Federal Supreme Court rulings.
  3. Lack of contextual knowledge of Swiss law
    Legal systems are local. Foundation models are trained mostly on US and international material, not on the specifics of Swiss law.
  4. Hallucinations (fabricated answers)
    Models “invent” answers when they are uncertain, which is extremely risky in a legal context.
  5. No liability or traceability
    No one bears legal responsibility for an answer, and there is no documentation of how it was produced.
  6. Not always up to date
    Models know nothing about legislative changes or new rulings issued after their training cutoff (e.g. late 2023 for ChatGPT).
  7. Data protection issues in sensitive cases
    For confidential legal matters, text must not simply be entered into a generic AI model.
  8. Lack of detail and nuance
    Legal answers often require careful, differentiated interpretation; models usually produce superficial boilerplate.
  9. No citation or legal basis
    Foundation models do not cite specific statutory articles, paragraphs, or court rulings, so you cannot tell what an answer is based on.
  10. No substitute for legal advice
    What counts in a legal context is reliability, traceability, and usability in court. Foundation models offer none of these.