How are AI hallucinations detected and prevented?
amaise uses a multi-layered approach to detect and prevent AI hallucinations:
Structured extraction: The pipeline uses schema-bound extraction tasks with predefined fields rather than free-text generation, which sharply narrows the scope for hallucinations.
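A minimal sketch of what schema-bound validation can look like in principle (the field names such as `invoice_number` are illustrative, not amaise's actual schema): the model may only fill predefined, typed fields, and any field outside the schema is rejected as a potential hallucination.

```python
# Hypothetical schema of predefined fields and their expected types.
ALLOWED_FIELDS = {"invoice_number": str, "total_amount": float, "currency": str}

def validate_extraction(raw: dict) -> dict:
    """Keep only predefined fields with the expected type; reject anything else."""
    unexpected = set(raw) - set(ALLOWED_FIELDS)
    if unexpected:
        raise ValueError(f"unexpected fields (possible hallucination): {unexpected}")
    result = {}
    for field, expected_type in ALLOWED_FIELDS.items():
        value = raw.get(field)  # missing fields stay None instead of being invented
        if value is not None and not isinstance(value, expected_type):
            raise ValueError(f"{field}: expected {expected_type.__name__}")
        result[field] = value
    return result

validate_extraction({"invoice_number": "INV-7", "total_amount": 12.5})
```

Because the output space is a closed set of fields, the model cannot smuggle in invented attributes; at worst a field stays empty.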
Model pinning: Model versions are fixed per environment and validated in the development environment before going into production.
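The promotion rule described above can be sketched as a simple release check (version strings and environment names here are invented for illustration): a model version may only be pinned for production if it has already passed validation in development.

```python
# Hypothetical pinned model versions per environment.
PINNED_MODELS = {
    "development": "extractor-v2.3.1",
    "production": "extractor-v2.3.0",
}

# Versions that have passed the development validation gate.
VALIDATED_IN_DEV = {"extractor-v2.3.0", "extractor-v2.3.1"}

def can_promote_to_production(version: str) -> bool:
    """Refuse to promote any model version that was not validated in dev first."""
    return version in VALIDATED_IN_DEV

# The currently pinned production model must always satisfy the gate.
assert can_promote_to_production(PINNED_MODELS["production"])
```

Pinning exact versions per environment means production behavior cannot drift silently when a model provider ships an update.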
Output sanitization: LLM outputs are validated before storage and display. Automatic escaping prevents injection of harmful content.
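As a minimal illustration of the escaping step (using Python's standard `html.escape`; the actual sanitization stack is not specified in this document), markup embedded in an LLM answer is rendered inert before display:

```python
import html

def sanitize_output(llm_text: str) -> str:
    """Escape LLM output before storage/display so injected markup cannot execute."""
    return html.escape(llm_text, quote=True)

sanitize_output('<script>alert("x")</script>')
# → '&lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;'
```

Escaping at the output boundary protects the UI even if a hallucinated or adversarial document tricks the model into emitting active content.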
Audit trail: Every AI result is linked to the processed document, model version, and pipeline stage, enabling full traceability.
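A sketch of what such a traceability record could contain (the field and stage names are assumptions, not amaise's internal data model): every AI result carries its provenance so any output can be traced back to its inputs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """Immutable provenance for one AI result (hypothetical field set)."""
    document_id: str     # the processed document
    model_version: str   # the pinned model that produced the result
    pipeline_stage: str  # where in the pipeline the result was created
    result: dict         # the AI output itself
    created_at: str      # UTC timestamp for ordering and audits

def record_result(document_id: str, model_version: str,
                  stage: str, result: dict) -> AuditRecord:
    return AuditRecord(document_id, model_version, stage, result,
                       datetime.now(timezone.utc).isoformat())

record_result("doc-42", "extractor-v2.3.0", "field-extraction",
              {"total_amount": 12.5})
```

With this linkage in place, a suspect value can be traced to the exact document, model version, and stage that produced it.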
Manual review: On customer request, manual review or sampling stages can be added to the pipeline to spot-check AI results before they are used downstream.
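One common way to implement such a sampling stage (a sketch under stated assumptions, not amaise's confirmed mechanism) is to route a fixed, deterministic share of documents to human reviewers:

```python
import hashlib

def needs_manual_review(document_id: str, sample_rate: float = 0.1) -> bool:
    """Deterministically route a fixed share of documents to human review.

    Hashing the document ID (rather than calling random()) keeps the
    decision reproducible: the same document is always routed the same way,
    which matters for audits.
    """
    digest = hashlib.sha256(document_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") / 2**32  # uniform in [0, 1)
    return bucket < sample_rate

flagged = [d for d in ("doc-1", "doc-2", "doc-3") if needs_manual_review(d)]
```

Sampled human review gives an ongoing measurement of hallucination rates without requiring every document to be checked by hand.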