Skip to main content

How does amaise protect against prompt injection attacks?

Written by amaise Support

How does amaise protect against prompt injection attacks?

amaise uses multiple measures to prevent prompt injection attacks through malicious content in documents:

  • Server-side prompt construction: LLM prompts are created server-side using controlled templates. Users do not have direct access to prompt creation.

  • Separation of instructions and data: Document contents are passed as data context, clearly separated from system instructions.

  • Structured extraction: The pipeline performs schema-bound extraction tasks — no free text generation. This significantly limits the potential impact of injection attempts.

  • Input validation: Validation at the API level for all incoming requests.

  • Output validation: LLM outputs are validated and sanitized before storage and display.

  • No user-provided URLs: The application does not accept user-supplied URLs for server-side fetching, preventing data exfiltration via SSRF.

  • Tenant-isolated outputs: AI results are strictly isolated per tenant. Queries are restricted to the respective tenant context, so a prompt injection attempt in one document cannot access data from other tenants or cases.

Did this answer your question?