AI Processing & Data Privacy

Our commitment to ensuring your data is processed securely, privately, and transparently.

At Veritas Lab, our architecture is built on modern cloud-native principles, giving you full control over your data and over how AI models interact with it.

Where Does the AI Run?

In the Veritas Lab SaaS model, each tenant (client) operates in a dedicated, isolated environment using containerized infrastructure (Docker) deployed within your AWS account or a dedicated sandbox.

Component            | Location (Per Tenant)          | Notes
spaCy, Legal-BERT    | Inside Docker container        | Installed locally inside your tenant's container. No external calls.
OpenAI (GPT-4)       | External API (OpenAI Cloud)    | Only used if you opt to enable GPT-4 powered features.
Claude (Anthropic)   | External API (Anthropic Cloud) | Used optionally for advanced insights/summaries.
scikit-learn + Rules | Inside Docker container        | All custom anomaly detection and logic runs locally.

SaaS Architecture Highlights

Single-Tenant by Design

Each client has their own set of Docker containers, databases, and storage—ensuring data isolation and compliance readiness.
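As an illustration, a per-tenant stack can be described with a Compose file like the sketch below. The service names, image tags, and network/volume names are hypothetical, not Veritas Lab's actual configuration; the point is that every resource is scoped to a single tenant.

```yaml
# Hypothetical per-tenant stack: each client runs an isolated copy
# of these services, with no networks or volumes shared across tenants.
services:
  ai-engine:
    image: veritaslab/ai-engine:latest   # hypothetical image name
    networks: [tenant_net]               # tenant-scoped network
  db:
    image: postgres:16
    volumes:
      - tenant_data:/var/lib/postgresql/data   # tenant-scoped storage
    networks: [tenant_net]

networks:
  tenant_net: {}    # isolated per-tenant network

volumes:
  tenant_data: {}   # isolated per-tenant volume
```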

Containerized AI Engine

All core AI tasks (NER, clause matching, verification logic) run inside secure Docker containers.
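To illustrate the kind of logic that runs entirely in-container, here is a minimal rule-based clause matcher in Python. The clause patterns and function name are illustrative only, not Veritas Lab's actual rule set; what matters is that nothing in it makes a network call.

```python
import re

# Illustrative clause patterns; a real rule set would be far richer.
CLAUSE_PATTERNS = {
    "confidentiality": re.compile(r"\bconfidential(ity)?\b", re.IGNORECASE),
    "termination": re.compile(r"\bterminat(e|ion)\b", re.IGNORECASE),
    "indemnification": re.compile(r"\bindemnif(y|ication)\b", re.IGNORECASE),
}

def match_clauses(text: str) -> list[str]:
    """Return the clause types detected in a document.

    Runs entirely in-process: no network calls, so the document
    never leaves the container.
    """
    return [name for name, pat in CLAUSE_PATTERNS.items() if pat.search(text)]

sample = "Each party shall keep Confidential Information secret until termination."
print(match_clauses(sample))  # -> ['confidentiality', 'termination']
```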

Optional LLM Usage

Calls to OpenAI or Claude are made only when explicitly requested, and this behavior is controlled at runtime.
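A sketch of what runtime-controlled LLM access can look like. The flag names and the stub clients are assumptions for illustration, not Veritas Lab's actual API; the structure shows that external calls happen only behind an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass
class TenantConfig:
    """Hypothetical per-tenant feature flags, resolved at runtime."""
    enable_gpt4: bool = False
    enable_claude: bool = False

def summarize(document: str, config: TenantConfig) -> str:
    """Route a summarization request.

    External LLM APIs are used only when the tenant has opted in;
    otherwise the document stays inside the container.
    """
    if config.enable_gpt4:
        return call_openai(document)      # external API (opt-in only)
    if config.enable_claude:
        return call_anthropic(document)   # external API (opt-in only)
    return local_summary(document)        # default: never leaves the container

# Stubs standing in for real API clients / local models.
def call_openai(doc: str) -> str:
    return "gpt4:" + doc[:10]

def call_anthropic(doc: str) -> str:
    return "claude:" + doc[:10]

def local_summary(doc: str) -> str:
    return "local:" + doc[:10]

print(summarize("Quarterly report ...", TenantConfig()))  # local path by default
```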

Scalable on AWS

Containers are deployed on ECS, EKS, or EC2 (based on your setup), with the ability to scale horizontally based on document volume or concurrency.
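For example, horizontal scaling of an ECS service can be driven by a target-tracking policy such as the fragment below (a generic AWS Application Auto Scaling configuration shown for illustration; the metric choice and target value are arbitrary, and a production setup might instead track queue depth or request concurrency).

```json
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 120
}
```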

Need More Information?

Our team is available to discuss specific compliance requirements, provide additional documentation, or answer any questions about our AI privacy practices.