Our commitment to ensuring your data is processed securely, privately, and transparently.
At Veritas Lab, security, privacy, and transparency are built into how your data is processed. Our architecture follows modern cloud-native principles, giving you full control over your data and over how AI models interact with it.
In the Veritas Lab SaaS model, each tenant (client) operates in a dedicated, isolated environment built on containerized infrastructure (Docker) and deployed within your AWS account or a dedicated sandbox.
| Component | Location (Per Tenant) | Notes |
|---|---|---|
| spaCy, Legal-BERT | Inside Docker container | Installed locally inside your tenant's container. No external calls. |
| OpenAI (GPT-4) | External API (OpenAI Cloud) | Used only if you opt in to GPT-4-powered features. |
| Claude (Anthropic) | External API (Anthropic Cloud) | Used optionally for advanced insights/summaries. |
| scikit-learn + Rules | Inside Docker container | All custom anomaly detection and logic runs locally. |
Each client has their own set of Docker containers, databases, and storage, ensuring data isolation and compliance readiness.
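A per-tenant stack of this kind can be sketched with a compose file along the following lines. The service and image names are illustrative only, not the actual Veritas Lab manifests:

```yaml
# Illustrative per-tenant stack: one isolated set of containers,
# one database, and one storage volume per client.
services:
  nlp-worker:            # spaCy / Legal-BERT run here; no external calls needed
    image: veritaslab/nlp-worker:latest      # hypothetical image name
    networks: [tenant-net]
  anomaly-engine:        # scikit-learn + custom rules, runs entirely locally
    image: veritaslab/anomaly-engine:latest  # hypothetical image name
    networks: [tenant-net]
  db:
    image: postgres:16
    volumes:
      - tenant-data:/var/lib/postgresql/data # per-tenant storage
    networks: [tenant-net]

networks:
  tenant-net: {}         # private network, not shared across tenants

volumes:
  tenant-data: {}        # per-tenant volume, destroyed with the tenant
```

Because each tenant gets its own network and volume, no container, connection, or file is ever shared between clients.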
- All core AI tasks (NER, clause matching, verification logic) run inside secure Docker containers.
- Calls to OpenAI or Claude are made only when you request them, and are controlled at runtime.
- Containers are deployed on ECS, EKS, or EC2 (depending on your setup) and can scale horizontally with document volume or concurrency.
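On ECS, horizontal scaling of this kind is typically wired up through AWS Application Auto Scaling: the service's `ecs:service:DesiredCount` is registered as a scalable target, and a target-tracking policy adds or removes containers as load changes. A minimal policy configuration (values illustrative) looks like:

```json
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 120
}
```

Here ECS keeps average CPU near 70%, scaling out quickly under a burst of documents and scaling in more conservatively as load falls.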
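The runtime gating described above can be sketched as a simple per-tenant feature flag: external APIs are reached only when the tenant has opted in, otherwise everything stays inside the local container. The names below (`TenantConfig`, `summarize`, the stub call functions) are illustrative, not the actual Veritas Lab API:

```python
from dataclasses import dataclass


@dataclass
class TenantConfig:
    """Per-tenant opt-in flags, checked at runtime on every request."""
    enable_gpt4: bool = False    # opt-in for OpenAI GPT-4 features
    enable_claude: bool = False  # opt-in for Anthropic Claude features


def local_summary(text: str) -> str:
    # Placeholder for the local pipeline (spaCy / Legal-BERT inside the container).
    return text[:200]


def call_openai(text: str) -> str:
    # Stub representing a network call that leaves the tenant boundary.
    raise RuntimeError("external call to OpenAI attempted")


def call_anthropic(text: str) -> str:
    # Stub representing a network call that leaves the tenant boundary.
    raise RuntimeError("external call to Anthropic attempted")


def summarize(text: str, config: TenantConfig) -> str:
    """Route a request: external APIs are used only when the tenant opted in."""
    if config.enable_gpt4:
        return call_openai(text)
    if config.enable_claude:
        return call_anthropic(text)
    return local_summary(text)  # default: data never leaves the container
```

With the default configuration, `summarize("...", TenantConfig())` resolves entirely locally; no code path can reach an external API unless a flag is explicitly enabled.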
Our team is available to discuss specific compliance requirements, provide additional documentation, or answer any questions about our AI privacy practices.