Use any AI without exposing sensitive data.

Two API calls. We strip PII before your prompt reaches the LLM and restore it after the response comes back. Your customers' names never touch external servers.

# Before: Risky — PII goes to external API
response = openai.chat("Summarize John Smith's file at john@acme.com")

# After: Safe — PII never leaves your control
safe = ambientmeta.sanitize("Summarize John Smith's file at john@acme.com")
# → "Summarize [PERSON_1]'s file at [EMAIL_1]"

response = openai.chat(safe.text)  # LLM never sees real data
final = ambientmeta.rehydrate(response, safe.session_id)
# → Original names and emails restored

Your AI features are stuck in legal review.

Every company wants AI. Most can't ship it. Sound familiar?

🚫 Legal won't approve sending customer data to OpenAI

📋 HIPAA, PCI, GDPR all say "don't send PII to third parties"

Building PII detection yourself takes 3-6 months

🏃 Your competitors are shipping while you wait

How It Works

Two API calls. That's it.

1. Sanitize
Send us text and we replace PII with placeholders.

"Email John at john@acme.com"
→ "Email [PERSON_1] at [EMAIL_1]"

2. Call Any LLM
Send the safe text to Claude, GPT-4, Gemini—whatever you want.

response = llm.complete(safe.text)

3. Rehydrate
Send us the response and we restore the original entities.

"I'll contact [PERSON_1]"
→ "I'll contact John"

Your App → Sanitize → External LLM → Rehydrate → Your App
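
End to end, the loop is a few lines of Python. A minimal sketch, assuming the SDK shapes from the example at the top of the page and a stand-in for whatever LLM call you use:

import ambientmeta  # hypothetical SDK import, following the shapes shown above

def call_llm(prompt: str) -> str:
    """Stand-in for any LLM call (Claude, GPT-4, Gemini, a local model)."""
    raise NotImplementedError

# 1. Sanitize: placeholders go out, the PII stays home
safe = ambientmeta.sanitize("Email John at john@acme.com")
# safe.text → "Email [PERSON_1] at [EMAIL_1]"

# 2. Call any LLM with the placeholder text
response = call_llm(safe.text)

# 3. Rehydrate: placeholders are swapped back for the originals
final = ambientmeta.rehydrate(response, safe.session_id)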

What We Detect

Standard entities out of the box. Plus custom patterns for your org-specific data.

Names: John Smith, Dr. Jane Doe
Emails: john@acme.com
Phone Numbers: (555) 123-4567
SSNs: 123-45-6789
Credit Cards: 4532-1234-5678-9012
Locations: NYC, 123 Main St

Need to detect employee IDs or project codes? Create custom patterns →
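
As an example, registering an employee-ID pattern could look like the sketch below; the endpoint path comes from the FAQ, but the base URL and request fields are assumptions rather than the documented schema:

import requests

# Sketch only: the base URL and JSON fields are assumed, not documented.
requests.post(
    "https://api.ambientmeta.com/v1/patterns",  # assumed base URL
    headers={"Authorization": "Bearer am_live_xxx"},
    json={
        "name": "EMPLOYEE_ID",
        "regex": r"EMP-\d{6}",          # e.g. EMP-004213
        "placeholder": "[EMPLOYEE_ID]",
    },
)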

Works with the tools you already use.

Native integrations for popular frameworks. Model-agnostic by design.

LangChain: pip install langchain-ambientmeta
LlamaIndex: pip install llama-index-ambientmeta
OpenAI: drop-in wrapper included
Anthropic: drop-in wrapper included

from langchain_ambientmeta import PrivacyGateway

gateway = PrivacyGateway(api_key="am_live_xxx")
safe_llm = gateway.wrap(your_llm)

# Use normally — PII handled automatically
response = safe_llm.invoke("Summarize the employee file")
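
The OpenAI and Anthropic wrappers follow the same idea. A sketch of what usage could look like, assuming a hypothetical ambientmeta.openai module; the real import path and constructor may differ:

# Hypothetical wrapper usage: the module path and parameters are assumptions.
from ambientmeta.openai import OpenAI  # wraps the official OpenAI client

client = OpenAI(api_key="sk-...", ambientmeta_key="am_live_xxx")

# Same chat-completions surface; sanitize and rehydrate run transparently.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize John Smith's file"}],
)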

Simple, usage-based pricing.

Start free. Pay as you grow. No credit card required.

Free: $0 (for evaluation)
  • 1,000 requests/month
  • All entity types
  • Community support
Get Started

Team: $49/mo + usage
  • 5 team members
  • Shared dashboard
  • Priority support
  • Audit logs
Contact Sales

Self-Hosted: $500/mo (your infrastructure)
  • Deploy anywhere
  • Air-gapped option
  • Same API
  • Dedicated support
Contact Sales

Finally, AI your security team will approve.

Data sovereignty without building infrastructure.

🔒 PII Never Leaves: Sensitive data stays in your control. The LLM only sees placeholders.

🌍 Data Sovereignty: Self-hosted option for maximum control. Deploy in any region.

📋 Audit Ready: Detailed logs for every request. SOC 2 Type II in progress.

Frequently Asked Questions

What LLMs does this work with?

All of them. Claude, GPT-4, Gemini, Llama, Mistral—we're model-agnostic. We work at the text level, so any LLM that accepts text input works.

How accurate is the detection?

95%+ for standard entities like names and emails. Custom patterns can achieve 99%+. Our feedback system continuously improves accuracy.

What about HIPAA/PCI/GDPR?

Designed for compliance. PII never touches external APIs. Self-hosted option for maximum control. SOC 2 Type II certification in progress.

Can I define custom entity types?

Yes. Use the /patterns endpoint to add detection for employee IDs, project codes, or any org-specific identifiers.

What's the latency?

Under 20ms p50 for sanitize and under 5ms for rehydrate. Fast enough that users won't notice.

Is there a self-hosted option?

Yes. Same API, your infrastructure. Single Docker image. Starts at $500/month.
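
In sketch form, deployment is one command; the image name and environment variable here are assumptions, not published values:

docker run -p 8080:8080 \
  -e AMBIENTMETA_LICENSE_KEY="..." \
  ambientmeta/gateway:latest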

Ready to ship AI features safely?

Get your API key in 30 seconds. No credit card required.

Get Free API Key