LangChain Integration

Use LangChain with any LLM without exposing PII to the model provider.

Install

pip install langchain-ambientmeta

Quick Start

from langchain_ambientmeta import PrivacyGateway
from langchain_openai import ChatOpenAI

# Initialize
gateway = PrivacyGateway(api_key="am_live_xxx")
llm = ChatOpenAI(model="gpt-4")

# Wrap your LLM
safe_llm = gateway.wrap(llm)

# Use normally — PII is automatically handled
response = safe_llm.invoke("Summarize John Smith's file at john@acme.com")
# OpenAI never sees the real PII

That's it! The wrapper sanitizes your input, calls the LLM with the safe text, and rehydrates the response with the original values.

How It Works

  1. Your input is sanitized before reaching the LLM
  2. The LLM processes the sanitized text
  3. The response is rehydrated with original entities
  4. You get back the complete response
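
Under the hood this is a mask-and-restore round trip. The toy sketch below illustrates the idea with a single email regex; it is not the gateway's real implementation, detector set, or placeholder format.

import re

def toy_sanitize(text):
    # Toy stand-in for steps 1-2: swap detected PII for placeholder tokens.
    # The real gateway uses its own detectors and placeholder format.
    mapping = {}
    def replace(match):
        token = f"<EMAIL_{len(mapping) + 1}>"
        mapping[token] = match.group(0)
        return token
    safe_text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", replace, text)
    return safe_text, mapping

def toy_rehydrate(text, mapping):
    # Toy stand-in for steps 3-4: put the original values back.
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

safe_text, mapping = toy_sanitize("Summarize the file at john@acme.com")
print(safe_text)                          # Summarize the file at <EMAIL_1>
print(toy_rehydrate(safe_text, mapping))  # Summarize the file at john@acme.com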

With Chains

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["query"],
    template="Answer this question: {query}"
)

chain = LLMChain(llm=safe_llm, prompt=prompt)
result = chain.run("What is John Smith's email?")
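
LLMChain and chain.run still work but are deprecated in recent LangChain releases. Assuming the wrapped model behaves like any other LangChain chat model (which the example above already relies on), the same chain can be written in the LCEL pipe style:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Answer this question: {query}")
chain = prompt | safe_llm                  # same pipeline, expressed as a runnable
result = chain.invoke({"query": "What is John Smith's email?"})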

With RAG

from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=safe_llm,
    retriever=your_retriever
)

result = qa.run("Find information about employee EMP-123456")
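
Because the wrapper sanitizes everything sent to the model, PII inside the retrieved documents should be masked along with the question before the provider sees it. If you are on a newer LangChain release where RetrievalQA is deprecated, the wrapped model should drop into create_retrieval_chain the same way.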

Configuration

gateway = PrivacyGateway(
    api_key="am_live_xxx",
    entities=["PERSON", "EMAIL", "SSN"],  # Optional: specific entities only
    custom_patterns=True  # Include your custom patterns
)
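
When entities is set, only those entity types are detected and masked; everything else passes through unchanged. custom_patterns=True additionally applies any custom patterns you have defined alongside the built-in entity types.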