Product Launch
Stop Sending Your .env to OpenAI: A Privacy Layer for OpenCode
Mar 24, 2026
By Tom Jordi Ruesch

AI coding agents are the most productive and the most dangerous tools on your machine.
They read your files, execute shell commands, write infrastructure code, and reason about your entire project context. To do any of this well, they need access to the real stuff: API keys, database credentials, JWTs, connection strings. The kind of values that live in .env files and should never leave your device.
But they do leave your device. Every single message you send to your coding agent (including the one where you pasted your Stripe secret key to debug a webhook) is transmitted to an LLM provider's inference endpoint. The model sees everything.
This is the fundamental tension: the agent needs your secrets to be useful (or at least to be autonomous), but the LLM doesn't need to see your secrets to reason about them.
We built a plugin to resolve this. Today we're releasing @rehydra/opencode, a privacy layer for OpenCode that anonymizes secrets before they reach the LLM and restores them before any tool executes locally.
What the LLM Actually Sees
Let's make this concrete. You ask your agent to set an environment variable:
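Say you send a message like this (the key is Stripe's well-known documentation example, not a live credential):

```shell
export STRIPE_KEY=sk_live_4eC39HqLyjWDarjtT1zdp7dc
```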
Without the plugin, that secret hits the LLM provider's API verbatim. With it, the LLM receives:
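Something along these lines — the exact placeholder syntax here is illustrative; the point is that the value is replaced with a typed, indexed token:

```
export STRIPE_KEY=<SECRET_API_KEY id="1"/>
```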
The model treats the placeholder as the real value. It reasons about it, generates commands with it, references it in follow-up messages. When it produces a tool call like:
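For example, a bash tool call of roughly this shape (the payload format below is illustrative, not OpenCode's exact wire format):

```json
{
  "tool": "bash",
  "command": "export STRIPE_KEY=<SECRET_API_KEY id=\"1\"/>"
}
```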
The plugin intercepts the command before it executes on your machine and restores the real credential. Your shell runs export STRIPE_KEY=sk_live_4eC39HqLyjWDarjtT1zdp7dc. The LLM never saw it.
This is the same principle behind Semantic Redaction: typed placeholders that preserve structure and meaning, not generic [REDACTED] tokens that turn your conversation into garbage.
Session Consistency Matters
If you've read our piece on why context matters for PII, you know that naive redaction (replacing everything with ***) destroys an LLM's ability to reason. The same principle applies here.
Each OpenCode session gets its own Rehydra session with consistent mappings. The same API key always maps to the same placeholder. If your database password and your Redis password are different, the LLM sees two distinct tokens. If you reference the same secret three times across different messages, it's the same <PII id="1"/> every time.
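Conceptually, the per-session mapping behaves like this (an illustrative sketch, not the plugin's internal API):

```typescript
// Illustrative: a per-session map that always hands back the same
// placeholder for the same secret value, and a fresh placeholder
// for each distinct value.
const mapping = new Map<string, string>();
let nextId = 1;

function placeholderFor(secret: string): string {
  if (!mapping.has(secret)) {
    mapping.set(secret, `<PII id="${nextId++}"/>`);
  }
  return mapping.get(secret)!;
}

// The same value maps to the same token every time...
placeholderFor("hunter2"); // <PII id="1"/>
placeholderFor("hunter2"); // <PII id="1"/> — stable across messages
// ...and distinct values get distinct tokens.
placeholderFor("s3cr3t");  // <PII id="2"/>
```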
This consistency is what lets the model maintain referential integrity across a long conversation. It can reason about which secret goes where without ever knowing what the secret actually is.
What Gets Caught
The plugin uses Rehydra's full detection engine — the same hybrid Regex + ONNX pipeline, adapted for the coding agent use case.
Secrets (pattern-based): API keys from major providers (OpenAI, Anthropic, GitHub, Stripe, AWS, Slack), JWTs, PEM private keys, connection strings, AWS credentials.
Environment variables: The plugin reads your .env files and matches those exact values anywhere they appear in conversation, even if they don't follow a known pattern. If it's in your .env, it's scrubbed.
Structured PII: Emails, phone numbers, credit card numbers, IBANs, tax IDs. Twenty-eight types in total.
We disable URL and IP address detection by default because coding agents work with these constantly (you probably don't want localhost:3000 redacted). You can flip that with a single config option.
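For instance, something like this in your config (the option names here are illustrative; check the docs for the exact keys):

```json
{
  "rehydra": {
    "detectUrls": true,
    "detectIpAddresses": true
  }
}
```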
Rehydra can also redact unstructured PII such as names, places, and birthdays. For the OpenCode plugin we disable this by default as well: the local NER model it requires adds too much latency and overhead for too little value in a coding context. Here too, you can switch it on in the configuration.
Setup
Add to opencode.json:
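Assuming a standard OpenCode setup, registering the plugin looks roughly like this (consult the OpenCode docs for the exact schema in your version):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@rehydra/opencode"]
}
```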
That's it. The plugin reads .env in your project root and starts scrubbing immediately. No separate configuration file, no API key, no account.
For teams with specific requirements, you can customize the behavior:
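For example, you might point the plugin at additional env files or enable NER-based detection (the option names below are illustrative, not the plugin's confirmed schema):

```json
{
  "plugin": ["@rehydra/opencode"],
  "rehydra": {
    "envFiles": [".env", ".env.local"],
    "enableNer": true
  }
}
```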
What This Doesn't Do
This isn't a VPN. It doesn't mask your identity or anonymize your network traffic.
It isn't a compliance certification. It's a technical control — a deterministic layer that prevents specific sensitive values from leaving your machine during AI-assisted development.
And it isn't destructive. Unlike regex scrubbing for analytics pipelines, this is fully reversible. Your data is abstracted during transit and restored for local execution. Nothing is lost (that's the whole point of Rehydra).
The Bigger Picture
As coding agents evolve from autocomplete into autonomous systems that deploy infrastructure, manage secrets, and interact with production APIs, the security surface grows with them. We can't keep sending raw credentials to third-party inference endpoints and hoping for the best.
Security doesn't have to mean crippling your tools. By applying the same semantic anonymization we use for translation workflows, we let coding agents reason about your secrets without ever seeing them.
The agent stays powerful. Your secrets stay local.
GitHub: github.com/rehydra-ai/rehydra-sdk
NPM: npm install @rehydra/opencode
Docs: docs.rehydra.ai