Catch secrets before they leave the perimeter.
A regex pass over every prompt body, applied at the gateway, before the LLM call goes out. AWS access keys, GitHub PATs, Slack tokens, Stripe secrets, JWTs, RSA private keys: recognised, then blocked or tokenised, and audited.
Apps leak credentials to LLMs constantly.
Pasted-in stack traces with environment variables. Logs forwarded "for the AI to debug". Config files dropped into a chat input "to ask Claude about". Build outputs that include CI tokens. The pattern is everywhere — and once a credential leaves your perimeter it's burned, no matter how trustworthy the receiving provider claims to be.
The Secret Scanner is the last line of defence before the egress hop. If a known credential pattern shows up in the request body, the gateway either rejects the call (block mode) or rewrites the secret as a reversible token (redact mode) so the LLM still has context but the credential never goes upstream.
What your developer pastes into Claude Code:

    My deploy is failing with this error:
      AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      Error: invalid signature

Block mode (default):

    <- 422 Unprocessable Content
    {
      "error": "AWS_SECRET_ACCESS_KEY pattern detected in request",
      "detector": "aws.secret_access_key",
      "action": "blocked"
    }

Redact mode:

      AWS_SECRET_ACCESS_KEY=[[AWS_SECRET_001]]
      Error: invalid signature

    # LLM gets the context, the secret never leaves the gateway.
18 built-in patterns, plus your own.
Hand-tuned regex per pattern with sensitivity calibrated to minimise false positives. Add custom detectors for in-house token formats.
Cloud providers
AWS access key + secret key, GCP service-account JSON, Azure tenant + client secret.
Git hosting
GitHub PAT (ghp_, gho_, ghs_, fine-grained), GitLab tokens, Bitbucket app passwords.
SaaS APIs
Slack bot + user tokens, Stripe live + test keys, OpenAI sk-…, Anthropic, SendGrid, Twilio.
Generic credentials
JWT (header + payload + signature shape), RSA / EC private keys, SSH keys, generic high-entropy strings.
Database URLs
PostgreSQL, MySQL, MongoDB, Redis connection strings with passwords inline.
Custom patterns
YAML-defined regex + label per project — for in-house token shapes the catalogue doesn't cover yet.
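The categories above reduce to one regex per credential shape. A hedged sketch of what such a detector table might look like, using publicly known token prefixes; these simplified regexes are illustrative only and will miss variants the hand-tuned patterns catch:

```python
import re

# Illustrative detector shapes (not the product's actual patterns).
DETECTORS = {
    "aws.access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github.pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "slack.bot_token": re.compile(r"\bxoxb-[0-9A-Za-z-]{20,}"),
    "jwt": re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
}

def detect(body: str) -> list[str]:
    """Return the names of every detector that matches anywhere in the body."""
    return [name for name, rx in DETECTORS.items() if rx.search(body)]
```

A YAML-defined custom pattern simply adds one more compiled entry to the same table.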
# Per-project policy — switch each detector independently
mode: block              # or "redact"; default block
detectors:
  aws.access_key: on
  aws.secret_access_key: on
  github.pat: on
  github.fine_grained: on
  slack.bot_token: on
  slack.user_token: on
  stripe.live_key: on
  stripe.test_key: off
  jwt: on
  rsa_private_key: on
  ssh_private_key: on
custom:
  - name: internal_api_token
    regex: 'acme_(live|test)_[A-Za-z0-9]{32}'
  - name: vault_path
    regex: 'kv/data/secret/[a-z\-]+'
Block by default. Redact when context matters.
Block mode rejects the request with a 422 and surfaces the matched detector in the response and audit log. The user sees an explicit error; no LLM call happens.
Redact mode rewrites the secret with a reversible token before the LLM call. The model sees the surrounding context but not the credential. The audit log captures what was redacted, where it appeared in the body, and which token authenticated the request.
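Reversible tokenisation can be sketched as a substitution that keeps a private token-to-secret map inside the gateway. The `Redactor` class and token format below are illustrative assumptions, not the actual scheme:

```python
import re

class Redactor:
    """Replace matched secrets with reversible tokens.

    Illustrative sketch: the vault here is an in-memory dict; in practice
    the mapping would live in the gateway's own store and never go upstream.
    """
    def __init__(self):
        self._vault = {}   # token -> original secret
        self._count = 0

    def redact(self, body: str, pattern: re.Pattern, label: str) -> str:
        def _sub(m):
            self._count += 1
            token = f"[[{label}_{self._count:03d}]]"
            self._vault[token] = m.group()
            return token
        return pattern.sub(_sub, body)

    def restore(self, text: str) -> str:
        # Re-insert originals, e.g. when post-processing the LLM's reply.
        for token, secret in self._vault.items():
            text = text.replace(token, secret)
        return text
```

Because each token is unique, the gateway can restore the original value in the response path without the model ever having seen it.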
Both modes log every detection — even when nothing was blocked, you can see how often credentials slip into prompts on your team. That metric alone is uncomfortable.
Stop credentials from ever leaving.
Your egress is a single point — make it work for you. Built into every project type, including the Agent Proxy mode for coding agents.