Product type: agent_proxy

Agent Proxy

Point Claude Code, Codex, Cursor, Aider, Continue, and Cline at PromptGate: the same provider routing, plus reversible PII redaction and a secret scanner that is secure by default on the egress path.

WHY IT MATTERS

Coding agents need an egress gateway

Without a proxy, every developer's prompts (with their customer data, internal hostnames, and accidental secrets) leave your network. The Agent Proxy is the choke point that makes that visible AND containable.

Four API shapes on one project

OpenAI Chat Completions, OpenAI Responses, Anthropic Messages, Embeddings — all on one project. Cross-routing translates between shapes (Anthropic in, OpenAI out, Anthropic back).
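Concretely, under the base URLs shown in Setup below, the four shapes resolve to sibling routes on one project. A hypothetical layout (the suffixes are the standard SDK paths; confirm the exact routes on your Setup page):

```
POST https://promptgate.your.co/api/<uuid>/v1/chat/completions   # OpenAI Chat Completions
POST https://promptgate.your.co/api/<uuid>/v1/responses          # OpenAI Responses
POST https://promptgate.your.co/api/<uuid>/v1/messages           # Anthropic Messages
POST https://promptgate.your.co/api/<uuid>/v1/embeddings         # Embeddings
```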

Reversible PII redaction

The LLM sees [[EMAIL_001]]; your developer sees john@acme.com. Tool-call arguments are restored too — agents stay useful, sensitive data stays home.
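The round trip can be sketched in plain shell. This is illustrative only: a naive regex stands in for the gateway's actual detector, and the placeholder format is copied from the example above.

```shell
# Illustrative sketch only -- a naive sed regex stands in for the real detector.
prompt='Ping john@acme.com when the deploy finishes'

# Outbound: the provider sees the placeholder, not the address.
redacted=$(printf '%s' "$prompt" | sed -E 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+/[[EMAIL_001]]/')
echo "$redacted"    # Ping [[EMAIL_001]] when the deploy finishes

# Inbound: the gateway restores the original value for the developer.
restored=$(printf '%s' "$redacted" | sed 's/\[\[EMAIL_001\]\]/john@acme.com/')
echo "$restored"    # Ping john@acme.com when the deploy finishes
```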

Secret scanner

18 well-known credential patterns. In block mode, the gateway rejects the request with a loud 422 and never echoes the match, so the secret can't re-leak via error logs.
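A sketch of block-mode behaviour, assuming one of the 18 patterns is the AWS access key ID shape (AKIA plus 16 uppercase alphanumerics); the real scanner is not a single grep:

```shell
# Sketch: reject on match, and never print the matched secret itself.
body='deploy with key AKIAIOSFODNN7EXAMPLE please'   # AWS docs' example key ID
if printf '%s' "$body" | grep -Eq 'AKIA[0-9A-Z]{16}'; then
  echo 'HTTP 422 {"error":"secret_detected"}'        # the match is not echoed
fi
```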

Connected-agents dashboard

Connected agents are inferred from gateway_logs request bodies. The Setup page shows "12 Cursor / 8 Claude Code / 4 Aider" without manual instrumentation.

Per-developer tokens

Mint pg_live_* tokens per user via the Management API. Each carries its own scopes, budgets, and rate limits — a runaway script hits a 429 instead of draining the team budget.

Streaming on every shape

Native SSE for Anthropic Messages, Responses, and Chat Completions. Streaming is restoration-safe under reversible redaction (no token boundaries are lost across chunks).

SETUP

One env var per tool

The Setup page renders these snippets for every supported tool. No code change in any agent — they keep working exactly as before, except now every request flows through your gateway.

  • Claude Code → ANTHROPIC_BASE_URL
  • Codex CLI → OPENAI_BASE_URL
  • Cursor / Aider / Continue / Cline → OpenAI base override in their settings
  • Anthropic SDK / OpenAI SDK in custom Python / TS scripts → same env vars
# ~/.zshrc
# Claude Code
export ANTHROPIC_BASE_URL="https://promptgate.your.co/api/<uuid>"
export ANTHROPIC_AUTH_TOKEN="pg_live_..."

# Codex CLI
export OPENAI_BASE_URL="https://promptgate.your.co/api/<uuid>/v1"
export OPENAI_API_KEY="pg_live_..."

# Aider
export OPENAI_API_BASE="https://promptgate.your.co/api/<uuid>/v1"
export OPENAI_API_KEY="pg_live_..."
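For tools configured in JSON rather than env vars, the override looks like the following sketch for Continue (key names assume Continue's `config.json` model format, and the model name is a placeholder; Cursor and Cline expose equivalent fields in their settings UI):

```
{
  "models": [
    {
      "title": "PromptGate",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://promptgate.your.co/api/<uuid>/v1",
      "apiKey": "pg_live_..."
    }
  ]
}
```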

DECISION HELPER

Pick this type when…

Use Agent Proxy when

  • Your team uses 2+ coding agents and you want one cost / audit / DLP dashboard across all of them
  • Compliance / legal asks "what did we send to LLM providers last quarter" and you need a real answer
  • You want each developer to have their own token + budget
  • You want to catch AWS keys / customer emails before they leave your perimeter

Pick something else when

  • You're shipping a single product feature backed by one LLM endpoint → AI Gateway
  • Your team uses only one agent and you don't need cross-agent observability → either type works; the simpler ai_wrapper type is sufficient
  • You want to proxy a non-LLM HTTP API → API Gateway

Stop scattering env vars.

One Agent Proxy project, one Setup page, every coding agent your team uses pointed in the same direction.

Install Community Edition
Claude Code recipe ↗