One gate for
every AI model.
PromptGate is a self-hosted AI gateway. Expose any provider through an OpenAI-compatible surface, route by provider:model, and observe every request — without locking yourself into a vendor.
The infrastructure layer between your app and the models.
Stop scattering API keys and SDKs across services. Consolidate, route, observe, and secure every call from one place.
OpenAI-compatible wrapper
Keep your existing SDKs. Swap the base URL and route to any provider with the provider:model convention.
Multi-provider routing
YAML routing rules pick a provider/model based on input size, schema presence, monthly spend, or time of day. Failover is transparent to the client.
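As a sketch of what such rules could look like (the schema below is illustrative, not PromptGate's documented format — field names and model IDs are assumptions):

```yaml
routes:
  - match:
      max_input_tokens: 4000        # small prompts go to a cheap model
    target: openai:gpt-4o-mini
  - match:
      schema_present: true          # structured-output requests
    target: openai:gpt-4o
  - match:
      monthly_spend_over_usd: 500   # budget guard kicks in
    target: anthropic:claude-3-5-haiku
  - default: anthropic:claude-3-5-sonnet

failover:
  on: [429, 500, 503]               # retried on another provider,
  to: openai:gpt-4o                 # invisible to the calling client
```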
Live observability
Per-request traces, token counts, real-time cost dashboard, p95 latency, anomaly alerts. Stream logs with filters like status:429 model:"gpt-4o".
Reversible PII redaction
Tokenize emails, phones, IBANs, SSNs (and custom regexes) before the LLM call; substitute back on response. The LLM sees [[EMAIL_001]], your user sees their real data.
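The mechanics are easy to picture. Here is a minimal Python sketch of reversible tokenization for the email case — illustrative only, not PromptGate's actual implementation (the regex and helper names are assumptions; the `[[EMAIL_001]]` token format matches the example above):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    """Replace each email with a numbered placeholder; return text + mapping."""
    mapping = {}
    def sub(match):
        token = f"[[EMAIL_{len(mapping) + 1:03d}]]"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(sub, text), mapping

def restore(text, mapping):
    """Substitute the original values back into the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt, mapping = redact("Contact ada@example.com for access.")
# The LLM only ever sees: "Contact [[EMAIL_001]] for access."
reply = restore(prompt, mapping)  # back to the real address
```

The same tokenize-then-restore loop generalizes to phones, IBANs, SSNs, and custom regexes: the mapping stays inside your gateway, so the provider never sees the raw values.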
Secret scanner
18 well-known credential patterns (AWS, GitHub, Slack, OpenAI, Stripe, JWT, private keys, …). Block-mode rejects with 422; redact-mode tokenizes via reversible redaction.
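Conceptually, the scanner is pattern matching plus a mode switch. A Python sketch with two of the publicly documented credential shapes (an illustrative subset — the pattern names, token format, and `scan` helper are assumptions, not PromptGate's API):

```python
import re

# Two well-known credential shapes (illustrative subset of the 18):
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan(text, mode="block"):
    """block-mode: report hits so the caller can reject with HTTP 422.
    redact-mode: tokenize each hit so it can be restored on the response."""
    if mode == "block":
        return [name for name, rx in PATTERNS.items() if rx.search(text)]
    for name, rx in PATTERNS.items():
        text = rx.sub(f"[[{name.upper()}]]", text)
    return text

print(scan("key=AKIAIOSFODNN7EXAMPLE"))  # → ['aws_access_key_id']
```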
Self-hosted & private
Your keys, your data, your VPC. Runs on Docker, Kubernetes, or bare metal. Nothing phones home — requests go straight from your instance to each provider, and credentials are encrypted at rest with your application key.
Your team's coding agents, behind one egress gateway.
Set ANTHROPIC_BASE_URL on Claude Code, OPENAI_BASE_URL on Codex CLI, the OpenAI base on Cursor / Aider / Continue / Cline — and every request flows through PromptGate. No code change in any agent.
- Four API shapes on one project: Chat Completions, Responses, Anthropic Messages, Embeddings
- One cost dashboard across all coding agents your team uses
- Reversible redaction + secret scanner default-secure for the egress flow
- Per-developer API tokens, with their own budgets and rate limits
# Point Claude Code at PromptGate
export ANTHROPIC_BASE_URL="https://promptgate.your.co/api/<uuid>"
export ANTHROPIC_AUTH_TOKEN="pg_live_..."

# Point Codex CLI at PromptGate
export OPENAI_BASE_URL="https://promptgate.your.co/api/<uuid>/v1"
export OPENAI_API_KEY="pg_live_..."

# That's it. Both agents now flow through your gateway,
# with full audit, cost tracking, and PII / secret guards.
Your existing code, minus the vendor lock.
PromptGate exposes the exact OpenAI Chat Completions surface. Point your SDK at your gateway, use provider:model, and you're done.
- No SDK migration — works with openai, langchain, llamaindex, curl
- Stream-through SSE, tool calls, and JSON mode pass transparently
- Per-endpoint credentials — rotate without touching client code
- Free-form aliases: model: "smart" → whatever you decide
from openai import OpenAI

client = OpenAI(
    base_url="https://promptgate.your.co/api/<uuid>/v1",
    api_key="pg_live_...",  # PromptGate token
)

# Route to any provider by name
resp = client.chat.completions.create(
    model="anthropic:claude-3-5-sonnet",
    messages=[{"role": "user", "content": "Hi"}],
)

# Or use an alias defined in the UI
resp = client.chat.completions.create(
    model="smart",  # → anthropic:claude-3-5-sonnet
    messages=[{"role": "user", "content": "Hi"}],
    stream=True,
)
Run the Community Edition.
One container. No license key, no phone-home. Bring your own provider credentials and start routing.
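A minimal launch could look like the following — the image name, port, and volume path are hypothetical placeholders for illustration (check the project's README for the real values); APP_KEY is the encryption key described in the FAQ below:

```shell
# Image name, port, and volume path are hypothetical — see the README.
# APP_KEY encrypts your provider credentials at rest.
docker run -d \
  -e APP_KEY="$(openssl rand -base64 32)" \
  -p 8080:8080 \
  -v promptgate-data:/data \
  promptgate/promptgate:latest
```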
Start free. Scale when you need to.
Community Edition is free forever and includes everything a single team needs. Cloud adds multi-tenant management, SSO, and a hosted control plane.
Community Edition
- Unlimited endpoints, tokens & projects
- All 8 providers + agent proxy mode
- Reversible PII redaction & secret scanner
- Live logs, metrics, cost dashboard, anomaly alerts
- Routing rules, response cache, evals, replay
- Single-admin · community support on GitHub
Cloud (coming soon)
- Everything in Community
- Multi-workspace & cross-team RBAC
- SSO (SAML, OIDC) & SCIM provisioning
- Unlimited log retention & managed upgrades
- SLA-backed uptime & priority support
- On-prem enterprise tier — get in touch
Frequently asked.
Is the Community Edition really free?
Which providers are supported out of the box?
Where do my provider API keys live?
In your own database, encrypted at rest with the APP_KEY you supply via environment variable. Keys never leave your perimeter — requests are proxied from your instance directly to each provider.
When will Cloud be available?
Ready to consolidate your AI stack?
Pull the image, point your SDK at your gateway, and route every call through one place.