v1.0 Community Edition · Docker image available now

One gate.
Every model, every API,
every agent.

The self-hosted gateway between your apps and the AI providers, HTTP APIs, MCP servers, and coding agents they call every day. One surface for auth, routing, audit, and cost — without locking yourself into a vendor.

Self-hosted (your keys, your data) · Source-available (BSL 1.1) · Runs on Docker, Kubernetes, bare metal
[Dashboard preview: promptgate.yourcompany.internal · 24h requests, 24h tokens, p95 latency, error rate, and request traffic by provider]
ONE GATEWAY · EVERY MAJOR PROVIDER
OpenAI
Anthropic
Google Gemini
Cohere
Mistral
Groq
Together AI
Ollama (local)
See provider capabilities →
WHY PROMPTGATE

The infrastructure layer between your app and the models.

Stop scattering API keys and SDKs across services. Consolidate, route, observe, and secure every call from one place.

OpenAI-compatible wrapper

Keep your existing SDKs. Swap the base URL and route to any provider with the provider:model convention.

Multi-provider routing

YAML routing rules pick a provider/model based on input size, schema presence, monthly spend, or time of day. Failover transparent to the client.
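A rules file under this scheme might look like the following sketch — the YAML field names here are illustrative assumptions, not PromptGate's documented schema:

```yaml
# Illustrative routing rules (field names are assumptions)
routes:
  - when: { input_tokens: "> 32000" }
    use: anthropic:claude-3-5-sonnet      # long-context requests
  - when: { has_json_schema: true }
    use: openai:gpt-4o                    # structured output
  - when: { monthly_spend: "> 500 USD" }
    use: groq:llama-3.1-70b               # cheaper model past budget
  - default: openai:gpt-4o-mini
failover: [mistral:mistral-large, ollama:llama3]
```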

Live observability

Per-request traces, token counts, real-time cost dashboard, p95 latency, anomaly alerts. Stream logs with filters like status:429 model:"gpt-4o".

Reversible PII redaction

Tokenize emails, phones, IBANs, SSNs (and custom regexes) before the LLM call; substitute back on response. The LLM sees [[EMAIL_001]], your user sees their real data.
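The tokenize-and-restore round trip can be sketched in a few lines of Python — a simplified illustration with two example patterns and an in-memory token map, not PromptGate's actual implementation:

```python
import re

# Two illustrative PII patterns; the real scanner covers emails, phones,
# IBANs, SSNs, and custom regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text):
    """Replace each PII match with a [[TYPE_NNN]] token; return text + map."""
    mapping, counters = {}, {}
    def make_repl(kind):
        def repl(m):
            counters[kind] = counters.get(kind, 0) + 1
            token = f"[[{kind}_{counters[kind]:03d}]]"
            mapping[token] = m.group(0)
            return token
        return repl
    for kind, pat in PATTERNS.items():
        text = pat.sub(make_repl(kind), text)
    return text, mapping

def restore(text, mapping):
    """Substitute the original values back into the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

redacted, m = redact("Contact alice@example.com please")
# The LLM only ever sees the tokenized form:
assert redacted == "Contact [[EMAIL_001]] please"
assert restore(redacted, m) == "Contact alice@example.com please"
```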

Secret scanner

18 well-known credential patterns (AWS, GitHub, Slack, OpenAI, Stripe, JWT, private keys, …). Block-mode rejects with 422; redact-mode tokenizes via reversible redaction.
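Block-mode versus redact-mode can be sketched like this, with two illustrative patterns standing in for the full set of 18:

```python
import re

# Two illustrative credential patterns (the real set covers AWS, GitHub,
# Slack, OpenAI, Stripe, JWT, private keys, and more).
SECRET_PATTERNS = {
    "AWS_ACCESS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GITHUB_TOKEN": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

class SecretFound(Exception):
    """Raised in block mode; the gateway would answer HTTP 422."""

def scan(text, mode="block"):
    for name, pat in SECRET_PATTERNS.items():
        if mode == "block" and pat.search(text):
            raise SecretFound(name)
        text = pat.sub(f"[[{name}]]", text)  # redact mode: tokenize
    return text

# Redact mode tokenizes in place of the credential:
clean = scan("key=AKIAABCDEFGHIJKLMNOP ok", mode="redact")
assert clean == "key=[[AWS_ACCESS_KEY]] ok"
```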

Self-hosted & private

Your keys, your data, your VPC. Runs on Docker, Kubernetes, or bare metal. Zero calls leave your perimeter — credentials encrypted at rest with your application key.

Explore the full feature catalog →
AGENT PROXY MODE

Your team's coding agents, behind one egress gateway.

Set ANTHROPIC_BASE_URL on Claude Code, OPENAI_BASE_URL on Codex CLI, and the OpenAI base URL on Cursor / Aider / Continue / Cline — every request then flows through PromptGate. No code changes in any agent.

  • Four API shapes on one project: Chat Completions, Responses, Anthropic Messages, Embeddings
  • One cost dashboard across all coding agents your team uses
  • Reversible redaction and the secret scanner are on by default for the egress flow
  • Per-developer API tokens, with their own budgets and rate limits
Read the Agent Proxy story →
~/.zshrc shell
# Point Claude Code at PromptGate
export ANTHROPIC_BASE_URL="https://promptgate.your.co/api/<uuid>"
export ANTHROPIC_AUTH_TOKEN="pg_live_..."

# Point Codex CLI at PromptGate
export OPENAI_BASE_URL="https://promptgate.your.co/api/<uuid>/v1"
export OPENAI_API_KEY="pg_live_..."

# That's it. Both agents now flow through your gateway,
# with full audit, cost tracking, and PII / secret guards.
client.py python
from openai import OpenAI

client = OpenAI(
    base_url="https://promptgate.your.co/api/<uuid>/v1",
    api_key="pg_live_...",  # PromptGate token
)

# Route to any provider by name
resp = client.chat.completions.create(
    model="anthropic:claude-3-5-sonnet",
    messages=[{"role": "user", "content": "Hi"}],
)

# Or use an alias defined in the UI
resp = client.chat.completions.create(
    model="smart",   # → anthropic:claude-3-5-sonnet
    stream=True,
)
AI GATEWAY

Endpoints with policy baked in.

Define an endpoint once — provider, model, prompt template, JSON schema, session policy, failover — and your apps call it through the OpenAI SDK they already use. The endpoint's controls travel with every request.

  • Drop-in OpenAI Chat Completions surface — no SDK migration
  • Aliases (model: "smart") decouple your code from your model choice
  • Streaming, tool calls, JSON mode pass through transparently
  • Provider failover at the gateway level — clients never see it
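An endpoint definition along these lines might look like the following sketch — the field names are illustrative assumptions, not PromptGate's documented schema:

```yaml
# Illustrative endpoint definition (field names are assumptions)
endpoint: summarize-ticket
provider: anthropic
model: claude-3-5-sonnet
prompt_template: |
  Summarize the following support ticket in three bullet points:
  {{ticket_body}}
input_schema:
  type: object
  properties: { ticket_body: { type: string } }
  required: [ticket_body]
failover: [openai:gpt-4o]
session_policy: { max_turns: 20 }
```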
API GATEWAY

Proxy any HTTP API. The key stays here.

Configure an upstream URL, attach a static credential or an OAuth connection, and your apps reach the upstream through a PromptGate token they own. The provider's secret never leaves the gateway.

  • Method allowlist — clients only do what you let them
  • OAuth-token injection on the gateway side, never on the client
  • SSRF guard blocks RFC1918 / loopback / cloud-metadata at dispatch
  • Same audit + rate-limit surface as the AI Gateway
terminal shell
# Your app calls the gateway with its own PromptGate token —
# the upstream credential lives in the gateway's vault.
curl https://promptgate.your.co/api/<uuid>/proxy/stripe/v1/charges \
  -H "Authorization: Bearer pg_live_..." \
  -d "amount=2000" -d "currency=eur"

# PromptGate adds the right OAuth token, audits the call,
# and forwards it. Stripe's secret stays out of your apps.
oauth-connection.yaml yaml
# Configure once in the admin UI — admin signs in to the upstream
# provider, PromptGate stores the encrypted tokens.
name:          GitHub Issues
authorize_url: https://github.com/login/oauth/authorize
token_url:     https://github.com/login/oauth/access_token
client_id:     Iv1.abc123…
client_secret: "********"           # encrypted at rest
scopes:        [repo, read:user]

# API Gateway endpoints pointing at GitHub now get the right
# Authorization header injected automatically:
curl https://promptgate.your.co/api/<uuid>/proxy/gh-issues/repos/foo/bar/issues \
  -H "Authorization: Bearer pg_live_..."

# PromptGate refreshes the GH token before it expires.
# Your apps never see access_token or refresh_token.
OAUTH CONNECTIONS

The OAuth dance, handled here once.

Connect any OAuth-capable provider — GitHub, Linear, Stripe, Slack, Notion, your own OIDC — once from the admin UI. PromptGate stores the encrypted tokens, refreshes them before they expire, and injects the right Authorization header on every proxied request.

  • One-click authorize → callback flow on the gateway side
  • Tokens encrypted at rest with the instance APP_KEY
  • Automatic refresh — your apps never deal with token lifecycles
  • Per-project, per-endpoint connections — scope by team or use case
  • Works with any OIDC-compliant provider plus tuned built-in presets
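The refresh-before-expiry behavior reduces to a small check at injection time — a sketch in which the 60-second margin and token field names are assumptions:

```python
import time

REFRESH_MARGIN = 60  # seconds before expiry to refresh proactively (assumed)

def needs_refresh(token, now=None):
    now = time.time() if now is None else now
    return token["expires_at"] - now <= REFRESH_MARGIN

def authorization_header(token, refresh_fn, now=None):
    """Build the injected Authorization header, refreshing first if needed."""
    if needs_refresh(token, now):
        token = refresh_fn(token)  # e.g. POST token_url with the refresh_token
    return {"Authorization": f"Bearer {token['access_token']}"}

tok = {"access_token": "old", "refresh_token": "r1", "expires_at": 1000}
fresh = lambda t: {**t, "access_token": "new", "expires_at": 5000}

# Near expiry: the gateway refreshes before injecting.
assert authorization_header(tok, fresh, now=990) == {"Authorization": "Bearer new"}
# Plenty of time left: the stored token is used as-is.
assert authorization_header(tok, fresh, now=100) == {"Authorization": "Bearer old"}
```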
MCP GATEWAY

One MCP endpoint in front of many.

Aggregate every MCP server your team uses behind a single JSON-RPC endpoint. Tool names are namespaced per upstream so calls route back automatically; auth and SSRF guard apply once at the gateway.

  • Health checks per upstream + transparent failover
  • Token-scoped access — which agents see which upstreams
  • Bearer / OAuth credential injection per upstream
  • Single point to audit every MCP call across the team
json-rpc json
// One MCP endpoint, every upstream MCP server behind it.
POST https://promptgate.your.co/api/<uuid>/mcp
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// → returns the union of every tool from every active upstream,
//   prefixed with the server namespace
{
  "result": {
    "tools": [
      {"name": "github__create_issue", ...},
      {"name": "linear__list_issues", ...},
      {"name": "fs__read_file", ...}
    ]
  }
}
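Routing a namespaced tools/call back to its upstream reduces to splitting off the prefix — a sketch in which the upstream URLs are hypothetical:

```python
# Hypothetical upstream registry keyed by namespace prefix.
UPSTREAMS = {
    "github": "https://mcp.github.internal",
    "linear": "https://mcp.linear.internal",
}

def route_tool_call(namespaced_name):
    """Split 'server__tool' and return (upstream URL, original tool name)."""
    server, _, tool = namespaced_name.partition("__")
    if server not in UPSTREAMS or not tool:
        raise ValueError(f"unknown upstream for tool {namespaced_name!r}")
    return UPSTREAMS[server], tool

assert route_tool_call("github__create_issue") == (
    "https://mcp.github.internal", "create_issue")
```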
json-rpc json
// MCP-aware client calls a tool exposed by an AI Gateway endpoint
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "summarize-ticket",
    "arguments": { "ticket_id": "PG-1234" }
  }
}

// → executes the endpoint with its provider, prompt,
//   schema, and guardrails — agents speak MCP, you
//   keep the gateway's controls.
MCP SERVER

Turn any AI Gateway endpoint into a tool.

Flip expose_as_mcp_tool on an endpoint — it appears in tools/list for any MCP-aware client. Claude Code, Cursor, custom agents call it via JSON-RPC and get the endpoint's full policy: prompt, schema, guardrails, audit.

  • One toggle per endpoint — no separate MCP plumbing
  • Same provider routing, observability, cost dashboard
  • Tool schema published via the endpoint's input_schema
  • Per-token MCP scope — agents only see what they're allowed to
GET STARTED · 60 SECONDS

Run the Community Edition.

One container. No license key, no phone-home. Bring your own provider credentials and start routing.

Install guide → · Coming soon · Docs
EDITIONS

Start free. Scale when you need to.

Community Edition is free forever and includes everything a single team needs. Cloud adds multi-tenant management, SSO, and a hosted control plane.

Coming soon

Cloud

Managed · hosted control plane
PromptGate as a service. Multi-region, multi-tenant, with a managed control plane and enterprise auth.
  • Everything in Community
  • Multi-workspace & cross-team RBAC
  • SSO (SAML, OIDC) & SCIM provisioning
  • Unlimited log retention & managed upgrades
  • SLA-backed uptime & priority support
  • On-prem enterprise tier — get in touch
Join the waitlist
QUESTIONS

Frequently asked.

Is the Community Edition really free?
Yes. The Community Edition is source-available under BSL 1.1, published as a public Docker image, and contains the full feature set described on this page. No license key, no telemetry, no phone-home. After 4 years the same code automatically converts to Apache 2.0.
Which providers are supported out of the box?
OpenAI, Anthropic, Google Gemini, Cohere, Mistral, Groq, Together AI, and Ollama (local). Any OpenAI-compatible endpoint can be added via a custom provider configuration.
Where do my provider API keys live?
In your database. Credentials are encrypted at rest with the Laravel APP_KEY you supply via environment variable. Keys never leave your perimeter — requests are proxied from your instance directly to each provider.
When will Cloud be available?
A private beta is targeted for later this year. Get in touch to be among the first invited. Until then, the Community Edition runs the same engine — Cloud only adds the multi-tenant control plane and managed hosting on top.
All questions →

Ready to consolidate your AI stack?

Pull the image, point your SDK at your gateway, and route every call through one place.

Install Community Edition Get notified about Cloud