Back to console

Context Guard Cloud

Integration Guide

Context Guard is a hosted proxy that sits in front of your LLM calls. You do not need to clone a repo, run Docker, or self-host anything for the free trial. Just create an API key in Settings, point your client at https://api.ctx-guard.com, and add your key header.

01 - Overview

How it works

Your app sends prompts to Context Guard first. We inspect them for prompt injection, data exfiltration, PII leaks, and tool misuse before forwarding them upstream.

Flow: Your App → Context Guard Proxy → OpenAI / Anthropic → Response back to your app
  • • Create an API key in Settings
  • • Change your LLM client base URL to https://api.ctx-guard.com
  • • Add X-API-Key: cg_live_... to every request
  • • Keep using your normal OpenAI / Anthropic SDK
02 - Quick Start

Fastest possible setup

If you already have an API key, this is the minimum change required.

bash
curl -X POST https://api.ctx-guard.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: cg_live_your_key_here" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

That's it. Same shape as OpenAI - just send the request to Context Guard instead.

03 - OpenAI

OpenAI SDK integration

Keep using the official SDK. Just change the base URL and add your Context Guard key.

python
from openai import OpenAI

client = OpenAI(
    api_key="your-openai-key",
    base_url="https://api.ctx-guard.com/v1",
    default_headers={"X-API-Key": "cg_live_your_key_here"},
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)

print(response.choices[0].message.content)
javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-openai-key",
  baseURL: "https://api.ctx-guard.com/v1",
  defaultHeaders: { "X-API-Key": "cg_live_your_key_here" },
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(response.choices[0].message.content);
04 - Anthropic

Anthropic integration

Same idea - point the client at Context Guard and include your key header.

python
import anthropic

client = anthropic.Anthropic(
    api_key="your-anthropic-key",
    base_url="https://api.ctx-guard.com",
    default_headers={"X-API-Key": "cg_live_your_key_here"},
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)

print(message.content)
05 - Alerts

Webhooks

Send threat events to Slack, a SIEM, or your internal incident pipeline.

Configure webhook endpoints in Settings. You can subscribe to block, redact, log, and allow events.

json
{
  "event": "block",
  "request_id": "req_123",
  "risk_score": 0.97,
  "threat_type": "prompt_injection",
  "severity": "critical",
  "timestamp": "2026-05-07T13:00:00Z"
}
06 - API

API reference

Main endpoints you'll actually use on the hosted service.

MethodEndpointPurpose
POSThttps://api.ctx-guard.com/v1/chat/completionsOpenAI-compatible proxy
POSThttps://api.ctx-guard.com/v1/messagesAnthropic-compatible proxy
POSThttps://api.ctx-guard.com/api/v1/inspectDirect prompt inspection
GEThttps://api.ctx-guard.com/api/v1/threatsThreat log
GEThttps://api.ctx-guard.com/api/v1/statsDashboard stats
GEThttps://api.ctx-guard.com/api/v1/settingsRead settings
PUThttps://api.ctx-guard.com/api/v1/settingsUpdate settings

Use X-API-Key on your requests. Your LLM provider key stays in the normal SDK auth field.

07 - Troubleshooting

Common errors

The main ones trial users are likely to hit.

401 Invalid API key

Your Context Guard key is missing, revoked, or malformed.

403 API key expired

Your trial ended or the key expiry date passed.

429 Rate limited

You hit the per-key request cap. Slow down or upgrade.

503 Upstream error

The underlying model provider returned an error or timed out.

08 - Plans

Trial & upgrade

Free trial users use the hosted cloud proxy. Self-hosting is not part of the free-trial path.

Need to host locally?

Local hosting is not part of the free trial. If you need a private or self-hosted deployment, speak to us and we can discuss an enterprise setup.