# Context Guard Cloud: Integration Guide
Context Guard is a hosted proxy that sits in front of your LLM calls. You do not need to clone a repo, run Docker, or self-host anything for the free trial. Just create an API key in Settings, point your client at https://api.ctx-guard.com, and add your key header.
## How it works
Your app sends prompts to Context Guard first. We inspect them for prompt injection, data exfiltration, PII leaks, and tool misuse before forwarding them upstream.
- Create an API key in Settings
- Change your LLM client base URL to `https://api.ctx-guard.com`
- Add `X-API-Key: cg_live_...` to every request
- Keep using your normal OpenAI / Anthropic SDK
## Fastest possible setup
If you already have an API key, this is the minimum change required.
```shell
curl -X POST https://api.ctx-guard.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: cg_live_your_key_here" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

That's it. Same shape as OpenAI, just send the request to Context Guard instead.
## OpenAI SDK integration
Keep using the official SDK. Just change the base URL and add your Context Guard key.
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-openai-key",
    base_url="https://api.ctx-guard.com/v1",
    default_headers={"X-API-Key": "cg_live_your_key_here"},
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

The same change in Node.js:

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-openai-key",
  baseURL: "https://api.ctx-guard.com/v1",
  defaultHeaders: { "X-API-Key": "cg_live_your_key_here" },
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
console.log(response.choices[0].message.content);
```

## Anthropic integration
Same idea: point the client at Context Guard and include your key header.
```python
import anthropic

client = anthropic.Anthropic(
    api_key="your-anthropic-key",
    base_url="https://api.ctx-guard.com",
    default_headers={"X-API-Key": "cg_live_your_key_here"},
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content[0].text)
```

## Webhooks
Send threat events to Slack, a SIEM, or your internal incident pipeline.
Configure webhook endpoints in Settings. You can subscribe to `block`, `redact`, `log`, and `allow` events.
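On the receiving side, a minimal handler might look like the sketch below. This is illustrative only: the endpoint path is whatever URL you registered in Settings, and the field names follow the sample payload shown in this section.

```python
# Minimal webhook receiver sketch using only the standard library.
# The path (/ctx-guard/webhook) is an assumption -- use the URL you
# registered in Settings. Field names follow the sample event payload.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event: dict) -> str:
    """Route a Context Guard threat event; returns the action taken."""
    if event.get("event") == "block" and event.get("severity") == "critical":
        # e.g. page on-call, post to Slack, open an incident
        return f"alert:{event.get('threat_type')}"
    return "logged"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/ctx-guard/webhook":
            self.send_response(404)
            self.end_headers()
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        handle_event(json.loads(body))
        self.send_response(204)  # acknowledge fast; do heavy work async
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```

Acknowledge with a 2xx quickly and do any slow work (Slack posts, ticket creation) asynchronously, so deliveries are not retried unnecessarily.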
```json
{
  "event": "block",
  "request_id": "req_123",
  "risk_score": 0.97,
  "threat_type": "prompt_injection",
  "severity": "critical",
  "timestamp": "2026-05-07T13:00:00Z"
}
```

## API reference
Main endpoints you'll actually use on the hosted service.
| Method | Endpoint | Purpose |
|---|---|---|
| POST | https://api.ctx-guard.com/v1/chat/completions | OpenAI-compatible proxy |
| POST | https://api.ctx-guard.com/v1/messages | Anthropic-compatible proxy |
| POST | https://api.ctx-guard.com/api/v1/inspect | Direct prompt inspection |
| GET | https://api.ctx-guard.com/api/v1/threats | Threat log |
| GET | https://api.ctx-guard.com/api/v1/stats | Dashboard stats |
| GET | https://api.ctx-guard.com/api/v1/settings | Read settings |
| PUT | https://api.ctx-guard.com/api/v1/settings | Update settings |
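As an illustration, the threat log can be fetched with nothing beyond the standard library. The sketch below only builds the authenticated request; the response schema isn't documented in this guide, so the actual fetch is left commented.

```python
# Sketch: build an authenticated request to GET /api/v1/threats
# using only the standard library. The response shape is not
# documented here, so the fetch itself is left commented out.
import json
import urllib.request

API_KEY = "cg_live_your_key_here"

def threats_request(base="https://api.ctx-guard.com") -> urllib.request.Request:
    """Build (but do not send) a GET /api/v1/threats request."""
    req = urllib.request.Request(f"{base}/api/v1/threats", method="GET")
    req.add_header("X-API-Key", API_KEY)
    return req

# To actually fetch:
# with urllib.request.urlopen(threats_request()) as resp:
#     print(json.load(resp))
```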
Use `X-API-Key` on your requests. Your LLM provider key stays in the SDK's normal auth field.
## Common errors

The main ones trial users are likely to hit:

- Your Context Guard key is missing, revoked, or malformed.
- Your trial ended or the key's expiry date passed.
- You hit the per-key request cap; slow down or upgrade.
- The underlying model provider returned an error or timed out.
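For the request-cap and upstream-timeout cases, simple client-side retries with exponential backoff are usually enough. This is a generic sketch: which failures count as retryable is left to the caller, since the exact error codes aren't listed above.

```python
# Generic retry-with-exponential-backoff sketch. Which failures are
# retryable (e.g. the per-key request cap, upstream timeouts) is up to
# the caller's `retryable` predicate -- exact codes are not specified here.
import time

def with_backoff(call, retryable, attempts=4, base_delay=0.5):
    """Call `call()`, retrying retryable failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:
            if attempt == attempts - 1 or not retryable(exc):
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Example: a call that succeeds on the third try.
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise RuntimeError("rate cap")
    return "ok"

print(with_backoff(flaky, lambda e: "rate cap" in str(e), base_delay=0.01))  # -> ok
```

Non-retryable failures (a revoked key, for instance) are re-raised immediately rather than retried.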
## Trial & upgrade

The free trial runs entirely on the hosted cloud proxy; self-hosting is not part of the free-trial path. If you need a private or self-hosted deployment, talk to us about an enterprise setup.