OWASP LLM Top 10 — #1 Covered

Stop prompt injection
in your AI app.
One line change.

CollieAI is a drop-in security proxy for LLM applications. Point your OpenAI client at CollieAI and block prompt injection, jailbreaks, PII leaks, and more — before they reach your users.

Get started free

Free up to 15,000 API calls/month · No credit card · 5 min to integrate

protect_chatbot.py
# Before — direct call to OpenAI
from openai import OpenAI
client = OpenAI(api_key="sk-...")
# After — change base_url, full protection
from openai import OpenAI
client = OpenAI(
    base_url="https://api.collieai.com/v1",
    api_key="clai_your_project_key",
)

# Same code, now protected
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{ "role": "user", "content": user_msg }]
)
Ready. Inbound + outbound filtering · 9 rule types · full OpenAI compatibility
Dashboard & Logs

See every attack in real time

Your dashboard shows blocked attacks, detection stats, and full request logs — the moment they happen.

Drop-in proxy

One line change. Works with:

47M+
API calls protected
1.8M
Attacks blocked
99.2%
Detection accuracy
~10ms
Avg. rule latency

Based on internal testing and early deployments. ML model rules: 10–50ms; regex/pattern rules: <5ms.

Live Demo

See CollieAI catch
a real injection — right now

Type any message and watch how CollieAI classifies it in milliseconds. Try a benign request or a prompt injection attempt.

Runtime detection
Every prompt scanned before it reaches your model
Full observability
Every request logged with triggered rules, tokens, and latency
Inbound + outbound
Filter both user prompts and model responses with configurable policies
CollieAI Proxy — Live Inspector● LIVE
User prompt
Waiting for input
Click "Analyze prompt" to inspect
Quick start

How CollieAI keeps your AI safe

3 simple steps — integrate in minutes, see blocked attacks immediately.

1
Point your client at CollieAI
Sign up for a free API key and change your base_url. No SDK required, no architecture changes.
# Change two config values
base_url = "https://api.collieai.com/v1"
api_key  = "clai_your_project_key"

# Free: 20K tokens/month
2
Your code stays the same
CollieAI is a transparent proxy. Use the standard OpenAI SDK — every request is filtered by your policy rules automatically.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.collieai.com/v1",
    api_key="clai_...",
)
client.chat.completions.create(...)
3
Threats blocked, full audit trail
Only safe, compliant responses reach your users. Dashboard shows every blocked request with triggered rules and context.
// Blocked response (OpenAI-compatible)
{
  "error": {
    "message": "Content blocked by policy",
    "type":    "content_blocked",
    "code":    400
  }
}
Incoming threats
Prompt Injection
Malicious instructions
Jailbreak
Bypass safety rules
PII Exposure
Personal data leaks
Policy Violation
Off-topic, harmful content
Firewall
AI Firewall for LLM Applications
GuardrailsBlock injectionsi
Guardrails
  • Prompt injection detection & blocking
  • Jailbreak attempt prevention
  • Input / output content validation
  • Token limit & rate enforcement
Protect layer
Policy EngineContent rulesi
Policy Engine
  • Custom content moderation rules
  • Topic restriction & allow/deny lists
  • GDPR & HIPAA compliance templates
  • Role-based access policies
Protect layer
ML-rulesAI detectioni
ML-rules
  • AI-powered threat classification
  • Anomaly & behavioral detection
  • Semantic similarity attack recognition
  • Continuously auto-updated models
Protect layer
EvaluationScore qualityi
Evaluation
  • Response quality scoring
  • Hallucination detection
  • Relevance & toxicity metrics
  • A/B testing support
Observe layer
LoggingFull audit traili
Logging
  • Full request / response logging
  • Per-interaction audit trail
  • Cost & latency tracking per request
  • Export to SIEM & analytics platforms
Observe layer
Safe output
Safe for RAG
Clean retrieval
Safe for Agents
Trusted actions
Safe for Apps
Compliant responses

Prices and plans

Start free on our cloud. Scale when you're ready.

Free

€0forever

Evaluate CollieAI on our cloud. Perfect for testing and prototyping.

15,000 API calls / month

Maximum prompt size 10k tokens

1 project

All filtering functionality

Dashboard & logs

Custom policies & rules

Community support (GitHub Issues)

Async jobs & webhooks

Enterprise

Customcontact us

For companies with compliance requirements, large volumes, or full data control.

Unlimited API calls

Everything in Growth

On-premise / dedicated deployment

Dedicated support + SLA

Includes onboarding & dedicated CSM