Guardrail AI
The Real-Time Security Layer for GenAI Applications.
Secure your LLM calls with a one-line configuration change.
How It Works
A seamless security layer that integrates directly into your existing workflow
Instead of calling your LLM provider directly, you route API calls through the Guardrail AI service. Our multi-agent pipeline inspects every request and response in real time, blocking threats before they can cause harm.
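In practice this is a one-line change: the same payload goes to a new URL, with one extra header. A minimal sketch using the built-in fetch API; the provider URL and model name are placeholders, and the { message, model } body shape matches the full example further down:
// Before: your backend posts directly to the LLM provider.
// await fetch('https://api.your-llm-provider.com/v1/chat/completions', { ... });

// After: the same payload goes to the Guardrail AI endpoint instead.
await fetch(process.env.GUARDRAIL_API_URL!, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-guardrail-api-key': process.env.GUARDRAIL_API_KEY!, // authenticates you to Guardrail AI
    'Authorization': `Bearer ${process.env.CEREBRAS_API_KEY}`, // your own LLM provider key (e.g., Cerebras)
  },
  body: JSON.stringify({ message: 'Hello!', model: 'llama3.1-8b' }),
});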
Core Features
Comprehensive security features designed specifically for AI applications
Multi-Agent Pipeline
A fast Triage agent for efficient screening, backed by powerful Specialist agents for deep analysis of prompt injections and harmful content.
Rate Limiting
Built-in protection against Denial of Service and Denial of Wallet attacks, ensuring your service remains stable and cost-effective (see the handling sketch below).
Seamless Integration
No custom SDK needed. Works with your existing tools by simply changing the API endpoint URL in your configuration.
False Positive Resistant
Intelligently trained to understand user intent, so legitimate security questions pass through instead of being incorrectly blocked.
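When a request is rate limited or blocked, your backend sees an error response rather than a completion. A minimal handling sketch, assuming the service signals rate limits with HTTP 429 and blocked content with HTTP 403 (the exact codes and response body are assumptions, not a documented contract):
import axios from 'axios';

// Minimal error-aware wrapper around a guarded call.
async function askGuardedModel(message: string, model: string) {
  try {
    const response = await axios.post(
      process.env.GUARDRAIL_API_URL!,
      { message, model },
      {
        headers: {
          'x-guardrail-api-key': process.env.GUARDRAIL_API_KEY!,
          'Authorization': `Bearer ${process.env.CEREBRAS_API_KEY}`,
        },
      }
    );
    return response.data;
  } catch (error: any) {
    const status = error.response?.status;
    if (status === 429) {
      // Rate limited: back off and retry instead of hammering the endpoint.
    } else if (status === 403) {
      // Blocked by the security pipeline: log the verdict server-side and
      // show the user a generic message.
    }
    throw error;
  }
}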
Get Started in Minutes
Simple integration that takes just a few steps to implement
Install Dependencies
In your application's backend, you'll need an HTTP client like `axios` to call the Guardrail AI service.
pnpm add axios
Get Your API Keys
You will need two keys:
- Your own API key from the underlying LLM provider (e.g., Cerebras).
- Your Guardrail AI API key (for this demo, use the master key below).
Set Environment Variables
Add your keys and the Guardrail AI endpoint URL to your project's environment file.
# The key you get from us (Guardrail AI)
GUARDRAIL_API_KEY="HAdXvTPSHwdzOdDEr6aMxvfLMGjT75YkfxoBrT+aAFM="
# The URL for the Guardrail AI service
GUARDRAIL_API_URL="https://guardrail-ai-2trd.vercel.app/api/guard/v1/chat/completions"
# Your own key for the target LLM provider
CEREBRAS_API_KEY="sk-..."
Secure Your API Calls
Create a route in your backend that calls the Guardrail AI service instead of the LLM provider directly.
// In your backend, e.g., /app/api/ask-ai/route.ts
import { NextResponse } from 'next/server';
import axios from 'axios';
export async function POST(request: Request) {
  try {
    const { message, model } = await request.json();

    // Forward the request to Guardrail AI instead of the LLM provider.
    const response = await axios.post(
      process.env.GUARDRAIL_API_URL!,
      {
        message,
        model,
      },
      {
        headers: {
          // Authenticates you to Guardrail AI.
          'x-guardrail-api-key': process.env.GUARDRAIL_API_KEY!,
          // Your own key for the target LLM provider.
          'Authorization': `Bearer ${process.env.CEREBRAS_API_KEY}`,
        },
      }
    );

    return NextResponse.json(response.data);
  } catch (error: any) {
    // Handle errors, including requests blocked by Guardrail AI.
    return NextResponse.json(
      error.response?.data || { error: 'An unexpected error occurred' },
      { status: error.response?.status || 500 }
    );
  }
}
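Once the route is in place, your frontend calls it like any other internal API; a blocked request surfaces as the error status forwarded above. A minimal usage sketch (the route path matches the example; the message and model name are placeholders):
// In your frontend, e.g., inside a submit handler
const res = await fetch('/api/ask-ai', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Summarize this document.', model: 'llama3.1-8b' }),
});
const data = await res.json();
if (!res.ok) {
  // Guardrail AI blocked the request or the call failed; `data` carries the details.
  console.error(data);
}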