Let’s get you up and running with Overseer’s AI safety checks in minutes. This guide will show you how to integrate our safety engine with your existing AI system.
Set Up Your Environment
Create a .env file to store your API keys. Each AI system you want to validate needs its own API key:
You can find your API keys in the Connections section of the dashboard after you've defined your systems.
# API keys for different AI systems
OPENAI_SYSTEM_API_KEY=your_openai_system_key
ANTHROPIC_SYSTEM_API_KEY=your_anthropic_system_key
CUSTOM_SYSTEM_API_KEY=your_custom_system_key
# You can add more systems as needed
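If you're running on Node.js, one common way to load these variables into process.env is the dotenv package; using it here is an assumption, and any environment loader works just as well:

// Minimal sketch, assuming dotenv is installed:
// loads the values from .env into process.env at startup
import 'dotenv/config';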
Install the SDK
npm install @overseerai/sdk
Initialize Overseer
import { Overseer } from '@overseerai/sdk';
// Initialize clients for different systems
const openaiValidator = new Overseer({
  apiKey: process.env.OPENAI_SYSTEM_API_KEY
});

const anthropicValidator = new Overseer({
  apiKey: process.env.ANTHROPIC_SYSTEM_API_KEY
});
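As a quick sanity check, you can validate a test string directly. The result fields used here (isAllowed, text, details) are the same ones the examples below rely on:

// Minimal sketch: validate a test string and inspect the result
const check = await openaiValidator.validate('Hello, world!');
console.log(check.isAllowed);        // true when the content passes your policies
console.log(check.text);             // the text to pass along when allowed
console.log(check.details?.reason);  // populated when content is rejected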
Implement Safety Checks
// Example with OpenAI
async function getOpenAIResponse(prompt) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }]
  });

  // Validate using the OpenAI system validator
  const result = await openaiValidator.validate(completion.choices[0].message.content);

  if (result.isAllowed) {
    return result.text;
  } else {
    throw new Error("Response failed safety checks: " + result.details?.reason);
  }
}
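For example, you could call the helper like this (assuming you've already constructed an OpenAI client named openai, as the snippet above does):

// Hypothetical usage of the helper above
const reply = await getOpenAIResponse('Summarize our refund policy in one sentence.');
console.log(reply);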
// Example with Anthropic
async function getAnthropicResponse(prompt) {
  const message = await anthropic.messages.create({
    model: "claude-2",
    max_tokens: 1024, // the Messages API requires max_tokens
    messages: [{ role: "user", content: prompt }]
  });

  // Validate using the Anthropic system validator
  // (message.content is an array of content blocks; take the text of the first one)
  const result = await anthropicValidator.validate(message.content[0].text);

  if (result.isAllowed) {
    return result.text;
  } else {
    throw new Error("Response failed safety checks: " + result.details?.reason);
  }
}
// Check AI responses before sending them to users
// (assumes an Express app with JSON body parsing, e.g. app.use(express.json()))
app.post('/api/chat', async (req, res) => {
  try {
    // Use the appropriate validator based on the AI system
    const validator = req.body.system === 'anthropic'
      ? anthropicValidator
      : openaiValidator;

    // Get the raw response from the appropriate AI system
    // (getAIResponse stands in for your own call to the provider)
    const aiResponse = await getAIResponse(req.body.prompt);
    const result = await validator.validate(aiResponse);

    if (result.isAllowed) {
      res.json({ response: result.text });
    } else {
      res.status(400).json({
        error: 'Response failed safety checks: ' + result.details?.reason
      });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
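A client could then call the route like this; the port is an assumption, and the body shape follows the handler above, which reads req.body.prompt and req.body.system:

// Hypothetical client call to the route above
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Hello!', system: 'anthropic' })
});
const data = await res.json();
console.log(data.response ?? data.error);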
// Example with proper error handling
async function validateAIResponse(content, system = 'openai') {
  try {
    const validator = system === 'anthropic' ? anthropicValidator : openaiValidator;
    const result = await validator.validate(content);

    if (result.isAllowed) {
      return result.text;
    } else {
      console.warn('Content rejected:', result.details?.reason);
      return 'I apologize, but I cannot provide that response.';
    }
  } catch (error) {
    console.error('Validation error:', error);
    throw new Error('Failed to validate response');
  }
}
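You might then wrap any raw model output with this helper before it reaches the user (rawModelOutput is a hypothetical variable standing in for whatever your AI system returned):

// Hypothetical usage of validateAIResponse
const safeText = await validateAIResponse(rawModelOutput, 'anthropic');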
API Key Management
Keep API keys separate for each AI system to maintain clear usage tracking and system-specific policies.
Error Handling
Always implement proper error handling for safety check failures. Consider having fallback responses ready.
Safety Policies
Configure system-specific safety policies in the Overseer dashboard to match each use case.