Guidelines for deploying Metorial with AI SDK in production environments.
Use different API keys and deployments for each environment:
# .env.development
METORIAL_API_KEY=dev_key_here
METORIAL_DEPLOYMENT_ID=dev_deployment_id

# .env.production
METORIAL_API_KEY=prod_key_here
METORIAL_DEPLOYMENT_ID=prod_deployment_id
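One common way to load the matching file at startup, assuming you use the dotenv package and set NODE_ENV per environment (a minimal sketch, not the only approach):
import dotenv from 'dotenv';

// Load the env file for the current environment, e.g. .env.development or .env.production
dotenv.config({ path: `.env.${process.env.NODE_ENV ?? 'development'}` });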
Centralize your configuration:
const config = {
metorial: {
apiKey: process.env.METORIAL_API_KEY!,
deploymentId: process.env.METORIAL_DEPLOYMENT_ID!
}
};
// Validate at startup
if (!config.metorial.apiKey || !config.metorial.deploymentId) {
throw new Error('Missing Metorial configuration');
}
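If you prefer schema-based validation, a library such as zod can replace the manual check above. A sketch assuming zod is installed:
import { z } from 'zod';

// Parse required environment variables at startup and fail fast if any are missing
const env = z.object({
  METORIAL_API_KEY: z.string().min(1),
  METORIAL_DEPLOYMENT_ID: z.string().min(1)
}).parse(process.env);

const config = {
  metorial: {
    apiKey: env.METORIAL_API_KEY,
    deploymentId: env.METORIAL_DEPLOYMENT_ID
  }
};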
Implement robust error handling:
async function executeWithMetorial(prompt: string) {
try {
return await metorial.withProviderSession(
metorialAISDK,
{ serverDeployments: [config.metorial.deploymentId] },
async (session) => {
const result = await generateText({
model: yourModel,
messages: [{ role: 'user', content: prompt }],
tools: session.tools,
maxSteps: 10
});
return result.text;
}
);
} catch (error) {
// Log error for monitoring
console.error('Metorial execution failed:', error);
// Return user-friendly message
return 'Sorry, I encountered an issue. Please try again.';
}
}
Handle cases where Metorial is unavailable:
async function executeAI(prompt: string) {
try {
// Try with Metorial tools first. Note: for this fallback to trigger, the
// Metorial call must rethrow on failure rather than swallowing errors as
// in the earlier example.
return await executeWithMetorial(prompt);
} catch (error) {
console.warn('Falling back to basic mode:', error);
// Fallback to basic AI without tools
const result = await generateText({
model: yourModel,
messages: [{ role: 'user', content: prompt }]
});
return result.text;
}
}
Reuse the Metorial client across requests:
// Create the client once at application startup
const metorial = new Metorial({
  apiKey: config.metorial.apiKey
});

// Reuse the same client for all incoming requests
async function handleRequest(prompt: string) {
  return await metorial.withProviderSession(
    metorialAISDK,
    { serverDeployments: [config.metorial.deploymentId] },
    async (session) => {
      const result = await generateText({
        model: yourModel,
        messages: [{ role: 'user', content: prompt }],
        tools: session.tools,
        maxSteps: 5
      });
      return result.text;
    }
  );
}
Set a reasonable maxSteps limit to prevent long execution times:
const result = await generateText({
model: yourModel,
messages: [...],
tools: session.tools,
maxSteps: 5 // Reasonable limit for most tasks
});
Implement timeouts for long-running operations:
const TIMEOUT_MS = 30000; // 30 seconds

async function executeWithTimeout(prompt: string) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeoutPromise = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('Timeout')), TIMEOUT_MS);
  });

  try {
    // Promise.race rejects on timeout, but it does not cancel the in-flight call
    return await Promise.race([executeWithMetorial(prompt), timeoutPromise]);
  } finally {
    if (timer) clearTimeout(timer); // avoid leaving a dangling timer
  }
}
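Because Promise.race only returns control early, the underlying request keeps running. If you also want the model call itself to stop, the AI SDK's generateText accepts an abortSignal option. A sketch of using it inside the session callback, with the same yourModel and session placeholders as the earlier examples:
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), TIMEOUT_MS);

try {
  const result = await generateText({
    model: yourModel,
    messages: [{ role: 'user', content: prompt }],
    tools: session.tools,
    maxSteps: 5,
    abortSignal: controller.signal // aborts the request when the timeout fires
  });
  return result.text;
} finally {
  clearTimeout(timer); // clean up the timer once the call settles
}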
Validate user input before passing to AI:
function sanitizeInput(input: string): string {
  // Trim whitespace and cap the length; extend with format checks
  // and content filtering appropriate to your application
  return input.trim().slice(0, 1000);
}
const prompt = sanitizeInput(userInput);
Review AI outputs before showing to users:
function filterOutput(output: string): string {
  // Placeholder: add redaction of sensitive information,
  // inappropriate-content checks, and display formatting here
  return output;
}
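A fleshed-out version might redact obvious patterns before display. The regexes below are illustrative only, not an exhaustive safeguard:
function filterOutput(output: string): string {
  return output
    // Redact email addresses (illustrative pattern)
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[redacted email]')
    // Redact strings that look like API keys, e.g. sk_live_...
    .replace(/\b(?:sk|pk)_[A-Za-z0-9_]{10,}\b/g, '[redacted key]');
}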
Monitor tool usage and performance:
const startTime = Date.now();

const result = await generateText({
  model: yourModel,
  messages: [...],
  tools: session.tools
});

// Log metrics
console.log({
  timestamp: new Date(),
  stepsUsed: result.steps?.length || 0,
  toolsCalled: result.toolCalls?.length || 0,
  executionTime: Date.now() - startTime
});
Integrate with error tracking services:
try {
  await executeWithMetorial(prompt);
} catch (error) {
  // Report to error tracking (e.g., Sentry)
  errorTracker.captureException(error, {
    tags: { service: 'metorial' },
    extra: { prompt, deploymentId: config.metorial.deploymentId }
  });
  throw error;
}
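If you use Sentry, the generic errorTracker above maps onto its Node SDK roughly like this (a sketch; adjust to your own Sentry setup):
import * as Sentry from '@sentry/node';

Sentry.init({ dsn: process.env.SENTRY_DSN });

try {
  await executeWithMetorial(prompt);
} catch (error) {
  Sentry.captureException(error, {
    tags: { service: 'metorial' },
    extra: { deploymentId: config.metorial.deploymentId }
  });
  throw error;
}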
Before going to production: confirm that you have separate keys and deployments per environment, centralized configuration validated at startup, error handling with a fallback path, maxSteps limits and timeouts, input sanitization and output filtering, and monitoring with error tracking in place.
As your application grows: revisit these settings, tuning maxSteps and timeout values for your workloads and expanding your metrics and error reporting as traffic increases.