
Production Best Practices

Guidelines for deploying Metorial-powered applications to production environments.

Security

API Key Management

Never hardcode API keys:

// ❌ Bad
let metorial = new Metorial({ 
  apiKey: 'met_12345abcde' 
});

// ✅ Good
let metorial = new Metorial({ 
  apiKey: process.env.METORIAL_API_KEY 
});

Environment Separation

Use different keys and deployments for each environment:

let metorial = new Metorial({ 
  apiKey: process.env.METORIAL_API_KEY 
});

let deploymentId = process.env.NODE_ENV === 'production'
  ? process.env.PROD_DEPLOYMENT_ID
  : process.env.DEV_DEPLOYMENT_ID;

Principle of Least Privilege

Only enable integrations you actually need in your server deployments.
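One way to apply this at the session level is to request only the deployments a given task actually needs, rather than passing every deployment to every session. The task names and deployment IDs below are illustrative placeholders, not real Metorial IDs:

```typescript
// Hypothetical mapping from task type to the server deployments it needs.
// IDs here are placeholders; use your own deployment IDs from Metorial.
const deploymentsByTask: Record<string, string[]> = {
  calendar: ['dep_calendar'],
  research: ['dep_search', 'dep_browser']
};

// Return only the deployments required for a given task, so a session
// never exposes tools the request does not need.
function deploymentsFor(task: string): string[] {
  return deploymentsByTask[task] ?? [];
}
```

You would then pass `deploymentsFor(task)` as `serverDeployments` when opening a session, instead of a global list.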

Error Handling

Comprehensive Error Handling

metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: [deploymentId] },
  async session => {
    try {
      let result = await generateText({
        model: openai('gpt-4o'),
        prompt: userInput,
        maxSteps: 10,
        tools: session.tools
      });

      return { success: true, data: result.text };
    } catch (error) {
      // Log error for monitoring
      console.error('Agent error:', {
        message: error.message,
        stack: error.stack,
        timestamp: new Date().toISOString()
      });

      // Return user-friendly error
      return { 
        success: false, 
        error: 'Unable to process request. Please try again.' 
      };
    }
  }
);

Timeout Protection

Set timeouts to prevent hanging requests:

async function generateWithTimeout(options, timeoutMs = 30000) {
  let timer;
  let timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('Request timeout')), timeoutMs);
  });

  try {
    return await Promise.race([generateText(options), timeout]);
  } finally {
    clearTimeout(timer); // Don't leave the timer running after the request settles
  }
}

Performance Optimization

Set Appropriate maxSteps

Keep maxSteps proportional to task complexity; each extra step can mean another model call and tool invocation:

// For simple queries
let simple = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Quick fact check',
  maxSteps: 3, // Low for simple tasks
  tools: session.tools
});

// For complex workflows
let complex = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Multi-step analysis',
  maxSteps: 15, // Higher for complex tasks
  tools: session.tools
});

Implement Caching

Cache responses when appropriate:

// Simple in-memory cache keyed by prompt; it grows without bound,
// so use a TTL or LRU strategy in production.
let cache = new Map();

async function getCachedResponse(prompt) {
  if (cache.has(prompt)) {
    return cache.get(prompt);
  }

  let result = await generateText({
    model: openai('gpt-4o'),
    prompt,
    maxSteps: 10,
    tools: session.tools
  });

  cache.set(prompt, result.text);
  return result.text;
}

Monitoring and Logging

Log Key Events

Log structured metadata around each agent call:

console.log('Agent request:', {
  timestamp: new Date().toISOString(),
  userId: user.id,
  prompt: prompt.substring(0, 100), // Log preview only
  deployment: deploymentId
});

let result = await generateText({
  model: openai('gpt-4o'),
  prompt,
  maxSteps: 10,
  tools: session.tools
});

console.log('Agent response:', {
  timestamp: new Date().toISOString(),
  userId: user.id,
  success: true,
  stepsUsed: result.steps?.length
});

Monitor API Usage

Track your Metorial and AI provider usage:

  • Set up alerts for unusual activity
  • Monitor rate limits
  • Track costs per request
  • Log failed requests for analysis
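The tracking above can start as something very simple. Here is a minimal in-memory sketch (a production system would persist these metrics and feed them into an alerting pipeline rather than keep them in process memory):

```typescript
// Minimal in-memory usage tracker: counts requests and failures so you
// can alert when the failure rate spikes. Sketch only; real monitoring
// should persist metrics and aggregate across instances.
class UsageTracker {
  private requests = 0;
  private failures = 0;

  // Record one completed request and whether it succeeded.
  record(success: boolean): void {
    this.requests++;
    if (!success) this.failures++;
  }

  // Failure rate over all recorded requests; 0 when nothing is recorded.
  failureRate(): number {
    return this.requests === 0 ? 0 : this.failures / this.requests;
  }
}
```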

Rate Limiting

Implement rate limiting to prevent abuse:

// In-memory request history; use a shared store (e.g. Redis)
// when running more than one instance.
let userRequests = new Map();

function checkRateLimit(userId, maxRequests = 100, windowMs = 3600000) {
  let now = Date.now();
  let userHistory = userRequests.get(userId) || [];
  
  // Remove old requests outside the window
  userHistory = userHistory.filter(time => now - time < windowMs);
  
  if (userHistory.length >= maxRequests) {
    throw new Error('Rate limit exceeded');
  }
  
  userHistory.push(now);
  userRequests.set(userId, userHistory);
}

Input Validation

Always validate and sanitize user input:

function validatePrompt(prompt) {
  if (!prompt || typeof prompt !== 'string') {
    throw new Error('Invalid prompt');
  }
  
  if (prompt.length > 10000) {
    throw new Error('Prompt too long');
  }
  
  // Remove or sanitize sensitive patterns
  return prompt.trim();
}

let sanitizedPrompt = validatePrompt(userInput);

Deployment Checklist

  • [ ] Environment variables configured
  • [ ] Separate API keys for production
  • [ ] Error handling implemented
  • [ ] Logging and monitoring set up
  • [ ] Rate limiting enabled
  • [ ] Input validation in place
  • [ ] Timeouts configured
  • [ ] Security audit completed
  • [ ] Load testing performed
  • [ ] Backup and recovery plan ready

Scaling Considerations

  • Connection Pooling: Reuse Metorial instances when possible
  • Load Balancing: Distribute requests across multiple instances
  • Async Processing: Use queues for long-running tasks
  • Graceful Degradation: Have fallbacks when services are unavailable
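The "reuse Metorial instances" point can be sketched as a lazy module-level singleton, so every request shares one client instead of constructing a new one per request. The `createClient` factory below stands in for `new Metorial({...})`:

```typescript
// Generic lazy singleton: the factory runs once, and all later calls
// return the same instance. Sketch of the connection-reuse idea.
function makeSingleton<T>(createClient: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= createClient());
}
```

Usage would look like `const getMetorial = makeSingleton(() => new Metorial({ apiKey: process.env.METORIAL_API_KEY }))`, with request handlers calling `getMetorial()` instead of constructing clients themselves.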

Cost Management

  • Monitor Usage: Track API calls and costs
  • Optimize Prompts: Shorter prompts = lower costs
  • Choose Right Models: Balance performance and cost
  • Cache Aggressively: Reduce redundant API calls
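A simple way to act on "choose the right models" is to route requests by complexity. The heuristic and thresholds below are assumptions to tune for your workload, not a recommended policy:

```typescript
// Illustrative cost-aware routing: send simple prompts to a cheaper
// model and reserve the larger one for complex work. The length
// threshold and keyword check are placeholder heuristics.
function pickModel(prompt: string): string {
  const complex = prompt.length > 500 || /analy[sz]e|multi-step/i.test(prompt);
  return complex ? 'gpt-4o' : 'gpt-4o-mini';
}
```

You would then call `generateText({ model: openai(pickModel(prompt)), ... })` instead of hardcoding one model everywhere.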

By following these best practices, you'll build reliable, secure, and efficient applications with Metorial and the AI SDK.
