
Best Practices for Production

Guidelines for deploying Metorial with the AI SDK in production environments.

Environment Configuration

Separate Environments

Use different API keys and deployments for each environment:

# .env.development
METORIAL_API_KEY=dev_key_here
METORIAL_DEPLOYMENT_ID=dev_deployment_id

# .env.production
METORIAL_API_KEY=prod_key_here
METORIAL_DEPLOYMENT_ID=prod_deployment_id

Configuration Management

Centralize your configuration:

const config = {
  metorial: {
    apiKey: process.env.METORIAL_API_KEY!,
    deploymentId: process.env.METORIAL_DEPLOYMENT_ID!
  }
};

// Validate at startup
if (!config.metorial.apiKey || !config.metorial.deploymentId) {
  throw new Error('Missing Metorial configuration');
}

Error Handling

Comprehensive Error Catching

Implement robust error handling:

async function executeWithMetorial(prompt: string) {
  try {
    return await metorial.withProviderSession(
      metorialAISDK,
      { serverDeployments: [config.metorial.deploymentId] },
      async (session) => {
        const result = await generateText({
          model: yourModel,
          messages: [{ role: 'user', content: prompt }],
          tools: session.tools,
          maxSteps: 10
        });
        
        return result.text;
      }
    );
  } catch (error) {
    // Log error for monitoring
    console.error('Metorial execution failed:', error);
    
    // Return user-friendly message
    return 'Sorry, I encountered an issue. Please try again.';
  }
}

Graceful Degradation

Handle cases where Metorial is unavailable:

async function executeAI(prompt: string) {
  try {
    // Try with Metorial tools first
    return await metorial.withProviderSession(
      metorialAISDK,
      { serverDeployments: [config.metorial.deploymentId] },
      async (session) => {
        const result = await generateText({
          model: yourModel,
          messages: [{ role: 'user', content: prompt }],
          tools: session.tools,
          maxSteps: 10
        });

        return result.text;
      }
    );
  } catch (error) {
    console.warn('Falling back to basic mode:', error);

    // Fallback to basic AI without tools
    const result = await generateText({
      model: yourModel,
      messages: [{ role: 'user', content: prompt }]
    });

    return result.text;
  }
}

Performance Optimization

Connection Reuse

Reuse the Metorial client across requests:

// Create once at application startup
const metorial = new Metorial({
  apiKey: config.metorial.apiKey
});

// Reuse for all requests
async function handleRequest(prompt: string) {
  return await metorial.withProviderSession(
    metorialAISDK,
    { serverDeployments: [config.metorial.deploymentId] },
    async (session) => {
      // Handle request
    }
  );
}

Limit Tool Iterations

Set a reasonable maxSteps value to prevent runaway tool loops and long execution times:

const result = await generateText({
  model: yourModel,
  messages: [...],
  tools: session.tools,
  maxSteps: 5  // Reasonable limit for most tasks
});

Timeout Handling

Implement timeouts for long-running operations:

const TIMEOUT_MS = 30000; // 30 seconds

async function executeWithTimeout(prompt: string) {
  let timeoutId: ReturnType<typeof setTimeout> | undefined;

  const timeoutPromise = new Promise<never>((_, reject) => {
    timeoutId = setTimeout(() => reject(new Error('Timeout')), TIMEOUT_MS);
  });

  try {
    // Whichever settles first wins
    return await Promise.race([executeWithMetorial(prompt), timeoutPromise]);
  } finally {
    // Always clear the timer so it cannot fire after completion
    clearTimeout(timeoutId);
  }
}

Security

API Key Security

  • Never commit API keys to version control (see the .gitignore example after this list)
  • Use environment variables or secure secret management
  • Rotate keys regularly (every 90 days recommended)
  • Use separate keys for different services/environments
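
To keep keys out of version control, a common setup is to ignore all local env files and commit only a placeholder template; the .env.example name below is a convention, not a requirement:

# .gitignore
.env
.env.*
!.env.example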

Input Validation

Validate user input before passing to AI:

function sanitizeInput(input: string): string {
  // Trim whitespace and cap length; add format checks and
  // content filtering appropriate to your use case here
  return input.trim().slice(0, 1000);
}

const prompt = sanitizeInput(userInput);

Output Filtering

Review AI outputs before showing to users:

function filterOutput(output: string): string {
  // Example: redact email addresses before display; extend with your
  // own sensitive-data and inappropriate-content checks
  return output.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[redacted]');
}

Monitoring and Logging

Track Usage

Monitor tool usage and performance:

const startTime = Date.now();

const result = await generateText({
  model: yourModel,
  messages: [...],
  tools: session.tools
});

// Log metrics
console.log({
  timestamp: new Date(),
  stepsUsed: result.steps?.length || 0,
  toolsCalled: result.toolCalls?.length || 0,
  executionTime: Date.now() - startTime
});

Error Tracking

Integrate with error tracking services:

try {
  await executeWithMetorial(prompt);
} catch (error) {
  // Report to error tracking (e.g., Sentry)
  errorTracker.captureException(error, {
    tags: { service: 'metorial' },
    extra: { prompt, deploymentId: config.metorial.deploymentId }
  });
  throw error;
}

Deployment Checklist

Before going to production:

  • ✅ Use production API keys and deployment IDs
  • ✅ Implement comprehensive error handling
  • ✅ Set up monitoring and logging
  • ✅ Configure appropriate timeouts
  • ✅ Test failover scenarios
  • ✅ Document integration points
  • ✅ Set up alerting for failures
  • ✅ Review security configurations
  • ✅ Load test with realistic traffic
  • ✅ Have rollback plan ready

Scaling Considerations

As your application grows:

  • Monitor rate limits and upgrade plan as needed
  • Consider caching frequently accessed data (see the sketch after this list)
  • Implement request queuing for high traffic
  • Use multiple deployments for different use cases
  • Optimize prompts to reduce unnecessary tool calls
  • Review and optimize slow-performing integrations
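
As a starting point for caching, the sketch below keeps a small in-memory response cache keyed by the prompt. The executeWithCache and CACHE_TTL_MS names and the five-minute TTL are illustrative assumptions, and executeAI is the helper from the Graceful Degradation section; a multi-instance deployment would use a shared store such as Redis instead of a Map:

// Minimal in-memory cache; assumes a single process and a five-minute TTL
const CACHE_TTL_MS = 5 * 60 * 1000;
const responseCache = new Map<string, { value: string; expiresAt: number }>();

async function executeWithCache(prompt: string): Promise<string> {
  const cached = responseCache.get(prompt);
  if (cached && cached.expiresAt > Date.now()) {
    // Serve repeated prompts without another model or tool call
    return cached.value;
  }

  const value = await executeAI(prompt);
  responseCache.set(prompt, { value, expiresAt: Date.now() + CACHE_TTL_MS });
  return value;
}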
