Connect Integrations to TogetherAI

Build advanced AI agents with TogetherAI. Connect 600+ integrations, automate workflows, and deploy with ease using Metorial.

Best Practices for Production

Follow these guidelines to ensure your Metorial integration is production-ready, reliable, and performant.

Error Handling

Always implement comprehensive error handling:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// `metorial` and `metorialAiSdk` are assumed to be initialized elsewhere,
// per the Metorial quickstart for the Vercel AI SDK
async function processUserRequest(userMessage: string) {
  try {
    const result = await metorial.withProviderSession(
      metorialAiSdk,
      { serverDeployments: [process.env.METORIAL_DEPLOYMENT_ID!] },
      async session => {
        return await generateText({
          model: openai('gpt-4'),
          messages: [{ role: 'user', content: userMessage }],
          tools: session.tools,
          maxToolRoundtrips: 5
        });
      }
    );
    
    return { success: true, text: result.text };
  } catch (error) {
    console.error('Error processing request:', error);
    
    // Return user-friendly error message
    return { 
      success: false, 
      error: 'Unable to process your request. Please try again.' 
    };
  }
}

Rate Limiting

Implement rate limiting to prevent abuse and manage costs:

// Example using a simple fixed-window, in-memory rate limiter.
// Each user gets up to `maxRequests` requests per one-hour window.
type RateWindow = { count: number; windowStart: number };

const WINDOW_MS = 3_600_000; // 1 hour
const requestWindows = new Map<string, RateWindow>();

function checkRateLimit(userId: string, maxRequests: number = 10): boolean {
  const now = Date.now();
  const window = requestWindows.get(userId);

  // Start a fresh window if none exists or the current one has expired
  if (!window || now - window.windowStart >= WINDOW_MS) {
    requestWindows.set(userId, { count: 1, windowStart: now });
    return true;
  }

  if (window.count >= maxRequests) {
    return false;
  }

  window.count += 1;
  return true;
}

Timeout Management

Set appropriate timeouts to prevent hanging requests:

import { generateText } from 'ai';

async function generateWithTimeout(config: any, timeoutMs: number = 30000) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeoutPromise = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('Request timeout')), timeoutMs);
  });

  try {
    // Promise.race abandons the generation but does not cancel it;
    // pass an AbortSignal in `config` if you need true cancellation
    return await Promise.race([generateText(config), timeoutPromise]);
  } finally {
    clearTimeout(timer); // prevent the stray timer from keeping the process alive
  }
}

Monitoring and Logging

Implement proper logging for debugging and monitoring:

import { generateText } from 'ai';

async function generateWithLogging(config: any, requestId: string) {
  const startTime = Date.now();
  
  console.log(`[${requestId}] Starting generation`, {
    model: config.model,
    messageCount: config.messages.length
  });
  
  try {
    const result = await generateText(config);

    // Log tool calls from the final generation step
    for (const toolCall of result.toolCalls) {
      console.log(`[${requestId}] Tool called: ${toolCall.toolName}`);
    }
    
    const duration = Date.now() - startTime;
    console.log(`[${requestId}] Generation completed in ${duration}ms`);
    
    return result;
  } catch (error) {
    console.error(`[${requestId}] Generation failed`, error);
    throw error;
  }
}

Caching Strategies

Cache session configurations when appropriate:

// Cache deployment configurations that don't change often
const deploymentCache = new Map<string, { serverDeployments: string[] }>();

function getCachedDeploymentConfig(deploymentId: string) {
  const cached = deploymentCache.get(deploymentId);
  if (cached) {
    return cached;
  }

  // This is conceptual - the actual implementation depends on your needs
  const config = { serverDeployments: [deploymentId] };
  deploymentCache.set(deploymentId, config);

  return config;
}

Resource Cleanup

Ensure proper cleanup of resources:

async function processWithCleanup(userMessage: string) {
  let sessionActive = false;
  
  try {
    const result = await metorial.withProviderSession(
      metorialAiSdk,
      { serverDeployments: [process.env.METORIAL_DEPLOYMENT_ID!] },
      async session => {
        sessionActive = true;
        // Your processing logic
        return await generateText({
          model: openai('gpt-4'),
          messages: [{ role: 'user', content: userMessage }],
          tools: session.tools
        });
      }
    );
    
    sessionActive = false;
    return result;
  } finally {
    // If the callback threw before returning, the flag is still set;
    // log the abnormal exit so it is visible in monitoring
    if (sessionActive) {
      console.log('Session was not properly closed');
    }
  }
}

Testing in Production

  1. Use separate deployments: Create different server deployments for development, staging, and production
  2. Implement feature flags: Control rollout of new integrations (see the sketch after this list)
  3. Monitor usage metrics: Track API calls, errors, and latency
  4. Set up alerts: Get notified of unusual patterns or errors
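
Feature flags can gate a new integration behind a percentage rollout. A minimal sketch, assuming a hypothetical env-var-driven flag and a second deployment ID (the FLAG_* variables and METORIAL_DEPLOYMENT_ID_NEXT are illustrative names, not part of the Metorial SDK):

// Hypothetical helper: percentage rollout driven by an env var,
// bucketing users deterministically so each user's experience is stable
function isFeatureEnabled(flag: string, userId: string): boolean {
  const rolloutPercent = Number(process.env[`FLAG_${flag}`] ?? 0);
  const bucket = [...userId].reduce((sum, ch) => sum + ch.charCodeAt(0), 0) % 100;
  return bucket < rolloutPercent;
}

// Route flagged users to a deployment that includes the new integration
function deploymentIdFor(userId: string): string {
  return isFeatureEnabled('NEW_INTEGRATION', userId)
    ? process.env.METORIAL_DEPLOYMENT_ID_NEXT!  // illustrative
    : process.env.METORIAL_DEPLOYMENT_ID!;
}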

Performance Optimization

  • Stream responses for better user experience (see the sketch after this list)
  • Limit tool roundtrips to prevent long execution times
  • Choose appropriate models (faster models for simple tasks)
  • Implement request batching where applicable
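
Streaming returns tokens as they are generated instead of waiting for the whole completion. A minimal sketch using the AI SDK's streamText (shown standalone for brevity; inside withProviderSession you would pass session.tools exactly as in the earlier examples):

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function streamResponse(userMessage: string) {
  const result = streamText({
    model: openai('gpt-4'),
    messages: [{ role: 'user', content: userMessage }]
  });

  // Forward tokens to the client as they arrive
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}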

Security Checklist

  • ✅ API keys in environment variables only
  • ✅ Server-side API calls only (never client-side)
  • ✅ Input validation and sanitization (see the sketch after this list)
  • ✅ Rate limiting implemented
  • ✅ Error messages don't leak sensitive info
  • ✅ Regular security audits
  • ✅ Monitoring for suspicious activity
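
For input validation, a schema library such as zod keeps the checks declarative. A minimal sketch (the 4000-character limit is an arbitrary example, not a Metorial requirement):

import { z } from 'zod';

const userMessageSchema = z
  .string()
  .trim()
  .min(1, 'Message must not be empty')
  .max(4000, 'Message too long');

function validateUserMessage(input: unknown): string {
  // Throws a ZodError with a readable message on invalid input
  return userMessageSchema.parse(input);
}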

Cost Management

  • Set usage budgets in your Metorial dashboard
  • Monitor token usage and costs
  • Implement usage limits per user
  • Choose cost-effective models for appropriate tasks
  • Cache responses when possible (see the sketch after this list)
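
Caching identical prompts avoids paying twice for the same generation. A minimal in-memory sketch keyed on the message text (a production system would more likely use a shared store such as Redis with a TTL):

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const responseCache = new Map<string, string>();

async function cachedGenerate(userMessage: string): Promise<string> {
  const cached = responseCache.get(userMessage);
  if (cached !== undefined) {
    return cached; // no tokens spent on a repeated prompt
  }

  const { text } = await generateText({
    model: openai('gpt-4'),
    messages: [{ role: 'user', content: userMessage }]
  });

  responseCache.set(userMessage, text);
  return text;
}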

Deployment Checklist

Before deploying to production:

  • ✅ All API keys configured in production environment (see the startup check after this list)
  • ✅ Error handling implemented
  • ✅ Logging and monitoring set up
  • ✅ Rate limiting in place
  • ✅ Timeouts configured
  • ✅ Load testing completed
  • ✅ Rollback plan prepared
  • ✅ Team trained on debugging procedures
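
A startup check that fails fast on missing environment variables catches configuration gaps before the first user request does. A minimal sketch (variable names other than METORIAL_DEPLOYMENT_ID are illustrative; list whichever keys your setup actually requires):

const REQUIRED_ENV_VARS = [
  'METORIAL_API_KEY',       // illustrative name - match your setup
  'METORIAL_DEPLOYMENT_ID',
  'TOGETHER_API_KEY'        // illustrative name - match your setup
];

function assertEnvConfigured(): void {
  const missing = REQUIRED_ENV_VARS.filter(name => !process.env[name]);
  if (missing.length > 0) {
    // Fail fast at boot rather than erroring on the first user request
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}

assertEnvConfigured();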

TogetherAI on Metorial

Build powerful AI applications with TogetherAI and Metorial's comprehensive integration platform. Connect TogetherAI's diverse collection of open-source language models to over 600 integrations through our MCP-powered, open-source SDKs. Metorial makes it effortless to give your TogetherAI-based agents access to calendars, databases, communication tools, project management platforms, and hundreds of other services in just a couple of lines of Python or TypeScript code. Whether you're leveraging Llama, Mistral, or other models available through TogetherAI's platform, Metorial eliminates integration complexity so you can focus on building intelligent features. Our developer-first approach handles authentication, API management, error handling, and rate limiting automatically—no more maintaining brittle integration code or debugging OAuth flows. With Metorial's open-core model, you get the transparency and flexibility of open source with the reliability and support you need for production applications. Stop wasting engineering cycles on integration plumbing and start shipping AI-powered features that differentiate your product and delight your users. Let Metorial handle the connections while you concentrate on creating breakthrough AI experiences.

Connect anything. Anywhere.

Ready to build with Metorial?

Let's take your AI-powered applications to the next level, together.

About Metorial

Metorial provides developers with instant access to 600+ MCP servers for building AI agents that can interact with real-world tools and services. Built on MCP, Metorial simplifies agent tool integration by offering pre-configured connections to popular platforms like Google Drive, Slack, GitHub, Notion, and hundreds of other APIs. Our platform supports all major AI agent frameworks—including LangChain, AutoGen, CrewAI, and LangGraph—enabling developers to add tool calling capabilities to their agents in just a few lines of code. By eliminating the need for custom integration code, Metorial helps AI developers move from prototype to production faster while maintaining security and reliability. Whether you're building autonomous research agents, customer service bots, or workflow automation tools, Metorial's MCP server library provides the integrations you need to connect your agents to the real world.
