Build advanced AI agents with TogetherAI. Connect 600+ integrations, automate workflows, and deploy with ease using Metorial.
Follow these guidelines to ensure your Metorial integration is production-ready, reliable, and performant.
Always implement comprehensive error handling:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// `metorial` and `metorialAiSdk` are assumed to be initialized as shown in the SDK setup guide
async function processUserRequest(userMessage: string) {
  try {
    const result = await metorial.withProviderSession(
      metorialAiSdk,
      { serverDeployments: [process.env.METORIAL_DEPLOYMENT_ID!] },
      async session => {
        return await generateText({
          model: openai('gpt-4'),
          messages: [{ role: 'user', content: userMessage }],
          tools: session.tools,
          maxToolRoundtrips: 5
        });
      }
    );

    return { success: true, text: result.text };
  } catch (error) {
    console.error('Error processing request:', error);
    // Return a user-friendly message; keep the raw error in server logs only
    return {
      success: false,
      error: 'Unable to process your request. Please try again.'
    };
  }
}
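A minimal sketch of how calling code might consume this result shape (the example message is hypothetical):
// Illustrative usage of processUserRequest
const response = await processUserRequest('Summarize my unread messages');

if (response.success) {
  console.log(response.text);
} else {
  // Safe to surface to the end user; details are already in the server logs
  console.error(response.error);
}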
Implement rate limiting to prevent abuse and manage costs:
// Example using a simple in-memory rate limiter (single process only;
// use a shared store such as Redis when running multiple instances)
const requestCounts = new Map<string, number>();

function checkRateLimit(userId: string, maxRequests: number = 10): boolean {
  const count = requestCounts.get(userId) ?? 0;
  if (count >= maxRequests) {
    return false;
  }
  requestCounts.set(userId, count + 1);

  // Schedule the reset only when a new window starts, so later requests
  // in the same window don't clear the count early
  if (count === 0) {
    setTimeout(() => {
      requestCounts.delete(userId);
    }, 3600000); // 1 hour
  }
  return true;
}
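One way to wire the limiter in front of the request handler above might look like this (the userId parameter and rejection message are illustrative):
// Gate each call on the per-user limit before spending tokens
async function handleUserMessage(userId: string, message: string) {
  if (!checkRateLimit(userId)) {
    return { success: false, error: 'Rate limit exceeded. Please try again later.' };
  }
  return processUserRequest(message);
}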
Set appropriate timeouts to prevent hanging requests:
import { generateText } from 'ai';

async function generateWithTimeout(config: any, timeoutMs: number = 30000) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeoutPromise = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('Request timeout')), timeoutMs);
  });
  const generationPromise = generateText(config);
  // Clear the timer afterwards so a finished request doesn't leave a stray timeout pending
  return Promise.race([generationPromise, timeoutPromise]).finally(() => clearTimeout(timer));
}
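As a sketch, the wrapper can be used inside the provider session exactly where generateText was called before (the prompt and the 15-second limit are illustrative):
const result = await metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: [process.env.METORIAL_DEPLOYMENT_ID!] },
  async session =>
    generateWithTimeout(
      {
        model: openai('gpt-4'),
        messages: [{ role: 'user', content: 'Plan my week' }],
        tools: session.tools
      },
      15000 // fail fast instead of waiting for the full 30-second default
    )
);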
Implement proper logging for debugging and monitoring:
import { generateText } from 'ai';

async function generateWithLogging(config: any, requestId: string) {
  const startTime = Date.now();
  console.log(`[${requestId}] Starting generation`, {
    model: config.model,
    messageCount: config.messages.length
  });

  try {
    const result = await generateText(config);

    // Log the tool calls recorded on the result once generation finishes
    for (const toolCall of result.toolCalls) {
      console.log(`[${requestId}] Tool called: ${toolCall.toolName}`);
    }

    const duration = Date.now() - startTime;
    console.log(`[${requestId}] Generation completed in ${duration}ms`);
    return result;
  } catch (error) {
    console.error(`[${requestId}] Generation failed`, error);
    throw error;
  }
}
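For instance, a caller might tag each request with a generated ID so its log lines can be correlated (assuming openai is imported as in the earlier examples; the prompt is hypothetical):
import { randomUUID } from 'node:crypto';

const requestId = randomUUID();
const result = await generateWithLogging(
  {
    model: openai('gpt-4'),
    messages: [{ role: 'user', content: 'Draft a status update' }]
  },
  requestId
);
console.log(`[${requestId}]`, result.text);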
Cache session configurations when appropriate:
// Cache deployment configurations that don't change often
const deploymentCache = new Map<string, { serverDeployments: string[] }>();

function getCachedSessionConfig(deploymentId: string) {
  const cached = deploymentCache.get(deploymentId);
  if (cached) {
    return cached;
  }
  // This is conceptual - the right caching strategy depends on your needs
  const config = { serverDeployments: [deploymentId] };
  deploymentCache.set(deploymentId, config);
  return config;
}
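A sketch of feeding the cached configuration into a session (the prompt is illustrative):
const config = getCachedSessionConfig(process.env.METORIAL_DEPLOYMENT_ID!);

const result = await metorial.withProviderSession(
  metorialAiSdk,
  config,
  async session =>
    generateText({
      model: openai('gpt-4'),
      messages: [{ role: 'user', content: 'What changed in the project today?' }],
      tools: session.tools
    })
);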
Ensure proper cleanup of resources:
async function processWithCleanup(userMessage: string) {
  let sessionActive = false;
  try {
    const result = await metorial.withProviderSession(
      metorialAiSdk,
      { serverDeployments: [process.env.METORIAL_DEPLOYMENT_ID!] },
      async session => {
        sessionActive = true;
        // Your processing logic
        return await generateText({
          model: openai('gpt-4'),
          messages: [{ role: 'user', content: userMessage }],
          tools: session.tools
        });
      }
    );
    sessionActive = false;
    return result;
  } finally {
    // The finally block runs even if generation throws; if the flag is still
    // set, the session callback exited with an error
    if (sessionActive) {
      console.log('Session was not properly closed');
    }
  }
}
Before deploying to production, verify each of the practices above: comprehensive error handling, rate limiting, request timeouts, logging and monitoring, caching where appropriate, and proper resource cleanup, with API keys and deployment IDs supplied through environment variables.
Build powerful AI applications with TogetherAI and Metorial's comprehensive integration platform. Connect TogetherAI's diverse collection of open-source language models to over 600 integrations through our MCP-powered, open-source SDKs. Metorial makes it effortless to give your TogetherAI-based agents access to calendars, databases, communication tools, project management platforms, and hundreds of other services in just a couple of lines of Python or TypeScript code. Whether you're leveraging Llama, Mistral, or other models available through TogetherAI's platform, Metorial eliminates integration complexity so you can focus on building intelligent features. Our developer-first approach handles authentication, API management, error handling, and rate limiting automatically—no more maintaining brittle integration code or debugging OAuth flows. With Metorial's open-core model, you get the transparency and flexibility of open source with the reliability and support you need for production applications. Stop wasting engineering cycles on integration plumbing and start shipping AI-powered features that differentiate your product and delight your users. Let Metorial handle the connections while you concentrate on creating breakthrough AI experiences.
Let's take your AI-powered applications to the next level, together.
Metorial provides developers with instant access to 600+ MCP servers for building AI agents that can interact with real-world tools and services. Built on MCP, Metorial simplifies agent tool integration by offering pre-configured connections to popular platforms like Google Drive, Slack, GitHub, Notion, and hundreds of other APIs. Our platform supports all major AI agent frameworks—including LangChain, AutoGen, CrewAI, and LangGraph—enabling developers to add tool calling capabilities to their agents in just a few lines of code. By eliminating the need for custom integration code, Metorial helps AI developers move from prototype to production faster while maintaining security and reliability. Whether you're building autonomous research agents, customer service bots, or workflow automation tools, Metorial's MCP server library provides the integrations you need to connect your agents to the real world.