TogetherAI

Connect Integrations to TogetherAI

Build advanced AI agents with TogetherAI. Connect 600+ integrations, automate workflows, and deploy with ease using Metorial.


Handling Tool Calls and Responses

Learn how to properly handle tool calls from your AI model and process the responses from Metorial integrations.

The Basic Flow

When using Metorial with the AI SDK, the typical flow is:

  1. Send a message to the AI model with tools available
  2. The model decides which tools to call (if any)
  3. You execute those tool calls through Metorial
  4. You send the results back to the model
  5. The model uses the results to form its final response

Using generateText with Tools

The simplest approach uses the AI SDK's generateText function:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Assumes `metorial` is your initialized Metorial client and `metorialAiSdk`
// is the AI SDK adapter from the Metorial SDK.
await metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: ['your-deployment-id'] },
  async session => {
    const result = await generateText({
      model: openai('gpt-4'),
      messages: [
        { role: 'user', content: 'Schedule a meeting for tomorrow at 2pm' }
      ],
      tools: session.tools,
      maxToolRoundtrips: 5 // Allow up to 5 rounds of tool calls
    });
    
    console.log(result.text);
  }
);

The AI SDK automatically handles tool calls and responses when using generateText with tools.

Manual Tool Call Handling

For more control, you can manually handle the tool call loop:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const messages = [
  { role: 'user', content: 'What tasks are due today?' }
];

await metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: ['your-deployment-id'] },
  async session => {
    // Cap iterations to guard against runaway tool-call loops
    for (let i = 0; i < 10; i++) {
      const result = await generateText({
        model: openai('gpt-4'),
        messages,
        tools: session.tools,
        maxToolRoundtrips: 0 // Disable automatic tool calling
      });
      
      // Check if the model wants to call tools
      if (result.toolCalls && result.toolCalls.length > 0) {
        // Call tools through Metorial
        const toolResponses = await session.callTools(result.toolCalls);
        
        // Add to message history
        messages.push(
          { role: 'assistant', toolCalls: result.toolCalls },
          ...toolResponses
        );
        continue;
      }
      
      // No more tool calls - we have the final answer
      console.log(result.text);
      break;
    }
  }
);
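The manual loop above can be distilled into a small, testable driver. In the sketch below, the model step and the Metorial session are stubbed out; names like `runToolLoop`, `makeFakeStep`, and `fakeCallTools` are illustrative and not part of either SDK. Only the control flow mirrors the loop in the docs.

```typescript
// Minimal sketch of the manual tool-call loop with the model and the
// Metorial session replaced by fakes. Types are simplified for clarity.
type ToolCall = { toolCallId: string; toolName: string; args: unknown };
type Message = { role: string; content?: string; toolCalls?: ToolCall[] };

interface StepResult {
  text: string;
  toolCalls: ToolCall[];
}

async function runToolLoop(
  step: (messages: Message[]) => Promise<StepResult>,
  callTools: (calls: ToolCall[]) => Promise<Message[]>,
  messages: Message[],
  maxRounds = 10
): Promise<string> {
  for (let i = 0; i < maxRounds; i++) {
    const result = await step(messages);
    if (result.toolCalls.length > 0) {
      // Record the assistant's tool calls and the tool results, then loop.
      messages.push({ role: 'assistant', toolCalls: result.toolCalls });
      messages.push(...(await callTools(result.toolCalls)));
      continue;
    }
    return result.text; // No tool calls left: this is the final answer.
  }
  throw new Error('Tool loop exceeded maxRounds');
}

// Fake model: asks for one tool call on the first round, then answers.
const makeFakeStep = () => {
  let round = 0;
  return async (_m: Message[]): Promise<StepResult> =>
    round++ === 0
      ? { text: '', toolCalls: [{ toolCallId: '1', toolName: 'listTasks', args: {} }] }
      : { text: 'You have 2 tasks due today.', toolCalls: [] };
};

// Fake session.callTools: one tool-result message per call.
const fakeCallTools = async (calls: ToolCall[]): Promise<Message[]> =>
  calls.map(() => ({ role: 'tool', content: '["write report", "review PR"]' }));

runToolLoop(makeFakeStep(), fakeCallTools, [
  { role: 'user', content: 'What tasks are due today?' }
]).then(answer => console.log(answer)); // → "You have 2 tasks due today."
```

Structuring the loop this way also makes the `maxRounds` guard from the best practices below easy to enforce in one place.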

Understanding Tool Responses

Tool responses from Metorial contain the results of integration calls. The session.callTools method automatically formats these responses for the AI SDK.
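Concretely, in AI SDK v3-style messages (the version that uses `maxToolRoundtrips`), a tool result is a message with `role: 'tool'` whose content is a list of `tool-result` parts. The hand-built example below shows that shape; the tool name and result payload are made up for illustration.

```typescript
// Hand-built example of the AI SDK v3-style tool-result message shape.
// The toolName and result values are hypothetical.
const toolResponse = {
  role: 'tool' as const,
  content: [
    {
      type: 'tool-result' as const,
      toolCallId: 'call_abc123',          // must match the assistant's tool call
      toolName: 'calendar_create_event',  // hypothetical integration tool
      result: { ok: true, eventId: 'evt_42' }
    }
  ]
};

console.log(toolResponse.content[0].toolName); // → "calendar_create_event"
```

The `toolCallId` is what ties a result back to the specific call the model made, so it must be echoed unchanged; `session.callTools` takes care of this pairing for you.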

Error Handling

Always wrap model calls and tool executions in try-catch blocks:

try {
  const result = await generateText({
    model: openai('gpt-4'),
    messages,
    tools: session.tools
  });
  console.log(result.text);
} catch (error) {
  console.error('Error during generation:', error.message);
  // Handle the error appropriately
}
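Beyond logging, transient failures such as rate limits or network blips often deserve a retry. Below is a generic retry-with-backoff helper you might wrap `generateText` in; it is a sketch, not part of the AI SDK or Metorial (`withRetries` and `makeFlaky` are illustrative names).

```typescript
// Generic retry-with-exponential-backoff helper; not an SDK feature.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Fake operation that fails twice, then succeeds.
const makeFlaky = () => {
  let calls = 0;
  return async () => {
    if (++calls < 3) throw new Error('rate limited');
    return `ok after ${calls} calls`;
  };
};

withRetries(makeFlaky()).then(v => console.log(v)); // → "ok after 3 calls"
```

In production you would typically only retry errors that are plausibly transient (429s, timeouts) and surface the rest immediately.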

Best Practices

  • Set reasonable maxToolRoundtrips to prevent infinite loops
  • Log tool calls during development to understand agent behavior
  • Handle errors gracefully and provide fallback responses
  • Consider setting timeouts for long-running operations
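For the timeout point above, one lightweight approach is racing the call against a timer. The helper below is a generic sketch (`withTimeout` is not an SDK function); note that `generateText` also accepts an `abortSignal` option if you prefer true cancellation over racing.

```typescript
// Race a promise against a timeout; generic helper, not an SDK feature.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    )
  ]);
}

// A stand-in "tool call" that takes 50ms, raced against a 10ms budget.
const slow = new Promise<string>(resolve => setTimeout(() => resolve('done'), 50));

withTimeout(slow, 10).catch(err => console.log(err.message)); // → "timed out after 10ms"
```

Keep in mind that `Promise.race` does not cancel the losing operation; the underlying request continues in the background unless you also abort it.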

TogetherAI on Metorial

Build powerful AI applications with TogetherAI and Metorial's comprehensive integration platform. Connect TogetherAI's diverse collection of open-source language models to over 600 integrations through our MCP-powered, open-source SDKs. Metorial makes it effortless to give your TogetherAI-based agents access to calendars, databases, communication tools, project management platforms, and hundreds of other services in just a couple of lines of Python or TypeScript code. Whether you're leveraging Llama, Mistral, or other models available through TogetherAI's platform, Metorial eliminates integration complexity so you can focus on building intelligent features. Our developer-first approach handles authentication, API management, error handling, and rate limiting automatically—no more maintaining brittle integration code or debugging OAuth flows. With Metorial's open-core model, you get the transparency and flexibility of open source with the reliability and support you need for production applications. Stop wasting engineering cycles on integration plumbing and start shipping AI-powered features that differentiate your product and delight your users. Let Metorial handle the connections while you concentrate on creating breakthrough AI experiences.

Connect anything. Anywhere.

Ready to build with Metorial?

Let's take your AI-powered applications to the next level, together.

About Metorial

Metorial provides developers with instant access to 600+ MCP servers for building AI agents that can interact with real-world tools and services. Built on MCP, Metorial simplifies agent tool integration by offering pre-configured connections to popular platforms like Google Drive, Slack, GitHub, Notion, and hundreds of other APIs. Our platform supports all major AI agent frameworks—including LangChain, AutoGen, CrewAI, and LangGraph—enabling developers to add tool calling capabilities to their agents in just a few lines of code. By eliminating the need for custom integration code, Metorial helps AI developers move from prototype to production faster while maintaining security and reliability. Whether you're building autonomous research agents, customer service bots, or workflow automation tools, Metorial's MCP server library provides the integrations you need to connect your agents to the real world.
