
Streaming Responses with AI SDK

Learn how to implement streaming responses when using Metorial with Vercel's AI SDK for real-time user experiences.

Why Stream?

Streaming responses provide a better user experience by showing results as they're generated rather than waiting for the complete response. This is especially important for longer responses or when tool calls are involved.

Basic Streaming Setup

Use the streamText function from the AI SDK:

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { metorialAiSdk } from '@metorial/ai-sdk';
import { Metorial } from 'metorial';

const metorial = new Metorial({
  apiKey: process.env.METORIAL_API_KEY!
});

await metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: ['your-deployment-id'] },
  async session => {
    const result = streamText({
      model: openai('gpt-4'),
      messages: [
        { role: 'user', content: 'Explain how our CRM integration works' }
      ],
      tools: session.tools
    });

    // Print each text chunk as it arrives
    for await (const chunk of result.textStream) {
      process.stdout.write(chunk);
    }
  }
);
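
Once the stream has been drained, the aggregate promises on the result resolve as well. A minimal sketch using the result's promise accessors (this code belongs inside the session callback above):

// After consuming textStream, the aggregated values are available
const fullText = await result.text;  // the complete generated text
const usage = await result.usage;    // token usage for the request
console.log(`\n[${fullText.length} chars, ${usage.totalTokens} tokens]`);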

Handling Tool Calls in Streams

The AI SDK executes tool calls automatically, even while streaming. Use the onChunk callback to observe them as they happen:

const result = streamText({
  model: openai('gpt-4'),
  messages: [
    { role: 'user', content: 'Find my next meeting and summarize it' }
  ],
  tools: session.tools,
  maxToolRoundtrips: 5,
  // Observe stream chunks as they arrive, including tool calls
  onChunk({ chunk }) {
    if (chunk.type === 'tool-call') {
      console.log('Tool called:', chunk.toolName);
    }
  }
});

// Stream the final text
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
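
If you also need the resolved tool outputs once streaming completes, the result exposes aggregate promises. A minimal sketch; the exact shape of these values varies slightly between AI SDK versions:

// These resolve after the stream has been fully consumed
const toolCalls = await result.toolCalls;      // tool invocations made by the model
const toolResults = await result.toolResults;  // their resolved outputs
console.log(`Completed ${toolCalls.length} tool call(s)`);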

Full Delta Stream

For complete control over the stream, including tool calls, tool results, errors, and the finish event, iterate fullStream:

const result = streamText({
  model: openai('gpt-4'),
  messages,
  tools: session.tools
});

for await (const delta of result.fullStream) {
  switch (delta.type) {
    case 'text-delta':
      // Incremental text as it is generated
      process.stdout.write(delta.textDelta);
      break;
    case 'tool-call':
      console.log('Calling tool:', delta.toolName);
      break;
    case 'tool-result':
      console.log('Tool result received');
      break;
    case 'error':
      // Surface stream-level errors instead of failing silently
      console.error('Stream error:', delta.error);
      break;
    case 'finish':
      console.log('Finish reason:', delta.finishReason);
      break;
  }
}

Server-Side Streaming (Next.js)

Here's an example for Next.js API routes:

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { metorialAiSdk } from '@metorial/ai-sdk';
import { Metorial } from 'metorial';

export async function POST(req: Request) {
  const { messages } = await req.json();
  
  const metorial = new Metorial({
    apiKey: process.env.METORIAL_API_KEY!
  });
  
  return metorial.withProviderSession(
    metorialAiSdk,
    { serverDeployments: [process.env.DEPLOYMENT_ID!] },
    async session => {
      const result = streamText({
        model: openai('gpt-4'),
        messages,
        tools: session.tools
      });
      
      return result.toDataStreamResponse();
    }
  );
}
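
On the client, the stream produced by toDataStreamResponse() can be consumed with the AI SDK's React hooks. A minimal sketch, assuming the route above is served at /api/chat (depending on your AI SDK version, useChat is exported from 'ai/react' or '@ai-sdk/react'):

'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  // useChat posts to the route and renders messages as they stream in
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat'
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map(m => (
        <p key={m.id}>{m.role}: {m.content}</p>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}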

Best Practices

  • Always use streaming for user-facing applications
  • Monitor tool calls to show loading states to users
  • Handle stream errors with proper error boundaries
  • Consider implementing retry logic for failed streams (see the sketch after this list)
  • Use toDataStreamResponse() for easy Next.js integration
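
A minimal retry sketch for flaky streams; the streamWithRetry helper and its backoff policy are illustrative, not part of the Metorial or AI SDK APIs:

// Hypothetical helper: re-runs the whole stream up to maxAttempts times.
// Note that a retry restarts generation; partial output is discarded.
async function streamWithRetry(
  run: () => Promise<void>,
  maxAttempts = 3
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await run();
      return;
    } catch (err) {
      console.error(`Stream attempt ${attempt} failed:`, err);
      if (attempt === maxAttempts) throw err;
      // Simple linear backoff before retrying
      await new Promise(resolve => setTimeout(resolve, 500 * attempt));
    }
  }
}

// Usage: wrap the streaming loop from the basic example
await streamWithRetry(async () => {
  const result = streamText({
    model: openai('gpt-4'),
    messages: [{ role: 'user', content: 'Explain how our CRM integration works' }],
    tools: session.tools
  });

  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
});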
