
Connect Integrations to Deepseek

Build advanced AI agents with Deepseek. Connect 600+ integrations, automate workflows, and deploy with ease using Metorial.


Handling Tool Calls with AI SDK

When using Metorial with Vercel's AI SDK, the framework handles most tool execution automatically. This guide explains how tool calling works and how to customize the behavior.

Automatic Tool Execution

The AI SDK can automatically execute tools for you:

import { generateText } from 'ai';

// `metorial`, `metorialAiSdk`, and `yourModel` are assumed to come from your
// existing Metorial client and model provider setup (see the Deepseek overview).
await metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: ['your-deployment-id'] },
  async session => {
    const result = await generateText({
      model: yourModel,
      messages: [{ role: 'user', content: 'Check my calendar for today' }],
      tools: session.tools,
      maxSteps: 5 // Allow multiple tool calls
    });
    
    console.log(result.text);
  }
);

The maxSteps parameter caps how many model/tool round trips the AI SDK will run before returning a final response; each step is one model call, which may trigger one or more tool calls.
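
To see how many steps were actually used, recent AI SDK versions expose the intermediate steps on the result object. A minimal sketch, reusing the result from the example above and assuming a steps array is available in your AI SDK version:

// Each entry corresponds to one model call; steps that triggered tools
// also carry their tool calls and tool results.
console.log(`used ${result.steps.length} of 5 allowed steps`);

for (const step of result.steps) {
  console.log(step.toolCalls.map(call => call.toolName));
}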

Streaming Responses

For real-time updates, use streaming:

import { streamText } from 'ai';

await metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: ['your-deployment-id'] },
  async session => {
    const result = streamText({
      model: yourModel,
      messages: [{ role: 'user', content: 'Search my emails from last week' }],
      tools: session.tools,
      maxSteps: 5
    });
    
    for await (const chunk of result.textStream) {
      process.stdout.write(chunk);
    }
  }
);
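
To surface tool activity alongside the text, the AI SDK also exposes a fullStream that interleaves text deltas with tool events. A sketch of consuming it instead of the textStream loop above, assuming AI SDK v4-style part names (they differ in newer major versions):

for await (const part of result.fullStream) {
  if (part.type === 'text-delta') {
    // Plain text tokens from the model.
    process.stdout.write(part.textDelta);
  } else if (part.type === 'tool-call') {
    console.log(`\n[calling ${part.toolName}]`);
  } else if (part.type === 'tool-result') {
    console.log(`[${part.toolName} returned]`);
  }
}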

Manual Tool Execution

If you need more control, you can handle tool calls manually:

const result = await generateText({
  model: yourModel,
  messages: messages,
  tools: session.tools,
  maxSteps: 1 // Only allow one step
});

// Check if there are tool calls
if (result.toolCalls && result.toolCalls.length > 0) {
  // Execute tools through Metorial
  const toolResults = await session.callTools(result.toolCalls);
  
  // Continue conversation with tool results
  // ...
}
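
The continuation left as a comment above could look roughly like the following. This is an illustrative sketch, not confirmed Metorial API behavior: it assumes session.callTools resolves to one result per entry of result.toolCalls, in the same order, and it uses the AI SDK's tool-result message shape, so check the shapes your installed versions actually produce.

// Inside the if block above, after session.callTools has resolved:
const followUp = await generateText({
  model: yourModel,
  messages: [
    ...messages,
    // Assistant message(s) carrying the tool calls from the first step.
    ...result.response.messages,
    {
      role: 'tool',
      content: result.toolCalls.map((call, i) => ({
        type: 'tool-result',
        toolCallId: call.toolCallId,
        toolName: call.toolName,
        // Assumption: toolResults[i] corresponds to result.toolCalls[i].
        result: toolResults[i]
      }))
    }
  ],
  tools: session.tools
});

console.log(followUp.text);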

Understanding the Flow

  1. Your user sends a message
  2. The AI SDK sends the message and available tools to the model
  3. The model decides which tools to call (if any)
  4. Metorial executes the tool calls through your configured integrations
  5. Results are returned to the model
  6. The model generates a final response
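
You can watch this loop as it runs by logging each step. Recent AI SDK versions accept an onStepFinish callback on generateText and streamText; a minimal sketch, reusing the placeholders from the examples above:

const result = await generateText({
  model: yourModel,
  messages: [{ role: 'user', content: 'Check my calendar for today' }],
  tools: session.tools,
  maxSteps: 5,
  // Fires after every step: intermediate steps list the tool calls the model
  // made (steps 3-5 above); the last step is the final text response (step 6).
  onStepFinish: step => {
    console.log('tool calls:', step.toolCalls.map(call => call.toolName));
    console.log('finish reason:', step.finishReason);
  }
});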

Error Handling

Wrap generation calls that use tools in try/catch blocks so that failures inside an integration surface as handled errors:

try {
  const result = await generateText({
    model: yourModel,
    messages: messages,
    tools: session.tools,
    maxSteps: 5
  });
  console.log(result.text);
} catch (error) {
  console.error('Tool execution failed:', error);
}
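
For transient failures, such as rate limits or a flaky upstream API behind an integration, a simple retry wrapper is often enough. A generic sketch, not specific to Metorial or the AI SDK:

async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Exponential backoff: 500 ms, 1 s, 2 s, ...
      await new Promise(resolve => setTimeout(resolve, 500 * 2 ** i));
    }
  }
  throw lastError;
}

const result = await withRetries(() =>
  generateText({
    model: yourModel,
    messages: messages,
    tools: session.tools,
    maxSteps: 5
  })
);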

Deepseek on Metorial

Power your Deepseek AI agents with Metorial's extensive integration library featuring over 600 tools and services. Our MCP-powered platform makes it incredibly simple to connect Deepseek models to the APIs and services your applications need. With Metorial's TypeScript and Python SDKs, you can add integrations to your Deepseek-based agents in just a couple of lines of code, eliminating weeks of custom integration development. Whether you're building code assistants, data analysis tools, or intelligent automation with Deepseek's advanced models, Metorial provides instant connectivity to productivity tools, databases, communication platforms, and more. Our open-source, developer-first approach means you maintain full control while we handle the complexity of authentication, rate limiting, error handling, and API versioning. Focus your engineering resources on creating unique AI experiences rather than maintaining integration code. Join the growing community of developers who trust Metorial to handle their integration needs while they concentrate on innovation and delivering value to their users.

About Metorial

Metorial provides developers with instant access to 600+ MCP servers for building AI agents that can interact with real-world tools and services. Built on MCP, Metorial simplifies agent tool integration by offering pre-configured connections to popular platforms like Google Drive, Slack, GitHub, Notion, and hundreds of other APIs. Our platform supports all major AI agent frameworks—including LangChain, AutoGen, CrewAI, and LangGraph—enabling developers to add tool calling capabilities to their agents in just a few lines of code. By eliminating the need for custom integration code, Metorial helps AI developers move from prototype to production faster while maintaining security and reliability. Whether you're building autonomous research agents, customer service bots, or workflow automation tools, Metorial's MCP server library provides the integrations you need to connect your agents to the real world.
