
Using generateText with Metorial Tools

Learn how to use the AI SDK's generateText function with Metorial-powered tools for agentic workflows.

Basic Usage

The generateText function runs a single AI completion with tool support:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// `metorial` is an initialized Metorial client and `metorialAiSdk` is the
// AI SDK adapter from the Metorial SDK (see the AI SDK overview for setup)
metorial.withProviderSession(
  metorialAiSdk,
  { serverDeployments: ['your-deployment-id'] },
  async session => {
    let result = await generateText({
      model: openai('gpt-4o'),
      prompt: 'What are my upcoming meetings today?',
      maxSteps: 10,
      tools: session.tools
    });

    console.log(result.text);
  }
);

Understanding maxSteps

The maxSteps parameter caps how many sequential steps the model may take; each step can include tool calls, so higher values allow longer chains of tool use:

  • Low (1-3): For simple queries that need one or two tool calls
  • Medium (5-10): For most use cases with moderate complexity
  • High (10+): For complex tasks requiring multiple tool interactions

Example with different complexities:

// Simple query - needs just one tool call
let simple = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather today?',
  maxSteps: 3,
  tools: session.tools
});

// Complex query - may need multiple tool calls
let complex = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize all GitHub issues assigned to me, check my calendar, and suggest which ones I should work on today',
  maxSteps: 15,
  tools: session.tools
});

Accessing Result Details

The result object contains useful information:

let result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Search my emails for messages about the Q4 project',
  maxSteps: 10,
  tools: session.tools
});

console.log('Final answer:', result.text);
console.log('Total steps taken:', result.steps?.length);
console.log('Finish reason:', result.finishReason); // e.g. 'stop' or 'tool-calls'

How Tool Calling Works

  1. AI Receives Prompt: The model analyzes your prompt
  2. Decides to Use Tools: If needed, it chooses which Metorial tool to call
  3. Tool Executes: Metorial handles the API call and returns results
  4. AI Continues: The model uses the tool response to answer or call more tools
  5. Final Response: After completing its task, the AI provides the final answer
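
You can watch this loop by inspecting result.steps after the call completes. The sketch below assumes the same session context as the earlier examples and follows the AI SDK 4.x step shape (toolCalls and toolResults on each step); newer SDK versions rename some of these fields:

let result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What are my upcoming meetings today?',
  maxSteps: 10,
  tools: session.tools
});

// Each step is one iteration of the loop above: the tool calls the model made
// plus the results Metorial returned for them
for (let step of result.steps ?? []) {
  for (let toolCall of step.toolCalls) {
    console.log('Called tool:', toolCall.toolName, 'with args:', toolCall.args);
  }
  for (let toolResult of step.toolResults) {
    console.log('Tool result:', toolResult.result);
  }
}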

Common Patterns

Information Retrieval

let result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Find the README.md file in the main repository and summarize it',
  maxSteps: 5,
  tools: session.tools
});

Multi-Step Tasks

let result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Create a new GitHub issue for the bug in the login flow, then send a Slack message to #engineering about it',
  maxSteps: 10,
  tools: session.tools
});

Data Analysis

let result = await generateText({
  model: openai('gpt-4o'),
  prompt: "Analyze our team's GitHub activity for the past week and provide insights",
  maxSteps: 12,
  tools: session.tools
});

Tips for Better Results

  • Be Specific: Clear prompts help the AI choose the right tools
  • Set Appropriate maxSteps: Too low and the task might fail; too high wastes resources
  • Test Iteratively: Start with simple prompts and gradually increase complexity
  • Handle Errors: Wrap calls in try-catch blocks for production use, as shown below
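
For example, a minimal error-handling sketch (the prompt is illustrative; adapt the fallback to your own logging and retry strategy):

try {
  let result = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Summarize the open issues in my main repository',
    maxSteps: 10,
    tools: session.tools
  });

  console.log(result.text);
} catch (error) {
  // Model or tool failures surface here; log them and fall back gracefully
  console.error('generateText failed:', error);
}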
