Connect Integrations to Google Gemini

Build advanced AI agents with Google Gemini. Connect 600+ integrations, automate workflows, and deploy with ease using Metorial.


Common Integration Patterns

Learn common patterns for using Google Gemini with Metorial's 600+ integrations.
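
All snippets on this page share a small amount of setup. The sketch below shows one way to wire it up: the Gemini client comes from the @google/genai SDK, while the Metorial import paths shown here are assumptions, so check the Metorial docs for the exact package names in your project.

// Gemini client and types from the @google/genai SDK
import { GoogleGenAI, Content } from '@google/genai';

// Metorial SDK plus its Gemini provider adapter
// (import paths are assumptions; verify against the Metorial docs)
import { Metorial } from 'metorial';
import { metorialGoogle } from '@metorial/google';

const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY! });
const metorial = new Metorial({ apiKey: process.env.METORIAL_API_KEY! });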

Pattern 1: Simple Query and Response

The most basic pattern is to ask a question and get an answer backed by your integrations:

await metorial.withProviderSession(
  metorialGoogle,
  { serverDeployments: ['your-server-deployment-id'] },
  async session => {
    let response = await genAI.models.generateContent({
      model: 'gemini-1.5-pro-latest',
      contents: [{
        role: 'user',
        parts: [{ text: 'What are my meetings today?' }]
      }],
      config: { tools: session.tools }
    });

    // Collect any functionCall parts the model emitted
    let functionCalls = response.candidates?.[0]?.content?.parts
      ?.filter(part => part.functionCall)
      .map(part => part.functionCall!);

    if (functionCalls && functionCalls.length > 0) {
      // Execute the requested tools against your connected integrations
      let results = await session.callTools(functionCalls);
      console.log('Your meetings:', results);
    }
  }
);
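
The function-call extraction above reappears in every pattern on this page. If you'd rather not repeat it, you can factor it into a small helper. extractFunctionCalls is a hypothetical convenience function, not part of either SDK, built on the response types exported by @google/genai:

import { GenerateContentResponse, FunctionCall } from '@google/genai';

// Pull every functionCall part out of the first candidate's content.
// Returns an empty array when the model answered with plain text only.
function extractFunctionCalls(response: GenerateContentResponse): FunctionCall[] {
  return (
    response.candidates?.[0]?.content?.parts
      ?.filter(part => part.functionCall)
      .map(part => part.functionCall!) ?? []
  );
}

The later sketches on this page use this helper to keep the examples short.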

Pattern 2: Multi-Step Conversations

Handle conversations where the agent needs to call integrations across multiple turns:

await metorial.withProviderSession(
  metorialGoogle,
  { serverDeployments: ['your-server-deployment-id'] },
  async session => {
    // Typed as Content[] (from @google/genai) so we can push functionCall
    // and functionResponse parts into the history later
    let conversation: Content[] = [{
      role: 'user',
      parts: [{ text: 'Find the README from the metorial repo on GitHub' }]
    }];

    let response = await genAI.models.generateContent({
      model: 'gemini-1.5-pro-latest',
      contents: conversation,
      config: { tools: session.tools }
    });

    let functionCalls = response.candidates?.[0]?.content?.parts
      ?.filter(part => part.functionCall)
      .map(part => part.functionCall!);

    if (functionCalls && functionCalls.length > 0) {
      let results = await session.callTools(functionCalls);
      
      // Add the function results to conversation history
      conversation.push({
        role: 'model',
        parts: functionCalls.map(fc => ({ functionCall: fc }))
      });
      
      conversation.push({
        role: 'function',
        parts: results.map((result, i) => ({
          functionResponse: {
            name: functionCalls[i].name,
            response: result
          }
        }))
      });

      // Continue the conversation
      conversation.push({
        role: 'user',
        parts: [{ text: 'Now summarize that README' }]
      });

      let finalResponse = await genAI.models.generateContent({
        model: 'gemini-1.5-pro-latest',
        contents: conversation,
        config: { tools: session.tools }
      });

      console.log(finalResponse.candidates?.[0]?.content?.parts?.[0]?.text);
    }
  }
);
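
Pattern 2 handles exactly one round of tool calls, but a model will sometimes need several rounds before it can answer. Below is a sketch of that loop, reusing the hypothetical extractFunctionCalls helper from Pattern 1 and assuming the same session and conversation setup as above:

// Keep resolving tool calls until the model replies with plain text
let response = await genAI.models.generateContent({
  model: 'gemini-1.5-pro-latest',
  contents: conversation,
  config: { tools: session.tools }
});

while (true) {
  let functionCalls = extractFunctionCalls(response);
  if (functionCalls.length === 0) break; // no more tools requested

  let results = await session.callTools(functionCalls);

  // Echo the model's calls and the tool results back into the history
  conversation.push({
    role: 'model',
    parts: functionCalls.map(fc => ({ functionCall: fc }))
  });
  conversation.push({
    role: 'function',
    parts: results.map((result, i) => ({
      functionResponse: { name: functionCalls[i].name, response: result }
    }))
  });

  response = await genAI.models.generateContent({
    model: 'gemini-1.5-pro-latest',
    contents: conversation,
    config: { tools: session.tools }
  });
}

console.log(response.candidates?.[0]?.content?.parts?.[0]?.text);

In production you would also cap the number of iterations so a confused model cannot loop indefinitely.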

Pattern 3: Batch Processing

Process multiple items using integrations:

const tasks = [
  'Summarize the latest issue in the metorial repo',
  'Find my next meeting',
  'Check unread emails'
];

await metorial.withProviderSession(
  metorialGoogle,
  { serverDeployments: ['your-server-deployment-id'] },
  async session => {
    for (const task of tasks) {
      let response = await genAI.models.generateContent({
        model: 'gemini-1.5-flash-latest', // Using Flash for speed
        contents: [{
          role: 'user',
          parts: [{ text: task }]
        }],
        config: { tools: session.tools }
      });

      let functionCalls = response.candidates?.[0]?.content?.parts
        ?.filter(part => part.functionCall)
        .map(part => part.functionCall!);

      if (functionCalls && functionCalls.length > 0) {
        let results = await session.callTools(functionCalls);
        console.log(`Results for "${task}":`, results);
      }
    }
  }
);
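
The loop above processes tasks one at a time, which is the safe default. If your Gemini and integration rate limits allow it, independent tasks can also run concurrently. A sketch using Promise.all and the hypothetical extractFunctionCalls helper:

await metorial.withProviderSession(
  metorialGoogle,
  { serverDeployments: ['your-server-deployment-id'] },
  async session => {
    // Fire all tasks at once; each entry resolves to its tool results (or null)
    let allResults = await Promise.all(
      tasks.map(async task => {
        let response = await genAI.models.generateContent({
          model: 'gemini-1.5-flash-latest',
          contents: [{ role: 'user', parts: [{ text: task }] }],
          config: { tools: session.tools }
        });

        let functionCalls = extractFunctionCalls(response);
        return functionCalls.length > 0 ? session.callTools(functionCalls) : null;
      })
    );

    allResults.forEach((results, i) =>
      console.log(`Results for "${tasks[i]}":`, results)
    );
  }
);

Watch for quota errors when fanning out like this; the retry pattern below pairs well with it.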

Pattern 4: Error Handling and Retries

Robust error handling for production applications:

await metorial.withProviderSession(
  metorialGoogle,
  { serverDeployments: ['your-server-deployment-id'] },
  async session => {
    const maxRetries = 3;
    let attempt = 0;

    while (attempt < maxRetries) {
      try {
        let response = await genAI.models.generateContent({
          model: 'gemini-1.5-pro-latest',
          contents: [{
            role: 'user',
            parts: [{ text: 'Get my calendar events' }]
          }],
          config: { tools: session.tools }
        });

        let functionCalls = response.candidates?.[0]?.content?.parts
          ?.filter(part => part.functionCall)
          .map(part => part.functionCall!);

        if (functionCalls && functionCalls.length > 0) {
          let results = await session.callTools(functionCalls);
          console.log('Success:', results);
        }

        break; // success: leave the retry loop even if no tools were called
      } catch (error) {
        attempt++;
        console.error(`Attempt ${attempt} failed:`, error);
        
        if (attempt === maxRetries) {
          console.error('Max retries reached');
          throw error;
        }
        
        // Wait before retrying
        await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
      }
    }
  }
);
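
When several call sites need the same behavior, the retry loop can be pulled into a generic helper. withRetry below is a hypothetical utility, not part of either SDK, implementing the same linear backoff as the pattern above; the usage example assumes the same session and extractFunctionCalls helper as earlier:

// Run fn, retrying up to maxRetries times with a linearly growing delay
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let attempt = 0;
  while (true) {
    try {
      return await fn();
    } catch (error) {
      attempt++;
      console.error(`Attempt ${attempt} failed:`, error);
      if (attempt >= maxRetries) throw error;
      await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
    }
  }
}

// Usage: retry the model call and tool execution as one unit
let results = await withRetry(async () => {
  let response = await genAI.models.generateContent({
    model: 'gemini-1.5-pro-latest',
    contents: [{ role: 'user', parts: [{ text: 'Get my calendar events' }] }],
    config: { tools: session.tools }
  });

  let functionCalls = extractFunctionCalls(response);
  return functionCalls.length > 0 ? session.callTools(functionCalls) : [];
});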

These patterns cover most use cases when building AI agents with Google Gemini and Metorial integrations.

Google Gemini on Metorial

Build exceptional AI applications with Google Gemini and Metorial's comprehensive integration platform. Connect Gemini's state-of-the-art multimodal AI models to over 600 integrations including Google Workspace, Slack, GitHub, Salesforce, and hundreds more through our MCP-powered SDKs. Metorial eliminates the complexity of building and maintaining integrations, allowing you to add powerful capabilities to your Gemini-based agents in just a couple of lines of Python or TypeScript code. Whether you're creating virtual assistants, data analysis tools, or intelligent workflow automation with Google's advanced AI, Metorial provides the integration infrastructure you need to ship faster. Our open-source platform handles authentication flows, API versioning, rate limiting, and error handling automatically, so you can focus on crafting intelligent behaviors and delightful user experiences. Stop reinventing the wheel for every integration—let Metorial manage the connections while you concentrate on building innovative AI solutions. With Metorial, your Gemini agents can seamlessly interact with the tools and platforms your users depend on daily.
