Automate workflows and connect AI agents to Firecrawl. Metorial is built for developers, handling OAuth, compliance, observability, and more.
The Firecrawl integration lets you scrape websites, extract structured data, and convert web pages into LLM-ready formats directly from your MCP-enabled applications.
Metorial has 600+ integrations available. Here are some related ones you might find interesting.
The Hackernews integration lets you search and retrieve stories, comments, and user data from Hackernews directly within your workflow, enabling you to analyze trends, monitor discussions, and gather insights from the tech community.
The Exa integration lets you search the web using neural search capabilities and retrieve high-quality, AI-ready content directly within your MCP-enabled applications.
The Google Calendar integration lets you view, create, and manage calendar events directly from your workflow, enabling seamless scheduling and time management without switching contexts.
The Google Drive integration lets you search, read, create, and manage files and folders in your Drive directly through AI interactions. Use it to organize documents, retrieve file contents, share files, and automate common Drive tasks without switching to your browser.
The Microsoft 365 integration lets you access and manage your emails, calendar events, documents, and collaborate across Word, Excel, PowerPoint, and Teams directly from your workspace. Use it to read and send messages, schedule meetings, edit files in OneDrive and SharePoint, and streamline your productivity workflows.
The Neon integration lets you connect to your Neon Postgres databases to query data, inspect schemas, and manage database operations directly from your AI assistant.
The Supabase integration lets you query and manage your database, authentication, and storage directly from your AI assistant, enabling natural language database operations and real-time data access.
The Linear integration lets you create, update, and search issues directly from your workspace, enabling seamless project management and task tracking without leaving your development environment.
The Tavily integration lets you perform AI-powered web searches and retrieve real-time information from across the internet directly within your MCP-enabled applications, enabling your AI assistants to access current data and factual content for more accurate and up-to-date responses.
Metorial helps you connect AI agents to Firecrawl with various tools and resources. Tools allow you to perform specific actions, while resources provide read-only access to data and information.
Scrape a single URL and return its content in various formats (markdown, HTML, links, screenshots, etc.). Supports advanced features like custom actions, JavaScript execution, and structured data extraction.
Start a crawl job to spider an entire website or domain. Supports path filtering, depth control, webhooks, and all scraping options. Returns a crawl job ID for tracking progress.
Check the status and progress of a crawl job. Returns current status, number of pages crawled, credits used, and scraped data.
Cancel an ongoing crawl job.
Access scraped content of a specific URL
Access a specific crawl job and its results
Access all pages from a crawl job
Access a specific page from a crawl job by index
Find guides and articles to help you get started with Firecrawl on Metorial.
The Firecrawl MCP Server provides powerful web scraping and crawling capabilities through the Model Context Protocol. It enables you to extract content from web pages in multiple formats, perform advanced browser automation tasks, and crawl entire websites with sophisticated filtering and control options. Whether you need to scrape a single page or systematically harvest data from an entire domain, this server offers the tools to get structured, clean data from the web.
Firecrawl is a comprehensive web scraping solution that goes beyond simple HTML retrieval. It handles JavaScript-heavy sites, performs browser automation, extracts structured data using AI, and provides multiple output formats including markdown, HTML, screenshots, and custom JSON schemas. The server supports both single-page scraping and large-scale website crawling with features like proxy rotation, ad-blocking, mobile emulation, and intelligent content extraction.
Scrape a single URL and return its content in various formats. This is your primary tool for extracting data from individual web pages.
Parameters:
Output formats:
- markdown: Clean markdown representation of the page
- html: Cleaned HTML content
- rawHtml: Original, unprocessed HTML
- links: All links found on the page
- screenshot: Visual capture of the page
- summary: AI-generated summary of the content
- json: Structured data extraction using a custom schema with an optional prompt

Browser actions (performed before scraping):
- wait: Pause for a specified duration or until a selector appears
- click: Click on elements matching a CSS selector
- write: Type text into input fields
- press: Press keyboard keys
- scroll: Scroll the page up or down
- screenshot: Capture a screenshot at this point
- executeJavascript: Run custom JavaScript code
- scrape: Extract content at this point
- pdf: Generate a PDF of the page

Proxy type: basic, stealth, or auto
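To make this concrete, here is a minimal Python sketch of calling this tool through an MCP client. It uses the official mcp Python SDK; the server URL is a placeholder for the endpoint Metorial provides, and the url and formats argument names are assumptions based on the options above, so check the tool schema exposed by your server for the exact shapes.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: substitute the MCP server URL from your Metorial
# dashboard (authentication details omitted here).
SERVER_URL = "https://example.metorial.dev/firecrawl/mcp"


async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Request clean markdown plus all outbound links for one page.
            # The "url" and "formats" argument names are assumptions; use
            # session.list_tools() to inspect the real input schema.
            result = await session.call_tool(
                "scrape_url",
                arguments={
                    "url": "https://example.com/pricing",
                    "formats": ["markdown", "links"],
                },
            )
            for block in result.content:
                if block.type == "text":
                    print(block.text)


asyncio.run(main())
```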
Start a crawl job to systematically spider an entire website or domain. This tool initiates a background job that discovers and scrapes multiple pages according to your specifications.
Parameters:
- Sitemap handling: include to use sitemap.xml, or skip to discover pages from links
- Webhook configuration:
  - url: Webhook endpoint URL
  - events: Events to subscribe to (started, page, completed, failed)
  - headers: Custom headers for webhook requests
  - metadata: Additional metadata to include in webhook payloads
- Scrape options: all scrape_url options apply to each crawled page
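As an illustration, a start_crawl argument payload might look like the following sketch. The nested field names (sitemap, webhook, scrape_options) are assumptions modeled on the options above; confirm them against the tool schema before relying on them.

```python
# Hypothetical argument payload for start_crawl; field names are assumptions
# modeled on the options described above.
crawl_args = {
    "url": "https://docs.example.com",
    "sitemap": "include",  # or "skip" to discover pages from links only
    "webhook": {
        "url": "https://hooks.example.com/firecrawl",
        "events": ["started", "page", "completed", "failed"],
        "headers": {"X-Auth": "secret"},
        "metadata": {"project": "docs-mirror"},
    },
    # Options applied to every crawled page, mirroring scrape_url.
    "scrape_options": {"formats": ["markdown"], "onlyMainContent": True},
}

# With an initialized ClientSession (see the earlier sketch):
# result = await session.call_tool("start_crawl", arguments=crawl_args)
# The response includes a crawl job ID for tracking progress.
```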
Check the status and retrieve results from an ongoing or completed crawl job.

Parameters: the crawl job ID returned by start_crawl.

Returns: Current status, progress metrics, credits used, and scraped data from all pages.
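A simple polling loop built on this tool might look like the sketch below. The tool name get_crawl_status, the crawl_id argument, and the JSON response shape are assumptions; only the behavior described above (status, progress, credits, scraped data) is taken from the documentation.

```python
import asyncio
import json


async def wait_for_crawl(session, crawl_id: str, interval: float = 10.0) -> dict:
    """Poll the status tool until the crawl finishes.

    `session` is an initialized MCP ClientSession (see the earlier sketch).
    The tool name "get_crawl_status", the "crawl_id" argument, and a JSON
    response with a top-level "status" field are assumptions.
    """
    while True:
        result = await session.call_tool(
            "get_crawl_status", arguments={"crawl_id": crawl_id}
        )
        text = "".join(b.text for b in result.content if b.type == "text")
        status = json.loads(text)
        print(f"status={status.get('status')}")
        if status.get("status") in ("completed", "failed", "cancelled"):
            return status
        await asyncio.sleep(interval)
```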
Stop a crawl job that is currently in progress.
Parameters: the ID of the crawl job to cancel.
The Firecrawl MCP Server provides resource templates for accessing scraped content and crawl job data through a URI-based interface.
Access the scraped content of a specific URL.
URI Template: firecrawl://scraped/{url}
Use this resource to retrieve previously scraped content for a given URL. The URL should be properly encoded.
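For example, a client could build the resource URI by percent-encoding the target URL, as in this sketch; the exact encoding the server expects is an assumption.

```python
from urllib.parse import quote

# Build a properly encoded resource URI for a previously scraped page.
# The firecrawl://scraped/{url} template comes from the section above;
# percent-encoding the entire target URL is an assumption.
target = "https://example.com/blog?page=2"
uri = f"firecrawl://scraped/{quote(target, safe='')}"
print(uri)  # firecrawl://scraped/https%3A%2F%2Fexample.com%2Fblog%3Fpage%3D2

# With an initialized ClientSession (see the earlier sketch):
# contents = await session.read_resource(uri)
```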
Access information about a specific crawl job including its status and metadata.
URI Template: firecrawl://crawl/{crawlId}
Retrieve comprehensive information about a crawl job, including its current state, configuration, and summary statistics.
Access all pages discovered and scraped during a crawl job.
URI Template: firecrawl://crawl/{crawlId}/pages
Get the complete collection of pages from a crawl job, including their content in the requested formats.
Access a specific page from a crawl job by its index position.
URI Template: firecrawl://crawl/{crawlId}/page/{pageIndex}
Retrieve an individual page from a crawl job's results using its zero-based index.
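Putting the crawl templates together, a client might construct URIs like the following sketch, assuming a hypothetical crawl ID.

```python
# Hypothetical crawl job ID returned by start_crawl.
crawl_id = "abc123"

# URIs follow the templates documented above; the page index is zero-based.
job_uri = f"firecrawl://crawl/{crawl_id}"
all_pages_uri = f"firecrawl://crawl/{crawl_id}/pages"
first_page_uri = f"firecrawl://crawl/{crawl_id}/page/0"

# With an initialized ClientSession (see the earlier sketch):
# page = await session.read_resource(first_page_uri)
```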
Extract web content in the format that best suits your needs. Convert web pages to clean markdown for LLM consumption, preserve HTML structure for parsing, capture visual screenshots, or extract structured data using custom JSON schemas with AI-powered extraction.
Perform complex interactions with web pages before scraping. Click buttons, fill forms, scroll to load dynamic content, wait for elements to appear, and execute custom JavaScript. These actions enable scraping of JavaScript-heavy applications and content behind interactions.
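A hypothetical action sequence for a JavaScript-heavy page might look like this sketch; the action type names come from the tool description above, while the per-action field names (selector, milliseconds, direction) are assumptions.

```python
# Wait for content to render, dismiss a banner, scroll to trigger lazy
# loading, then scrape. Field names inside each action are assumptions.
scrape_args = {
    "url": "https://app.example.com/dashboard",
    "formats": ["markdown", "screenshot"],
    "actions": [
        {"type": "wait", "selector": "#main-content"},
        {"type": "click", "selector": "button.dismiss-banner"},
        {"type": "scroll", "direction": "down"},
        {"type": "wait", "milliseconds": 1000},
        {"type": "scrape"},
    ],
}

# With an initialized ClientSession (see the earlier sketch):
# result = await session.call_tool("scrape_url", arguments=scrape_args)
```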
Use AI-powered extraction to get only the content you need. The onlyMainContent
option removes navigation, footers, and sidebars automatically. Custom JSON schemas with prompts allow you to extract specific structured data points using natural language instructions.
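A structured-extraction request might combine onlyMainContent with a json format entry carrying a schema and prompt, as in the sketch below; the exact field layout is an assumption based on the description above.

```python
# Hypothetical structured-extraction request: onlyMainContent strips
# navigation and footers, and the json format applies a custom schema plus
# a natural-language prompt. The schema/prompt field names are assumptions.
extract_args = {
    "url": "https://example.com/products/widget",
    "onlyMainContent": True,
    "formats": [
        {
            "type": "json",
            "prompt": "Extract the product name, price, and availability.",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "number"},
                    "in_stock": {"type": "boolean"},
                },
                "required": ["name", "price"],
            },
        }
    ],
}
```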
Systematically crawl entire websites with sophisticated control over what gets scraped. Use path filtering with regex patterns to include or exclude specific sections. Control crawl depth, handle subdomains, and manage concurrency for efficient data collection. Real-time webhook notifications keep you informed of progress.
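A crawl configuration using these controls might look like the following sketch; the field names (includePaths, excludePaths, maxDepth, allowSubdomains) are assumptions, so verify them against the start_crawl tool schema.

```python
# Hypothetical crawl controls: regex path filtering, depth limiting, and
# subdomain handling. All field names here are assumptions.
crawl_args = {
    "url": "https://docs.example.com",
    "includePaths": ["^/guides/.*"],      # only crawl the guides section
    "excludePaths": [".*/changelog/.*"],  # skip changelog pages
    "maxDepth": 3,                        # stop three links from the start URL
    "allowSubdomains": False,
    "scrape_options": {"formats": ["markdown"]},
}
```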
Choose your proxy type based on needs: basic for speed, stealth for reliability, or auto for automatic fallback. Enable zero data retention for sensitive operations. Use caching to avoid redundant requests. Block ads and cookie popups for cleaner extraction and faster processing.
Scrape as if you're browsing from different countries with location settings. Emulate mobile devices to see mobile-optimized content. Set custom headers to match specific browser configurations.
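Combined with the reliability options above, a geography- and device-aware scrape request might look like this sketch; the location, mobile, proxy, blockAds, and headers field names are assumptions.

```python
# Hypothetical scrape options for geography, device emulation, and
# reliability; field names are assumptions based on the descriptions above.
scrape_args = {
    "url": "https://example.com/store",
    "formats": ["markdown"],
    "location": {"country": "DE", "languages": ["de-DE"]},
    "mobile": True,            # emulate a mobile device
    "proxy": "auto",           # basic, stealth, or auto
    "blockAds": True,          # block ads and cookie popups
    "headers": {"Accept-Language": "de-DE"},
}
```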
This MCP server excels at research and data collection tasks. Use it to monitor competitor websites, aggregate news and articles, extract product information from e-commerce sites, collect real estate listings, gather job postings, archive web content, validate web page changes, or build datasets for machine learning. The combination of single-page scraping and site-wide crawling makes it suitable for both targeted extraction and comprehensive data harvesting operations.
The structured data extraction with custom schemas is particularly powerful for transforming unstructured web content into clean, typed data that can be directly used in applications or analysis pipelines. The browser automation capabilities enable scraping of modern single-page applications that traditional scrapers cannot handle.
Let's take your AI-powered applications to the next level, together.
Metorial provides developers with instant access to 600+ MCP servers for building AI agents that can interact with real-world tools and services. Built on MCP, Metorial simplifies agent tool integration by offering pre-configured connections to popular platforms like Google Drive, Slack, GitHub, Notion, and hundreds of other APIs. Our platform supports all major AI agent frameworks—including LangChain, AutoGen, CrewAI, and LangGraph—enabling developers to add tool calling capabilities to their agents in just a few lines of code. By eliminating the need for custom integration code, Metorial helps AI developers move from prototype to production faster while maintaining security and reliability. Whether you're building autonomous research agents, customer service bots, or workflow automation tools, Metorial's MCP server library provides the integrations you need to connect your agents to the real world.