# Assaf Elovic/gptr-mcp
## Server Summary

- Explore various information sources
- Validate information for trustworthiness
- Optimize context usage
- Gather insights for research projects
- Save time in information retrieval
While LLM apps can access web search tools with MCP, GPT Researcher MCP delivers deep research results. Standard search tools return raw results requiring manual filtering, often containing irrelevant sources and wasting context window space.
GPT Researcher autonomously explores and validates numerous sources, focusing only on relevant, trusted, and up-to-date information. Though slightly slower than standard search (around a 30-second wait), it delivers far more reliable, context-efficient results.
Demo video: https://github.com/user-attachments/assets/ef97eea5-a409-42b9-8f6d-b82ab16c52a8
Want to use this with Claude Desktop right away? Here's the fastest path:
Install dependencies:

```bash
git clone https://github.com/assafelovic/gptr-mcp.git
cd gptr-mcp
pip install -r requirements.txt
```
Set up your Claude Desktop config at `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "gptr-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/gpt-researcher/gptr-mcp/server.py"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key-here",
        "TAVILY_API_KEY": "your-tavily-key-here"
      }
    }
  }
}
```
Restart Claude Desktop and start researching!
For detailed setup instructions, see the full Claude Desktop Integration section below.
The server exposes the following tools:

- `research_resource`: Get web resources related to a given task via research.
- `deep_research`: Performs deep web research on a topic, finding the most reliable and relevant information.
- `quick_search`: Performs a fast web search optimized for speed over quality, returning search results with snippets. Supports any GPTR-supported web retriever, such as Tavily, Bing, Google, etc. Learn more here.
- `write_report`: Generate a report based on research results.
- `get_research_sources`: Get the sources used in the research.
- `get_research_context`: Get the full context of the research.
- `research_query`: Create a research query prompt.

Before running the MCP server, make sure you have:

- Python 3.11 or higher installed (see the Docker note below)
- API keys for the services you plan to use (OpenAI and Tavily by default)
You can also connect any other web search engine or MCP using GPTR-supported retrievers. Check out the docs here.
```bash
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
cd gptr-mcp
pip install -r requirements.txt
```
Copy the `.env.example` file to create a new file named `.env`:

```bash
cp .env.example .env
```
Edit the `.env` file to add your API keys and configure other settings:

```bash
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
```
You can also add any other environment variables for your GPT Researcher configuration.
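As a quick sanity check that the keys are visible before you start the server, here is a small sketch (not part of this repo). It assumes `python-dotenv` is installed, which gpt-researcher already depends on:

```python
# Sketch: verify required keys are present in .env before starting the server.
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the current directory
for key in ("OPENAI_API_KEY", "TAVILY_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")
```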
You can run the MCP server in several ways. Directly with Python:

```bash
python server.py
```

Or via the MCP CLI:

```bash
mcp run server.py
```
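If you want to exercise the tools programmatically rather than through Claude, a minimal sketch using the official MCP Python SDK (the `mcp` package) might look like the following. This client is not part of this repo, and the path and keys are placeholders you must adapt:

```python
# Sketch (not part of this repo): drive the server's tools over STDIO with the
# official MCP Python SDK. The path and API keys below are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(
        command="python",
        args=["/absolute/path/to/gptr-mcp/server.py"],
        env={"OPENAI_API_KEY": "...", "TAVILY_API_KEY": "..."},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "quick_search", {"query": "NVIDIA Blackwell"}
            )
            print(result.content)

asyncio.run(main())
```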
The simplest way to run with Docker:

```bash
# Build and run with docker-compose
docker-compose up -d

# Or manually:
docker build -t gptr-mcp .
docker run -d \
  --name gptr-mcp \
  -p 8000:8000 \
  --env-file .env \
  gptr-mcp
```
If you need to connect to an existing n8n network:

```bash
# First, start the container
docker-compose up -d

# Then connect to your n8n network
docker network connect n8n-mcp-net gptr-mcp

# Or create a shared network first
docker network create n8n-mcp-net
docker network connect n8n-mcp-net gptr-mcp
```
Note: The Docker image uses Python 3.11 to meet the requirements of gpt-researcher >=0.12.16. If you encounter errors during the build, ensure you're using the latest Dockerfile from this repository.
Once the server is running, you'll see output indicating that it's ready to accept connections. You can verify it's working by running:

```bash
python test_mcp_server.py
```
Important for Docker/n8n integration:

- The server binds to `0.0.0.0:8000` to work with Docker containers.
- SSE clients must connect to the `/sse` endpoint first.

The GPT Researcher MCP server supports multiple transport protocols and automatically chooses the best one for your environment:
| Transport | Use Case | When to Use |
|---|---|---|
| STDIO | Claude Desktop, local MCP clients | Default for local development |
| SSE | Docker, web clients, n8n integration | Auto-enabled in Docker |
| Streamable HTTP | Modern web deployments | Advanced web deployments |
The server automatically detects your environment:

```bash
# Local development (default)
python server.py
# → Uses STDIO transport (Claude Desktop compatible)

# Docker environment
docker run gptr-mcp
# → Auto-detects Docker, uses SSE transport

# Manual override
export MCP_TRANSPORT=sse
python server.py
# → Forces SSE transport
```
| Variable | Description | Default | Example |
|---|---|---|---|
| `MCP_TRANSPORT` | Force a specific transport | `stdio` | `sse`, `streamable-http` |
| `DOCKER_CONTAINER` | Force Docker mode | Auto-detected | `true` |
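The authoritative detection logic lives in `server.py`; the following is only a rough illustration of the pattern described above. Treat the `/.dockerenv` check as an assumption, not the server's exact behavior:

```python
# Sketch of the transport-selection pattern described above; server.py holds
# the authoritative logic, and the /.dockerenv check is an assumption.
import os

def pick_transport() -> str:
    explicit = os.getenv("MCP_TRANSPORT")
    if explicit:
        # Manual override always wins (e.g. MCP_TRANSPORT=sse)
        return explicit
    in_docker = (
        os.getenv("DOCKER_CONTAINER") == "true"
        or os.path.exists("/.dockerenv")  # marker file Docker creates
    )
    return "sse" if in_docker else "stdio"

print(pick_transport())
```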
```json
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "gpt-researcher": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "..."
      }
    }
  }
}
```
```bash
# Set transport explicitly for web deployment
export MCP_TRANSPORT=sse
python server.py

# Or use Docker (auto-detects)
docker-compose up -d
```

```bash
# Use the container name as hostname
docker run --name gptr-mcp -p 8000:8000 gptr-mcp
# In n8n, connect to: http://gptr-mcp:8000/sse
```
When using SSE or HTTP transports, the server exposes these endpoints:

- `GET /health`: health check
- `GET /sse`: open the SSE stream (returns the session ID)
- `POST /messages/?session_id=YOUR_SESSION_ID`: send MCP messages
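As a quick way to probe these endpoints from Python, here is a hedged sketch using the third-party `requests` library; the host and port assume the Docker defaults above:

```python
# Sketch: probe the SSE transport's endpoints (pip install requests).
import requests

BASE = "http://localhost:8000"  # default host/port from the Docker run above

# Health check: expect HTTP 200
print(requests.get(f"{BASE}/health", timeout=5).status_code)

# The /sse stream announces the messages endpoint (with session ID) as an
# SSE "data:" line, the same line the curl examples further down look for.
with requests.get(f"{BASE}/sse", stream=True, timeout=10) as resp:
    for raw in resp.iter_lines():
        line = raw.decode()
        if line.startswith("data:") and "session_id=" in line:
            print("messages endpoint:", line.removeprefix("data:").strip())
            break
```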
You can integrate your MCP server with Claude using:

- Claude Desktop Integration: for use with the Claude desktop application on Mac

For detailed instructions, follow the link above.
To integrate your locally running MCP server with Claude for Mac, you'll need to edit your config at `~/Library/Application Support/Claude/claude_desktop_config.json`.

Claude Desktop launches your MCP server as a separate subprocess, so you must explicitly pass your API keys in the configuration. The server cannot access your shell's environment variables or `.env` file automatically.
```json
{
  "mcpServers": {
    "gptr-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/your/server.py"],
      "env": {
        "OPENAI_API_KEY": "your-actual-openai-key-here",
        "TAVILY_API_KEY": "your-actual-tavily-key-here"
      }
    }
  }
}
```
Your Claude Desktop config contains sensitive API keys. Protect it:

```bash
chmod 600 ~/Library/Application\ Support/Claude/claude_desktop_config.json
```

Never commit this file to version control.
For better security, create a wrapper script:

`run_gptr_mcp.sh`:

```bash
#!/bin/bash
# Export everything defined in .env so the Python subprocess can see it;
# a plain `source` alone would not export the variables to child processes.
set -a
source /path/to/your/.env
set +a
python /absolute/path/to/server.py
```
Make the script executable (`chmod +x run_gptr_mcp.sh`), then use it in Claude Desktop:
```json
{
  "mcpServers": {
    "gptr-mcp": {
      "command": "/absolute/path/to/run_gptr_mcp.sh"
    }
  }
}
```
For complete step-by-step instructions, see the Claude Desktop Integration guide.
User: I'm considering investing in NVIDIA. Can you research the current state of the company, recent stock performance, latest product announcements, and analyst opinions?
Claude: I'll help you research NVIDIA for investment purposes. This will take a moment as I conduct deep research to find the most up-to-date information about their stock performance, recent developments, and expert analysis.
[Claude uses deep_research tool - this takes 30-40 seconds]
Claude: Based on my research, here's a comprehensive analysis of NVIDIA's current state:
## NVIDIA (NVDA) Current Status
### Recent Stock Performance
- Current price: $942.89 (as of my research)
- YTD performance: +90.4%
- 52-week range: $460.30 - $974.00
- Market cap: ~$2.32 trillion
### Recent Key Developments
1. **Blackwell AI Platform**: At GTC 2024, NVIDIA unveiled the Blackwell architecture, their next-generation AI chip platform promising 4x performance improvement over Hopper. Major cloud providers (AWS, Google, Microsoft) have already committed to integrating Blackwell.
2. **Q1 FY2025 Earnings (May 2024)**:
- Revenue: $26.0 billion (+262% YoY)
- Data Center revenue: $22.6 billion (+427% YoY)
- Gross margin: 78.4%
- Announced 10-for-1 stock split effective June 7, 2024
3. **Supply Chain Expansion**: Increased investments in manufacturing capacity with TSMC to address ongoing chip shortages for their high-demand AI GPUs.
### Analyst Consensus
- Strong Buy consensus with 37 analysts rating it "Buy," 5 "Hold," and 0 "Sell"
- Average price target: $1,042 (post-split adjusted: $104.20)
- Key bull thesis: Continued AI infrastructure spending and NVIDIA's dominant position in AI chips
- Key bear concerns: Valuation, potential competition from AMD, Intel, and custom silicon from tech giants
### Industry Position
- Dominant in AI GPU market with estimated 80-90% market share
- Expanding ecosystem through CUDA software platform and industry partnerships
- Diversifying beyond chips with DGX Cloud, AI software platforms, and automotive solutions
If you encounter issues while running the MCP server:

- Check that your API keys are set correctly in the `.env` file
- Make sure all dependencies are installed: `pip install -r requirements.txt`
**Container not accessible:**

- Check the container is running: `docker ps | grep gptr-mcp`
- Inspect the logs: `docker logs gptr-mcp`

**n8n integration issues:**

- Use `gptr-mcp` as the hostname in n8n: `http://gptr-mcp:8000/sse`

**Session ID issues:**

- Get a session ID from the `/sse` endpoint first
- Then send messages to `/messages/?session_id=YOUR_ID`
Get a session ID:

```bash
curl http://gptr-mcp:8000/sse
# Look for: data: /messages/?session_id=XXXXX
```

Initialize MCP:

```bash
curl -X POST "http://gptr-mcp:8000/messages/?session_id=YOUR_SESSION_ID" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {"roots": {"listChanged": true}}, "clientInfo": {"name": "n8n-client", "version": "1.0.0"}}}'
```
Call tools:

```bash
curl -X POST "http://gptr-mcp:8000/messages/?session_id=YOUR_SESSION_ID" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "quick_search", "arguments": {"query": "test"}}}'
```
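The same two calls in Python, as a sketch assuming `requests` is installed and you have already extracted the session ID from the `/sse` stream. Note that with the SSE transport, tool results are delivered back on the open `/sse` stream, so these POSTs typically return only an acknowledgement status:

```python
# Sketch: the same two curl calls above, in Python (pip install requests).
# SESSION_ID must first be read from the /sse stream; tool results come back
# on that open stream rather than in these POST responses.
import requests

URL = "http://gptr-mcp:8000/messages/"
PARAMS = {"session_id": "YOUR_SESSION_ID"}

init = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"roots": {"listChanged": True}},
        "clientInfo": {"name": "n8n-client", "version": "1.0.0"},
    },
}
print(requests.post(URL, params=PARAMS, json=init, timeout=30).status_code)

call = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "quick_search", "arguments": {"query": "test"}},
}
print(requests.post(URL, params=PARAMS, json=call, timeout=120).status_code)
```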
Run the included test script to verify functionality:

```bash
python test_mcp_server.py
```

This will test the server's core functionality end to end.
If your MCP server isn't working with Claude Desktop:

**Server not appearing in Claude:**

- Verify your `claude_desktop_config.json` syntax is valid JSON
- Check that the absolute path to `server.py` is correct

**"OPENAI_API_KEY not found" error:**

- Add the `env` section to your config with `OPENAI_API_KEY` and `TAVILY_API_KEY`
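A quick way to machine-check both items above (valid JSON syntax and a populated `env` section) is the following sketch; the macOS path is assumed, so adjust it on Windows:

```python
# Sketch: machine-check the config for the two issues above. macOS path
# assumed; adjust for Windows (%APPDATA%\Claude\claude_desktop_config.json).
import json
import pathlib

path = pathlib.Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
config = json.loads(path.read_text())  # raises ValueError on invalid JSON

for name, server in config.get("mcpServers", {}).items():
    env = server.get("env", {})
    missing = [k for k in ("OPENAI_API_KEY", "TAVILY_API_KEY") if not env.get(k)]
    print(name, "missing:" if missing else "OK", *missing)
```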
**Tools not showing up:**

- Make sure you edited the correct config file:
  - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - Windows: `%APPDATA%\Claude\claude_desktop_config.json`

**Python/permission issues:**

- Check your Python version: `python --version`
- Try `"command": "/usr/bin/python3"` or `"command": "python3"`
**Still not working?**

- Test the server manually: `python server.py` (it should show a STDIO transport message)

This project is licensed under the MIT License - see the LICENSE file for details.
Email: [email protected]
⬆️ Back to Top