David Montgomery/deepseek-mcp-server
Server Summary
- Integrate with MCP-compatible applications
- Access DeepSeek's language models
- Use the DeepSeek API anonymously
A Model Context Protocol (MCP) server for the DeepSeek API, allowing seamless integration of DeepSeek's powerful language models with MCP-compatible applications like Claude Desktop.
To install DeepSeek MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @dmontgomery40/deepseek-mcp-server --client claude
npm install -g deepseek-mcp-server
Add this to your claude_desktop_config.json:
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-mcp-server"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
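As a quick sanity check, the snippet below (an illustrative sketch, not part of the server) parses a config like the one above and confirms the deepseek entry is wired to launch the server via npx:

```python
import json

# Illustrative: parse the config entry above and verify its shape.
config = json.loads("""
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": ["-y", "deepseek-mcp-server"],
      "env": {"DEEPSEEK_API_KEY": "your-api-key"}
    }
  }
}
""")

server = config["mcpServers"]["deepseek"]
assert server["command"] == "npx"
assert "deepseek-mcp-server" in server["args"]
```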
Note: The server handles natural-language configuration requests (such as asking for a different temperature or model) by mapping them to the appropriate configuration changes. You can also query the current settings and available models:
If a request with R1 (called deepseek-reasoner in the server) fails, the server will automatically retry with v3 (called deepseek-chat in the server). Note: You can also switch back and forth at any time by adding "use deepseek-reasoner" or "use deepseek-chat" to your prompt.
Multi-turn conversation support:
This feature is particularly valuable for two key use cases:
Training & Fine-tuning: Since DeepSeek is open source, many users are training their own versions. The multi-turn support provides properly formatted conversation data that's essential for training high-quality dialogue models.
Complex Interactions: For production use, this helps manage longer conversations where context is crucial:
The implementation handles all context management and message formatting behind the scenes, letting you focus on the actual interaction rather than the technical details of maintaining conversation state.
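DeepSeek's chat endpoint uses the familiar OpenAI-style message format, so the conversation state the server maintains is an ordered list of role/content entries. A minimal sketch of that format (the helper name append_turn is illustrative, not part of the server's API):

```python
# Illustrative sketch of the multi-turn message format the server
# maintains behind the scenes (OpenAI-compatible role/content entries).
def append_turn(history, role, content):
    """Append one turn to the running conversation history."""
    history.append({"role": role, "content": content})
    return history

history = [{"role": "system", "content": "You are a helpful assistant."}]
append_turn(history, "user", "What is MCP?")
append_turn(history, "assistant", "MCP is the Model Context Protocol.")
append_turn(history, "user", "How do servers expose tools?")

# Each follow-up request resends the full history, so the model keeps context.
roles = [m["role"] for m in history]
```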
The DeepSeek MCP Server now supports the FIM (Fill-in-the-Middle) completion endpoint, which is especially useful for code completion, refactoring, and filling in missing code blocks.
Limitations:
You can invoke the FIM tool via MCP clients that support tools, or programmatically. The tool is named fim_completion and accepts the following parameters:
prefix (string, required): The text before the missing section.
suffix (string, required): The text after the missing section.
model (string, optional, default: deepseek-fim): The FIM model to use.
temperature (number, optional, default: 0.7): Sampling temperature.
max_tokens (number, optional, default: 1024): Maximum tokens to generate for the middle section.
top_p (number, optional, default: 1.0): Top-p sampling parameter.
frequency_penalty (number, optional, default: 0.1): Frequency penalty.
presence_penalty (number, optional, default: 0): Presence penalty.
stream (boolean, optional, default: false): Whether to stream the response.
{
  "tool": "fim_completion",
  "params": {
    "prefix": "def add(a, b):\n return a + b\n\ndef multiply(a, b):\n ",
    "suffix": "\n return result\n",
    "temperature": 0.2,
    "max_tokens": 64
  }
}
The response will contain the generated code or text that fits between the prefix and suffix.
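Conceptually, the returned middle section is spliced between your prefix and suffix. This small sketch (with a hypothetical model output standing in for the real response) shows the reassembly:

```python
# Illustrative: how a FIM completion is spliced between prefix and suffix.
prefix = "def multiply(a, b):\n    "
middle = "result = a * b"  # hypothetical model output for the gap
suffix = "\n    return result\n"

full = prefix + middle + suffix

# The reassembled source is valid Python and defines a working function.
scope = {}
exec(full, scope)
```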
When to use FIM:
You can test the server locally using the MCP Inspector tool:
Build the server:
npm run build
Run the server with MCP Inspector:
# Make sure to specify the full path to the built server
npx @modelcontextprotocol/inspector node ./build/index.js
The inspector will open in your browser and connect to the server via stdio transport, where you can list the server's tools and test requests interactively.
Note: The server uses DeepSeek's R1 model (deepseek-reasoner) by default, which provides state-of-the-art performance for reasoning and general tasks.
License: MIT