Chirag Patankar/AI-Customer-Support-Bot--MCP-Server
Server Summary
- Real-time context fetching
- AI response generation
- Batch processing support
- Priority queuing
- Rate limiting
- User interaction tracking
- Health monitoring
- MCP protocol compliance
A modern, extensible MCP server framework for building AI-powered customer support systems
Features • Quick Start • API Reference • Architecture • Contributing
A Model Context Protocol (MCP) compliant server framework built with modern Python. Designed for developers who want to create intelligent customer support systems without vendor lock-in. Clean architecture, battle-tested patterns, and ready for any AI provider.
```mermaid
graph TB
    Client[HTTP Client] --> API[API Server]
    API --> MW[Middleware Layer]
    MW --> SVC[Service Layer]
    SVC --> CTX[Context Manager]
    SVC --> AI[AI Integration]
    SVC --> DAL[Data Access Layer]
    DAL --> DB[(PostgreSQL)]
```
- **Clean Architecture**: Layered design with clear separation of concerns
- **MCP Compliant**: Full Model Context Protocol implementation
- **Production Ready**: Auth, rate limiting, monitoring included
- **High Performance**: Built on FastAPI with async support
- **AI Agnostic**: Integrate any AI provider easily
- **Health Monitoring**: Comprehensive metrics and diagnostics
- **Secure by Default**: Token auth and input validation
- **Batch Processing**: Handle multiple queries efficiently
```bash
# Clone and set up
git clone https://github.com/ChiragPatankar/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up environment
cp .env.example .env
# Edit .env with your configuration
```
```env
# .env file
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
SECRET_KEY=your-super-secret-key
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
```
```bash
# Set up database
createdb customer_support_bot

# Start server
python app.py
# Server running at http://localhost:8000
```
Core Endpoints
GET /mcp/health
```http
POST /mcp/process
Content-Type: application/json
X-MCP-Auth: your-token
X-MCP-Version: 1.0

{
  "query": "How do I reset my password?",
  "priority": "high"
}
```
```http
POST /mcp/batch
Content-Type: application/json
X-MCP-Auth: your-token

{
  "queries": [
    "How do I reset my password?",
    "What are your business hours?"
  ]
}
```
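Both endpoints accept any HTTP client. A minimal helper sketch that assembles the URL, headers, and JSON body shown above (the `build_mcp_request` function and `BASE_URL` constant are illustrative, not part of the repository):

```python
import json

BASE_URL = "http://localhost:8000"  # from the quick start; adjust for your deployment

def build_mcp_request(token: str, query=None, queries=None, priority=None):
    """Assemble (url, headers, body) for /mcp/process or /mcp/batch."""
    if queries is not None:
        path, payload = "/mcp/batch", {"queries": list(queries)}
    else:
        path, payload = "/mcp/process", {"query": query}
        if priority:
            payload["priority"] = priority
    headers = {
        "Content-Type": "application/json",
        "X-MCP-Auth": token,
        "X-MCP-Version": "1.0",
    }
    return BASE_URL + path, headers, json.dumps(payload)

url, headers, body = build_mcp_request(
    "your-token", query="How do I reset my password?", priority="high")
# POST with any client, e.g.: requests.post(url, headers=headers, data=body)
```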
Response Format
```json
{
  "status": "success",
  "data": {
    "response": "Generated AI response",
    "confidence": 0.95,
    "processing_time": "120ms"
  },
  "meta": {
    "request_id": "req_123456",
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
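A client can unpack the success envelope in a few lines; a sketch assuming exactly the field names shown (the helper name is illustrative):

```python
def unpack_mcp_response(payload: dict) -> tuple:
    """Return (response_text, confidence) from a success envelope, or raise."""
    if payload.get("status") != "success":
        raise RuntimeError(f"MCP request failed: {payload}")
    data = payload["data"]
    return data["response"], data["confidence"]

text, confidence = unpack_mcp_response({
    "status": "success",
    "data": {"response": "Generated AI response", "confidence": 0.95,
             "processing_time": "120ms"},
    "meta": {"request_id": "req_123456", "timestamp": "2024-02-14T12:00:00Z"},
})
```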
```json
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {
    "retry_after": 60,
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
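Clients should honor `retry_after` before re-issuing a rate-limited request; a small backoff helper sketch (the function name is an assumption, only the error fields come from the format above):

```python
def retry_delay(error: dict, default: float = 60.0) -> float:
    """Seconds to wait before retrying, honoring the error's retry_after hint."""
    if error.get("code") == "RATE_LIMIT_EXCEEDED":
        return float(error.get("details", {}).get("retry_after", default))
    return 0.0  # other errors: no automatic retry pause

delay = retry_delay({"code": "RATE_LIMIT_EXCEEDED",
                     "details": {"retry_after": 60}})
# time.sleep(delay) before re-issuing the request
```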
```text
AI-Customer-Support-Bot--MCP-Server
├── app.py             # FastAPI application
├── database.py        # Database configuration
├── middleware.py      # Auth & rate limiting
├── models.py          # ORM models
├── mcp_config.py      # MCP protocol config
├── requirements.txt   # Dependencies
└── .env.example       # Environment template
```
| Layer | Purpose | Components |
|---|---|---|
| API | HTTP endpoints, validation | FastAPI routes, Pydantic models |
| Middleware | Auth, rate limiting, logging | Token validation, request throttling |
| Service | Business logic, AI integration | Context management, AI orchestration |
| Data | Persistence, models | PostgreSQL, SQLAlchemy ORM |
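The middleware layer's request throttling (configured by `RATE_LIMIT_REQUESTS` and `RATE_LIMIT_PERIOD` in `.env`) can be pictured as a fixed-window counter per client. A simplified stand-alone sketch, not the repository's actual implementation:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per client in each `period`-second window."""

    def __init__(self, limit: int = 100, period: float = 60.0):
        self.limit, self.period = limit, period
        self.windows = defaultdict(lambda: (0.0, 0))  # client -> (window_start, count)

    def allow(self, client: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.windows[client]
        if now - start >= self.period:
            start, count = now, 0          # window expired: start a fresh one
        if count >= self.limit:
            return False                   # budget for this window exhausted
        self.windows[client] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(limit=2, period=60.0)
```

A real deployment would typically back this with Redis or similar so the count survives restarts and is shared across workers.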
```bash
pip install openai  # or anthropic, cohere, etc.
```

```env
# Add to .env
AI_SERVICE_API_KEY=sk-your-api-key
AI_SERVICE_MODEL=gpt-4
```
```python
# In the service layer
class AIService:
    async def generate_response(self, query: str, context: dict) -> str:
        # Your AI integration here: call the provider and return its answer
        raise NotImplementedError
```
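One way to keep the service AI-agnostic is to inject the provider as an async callable; a sketch under that assumption (the `provider` injection and `echo_provider` stand-in are illustrative, not the repository's API):

```python
import asyncio

class AIService:
    """Delegates generation to any injected async provider callable."""

    def __init__(self, provider, model: str = "gpt-4"):
        self.provider = provider  # async callable: (model, messages) -> str
        self.model = model

    async def generate_response(self, query: str, context: dict) -> str:
        # Fold the retrieved support context into a system prompt.
        messages = [
            {"role": "system", "content": f"Support context: {context}"},
            {"role": "user", "content": query},
        ]
        return await self.provider(self.model, messages)

# Stand-in provider; swap in an OpenAI/Anthropic/Cohere call here.
async def echo_provider(model, messages):
    return f"[{model}] answered: {messages[-1]['content']}"

reply = asyncio.run(AIService(echo_provider).generate_response(
    "How do I reset my password?", {"user_id": 42}))
```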
```bash
# Run tests
pytest tests/

# Format code
black .

# Lint
flake8

# Type checking
mypy .
```
# Coming soon - Docker containerization
Structured logging is included:

```json
{
  "timestamp": "2024-02-14T12:00:00Z",
  "level": "INFO",
  "message": "Query processed",
  "request_id": "req_123456",
  "processing_time": 120
}
```
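Log lines of this shape can be produced with the standard library alone; a sketch where the field names come from the sample above but the formatter class and handler wiring are assumptions, not the repository's code:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, matching the sample fields."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
            "processing_time": getattr(record, "processing_time", None),
        })

logger = logging.getLogger("mcp")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Extra fields ride along via `extra=` and land in the JSON output.
logger.info("Query processed",
            extra={"request_id": "req_123456", "processing_time": 120})
```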
```env
# Production environment variables
DATABASE_URL=postgresql://prod-user:password@prod-host/db
RATE_LIMIT_REQUESTS=1000
LOG_LEVEL=WARNING
```
We love contributions! Here's how to get started:
```bash
# Fork the repo, then:
git clone https://github.com/your-username/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create feature branch
git checkout -b feature/amazing-feature

# Make your changes
# ...

# Test your changes
pytest

# Submit PR
```
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by Chirag Patankar
⭐ Star this repo if you find it helpful! ⭐
Report Bug • Request Feature • Documentation