dion-hagan/mcp-server-spinnaker
Built by Metorial, the integration platform for agentic AI.
Server Summary
Intelligent deployment decisions
Proactive issue detection
CI/CD workflow optimization
Automated insights and actions
Spinnaker application management
Deployment pipeline management
This package provides a Model Context Protocol (MCP) server implementation for Spinnaker integrations. It allows AI models to interact with Spinnaker deployments, pipelines, and applications through the standardized MCP interface.
This MCP server is a powerful example of how Anthropic's Claude model can directly integrate with and enhance software deployment processes using the Model Context Protocol. By following MCP standards, Claude can access rich contextual information about Spinnaker applications, pipelines, and deployments, and actively manage them through well-defined tools.
Let's dive into some of the exciting possibilities this integration enables for AI-driven CI/CD:
Intelligent Deployment Decisions: With access to comprehensive context about the state of applications and pipelines, AI models like Claude can analyze this information to make intelligent decisions about when and how to deploy. For example, Claude could look at factors like test coverage, code churn, and historical success rates to determine the optimal time and target environment for a deployment.
Proactive Issue Detection and Autonomous Remediation: AI models can continuously monitor the CI/CD process, spotting potential issues before they cause problems. Imagine Claude detecting that a new version of a dependency has a known vulnerability and automatically creating a pull request to update it, or noticing that a deployment is taking longer than usual and proactively spinning up additional resources to prevent a timeout.
Continuous Process Optimization: With each deployment, AI models can learn and adapt, continuously optimizing the CI/CD process. Claude could analyze build and deployment logs to identify bottlenecks, then experiment with different configurations to improve speed and reliability. Over time, the entire deployment process becomes more efficient and robust.
Automated Root Cause Analysis and Recovery: When issues do occur, AI can rapidly diagnose the problem and even attempt to fix it autonomously. Claude could correlate errors across different parts of the system, identify the most likely root cause, and then take corrective actions like rolling back to a previous version or applying a known patch.
And these are just a few examples! As the Model Context Protocol evolves and more integrations are built, we can expect AI to take on increasingly sophisticated roles in the DevOps world. Across the entire CI/CD pipeline, AI could provide intelligent insights and recommendations, acting as a virtual assistant for product engineers.
By empowering AI to work alongside humans in the CI/CD process, MCP integrations like this Spinnaker server showcase how AI can become a proactive, intelligent partner in Developer Productivity infrastructure. It's a significant step towards more efficient, reliable, and autonomous software delivery.
npm install @airjesus17/mcp-server-spinnaker
or
yarn add @airjesus17/mcp-server-spinnaker
import { SpinnakerMCPServer } from '@airjesus17/mcp-server-spinnaker';

// Initialize the server
const server = new SpinnakerMCPServer(
  'https://your-gate-url',
  ['app1', 'app2'],   // List of applications to monitor
  ['prod', 'staging'] // List of environments to monitor
);

// Start the server
const port = 3000;
server.listen(port, () => {
  console.log(`Spinnaker MCP Server is running on port ${port}`);
});
The server provides the following tools for AI models to interact with Spinnaker:
Retrieves a list of monitored Spinnaker applications and their current state.
// Example response
{
  "success": true,
  "data": [
    {
      "name": "myapp",
      "description": "My application",
      "pipelines": [
        {
          "id": "pipeline-1",
          "name": "Deploy to Production",
          "status": "SUCCEEDED"
        }
      ]
    }
  ]
}
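An AI client consuming this response might model it with a couple of local interfaces and scan for pipelines that need attention. The interfaces below mirror the example JSON shown here; they are assumptions for illustration, not types exported by the package:

```typescript
// Local shapes mirroring the example response above (assumed, not exported by the package).
interface PipelineSummary {
  id: string;
  name: string;
  status: string; // e.g. "SUCCEEDED", "RUNNING", "TERMINAL"
}

interface ApplicationSummary {
  name: string;
  description: string;
  pipelines: PipelineSummary[];
}

interface ListApplicationsResponse {
  success: boolean;
  data: ApplicationSummary[];
}

// Collect every pipeline that is not in a SUCCEEDED state, keyed by application name.
function unhealthyPipelines(res: ListApplicationsResponse): Record<string, string[]> {
  const out: Record<string, string[]> = {};
  for (const app of res.data) {
    const bad = app.pipelines
      .filter((p) => p.status !== 'SUCCEEDED')
      .map((p) => p.name);
    if (bad.length > 0) out[app.name] = bad;
  }
  return out;
}

const example: ListApplicationsResponse = {
  success: true,
  data: [
    {
      name: 'myapp',
      description: 'My application',
      pipelines: [{ id: 'pipeline-1', name: 'Deploy to Production', status: 'SUCCEEDED' }],
    },
  ],
};

console.log(unhealthyPipelines(example)); // {} — every pipeline succeeded
```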
Retrieves all pipelines for a specific application.
// Parameters
{
  "application": "myapp"
}

// Example response
{
  "success": true,
  "data": [
    {
      "id": "pipeline-1",
      "name": "Deploy to Production",
      "status": "SUCCEEDED",
      "stages": [...]
    }
  ]
}
Triggers a pipeline execution for a specific application.
// Parameters
{
  "application": "myapp",
  "pipelineId": "pipeline-1",
  "parameters": {
    "version": "1.2.3",
    "environment": "production"
  }
}

// Example response
{
  "success": true,
  "data": {
    "ref": "01HFGH2J...",
    "status": "RUNNING"
  }
}
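A caller could wrap the trigger request in a small payload builder that validates the required fields before invoking the tool. The request shape below is an assumption inferred from the parameters example above, not the package's exported API:

```typescript
// Assumed request shape, inferred from the parameters example above.
interface ExecutePipelineRequest {
  application: string;
  pipelineId: string;
  parameters?: Record<string, string>;
}

// Build and sanity-check a trigger payload before handing it to the tool.
function buildTrigger(
  application: string,
  pipelineId: string,
  parameters?: Record<string, string>
): ExecutePipelineRequest {
  if (!application.trim()) throw new Error('application is required');
  if (!pipelineId.trim()) throw new Error('pipelineId is required');
  return { application, pipelineId, ...(parameters ? { parameters } : {}) };
}

const payload = buildTrigger('myapp', 'pipeline-1', {
  version: '1.2.3',
  environment: 'production',
});
console.log(JSON.stringify(payload));
```

Failing fast on a missing application or pipeline ID keeps malformed requests from ever reaching Gate.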
The server automatically maintains context about your Spinnaker deployments, covering the monitored applications, their pipelines, and recent execution state. Context is refreshed every 30 seconds by default.
The server can be configured using the following environment variables:
GATE_URL: URL of your Spinnaker Gate service
MCP_PORT: Port to run the MCP server on (default: 3000)
REFRESH_INTERVAL: Context refresh interval in seconds (default: 30)

The package exports TypeScript types for working with the server:
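These variables can be read once at startup with the documented defaults applied. This is a minimal sketch; `loadConfig` is a hypothetical helper, and since GATE_URL has no documented default, the localhost fallback below is an assumption:

```typescript
interface ServerConfig {
  gateUrl: string;
  port: number;
  refreshIntervalSeconds: number;
}

// Read the documented environment variables, falling back to the documented defaults.
// GATE_URL has no documented default; the localhost value here is a placeholder assumption.
function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  return {
    gateUrl: env.GATE_URL ?? 'http://localhost:8084',
    port: Number(env.MCP_PORT ?? 3000),
    refreshIntervalSeconds: Number(env.REFRESH_INTERVAL ?? 30),
  };
}

console.log(loadConfig({})); // all defaults applied
```

In a real process you would pass `process.env` to `loadConfig`.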
import type {
  SpinnakerApplication,
  SpinnakerPipeline,
  SpinnakerDeployment,
  SpinnakerExecution
} from '@airjesus17/mcp-server-spinnaker';
To contribute to the development:
yarn install
yarn build
yarn test
MIT License - see LICENSE for details.