SLM MCP Hub

LIVE

The World's First MCP Gateway That Learns

Every AI coding session spawns its own MCP server processes. Five Claude Code sessions with 36 MCPs mean 180 OS processes eating ~9 GB of RAM, and each session loads ~150K tokens of tool definitions into the context window before you type anything. SLM MCP Hub runs all your MCPs in one shared process, and every AI client connects to one HTTP endpoint. Federated tool discovery with memory, learning, and cost intelligence: Claude, Cursor, Windsurf, and VS Code share cache, tracing, and cost telemetry across every session.

$ pip install slm-mcp-hub
GitHub →
79%
Process Reduction
~7 GB
RAM Savings
150K
Tokens Saved / Session
3
Meta-Tools

Features

3 Meta-Tools, 430+ Tools

Instead of loading 430+ tool definitions into every session, clients get 3 meta-tools: hub__search_tools, hub__call_tool, and hub__list_servers. The hub routes each call dynamically to the right server.
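The discover-then-call flow can be sketched as a toy simulation (the server names, tool names, and return shapes below are made up; the real hub's wire format may differ):

```python
# Toy sketch of the meta-tool flow. Server/tool names and payload shapes
# are assumptions, not the hub's actual API.
REGISTRY = {
    "github": {"create_issue": "Open an issue in a repo"},
    "postgres": {"run_query": "Execute a read-only SQL query"},
}

def hub_search_tools(query: str) -> list[dict]:
    """Keyword search over every federated server's tool descriptions."""
    q = query.lower()
    return [
        {"server": server, "tool": name, "description": desc}
        for server, tools in REGISTRY.items()
        for name, desc in tools.items()
        if q in name or q in desc.lower()
    ]

def hub_call_tool(server: str, tool: str, **args) -> dict:
    """Route a call to the owning server (stubbed out here)."""
    assert tool in REGISTRY[server], f"unknown tool {server}/{tool}"
    return {"server": server, "tool": tool, "args": args}

# A client discovers tools on demand instead of preloading 430+ schemas:
hit = hub_search_tools("issue")[0]
result = hub_call_tool(hit["server"], hit["tool"], title="bug report")
print(result["server"], result["tool"])  # github create_issue
```

Only the three meta-tool definitions ever sit in the context window; everything else is looked up at call time.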

79% Process Reduction

One hub process for all MCP servers. 5 sessions × 36 MCPs drops from 180 processes (~9 GB RAM) to 37 processes (~1.9 GB RAM).
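The process arithmetic, spelled out (the only inputs are the session and MCP counts quoted above):

```python
# Process math behind the 79% figure: per-session spawning vs. one shared hub.
sessions, mcps = 5, 36
without_hub = sessions * mcps       # each session spawns its own 36 servers
with_hub = mcps + 1                 # 36 shared server processes + 1 hub
reduction = 1 - with_hub / without_hub
print(without_hub, with_hub, f"{reduction:.0%}")  # 180 37 79%
```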

150K Tokens Saved Per Session

Tool schemas are fetched on demand, not preloaded. Recover the 75% of a 200K context window that was being consumed by tool definitions.

Federated + Learning

Cache, cost telemetry, and retrieval learning shared across every connected AI client. Pair with SuperLocalMemory for automatic tool-usage learning.

One Config, Every IDE

Configure MCPs once in the hub. Claude Code, Cursor, Windsurf, VS Code, and any MCP-compatible client share the same tool set via one HTTP endpoint.
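A client-side config pointing at the hub might look like the following. The key names, port, and path are assumptions for illustration; check your client's MCP documentation for its exact schema:

```python
import json

# Hypothetical MCP client config: every IDE points at one HTTP endpoint
# instead of listing 36 stdio servers. URL and key names are assumptions.
hub_config = {
    "mcpServers": {
        "slm-mcp-hub": {
            "type": "http",
            "url": "http://localhost:8080/mcp",
        }
    }
}
print(json.dumps(hub_config, indent=2))
```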

Instant Session Startup

Clients connect over HTTP instead of spawning 36 processes per session, so sessions start instantly rather than after a ~30s process-spawn cascade.

Use Cases

Run 5+ concurrent Claude Code / Cursor / Windsurf sessions without RAM bloat
Share MCP tool cache + cost telemetry across every AI client
Recover 150K tokens of context window per session
One-config tool management across an entire agent workflow
Licensed under AGPL-3.0