Deploy an MCP Server to CloudBase Run
In one sentence: Write an MCP Server using `@modelcontextprotocol/sdk`, use Hono for HTTP routing, package it with a multi-stage Dockerfile, deploy it to CloudBase Run via `tcb cloudrun deploy`, and have Cursor / Claude Code / Windsurf connect to it directly via the Streamable HTTP transport.
Estimated time: 30 minutes | Difficulty: Advanced
Applicable Scenarios
MCP (Model Context Protocol) is an open protocol led by Anthropic for connecting LLM agents to tools and data sources. This recipe covers the scenario where you have existing business capabilities (database queries, internal APIs, file retrieval) that you want Cursor / Claude Code / Windsurf and other agent tools to call directly — without handing internal credentials to a third-party desktop client. Run the MCP Server in your own CloudBase Run instance and let agents connect via an HTTPS URL.
- Applicable: wrapping internal APIs, database queries, and knowledge base retrieval as MCP tools for team-internal agent tools
- Applicable: needing a stable endpoint, custom domain, HTTPS, and private network connectivity with other cloud resources (database, object storage, Cloud Functions)
- Applicable: letting multiple people share a single MCP Server instead of running a local copy each
- Not applicable: purely local tools (operating local files, reading local process lists) — stdio transport running a local process is more appropriate
- Not applicable: experimental or one-off prompt engineering — a prompt template is sufficient
- Not applicable: scenarios where only your own frontend calls the service and no agent tools are involved — a plain HTTP API is fine
A note on the difference between Cloud Hosting and Cloud Functions: an MCP Server is a long-running process that maintains session state, and Streamable HTTP may hold SSE streams open for server-initiated messages — Cloud Hosting is the better fit for these scenarios. If you only need stateless tool calls and can tolerate cold starts, Cloud Functions can also work.
Prerequisites
| Dependency | Version |
|---|---|
| Node.js (local dev + image runtime) | ≥ 20 |
| `@modelcontextprotocol/sdk` | 1.29.0 (v1.x is the current production recommendation; v2 is pending GA in 2026 Q1) |
| `hono` + `@hono/node-server` | latest |
| MCP transport | Streamable HTTP (replaces the HTTP+SSE dual-endpoint approach introduced in the 2024-11-05 spec and since deprecated) |
| Docker (local build verification) | latest |
| `@cloudbase/cli` | latest |
| A CloudBase Environment with Cloud Hosting enabled | — |
One more note on transport choice: Streamable HTTP reduces the server to a single endpoint URL (e.g. https://example.com/mcp), supporting both POST (client → server requests) and GET (server → client streaming push), with session maintained via the Mcp-Session-Id header. This is simpler than the old HTTP+SSE approach (two endpoints plus session query parameters) and better suited for placement behind a CDN or gateway. This recipe uses Streamable HTTP exclusively.
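To make the session mechanics concrete, here is a client-side sketch of the headers involved. The helper functions are illustrative, not part of the SDK; real clients normally use the SDK's `Client` class, which handles this automatically.

```typescript
// Illustrative sketch of Streamable HTTP session headers from the client side.

// The very first POST carries an InitializeRequest and no session header yet —
// the server mints the session ID and returns it in the Mcp-Session-Id
// response header.
function initializeHeaders(): Record<string, string> {
  return {
    "Content-Type": "application/json",
    // the server may answer with plain JSON or an SSE stream
    Accept: "application/json, text/event-stream",
  };
}

// Every subsequent request must echo the server-issued session ID.
function sessionHeaders(sessionId: string): Record<string, string> {
  return {
    ...initializeHeaders(),
    "Mcp-Session-Id": sessionId,
    "MCP-Protocol-Version": "2025-06-18",
  };
}
```

Dropping the `Mcp-Session-Id` header on a follow-up request is what produces the 400 listed under Common Errors below.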
Step 1: Write the MCP Server Code
Create a new project:
mkdir cloudbase-mcp-demo && cd cloudbase-mcp-demo
npm init -y
npm install @modelcontextprotocol/sdk@1.29.0 hono @hono/node-server zod
npm install --save-dev typescript @types/node tsx
Update package.json to be an ESM project and add a start script:
{
"name": "cloudbase-mcp-demo",
"type": "module",
"main": "dist/index.js",
"scripts": {
"build": "tsc",
"start": "node dist/index.js",
"dev": "tsx src/index.ts"
}
}
tsconfig.json (minimal viable config):
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "dist",
"strict": true,
"esModuleInterop": true
},
"include": ["src"]
}
src/index.ts — a sample MCP Server that registers one query-cloudbase-database tool for demonstration (replace with your own business logic):
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { Hono } from "hono";
import { serve, type HttpBindings } from "@hono/node-server";
import { RESPONSE_ALREADY_SENT } from "@hono/node-server/utils/response";
import { randomUUID } from "node:crypto";
import { z } from "zod";
// Two-argument z.record works in both zod v3 and v4 (v4 dropped the one-argument form)
const QueryArgs = z.object({
  collection: z.string(),
  where: z.record(z.string(), z.unknown()).optional(),
});
// 1. Create an MCP Server instance and declare its capabilities. One Server
//    instance should own exactly one transport, so we build a fresh pair per
//    client session instead of sharing a single Server across sessions.
function createMcpServer(): Server {
  const mcpServer = new Server(
    { name: "cloudbase-mcp-demo", version: "0.1.0" },
    { capabilities: { tools: {} } },
  );
  // 2. List the tools this service exposes
  mcpServer.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [
      {
        name: "query-cloudbase-database",
        description: "Query the CloudBase database collection by a where clause.",
        inputSchema: {
          type: "object",
          properties: {
            collection: { type: "string", description: "Collection name" },
            where: { type: "object", description: "Filter conditions" },
          },
          required: ["collection"],
        },
      },
    ],
  }));
  // 3. Handle tool calls
  mcpServer.setRequestHandler(CallToolRequestSchema, async (req) => {
    if (req.params.name !== "query-cloudbase-database") {
      throw new Error(`Unknown tool: ${req.params.name}`);
    }
    const args = QueryArgs.parse(req.params.arguments ?? {});
    // TODO: replace with a real query via @cloudbase/node-sdk; this sample only echoes the arguments
    return {
      content: [
        {
          type: "text",
          text: `Would query collection=${args.collection} where=${JSON.stringify(args.where ?? {})}`,
        },
      ],
    };
  });
  return mcpServer;
}
// 4. HTTP routing with Hono: hand /mcp over to the Streamable HTTP transport.
//    HttpBindings exposes Node's raw req/res as c.env.incoming / c.env.outgoing.
const app = new Hono<{ Bindings: HttpBindings }>();
// One transport instance per client session
const transports = new Map<string, StreamableHTTPServerTransport>();
app.all("/mcp", async (c) => {
  // DNS rebinding protection: the Origin header must be validated
  const origin = c.req.header("origin");
  const allowed = (process.env.ALLOWED_ORIGINS ?? "")
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean);
  if (origin && allowed.length > 0 && !allowed.includes(origin)) {
    return c.json({ error: "forbidden_origin" }, 403);
  }
  const sessionId = c.req.header("mcp-session-id");
  let transport = sessionId ? transports.get(sessionId) : undefined;
  if (!transport) {
    // First request (InitializeRequest): create a new transport, generate a
    // session id, and connect a dedicated Server instance to it
    transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      onsessioninitialized: (id) => transports.set(id, transport!),
    });
    transport.onclose = () => {
      if (transport!.sessionId) transports.delete(transport!.sessionId);
    };
    await createMcpServer().connect(transport);
  }
  // The transport writes directly to Node's raw response, so tell Hono the
  // response has already been sent
  await transport.handleRequest(
    c.env.incoming,
    c.env.outgoing,
    await c.req.json().catch(() => undefined),
  );
  return RESPONSE_ALREADY_SENT;
});
app.get("/health", (c) => c.json({ ok: true }));
const port = Number(process.env.PORT ?? 3000);
const hostname = process.env.HOSTNAME ?? "0.0.0.0";
serve({ fetch: app.fetch, port, hostname });
console.log(`MCP server listening on ${hostname}:${port}/mcp`);
A few key points:
- `Server` + `StreamableHTTPServerTransport` is the standard combination for SDK 1.x. The official spec deprecated the old HTTP+SSE dual-endpoint approach after 2025-03-26 — use Streamable HTTP for new projects.
- The session ID is generated by the server: the client sends an `InitializeRequest` on first contact; the server returns a UUID in the `Mcp-Session-Id` response header. The client must include this header in all subsequent requests — omitting it results in a 400.
- The `Origin` header must be validated to prevent DNS rebinding (required by the spec). You can be lenient during local development; set `ALLOWED_ORIGINS` explicitly for production.
- Bind to `127.0.0.1` for local development; bind to `0.0.0.0` when deploying to CloudBase Run — the Dockerfile below handles this.
- The protocol version header is `MCP-Protocol-Version: 2025-06-18`; the SDK negotiates this automatically, so business code does not need to handle it.
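The Origin check in the sample route reduces to a small pure function. Here is a sketch that mirrors its lenient-by-default behavior (the function name is illustrative, not from the SDK):

```typescript
// Sketch of the Origin allowlist check used in the /mcp route, pulled out as
// a pure function so it can be unit-tested in isolation.
function isOriginAllowed(origin: string | undefined, allowlistEnv: string): boolean {
  const allowed = allowlistEnv
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean);
  // No Origin header (curl, many desktop clients) or an empty allowlist: pass.
  if (!origin || allowed.length === 0) return true;
  return allowed.includes(origin);
}
```

For production, set `ALLOWED_ORIGINS` so the allowlist is non-empty, and consider rejecting requests with no `Origin` at all if only browser-based clients are expected.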
Step 2: Write the Multi-Stage Dockerfile
Create Dockerfile in the project root:
# ===== Stage 1: deps =====
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
# ===== Stage 2: builder =====
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# keep production dependencies only
RUN npm prune --omit=dev
# ===== Stage 3: runner =====
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME=0.0.0.0
# create a non-root user to run the service
RUN addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 mcpsrv
COPY --from=builder --chown=mcpsrv:nodejs /app/dist ./dist
COPY --from=builder --chown=mcpsrv:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=mcpsrv:nodejs /app/package.json ./package.json
USER mcpsrv
EXPOSE 3000
CMD ["node", "dist/index.js"]
A few key details:
- `HOSTNAME=0.0.0.0` is required. Node.js defaults to listening on `localhost`, which is unreachable from outside the container — this is the most common deployment failure in CloudBase Run. `EXPOSE 3000` / `PORT=3000` / `tcb cloudrun deploy --port 3000` must all be consistent.
- The build stage uses `npm ci` to install everything (including the dev dependencies `typescript` and `tsx`); after compilation, `npm prune --omit=dev` strips dev dependencies before they are COPYed into the runner, keeping the image small.
- `--chown=mcpsrv:nodejs` grants read permissions to the non-root user; omitting it causes `EACCES` errors.
Verify locally that the image runs correctly:
docker build -t cloudbase-mcp-demo:local .
docker run -p 3000:3000 -e ALLOWED_ORIGINS=http://localhost:3000 cloudbase-mcp-demo:local
# In another terminal
curl -i http://localhost:3000/health
# Expected: HTTP/1.1 200 OK
Step 3: Deploy via tcb cloudrun deploy
Deploy with a single CLI command from CI or locally:
# Interactive login locally
tcb login
# Or use an API key in CI environments
tcb login --apiKeyId <your API key id> --apiKey <your API key>
tcb cloudrun deploy --port 3000
The CLI will ask three things:
- Select an Environment ID
- Service name (recommend matching the project name, e.g. `cloudbase-mcp-demo`)
- Whether to enable public network access (must select yes — agent tools connect from the public internet)
The CLI packages and uploads the current directory, triggering a cloud-side build (CloudBase builds the image from your Dockerfile and deploys it). The entire process typically takes a few minutes.
After deployment, the Console under "Cloud Hosting → Services → your service name" shows a default domain like https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com. The final endpoint for clients is that domain plus /mcp:
https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com/mcp
For custom domains, adding auth header validation, and configuring environment variables (e.g. ALLOWED_ORIGINS, upstream service API keys), the steps are identical to the previous recipe Deploy Next.js to CloudBase Run and are not repeated here.
Step 4: Configure in Cursor / Claude Code / Windsurf
All three major agent tools already support Streamable HTTP transport — just provide the URL.
Cursor: edit ~/.cursor/mcp.json (macOS / Linux):
{
"mcpServers": {
"cloudbase-mcp-demo": {
"url": "https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com/mcp",
"headers": {
"Authorization": "Bearer YOUR_KEY"
}
}
}
}
After saving, the MCP section of the Cursor settings panel reloads automatically.
Claude Code: add with a single CLI command:
claude mcp add --transport http cloudbase-demo \
https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com/mcp
`claude mcp list` shows the newly added server; the `/mcp` command enters interactive mode to view the tool list.
Windsurf: edit ~/.codeium/windsurf/mcp_config.json — the format is identical to Cursor:
{
"mcpServers": {
"cloudbase-mcp-demo": {
"url": "https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com/mcp",
"headers": {
"Authorization": "Bearer YOUR_KEY"
}
}
}
}
The `Authorization` header is optional — if your server code validates a token (refer to the `requireAuth` middleware in connect-openai-api-cloud-function and add it to the `/mcp` route), include the corresponding token here. The sample server in this recipe has no token auth and relies only on Origin validation — always add token auth for production.
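As a sketch of what such token validation could look like, the check reduces to one pure function called at the top of the `/mcp` handler before the request reaches the transport. The function name and the `MCP_AUTH_TOKEN` variable are assumptions of this example, not an existing middleware:

```typescript
import { timingSafeEqual } from "node:crypto";

// Hypothetical bearer-token check for the /mcp route.
function checkBearer(authHeader: string | undefined, expectedToken: string): boolean {
  if (!authHeader?.startsWith("Bearer ")) return false;
  const got = Buffer.from(authHeader.slice("Bearer ".length));
  const want = Buffer.from(expectedToken);
  // timingSafeEqual throws on length mismatch, so compare lengths first
  return got.length === want.length && timingSafeEqual(got, want);
}
```

Wired into the route, this could look like `if (!checkBearer(c.req.header("authorization"), process.env.MCP_AUTH_TOKEN ?? "")) return c.json({ error: "unauthorized" }, 401);`, where `MCP_AUTH_TOKEN` is an environment variable name chosen for this example.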
Verification
Run these after deployment:
# 1. Health check
curl -i https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com/health
# Expected: HTTP/2 200, body {"ok":true}
# 2. MCP initialization handshake - simulate a client sending InitializeRequest
curl -i -X POST 'https://cloudbase-mcp-demo-abc123.ap-shanghai.app.tcloudbase.com/mcp' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json, text/event-stream' \
-H 'MCP-Protocol-Version: 2025-06-18' \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-06-18",
"capabilities": {},
"clientInfo": { "name": "curl", "version": "0.1" }
}
}'
# Expected: response header contains Mcp-Session-Id: <uuid>, body is InitializeResult JSON
- In Cursor: Settings panel → MCP → find `cloudbase-mcp-demo`; status should be "Connected" with 1 tool listed (`query-cloudbase-database`).
- In a chat, ask the agent directly: "Please use query-cloudbase-database to query collection=users where={"status":"active"}" — the agent will invoke the tool, and you can see the corresponding request in the Cloud Hosting "Service Details → Logs" view.
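One wrinkle when scripting the handshake with curl: if the server answers with `Content-Type: text/event-stream` rather than plain JSON, the InitializeResult is wrapped in SSE `data:` lines. A minimal extraction sketch (the function name is illustrative, and it ignores multi-line `data` fields):

```typescript
// Extract JSON payloads from an SSE (text/event-stream) body, as returned by
// Streamable HTTP servers that stream their responses.
function parseSseData(body: string): unknown[] {
  return body
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice("data:".length).trim()));
}

const sample = [
  "event: message",
  'data: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2025-06-18"}}',
  "",
].join("\n");

const [msg] = parseSseData(sample) as Array<{ result: { protocolVersion: string } }>;
console.log(msg.result.protocolVersion); // prints 2025-06-18
```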
Common Errors
| Error | Cause | Fix |
|---|---|---|
| Deployment succeeds but the client keeps failing to connect / curl times out | The process inside the container binds to `127.0.0.1`, unreachable from outside | Add `ENV HOSTNAME=0.0.0.0` to the Dockerfile; read `process.env.HOSTNAME` in code and pass it to `serve()` |
| `400 Bad Request: No valid session ID provided` | Requests after the first are missing the `Mcp-Session-Id` header | Check that the client caches the `Mcp-Session-Id` returned in the response headers of the initialize request; a custom client must send this header on every subsequent request |
| `403 forbidden_origin` | The `Origin` header is not on the allowlist (DNS rebinding protection) | Add the client's origin (e.g. `https://cursor.sh` or `null`) to the `ALLOWED_ORIGINS` environment variable; browser and desktop client behavior differs — you can temporarily clear the allowlist for local debugging |
| Streaming responses are truncated in the browser/agent or show `Connection closed` | An intermediate layer (CDN / custom domain gateway) has response buffering enabled or a timeout set too short | Disable buffering in the custom domain config; increase "Service Settings → Timeout" in CloudBase Run as needed (default is 30s — MCP long connections typically need ≥ 300s) |
| `404 Not Found` at `/mcp` | The route is wrong or Hono did not match the method | Confirm the handler uses `app.all("/mcp", ...)` and not `app.post`; Streamable HTTP uses both POST and GET |
| `EACCES: permission denied, open '/app/...'` | `COPY` in the Dockerfile is missing `--chown`, so the non-root user cannot read the files | Add `--chown=mcpsrv:nodejs` to all `COPY --from=builder` lines |
| Cursor MCP panel shows red "Failed" with no specific error | TLS certificate issue / trailing slash missing or extra in URL / public network access not enabled | First run `curl -i` to confirm the endpoint is reachable; the URL must end with `/mcp` and no trailing slash; confirm "Public Network Access" is enabled for the service in the Console |
Error code definitions are at https://docs.cloudbase.net/error-code/; deployment and build failures are visible with full stack traces under "Cloud Hosting → Service Details → Deployment History → View Logs".
Related Documentation
- Deploy a Next.js 14+ App Router Application to CloudBase Run — covers the multi-stage Dockerfile and the `tcb cloudrun deploy` command in more detail; this recipe assumes familiarity with both
- Proxy OpenAI / Anthropic and Other Overseas LLM APIs via Cloud Functions — the contrasting case: an LLM API proxy in a Cloud Function (short tasks, per-request billing) versus the MCP Server in this recipe (long-running process) — exactly the Cloud Functions vs. Cloud Hosting trade-off