Deploy a Mastra TypeScript Agent to CloudBase Run

In one sentence: Use Mastra 1.x to declare agents and tools in TypeScript, run mastra build to produce a standard Hono server, write a multi-stage Dockerfile, and deploy the entire agent backend to CloudBase Run via tcb cloudrun deploy — with a demo tool that queries CloudBase's cloud database.

Estimated time: 40 minutes | Difficulty: Advanced

Applicable Scenarios

This recipe covers the scenario where your team is on the TypeScript stack and wants an agent framework with tool calling, multi-step execution, and memory — without being locked into a platform deployer like Vercel or Cloudflare — and wants the agent backend running in a container you fully control. Mastra's "self-hosted Hono server" deployment mode maps directly onto CloudBase Run's Docker model.

  • Applicable: TypeScript teams that need an agent framework (tools / memory / multi-step)
  • Applicable: projects that need private-network connectivity to CloudBase databases, object storage, and Cloud Functions to avoid cross-cloud bandwidth costs
  • Applicable: projects that need a custom domain, ICP filing for China hosting, and no overseas deployment
  • Not applicable: a pure frontend streaming chatbot with no tools; use route C of the add-vercel-ai-sdk-streaming-chatbot recipe instead
  • Not applicable: a single LLM call with no tools at all; a Cloud Function is sufficient, and an agent framework is unnecessary

One-line relationship between Mastra and the Vercel AI SDK: the Vercel AI SDK is the underlying SDK (streaming + provider abstraction); Mastra is the agent framework built on top of it (agent / tool / workflow / memory), still running on Hono. The approach in this recipe therefore applies to any agent framework that outputs a Hono or Node.js server.

Prerequisites

Dependency | Version
Node.js (local dev + image runtime) | ≥ 20 (Mastra 1.x requirement)
@mastra/core | ^1.0 (stable 1.31.x at time of writing, GA since 2026-01-20)
mastra CLI | latest (global or via npx)
@ai-sdk/openai | ^1.x (or swap in a Hunyuan / DeepSeek-compatible provider)
@cloudbase/node-sdk | ^3.x (used by the demo tool, optional)
Docker (local build verification) | latest
@cloudbase/cli (tcb) | latest

You will also need a CloudBase environment with Cloud Hosting enabled.

Step 1: Initialize a project with mastra create

npm install -g mastra
mastra create my-agent
cd my-agent

The CLI will ask a few questions (select TypeScript, select the OpenAI provider, whether to include an example agent) — the defaults work fine. The generated directory structure looks roughly like this:

my-agent/
├── src/
│   └── mastra/
│       ├── index.ts          # Mastra instance entry point, registers agents / tools
│       └── agents/
│           └── my-agent.ts   # agent definition
├── package.json
├── tsconfig.json
└── .env                      # put OPENAI_API_KEY here

Start it locally to confirm the default agent works:

mastra dev
# starts a playground at http://localhost:4111 by default — you can invoke the agent directly

mastra dev is the development server with hot reload and a UI playground; production uses the mastra build artifact. Do not mix the two.
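
The dev server mounts the same REST routes as the production artifact, so you can also sanity-check it from the command line. A quick check, assuming the default port (the same route appears again under Verification below):

curl http://localhost:4111/api/agents
# should return JSON listing the registered agents (the example agent, if you kept it)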

Step 2: Write the agent and tool

Agent definition (src/mastra/agents/my-agent.ts):

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { queryDb } from "../tools/query-db";

export const myAgent = new Agent({
  name: "myAgent",
  instructions: `You are a CloudBase assistant. When the user asks about data in the database, call the queryDb tool to query it, then answer in plain language. Do not fabricate data.`,
  model: openai("gpt-4o-mini"),
  tools: { queryDb },
});

Tool definition (src/mastra/tools/query-db.ts) — a demo tool that queries the CloudBase cloud database:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import tcb from "@cloudbase/node-sdk";

const app = tcb.init({
  env: process.env.CLOUDBASE_ENV,
});

export const queryDb = createTool({
  id: "query-db",
  description: "Query the CloudBase cloud database and return the first N records from a collection",
  inputSchema: z.object({
    collection: z.string().describe("Collection name, e.g. users / orders"),
    limit: z.number().int().min(1).max(50).default(10),
  }),
  outputSchema: z.object({
    data: z.array(z.record(z.unknown())),
    count: z.number(),
  }),
  execute: async ({ context }) => {
    const { collection, limit } = context;
    const res = await app
      .database()
      .collection(collection)
      .limit(limit)
      .get();
    return {
      data: res.data ?? [],
      count: res.data?.length ?? 0,
    };
  },
});

Mastra instance entry point (src/mastra/index.ts):

import { Mastra } from "@mastra/core";
import { myAgent } from "./agents/my-agent";

export const mastra = new Mastra({
  agents: { myAgent },
});

Hard constraints on how to write these:

  • inputSchema must use zod. Mastra converts it to the JSON Schema for the LLM's tool calling protocol. Missing a single .describe() noticeably degrades LLM call quality.
  • The context received by execute is the typed object inferred from inputSchema — no manual parsing needed.
  • Explicitly state the conditions under which each tool should be used in the agent's instructions. This is far more reliable than letting the LLM guess on its own.
  • Do not put tcb.init inside execute — re-creating a connection on every call will exhaust the cloud database connection pool.
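
If CLOUDBASE_ENV is not guaranteed to be set at module-load time, you can keep the one-connection rule from the last bullet with a lazy singleton instead. A minimal sketch; getApp is a hypothetical helper, not part of either SDK:

import tcb from "@cloudbase/node-sdk";

// Hypothetical lazy singleton: still initialized exactly once,
// but deferred until the first tool call instead of module load.
let app: ReturnType<typeof tcb.init> | undefined;

export function getApp() {
  app ??= tcb.init({ env: process.env.CLOUDBASE_ENV });
  return app;
}

Inside execute, call getApp().database() instead of referencing a module-level app.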

Step 3: Verify the build artifact with mastra build

mastra build

The artifact is placed under .mastra/output/ by default (check the CLI output for the exact path), with a structure roughly like:

.mastra/output/
├── index.mjs # Hono server entry point
├── package.json # runtime dependencies (already pruned)
└── ...

What mastra build does: it compiles your agent into a production server using Hono ^4.6 + @hono/node-server ^1.13, and automatically mounts REST routes like /api/agents/:agentId/generate. In other words, after the build this is a standard Node.js HTTP server that runs without the Mastra CLI.

Start it locally to confirm the artifact works:

node .mastra/output/index.mjs
# listens on 4111 by default; a POST to http://localhost:4111/api/agents/myAgent/generate should respond (see Verification for example payloads)

If this step fails, the Docker build will also fail. Fix this locally first.
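
Since the artifact is a plain HTTP server, any client that can speak HTTP can call the mounted routes. A minimal TypeScript sketch against the locally running artifact; result.text and result.toolCalls are assumed to match the response shape described under Verification:

// POST to the generate route that mastra build mounts automatically.
// Assumes the Step 3 artifact is running locally on port 4111.
const res = await fetch("http://localhost:4111/api/agents/myAgent/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Hello, introduce yourself" }],
  }),
});

const result = await res.json();
console.log(result.text);      // final answer text
console.log(result.toolCalls); // populated when a tool was triggered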

Step 4: Write the multi-stage Dockerfile

Create Dockerfile in the project root:

# ===== Stage 1: deps =====
FROM node:20-alpine AS deps
WORKDIR /app

COPY package.json package-lock.json* ./
RUN npm ci

# ===== Stage 2: builder =====
FROM node:20-alpine AS builder
WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules
COPY . .

RUN npx mastra build

# ===== Stage 3: runner =====
FROM node:20-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME=0.0.0.0

# Run as non-root user
RUN addgroup --system --gid 1001 nodejs \
    && adduser --system --uid 1001 mastra

# Copy only the build artifact + runtime node_modules
COPY --from=builder --chown=mastra:nodejs /app/.mastra/output ./
COPY --from=builder --chown=mastra:nodejs /app/node_modules ./node_modules

USER mastra

EXPOSE 3000

CMD ["node", "index.mjs"]

Key details:

  • HOSTNAME=0.0.0.0 is required. @hono/node-server defaults to listening on 127.0.0.1, which is unreachable from outside the container — the same gotcha as Next.js standalone.
  • PORT=3000 must match the port you specify when creating the service in CloudBase Run. Mastra defaults to 4111; we standardize to 3000 here to align with other recipes.
  • The Mastra runtime entry point is just a Node.js process, so node:20-alpine (≈ 50 MB) is sufficient as the base image. Do not pull node:20 (≈ 400 MB).
  • There is no output: 'standalone' equivalent here (that is a Next.js concept). Mastra prunes dependencies itself, so you can COPY node_modules directly.
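
One more practical detail: COPY . . in the builder stage copies everything not excluded by .dockerignore, and a local node_modules would overwrite the clean npm ci install from the deps stage. A .dockerignore along these lines keeps the build context small and keeps .env out of the image layers; treat it as a starting point:

# .dockerignore (sketch)
node_modules
.mastra
.env
.git
*.log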

Verify the image locally:

docker build -t my-mastra-agent:local .
docker run -p 3000:3000 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e CLOUDBASE_ENV=$CLOUDBASE_ENV \
  my-mastra-agent:local
# test in a browser or with curl at http://localhost:3000/api/agents/myAgent/generate

Step 5: Deploy with tcb cloudrun deploy

tcb login
tcb cloudrun deploy --port 3000

The CLI will ask:

  1. Select an Environment ID
  2. Service name (recommend something like my-agent or mastra-agent)
  3. Whether to enable public network access (select yes if the frontend needs to call it)

The CLI uploads the current directory, and the cloud side builds the image from your Dockerfile and deploys it — typically a few minutes. After deployment, the Console under "Cloud Hosting → Services → your service name" shows a default domain like https://my-agent-xxxx.ap-shanghai.app.tcloudbase.com.

Environment variables must be configured separately in the CloudBase Run Console. Your local .env is not baked into the image (and should not be). Under "Service Settings → Version Management → New Version", add environment variables in the "Environment Variables" section:

Key | Required | Notes
OPENAI_API_KEY | Yes | LLM API key
CLOUDBASE_ENV | Only if the tool is used | Environment ID where the cloud database lives
TENCENTCLOUD_SECRETID | When a public-network container accesses a private cloud database | Use secrets-based auth
TENCENTCLOUD_SECRETKEY | Same as above | Do not hardcode in the image

When Cloud Hosting and Cloud Functions share the same environment, @cloudbase/node-sdk can usually obtain temporary credentials over the private network automatically, eliminating the need for secret configuration. See secure-secrets-in-cloud-function for details.
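
For the public-network row in the table above, pass the secrets explicitly when initializing the SDK and read them from the injected environment variables. A minimal sketch of the explicit-credentials variant of the tool's init call:

import tcb from "@cloudbase/node-sdk";

// Explicit credentials for a container that cannot use private-network auth.
// The variable names match the table above; never hardcode the values.
const app = tcb.init({
  env: process.env.CLOUDBASE_ENV,
  secretId: process.env.TENCENTCLOUD_SECRETID,
  secretKey: process.env.TENCENTCLOUD_SECRETKEY,
});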

Verification

# 1. Health check
curl -I https://my-agent-xxxx.ap-shanghai.app.tcloudbase.com/
# Expected: HTTP/2 200 or 404 (Mastra may not serve the root path; any HTTP response proves the service is reachable)

# 2. List registered agents
curl https://my-agent-xxxx.ap-shanghai.app.tcloudbase.com/api/agents

# 3. Invoke the agent (no tool)
curl -X POST https://my-agent-xxxx.ap-shanghai.app.tcloudbase.com/api/agents/myAgent/generate \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello, introduce yourself"}]}'

# 4. Invoke the agent to trigger tool calling
curl -X POST https://my-agent-xxxx.ap-shanghai.app.tcloudbase.com/api/agents/myAgent/generate \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Fetch the first 5 records from the users collection"}]}'

The response for step 4 should include a toolCalls field recording that the LLM triggered the queryDb tool along with the exact arguments it provided. If it is absent, either the instructions are not explicit enough, or the model is too weak (gpt-4o-mini handles tool calling reliably; smaller models may not).

Common Errors

  • Deployment succeeds but returns 503 / container fails to start
    Cause: the Hono server is listening on 127.0.0.1 and the port is not exposed externally.
    Fix: add ENV HOSTNAME=0.0.0.0 to the Dockerfile and align the port with --port.
  • Calling the agent returns OpenAI API key not found or 401
    Cause: no key in the image, or OPENAI_API_KEY not injected.
    Fix: add OPENAI_API_KEY under "Service Settings → Environment Variables" in CloudBase Run; never put it in the Dockerfile or it will be baked into the image layer.
  • LibSQL reports unable to open database file, or conversation history is lost after a restart
    Cause: Mastra's default storage is LibSQL file:./mastra.db, which is ephemeral inside the container and lost on every instance restart.
    Fix: switch to PostgreSQL / MySQL for production: new Mastra({ storage: new PostgresStore({ connectionString: ... }) }).
  • Tool call reports tcb is not a function or database is undefined
    Cause: @cloudbase/node-sdk is not included in the runtime, or CLOUDBASE_ENV is not injected.
    Fix: put @cloudbase/node-sdk under dependencies (not devDependencies) in package.json; add CLOUDBASE_ENV as an environment variable.
  • mastra build reports Cannot find module '@mastra/core'
    Cause: the mastra CLI is installed globally but @mastra/core is not installed locally in the project.
    Fix: run npm install @mastra/core to add it as a local dependency; the mastra CLI is just the trigger.
  • Agent calls time out after deployment
    Cause: LLM call duration exceeds the Cloud Hosting default request timeout (usually 60 s).
    Fix: in the Console under "Service Settings → Advanced Configuration", increase the request timeout to 300 s+; or use streaming on the frontend (the /stream endpoint instead of /generate).
  • Tool is triggered but all parameters are undefined
    Cause: the inputSchema fields have no .describe(), so the LLM does not know how to fill them.
    Fix: add .describe("description") to every zod field; this significantly improves the fill rate.
  • Image is very large (> 500 MB)
    Cause: the runner stage copied the entire builder instead of only .mastra/output.
    Fix: follow the Dockerfile above strictly; the runner should COPY only the artifact and node_modules.
  • mastra dev works locally but the mastra build output does not start
    Cause: code uses import.meta.url / top-level await or similar patterns that only work in dev.
    Fix: move these into execute function bodies or async initialization.
  • Agent memory is inconsistent across instances
    Cause: default in-memory storage is per-instance, so each instance is independent after horizontal scaling.
    Fix: configure Mastra's memory backend to PG / Redis (see the official documentation), or force single-instance operation.
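
For the two storage-related items above, the durable setup looks roughly like the following. This is a sketch assuming the @mastra/pg package and a DATABASE_URL environment variable; check the official Mastra storage docs for the exact package and option names:

import { Mastra } from "@mastra/core";
import { PostgresStore } from "@mastra/pg";
import { myAgent } from "./agents/my-agent";

// Durable storage survives container restarts and is shared
// across horizontally scaled instances.
export const mastra = new Mastra({
  agents: { myAgent },
  storage: new PostgresStore({
    connectionString: process.env.DATABASE_URL!, // hypothetical variable name
  }),
});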

Build-phase errors are visible with full output under "Cloud Hosting → Service Details → Deployment History → View Logs". Runtime errors appear in "Service Details → Live Logs" as stdout/stderr. Error code reference: https://docs.cloudbase.net/error-code/.