Optimize Cold Start Performance for WeChat Mini Program Cloud Functions

In one sentence: For CloudBase Cloud Functions backing a WeChat Mini Program, reduce cold start latency using four techniques — moving initialization to module scope, trimming dependencies, tuning memory, and enabling Per-instance Concurrency — with full before/after code.

Estimated time: 30 minutes | Difficulty: Advanced

Applicable Scenarios

  • Applicable: Cloud Functions on the Mini Program's first-screen or login path, where cold start latency affects perceived load time
  • Not applicable: Batch processing or scheduled tasks — optimizing throughput matters more than optimizing latency there

A Cloud Function cold start has two layers: container startup (managed by the CloudBase platform, not controllable by developers) and user code initialization (executing require + module-level code — this is where developers can optimize). All four techniques in this recipe target the user code layer.

Prerequisites

Dependency            Version
@cloudbase/node-sdk   3.18.1
@cloudbase/cli        3.0.4
Node.js               ≥ 16.13

You need an existing Cloud Function to optimize. The getLoginTicket function from add-auth-wechat-miniprogram is a good starting point.

Optimization 1: Move SDK Initialization to Module Scope

Why it works: CloudBase Cloud Functions run in a Node.js process. When a container starts for the first time, Node.js loads the entire module, executing all top-level code. On subsequent requests handled by the same container instance, module-level code does not run again — this is Node.js module caching. Placing require and init at module scope means reused containers skip that overhead entirely.

Before:

// ❌ require + init on every request
exports.main = async (event) => {
  const cloudbase = require('@cloudbase/node-sdk');
  const app = cloudbase.init({
    env: process.env.TCB_ENV,
    credentials: require('./tcb_custom_login.json'),
  });
  const ticket = app.auth().createTicket(event.openid);
  return { ticket };
};

After:

// ✅ require and init at module scope
const cloudbase = require('@cloudbase/node-sdk');

const app = cloudbase.init({
  env: process.env.TCB_ENV,
  credentials: require('./tcb_custom_login.json'),
});
const auth = app.auth();

exports.main = async (event) => {
  const ticket = auth.createTicket(event.openid);
  return { ticket };
};

Note: Only pure in-memory operations (require, object initialization) belong at module scope. If init involves network I/O (e.g., establishing a database connection pool), putting it at module scope will actually slow cold starts — because the first container startup runs I/O and code loading serially.
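
A common way to keep network I/O off the startup path is lazy, memoized initialization: do CPU-only setup at module scope, but defer network-bound setup until the first request and cache the resulting promise. A minimal sketch, where connectToDatabase is a hypothetical placeholder for any network-bound setup:

// Hypothetical helper standing in for any network-bound setup (e.g., a DB pool)
async function connectToDatabase(url) {
  // ... establish and return a connection
}

let connPromise = null; // memoized across requests served by this container

function getConnection() {
  // First request pays the I/O cost; later requests reuse the same promise
  if (!connPromise) {
    connPromise = connectToDatabase(process.env.DB_URL);
  }
  return connPromise;
}

exports.main = async (event) => {
  const conn = await getConnection();
  // ... business logic using conn
};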

Optimization 2: Trim node_modules to Reduce require File Count

Why it works: During cold start, Node.js require() reads and parses JS files from disk. @cloudbase/node-sdk includes every module — database, storage, AI, and more — totalling 20+ MB unpacked. If your function only uses the auth module, loading everything else is pure waste.

Check current dependency size:

cd cloudfunctions/getLoginTicket
du -sh node_modules/ | head -1 # total size
du -sh node_modules/*/ | sort -h | tail -10 # top 10 largest dependencies

Trimming principles:

  • Only install dependencies, not devDependencies. Use npm install --production or ensure devDependencies are not mixed into dependencies in package.json.
  • Replace large libraries with native modules. The getLoginTicket function only needs a single HTTP GET to call WeChat's jscode2session — Node.js's built-in https module is sufficient (see the sketch after this list). No need to install axios (3 MB+).
  • npm ls --prod --depth=1 lists actual runtime dependencies — evaluate each one to see if it can be removed.
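
To make the second principle concrete, here is a minimal sketch of the jscode2session call using only the built-in https module. The environment variable names (WX_APPID, WX_SECRET) are placeholders for your own configuration:

// Minimal GET-and-parse helper built on Node's https module; no axios needed
const https = require('https');

function httpsGetJson(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        try { resolve(JSON.parse(body)); } catch (err) { reject(err); }
      });
    }).on('error', reject);
  });
}

exports.main = async (event) => {
  const url = 'https://api.weixin.qq.com/sns/jscode2session'
    + `?appid=${process.env.WX_APPID}&secret=${process.env.WX_SECRET}`
    + `&js_code=${event.code}&grant_type=authorization_code`;
  const session = await httpsGetJson(url); // { openid, session_key, ... }
  return { openid: session.openid };
};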

Optimization 3: Increase Memory Allocation

Why it works: CloudBase Cloud Functions allocate CPU proportional to memory (as stated in the official documentation: "CPU is automatically allocated proportional to the specified memory size"). Cold start involves CPU-intensive JS parsing — more CPU means this step finishes faster.
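
Before paying for more memory, it is worth checking how much of your cold start is actually CPU-bound parsing. A minimal sketch that times the require call at module scope:

// Time the CPU-bound load-and-parse step for the SDK (runs once per container)
const t0 = process.hrtime.bigint();
const cloudbase = require('@cloudbase/node-sdk');
const requireMs = Number(process.hrtime.bigint() - t0) / 1e6;
console.log(`require('@cloudbase/node-sdk') took ${requireMs.toFixed(1)} ms`);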

Configuration: Console → Cloud Functions → Function Details → Basic Configuration → Memory. Default is 256 MB; you can increase to 512 MB or higher (check the Console for available options).

Cost tradeoff: Doubling memory roughly doubles the per-invocation price. For login and authentication functions — "low daily call volume, latency-sensitive" — the absolute cost increase is small, and the perceived improvement is worth it. For high-QPS batch functions, keep the default.

Optimization 4: Enable Per-instance Concurrency (HTTP Cloud Functions)

Why it works: By default, each Cloud Function instance handles one request at a time. If 10 concurrent requests arrive, the platform spins up 10 new instances — each experiencing a cold start. With Per-instance Concurrency enabled, one instance handles multiple requests simultaneously. Fewer new instances are needed during traffic spikes, and the probability of a cold start drops significantly.

Configuration: Console → Cloud Functions → Function Details → Basic Configuration → Per-instance Concurrency → Enable, set the maximum concurrency (check the Console for available values).

Note: This setting may be limited to HTTP Cloud Functions; check the Console, since the available options can differ by function type (HTTP-triggered vs. event-triggered).

Side effect: With concurrency enabled, module-level variables are shared across multiple concurrent requests. If code assumes each invocation runs in a fresh process (e.g., using a module-level variable to cache per-user state), concurrent requests will interfere with each other. Rule: immutable constants can live at module scope; request-specific state must be defined inside the handler.

// ❌ Module-level variable causes interference under concurrency
let currentUserId = null;

exports.main = async (event) => {
  currentUserId = event.uid; // Request A sets it; Request B overwrites it
  // ...
};

// ✅ Request-specific state inside the handler
exports.main = async (event) => {
  const currentUserId = event.uid; // Isolated per request
  // ...
};

Verification

After completing all four optimizations, use this instrumentation snippet to confirm the improvement:

const COLD_START_AT = Date.now();
let isCold = true;

exports.main = async (event, context) => {
  const latency = Date.now() - COLD_START_AT;
  console.log(JSON.stringify({
    coldStart: isCold,
    latencyMs: isCold ? latency : 0,
  }));
  isCold = false;
  // ... business logic
};

After deploying, invoke the function dozens of times via the CLI, then filter logs for coldStart: true entries and examine the latency distribution. Run this after each individual optimization to see which technique contributes the most.
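
If you prefer a script to manual CLI invocations, a local driver like the following can generate the calls. This is a sketch that assumes @cloudbase/node-sdk is initialized locally with secretId/secretKey credentials and that the function is named getLoginTicket:

// Local driver: calls the function repeatedly so cold-start logs accumulate
const cloudbase = require('@cloudbase/node-sdk');

const app = cloudbase.init({
  env: process.env.TCB_ENV,
  secretId: process.env.TENCENTCLOUD_SECRETID,   // assumed to be set locally
  secretKey: process.env.TENCENTCLOUD_SECRETKEY,
});

async function run(times) {
  for (let i = 0; i < times; i++) {
    const t0 = Date.now();
    await app.callFunction({ name: 'getLoginTicket', data: { openid: 'test-openid' } });
    console.log(`call ${i + 1}: ${Date.now() - t0} ms`);
  }
}

run(30).catch(console.error);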

Common Errors

  • Symptom: Deployment succeeds but every call fails after moving init to module scope
    Cause: Module-level code threw (e.g., require('./tcb_custom_login.json') with the file missing), so the container fails to start
    Fix: Wrap potentially-throwing require calls in try/catch; surface errors inside the handler (see the sketch below)

  • Symptom: Data interference after enabling concurrency
    Cause: Module-level state is shared across concurrent requests
    Fix: Audit all module-level state; move request-specific state into the handler

  • Symptom: Memory increase didn't speed up cold start
    Cause: Dependencies are small or the workload is I/O-bound (CPU was never the bottleneck)
    Fix: Use the instrumentation snippet to measure how much of latencyMs is Node.js loading + parsing — if it was already short, the bottleneck is elsewhere

  • Symptom: Trimmed dependencies pass locally but the Cloud Function throws Cannot find module
    Cause: The dependency was in devDependencies; tcb fn deploy does not package dev dependencies
    Fix: Check package.json — all runtime dependencies must be listed in dependencies

  • Symptom: Cost increases noticeably after increasing memory
    Cause: Doubling memory roughly doubles the unit price; high-QPS functions see a large absolute increase
    Fix: Only increase memory for latency-sensitive, low-QPS functions; keep the default for batch processing
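
For the first row, a minimal sketch of catching module-scope failures so the container still starts and the error surfaces per call:

let initError = null;
let auth = null;

try {
  const cloudbase = require('@cloudbase/node-sdk');
  const app = cloudbase.init({
    env: process.env.TCB_ENV,
    credentials: require('./tcb_custom_login.json'), // throws if the file is missing
  });
  auth = app.auth();
} catch (err) {
  initError = err; // container starts anyway; the error is reported per request
}

exports.main = async (event) => {
  if (initError) {
    return { error: `initialization failed: ${initError.message}` };
  }
  return { ticket: auth.createTicket(event.openid) };
};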

Next Steps

  • Troubleshoot login errors one by one: fix-auth-wechat-miniprogram
  • Connect a WeCom Group Bot: connect-wecom-webhook-cloud-function