Node SDK Integration
Use @cloudbase/node-sdk to call CloudBase AI models from Node.js server-side environments (such as Cloud Functions or Cloud Run), supporting text generation, streaming output, image generation, and more.
Installation
npm install @cloudbase/node-sdk
Node SDK AI features require version 3.16.0 or higher. Check your version: npm list @cloudbase/node-sdk
Initialization
Usage in Cloud Functions
const tcb = require("@cloudbase/node-sdk");
const app = tcb.init({
  env: "<YOUR_ENV_ID>",
  timeout: 60000 // AI generation can take a while; a longer timeout is recommended
});
exports.main = async (event, context) => {
  const ai = app.ai();
  // Use AI features
};
Usage in Standalone Node.js Services
const tcb = require("@cloudbase/node-sdk");
const app = tcb.init({
  env: "<YOUR_ENV_ID>",
  secretId: "<YOUR_SECRET_ID>",
  secretKey: "<YOUR_SECRET_KEY>",
  timeout: 60000 // Set the timeout to 60 seconds; AI generation can take a while
});
const ai = app.ai();
Get your credentials at: Tencent Cloud API Key Management
AI model text generation can take a long time. Set timeout to 60000 (60 seconds) or higher to avoid request timeouts.
Text Generation
generateText() - Non-streaming
Returns the complete result at once.
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Introduce the poet Li Bai" }],
});
console.log(result.text); // Generated text
console.log(result.usage); // Token usage
console.log(result.messages); // Complete message history
Return Value
| Property | Type | Description |
|---|---|---|
| text | string | Generated text |
| messages | ChatModelMessage[] | Complete message history |
| usage | Usage | Token usage |
| rawResponses | unknown[] | Raw model responses |
| error | unknown | Error information (if any) |
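The error property carries failure details when a request does not complete cleanly. A minimal sketch of checking it before using the result; the handling shown here is illustrative, not part of the SDK:
const result = await model.generateText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Hello" }],
});

if (result.error) {
  // Something went wrong; inspect the raw model responses for details
  console.error("Generation error:", result.error);
  console.error("Raw responses:", result.rawResponses);
} else {
  console.log(result.text);
  console.log(`Total tokens: ${result.usage.total_tokens}`);
}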
streamText() - Streaming
Returns text in a stream, suitable for real-time conversation scenarios.
const model = ai.createModel("hunyuan-exp");
const res = await model.streamText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Introduce the poet Li Bai" }],
});
// Method 1: Iterate the text stream (recommended)
for await (const text of res.textStream) {
  console.log(text); // Incremental text
}
// Method 2: Iterate the data stream for the complete response data
for await (const data of res.dataStream) {
  console.log(data); // Contains choices, usage, and other complete information
}
// Get the final results
const messages = await res.messages;
const usage = await res.usage;
Return Value
| Property | Type | Description |
|---|---|---|
| textStream | AsyncIterable\<string> | Incremental text stream |
| dataStream | AsyncIterable\<DataChunk> | Complete data stream |
| messages | Promise\<ChatModelMessage[]> | Final message history |
| usage | Promise\<Usage> | Final token usage |
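For most cases textStream plus the usage promise is enough. If you need the raw chunks, the data stream exposes them; in the sketch below the choices[0].delta.content path is an assumption based on OpenAI-style chat-completion chunks, so verify it against the chunks your model actually returns:
let fullText = "";
for await (const chunk of res.dataStream) {
  // Assumed chunk shape: { choices: [{ delta: { content } }], usage?, ... }
  const delta = chunk?.choices?.[0]?.delta?.content;
  if (delta) {
    fullText += delta;
  }
  if (chunk?.usage) {
    console.log("Token usage:", chunk.usage);
  }
}
console.log(fullText);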
Image Generation
Image generation is only available in the Node SDK; it is not supported in the Web SDK or the Mini Program SDK.
const imageModel = ai.createImageModel("hunyuan-image");
const res = await imageModel.generateImage({
  model: "hunyuan-image-v3.0-v1.0.4",
  prompt: "A cute cat playing on the grass",
  size: "1280x720",
  revise: { value: true },
  enable_thinking: { value: true }
});
console.log(res.data[0].url); // Image URL (valid for 24 hours)
console.log(res.data[0].revised_prompt); // Revised prompt
Image Generation Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name, hunyuan-image-v3.0-v1.0.4 (recommended) or hunyuan-image-v3.0-v1.0.1 |
| prompt | string | Yes | Image description, max 8192 characters |
| size | string | No | Image size in "widthxheight" format, default 1024x1024; width and height must each be in [512, 2048], and the total area must not exceed 1024x1024 pixels |
| seed | number | No | Generation seed, only effective when generating 1 image, range [1, 4294967295] |
| footnote | string | No | Custom watermark content, max 16 characters, displayed at bottom-right |
| revise | { value: boolean } | No | Whether to revise the prompt; enabled by default; adds roughly 30s of latency |
| enable_thinking | { value: boolean } | No | Whether to enable thinking mode during revision; enabled by default; improves quality but adds latency (up to 60s) |
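As a reference for the optional parameters, here is a sketch that uses the imageModel created above with a fixed seed, a watermark, and prompt revision disabled to reduce latency; the seed and footnote values are arbitrary examples:
const res = await imageModel.generateImage({
  model: "hunyuan-image-v3.0-v1.0.4",
  prompt: "A watercolor landscape of mountains at sunrise",
  size: "1024x768",
  seed: 12345, // arbitrary example; only effective when generating a single image
  footnote: "demo", // custom watermark, max 16 characters
  revise: { value: false }, // skip prompt revision to save roughly 30s
  enable_thinking: { value: false } // skip thinking mode to further reduce latency
});
console.log(res.data[0].url);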
Return Value
| Property | Type | Description |
|---|---|---|
| id | string | Request ID |
| created | number | Timestamp |
| data[].url | string | Image URL (valid for 24 hours) |
| data[].revised_prompt | string | Revised prompt |
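Because the returned URL expires after 24 hours, you will usually want to persist the image. A minimal sketch that continues from the res above, downloads the image with the global fetch available in Node 18+, and saves it with the Node SDK's uploadFile; the cloudPath is an arbitrary example:
const imageUrl = res.data[0].url; // expires after 24 hours
const response = await fetch(imageUrl); // Node 18+ global fetch
const buffer = Buffer.from(await response.arrayBuffer());

const upload = await app.uploadFile({
  cloudPath: `ai-images/${Date.now()}.png`, // example path; choose your own
  fileContent: buffer
});
console.log(upload.fileID); // permanent file ID in cloud storage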
Image-to-Text (Vision)
Use vision-capable models (such as hunyuan-vision) to understand image content.
You must first configure a custom model named hunyuan-custom with the Hunyuan model's BaseURL and API Key.
const model = ai.createModel("hunyuan-custom");
const res = await model.streamText({
  model: "hunyuan-vision",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What is in this image?" },
        {
          type: "image_url",
          image_url: {
            url: "https://example.com/image.png"
          }
        }
      ]
    }
  ]
});
for await (const text of res.textStream) {
  console.log(text);
}
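If the image lives in CloudBase cloud storage rather than at a public URL, you can exchange its file ID for a temporary HTTPS URL first. A sketch using the Node SDK's getTempFileURL with the same model as above; the file ID is a placeholder:
const { fileList } = await app.getTempFileURL({
  fileList: ["cloud://your-env.your-bucket/photos/cat.png"] // placeholder file ID
});
const imageUrl = fileList[0].tempFileURL;

const res = await model.streamText({
  model: "hunyuan-vision",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image." },
        { type: "image_url", image_url: { url: imageUrl } }
      ]
    }
  ]
});
for await (const text of res.textStream) {
  console.log(text);
}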
Complete Examples
Cloud Function: AI Chat API
const tcb = require("@cloudbase/node-sdk");
const app = tcb.init({
  env: "<YOUR_ENV_ID>",
  timeout: 60000 // AI generation may take longer
});
exports.main = async (event, context) => {
  const { messages } = event;
  const ai = app.ai();
  const model = ai.createModel("hunyuan-exp");
  try {
    const result = await model.generateText({
      model: "hunyuan-turbos-latest",
      messages
    });
    return {
      success: true,
      text: result.text,
      usage: result.usage
    };
  } catch (error) {
    return {
      success: false,
      error: error.message
    };
  }
};
Cloud Function: Streaming Response (SSE)
const tcb = require("@cloudbase/node-sdk");
const app = tcb.init({
  env: "<YOUR_ENV_ID>",
  timeout: 60000 // AI generation may take longer
});
exports.main = async (event, context) => {
  const { messages } = event;
  const ai = app.ai();
  const model = ai.createModel("hunyuan-exp");
  const res = await model.streamText({
    model: "hunyuan-turbos-latest",
    messages
  });
  // Collect complete text
  let fullText = "";
  for await (const text of res.textStream) {
    fullText += text;
  }
  const usage = await res.usage;
  return {
    text: fullText,
    usage
  };
};
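The cloud function above aggregates the stream before returning, because a standard invocation returns a single result. In a standalone Node.js service or Cloud Run container you can forward chunks as they arrive; a minimal Server-Sent Events sketch using Node's built-in http module, with the port and event format chosen arbitrarily:
const http = require("http");
const tcb = require("@cloudbase/node-sdk");

const app = tcb.init({
  env: "<YOUR_ENV_ID>",
  secretId: "<YOUR_SECRET_ID>",
  secretKey: "<YOUR_SECRET_KEY>",
  timeout: 60000
});
const ai = app.ai();

http.createServer(async (req, res) => {
  // SSE headers: keep the connection open and push incremental events
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive"
  });

  const model = ai.createModel("hunyuan-exp");
  const stream = await model.streamText({
    model: "hunyuan-turbos-latest",
    messages: [{ role: "user", content: "Introduce the poet Li Bai" }]
  });

  for await (const text of stream.textStream) {
    res.write(`data: ${JSON.stringify({ text })}\n\n`); // one SSE event per chunk
  }
  res.write("data: [DONE]\n\n");
  res.end();
}).listen(3000); // example port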
Cloud Function: Image Generation
const tcb = require("@cloudbase/node-sdk");
const app = tcb.init({
  env: "<YOUR_ENV_ID>",
  timeout: 60000 // Image generation may take longer
});
exports.main = async (event, context) => {
  const { prompt } = event;
  const ai = app.ai();
  const imageModel = ai.createImageModel("hunyuan-image");
  const res = await imageModel.generateImage({
    model: "hunyuan-image-v3.0-v1.0.4",
    prompt,
    size: "1280x720"
  });
  return {
    url: res.data[0].url,
    revisedPrompt: res.data[0].revised_prompt
  };
};
Differences from Web SDK
| Feature | Node SDK | Web SDK |
|---|---|---|
| Initialization | tcb.init() | cloudbase.init() |
| Authentication | Secret Key or Cloud Function environment | Publishable Key + Login |
| Image Generation | Supported | Not supported |
| Runtime Environment | Server-side | Browser |
Type Definitions
BaseChatModelInput
interface BaseChatModelInput {
  model: string; // Model name
  messages: ChatModelMessage[]; // Message list
  temperature?: number; // Sampling temperature
  topP?: number; // Nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string | ContentPart[] }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };
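As an illustration of these types, a sketch of a multi-turn request against a chat model created with ai.createModel, combining a system message with the optional sampling parameters; the temperature and topP values are arbitrary examples:
const result = await model.generateText({
  model: "hunyuan-turbos-latest",
  temperature: 0.7, // arbitrary example value
  topP: 0.9, // arbitrary example value
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Explain what a token is in one sentence." },
    { role: "assistant", content: "A token is a small chunk of text that the model reads or writes as a unit." },
    { role: "user", content: "How are tokens counted in usage?" }
  ]
});
console.log(result.text);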
Usage
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}