AI
Provides CloudBase AI integration capabilities for quickly connecting to large language models and Agents.
Initialization
After initializing the Node SDK, obtain the AI instance through `app.ai()`.
```js
import tcb from '@cloudbase/node-sdk'

const app = tcb.init({ env: 'xxx' })
const ai = app.ai()
```
AI
A class for creating AI models.
createModel()
Creates the specified AI text generation model.
Usage Example
```js
const model = ai.createModel("hunyuan-exp");
```
Type Declaration
```ts
function createModel(model: string): ChatModel;
```
Returns a model instance that implements the ChatModel abstract class, which provides AI text generation capabilities.
createImageModel()
Creates a specified image generation model.
Usage Example
```js
const imageModel = ai.createImageModel("hunyuan-image");
```
Type Declaration
```ts
function createImageModel(provider: string): ImageModel;
```
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| provider | Yes | string | Model provider name, e.g., "hunyuan-image" |
Return Value
Returns an ImageModel instance that provides AI image generation capabilities.
registerFunctionTool()
Registers a function tool. When calling a large language model, you can inform the model about available function tools. When the model's response is parsed as a tool call, the corresponding function tool will be automatically invoked.
Usage Example
```js
// Omit AI SDK initialization...

// 1. Define a weather tool; see the FunctionTool type for details
const getWeatherTool = {
  name: "get_weather",
  description:
    "Returns weather information for a city. Example call: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `The weather in ${city} is: Clear and crisp autumn day!!!`, // Define the tool execution here
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "The city to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. When sending a message to the model, inform it about the available weather tool
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool], // Pass in the weather tool here
  messages: [
    {
      role: "user",
      content: "Please tell me the weather in Beijing",
    },
  ],
});

console.log(result.text);
```
console.log(result.text);
Type Declaration
```ts
function registerFunctionTool(functionTool: FunctionTool): void;
```
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| functionTool | Yes | FunctionTool | See FunctionTool for details |
Return Value
undefined
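When the model's reply is parsed as a tool call, the SDK looks up the registered tool by name, parses the JSON `arguments` string, invokes `fn`, and returns the result to the model as a tool message. The dispatch can be pictured with the following sketch (illustrative only; `registry` and `dispatchToolCall` are made-up names, not SDK APIs):

```javascript
// Minimal sketch of how a registered tool might be dispatched once the
// model's response is parsed as a tool call. This is an illustration, not
// the SDK's actual implementation.
const registry = new Map();

function registerFunctionTool(tool) {
  registry.set(tool.name, tool);
}

async function dispatchToolCall(toolCall) {
  const tool = registry.get(toolCall.function.name);
  if (!tool) throw new Error(`Unknown tool: ${toolCall.function.name}`);
  // The model returns arguments as a JSON string; parse before calling fn.
  const args = JSON.parse(toolCall.function.arguments);
  const content = String(await tool.fn(args));
  // The result is wrapped as a ToolMessage and sent back to the model.
  return { role: "tool", tool_call_id: toolCall.id, content };
}

registerFunctionTool({
  name: "get_weather",
  description: "Returns weather information for a city",
  fn: ({ city }) => `The weather in ${city} is sunny`,
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
});
```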
ChatModel
This abstract class describes the interfaces provided by AI text generation model classes.
generateText()
Calls the large language model to generate text.
Usage Example
```js
const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
console.log(res.text); // Print generated text
```
Type Declaration
```ts
function generateText(data: BaseChatModelInput): Promise<{
  rawResponses: Array<unknown>;
  text: string;
  messages: Array<ChatModelMessage>;
  usage: Usage;
  error?: unknown;
}>;
```
Parameters
| Parameter | Required | Type | Example | Description |
|---|---|---|---|---|
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | The parameter type is defined as BaseChatModelInput, serving as the basic input definition. In practice, different large language models have their own unique input parameters. Developers can pass additional parameters not defined in this type based on the official documentation of the model being used to fully leverage the model's capabilities. Other parameters will be passed through to the model API, and the SDK will not perform additional processing on them. |
Return Value
| Property | Type | Example | Description |
|---|---|---|---|
| text | string | "Li Bai was a Tang Dynasty poet." | Text generated by the large language model. |
| rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, how can I help you?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | Complete response from the model, containing more detailed data such as message creation time. Since responses vary across different models, use according to your needs. |
| messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! Nice to chat with you. How can I help you? Whether it\'s about life, work, study, or other topics, I\'ll do my best to assist you.'}] | Complete message list for this call. |
| usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed in this call. |
| error | unknown | - | Errors that occurred during the call. |
streamText()
Calls the large language model to generate text in streaming mode. In streaming mode, generated text and other response data are returned via SSE. This interface's return value provides different levels of encapsulation for SSE, allowing developers to obtain text streams and complete data streams based on their needs.
Usage Example
```js
const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
for await (let str of res.textStream) {
  console.log(str); // Print generated text
}
for await (let data of res.dataStream) {
  console.log(data); // Print complete data for each response
}
```
Type Declaration
```ts
function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;
```
Parameters
| Parameter | Required | Type | Example | Description |
|---|---|---|---|---|
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | The parameter type is defined as BaseChatModelInput, serving as the basic input definition. In practice, different large language models have their own unique input parameters. Developers can pass additional parameters not defined in this type based on the official documentation of the model being used to fully leverage the model's capabilities. Other parameters will be passed through to the model API, and the SDK will not perform additional processing on them. |
Return Value
| StreamTextResult Property | Type | Description |
|---|---|---|
| textStream | ReadableStream<string> | Text generated by the model returned as a stream. Refer to the usage example to get incremental generated text. |
| dataStream | ReadableStream<DataChunk> | Response data from the model returned as a stream. Refer to the usage example to get incremental data. Since responses vary across different models, use according to your needs. |
| messages | Promise<ChatModelMessage[]> | Complete message list for this call. |
| usage | Promise<Usage> | Tokens consumed in this call. |
| error | unknown | Errors that occurred in this call. |

| DataChunk Property | Type | Description |
|---|---|---|
| choices | Array<object> | List of incremental choices returned in this chunk. |
| choices[n].finish_reason | string | The reason the model stopped inference. |
| choices[n].delta | ChatModelMessage | The incremental message content for this chunk. |
| usage | Usage | Tokens consumed in this request. |
| rawResponse | unknown | Raw response from the model. |
Example
```js
const hy = ai.createModel("hunyuan-exp");
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "What is 1+1" }],
});

// Text stream
for await (let str of res.textStream) {
  console.log(str);
}
// 1
// plus
// 1
// equals
// 2
// .

// Data stream
for await (let str of res.dataStream) {
  console.log(str);
}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
```
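Because `textStream` yields incremental fragments, reassembling the full reply is a matter of concatenation. The pattern can be exercised without calling the model by substituting any async iterable of strings; the mock generator below is illustrative and not part of the SDK:

```javascript
// Mock of res.textStream: any async iterable of strings works with for await.
async function* mockTextStream() {
  for (const chunk of ["1", " plus", " 1", " equals", " 2", "."]) {
    yield chunk;
  }
}

// Concatenate incremental chunks to recover the full generated text.
async function collectText(textStream) {
  let text = "";
  for await (const chunk of textStream) {
    text += chunk;
  }
  return text;
}
```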
ImageModel
This class describes the interfaces provided by AI image generation model classes.
generateImage()
Calls the large model to generate images.
Usage Example
```js
const imageModel = ai.createImageModel("hunyuan-image");
const res = await imageModel.generateImage({
  model: "hunyuan-image",
  prompt: "A cute cat playing on the grass",
});
console.log(res.data[0].url); // Print generated image URL
```
Type Declaration
```ts
function generateImage(input: HunyuanGenerateImageInput): Promise<HunyuanGenerateImageOutput>;
```
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| input | Yes | HunyuanGenerateImageInput | Image generation parameters, see HunyuanGenerateImageInput for details |
Return Value
Promise<HunyuanGenerateImageOutput>
| Property | Type | Description |
|---|---|---|
| id | string | The id of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | URL of the generated image, valid for 24 hours |
| data[n].revised_prompt | string | Revised text from the original prompt (if revise is false, it's the original prompt) |
Type Definitions
BaseChatModelInput
```ts
interface BaseChatModelInput {
  model: string;
  messages: Array<ChatModelMessage>;
  temperature?: number;
  topP?: number;
  tools?: Array<FunctionTool>;
  toolChoice?: "none" | "auto" | "custom";
  maxSteps?: number;
  onStepFinish?: (prop: IOnStepFinish) => unknown;
}
```
| BaseChatModelInput Property | Type | Description |
|---|---|---|
| model | string | Model name. |
| messages | Array<ChatModelMessage> | Message list. |
| temperature | number | Sampling temperature, controls output randomness. |
| topP | number | Nucleus sampling: the model considers only the tokens comprising the top `top_p` probability mass. |
| tools | Array<FunctionTool> | List of tools available to the model. |
| toolChoice | string | Specifies how the model selects tools. |
| maxSteps | number | Maximum number of requests to the model. |
| onStepFinish | (prop: IOnStepFinish) => unknown | Callback function triggered when a request to the model completes. |
ChatModelMessage
```ts
type ChatModelMessage =
  | UserMessage
  | SystemMessage
  | AssistantMessage
  | ToolMessage;
```
UserMessage
```ts
type UserMessage = {
  role: "user";
  content: string;
};
```
SystemMessage
```ts
type SystemMessage = {
  role: "system";
  content: string;
};
```
AssistantMessage
```ts
type AssistantMessage = {
  role: "assistant";
  content?: string;
  tool_calls?: Array<ToolCall>;
};
```
ToolMessage
```ts
type ToolMessage = {
  role: "tool";
  tool_call_id: string;
  content: string;
};
```
ToolCall
```ts
export type ToolCall = {
  id: string;
  type: string;
  function: { name: string; arguments: string };
};
```
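Taken together, one tool-call round trip appears in `messages` as an assistant message carrying `tool_calls`, followed by a `tool` message whose `tool_call_id` matches it. A sketch with made-up ids and contents:

```javascript
// Illustrative message list for one tool-call round trip; the ids and
// contents are invented. The shapes follow the message types defined above.
const messages = [
  { role: "user", content: "Please tell me the weather in Beijing" },
  {
    role: "assistant",
    tool_calls: [
      {
        id: "call_0",
        type: "function",
        function: { name: "get_weather", arguments: '{"city":"Beijing"}' },
      },
    ],
  },
  // tool_call_id must match the id of the assistant's tool call.
  { role: "tool", tool_call_id: "call_0", content: "Clear" },
  { role: "assistant", content: "The weather in Beijing is clear." },
];
```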
FunctionTool
Tool definition type.
```ts
type FunctionTool = {
  name: string;
  description: string;
  fn: CallableFunction;
  parameters: object;
};
```
| FunctionTool Property | Type | Description |
|---|---|---|
| name | string | Tool name. |
| description | string | Tool description. A clear description helps the model understand the tool's purpose. |
| fn | CallableFunction | Tool execution function. When the AI SDK parses the model's response as requiring a tool call, this function is invoked and the result is returned to the model. |
| parameters | object | Input parameters for the tool execution function. Parameters must be defined using JSON Schema format. |
IOnStepFinish
Input type for the callback function triggered after the model responds.
```ts
interface IOnStepFinish {
  messages: Array<ChatModelMessage>;
  text?: string;
  toolCall?: ToolCall;
  toolResult?: unknown;
  finishReason?: string;
  stepUsage?: Usage;
  totalUsage?: Usage;
}
```
| IOnStepFinish Property | Type | Description |
|---|---|---|
| messages | Array<ChatModelMessage> | Complete message list up to the current step. |
| text | string | Text from the current response. |
| toolCall | ToolCall | Tool called in the current response. |
| toolResult | unknown | Result of the corresponding tool call. |
| finishReason | string | Reason for the model to stop inference. |
| stepUsage | Usage | Tokens spent in the current step. |
| totalUsage | Usage | Total tokens spent up to the current step. |
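For multi-step calls, `totalUsage` is the field-wise sum of each step's `stepUsage`. A small sketch of the relationship (the `addUsage` helper is illustrative, not an SDK export):

```javascript
// Illustrative helper: totalUsage is the field-wise sum of each step's usage.
function addUsage(a, b) {
  return {
    completion_tokens: a.completion_tokens + b.completion_tokens,
    prompt_tokens: a.prompt_tokens + b.prompt_tokens,
    total_tokens: a.total_tokens + b.total_tokens,
  };
}

// Two example step usages, e.g. a tool-call step followed by a final answer.
const steps = [
  { completion_tokens: 9, prompt_tokens: 14, total_tokens: 23 },
  { completion_tokens: 33, prompt_tokens: 3, total_tokens: 36 },
];
const totalUsage = steps.reduce(addUsage, {
  completion_tokens: 0,
  prompt_tokens: 0,
  total_tokens: 0,
});
```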
Usage
```ts
type Usage = {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
};
```
HunyuanGenerateImageInput
Hunyuan image generation input parameters.
```ts
interface HunyuanGenerateImageInput {
  model: 'hunyuan-image' | (string & {});
  /** Text description used to generate the image */
  prompt: string;
  /** Model version; supports v1.8.1 and v1.9, default v1.8.1 */
  version?: 'v1.8.1' | 'v1.9' | (string & {});
  /** Image size, default "1024x1024" */
  size?: string;
  /** Negative prompt; supported only in v1.9 */
  negative_prompt?: string;
  /** Style preset; supported only in v1.9 */
  style?: 'Ancient 2D Style' | 'Urban 2D Style' | 'Suspense Style' | 'Campus Style' | 'Urban Fantasy Style' | (string & {});
  /** When true, rewrites the prompt; default is true */
  revise?: boolean;
  /** Number of images to generate, default is 1 */
  n?: number;
  /** Custom business watermark content, limited to 16 characters */
  footnote?: string;
  /** Generation seed, range [1, 4294967295] */
  seed?: number;
}
```
| HunyuanGenerateImageInput Property | Type | Description |
|---|---|---|
| model | string | Model name, e.g., hunyuan-image |
| prompt | string | Text description used to generate the image |
| version | string | Model version, supports v1.8.1 and v1.9, default version v1.8.1 |
| size | string | Image size, default "1024x1024" |
| negative_prompt | string | Negative prompt; supported only in v1.9 |
| style | string | Style preset; supported only in v1.9: Ancient 2D Style, Urban 2D Style, Suspense Style, Campus Style, Urban Fantasy Style |
| revise | boolean | When true, rewrites the prompt, default is true |
| n | number | Number of images to generate, default is 1 |
| footnote | string | Custom business watermark content, limited to 16 characters |
| seed | number | Generation seed, range [1, 4294967295] |
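The documented constraints (footnote length, seed range) can be checked client-side before sending a request. The `validateImageInput` helper below is a hypothetical pre-flight check, not an SDK function:

```javascript
// Hypothetical pre-flight check for the documented input constraints;
// this helper is not part of the SDK.
function validateImageInput(input) {
  const errors = [];
  if (!input.prompt) errors.push("prompt is required");
  if (input.footnote !== undefined && input.footnote.length > 16) {
    errors.push("footnote is limited to 16 characters");
  }
  if (input.seed !== undefined && (input.seed < 1 || input.seed > 4294967295)) {
    errors.push("seed must be in [1, 4294967295]");
  }
  return errors;
}
```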
HunyuanGenerateImageOutput
Hunyuan image generation output.
```ts
interface HunyuanGenerateImageOutput {
  /** The id of this request */
  id: string;
  /** Unix timestamp */
  created: number;
  /** Returned image generation content */
  data: Array<{
    /** URL of the generated image, valid for 24 hours */
    url: string;
    /** Revised text from the original prompt. If revise is false, it's the original prompt */
    revised_prompt?: string;
  }>;
}
```
| HunyuanGenerateImageOutput Property | Type | Description |
|---|---|---|
| id | string | The id of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | URL of the generated image, valid for 24 hours |
| data[n].revised_prompt | string | Revised text from the original prompt. If revise is false, it's the original prompt |