
AI

Provides CloudBase AI integration capabilities for quickly connecting to large language models and Agents.

Initialization

After initializing the Node.js SDK, obtain the AI instance through .ai().

import tcb from '@cloudbase/node-sdk'
const app = tcb.init({ env: 'xxx' })
const ai = app.ai()

AI

A class for creating AI models.

createModel()

Creates a specified AI text-to-text model.

Usage Example

const model = ai.createModel("hunyuan-exp");

Type Declaration

function createModel(model: string): ChatModel;

Returns a model instance that implements the ChatModel abstract class, which provides AI text generation capabilities.

createImageModel()

Creates a specified image generation model.

Usage Example

const imageModel = ai.createImageModel("hunyuan-image");

Type Declaration

function createImageModel(provider: string): ImageModel;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| provider | Yes | string | Model provider name, e.g., "hunyuan-image" |

Return Value

Returns an ImageModel instance that provides AI image generation capabilities.

registerFunctionTool()

Registers a function tool. When calling a large language model, you can inform the model about available function tools. When the model's response is parsed as a tool call, the corresponding function tool will be automatically invoked.

Usage Example

// Omit AI sdk initialization...

// 1. Define a weather tool, see FunctionTool type for details
const getWeatherTool = {
  name: "get_weather",
  description: "Returns weather information for a city. Example call: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `The weather in ${city} is: Clear and crisp autumn day!!!`, // Define the tool execution here
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "The city to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. When sending a message to the model, inform it about the available weather tool
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool], // Here we pass in the weather tool
  messages: [
    {
      role: "user",
      content: "Please tell me the weather in Beijing",
    },
  ],
});

console.log(result.text);

Type Declaration

function registerFunctionTool(functionTool: FunctionTool): void;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| functionTool | Yes | FunctionTool | See FunctionTool for details |

Return Value

undefined

ChatModel

This abstract class describes the interfaces provided by AI text generation model classes.

generateText()

Calls the large language model to generate text.

Usage Example

const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
console.log(res.text); // Print generated text

Type Declaration

function generateText(data: BaseChatModelInput): Promise<{
  rawResponses: Array<unknown>;
  text: string;
  messages: Array<ChatModelMessage>;
  usage: Usage;
  error?: unknown;
}>;

Parameters

| Parameter | Required | Type | Example | Description |
| --- | --- | --- | --- | --- |
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | BaseChatModelInput is only a basic input definition; in practice, each large language model has its own unique parameters. Following the official documentation of the model in use, developers can pass additional parameters not defined in this type to fully leverage the model's capabilities. Such parameters are passed through to the model API without further processing by the SDK. |

Return Value

| Property | Type | Example | Description |
| --- | --- | --- | --- |
| text | string | "Li Bai was a Tang Dynasty poet." | Text generated by the large language model. |
| rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, how can I help you?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | Complete response from the model, containing more detailed data such as message creation time. Since responses vary across different models, use according to your needs. |
| messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! Nice to chat with you. How can I help you?'}] | Complete message list for this call. |
| usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed in this call. |
| error | unknown | | Errors that occurred during the call. |

streamText()

Calls the large language model to generate text in streaming mode. In streaming mode, generated text and other response data are returned via SSE. This interface's return value provides different levels of encapsulation for SSE, allowing developers to obtain text streams and complete data streams based on their needs.

Usage Example

const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});

for await (let str of res.textStream) {
  console.log(str); // Print generated text
}
for await (let data of res.dataStream) {
  console.log(data); // Print complete data for each response
}

Type Declaration

function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;

Parameters

| Parameter | Required | Type | Example | Description |
| --- | --- | --- | --- | --- |
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | BaseChatModelInput is only a basic input definition; in practice, each large language model has its own unique parameters. Following the official documentation of the model in use, developers can pass additional parameters not defined in this type to fully leverage the model's capabilities. Such parameters are passed through to the model API without further processing by the SDK. |

Return Value

StreamTextResult

| Property | Type | Description |
| --- | --- | --- |
| textStream | ReadableStream<string> | Text generated by the model, returned as a stream. Refer to the usage example to get incremental generated text. |
| dataStream | ReadableStream<DataChunk> | Response data from the model, returned as a stream. Refer to the usage example to get incremental data. Since responses vary across different models, use according to your needs. |
| messages | Promise<ChatModelMessage[]> | Complete message list for this call. |
| usage | Promise<Usage> | Tokens consumed in this call. |
| error | unknown | Errors that occurred in this call. |

DataChunk

| Property | Type | Description |
| --- | --- | --- |
| choices | Array<object> | List of response choices for this chunk. |
| choices[n].finish_reason | string | The reason the model stopped inference. |
| choices[n].delta | ChatModelMessage | The incremental message for this chunk. |
| usage | Usage | Tokens consumed in this request. |
| rawResponse | unknown | Raw response from the model. |

Example

const hy = ai.createModel("hunyuan-exp");
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "What is 1+1" }],
});

// Text stream
for await (let str of res.textStream) {
  console.log(str);
}
// 1
// plus
// 1
// equals
// 2
// .

// Data stream
for await (let data of res.dataStream) {
  console.log(data);
}

// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}

ImageModel

This class describes the interfaces provided by AI image generation model classes.

generateImage()

Calls the large model to generate images.

Usage Example

const imageModel = ai.createImageModel("hunyuan-image");
const res = await imageModel.generateImage({
  model: "hunyuan-image",
  prompt: "A cute cat playing on the grass",
});
console.log(res.data[0].url); // Print generated image URL

Type Declaration

function generateImage(input: HunyuanGenerateImageInput): Promise<HunyuanGenerateImageOutput>;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| input | Yes | HunyuanGenerateImageInput | Image generation parameters, see HunyuanGenerateImageInput for details |

Return Value

Promise<HunyuanGenerateImageOutput>

| Property | Type | Description |
| --- | --- | --- |
| id | string | The id of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | URL of the generated image, valid for 24 hours |
| data[n].revised_prompt | string | Revised text from the original prompt (if revise is false, it's the original prompt) |

Type Definitions

BaseChatModelInput

interface BaseChatModelInput {
  model: string;
  messages: Array<ChatModelMessage>;
  temperature?: number;
  topP?: number;
  tools?: Array<FunctionTool>;
  toolChoice?: "none" | "auto" | "custom";
  maxSteps?: number;
  onStepFinish?: (prop: IOnStepFinish) => unknown;
}
| Property | Type | Description |
| --- | --- | --- |
| model | string | Model name. |
| messages | Array<ChatModelMessage> | Message list. |
| temperature | number | Sampling temperature, controls output randomness. |
| topP | number | Nucleus sampling: the model considers only tokens within the top_p probability mass. |
| tools | Array<FunctionTool> | List of tools available to the model. |
| toolChoice | string | Specifies how the model selects tools. |
| maxSteps | number | Maximum number of requests to the model. |
| onStepFinish | (prop: IOnStepFinish) => unknown | Callback function triggered when a request to the model completes. |
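To make the optional fields concrete, here is a sketch of a BaseChatModelInput value. The parameter values are illustrative, not recommendations, and the extra field name below is hypothetical:

```javascript
// A BaseChatModelInput sketch exercising the optional fields.
const input = {
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.7, // lower values make output more deterministic
  topP: 0.9,        // nucleus-sampling probability mass cutoff
  maxSteps: 3,      // cap on model round-trips, e.g. in tool-call loops
  onStepFinish: (step) => console.log(step.finishReason),
};

// Fields not defined in BaseChatModelInput are passed through to the
// model API untouched; "custom_flag" here is a hypothetical example —
// consult the target model's documentation for its real parameters.
const extended = { ...input, custom_flag: true };
console.log(extended.temperature); // 0.7
```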

ChatModelMessage

type ChatModelMessage =
  | UserMessage
  | SystemMessage
  | AssistantMessage
  | ToolMessage;

UserMessage

type UserMessage = {
  role: "user";
  content: string;
};

SystemMessage

type SystemMessage = {
  role: "system";
  content: string;
};

AssistantMessage

type AssistantMessage = {
  role: "assistant";
  content?: string;
  tool_calls?: Array<ToolCall>;
};

ToolMessage

type ToolMessage = {
  role: "tool";
  tool_call_id: string;
  content: string;
};

ToolCall

export type ToolCall = {
  id: string;
  type: string;
  function: { name: string; arguments: string };
};

FunctionTool

Tool definition type.

type FunctionTool = {
  name: string;
  description: string;
  fn: CallableFunction;
  parameters: object;
};
| Property | Type | Description |
| --- | --- | --- |
| name | string | Tool name. |
| description | string | Tool description. A clear description helps the model understand the tool's purpose. |
| fn | CallableFunction | Tool execution function. When the AI SDK parses the model's response as requiring a tool call, this function is invoked and the result is returned to the model. |
| parameters | object | Input parameters for the tool execution function. Parameters must be defined using JSON Schema format. |

IOnStepFinish

Input type for the callback function triggered after the model responds.

interface IOnStepFinish {
  messages: Array<ChatModelMessage>;
  text?: string;
  toolCall?: ToolCall;
  toolResult?: unknown;
  finishReason?: string;
  stepUsage?: Usage;
  totalUsage?: Usage;
}
| Property | Type | Description |
| --- | --- | --- |
| messages | Array<ChatModelMessage> | Complete message list up to the current step. |
| text | string | Text from the current response. |
| toolCall | ToolCall | Tool called in the current response. |
| toolResult | unknown | Result of the corresponding tool call. |
| finishReason | string | Reason for the model to stop inference. |
| stepUsage | Usage | Tokens spent in the current step. |
| totalUsage | Usage | Total tokens spent up to the current step. |
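A typical use of the callback is per-step logging and token accounting across a multi-step (tool-calling) run. A sketch, with the invocation simulated using illustrative values of the shape described above:

```javascript
// Accumulate token usage across steps via onStepFinish.
let totalTokens = 0;

function onStepFinish(step) {
  if (step.toolCall) {
    console.log(`step called tool: ${step.toolCall.function.name}`);
  }
  if (step.stepUsage) {
    totalTokens += step.stepUsage.total_tokens;
  }
}

// Simulated invocation; in real use, pass onStepFinish in the
// generateText input and the SDK calls it after each step.
onStepFinish({
  messages: [],
  toolCall: { id: "call_1", type: "function", function: { name: "get_weather", arguments: "{}" } },
  stepUsage: { completion_tokens: 9, prompt_tokens: 14, total_tokens: 23 },
});
console.log(totalTokens); // 23
```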

Usage

type Usage = {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
};

HunyuanGenerateImageInput

Hunyuan image generation input parameters.

interface HunyuanGenerateImageInput {
  model: 'hunyuan-image' | (string & {});
  /** Text description used to generate the image */
  prompt: string;
  /** Model version, supports v1.8.1 and v1.9, default version v1.8.1 */
  version?: 'v1.8.1' | 'v1.9' | (string & {});
  /** Image size, default "1024x1024" */
  size?: string;
  /** Only v1.9 supported, negative prompt */
  negative_prompt?: string;
  /** Only v1.9 supported, specify style */
  style?: 'Ancient 2D Style' | 'Urban 2D Style' | 'Suspense Style' | 'Campus Style' | 'Urban Fantasy Style' | (string & {});
  /** When true, rewrites the prompt, default is true */
  revise?: boolean;
  /** Number of images to generate, default is 1 */
  n?: number;
  /** Custom business watermark content, limited to 16 characters */
  footnote?: string;
  /** Generation seed, range [1, 4294967295] */
  seed?: number;
}
| Property | Type | Description |
| --- | --- | --- |
| model | string | Model name, e.g., hunyuan-image |
| prompt | string | Text description used to generate the image |
| version | string | Model version, supports v1.8.1 and v1.9, default version v1.8.1 |
| size | string | Image size, default "1024x1024" |
| negative_prompt | string | Only v1.9 supported, negative prompt |
| style | string | Only v1.9 supported, specify style: Ancient 2D Style, Urban 2D Style, Suspense Style, Campus Style, Urban Fantasy Style |
| revise | boolean | When true, rewrites the prompt, default is true |
| n | number | Number of images to generate, default is 1 |
| footnote | string | Custom business watermark content, limited to 16 characters |
| seed | number | Generation seed, range [1, 4294967295] |
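Putting the optional fields together, here is a sketch of an input object using the v1.9-only options; the prompt text and style choice are illustrative:

```javascript
// A HunyuanGenerateImageInput sketch exercising the v1.9-only options.
const input = {
  model: "hunyuan-image",
  version: "v1.9",
  prompt: "A cute cat playing on the grass",
  negative_prompt: "blurry, low quality", // v1.9 only
  style: "Campus Style",                  // v1.9 only
  size: "1024x1024",
  n: 2,          // generate two images
  revise: false, // keep the prompt as written; revised_prompt echoes it
  seed: 42,      // fixed seed for reproducibility, range [1, 4294967295]
};
```

This object would be passed to generateImage() as shown in the usage example above.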

HunyuanGenerateImageOutput

Hunyuan image generation output.

interface HunyuanGenerateImageOutput {
  /** The id of this request */
  id: string;
  /** Unix timestamp */
  created: number;
  /** Returned image generation content */
  data: Array<{
    /** URL of the generated image, valid for 24 hours */
    url: string;
    /** Revised text from the original prompt. If revise is false, it's the original prompt */
    revised_prompt?: string;
  }>;
}
| Property | Type | Description |
| --- | --- | --- |
| id | string | The id of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | URL of the generated image, valid for 24 hours |
| data[n].revised_prompt | string | Revised text from the original prompt. If revise is false, it's the original prompt |