# AI

Provides CloudBase AI integration capabilities for quickly connecting to large language models and Agents.
## Initialization

After initializing the Node SDK, obtain the AI instance via `.ai()`:

```js
import tcb from '@cloudbase/node-sdk'

const app = tcb.init({ env: 'xxx' })
const ai = app.ai()
```
## AI

A class for creating AI models.
### createModel()

Creates a specified AI text-to-text model.

#### Usage Example

```js
const model = ai.createModel("cloudbase");
```

#### Type Declaration

```ts
function createModel(model: string): ChatModel;
```

#### Return Value

Returns a model instance implementing the `ChatModel` abstract class, which provides AI text generation capabilities.
### createImageModel()

Creates a specified image generation model.

#### Usage Example

```js
const imageModel = ai.createImageModel("hunyuan-image");
```

#### Type Declaration

```ts
function createImageModel(provider: string): ImageModel;
```

#### Parameters

| Parameter | Required | Type | Description |
|---|---|---|---|
| provider | Yes | string | Model provider name, e.g., "hunyuan-image" |

#### Return Value

Returns an `ImageModel` instance that provides AI image generation capabilities.
### registerFunctionTool()

Registers a function tool. When calling a large language model, you can inform it of the available function tools; when the model's response is parsed as a tool call, the corresponding registered function tool is invoked automatically.
#### Usage Example

```js
// AI SDK initialization omitted...

// 1. Define a weather tool; see the FunctionTool type for details
const getWeatherTool = {
  name: "get_weather",
  description:
    "Returns weather information for a city. Example call: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `The weather in ${city} is: Clear and crisp autumn day!!!`, // The tool's implementation
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "The city to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. When sending a message to the model, inform it about the available weather tool
const model = ai.createModel("cloudbase");
const result = await model.generateText({
  model: "deepseek-v4-flash",
  tools: [getWeatherTool], // Pass in the weather tool here
  messages: [
    {
      role: "user",
      content: "Please tell me the weather in Beijing",
    },
  ],
});

console.log(result.text);
```
#### Type Declaration

```ts
function registerFunctionTool(functionTool: FunctionTool): void;
```
#### Parameters

| Parameter | Required | Type | Description |
|---|---|---|---|
| functionTool | Yes | FunctionTool | See FunctionTool for details |

#### Return Value

`undefined`
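Conceptually, registration amounts to a name-to-tool lookup: when a model reply parses as a tool call, the SDK finds the matching registered tool and runs its `fn` with the parsed arguments. The following is an illustrative, self-contained sketch of that dispatch pattern — a mock registry, not the SDK's internal implementation:

```js
// Mock registry sketch (NOT SDK internals): maps tool names to tool objects.
const registry = new Map();

function registerFunctionTool(tool) {
  registry.set(tool.name, tool);
}

// Given a parsed tool call ({ name, arguments }), run the matching tool's fn.
function dispatchToolCall(call) {
  const tool = registry.get(call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool.fn(call.arguments);
}

registerFunctionTool({
  name: "get_weather",
  fn: ({ city }) => `The weather in ${city} is: Clear!`,
});

console.log(dispatchToolCall({ name: "get_weather", arguments: { city: "Beijing" } }));
// → "The weather in Beijing is: Clear!"
```

This also illustrates why `name` must be unique among registered tools: it is the lookup key that links the model's tool-call response back to your `fn`.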
## ChatModel

This abstract class describes the interfaces provided by AI text generation model classes.

### generateText()

Calls the large language model to generate text.
#### Usage Example

```js
const hy = ai.createModel("cloudbase");
const res = await hy.generateText({
  model: "deepseek-v4-flash",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
console.log(res.text); // Print the generated text
```
#### Type Declaration

```ts
function generateText(data: BaseChatModelInput): Promise<{
  rawResponses: Array<unknown>;
  text: string;
  messages: Array<ChatModelMessage>;
  usage: Usage;
  error?: unknown;
}>;
```
#### Parameters

| Parameter | Required | Type | Example | Description |
|---|---|---|---|---|
| data | Yes | BaseChatModelInput | {model: "deepseek-v4-flash", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | BaseChatModelInput serves as the basic input definition. In practice, each large language model has its own unique input parameters; developers can pass additional parameters not defined in this type, following the official documentation of the model in use, to fully leverage its capabilities. Such parameters are passed through to the model API without additional processing by the SDK. |
#### Return Value

| Property | Type | Example | Description |
|---|---|---|---|
| text | string | "Li Bai was a Tang Dynasty poet." | Text generated by the large language model. |
| rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, how can I help you?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | The model's complete responses, containing more detailed data such as message creation time. Since responses vary across models, use them as needed. |
| messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! Nice to chat with you. How can I help you? Whether it\'s about life, work, study, or other topics, I\'ll do my best to assist you.'}] | The complete message list for this call. |
| usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed in this call. |
| error | unknown | | Errors that occurred during the call. |
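Since `error` is optional and `usage` reports token consumption, callers can branch on the documented result shape. A minimal sketch of defensive handling — the result object here is mocked to the shape shown in the table above, not produced by a real SDK call:

```js
// Sketch: defensive handling of a generateText()-style result.
// Assumes the documented shape: { text, usage, error? }.
function summarize(res) {
  if (res.error) {
    return `generation failed: ${String(res.error)}`;
  }
  return `${res.text} (${res.usage.total_tokens} tokens)`;
}

// Mocked result matching the documented return-value table.
const mockRes = {
  text: "Li Bai was a Tang Dynasty poet.",
  usage: { completion_tokens: 33, prompt_tokens: 3, total_tokens: 36 },
};

console.log(summarize(mockRes)); // → "Li Bai was a Tang Dynasty poet. (36 tokens)"
```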
### streamText()

Calls the large language model to generate text in streaming mode. In streaming mode, the generated text and other response data are returned via SSE (server-sent events). The return value wraps the SSE stream at different levels, so developers can consume either a plain text stream or the complete data stream as needed.
#### Usage Example

```js
const hy = ai.createModel("cloudbase");
const res = await hy.streamText({
  model: "deepseek-v4-flash",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});

for await (let str of res.textStream) {
  console.log(str); // Print the generated text
}

for await (let data of res.dataStream) {
  console.log(data); // Print the complete data of each response
}
```
#### Type Declaration

```ts
function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;
```
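Because `textStream` is an async iterable of text chunks, accumulating the full reply is a plain `for await` loop. A self-contained sketch — the stream here is mocked with an async generator, since a real `streamText()` call requires a live model:

```js
// Mock of a textStream: an async generator yielding text chunks,
// standing in for the SSE-backed stream returned by streamText().
async function* mockTextStream() {
  yield "Li Bai ";
  yield "was a Tang Dynasty ";
  yield "poet.";
}

// Concatenate every chunk of an async-iterable text stream into one string.
async function collect(textStream) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
  }
  return full;
}

collect(mockTextStream()).then((s) => console.log(s));
// → "Li Bai was a Tang Dynasty poet."
```

The same loop works unchanged on `res.textStream` from a real `streamText()` call, while chunk-by-chunk rendering (e.g. typewriter UIs) would act inside the loop instead of accumulating.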