AI

Cloud Development provides AI access capabilities, enabling quick access to large models and Agents.

Initialization

After initializing the js-sdk and signing in, obtain an AI instance via .ai().

import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "your-env-id"
});
const auth = app.auth();
await auth.signInAnonymously();

const ai = app.ai();

app.ai

After initialization, you can use the ai method mounted on the cloudbase instance to obtain an AI instance, which is then used to create models.

Usage Example

const app = cloudbase.init({ env: "your-env" });
const ai = app.ai();

Type Declaration

function ai(): AI;

Return Value

AI

Returns the newly created AI instance.

AI

A class for creating AI models.

createModel()

Create the specified AI model.

Usage Example

const model = ai.createModel("hunyuan-exp");

Type Declaration

function createModel(model: string): ChatModel;

Returns a model instance that implements the ChatModel abstract class, which provides AI text generation capabilities.

bot

Provides an instance of the Bot class that includes a series of methods for interacting with the Agent. For details, refer to the Bot class documentation.

Usage Example

const agentList = await ai.bot.list({ pageNumber: 1, pageSize: 10 });

registerFunctionTool()

Register function tools. When invoking a large model, you can inform it of the available function tools. When the model's response is parsed as a tool invocation, the corresponding function tool is automatically invoked.

Usage Example

// Omit the initialization operations of the AI sdk...

// 1. Define the weather retrieval tool, see the FunctionTool type
const getWeatherTool = {
  name: "get_weather",
  description:
    "Returns the weather information for a city. Call example: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `The weather in ${city} is: crisp and clear autumn weather!!!`, // Define the tool's execution content here
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "City to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. While sending a message to the Large Model, inform it that a weather retrieval tool is available
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool], // Here we pass in the weather retrieval tool
  messages: [
    {
      role: "user",
      content: "Please tell me the weather conditions in Beijing",
    },
  ],
});

console.log(result.text);

Type Declaration

function registerFunctionTool(functionTool: FunctionTool);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
functionTool | Required | FunctionTool | See FunctionTool

Return Value

undefined

ChatModel

This abstract class describes the interface provided by the AI text generation model class.

generateText()

Invoke large models to generate text.

Usage Example

const hy = ai.createModel("hunyuan-exp"); // Create a model
const res = await hy.generateText({
model: "hunyuan-lite",
messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
});
console.log(res.text); // Print the generated text

Type Declaration

function generateText(data: BaseChatModelInput): Promise<{
  rawResponses: Array<unknown>;
  text: string;
  messages: Array<ChatModelMessage>;
  usage: Usage;
  error?: unknown;
}>;

Parameters

Parameter Name | Required | Type | Example | Description
--- | --- | --- | --- | ---
data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }]} | The parameter type is defined as BaseChatModelInput, serving as the basic input parameter definition. In practice, different large models have their own unique input parameters. Developers can pass additional parameters not defined in this type according to the official documentation of the actual large model used, fully leveraging the capabilities provided by large models. Other parameters will be passed through to the large model interface, and the SDK does not perform additional processing on them.

Return Value

Property Name | Type | Example | Description
--- | --- | --- | ---
text | string | "Li Bai was a poet of the Tang Dynasty." | Text generated by the large model.
rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, is there anything I can help you with?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | The complete return value from the large model, containing more detailed data such as message creation time. Since return values vary among different large models, please use them according to the actual situation.
messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! I am glad to communicate with you. May I ask what I can help you with? Whether it is about life, work, study, or any other aspect, I will do my best to assist you.'}] | The complete message list for this call.
usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed by this call.
error | unknown | | Error generated during invocation.
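
Since optional tuning fields from BaseChatModelInput (such as temperature) are passed through, and the result exposes usage and error, a typical call can inspect both. A minimal sketch, assuming the SDK has been initialized as shown above:

const hy = ai.createModel("hunyuan-exp");
const res = await hy.generateText({
  model: "hunyuan-lite",
  temperature: 0.7, // optional sampling temperature, see BaseChatModelInput
  messages: [{ role: "user", content: "Summarize the Tang Dynasty in one sentence." }],
});

if (res.error) {
  console.error("Generation failed:", res.error); // error produced during invocation
} else {
  console.log(res.text);
  console.log("Tokens used:", res.usage.total_tokens); // Usage bookkeeping
}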

streamText()

Generate text by invoking a large model in streaming mode. During a streaming call, the generated text and other response data are returned via SSE. The return value of this interface wraps the SSE stream at different levels, so developers can consume either the plain text stream or the complete data stream as needed.

Usage Example

const hy = ai.createModel("hunyuan-exp"); // Create a model
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
});

for await (let str of res.textStream) {
  console.log(str); // Print the generated text
}
for await (let data of res.dataStream) {
  console.log(data); // Print the complete data returned each time
}

Type Declaration

function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;

Parameters

Parameter Name | Required | Type | Example | Description
--- | --- | --- | --- | ---
data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }]} | The parameter type is defined as BaseChatModelInput, serving as the basic input parameter definition. In practice, different large models have their own unique input parameters. Developers can pass additional parameters not defined in this type according to the official documentation of the actual large model used, fully leveraging the capabilities provided by large models. Other parameters will be passed through to the large model interface, and the SDK does not perform additional processing on them.

Return Value

StreamTextResult

Property Name | Type | Description
--- | --- | ---
textStream | ReadableStream<string> | Large model-generated text returned in streaming mode. Refer to the usage example to obtain incrementally generated text.
dataStream | ReadableStream<DataChunk> | Large model response data returned in streaming mode. Refer to the usage example to obtain incrementally generated data. As response values vary across different large models, please use them according to the actual situation.
messages | Promise<ChatModelMessage[]> | The complete message list for this call.
usage | Promise<Usage> | Tokens consumed by this call.
error | unknown | Error generated during this call.

DataChunk

Property Name | Type | Description
--- | --- | ---
choices | Array<object> | Generation choices returned by the model.
choices[n].finish_reason | string | Reason for model inference termination.
choices[n].delta | ChatModelMessage | The message for this request.
usage | Usage | Tokens consumed by this request.
rawResponse | unknown | Raw response returned by the large model.

Example

const hy = ai.createModel("hunyuan-exp");
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "What is the result of 1+1?" }],
});

// Text stream
for await (let str of res.textStream) {
  console.log(str);
}
// 1
// Add
// 1
// Result
// Is
// 2
// .

// Data stream
for await (let str of res.dataStream) {
  console.log(str);
}

// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}

Bot

A class for interacting with Agents.

get()

Get information about a specific Agent.

Usage Example

const res = await ai.bot.get({ botId: "botId-xxx" });
console.log(res);

Type Declaration

function get(props: { botId: string });

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | The id of the Agent for which to retrieve information

Return Value

Property Name | Type | Example | Description
--- | --- | --- | ---
botId | string | "bot-27973647" | Agent ID
name | string | "Example Agent Name" | Agent name
introduction | string | | Agent introduction
welcomeMessage | string | | Agent welcome message
avatar | string | | Agent avatar link
background | string | | Agent chat background image link
isNeedRecommend | boolean | | Whether to recommend questions after the Agent answers
type | string | | Agent type

list()

Retrieve information for multiple Agents in batch.

Usage Example

await ai.bot.list({
  pageNumber: 1,
  pageSize: 10,
  name: "",
  enable: true,
  information: "",
  introduction: "",
});

Type Declaration

function list(props: {
  name: string;
  introduction: string;
  information: string;
  enable: boolean;
  pageSize: number;
  pageNumber: number;
});

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.pageNumber | Yes | number | Page index
props.pageSize | Yes | number | Page size
props.enable | Yes | boolean | Whether the Agent is enabled
props.name | Yes | string | Agent name, used for fuzzy query
props.information | Yes | string | Agent information, used for fuzzy query
props.introduction | Yes | string | Agent introduction, used for fuzzy query

Return Value

Property Name | Type | Example | Description
--- | --- | --- | ---
total | number | --- | Total Agents
botList | Array<object> | | Agent list
botList[n].botId | string | "bot-27973647" | Agent ID
botList[n].name | string | "Xindaya Translation" | Agent name
botList[n].introduction | string | | Agent introduction
botList[n].welcomeMessage | string | | Agent welcome message
botList[n].avatar | string | | Agent avatar link
botList[n].background | string | | Agent chat background image link
botList[n].isNeedRecommend | boolean | | Whether to recommend questions after the Agent answers
botList[n].type | string | | Agent type

sendMessage()

Converse with the Agent. The response is returned via SSE. The return value of this interface wraps the SSE stream at different levels, so developers can consume either the plain text stream or the complete data stream as needed.

Usage Example

const res = await ai.bot.sendMessage({
  botId: "botId-xxx",
  history: [{ content: "You are Li Bai.", role: "user" }],
  msg: "Hello",
});
for await (let str of res.textStream) {
  console.log(str);
}
for await (let data of res.dataStream) {
  console.log(data);
}

Type Declaration

function sendMessage(props: {
  botId: string;
  msg: string;
  history: Array<{
    role: string;
    content: string;
  }>;
}): Promise<StreamResult>;

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent id
props.msg | Yes | string | Message to send in this dialog
props.history | Yes | Array<object> | Chat history before this conversation
props.history[n].role | Yes | string | The sender role of this chat message
props.history[n].content | Yes | string | The content of this chat message

Return Value

Promise<StreamResult>

StreamResult

Property Name | Type | Description
--- | --- | ---
textStream | AsyncIterable<string> | Agent-generated text returned in streaming mode. Refer to the usage example to obtain incrementally generated text.
dataStream | AsyncIterable<AgentStreamChunk> | Agent response data returned in streaming mode. Refer to the usage example to obtain incrementally generated data.

AgentStreamChunk

Property Name | Type | Description
--- | --- | ---
created | number | Conversation timestamp
record_id | string | Conversation record ID
model | string | Large model type
version | string | Large model version
type | string | Response type: text: main answer content, thinking: thinking process, search: search results, knowledge: knowledge base
role | string | Dialogue role, always 'assistant' in responses
content | string | Conversation content
finish_reason | string | Conversation end flag: 'continue' indicates the conversation is ongoing, 'stop' indicates the conversation has ended
reasoning_content | string | Deep reasoning content (only non-empty for deepseek-r1)
usage | object | Token usage
usage.prompt_tokens | number | Number of tokens in the prompt; remains unchanged across multiple responses
usage.completion_tokens | number | Total number of tokens in the completion. In streaming responses, it is the cumulative total of completion tokens so far and continues to accumulate across responses
usage.total_tokens | number | Sum of prompt_tokens and completion_tokens
knowledge_base | string[] | Knowledge bases used in the conversation
search_info | object | Search result information; requires web search to be enabled
search_info.search_results | object[] | Search citation information
search_info.search_results[n].index | string | Citation index
search_info.search_results[n].title | string | Search citation title
search_info.search_results[n].url | string | Citation URL
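
The chunks on dataStream can be filtered by type to separate the main answer from the thinking process and to read search citations. A sketch built on the fields documented above, assuming an Agent with web search enabled and that not every field is present on every chunk:

const res = await ai.bot.sendMessage({
  botId: "botId-xxx",
  history: [],
  msg: "What happened in tech news today?",
});

let answer = "";
for await (const chunk of res.dataStream) {
  if (chunk.type === "thinking") {
    console.log("[thinking]", chunk.content);
  } else if (chunk.type === "text") {
    answer += chunk.content; // accumulate the main answer content
  }
  if (chunk.search_info?.search_results) {
    chunk.search_info.search_results.forEach((r) =>
      console.log(`[${r.index}] ${r.title}: ${r.url}`) // search citations
    );
  }
  if (chunk.finish_reason === "stop") {
    console.log("Total tokens:", chunk.usage.total_tokens);
  }
}
console.log(answer);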

getChatRecords()

Get chat history.

Usage Example

await ai.bot.getChatRecords({
botId: "botId-xxx",
pageNumber: 1,
pageSize: 10,
sort: "asc",
});

Type Declaration

function getChatRecords(props: {
  botId: string;
  sort: string;
  pageSize: number;
  pageNumber: number;
});

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent id
props.sort | Yes | string | Sort method
props.pageSize | Yes | number | Page size
props.pageNumber | Yes | number | Page index

Return Value

Property Name | Type | Description
--- | --- | ---
total | number | Total conversations
recordList | Array<object> | Conversation record list
recordList[n].botId | string | Agent ID
recordList[n].recordId | string | Conversation ID, system-generated
recordList[n].role | string | Role in conversation
recordList[n].content | string | Conversation content
recordList[n].conversation | string | User identifier
recordList[n].type | string | Conversation data type
recordList[n].image | string | Conversation-generated image URL
recordList[n].triggerSrc | string | Conversation initiation source
recordList[n].replyTo | string | Replied-to record ID
recordList[n].createTime | string | Conversation time
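
Because each record carries role and content, earlier records can be replayed as the history parameter of sendMessage to continue a conversation. A sketch, assuming the returned records are plain user/assistant turns:

const { recordList } = await ai.bot.getChatRecords({
  botId: "botId-xxx",
  pageNumber: 1,
  pageSize: 10,
  sort: "asc",
});

// Map stored records into the { role, content } shape expected by sendMessage.
const history = recordList.map((record) => ({
  role: record.role,
  content: record.content,
}));

const res = await ai.bot.sendMessage({
  botId: "botId-xxx",
  history,
  msg: "Please continue from where we left off.",
});
for await (const text of res.textStream) {
  console.log(text);
}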

sendFeedback()

Send feedback on a specific chat history.

Usage Example

const res = await ai.bot.sendFeedback({
  userFeedback: {
    botId: "botId-xxx",
    recordId: "recordId-xxx",
    comment: "Excellent",
    rating: 5,
    tags: ["Graceful"],
    aiAnswer: "Fallen petals scatter in profusion",
    input: "Give me an idiom",
    type: "upvote",
  },
  botId: "botId-xxx",
});

Type Declaration

function sendFeedback(props: { userFeedback: IUserFeedback; botId: string });

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.userFeedback | Yes | IUserFeedback | User feedback. See the IUserFeedback type definition
props.botId | Yes | string | Agent id to be provided for feedback

getFeedback()

Get existing feedback information.

Usage Example

const res = await ai.bot.getFeedback({
  botId: "botId-xxx",
  from: 0,
  to: 0,
  maxRating: 4,
  minRating: 3,
  pageNumber: 1,
  pageSize: 10,
  sender: "user-a",
  senderFilter: "include",
  type: "upvote",
});

Type Declaration

function getFeedback(props: {
  botId: string;
  type: string;
  sender: string;
  senderFilter: string;
  minRating: number;
  maxRating: number;
  from: number;
  to: number;
  pageSize: number;
  pageNumber: number;
});

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent id
props.type | Yes | string | User feedback type: upvote (like) or downvote (dislike)
props.sender | Yes | string | User who created the comment
props.senderFilter | Yes | string | Filter relationship for the comment creator: include: include, exclude: exclude, equal: equal, unequal: not equal, prefix: prefix
props.minRating | Yes | number | Minimum rating
props.maxRating | Yes | number | Maximum rating
props.from | Yes | number | Start timestamp
props.to | Yes | number | End timestamp
props.pageSize | Yes | number | Page size
props.pageNumber | Yes | number | Page index

Return Value

Property Name | Type | Description
--- | --- | ---
feedbackList | object[] | Feedback query results
feedbackList[n].recordId | string | Conversation record ID
feedbackList[n].type | string | User feedback type: upvote (like) or downvote (dislike)
feedbackList[n].botId | string | Agent ID
feedbackList[n].comment | string | User comment
feedbackList[n].rating | number | User rating
feedbackList[n].tags | string[] | Array of user feedback tags
feedbackList[n].input | string | User input question
feedbackList[n].aiAnswer | string | Agent's answer
total | number | Total feedback

uploadFiles()

Upload files from cloud storage to the Agent for document-based chatting.

Usage Example

// Upload files
await ai.bot.uploadFiles({
  botId: "botId-xxx",
  fileList: [
    {
      fileId: "cloud://xxx.docx",
      fileName: "xxx.docx",
      type: "file",
    },
  ],
});

// Document-based chatting
const res = await ai.bot.sendMessage({
  botId: "your-bot-id",
  msg: "What is the content of this file",
  files: ["xxx.docx"], // File fileId array
});

for await (let text of res.textStream) {
  console.log(text);
}

Type Declaration

function uploadFiles(props: {
  botId: string;
  fileList: Array<{
    fileId: string;
    fileName: string;
    type: "file";
  }>;
});

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent id
props.fileList | Yes | Array<object> | File list
props.fileList[n].fileId | Yes | string | Cloud storage file id
props.fileList[n].fileName | Yes | string | File name
props.fileList[n].type | Yes | string | Currently only supports "file"

getRecommendQuestions()

Get recommended questions.

Usage Example

const res = await ai.bot.getRecommendQuestions({
  botId: "botId-xxx",
  history: [{ content: "Who are you?", role: "user" }],
  msg: "Hello",
  agentSetting: "",
  introduction: "",
  name: "",
});

for await (let str of res.textStream) {
  console.log(str);
}

Type Declaration

function getRecommendQuestions(props: {
  botId: string;
  name: string;
  introduction: string;
  agentSetting: string;
  msg: string;
  history: Array<{
    role: string;
    content: string;
  }>;
}): Promise<StreamResult>;

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent id
props.name | Yes | string | Agent name
props.introduction | Yes | string | Agent introduction
props.agentSetting | Yes | string | Agent setting
props.msg | Yes | string | User message
props.history | Yes | Array | Historical conversation information
props.history[n].role | Yes | string | Historical message role
props.history[n].content | Yes | string | Historical message content

Return Value

Promise<StreamResult>

StreamResult

Property Name | Type | Description
--- | --- | ---
textStream | AsyncIterable<string> | Agent-generated text returned in streaming mode. Refer to the usage example to obtain incrementally generated text.
dataStream | AsyncIterable<AgentStreamChunk> | Agent response data returned in streaming mode. Refer to the usage example to obtain incrementally generated data.

AgentStreamChunk

Property Name | Type | Description
--- | --- | ---
created | number | Conversation timestamp
record_id | string | Conversation record ID
model | string | Large model type
version | string | Large model version
type | string | Response type: text: main answer content, thinking: thinking process, search: search results, knowledge: knowledge base
role | string | Dialogue role, always 'assistant' in responses
content | string | Conversation content
finish_reason | string | Conversation end flag: 'continue' indicates the conversation is ongoing, 'stop' indicates the conversation has ended
reasoning_content | string | Deep reasoning content (only non-empty for deepseek-r1)
usage | object | Token usage
usage.prompt_tokens | number | Number of tokens in the prompt; remains unchanged across multiple responses
usage.completion_tokens | number | Total number of tokens in the completion. In streaming responses, it is the cumulative total of completion tokens so far and continues to accumulate across responses
usage.total_tokens | number | Sum of prompt_tokens and completion_tokens
knowledge_base | string[] | Knowledge bases used in the conversation
search_info | object | Search result information; requires web search to be enabled
search_info.search_results | object[] | Search citation information
search_info.search_results[n].index | string | Citation index
search_info.search_results[n].title | string | Search citation title
search_info.search_results[n].url | string | Citation URL

createConversation()

Create a new conversation with the Agent.

Usage Example

const res = await ai.bot.createConversation({
  botId: "botId-xxx",
  title: "My Conversation",
});

Type Declaration

function createConversation(props: IBotCreateConversation);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent ID
props.title | No | string | Conversation title

Return Value

Promise<IConversation>

Related Type: IConversation

getConversation()

Get conversation list.

Usage Example

const res = await ai.bot.getConversation({
  botId: "botId-xxx",
  pageSize: 10,
  pageNumber: 1,
  isDefault: false,
});

Type Declaration

function getConversation(props: IBotGetConversation);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent ID
props.pageSize | No | number | Page size, default 10
props.pageNumber | No | number | Page index, default 1
props.isDefault | No | boolean | Whether to get only the default conversation

deleteConversation()

Delete specified conversation.

Usage Example

await ai.bot.deleteConversation({
  botId: "botId-xxx",
  conversationId: "conv-123",
});

Type Declaration

function deleteConversation(props: IBotDeleteConversation);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent ID
props.conversationId | Yes | string | Conversation ID to be deleted

speechToText()

Speech to text.

Usage Example

const res = await ai.bot.speechToText({
  botId: "botId-xxx",
  engSerViceType: "16k_zh",
  voiceFormat: "mp3",
  url: "https://example.com/audio.mp3",
});

Type Declaration

function speechToText(props: IBotSpeechToText);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent ID
props.engSerViceType | Yes | string | Engine type, e.g. "16k_zh"
props.voiceFormat | Yes | string | Audio format, e.g. "mp3"
props.url | Yes | string | Audio file URL
props.isPreview | No | boolean | Whether preview mode is enabled

textToSpeech()

Text to speech.

Usage Example

const res = await ai.bot.textToSpeech({
  botId: "botId-xxx",
  voiceType: 1,
  text: "Hello, I am an AI assistant.",
});

Type Declaration

function textToSpeech(props: IBotTextToSpeech);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent ID
props.voiceType | Yes | number | Voice type
props.text | Yes | string | Text to be converted
props.isPreview | No | boolean | Whether preview mode is enabled

getTextToSpeechResult()

Get the text-to-speech result.

Usage Example

const res = await ai.bot.getTextToSpeechResult({
  botId: "botId-xxx",
  taskId: "task-123",
});

Type Declaration

function getTextToSpeechResult(props: IBotGetTextToSpeechResult);

Parameters

Parameter Name | Required | Type | Description
--- | --- | --- | ---
props.botId | Yes | string | Agent ID
props.taskId | Yes | string | Task ID
props.isPreview | No | boolean | Whether preview mode is enabled
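
textToSpeech submits a conversion task, and getTextToSpeechResult then fetches its output by task ID. The response shapes are not documented here, so the taskId and result fields in this sketch are assumptions for illustration only; check the actual responses in your environment:

// Sketch: submit a text-to-speech task, then fetch its result.
// NOTE: the shape of `ttsTask` and `result` is an assumption, not a documented contract.
const ttsTask = await ai.bot.textToSpeech({
  botId: "botId-xxx",
  voiceType: 1,
  text: "Hello, I am an AI assistant.",
});

const result = await ai.bot.getTextToSpeechResult({
  botId: "botId-xxx",
  taskId: ttsTask.taskId, // assumed field name on the textToSpeech response
});
console.log(result);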

IBotCreateConversation

interface IBotCreateConversation {
  botId: string;
  title?: string;
}

IBotGetConversation

interface IBotGetConversation {
  botId: string;
  pageSize?: number;
  pageNumber?: number;
  isDefault?: boolean;
}

IBotDeleteConversation

interface IBotDeleteConversation {
  botId: string;
  conversationId: string;
}

IBotSpeechToText

interface IBotSpeechToText {
  botId: string;
  engSerViceType: string;
  voiceFormat: string;
  url: string;
  isPreview?: boolean;
}

IBotTextToSpeech

interface IBotTextToSpeech {
  botId: string;
  voiceType: number;
  text: string;
  isPreview?: boolean;
}

IBotGetTextToSpeechResult

interface IBotGetTextToSpeechResult {
  botId: string;
  taskId: string;
  isPreview?: boolean;
}

BaseChatModelInput

interface BaseChatModelInput {
  model: string;
  messages: Array<ChatModelMessage>;
  temperature?: number;
  topP?: number;
  tools?: Array<FunctionTool>;
  toolChoice?: "none" | "auto" | "custom";
  maxSteps?: number;
  onStepFinish?: (prop: IOnStepFinish) => unknown;
}

Property Name | Type | Description
--- | --- | ---
model | string | Model name.
messages | Array<ChatModelMessage> | Message list.
temperature | number | Sampling temperature, controlling the randomness of the output.
topP | number | Nucleus sampling, where the model considers tokens with cumulative probability mass top_p.
tools | Array<FunctionTool> | List of tools available to the large language model.
toolChoice | string | Specifies how the large language model selects tools.
maxSteps | number | Maximum number of requests to the large language model.
onStepFinish | (prop: IOnStepFinish) => unknown | Callback function triggered when a request to the large language model is completed.
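
tools, toolChoice, maxSteps, and onStepFinish work together during tool calling. A sketch that reuses the get_weather tool defined in the registerFunctionTool example and logs each step, under the assumption that the SDK re-invokes the model after each tool call, up to maxSteps times:

const model = ai.createModel("hunyuan-exp");
const res = await model.generateText({
  model: "hunyuan-turbo",
  messages: [{ role: "user", content: "What is the weather in Beijing?" }],
  tools: [getWeatherTool], // the FunctionTool registered via registerFunctionTool
  toolChoice: "auto",      // let the model decide whether to call a tool
  maxSteps: 3,             // cap the number of model requests
  onStepFinish: ({ text, toolCall, toolResult, stepUsage }) => {
    // Fields follow the IOnStepFinish definition below.
    if (toolCall) {
      console.log("Tool called:", toolCall.function.name, "->", toolResult);
    }
    console.log("Step text:", text, "tokens:", stepUsage?.total_tokens);
  },
});
console.log(res.text);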

BotInfo

interface BotInfo {
  botId: string;
  name: string;
  introduction: string;
  agentSetting: string;
  welcomeMessage: string;
  avatar: string;
  background: string;
  tags: Array<string>;
  isNeedRecommend: boolean;
  knowledgeBase: Array<string>;
  type: string;
  initQuestions: Array<string>;
  enable: boolean;
}

IUserFeedback

interface IUserFeedback {
  recordId: string;
  type: string;
  botId: string;
  comment: string;
  rating: number;
  tags: Array<string>;
  input: string;
  aiAnswer: string;
}

ChatModelMessage

type ChatModelMessage =
  | UserMessage
  | SystemMessage
  | AssistantMessage
  | ToolMessage;

UserMessage

type UserMessage = {
  role: "user";
  content: string;
};

SystemMessage

type SystemMessage = {
  role: "system";
  content: string;
};

AssistantMessage

type AssistantMessage = {
  role: "assistant";
  content?: string;
  tool_calls?: Array<ToolCall>;
};

ToolMessage

type ToolMessage = {
  role: "tool";
  tool_call_id: string;
  content: string;
};

ToolCall

export type ToolCall = {
  id: string;
  type: string;
  function: { name: string; arguments: string };
};
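
These message types can also be assembled by hand when you execute a tool yourself instead of registering it, and then replay the exchange to the model. A sketch of such a message list; the id, arguments, and tool result are illustrative values, and it assumes the model endpoint accepts replayed tool messages:

// A hand-built tool-calling exchange expressed with the message types above.
const messages = [
  { role: "system", content: "You are a weather assistant." },
  { role: "user", content: "What is the weather in Beijing?" },
  {
    role: "assistant",
    tool_calls: [
      {
        id: "call-1", // illustrative id
        type: "function",
        function: { name: "get_weather", arguments: '{"city":"Beijing"}' },
      },
    ],
  },
  {
    role: "tool",
    tool_call_id: "call-1", // matches the tool_calls entry above
    content: "The weather in Beijing is: crisp and clear autumn weather!!!",
  },
];

const res = await ai.createModel("hunyuan-exp").generateText({
  model: "hunyuan-turbo",
  messages,
});
console.log(res.text);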

FunctionTool

Tool definition type.

type FunctionTool = {
  name: string;
  description: string;
  fn: CallableFunction;
  parameters: object;
};

Property Name | Type | Description
--- | --- | ---
name | string | Tool name.
description | string | Description of the tool. A clear tool description helps the large language model understand the tool's purpose.
fn | CallableFunction | The execution function of the tool. When the AI SDK parses the large language model's response as requiring this tool, it calls this function and returns the result to the large language model.
parameters | object | Input parameters for the tool's execution function, which must be defined in JSON Schema format.

IOnStepFinish

Type of input parameters for the callback function triggered after the large language model responds.

interface IOnStepFinish {
  messages: Array<ChatModelMessage>;
  text?: string;
  toolCall?: ToolCall;
  toolResult?: unknown;
  finishReason?: string;
  stepUsage?: Usage;
  totalUsage?: Usage;
}

Property Name | Type | Description
--- | --- | ---
messages | Array<ChatModelMessage> | List of all messages up to the current step.
text | string | Text of the current response.
toolCall | ToolCall | Tool invoked in the current response.
toolResult | unknown | Corresponding tool call result.
finishReason | string | Reason for large language model inference termination.
stepUsage | Usage | Tokens consumed by the current step.
totalUsage | Usage | Total tokens consumed up to the current step.

Usage

type Usage = {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
};

IConversation

Agent session.

interface IConversation {
  id: string;
  envId: string;
  ownerUin: string;
  userId: string;
  conversationId: string;
  title: string;
  startTime: string; // date-time format
  createTime: string;
  updateTime: string;
}