AI
Cloud Development provides AI access capabilities, enabling quick access to large models and Agents.
Initialization
After initializing and logging in with js-sdk, obtain the AI instance via .ai().
import cloudbase from "@cloudbase/js-sdk";
const app = cloudbase.init({
env: "your-env-id"
});
const auth = app.auth();
await auth.signInAnonymously();
const ai = app.ai();
app.ai
After initialization, you can use the ai method mounted on the cloudbase instance to create an AI instance for subsequent model creation.
Usage Example
const app = cloudbase.init({ env: "your-env" });
const ai = app.ai();
Type Declaration
function ai(): AI;
Return Value
AI
Returns the newly created AI instance.
AI
A class for creating AI models.
createModel()
Create the specified AI model.
Usage Example
const model = ai.createModel("hunyuan-exp");
Type Declaration
function createModel(model: string): ChatModel;
Returns a model instance that implements the ChatModel abstract class, which provides AI text generation capabilities.
bot
Provides an instance of the Bot class that includes a series of methods for interacting with the Agent. For details, refer to the Bot class documentation.
Usage Example
const agentList = await ai.bot.list({ pageNumber: 1, pageSize: 10 });
registerFunctionTool()
Register function tools. When invoking a large model, you can inform it of the available function tools. When the model's response is parsed as a tool invocation, the corresponding function tool is automatically invoked.
Usage Example
// Omit the initialization operations of the AI sdk...
// 1. Define the weather retrieval tool, see the FunctionTool type
const getWeatherTool = {
name: "get_weather",
description: "Returns the weather information for a city. Call example: get_weather({city: 'Beijing'})",
fn: ({ city }) => `The weather in ${city} is: crisp and clear autumn weather!!!`, // Define the tool's execution content here
parameters: {
type: "object",
properties: {
city: {
type: "string",
description: "City to query",
},
},
required: ["city"],
},
};
// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);
// 3. While sending a message to the Large Model, inform it that a weather retrieval tool is available
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
model: "hunyuan-turbo",
tools: [getWeatherTool], // Here we pass in the weather retrieval tool
messages: [
{
role: "user",
content: "Please tell me the weather conditions in Beijing",
},
],
});
console.log(result.text);
Type Declaration
function registerFunctionTool(functionTool: FunctionTool);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
functionTool | Required | FunctionTool | See FunctionTool |
Return Value
undefined
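The dispatch behavior described above can be sketched as follows. This is a hypothetical simplification of what the SDK does internally, not its actual implementation; `registry` and `dispatchToolCall` are illustrative names.

```javascript
// Simplified, hypothetical sketch of the tool-dispatch step: when a model
// response is parsed as a tool invocation, look up the registered tool,
// call its fn, and wrap the result as a ToolMessage for the model.
const registry = new Map();

function registerFunctionTool(tool) {
  registry.set(tool.name, tool);
}

function dispatchToolCall(toolCall) {
  const tool = registry.get(toolCall.function.name);
  if (!tool) throw new Error(`Unknown tool: ${toolCall.function.name}`);
  // Tool arguments arrive as a JSON string (see the ToolCall type).
  const args = JSON.parse(toolCall.function.arguments);
  const result = tool.fn(args);
  return { role: "tool", tool_call_id: toolCall.id, content: String(result) };
}

registerFunctionTool({
  name: "get_weather",
  description: "Returns the weather for a city",
  fn: ({ city }) => `The weather in ${city} is sunny`,
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
});

const msg = dispatchToolCall({
  id: "call-1",
  type: "function",
  function: { name: "get_weather", arguments: '{"city":"Beijing"}' },
});
// msg: { role: "tool", tool_call_id: "call-1", content: "The weather in Beijing is sunny" }
```

The resulting ToolMessage is what gets appended to `messages` before the next model request.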
ChatModel
This abstract class describes the interface provided by the AI text generation model class.
generateText()
Invoke large models to generate text.
Usage Example
const hy = ai.createModel("hunyuan-exp"); // Create a model
const res = await hy.generateText({
model: "hunyuan-lite",
messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
});
console.log(res.text); // Print the generated text
Type Declaration
function generateText(data: BaseChatModelInput): Promise<{
rawResponses: Array<unknown>;
text: string;
messages: Array<ChatModelMessage>;
usage: Usage;
error?: unknown;
}>;
Parameters
Parameter Name | Required | Type | Example | Description |
---|---|---|---|---|
data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }]} | The parameter type is defined as BaseChatModelInput, serving as the basic input parameter definition. In practice, different large models have their own unique input parameters. Developers can pass additional parameters not defined in this type according to the official documentation of the actual large model used, fully leveraging the capabilities provided by large models. Other parameters will be passed through to the large model interface, and the SDK does not perform additional processing on them. |
Return Value
Property Name | Type | Example | Description |
---|---|---|---|
text | string | "Li Bai was a poet of the Tang Dynasty." | Text generated by the large model. |
rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, is there anything I can help you with?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | The complete return value from the large model, containing more detailed data such as message creation time, etc. Since return values vary among different large models, please use them according to the actual situation. |
messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! I am glad to communicate with you. May I ask what I can help you with? Whether it is about life, work, study, or any other aspect, I will do my best to assist you.'}] | The complete message list for this call. |
usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed by this call. |
error | unknown | | Error generated during invocation. |
streamText()
Generate text by invoking large models in streaming mode. During streaming invocation, the generated text and other response data are returned via SSE. The return value of this interface wraps the SSE stream at several levels, so developers can consume either the plain text stream or the complete data stream as needed.
Usage Example
const hy = ai.createModel("hunyuan-exp"); // Create a model
const res = await hy.streamText({
model: "hunyuan-lite",
messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
});
for await (let str of res.textStream) {
console.log(str); // Print the generated text
}
for await (let data of res.dataStream) {
console.log(data); // Print the complete data returned each time
}
Type Declaration
function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;
Parameters
Parameter Name | Required | Type | Example | Description |
---|---|---|---|---|
data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }]} | The parameter type is defined as BaseChatModelInput, serving as the basic input parameter definition. In practice, different large models have their own unique input parameters. Developers can pass additional parameters not defined in this type according to the official documentation of the actual large model used, fully leveraging the capabilities provided by large models. Other parameters will be passed through to the large model interface, and the SDK does not perform additional processing on them. |
Return Value
StreamTextResult Property Name | Type | Description |
---|---|---|
textStream | ReadableStream<string> | Large model-generated text returned in streaming mode. Refer to the usage sample to obtain incrementally generated text. |
dataStream | ReadableStream<DataChunk> | Large model response data returned in streaming mode. Refer to the usage sample to obtain incrementally generated data. As response values vary across different large models, please use them appropriately based on actual conditions. |
messages | Promise<ChatModelMessage[]> | The complete message list for this call. |
usage | Promise<Usage> | Tokens consumed by this call. |
error | unknown | Error generated during this call. |
DataChunk Property Name | Type | Description |
---|---|---|
choices | Array<object> | |
choices[n].finish_reason | string | Reason for model inference termination |
choices[n].delta | ChatModelMessage | The message for this request. |
usage | Usage | Tokens consumed by this request. |
rawResponse | unknown | Raw response returned by the large model. |
Example
const hy = ai.createModel("hunyuan-exp");
const res = await hy.streamText({
model: "hunyuan-lite",
messages: [{ role: "user", content: "What is the result of 1+1?" }],
});
// Text stream
for await (let str of res.textStream) {
console.log(str);
}
// 1
// plus
// 1
// equals
// 2
// .
// Data stream
for await (let str of res.dataStream) {
console.log(str);
}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// ... (further chunks with the same shape omitted)
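The full reply text can also be reconstructed from the data stream by concatenating each chunk's delta, as sketched below. The chunk shape follows the DataChunk table above; the chunks here are hard-coded stand-ins for what a model would actually return.

```javascript
// Hypothetical sketch: rebuild the complete reply from incremental
// DataChunk deltas (mocked here as a plain async generator).
async function* mockDataStream() {
  yield { choices: [{ delta: { role: "assistant", content: "1+1" } }] };
  yield { choices: [{ delta: { content: " equals" } }] };
  yield { choices: [{ delta: { content: " 2." }, finish_reason: "stop" }] };
}

async function collectText(dataStream) {
  let text = "";
  for await (const chunk of dataStream) {
    // Each chunk carries an incremental piece of the reply.
    text += chunk.choices?.[0]?.delta?.content ?? "";
  }
  return text;
}

collectText(mockDataStream()).then((text) => console.log(text));
// prints "1+1 equals 2."
```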
Bot
A class for interacting with Agent.
get()
Get information about a specific Agent.
Usage Example
const res = await ai.bot.get({ botId: "botId-xxx" });
console.log(res);
Type Declaration
function get(props: { botId: string });
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | The id of the Agent for which to retrieve information |
Return Value
Property Name | Type | Example | Description |
---|---|---|---|
botId | string | "bot-27973647" | Agent ID |
name | string | "Example Agent Name" | Agent name |
introduction | string | | Agent introduction |
welcomeMessage | string | | Agent welcome message |
avatar | string | | Agent avatar link |
background | string | | Agent chat background image link |
isNeedRecommend | boolean | | Whether to recommend questions after the Agent answers |
type | string | | Agent type |
list()
Retrieve information for multiple Agents in batch.
Usage Example
await ai.bot.list({
pageNumber: 1,
pageSize: 10,
name: "",
enable: true,
information: "",
introduction: "",
});
Type Declaration
function list(props: {
name: string;
introduction: string;
information: string;
enable: boolean;
pageSize: number;
pageNumber: number;
});
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.pageNumber | Yes | number | Page index |
props.pageSize | Yes | number | Page size |
props.enable | Yes | boolean | Whether the Agent is enabled |
props.name | Yes | string | Agent name, used for fuzzy query |
props.information | Yes | string | Agent information, used for fuzzy query |
props.introduction | Yes | string | Agent introduction, used for fuzzy query |
Return Value
Property Name | Type | Example | Description |
---|---|---|---|
total | number | | Total number of Agents |
botList | Array<object> | | Agent list |
botList[n].botId | string | "bot-27973647" | Agent ID |
botList[n].name | string | "Xindaya Translation" | Agent name |
botList[n].introduction | string | | Agent introduction |
botList[n].welcomeMessage | string | | Agent welcome message |
botList[n].avatar | string | | Agent avatar link |
botList[n].background | string | | Agent chat background image link |
botList[n].isNeedRecommend | boolean | | Whether to recommend questions after the Agent answers |
botList[n].type | string | | Agent type |
sendMessage()
Converse with the Agent. The response is returned via SSE. The return value of this interface wraps the SSE stream at several levels, so developers can consume either the plain text stream or the complete data stream as needed.
Usage Example
const res = await ai.bot.sendMessage({
botId: "botId-xxx",
history: [{ content: "You are Li Bai.", role: "user" }],
msg: "Hello",
});
for await (let str of res.textStream) {
console.log(str);
}
for await (let data of res.dataStream) {
console.log(data);
}
Type Declaration
function sendMessage(props: {
botId: string;
msg: string;
history: Array<{
role: string;
content: string;
}>;
}): Promise<StreamResult>;
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent id |
props.msg | Yes | string | Message to send in this dialog |
props.history | Yes | Array<object> | Chat history before this conversation |
props.history[n].role | Yes | string | The sender role of this chat message |
props.history[n].content | Yes | string | The content of this chat message |
Return Value
Promise<StreamResult>
StreamResult Property Name | Type | Description |
---|---|---|
textStream | AsyncIterable<string> | Agent-generated text returned in streaming mode. Refer to the usage example to obtain incrementally generated text. |
dataStream | AsyncIterable<AgentStreamChunk> | Agent response data returned in streaming mode. Refer to the usage example to obtain incremental data chunks. |
AgentStreamChunk Property Name | Type | Description |
---|---|---|
created | number | Conversation timestamp |
record_id | string | Conversation record ID |
model | string | Large model type |
version | string | Large model version |
type | string | Response type: text: main answer content, thinking: thinking process, search: search results, knowledge: knowledge base |
role | string | Dialogue role, always 'assistant' in responses. |
content | string | Conversation content |
finish_reason | string | Conversation end flag: 'continue' indicates the conversation is ongoing, 'stop' indicates the conversation has ended. |
reasoning_content | string | Deep reasoning content (only non-empty for deepseek-r1) |
usage | object | Token usage |
usage.prompt_tokens | number | Number of tokens in the prompt; remains unchanged across multiple responses. |
usage.completion_tokens | number | Total number of tokens in the completion. In streaming responses, this is the cumulative total of completion tokens so far and continues to accumulate across responses. |
usage.total_tokens | number | Sum of prompt_tokens and completion_tokens |
knowledge_base | string[] | Knowledge bases used in the conversation |
search_info | object | Search result information; requires web search to be enabled |
search_info.search_results | object[] | Search citation information |
search_info.search_results[n].index | string | Citation index |
search_info.search_results[n].title | string | Search citation title |
search_info.search_results[n].url | string | Citation URL |
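The `type` field above lets a client separate the Agent's thinking process from its main answer while consuming the data stream. A hypothetical sketch with mocked chunks:

```javascript
// Hypothetical sketch: split an Agent dataStream into answer text and
// thinking text using the AgentStreamChunk `type` field. The chunks are
// hard-coded stand-ins for a real SSE stream.
async function* mockAgentStream() {
  yield { type: "thinking", role: "assistant", content: "The user greets me.", finish_reason: "continue" };
  yield { type: "text", role: "assistant", content: "Hello", finish_reason: "continue" };
  yield { type: "text", role: "assistant", content: " there!", finish_reason: "stop" };
}

async function splitByType(stream) {
  const parts = { text: "", thinking: "" };
  for await (const chunk of stream) {
    if (chunk.type in parts) parts[chunk.type] += chunk.content;
  }
  return parts;
}

splitByType(mockAgentStream()).then(({ text }) => console.log(text));
// prints "Hello there!"
```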
getChatRecords()
Get chat history.
Usage Example
await ai.bot.getChatRecords({
botId: "botId-xxx",
pageNumber: 1,
pageSize: 10,
sort: "asc",
});
Type Declaration
function getChatRecords(props: {
botId: string;
sort: string;
pageSize: number;
pageNumber: number;
});
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent id |
props.sort | Yes | string | Sort method |
props.pageSize | Yes | number | Page size |
props.pageNumber | Yes | number | Page index |
Return Value
Property Name | Type | Description |
---|---|---|
total | number | Total conversations |
recordList | Array<object> | Conversation record list |
recordList[n].botId | string | Agent ID |
recordList[n].recordId | string | Conversation ID, system-generated |
recordList[n].role | string | Role in conversation |
recordList[n].content | string | Conversation content |
recordList[n].conversation | string | User identifier |
recordList[n].type | string | Conversation data type |
recordList[n].image | string | Conversation-generated image URL |
recordList[n].triggerSrc | string | Conversation initiation source |
recordList[n].replyTo | string | Replied-to record ID |
recordList[n].createTime | string | Conversation time |
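Since results are paginated, fetching a full history means looping over pages until `total` is reached. A sketch, with `fetchPage` standing in for `ai.bot.getChatRecords` (the mock below is illustrative, not real data):

```javascript
// Hypothetical sketch: page through all chat records using the
// pageNumber/pageSize/total contract described above.
async function fetchAllRecords(fetchPage, pageSize = 2) {
  const all = [];
  let pageNumber = 1;
  while (true) {
    const { total, recordList } = await fetchPage({ pageNumber, pageSize, sort: "asc" });
    all.push(...recordList);
    // Stop once every record has been collected (or a page comes back empty).
    if (all.length >= total || recordList.length === 0) break;
    pageNumber += 1;
  }
  return all;
}

// Mock: a 3-record history served in pages of 2.
const records = [{ recordId: "r1" }, { recordId: "r2" }, { recordId: "r3" }];
const mockFetchPage = async ({ pageNumber, pageSize }) => ({
  total: records.length,
  recordList: records.slice((pageNumber - 1) * pageSize, pageNumber * pageSize),
});

fetchAllRecords(mockFetchPage).then((all) => console.log(all.length));
// prints 3
```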
sendFeedback()
Send feedback on a specific chat history.
Usage Example
const res = await ai.bot.sendFeedback({
userFeedback: {
botId: "botId-xxx",
recordId: "recordId-xxx",
comment: "Excellent",
rating: 5,
tags: ["Graceful"],
aiAnswer: "Fallen petals scatter in profusion",
input: "Give me an idiom",
type: "upvote",
},
botId: "botId-xxx",
});
Type Declaration
function sendFeedback(props: { userFeedback: IUserFeedback; botId: string });
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.userFeedback | Yes | IUserFeedback | User feedback. See IUserFeedback type definition |
props.botId | Yes | string | Agent id to be provided for feedback |
getFeedback()
Get existing feedback information.
Usage Example
const res = await ai.bot.getFeedback({
botId: "botId-xxx",
from: 0,
to: 0,
maxRating: 4,
minRating: 3,
pageNumber: 1,
pageSize: 10,
sender: "user-a",
senderFilter: "include",
type: "upvote",
});
Type Declaration
function getFeedback(props: {
botId: string;
type: string;
sender: string;
senderFilter: string;
minRating: number;
maxRating: number;
from: number;
to: number;
pageSize: number;
pageNumber: number;
});
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent id |
props.type | Yes | string | User feedback type: upvote (like) or downvote (dislike) |
props.sender | Yes | string | User who created the comment |
props.senderFilter | Yes | string | Filter relationship for comment creator: include: include, exclude: exclude, equal: equal, unequal: not equal, prefix: prefix |
props.minRating | Yes | number | Minimum rating |
props.maxRating | Yes | number | Maximum rating |
props.from | Yes | number | Start timestamp |
props.to | Yes | number | End timestamp |
props.pageSize | Yes | number | Page size |
props.pageNumber | Yes | number | Page index |
Return Value
Property Name | Type | Description |
---|---|---|
feedbackList | object[] | Feedback query results |
feedbackList[n].recordId | string | Conversation record ID |
feedbackList[n].type | string | User feedback type: upvote (like) or downvote (dislike) |
feedbackList[n].botId | string | Agent ID |
feedbackList[n].comment | string | User comment |
feedbackList[n].rating | number | User rating |
feedbackList[n].tags | string[] | Array of user feedback tags |
feedbackList[n].input | string | User input question |
feedbackList[n].aiAnswer | string | Agent's answer |
total | number | Total feedback |
uploadFiles()
Upload files from cloud storage to the Agent for document-based chatting.
Usage Example
// Uploading files
await ai.bot.uploadFiles({
botId: "botId-xxx",
fileList: [
{
fileId: "cloud://xxx.docx",
fileName: "xxx.docx",
type: "file",
},
],
});
// Document-Based Chatting
const res = await ai.bot.sendMessage({
botId: "your-bot-id",
msg: "What is the content of this file",
files: ["xxx.docx"], // File fileId array
});
for await (let text of res.textStream) {
console.log(text);
}
Type Declaration
function uploadFiles(props: {
botId: string;
fileList: Array<{
fileId: string;
fileName: string;
type: "file";
}>;
});
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent id |
props.fileList | Yes | Array<object> | File list |
props.fileList[n].fileId | Yes | string | Cloud storage file id |
props.fileList[n].fileName | Yes | string | File name |
props.fileList[n].type | Yes | string | Currently only supports "file" |
getRecommendQuestions()
Get recommended questions.
Usage Example
const res = await ai.bot.getRecommendQuestions({
botId: "botId-xxx",
history: [{ content: "Who are you?", role: "user" }],
msg: "Hello",
agentSetting: "",
introduction: "",
name: "",
});
for await (let str of res.textStream) {
console.log(str);
}
Type Declaration
function getRecommendQuestions(props: {
botId: string;
name: string;
introduction: string;
agentSetting: string;
msg: string;
history: Array<{
role: string;
content: string;
}>;
}): Promise<StreamResult>;
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent id |
props.name | Yes | string | Agent Name |
props.introduction | Yes | string | Agent introduction |
props.agentSetting | Yes | string | Agent Setting |
props.msg | Yes | string | User message |
props.history | Yes | Array | Historical conversation information |
props.history[n].role | Yes | string | Historical message role |
props.history[n].content | Yes | string | Historical message content |
Return Value
Promise<StreamResult>
StreamResult Property Name | Type | Description |
---|---|---|
textStream | AsyncIterable<string> | Agent-generated text returned in streaming mode. Refer to the usage example to obtain incrementally generated text. |
dataStream | AsyncIterable<AgentStreamChunk> | Agent response data returned in streaming mode. Refer to the usage example to obtain incremental data chunks. |
AgentStreamChunk Property Name | Type | Description |
---|---|---|
created | number | Conversation timestamp |
record_id | string | Conversation record ID |
model | string | Large model type |
version | string | Large model version |
type | string | Response type: text: main answer content, thinking: thinking process, search: search results, knowledge: knowledge base |
role | string | Dialogue role, always 'assistant' in responses. |
content | string | Conversation content |
finish_reason | string | Conversation end flag: 'continue' indicates the conversation is ongoing, 'stop' indicates the conversation has ended. |
reasoning_content | string | Deep reasoning content (only non-empty for deepseek-r1) |
usage | object | Token usage |
usage.prompt_tokens | number | Number of tokens in the prompt; remains unchanged across multiple responses. |
usage.completion_tokens | number | Total number of tokens in the completion. In streaming responses, this is the cumulative total of completion tokens so far and continues to accumulate across responses. |
usage.total_tokens | number | Sum of prompt_tokens and completion_tokens |
knowledge_base | string[] | Knowledge bases used in the conversation |
search_info | object | Search result information; requires web search to be enabled |
search_info.search_results | object[] | Search citation information |
search_info.search_results[n].index | string | Citation index |
search_info.search_results[n].title | string | Search citation title |
search_info.search_results[n].url | string | Citation URL |
createConversation()
Create a new conversation with the Agent.
Usage Example
const res = await ai.bot.createConversation({
botId: "botId-xxx",
title: "My Conversation",
});
Type Declaration
function createConversation(props: IBotCreateConversation);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent ID |
props.title | No | string | Conversation Title |
Return Value
Promise<IConversation>
Related Type: IConversation
getConversation()
Get conversation list.
Usage Example
const res = await ai.bot.getConversation({
botId: "botId-xxx",
pageSize: 10,
pageNumber: 1,
isDefault: false,
});
Type Declaration
function getConversation(props: IBotGetConversation);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent ID |
props.pageSize | No | number | Page size, default 10 |
props.pageNumber | No | number | Page index, default 1 |
props.isDefault | No | boolean | Whether to get only the default conversation |
deleteConversation()
Delete specified conversation.
Usage Example
await ai.bot.deleteConversation({
botId: "botId-xxx",
conversationId: "conv-123",
});
Type Declaration
function deleteConversation(props: IBotDeleteConversation);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent ID |
props.conversationId | Yes | string | Conversation ID to be deleted |
speechToText()
Speech to text.
Usage Example
const res = await ai.bot.speechToText({
botId: "botId-xxx",
engSerViceType: "16k_zh",
voiceFormat: "mp3",
url: "https://example.com/audio.mp3",
});
Type Declaration
function speechToText(props: IBotSpeechToText);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent ID |
props.engSerViceType | Yes | string | Engine type, e.g. "16k_zh" |
props.voiceFormat | Yes | string | Audio format, e.g. "mp3" |
props.url | Yes | string | Audio file URL |
props.isPreview | No | boolean | Whether preview mode is enabled |
textToSpeech()
Text to speech.
Usage Example
const res = await ai.bot.textToSpeech({
botId: "botId-xxx",
voiceType: 1,
text: "Hello, I am an AI assistant.",
});
Type Declaration
function textToSpeech(props: IBotTextToSpeech);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent ID |
props.voiceType | Yes | number | Voice type |
props.text | Yes | string | Text to be converted |
props.isPreview | No | boolean | Whether preview mode is enabled |
getTextToSpeechResult()
Get the text-to-speech result.
Usage Example
const res = await ai.bot.getTextToSpeechResult({
botId: "botId-xxx",
taskId: "task-123",
});
Type Declaration
function getTextToSpeechResult(props: IBotGetTextToSpeechResult);
Parameters
Parameter Name | Required | Type | Description |
---|---|---|---|
props.botId | Yes | string | Agent ID |
props.taskId | Yes | string | Task ID |
props.isPreview | No | boolean | Whether preview mode is enabled |
IBotCreateConversation
interface IBotCreateConversation {
botId: string;
title?: string;
}
IBotGetConversation
interface IBotGetConversation {
botId: string;
pageSize?: number;
pageNumber?: number;
isDefault?: boolean;
}
IBotDeleteConversation
interface IBotDeleteConversation {
botId: string;
conversationId: string;
}
IBotSpeechToText
interface IBotSpeechToText {
botId: string;
engSerViceType: string;
voiceFormat: string;
url: string;
isPreview?: boolean;
}
IBotTextToSpeech
interface IBotTextToSpeech {
botId: string;
voiceType: number;
text: string;
isPreview?: boolean;
}
IBotGetTextToSpeechResult
interface IBotGetTextToSpeechResult {
botId: string;
taskId: string;
isPreview?: boolean;
}
BaseChatModelInput
interface BaseChatModelInput {
model: string;
messages: Array<ChatModelMessage>;
temperature?: number;
topP?: number;
tools?: Array<FunctionTool>;
toolChoice?: "none" | "auto" | "custom";
maxSteps?: number;
onStepFinish?: (prop: IOnStepFinish) => unknown;
}
BaseChatModelInput Property Name | Type | Description |
---|---|---|
model | string | Model name. |
messages | Array<ChatModelMessage> | Message list. |
temperature | number | Sampling temperature, controlling the randomness of the output. |
topP | number | Nucleus sampling, where the model considers tokens with cumulative probability mass top_p. |
tools | Array<FunctionTool> | List of tools available for the large language model |
toolChoice | string | Specifies how the large language model selects tools. |
maxSteps | number | Maximum number of requests to the large language model. |
onStepFinish | (prop: IOnStepFinish) => unknown | Callback function triggered when a request to the large language model is completed. |
BotInfo
interface BotInfo {
botId: string;
name: string;
introduction: string;
agentSetting: string;
welcomeMessage: string;
avatar: string;
background: string;
tags: Array<string>;
isNeedRecommend: boolean;
knowledgeBase: Array<string>;
type: string;
initQuestions: Array<string>;
enable: boolean;
}
IUserFeedback
interface IUserFeedback {
recordId: string;
type: string;
botId: string;
comment: string;
rating: number;
tags: Array<string>;
input: string;
aiAnswer: string;
}
ChatModelMessage
type ChatModelMessage =
| UserMessage
| SystemMessage
| AssistantMessage
| ToolMessage;
UserMessage
type UserMessage = {
role: "user";
content: string;
};
SystemMessage
type SystemMessage = {
role: "system";
content: string;
};
AssistantMessage
type AssistantMessage = {
role: "assistant";
content?: string;
tool_calls?: Array<ToolCall>;
};
ToolMessage
type ToolMessage = {
role: "tool";
tool_call_id: string;
content: string;
};
ToolCall
export type ToolCall = {
id: string;
type: string;
function: { name: string; arguments: string };
};
FunctionTool
Tool definition type.
type FunctionTool = {
name: string;
description: string;
fn: CallableFunction;
parameters: object;
};
FunctionTool Property Name | Type | Description |
---|---|---|
name | string | Tool name. |
description | string | Description of the tool. A clear tool description helps the large language model understand the tool's purpose. |
fn | CallableFunction | The execution function of the tool. When the AI SDK parses that the large language model's response requires this tool invocation, it will call this function and return the result to the large language model. |
parameters | object | Input parameters for the tool execution function, which must be defined using the JSON Schema format. |
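Because `parameters` is a JSON Schema object, tool arguments can be checked against it before `fn` runs. A minimal sketch covering only required keys and primitive types (a real validator such as Ajv covers far more of the spec; `checkArgs` is an illustrative name, not an SDK API):

```javascript
// Hypothetical sketch: minimal validation of tool arguments against the
// JSON Schema stored in a FunctionTool's `parameters` field.
function checkArgs(parameters, args) {
  for (const key of parameters.required ?? []) {
    if (!(key in args)) return `missing required argument: ${key}`;
  }
  for (const [key, schema] of Object.entries(parameters.properties ?? {})) {
    // typeof works for "string" / "number" / "boolean" / "object" only.
    if (key in args && typeof args[key] !== schema.type) {
      return `argument ${key} should be of type ${schema.type}`;
    }
  }
  return null; // null means the arguments pass this basic check
}

const parameters = {
  type: "object",
  properties: { city: { type: "string" } },
  required: ["city"],
};

console.log(checkArgs(parameters, { city: "Beijing" })); // null
console.log(checkArgs(parameters, {})); // "missing required argument: city"
```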
IOnStepFinish
Type of input parameters for the callback function triggered after the large language model responds.
interface IOnStepFinish {
messages: Array<ChatModelMessage>;
text?: string;
toolCall?: ToolCall;
toolResult?: unknown;
finishReason?: string;
stepUsage?: Usage;
totalUsage?: Usage;
}
IOnStepFinish Property Name | Type | Description |
---|---|---|
messages | Array<ChatModelMessage> | List of all messages up to the current step. |
text | string | Text of the current response. |
toolCall | ToolCall | Tool invoked in the current response. |
toolResult | unknown | Corresponding tool call result. |
finishReason | string | Reason for large language model inference termination |
stepUsage | Usage | Tokens consumed by the current step. |
totalUsage | Usage | Total tokens consumed up to the current step. |
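To illustrate how `maxSteps` and `onStepFinish` interact, here is a hypothetical sketch of a multi-step run: the model is called repeatedly, the callback fires after each step, and the loop ends when the model stops requesting tools or `maxSteps` is reached. The scripted responses and `runSteps` are stand-ins, not SDK internals.

```javascript
// Hypothetical sketch of a multi-step tool-calling loop with a step callback.
function runSteps({ maxSteps = 3, onStepFinish }) {
  const messages = [{ role: "user", content: "What is the weather?" }];
  // Scripted stand-in for model responses: first a tool call, then an answer.
  const scripted = [
    { toolCall: { id: "c1", type: "function", function: { name: "get_weather", arguments: "{}" } } },
    { text: "It is sunny.", finishReason: "stop" },
  ];
  for (let step = 0; step < maxSteps; step++) {
    const resp = scripted[step];
    // Fire the callback with an IOnStepFinish-shaped object after each step.
    onStepFinish?.({ messages, ...resp });
    if (resp.finishReason === "stop") return resp.text;
  }
  return null; // maxSteps exhausted without a final answer
}

const steps = [];
const text = runSteps({ onStepFinish: (s) => steps.push(s.toolCall ? "tool" : "text") });
// steps: ["tool", "text"]; text: "It is sunny."
```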
Usage
type Usage = {
completion_tokens: number;
prompt_tokens: number;
total_tokens: number;
};
IConversation
Agent session.
interface IConversation {
id: string;
envId: string;
ownerUin: string;
userId: string;
conversationId: string;
title: string;
startTime: string; // date-time format
createTime: string;
updateTime: string;
}