AI SDK
Initialize SDK
For initialization methods, please refer to Initialize SDK.
app.ai
After initialization, you can use the ai method mounted on the cloudbase instance to create an AI instance for subsequent model creation.
Usage Example
const app = cloudbase.init({ env: "your-env" });
const ai = app.ai();
Type Declaration
function ai(): AI;
Return Value
AI
Returns a newly created AI instance.
AI
A class for creating AI models.
createModel()
Creates a specified AI model.
Usage Example
const model = ai.createModel("hunyuan-exp");
Type Declaration
function createModel(model: string): ChatModel;
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| model | Yes | string | Model name, e.g., "hunyuan-exp" |
Return Value
Returns a model instance that implements the ChatModel abstract class, providing AI text generation capabilities.
createImageModel()
Creates a specified image generation model.
Usage Example
const imageModel = ai.createImageModel("hunyuan-image");
Type Declaration
function createImageModel(provider: string): ImageModel;
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| provider | Yes | string | Model provider name, e.g., "hunyuan-image" |
Return Value
Returns an ImageModel instance that provides AI image generation capabilities.
bot
An instance of the Bot class, mounted on the AI instance, that provides a set of methods for interacting with Agents. For details, refer to the Bot class documentation.
Usage Example
const agentList = await ai.bot.list({ pageNumber: 1, pageSize: 10 });
registerFunctionTool()
Registers a function tool. When calling the LLM, you can inform it of available function tools. When the LLM's response is parsed as a tool call, the corresponding function tool will be automatically invoked.
Usage Example
// Omitting AI SDK initialization...
// 1. Define the weather tool, see FunctionTool type for details
const getWeatherTool = {
name: "get_weather",
description: "Returns weather information for a city. Example call: get_weather({city: 'Beijing'})",
fn: ({ city }) => `The weather in ${city} is: Clear autumn skies!!!`, // Define the tool execution content here
parameters: {
type: "object",
properties: {
city: {
type: "string",
description: "The city to query",
},
},
required: ["city"],
},
};
// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);
// 3. Send a message to the LLM and inform it that a weather tool is available
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
model: "hunyuan-turbo",
tools: [getWeatherTool], // Here we pass in the weather tool
messages: [
{
role: "user",
content: "Please tell me the weather in Beijing",
},
],
});
console.log(result.text);
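Because `fn` is a plain function, it can be exercised directly, which is a handy way to unit-test a tool before registering it. A minimal sketch, re-declaring the `fn` from the example above so it is self-contained:

```javascript
// The tool's fn from the example above, re-declared so this snippet stands alone.
const getWeather = ({ city }) => `The weather in ${city} is: Clear autumn skies!!!`;

// The SDK receives tool_call arguments as a JSON string and parses them before invoking fn;
// doing the same here reproduces how the tool will actually be called.
const args = JSON.parse('{"city":"Beijing"}');
console.log(getWeather(args)); // the string returned here is what the LLM sees as the tool result
```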
Type Declaration
function registerFunctionTool(functionTool: FunctionTool);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| functionTool | Yes | FunctionTool | See FunctionTool for details |
Return Value
undefined
generateText()
Calls the LLM to generate text, resolving once the full response is available.
Parameters
| Parameter | Required | Type | Example | Description |
|---|---|---|---|---|
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | Parameter type is defined as BaseChatModelInput as the basic input definition. In practice, different LLM providers have their own unique input parameters. Developers can pass additional parameters not defined in this type based on the LLM's official documentation to fully utilize the LLM's capabilities. Other parameters will be passed through to the LLM interface, and the SDK does not perform additional processing on them. |
Return Value
| Property | Type | Example | Description |
|---|---|---|---|
| text | string | "Li Bai was a Tang Dynasty poet." | Text generated by the LLM. |
| rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, is there anything I can help you with?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | Complete response from the LLM, containing more detailed data such as message creation time. Since different LLMs have varying return values, please use according to your situation. |
| messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! Nice to chat with you. Is there anything I can help you with? Whether it\'s about life, work, study, or other areas, I\'ll do my best to assist you.'}] | Complete message list for this call. |
| usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed in this call. |
| error | unknown | --- | Errors that occurred during the call. |
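To make the return table concrete, here is a shape sketch of a generateText result (mocked values; the real object is returned by `model.generateText`):

```javascript
// Mocked generateText result, mirroring the property table above.
const result = {
  text: "Li Bai was a Tang Dynasty poet.",
  messages: [
    { role: "user", content: "Hello, please introduce Li Bai" },
    { role: "assistant", content: "Li Bai was a Tang Dynasty poet." },
  ],
  usage: { prompt_tokens: 14, completion_tokens: 9, total_tokens: 23 },
};

// For a simple single-step call, text matches the final assistant message.
const last = result.messages[result.messages.length - 1];
console.log(last.content === result.text); // true
```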
streamText()
Calls the LLM to generate text in streaming mode. During streaming calls, the generated text and other response data are returned via SSE. The return value of this interface provides different levels of encapsulation for SSE, allowing developers to obtain text streams and complete data streams according to their needs.
Usage Example
const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.streamText({
model: "hunyuan-lite",
messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
for await (let str of res.textStream) {
console.log(str); // Print the generated text
}
for await (let data of res.dataStream) {
console.log(data); // Print the complete data for each response
}
Type Declaration
function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;
Parameters
| Parameter | Required | Type | Example | Description |
|---|---|---|---|---|
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | Parameter type is defined as BaseChatModelInput as the basic input definition. In practice, different LLM providers have their own unique input parameters. Developers can pass additional parameters not defined in this type based on the LLM's official documentation to fully utilize the LLM's capabilities. Other parameters will be passed through to the LLM interface, and the SDK does not perform additional processing on them. |
Return Value
| StreamTextResult Property | Type | Description |
|---|---|---|
| textStream | ReadableStream<string> | LLM-generated text returned in streaming mode. Refer to the usage example to get the incremental generated text. |
| dataStream | ReadableStream<DataChunk> | LLM response data returned in streaming mode. Refer to the usage example to get the incremental generated data. Since different LLMs have varying response values, please use according to your situation. |
| messages | Promise<ChatModelMessage[]> | Complete message list for this call. |
| usage | Promise<Usage> | Tokens consumed in this call. |
| error | unknown | Errors that occurred during this call. |
| DataChunk Property | Type | Description |
|---|---|---|
| choices | Array<object> | List of response choices. |
| choices[n].finish_reason | string | Reason for model inference termination. |
| choices[n].delta | ChatModelMessage | Message for this request. |
| usage | Usage | Tokens consumed in this request. |
| rawResponse | unknown | Raw response from the LLM. |
Example
const hy = ai.createModel("hunyuan-exp");
const res = await hy.streamText({
model: "hunyuan-lite",
messages: [{ role: "user", content: "What is 1+1" }],
});
// Text stream
for await (let str of res.textStream) {
console.log(str);
}
// 1
// plus
// 1
// equals
// 2
// .
// Data stream
for await (let str of res.dataStream) {
console.log(str);
}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
ImageModel
This class describes the interface provided by AI image generation model classes.
generateImage()
Calls the LLM to generate images.
Usage Example
const imageModel = ai.createImageModel("hunyuan-image");
const res = await imageModel.generateImage({
model: "hunyuan-image-v3.0-v1.0.4",
prompt: "A cute cat playing on the grass",
});
console.log(res.data[0].url); // Print the generated image URL
Type Declaration
function generateImage(input: HunyuanARGenerateImageInput): Promise<HunyuanARGenerateImageOutput>;
function generateImage(input: HunyuanGenerateImageInput): Promise<HunyuanGenerateImageOutput>;
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| input | Yes | HunyuanARGenerateImageInput | HunyuanGenerateImageInput | Image generation parameters, see HunyuanARGenerateImageInput or HunyuanGenerateImageInput for details |
Return Value
Promise<HunyuanARGenerateImageOutput> or Promise<HunyuanGenerateImageOutput>
| Property | Type | Description |
|---|---|---|
| id | string | ID of this request |
| created | number | Unix timestamp |
| data | Array\<object> | Returned image generation content |
| data[n].url | string | Generated image URL, valid for 24 hours |
| data[n].revised_prompt | string | Text after prompt revision (if revise is false, it's the original prompt) |
Bot
A class for interacting with Agents.
get()
Gets information about a specific Agent.
Usage Example
const res = await ai.bot.get({ botId: "botId-xxx" });
console.log(res);
Type Declaration
function get(props: { botId: string });
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | ID of the Agent to get info for |
Return Value
| Property | Type | Example | Description |
|---|---|---|---|
| botId | string | "bot-27973647" | Agent ID |
| name | string | "Translator" | Agent name |
| introduction | string | --- | Agent introduction |
| welcomeMessage | string | --- | Agent welcome message |
| avatar | string | --- | Agent avatar URL |
| background | string | --- | Agent chat background image URL |
| isNeedRecommend | boolean | --- | Whether to recommend questions after the Agent responds |
| type | string | --- | Agent type |
list()
Gets information about multiple Agents in batch.
Usage Example
await ai.bot.list({
pageNumber: 1,
pageSize: 10,
name: "",
enable: true,
information: "",
introduction: "",
});
Type Declaration
function list(props: {
name: string;
introduction: string;
information: string;
enable: boolean;
pageSize: number;
pageNumber: number;
});
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.pageNumber | Yes | number | Page index |
| props.pageSize | Yes | number | Page size |
| props.enable | Yes | boolean | Whether the Agent is enabled |
| props.name | Yes | string | Agent name, for fuzzy search |
| props.information | Yes | string | Agent information, for fuzzy search |
| props.introduction | Yes | string | Agent introduction, for fuzzy search |
Return Value
| Property | Type | Example | Description |
|---|---|---|---|
| total | number | --- | Total number of Agents |
| botList | Array<object> | --- | Agent list |
| botList[n].botId | string | "bot-27973647" | Agent ID |
| botList[n].name | string | "Translator" | Agent name |
| botList[n].introduction | string | --- | Agent introduction |
| botList[n].welcomeMessage | string | --- | Agent welcome message |
| botList[n].avatar | string | --- | Agent avatar URL |
| botList[n].background | string | --- | Agent chat background image URL |
| botList[n].isNeedRecommend | boolean | --- | Whether to recommend questions after the Agent responds |
| botList[n].type | string | --- | Agent type |
sendMessage()
Sends a message to an Agent for conversation. The response is returned via SSE, and this interface's return value provides different levels of encapsulation for SSE, allowing developers to obtain text streams and complete data streams according to their needs.
Usage Example
const res = await ai.bot.sendMessage({
botId: "botId-xxx",
history: [{ content: "You are Li Bai.", role: "user" }],
msg: "Hello",
});
for await (let str of res.textStream) {
console.log(str);
}
for await (let data of res.dataStream) {
console.log(data);
}
Type Declaration
function sendMessage(props: {
botId: string;
msg: string;
history: Array<{
role: string;
content: string;
}>;
}): Promise<StreamResult>;
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.msg | Yes | string | Message to send in this conversation |
| props.history | Yes | Array<object> | Chat history before this conversation |
| props.history[n].role | Yes | string | Role of the message sender |
| props.history[n].content | Yes | string | Content of the message |
Return Value
Promise<StreamResult>
| StreamResult Property | Type | Description |
|---|---|---|
| textStream | AsyncIterable<string> | Agent-generated text returned in streaming mode. Refer to the usage example to get the incremental generated text. |
| dataStream | AsyncIterable<AgentStreamChunk> | Agent-generated data returned in streaming mode. Refer to the usage example to get the incremental generated data. |
| AgentStreamChunk Property | Type | Description |
|---|---|---|
| created | number | Conversation timestamp |
| record_id | string | Conversation record ID |
| model | string | LLM type |
| version | string | LLM version |
| type | string | Reply type. text: main answer content; thinking: thinking process; search: search results; knowledge: knowledge base |
| role | string | Conversation role, fixed as assistant in responses |
| content | string | Conversation content |
| finish_reason | string | Conversation end flag: continue means the conversation is ongoing, stop means it has ended |
| reasoning_content | string | Deep thinking content (only non-empty for deepseek-r1) |
| usage | object | Token usage |
| usage.prompt_tokens | number | Number of prompt tokens, remains unchanged across multiple returns |
| usage.completion_tokens | number | Total completion tokens. In streaming returns, represents the cumulative total of all completion tokens so far |
| usage.total_tokens | number | Sum of prompt_tokens and completion_tokens |
| knowledge_base | string[] | Knowledge bases used in the conversation |
| search_info | object | Search result information, requires web search to be enabled |
| search_info.search_results | object[] | Search citation information |
| search_info.search_results[n].index | string | Search citation index |
| search_info.search_results[n].title | string | Search citation title |
| search_info.search_results[n].url | string | Search citation URL |
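Since each chunk carries a type field, consumers typically filter the dataStream to keep only the main answer. A runnable sketch with a mocked stream (the real stream comes from sendMessage):

```javascript
// Mocked dataStream yielding AgentStreamChunk-shaped objects (illustrative values).
async function* mockDataStream() {
  yield { type: "thinking", content: "Consulting sources...", finish_reason: "continue" };
  yield { type: "text", content: "Hello", finish_reason: "continue" };
  yield { type: "text", content: " there", finish_reason: "stop" };
}

// Keep only the main answer content, ignoring thinking/search/knowledge chunks.
async function collectAnswer(dataStream) {
  let text = "";
  for await (const chunk of dataStream) {
    if (chunk.type === "text") text += chunk.content;
  }
  return text;
}

collectAnswer(mockDataStream()).then((t) => console.log(t)); // "Hello there"
```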
getChatRecords()
Gets chat records.
Usage Example
await ai.bot.getChatRecords({
botId: "botId-xxx",
pageNumber: 1,
pageSize: 10,
sort: "asc",
});
Type Declaration
function getChatRecords(props: {
botId: string;
sort: string;
pageSize: number;
pageNumber: number;
});
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.sort | Yes | string | Sort order |
| props.pageSize | Yes | number | Page size |
| props.pageNumber | Yes | number | Page index |
Return Value
| Property | Type | Description |
|---|---|---|
| total | number | Total number of conversations |
| recordList | Array<object> | Conversation list |
| recordList[n].botId | string | Agent ID |
| recordList[n].recordId | string | Conversation ID, system-generated |
| recordList[n].role | string | Role in the conversation |
| recordList[n].content | string | Conversation content |
| recordList[n].conversation | string | User identifier |
| recordList[n].type | string | Conversation data type |
| recordList[n].image | string | Image URL generated in the conversation |
| recordList[n].triggerSrc | string | Conversation trigger source |
| recordList[n].replyTo | string | Record ID being replied to |
| recordList[n].createTime | string | Conversation time |
sendFeedback()
Sends feedback for a specific chat record.
Usage Example
const res = await ai.bot.sendFeedback({
userFeedback: {
botId: "botId-xxx",
recordId: "recordId-xxx",
comment: "Excellent",
rating: 5,
tags: ["Beautiful"],
aiAnswer: "falling petals",
input: "Give me an idiom",
type: "upvote",
},
botId: "botId-xxx",
});
Type Declaration
function sendFeedback(props: { userFeedback: IUserFeedback; botId: string });
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.userFeedback | Yes | IUserFeedback | User feedback, see IUserFeedback type definition |
| props.botId | Yes | string | ID of the Agent to provide feedback for |
getFeedback()
Gets existing feedback information.
Usage Example
const res = await ai.bot.getFeedback({
botId: "botId-xxx",
from: 0,
to: 0,
maxRating: 4,
minRating: 3,
pageNumber: 1,
pageSize: 10,
sender: "user-a",
senderFilter: "include",
type: "upvote",
});
Type Declaration
function getFeedback(props: {
botId: string;
type: string;
sender: string;
senderFilter: string;
minRating: number;
maxRating: number;
from: number;
to: number;
pageSize: number;
pageNumber: number;
});
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.type | Yes | string | User feedback type, upvote or downvote |
| props.sender | Yes | string | Feedback creator user |
| props.senderFilter | Yes | string | Feedback creator user filter: include, exclude, equal, unequal, prefix |
| props.minRating | Yes | number | Minimum rating |
| props.maxRating | Yes | number | Maximum rating |
| props.from | Yes | number | Start timestamp |
| props.to | Yes | number | End timestamp |
| props.pageSize | Yes | number | Page size |
| props.pageNumber | Yes | number | Page index |
Return Value
| Property | Type | Description |
|---|---|---|
| feedbackList | object[] | Feedback query results |
| feedbackList[n].recordId | string | Conversation record ID |
| feedbackList[n].type | string | User feedback type, upvote or downvote |
| feedbackList[n].botId | string | Agent ID |
| feedbackList[n].comment | string | User comment |
| feedbackList[n].rating | number | User rating |
| feedbackList[n].tags | string[] | User feedback tags array |
| feedbackList[n].input | string | User input question |
| feedbackList[n].aiAnswer | string | Agent's answer |
| total | number | Total number of feedbacks |
uploadFiles()
Uploads files from cloud storage to an Agent for document-based chat.
Usage Example
// Upload files
await ai.bot.uploadFiles({
botId: "botId-xxx",
fileList: [
{
fileId: "cloud://xxx.docx",
fileName: "xxx.docx",
type: "file",
},
],
});
// Conduct document-based chat
const res = await ai.bot.sendMessage({
botId: "your-bot-id",
msg: "What is the content of this file",
files: ["xxx.docx"], // Array of file fileIds
});
for await (let text of res.textStream) {
console.log(text);
}
Type Declaration
function uploadFiles(props: {
botId: string;
fileList: Array<{
    fileId: string;
    fileName: string;
    type: "file";
}>;
});
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.fileList | Yes | Array<object> | File list |
| props.fileList[n].fileId | Yes | string | Cloud storage file ID |
| props.fileList[n].fileName | Yes | string | File name |
| props.fileList[n].type | Yes | string | Currently only supports "file" |
getRecommendQuestions()
Gets recommended questions.
Usage Example
const res = await ai.bot.getRecommendQuestions({
botId: "botId-xxx",
history: [{ content: "Who are you", role: "user" }],
msg: "Hello",
agentSetting: "",
introduction: "",
name: "",
});
for await (let str of res.textStream) {
console.log(str);
}
Type Declaration
function getRecommendQuestions(props: {
botId: string;
name: string;
introduction: string;
agentSetting: string;
msg: string;
history: Array<{
role: string;
content: string;
}>;
}): Promise<StreamResult>;
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.name | Yes | string | Agent name |
| props.introduction | Yes | string | Agent introduction |
| props.agentSetting | Yes | string | Agent settings |
| props.msg | Yes | string | User message |
| props.history | Yes | Array | History messages |
| props.history[n].role | Yes | string | History message role |
| props.history[n].content | Yes | string | History message content |
Return Value
Promise<StreamResult>
| StreamResult Property | Type | Description |
|---|---|---|
| textStream | AsyncIterable<string> | Agent-generated text returned in streaming mode. Refer to the usage example to get the incremental generated text. |
| dataStream | AsyncIterable<AgentStreamChunk> | Agent-generated data returned in streaming mode. Refer to the usage example to get the incremental generated data. |
| AgentStreamChunk Property | Type | Description |
|---|---|---|
| created | number | Conversation timestamp |
| record_id | string | Conversation record ID |
| model | string | LLM type |
| version | string | LLM version |
| type | string | Reply type. text: main answer content; thinking: thinking process; search: search results; knowledge: knowledge base |
| role | string | Conversation role, fixed as assistant in responses |
| content | string | Conversation content |
| finish_reason | string | Conversation end flag: continue means the conversation is ongoing, stop means it has ended |
| reasoning_content | string | Deep thinking content (only non-empty for deepseek-r1) |
| usage | object | Token usage |
| usage.prompt_tokens | number | Number of prompt tokens, remains unchanged across multiple returns |
| usage.completion_tokens | number | Total completion tokens. In streaming returns, represents the cumulative total of all completion tokens so far |
| usage.total_tokens | number | Sum of prompt_tokens and completion_tokens |
| knowledge_base | string[] | Knowledge bases used in the conversation |
| search_info | object | Search result information, requires web search to be enabled |
| search_info.search_results | object[] | Search citation information |
| search_info.search_results[n].index | string | Search citation index |
| search_info.search_results[n].title | string | Search citation title |
| search_info.search_results[n].url | string | Search citation URL |
createConversation()
Creates a new conversation with an Agent.
Usage Example
const res = await ai.bot.createConversation({
  botId: "botId-xxx",
  title: "My Conversation",
});
Type Declaration
function createConversation(props: IBotCreateConversation);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.title | No | string | Conversation title |
Return Value
Promise<IConversation>
See related type: IConversation
getConversation()
Gets the conversation list.
Usage Example
const res = await ai.bot.getConversation({
botId: "botId-xxx",
pageSize: 10,
pageNumber: 1,
isDefault: false,
});
Type Declaration
function getConversation(props: IBotGetConversation);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.pageSize | No | number | Page size, default 10 |
| props.pageNumber | No | number | Page index, default 1 |
| props.isDefault | No | boolean | Whether to only get default conversation |
deleteConversation()
Deletes a specified conversation.
Usage Example
await ai.bot.deleteConversation({
botId: "botId-xxx",
conversationId: "conv-123",
});
Type Declaration
function deleteConversation(props: IBotDeleteConversation);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.conversationId | Yes | string | ID of conversation to delete |
speechToText()
Converts speech to text.
Usage Example
const res = await ai.bot.speechToText({
botId: "botId-xxx",
engSerViceType: "16k_zh",
voiceFormat: "mp3",
url: "https://example.com/audio.mp3",
});
Type Declaration
function speechToText(props: IBotSpeechToText);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.engSerViceType | Yes | string | Engine type, e.g., "16k_zh" |
| props.voiceFormat | Yes | string | Audio format, e.g., "mp3" |
| props.url | Yes | string | Audio file URL |
| props.isPreview | No | boolean | Whether in preview mode |
textToSpeech()
Converts text to speech.
Usage Example
const res = await ai.bot.textToSpeech({
botId: "botId-xxx",
voiceType: 1,
text: "Hello, I am an AI assistant",
});
Type Declaration
function textToSpeech(props: IBotTextToSpeech);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.voiceType | Yes | number | Voice type |
| props.text | Yes | string | Text to convert |
| props.isPreview | No | boolean | Whether in preview mode |
getTextToSpeechResult()
Gets the text-to-speech result.
Usage Example
const res = await ai.bot.getTextToSpeechResult({
botId: "botId-xxx",
taskId: "task-123",
});
Type Declaration
function getTextToSpeechResult(props: IBotGetTextToSpeechResult);
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| props.botId | Yes | string | Agent ID |
| props.taskId | Yes | string | Task ID |
| props.isPreview | No | boolean | Whether in preview mode |
IBotCreateConversation
interface IBotCreateConversation {
botId: string;
title?: string;
}
IBotGetConversation
interface IBotGetConversation {
botId: string;
pageSize?: number;
pageNumber?: number;
isDefault?: boolean;
}
IBotDeleteConversation
interface IBotDeleteConversation {
botId: string;
conversationId: string;
}
IBotSpeechToText
interface IBotSpeechToText {
botId: string;
engSerViceType: string;
voiceFormat: string;
url: string;
isPreview?: boolean;
}
IBotTextToSpeech
interface IBotTextToSpeech {
botId: string;
voiceType: number;
text: string;
isPreview?: boolean;
}
IBotGetTextToSpeechResult
interface IBotGetTextToSpeechResult {
botId: string;
taskId: string;
isPreview?: boolean;
}
BaseChatModelInput
interface BaseChatModelInput {
model: string;
messages: Array<ChatModelMessage>;
temperature?: number;
topP?: number;
tools?: Array<FunctionTool>;
toolChoice?: "none" | "auto" | "custom";
maxSteps?: number;
onStepFinish?: (prop: IOnStepFinish) => unknown;
}
| BaseChatModelInput Property | Type | Description |
|---|---|---|
| model | string | Model name. |
| messages | Array<ChatModelMessage> | Message list. |
| temperature | number | Sampling temperature, controls output randomness. |
| topP | number | Nucleus sampling: the model considers only tokens within the top_p cumulative probability mass. |
| tools | Array<FunctionTool> | List of tools available to the LLM. |
| toolChoice | string | Specifies how the LLM selects tools. |
| maxSteps | number | Maximum number of LLM requests. |
| onStepFinish | (prop: IOnStepFinish) => unknown | Callback function triggered when an LLM request completes. |
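Tying the table together, a sketch of a fully populated input object (all values are illustrative):

```javascript
// Illustrative BaseChatModelInput payload.
const input = {
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.7, // lower = more deterministic output
  topP: 0.9, // nucleus sampling cutoff
  toolChoice: "auto", // let the LLM decide whether to call a tool
  maxSteps: 3, // cap the number of tool-call round trips
  onStepFinish: (step) => {
    // Runs once per completed LLM request, e.g. to track token spend.
    console.log(step.stepUsage, step.totalUsage);
  },
};
```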
BotInfo
interface BotInfo {
botId: string;
name: string;
introduction: string;
agentSetting: string;
welcomeMessage: string;
avatar: string;
background: string;
tags: Array<string>;
isNeedRecommend: boolean;
knowledgeBase: Array<string>;
type: string;
initQuestions: Array<string>;
enable: boolean;
}
IUserFeedback
interface IUserFeedback {
recordId: string;
type: string;
botId: string;
comment: string;
rating: number;
tags: Array<string>;
input: string;
aiAnswer: string;
}
ChatModelMessage
type ChatModelMessage =
| UserMessage
| SystemMessage
| AssistantMessage
| ToolMessage;
UserMessage
type UserMessage = {
role: "user";
content: string;
};
SystemMessage
type SystemMessage = {
role: "system";
content: string;
};
AssistantMessage
type AssistantMessage = {
role: "assistant";
content?: string;
tool_calls?: Array<ToolCall>;
};
ToolMessage
type ToolMessage = {
role: "tool";
tool_call_id: string;
content: string;
};
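The four message kinds above typically appear together in a single tool-calling exchange. A sketch (the content is illustrative):

```javascript
// One tool-calling round trip expressed as a ChatModelMessage array.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What's the weather in Beijing?" },
  {
    role: "assistant",
    // An assistant turn may carry tool_calls instead of content.
    tool_calls: [
      { id: "call_1", type: "function", function: { name: "get_weather", arguments: '{"city":"Beijing"}' } },
    ],
  },
  // The tool result is sent back, linked by tool_call_id.
  { role: "tool", tool_call_id: "call_1", content: "Clear skies" },
];
console.log(messages.length); // 4
```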
ToolCall
export type ToolCall = {
id: string;
type: string;
function: { name: string; arguments: string };
};
FunctionTool
Tool definition type.
type FunctionTool = {
name: string;
description: string;
fn: CallableFunction;
parameters: object;
};
| FunctionTool Property | Type | Description |
|---|---|---|
| name | string | Tool name. |
| description | string | Tool description. A clear tool description helps the LLM understand the tool's purpose. |
| fn | CallableFunction | Tool execution function. When the AI SDK parses the LLM's response as requiring this tool call, it calls this function and returns the result to the LLM. |
| parameters | object | Tool execution function parameters. Must be defined in JSON Schema format. |
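The parameters schema and fn must agree on argument names, or the LLM will produce arguments fn never reads. A quick sanity check in plain JS (no SDK needed; the tool shown is the hypothetical weather tool from earlier):

```javascript
const tool = {
  name: "get_weather",
  description: "Returns weather information for a city.",
  fn: ({ city }) => `Weather in ${city}`,
  parameters: {
    type: "object",
    properties: { city: { type: "string", description: "The city to query" } },
    required: ["city"],
  },
};

// Every required parameter should appear in properties; otherwise the
// JSON Schema demands arguments it never describes.
const ok = tool.parameters.required.every((k) => k in tool.parameters.properties);
console.log(ok); // true
```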
IOnStepFinish
Parameter type for the callback function triggered after LLM response.
interface IOnStepFinish {
messages: Array<ChatModelMessage>;
text?: string;
toolCall?: ToolCall;
toolResult?: unknown;
finishReason?: string;
stepUsage?: Usage;
totalUsage?: Usage;
}
| IOnStepFinish Property | Type | Description |
|---|---|---|
| messages | Array<ChatModelMessage> | Complete message list up to the current step. |
| text | string | Text from the current response. |
| toolCall | ToolCall | Tool called in the current response. |
| toolResult | unknown | Result of the corresponding tool call. |
| finishReason | string | Reason for LLM inference completion. |
| stepUsage | Usage | Tokens consumed in the current step. |
| totalUsage | Usage | Total tokens consumed up to the current step. |
Usage
type Usage = {
completion_tokens: number;
prompt_tokens: number;
total_tokens: number;
};
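When tracking spend across multiple steps (e.g. inside onStepFinish), Usage objects can be summed field by field. A hypothetical helper, not part of the SDK:

```javascript
// Accumulate two Usage objects (hypothetical helper, not an SDK export).
function addUsage(a, b) {
  return {
    prompt_tokens: a.prompt_tokens + b.prompt_tokens,
    completion_tokens: a.completion_tokens + b.completion_tokens,
    total_tokens: a.total_tokens + b.total_tokens,
  };
}

const total = addUsage(
  { prompt_tokens: 14, completion_tokens: 9, total_tokens: 23 },
  { prompt_tokens: 3, completion_tokens: 33, total_tokens: 36 }
);
console.log(total.total_tokens); // 59
```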
IConversation
Agent conversation.
interface IConversation {
id: string;
envId: string;
ownerUin: string;
userId: string;
conversationId: string;
title: string;
startTime: string; // date-time format
createTime: string;
updateTime: string;
}
HunyuanGenerateImageInput
Hunyuan image generation input parameters.
interface HunyuanGenerateImageInput {
model: 'hunyuan-image';
/** Text description used to generate the image */
prompt: string;
/** Model version, supports v1.8.1 and v1.9, default version v1.8.1 */
version?: 'v1.8.1' | 'v1.9' | (string & {});
/** Image size, default "1024x1024" */
size?: string;
/** v1.9 only, negative prompt */
negative_prompt?: string;
/** v1.9 only, can specify style */
style?: '古风二次元风格' | '都市二次元风格' | '悬疑风格' | '校园风格' | '都市异能风格' | (string & {});
/** When true, rewrites the prompt, default is true */
revise?: boolean;
/** Number of images to generate, default is 1 */
n?: number;
/** Custom business watermark content, limited to 16 characters */
footnote?: string;
/** Generation seed, range [1, 4294967295] */
seed?: number;
}
| HunyuanGenerateImageInput Property | Type | Description |
|---|---|---|
| model | string | Model name, must be hunyuan-image |
| prompt | string | Text description used to generate the image |
| version | string | Model version, supports v1.8.1 and v1.9, default version v1.8.1 |
| size | string | Image size, default "1024x1024" |
| negative_prompt | string | v1.9 only, negative prompt |
| style | string | v1.9 only, can specify style: Ancient Anime Style, Urban Anime Style, Suspense Style, Campus Style, Urban Fantasy Style |
| revise | boolean | When true, rewrites the prompt, default is true |
| n | number | Number of images to generate, default is 1 |
| footnote | string | Custom business watermark content, limited to 16 characters |
| seed | number | Generation seed, range [1, 4294967295] |
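Combining the fields above, a sketch of a v1.9 payload (values are illustrative; style accepts the Chinese literals listed in the interface):

```javascript
// Illustrative HunyuanGenerateImageInput payload for version v1.9.
const input = {
  model: "hunyuan-image",
  version: "v1.9",
  prompt: "A cute cat playing on the grass",
  negative_prompt: "blurry, low quality", // v1.9 only
  size: "1024x1024",
  revise: true, // let the service rewrite the prompt
  n: 1,
};
```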
HunyuanGenerateImageOutput
Hunyuan image generation output.
interface HunyuanGenerateImageOutput {
/** ID of this request */
id: string;
/** Unix timestamp */
created: number;
/** Returned image generation content */
data: Array<{
/** Generated image URL, valid for 24 hours */
url: string;
/** Text after original prompt revision. If revise is false, it's the original prompt */
revised_prompt?: string;
}>;
}
| HunyuanGenerateImageOutput Property | Type | Description |
|---|---|---|
| id | string | ID of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | Generated image URL, valid for 24 hours |
| data[n].revised_prompt | string | Text after original prompt revision. If revise is false, it's the original prompt |
HunyuanARGenerateImageInput
Hunyuan image generation v3.0 input parameters, supports custom aspect ratio.
interface HunyuanARGenerateImageInput {
/** Model name: hunyuan-image-v3.0-v1.0.4 (recommended) or hunyuan-image-v3.0-v1.0.1 */
model: 'hunyuan-image-v3.0-v1.0.4' | 'hunyuan-image-v3.0-v1.0.1';
/** Text for image generation, max 8192 characters */
prompt: string;
/**
* Image size, format "${width}x${height}", default "1024x1024"
* hunyuan-image-v3.0-v1.0.4: width/height range [512, 2048], area not exceeding 1024x1024
* hunyuan-image-v3.0-v1.0.1: supports fixed size list
*/
size?: string;
/** Generation seed, only effective when generating 1 image, range [1, 4294967295] */
seed?: number;
/** Custom watermark content, max 16 characters, displayed at bottom-right */
footnote?: string;
/** Whether to rewrite prompt, enabled by default. Rewriting adds ~30s latency */
revise?: { value: boolean };
/** Whether to enable thinking mode for rewriting, enabled by default. Improves quality but adds latency (max 60s) */
enable_thinking?: { value: boolean };
}
| HunyuanARGenerateImageInput Property | Type | Description |
|---|---|---|
| model | string | Model name, hunyuan-image-v3.0-v1.0.4 (recommended) or hunyuan-image-v3.0-v1.0.1 |
| prompt | string | Text for image generation, max 8192 characters |
| size | string | Image size, format "widthxheight", default "1024x1024", width/height range [512, 2048] |
| seed | number | Generation seed, only effective when generating 1 image, range [1, 4294967295] |
| footnote | string | Custom watermark content, max 16 characters, displayed at bottom-right |
| revise | { value: boolean } | Whether to rewrite prompt, enabled by default. Adds ~30s latency |
| enable_thinking | { value: boolean } | Whether to enable thinking mode, enabled by default. Improves quality but adds latency (max 60s) |
HunyuanARGenerateImageOutput
Hunyuan image generation v3.0 output.
interface HunyuanARGenerateImageOutput {
/** Request id */
id: string;
/** Unix timestamp */
created: number;
/** Generated image content */
data: Array<{
/** Generated image url, valid for 24 hours */
url: string;
/** Rewritten prompt */
revised_prompt?: string;
}>;
}
| HunyuanARGenerateImageOutput Property | Type | Description |
|---|---|---|
| id | string | Request id |
| created | number | Unix timestamp |
| data | Array<object> | Generated image content |
| data[n].url | string | Generated image url, valid for 24 hours |
| data[n].revised_prompt | string | Rewritten prompt |