AI SDK

Initialize SDK

For initialization methods, please refer to Initialize SDK.

app.ai

After initialization, you can use the ai method mounted on the cloudbase instance to create an AI instance for subsequent model creation.

Usage Example

const app = cloudbase.init({ env: "your-env" });
const ai = app.ai();

Type Declaration

function ai(): AI;

Return Value

AI

Returns a newly created AI instance.

AI

A class for creating AI models.

createModel()

Creates a specified AI model.

Usage Example

const model = ai.createModel("hunyuan-exp");

Type Declaration

function createModel(model: string): ChatModel;

Return Value

Returns a model instance that implements the ChatModel abstract class, providing AI text generation capabilities.

createImageModel()

Creates a specified image generation model.

Usage Example

const imageModel = ai.createImageModel("hunyuan-image");

Type Declaration

function createImageModel(provider: string): ImageModel;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| provider | Yes | string | Model provider name, e.g., "hunyuan-image" |

Return Value

Returns an ImageModel instance that provides AI image generation capabilities.

bot

An instance of the Bot class is mounted at ai.bot, providing a set of methods for interacting with Agents. For details, refer to the Bot class documentation.

Usage Example

const agentList = await ai.bot.list({ pageNumber: 1, pageSize: 10 });

registerFunctionTool()

Registers a function tool. When calling the LLM, you can inform it of available function tools. When the LLM's response is parsed as a tool call, the corresponding function tool will be automatically invoked.

Usage Example

// Omitting AI SDK initialization...

// 1. Define the weather tool, see FunctionTool type for details
const getWeatherTool = {
  name: "get_weather",
  description: "Returns weather information for a city. Example call: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `The weather in ${city} is: Clear autumn skies!!!`, // Define the tool execution content here
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "The city to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. Send a message to the LLM and inform it that a weather tool is available
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool], // Here we pass in the weather tool
  messages: [
    {
      role: "user",
      content: "Please tell me the weather in Beijing",
    },
  ],
});

console.log(result.text);

Type Declaration

function registerFunctionTool(functionTool: FunctionTool);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| functionTool | Yes | FunctionTool | See FunctionTool for details |

Return Value

undefined

generateText()

Calls the LLM to generate text, resolving with the complete result once generation has finished.

Usage Example

const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
console.log(res.text); // Print the generated text

Parameters

| Parameter | Required | Type | Example | Description |
| --- | --- | --- | --- | --- |
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | BaseChatModelInput is the basic input definition. In practice, different LLM providers accept their own provider-specific parameters, so developers may pass extra fields not defined in this type, following the LLM's official documentation, to make full use of the LLM's capabilities. Such extra parameters are passed through to the LLM interface without additional processing by the SDK. |

Return Value

| Property | Type | Example | Description |
| --- | --- | --- | --- |
| text | string | "Li Bai was a Tang Dynasty poet." | Text generated by the LLM. |
| rawResponses | unknown[] | [{"choices": [{"finish_reason": "stop","message": {"role": "assistant", "content": "Hello, is there anything I can help you with?"}}], "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23}}] | Complete responses from the LLM, containing more detailed data such as message creation time. Return values vary between LLMs, so use according to your situation. |
| messages | ChatModelMessage[] | [{role: 'user', content: 'Hello'},{role: 'assistant', content: 'Hello! Nice to chat with you. Is there anything I can help you with? Whether it's about life, work, study, or other areas, I'll do my best to assist you.'}] | Complete message list for this call. |
| usage | Usage | {"completion_tokens":33,"prompt_tokens":3,"total_tokens":36} | Tokens consumed in this call. |
| error | unknown | | Errors that occurred during the call. |

streamText()

Calls the LLM to generate text in streaming mode. During a streaming call, the generated text and other response data are returned via SSE. The return value wraps the SSE stream at different levels of encapsulation, so developers can consume either the plain text stream or the complete data stream as needed.

Usage Example

const hy = ai.createModel("hunyuan-exp"); // Create model
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});

for await (let str of res.textStream) {
  console.log(str); // Print the generated text
}
for await (let data of res.dataStream) {
  console.log(data); // Print the complete data for each response
}

Type Declaration

function streamText(data: BaseChatModelInput): Promise<StreamTextResult>;

Parameters

| Parameter | Required | Type | Example | Description |
| --- | --- | --- | --- | --- |
| data | Yes | BaseChatModelInput | {model: "hunyuan-lite", messages: [{ role: "user", content: "Hello, please introduce Li Bai" }]} | BaseChatModelInput is the basic input definition. In practice, different LLM providers accept their own provider-specific parameters, so developers may pass extra fields not defined in this type, following the LLM's official documentation, to make full use of the LLM's capabilities. Such extra parameters are passed through to the LLM interface without additional processing by the SDK. |

Return Value

StreamTextResult

| Property | Type | Description |
| --- | --- | --- |
| textStream | ReadableStream<string> | LLM-generated text returned as a stream. See the usage example for reading the incremental text. |
| dataStream | ReadableStream<DataChunk> | LLM response data returned as a stream. See the usage example for reading the incremental data. Response values vary between LLMs, so use according to your situation. |
| messages | Promise<ChatModelMessage[]> | Complete message list for this call. |
| usage | Promise<Usage> | Tokens consumed in this call. |
| error | unknown | Errors that occurred during this call. |

DataChunk

| Property | Type | Description |
| --- | --- | --- |
| choices | Array<object> | Response choices for this chunk. |
| choices[n].finish_reason | string | Reason for model inference termination. |
| choices[n].delta | ChatModelMessage | Incremental message for this chunk. |
| usage | Usage | Tokens consumed in this request. |
| rawResponse | unknown | Raw response from the LLM. |

Example

const hy = ai.createModel("hunyuan-exp");
const res = await hy.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "What is 1+1" }],
});

// Text stream
for await (let str of res.textStream) {
  console.log(str);
}
// 1
// plus
// 1
// equals
// 2
// .

// Data stream
for await (let str of res.dataStream) {
  console.log(str);
}

// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// {created: 1723013866, id: "a95a54b5c5d2144eb700e60d0dfa5c98", model: "hunyuan-lite", version: "202404011000", choices: Array(1), …}
// … (one object logged per chunk)
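
Conceptually, textStream is a projection of dataStream: each chunk's choices[n].delta.content, concatenated in order. The self-contained sketch below mocks the chunk stream with an async generator (this is an illustration of the chunk shape above, not the real SDK):

```javascript
// Mock of a DataChunk stream: an async generator yielding chunks shaped
// like the DataChunk table above (not produced by the real SDK).
async function* mockDataStream() {
  const pieces = ["1", " plus 1", " equals 2."];
  for (const p of pieces) {
    yield { choices: [{ delta: { role: "assistant", content: p }, finish_reason: null }] };
  }
  // Final chunk carries the finish reason and usage, with an empty delta.
  yield {
    choices: [{ delta: {}, finish_reason: "stop" }],
    usage: { prompt_tokens: 5, completion_tokens: 6, total_tokens: 11 },
  };
}

// Derive the full text from the data stream, as textStream does conceptually.
async function collectText(dataStream) {
  let text = "";
  for await (const chunk of dataStream) {
    const delta = chunk.choices?.[0]?.delta;
    if (delta?.content) text += delta.content;
  }
  return text;
}
```

Consuming the mock with `collectText(mockDataStream())` yields the concatenated answer text.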

ImageModel

This class describes the interface provided by AI image generation model classes.

generateImage()

Calls the LLM to generate images.

Usage Example

const imageModel = ai.createImageModel("hunyuan-image");
const res = await imageModel.generateImage({
  model: "hunyuan-image-v3.0-v1.0.4",
  prompt: "A cute cat playing on the grass",
});
console.log(res.data[0].url); // Print the generated image URL

Type Declaration

function generateImage(input: HunyuanARGenerateImageInput): Promise<HunyuanARGenerateImageOutput>;
function generateImage(input: HunyuanGenerateImageInput): Promise<HunyuanGenerateImageOutput>;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| input | Yes | HunyuanARGenerateImageInput \| HunyuanGenerateImageInput | Image generation parameters, see HunyuanARGenerateImageInput or HunyuanGenerateImageInput for details |

Return Value

Promise<HunyuanARGenerateImageOutput> or Promise<HunyuanGenerateImageOutput>

| Property | Type | Description |
| --- | --- | --- |
| id | string | ID of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | Generated image URL, valid for 24 hours |
| data[n].revised_prompt | string | Text after prompt revision (if revise is false, the original prompt) |

Bot

A class for interacting with Agents.

get()

Gets information about a specific Agent.

Usage Example

const res = await ai.bot.get({ botId: "botId-xxx" });
console.log(res);

Type Declaration

function get(props: { botId: string });

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | ID of the Agent to get info for |

Return Value

| Property | Type | Example | Description |
| --- | --- | --- | --- |
| botId | string | "bot-27973647" | Agent ID |
| name | string | "Translator" | Agent name |
| introduction | string | | Agent introduction |
| welcomeMessage | string | | Agent welcome message |
| avatar | string | | Agent avatar URL |
| background | string | | Agent chat background image URL |
| isNeedRecommend | boolean | | Whether to recommend questions after the Agent responds |
| type | string | | Agent type |

list()

Gets information about multiple Agents in batch.

Usage Example

await ai.bot.list({
  pageNumber: 1,
  pageSize: 10,
  name: "",
  enable: true,
  information: "",
  introduction: "",
});

Type Declaration

function list(props: {
  name: string;
  introduction: string;
  information: string;
  enable: boolean;
  pageSize: number;
  pageNumber: number;
});

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.pageNumber | Yes | number | Page index |
| props.pageSize | Yes | number | Page size |
| props.enable | Yes | boolean | Whether the Agent is enabled |
| props.name | Yes | string | Agent name, for fuzzy search |
| props.information | Yes | string | Agent information, for fuzzy search |
| props.introduction | Yes | string | Agent introduction, for fuzzy search |

Return Value

| Property | Type | Example | Description |
| --- | --- | --- | --- |
| total | number | | Total number of Agents |
| botList | Array<object> | | Agent list |
| botList[n].botId | string | "bot-27973647" | Agent ID |
| botList[n].name | string | "Translator" | Agent name |
| botList[n].introduction | string | | Agent introduction |
| botList[n].welcomeMessage | string | | Agent welcome message |
| botList[n].avatar | string | | Agent avatar URL |
| botList[n].background | string | | Agent chat background image URL |
| botList[n].isNeedRecommend | boolean | | Whether to recommend questions after the Agent responds |
| botList[n].type | string | | Agent type |
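
Because results are paged by pageNumber/pageSize and the response reports total, fetching every Agent takes a small loop. A hypothetical paging helper is sketched below; fetchPage stands in for the real ai.bot.list call so the sketch stays self-contained:

```javascript
// Hypothetical helper: walks pages until `total` items have been collected.
// `fetchPage` is a stand-in for `(props) => ai.bot.list(props)`.
async function listAllBots(fetchPage, pageSize = 10) {
  const all = [];
  let pageNumber = 1;
  while (true) {
    const { total, botList } = await fetchPage({ pageNumber, pageSize });
    all.push(...botList);
    if (all.length >= total || botList.length === 0) break; // done or defensive stop
    pageNumber += 1;
  }
  return all;
}
```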

sendMessage()

Sends a message to an Agent for conversation. The response is returned via SSE, and this interface's return value provides different levels of encapsulation for SSE, allowing developers to obtain text streams and complete data streams according to their needs.

Usage Example

const res = await ai.bot.sendMessage({
  botId: "botId-xxx",
  history: [{ content: "You are Li Bai.", role: "user" }],
  msg: "Hello",
});
for await (let str of res.textStream) {
  console.log(str);
}
for await (let data of res.dataStream) {
  console.log(data);
}

Type Declaration

function sendMessage(props: {
  botId: string;
  msg: string;
  history: Array<{
    role: string;
    content: string;
  }>;
}): Promise<StreamResult>;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.msg | Yes | string | Message to send in this conversation |
| props.history | Yes | Array<object> | Chat history before this conversation |
| props.history[n].role | Yes | string | Role of the message sender |
| props.history[n].content | Yes | string | Content of the message |

Return Value

Promise<StreamResult>

StreamResult

| Property | Type | Description |
| --- | --- | --- |
| textStream | AsyncIterable<string> | Agent-generated text returned as a stream. See the usage example for reading the incremental text. |
| dataStream | AsyncIterable<AgentStreamChunk> | Agent-generated data returned as a stream. See the usage example for reading the incremental data. |

AgentStreamChunk

| Property | Type | Description |
| --- | --- | --- |
| created | number | Conversation timestamp |
| record_id | string | Conversation record ID |
| model | string | LLM type |
| version | string | LLM version |
| type | string | Reply type: text (main answer content), thinking (thinking process), search (search results), knowledge (knowledge base) |
| role | string | Conversation role, fixed as assistant in responses |
| content | string | Conversation content |
| finish_reason | string | Conversation end flag: continue means the conversation is ongoing, stop means it has ended |
| reasoning_content | string | Deep thinking content (only non-empty for deepseek-r1) |
| usage | object | Token usage |
| usage.prompt_tokens | number | Number of prompt tokens; unchanged across multiple returns |
| usage.completion_tokens | number | Total completion tokens. In streaming returns, the cumulative total of all completion tokens so far |
| usage.total_tokens | number | Sum of prompt_tokens and completion_tokens |
| knowledge_base | string[] | Knowledge bases used in the conversation |
| search_info | object | Search result information; requires web search to be enabled |
| search_info.search_results | object[] | Search citation information |
| search_info.search_results[n].index | string | Search citation index |
| search_info.search_results[n].title | string | Search citation title |
| search_info.search_results[n].url | string | Search citation URL |
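
Since each chunk is tagged with type, a consumer can separate the main answer from the thinking process. A self-contained sketch over sample chunks (shapes follow the AgentStreamChunk table above):

```javascript
// Split Agent stream chunks into answer text and thinking text by `type`.
function splitByType(chunks) {
  const out = { text: "", thinking: "" };
  for (const c of chunks) {
    if (c.type === "text") out.text += c.content;
    else if (c.type === "thinking") out.thinking += c.content;
  }
  return out;
}

// Sample chunks, illustrative values only.
const sample = [
  { type: "thinking", role: "assistant", content: "The user greets me." },
  { type: "text", role: "assistant", content: "Hello", finish_reason: "continue" },
  { type: "text", role: "assistant", content: "!", finish_reason: "stop" },
];
// splitByType(sample).text === "Hello!"
```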

getChatRecords()

Gets chat records.

Usage Example

await ai.bot.getChatRecords({
  botId: "botId-xxx",
  pageNumber: 1,
  pageSize: 10,
  sort: "asc",
});

Type Declaration

function getChatRecords(props: {
  botId: string;
  sort: string;
  pageSize: number;
  pageNumber: number;
});

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.sort | Yes | string | Sort order, e.g., "asc" |
| props.pageSize | Yes | number | Page size |
| props.pageNumber | Yes | number | Page index |

Return Value

| Property | Type | Description |
| --- | --- | --- |
| total | number | Total number of conversations |
| recordList | Array<object> | Conversation list |
| recordList[n].botId | string | Agent ID |
| recordList[n].recordId | string | Conversation ID, system-generated |
| recordList[n].role | string | Role in conversation |
| recordList[n].content | string | Conversation content |
| recordList[n].conversation | string | User identifier |
| recordList[n].type | string | Conversation data type |
| recordList[n].image | string | Image URL generated in conversation |
| recordList[n].triggerSrc | string | Conversation trigger source |
| recordList[n].replyTo | string | Record ID being replied to |
| recordList[n].createTime | string | Conversation time |

sendFeedback()

Sends feedback for a specific chat record.

Usage Example

const res = await ai.bot.sendFeedback({
  userFeedback: {
    botId: "botId-xxx",
    recordId: "recordId-xxx",
    comment: "Excellent",
    rating: 5,
    tags: ["Beautiful"],
    aiAnswer: "falling petals",
    input: "Give me an idiom",
    type: "upvote",
  },
  botId: "botId-xxx",
});

Type Declaration

function sendFeedback(props: { userFeedback: IUserFeedback; botId: string });

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.userFeedback | Yes | IUserFeedback | User feedback, see the IUserFeedback type definition |
| props.botId | Yes | string | ID of the Agent to provide feedback for |

getFeedback()

Gets existing feedback information.

Usage Example

const res = await ai.bot.getFeedback({
  botId: "botId-xxx",
  from: 0,
  to: 0,
  maxRating: 4,
  minRating: 3,
  pageNumber: 1,
  pageSize: 10,
  sender: "user-a",
  senderFilter: "include",
  type: "upvote",
});

Type Declaration

function getFeedback(props: {
  botId: string;
  type: string;
  sender: string;
  senderFilter: string;
  minRating: number;
  maxRating: number;
  from: number;
  to: number;
  pageSize: number;
  pageNumber: number;
});

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.type | Yes | string | User feedback type, upvote or downvote |
| props.sender | Yes | string | Feedback creator user |
| props.senderFilter | Yes | string | Feedback creator user filter: include, exclude, equal, unequal, prefix |
| props.minRating | Yes | number | Minimum rating |
| props.maxRating | Yes | number | Maximum rating |
| props.from | Yes | number | Start timestamp |
| props.to | Yes | number | End timestamp |
| props.pageSize | Yes | number | Page size |
| props.pageNumber | Yes | number | Page index |

Return Value

| Property | Type | Description |
| --- | --- | --- |
| feedbackList | object[] | Feedback query results |
| feedbackList[n].recordId | string | Conversation record ID |
| feedbackList[n].type | string | User feedback type, upvote or downvote |
| feedbackList[n].botId | string | Agent ID |
| feedbackList[n].comment | string | User comment |
| feedbackList[n].rating | number | User rating |
| feedbackList[n].tags | string[] | User feedback tags array |
| feedbackList[n].input | string | User input question |
| feedbackList[n].aiAnswer | string | Agent's answer |
| total | number | Total number of feedback entries |
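
A small sketch aggregating a feedback result of the shape above into an upvote count and an average rating (an illustrative helper, not part of the SDK):

```javascript
// Summarize a feedbackList page: count upvotes and average the ratings.
function summarizeFeedback(feedbackList) {
  const upvotes = feedbackList.filter((f) => f.type === "upvote").length;
  const avgRating =
    feedbackList.reduce((sum, f) => sum + f.rating, 0) / (feedbackList.length || 1);
  return { upvotes, avgRating };
}
```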

uploadFiles()

Uploads files from cloud storage to an Agent for document-based chat.

Usage Example

// Upload files
await ai.bot.uploadFiles({
  botId: "botId-xxx",
  fileList: [
    {
      fileId: "cloud://xxx.docx",
      fileName: "xxx.docx",
      type: "file",
    },
  ],
});

// Conduct document-based chat
const res = await ai.bot.sendMessage({
  botId: "your-bot-id",
  msg: "What is the content of this file",
  files: ["xxx.docx"], // Array of file fileIds
});

for await (let text of res.textStream) {
  console.log(text);
}

Type Declaration

function uploadFiles(props: {
  botId: string;
  fileList: Array<{
    fileId: string;
    fileName: string;
    type: "file";
  }>;
});

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.fileList | Yes | Array<object> | File list |
| props.fileList[n].fileId | Yes | string | Cloud storage file ID |
| props.fileList[n].fileName | Yes | string | File name |
| props.fileList[n].type | Yes | string | Currently only supports "file" |

getRecommendQuestions()

Gets recommended questions.

Usage Example

const res = await ai.bot.getRecommendQuestions({
  botId: "botId-xxx",
  history: [{ content: "Who are you", role: "user" }],
  msg: "Hello",
  agentSetting: "",
  introduction: "",
  name: "",
});

for await (let str of res.textStream) {
  console.log(str);
}

Type Declaration

function getRecommendQuestions(props: {
  botId: string;
  name: string;
  introduction: string;
  agentSetting: string;
  msg: string;
  history: Array<{
    role: string;
    content: string;
  }>;
}): Promise<StreamResult>;

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.name | Yes | string | Agent name |
| props.introduction | Yes | string | Agent introduction |
| props.agentSetting | Yes | string | Agent settings |
| props.msg | Yes | string | User message |
| props.history | Yes | Array<object> | History messages |
| props.history[n].role | Yes | string | History message role |
| props.history[n].content | Yes | string | History message content |

Return Value

Promise<StreamResult>

StreamResult

| Property | Type | Description |
| --- | --- | --- |
| textStream | AsyncIterable<string> | Agent-generated text returned as a stream. See the usage example for reading the incremental text. |
| dataStream | AsyncIterable<AgentStreamChunk> | Agent-generated data returned as a stream. See the usage example for reading the incremental data. |

AgentStreamChunk

| Property | Type | Description |
| --- | --- | --- |
| created | number | Conversation timestamp |
| record_id | string | Conversation record ID |
| model | string | LLM type |
| version | string | LLM version |
| type | string | Reply type: text (main answer content), thinking (thinking process), search (search results), knowledge (knowledge base) |
| role | string | Conversation role, fixed as assistant in responses |
| content | string | Conversation content |
| finish_reason | string | Conversation end flag: continue means the conversation is ongoing, stop means it has ended |
| reasoning_content | string | Deep thinking content (only non-empty for deepseek-r1) |
| usage | object | Token usage |
| usage.prompt_tokens | number | Number of prompt tokens; unchanged across multiple returns |
| usage.completion_tokens | number | Total completion tokens. In streaming returns, the cumulative total of all completion tokens so far |
| usage.total_tokens | number | Sum of prompt_tokens and completion_tokens |
| knowledge_base | string[] | Knowledge bases used in the conversation |
| search_info | object | Search result information; requires web search to be enabled |
| search_info.search_results | object[] | Search citation information |
| search_info.search_results[n].index | string | Search citation index |
| search_info.search_results[n].title | string | Search citation title |
| search_info.search_results[n].url | string | Search citation URL |

createConversation()

Creates a new conversation with an Agent.

Usage Example

const res = await ai.bot.createConversation({
  botId: "botId-xxx",
  title: "My Conversation",
});

Type Declaration

function createConversation(props: IBotCreateConversation);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.title | No | string | Conversation title |

Return Value

Promise<IConversation>

See related type: IConversation

getConversation()

Gets the conversation list.

Usage Example

const res = await ai.bot.getConversation({
  botId: "botId-xxx",
  pageSize: 10,
  pageNumber: 1,
  isDefault: false,
});

Type Declaration

function getConversation(props: IBotGetConversation);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.pageSize | No | number | Page size, default 10 |
| props.pageNumber | No | number | Page index, default 1 |
| props.isDefault | No | boolean | Whether to only get the default conversation |

deleteConversation()

Deletes a specified conversation.

Usage Example

await ai.bot.deleteConversation({
  botId: "botId-xxx",
  conversationId: "conv-123",
});

Type Declaration

function deleteConversation(props: IBotDeleteConversation);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.conversationId | Yes | string | ID of the conversation to delete |

speechToText()

Converts speech to text.

Usage Example

const res = await ai.bot.speechToText({
  botId: "botId-xxx",
  engSerViceType: "16k_zh",
  voiceFormat: "mp3",
  url: "https://example.com/audio.mp3",
});

Type Declaration

function speechToText(props: IBotSpeechToText);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.engSerViceType | Yes | string | Engine type, e.g., "16k_zh" |
| props.voiceFormat | Yes | string | Audio format, e.g., "mp3" |
| props.url | Yes | string | Audio file URL |
| props.isPreview | No | boolean | Whether in preview mode |

textToSpeech()

Converts text to speech.

Usage Example

const res = await ai.bot.textToSpeech({
  botId: "botId-xxx",
  voiceType: 1,
  text: "Hello, I am an AI assistant",
});

Type Declaration

function textToSpeech(props: IBotTextToSpeech);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.voiceType | Yes | number | Voice type |
| props.text | Yes | string | Text to convert |
| props.isPreview | No | boolean | Whether in preview mode |

getTextToSpeechResult()

Gets the text-to-speech result.

Usage Example

const res = await ai.bot.getTextToSpeechResult({
  botId: "botId-xxx",
  taskId: "task-123",
});

Type Declaration

function getTextToSpeechResult(props: IBotGetTextToSpeechResult);

Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| props.botId | Yes | string | Agent ID |
| props.taskId | Yes | string | Task ID |
| props.isPreview | No | boolean | Whether in preview mode |

IBotCreateConversation

interface IBotCreateConversation {
  botId: string;
  title?: string;
}

IBotGetConversation

interface IBotGetConversation {
  botId: string;
  pageSize?: number;
  pageNumber?: number;
  isDefault?: boolean;
}

IBotDeleteConversation

interface IBotDeleteConversation {
  botId: string;
  conversationId: string;
}

IBotSpeechToText

interface IBotSpeechToText {
  botId: string;
  engSerViceType: string;
  voiceFormat: string;
  url: string;
  isPreview?: boolean;
}

IBotTextToSpeech

interface IBotTextToSpeech {
  botId: string;
  voiceType: number;
  text: string;
  isPreview?: boolean;
}

IBotGetTextToSpeechResult

interface IBotGetTextToSpeechResult {
  botId: string;
  taskId: string;
  isPreview?: boolean;
}

BaseChatModelInput

interface BaseChatModelInput {
  model: string;
  messages: Array<ChatModelMessage>;
  temperature?: number;
  topP?: number;
  tools?: Array<FunctionTool>;
  toolChoice?: "none" | "auto" | "custom";
  maxSteps?: number;
  onStepFinish?: (prop: IOnStepFinish) => unknown;
}

BaseChatModelInput

| Property | Type | Description |
| --- | --- | --- |
| model | string | Model name. |
| messages | Array<ChatModelMessage> | Message list. |
| temperature | number | Sampling temperature; controls output randomness. |
| topP | number | Nucleus sampling: the model considers only the tokens within the top_p probability mass. |
| tools | Array<FunctionTool> | List of tools available to the LLM. |
| toolChoice | string | Specifies how the LLM selects tools. |
| maxSteps | number | Maximum number of LLM requests. |
| onStepFinish | (prop: IOnStepFinish) => unknown | Callback triggered when an LLM request completes. |
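
To illustrate maxSteps and onStepFinish, the sketch below builds an input whose callback tallies token usage across steps. The callback invocations are simulated here rather than driven by a real model call:

```javascript
// Running totals across steps; onStepFinish adds each step's Usage to them.
const totals = { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 };

const input = {
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.7,
  maxSteps: 3, // allow up to 3 LLM requests (e.g., for tool-call round trips)
  onStepFinish: ({ stepUsage }) => {
    if (!stepUsage) return;
    totals.prompt_tokens += stepUsage.prompt_tokens;
    totals.completion_tokens += stepUsage.completion_tokens;
    totals.total_tokens += stepUsage.total_tokens;
  },
};

// Simulate two completed steps (the SDK would invoke the callback itself).
input.onStepFinish({ messages: [], stepUsage: { prompt_tokens: 3, completion_tokens: 5, total_tokens: 8 } });
input.onStepFinish({ messages: [], stepUsage: { prompt_tokens: 8, completion_tokens: 4, total_tokens: 12 } });
```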

BotInfo

interface BotInfo {
  botId: string;
  name: string;
  introduction: string;
  agentSetting: string;
  welcomeMessage: string;
  avatar: string;
  background: string;
  tags: Array<string>;
  isNeedRecommend: boolean;
  knowledgeBase: Array<string>;
  type: string;
  initQuestions: Array<string>;
  enable: boolean;
}

IUserFeedback

interface IUserFeedback {
  recordId: string;
  type: string;
  botId: string;
  comment: string;
  rating: number;
  tags: Array<string>;
  input: string;
  aiAnswer: string;
}

ChatModelMessage

type ChatModelMessage =
  | UserMessage
  | SystemMessage
  | AssistantMessage
  | ToolMessage;

UserMessage

type UserMessage = {
  role: "user";
  content: string;
};

SystemMessage

type SystemMessage = {
  role: "system";
  content: string;
};

AssistantMessage

type AssistantMessage = {
  role: "assistant";
  content?: string;
  tool_calls?: Array<ToolCall>;
};

ToolMessage

type ToolMessage = {
  role: "tool";
  tool_call_id: string;
  content: string;
};

ToolCall

export type ToolCall = {
  id: string;
  type: string;
  function: { name: string; arguments: string };
};
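
To illustrate how these message types fit together, here is a plausible message list for one tool round trip; the tool_call_id of the ToolMessage must match the id of the assistant's ToolCall (values are illustrative):

```javascript
// An assistant message requesting a tool call, followed by the ToolMessage
// carrying the tool's result back, linked by tool_call_id.
const toolCall = {
  id: "call_abc",
  type: "function",
  function: { name: "get_weather", arguments: '{"city":"Beijing"}' },
};

const messages = [
  { role: "user", content: "Please tell me the weather in Beijing" },
  { role: "assistant", tool_calls: [toolCall] },
  { role: "tool", tool_call_id: toolCall.id, content: "Clear skies" },
];
```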

FunctionTool

Tool definition type.

type FunctionTool = {
  name: string;
  description: string;
  fn: CallableFunction;
  parameters: object;
};

FunctionTool

| Property | Type | Description |
| --- | --- | --- |
| name | string | Tool name. |
| description | string | Tool description. A clear description helps the LLM understand the tool's purpose. |
| fn | CallableFunction | Tool execution function. When the AI SDK parses the LLM's response as requiring this tool, it calls this function and returns the result to the LLM. |
| parameters | object | Tool execution function parameters. Must be defined in JSON Schema format. |
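
The dispatch described above can be sketched as a minimal registry: the tool is looked up by ToolCall.function.name, and its JSON-encoded arguments string is parsed before invoking fn. This is an illustration, not the SDK's actual internals:

```javascript
// Minimal tool registry: register FunctionTools by name, then dispatch a
// parsed ToolCall to the matching tool's fn.
const registry = new Map();

function registerFunctionTool(tool) {
  registry.set(tool.name, tool);
}

function dispatchToolCall(toolCall) {
  const tool = registry.get(toolCall.function.name);
  if (!tool) throw new Error(`unknown tool: ${toolCall.function.name}`);
  // ToolCall.function.arguments is a JSON string (see the ToolCall type).
  return tool.fn(JSON.parse(toolCall.function.arguments));
}

registerFunctionTool({
  name: "get_weather",
  description: "Returns weather information for a city.",
  fn: ({ city }) => `The weather in ${city} is sunny`,
  parameters: { type: "object", properties: { city: { type: "string" } }, required: ["city"] },
});

const result = dispatchToolCall({
  id: "call-1",
  type: "function",
  function: { name: "get_weather", arguments: '{"city":"Beijing"}' },
});
```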

IOnStepFinish

Parameter type for the callback function triggered after LLM response.

interface IOnStepFinish {
  messages: Array<ChatModelMessage>;
  text?: string;
  toolCall?: ToolCall;
  toolResult?: unknown;
  finishReason?: string;
  stepUsage?: Usage;
  totalUsage?: Usage;
}

IOnStepFinish

| Property | Type | Description |
| --- | --- | --- |
| messages | Array<ChatModelMessage> | Complete message list up to the current step. |
| text | string | Text from the current response. |
| toolCall | ToolCall | Tool called in the current response. |
| toolResult | unknown | Result of the corresponding tool call. |
| finishReason | string | Reason for LLM inference completion. |
| stepUsage | Usage | Tokens consumed in the current step. |
| totalUsage | Usage | Total tokens consumed up to the current step. |

Usage

type Usage = {
  completion_tokens: number;
  prompt_tokens: number;
  total_tokens: number;
};

IConversation

Agent conversation.

interface IConversation {
  id: string;
  envId: string;
  ownerUin: string;
  userId: string;
  conversationId: string;
  title: string;
  startTime: string; // date-time format
  createTime: string;
  updateTime: string;
}

HunyuanGenerateImageInput

Hunyuan image generation input parameters.

interface HunyuanGenerateImageInput {
  model: 'hunyuan-image';
  /** Text description used to generate the image */
  prompt: string;
  /** Model version, supports v1.8.1 and v1.9, default v1.8.1 */
  version?: 'v1.8.1' | 'v1.9' | (string & {});
  /** Image size, default "1024x1024" */
  size?: string;
  /** v1.9 only, negative prompt */
  negative_prompt?: string;
  /** v1.9 only, can specify style */
  style?: '古风二次元风格' | '都市二次元风格' | '悬疑风格' | '校园风格' | '都市异能风格' | (string & {});
  /** When true, rewrites the prompt, default true */
  revise?: boolean;
  /** Number of images to generate, default 1 */
  n?: number;
  /** Custom business watermark content, limited to 16 characters */
  footnote?: string;
  /** Generation seed, range [1, 4294967295] */
  seed?: number;
}

HunyuanGenerateImageInput

| Property | Type | Description |
| --- | --- | --- |
| model | string | Model name, must be hunyuan-image |
| prompt | string | Text description used to generate the image |
| version | string | Model version, supports v1.8.1 and v1.9, default v1.8.1 |
| size | string | Image size, default "1024x1024" |
| negative_prompt | string | v1.9 only, negative prompt |
| style | string | v1.9 only, style to use: 古风二次元风格 (Ancient Anime), 都市二次元风格 (Urban Anime), 悬疑风格 (Suspense), 校园风格 (Campus), 都市异能风格 (Urban Fantasy) |
| revise | boolean | When true, rewrites the prompt; default true |
| n | number | Number of images to generate, default 1 |
| footnote | string | Custom business watermark content, limited to 16 characters |
| seed | number | Generation seed, range [1, 4294967295] |

HunyuanGenerateImageOutput

Hunyuan image generation output.

interface HunyuanGenerateImageOutput {
  /** ID of this request */
  id: string;
  /** Unix timestamp */
  created: number;
  /** Returned image generation content */
  data: Array<{
    /** Generated image URL, valid for 24 hours */
    url: string;
    /** Text after original prompt revision. If revise is false, it's the original prompt */
    revised_prompt?: string;
  }>;
}

HunyuanGenerateImageOutput

| Property | Type | Description |
| --- | --- | --- |
| id | string | ID of this request |
| created | number | Unix timestamp |
| data | Array<object> | Returned image generation content |
| data[n].url | string | Generated image URL, valid for 24 hours |
| data[n].revised_prompt | string | Text after original prompt revision. If revise is false, it's the original prompt |

HunyuanARGenerateImageInput

Hunyuan image generation v3.0 input parameters, supports custom aspect ratio.

interface HunyuanARGenerateImageInput {
  /** Model name: hunyuan-image-v3.0-v1.0.4 (recommended) or hunyuan-image-v3.0-v1.0.1 */
  model: 'hunyuan-image-v3.0-v1.0.4' | 'hunyuan-image-v3.0-v1.0.1';
  /** Text for image generation, max 8192 characters */
  prompt: string;
  /**
   * Image size, format "${width}x${height}", default "1024x1024"
   * hunyuan-image-v3.0-v1.0.4: width/height range [512, 2048], area not exceeding 1024x1024
   * hunyuan-image-v3.0-v1.0.1: supports a fixed list of sizes
   */
  size?: string;
  /** Generation seed, only effective when generating 1 image, range [1, 4294967295] */
  seed?: number;
  /** Custom watermark content, max 16 characters, displayed at bottom-right */
  footnote?: string;
  /** Whether to rewrite the prompt, enabled by default. Rewriting adds ~30s latency */
  revise?: { value: boolean };
  /** Whether to enable thinking mode for rewriting, enabled by default. Improves quality but adds latency (max 60s) */
  enable_thinking?: { value: boolean };
}

HunyuanARGenerateImageInput

| Property | Type | Description |
| --- | --- | --- |
| model | string | Model name, hunyuan-image-v3.0-v1.0.4 (recommended) or hunyuan-image-v3.0-v1.0.1 |
| prompt | string | Text for image generation, max 8192 characters |
| size | string | Image size, format "${width}x${height}", default "1024x1024", width/height range [512, 2048] |
| seed | number | Generation seed, only effective when generating 1 image, range [1, 4294967295] |
| footnote | string | Custom watermark content, max 16 characters, displayed at bottom-right |
| revise | { value: boolean } | Whether to rewrite the prompt, enabled by default. Adds ~30s latency |
| enable_thinking | { value: boolean } | Whether to enable thinking mode, enabled by default. Improves quality but adds latency (max 60s) |
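
The size constraints for hunyuan-image-v3.0-v1.0.4 can be checked client-side before sending a request. This validator is an illustration of the documented rules (format "${width}x${height}", each side in [512, 2048], area no larger than 1024 × 1024), not part of the SDK; v1.0.1's fixed size list is not covered here:

```javascript
// Validate a size string against the documented v1.0.4 constraints.
function isValidV104Size(size) {
  const m = /^(\d+)x(\d+)$/.exec(size);
  if (!m) return false; // must be "${width}x${height}"
  const width = Number(m[1]);
  const height = Number(m[2]);
  const inRange = (v) => v >= 512 && v <= 2048;
  return inRange(width) && inRange(height) && width * height <= 1024 * 1024;
}
```

Note that the range and area limits interact: "2048x1024" has valid side lengths but exceeds the 1024 × 1024 area cap.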

HunyuanARGenerateImageOutput

Hunyuan image generation v3.0 output.

interface HunyuanARGenerateImageOutput {
  /** Request id */
  id: string;
  /** Unix timestamp */
  created: number;
  /** Generated image content */
  data: Array<{
    /** Generated image url, valid for 24 hours */
    url: string;
    /** Rewritten prompt */
    revised_prompt?: string;
  }>;
}

HunyuanARGenerateImageOutput

| Property | Type | Description |
| --- | --- | --- |
| id | string | Request id |
| created | number | Unix timestamp |
| data | Array<object> | Generated image content |
| data[n].url | string | Generated image url, valid for 24 hours |
| data[n].revised_prompt | string | Rewritten prompt |