
Web SDK Calling

Call CloudBase AI large models in Web applications through @cloudbase/js-sdk, supporting text generation, streaming output, image understanding, and other capabilities.

Installation

```bash
npm install @cloudbase/js-sdk
```

Initialization

```javascript
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>" // Get from console
});

// Get AI instance
const ai = app.ai();
```

Publishable Key location: CloudBase Console → Environment Configuration → API Key

Text Generation

generateText() - Non-streaming

Returns the complete result in a single response.

```javascript
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Introduce Li Bai" }],
});

console.log(result.text); // Generated text
console.log(result.usage); // Token usage
console.log(result.messages); // Complete message history
```

Return Value

| Property | Type | Description |
| --- | --- | --- |
| text | string | Generated text |
| messages | ChatModelMessage[] | Complete message history |
| usage | Usage | Token usage |
| rawResponses | unknown[] | Raw model responses |
| error | unknown | Error information (if any) |
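Per the table above, failures surface on the error property rather than necessarily being thrown. A minimal guard helper, assuming that behavior (the helper is illustrative, not part of the SDK):

```javascript
// Return the generated text, or throw if the call reported an error.
// Assumes failures surface on result.error, per the table above;
// this helper is illustrative, not an SDK API.
function unwrapText(result) {
  if (result.error) {
    throw new Error(`Generation failed: ${String(result.error)}`);
  }
  return result.text;
}
```

For example, after the generateText call above: const text = unwrapText(result);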

streamText() - Streaming

Returns text as a stream, suitable for real-time conversation scenarios.

```javascript
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Introduce Li Bai" }],
});

// Method 1: Iterate the text stream (recommended)
for await (const text of res.textStream) {
  console.log(text); // Incremental text
}

// Method 2: Iterate the data stream to get complete response data
for await (const data of res.dataStream) {
  console.log(data); // Contains choices, usage, and other complete information
}

// Get final result
const messages = await res.messages;
const usage = await res.usage;
```

Return Value

| Property | Type | Description |
| --- | --- | --- |
| textStream | AsyncIterable\<string> | Incremental text stream |
| dataStream | AsyncIterable\<DataChunk> | Complete data stream |
| messages | Promise\<ChatModelMessage[]> | Final message history |
| usage | Promise\<Usage> | Final Token usage |

Image Understanding (Vision)

Use vision-capable models (like hunyuan-vision) to understand image content.

Tip: You must first configure the custom model hunyuan-custom with Hunyuan's BaseURL and API Key.

```javascript
const model = ai.createModel("hunyuan-custom");

const res = await model.streamText({
  model: "hunyuan-vision",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What is the content of this image?" },
        {
          type: "image_url",
          image_url: {
            url: "https://example.com/image.png"
          }
        }
      ]
    }
  ]
});

for await (const text of res.textStream) {
  console.log(text);
}
```

Message Content Format

When using image understanding, the content field is an array of parts:

Text Content:

```typescript
interface TextContent {
  type: "text";
  text: string;
}
```

Image Content:

```typescript
interface ImageContent {
  type: "image_url";
  image_url: {
    url: string; // Image URL
  };
}
```
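Assembling multi-part content by hand is verbose. A small hypothetical helper (not part of the SDK) that builds a user message matching the TextContent and ImageContent shapes above:

```javascript
// Build a vision user message: one text part followed by one
// image_url part per URL. Hypothetical convenience helper, not an SDK API.
function buildVisionMessage(question, imageUrls) {
  return {
    role: "user",
    content: [
      { type: "text", text: question },
      ...imageUrls.map((url) => ({ type: "image_url", image_url: { url } })),
    ],
  };
}
```

The returned object can be placed directly in the messages array of a streamText call.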

Image Generation

Note: Image generation is only available in the Node SDK; the Web SDK does not support it yet. To use it from a Web application, call it through a cloud function. See Node SDK Calling - Image Generation for details.
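With the cloud-function route, the Web side only needs app.callFunction. In this sketch the function name gen-image and the response shape are assumptions; your cloud function, which wraps the Node SDK's image generation, defines both:

```javascript
// Invoke a cloud function that wraps the Node SDK's image generation.
// "gen-image" and the shape of res.result are assumptions for
// illustration; your own cloud function defines them.
async function generateImageViaFunction(app, prompt) {
  const res = await app.callFunction({
    name: "gen-image",
    data: { prompt },
  });
  return res.result;
}
```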

Complete Example

Chat Application

```javascript
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>"
});

async function chat(userInput, history = []) {
  const auth = app.auth();
  await auth.signInAnonymously();

  const ai = app.ai();
  const model = ai.createModel("hunyuan-exp");

  const messages = [
    ...history,
    { role: "user", content: userInput }
  ];

  const res = await model.streamText({
    model: "hunyuan-turbos-latest",
    messages
  });

  let fullText = "";
  for await (const text of res.textStream) {
    fullText += text;
    // Update UI
    document.getElementById("output").textContent = fullText;
  }

  return fullText;
}
```

Type Definitions

BaseChatModelInput

```typescript
interface BaseChatModelInput {
  model: string; // Model name
  messages: ChatModelMessage[]; // Message list
  temperature?: number; // Sampling temperature
  topP?: number; // Top-p sampling
}

type ChatModelMessage =
  | { role: "user"; content: string | ContentPart[] }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };
```
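The three message variants compose into a conversation; a hypothetical helper (not part of the SDK) that prepends a system prompt and appends the latest user input:

```javascript
// Build a ChatModelMessage[]: system prompt first, then prior turns,
// then the new user input. Hypothetical helper, not an SDK API.
function buildMessages(systemPrompt, history, userInput) {
  return [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: userInput },
  ];
}
```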

Usage

```typescript
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```