# Mini Program Integration

WeChat Mini Programs call CloudBase AI models through `wx.cloud.extend.AI`; no additional SDK installation is required.
## Prerequisites

- WeChat base library version ≥ 3.7.1
- CloudBase environment activated
- AI model configured (see the Model Configuration Guide)
## Initialization

Initialize CloudBase in `app.js`:

```js
App({
  onLaunch() {
    wx.cloud.init({
      env: "<YOUR_ENV_ID>"
    });
  }
});
```
## Text Generation

### generateText() - Non-streaming

Returns the complete result in a single response; suitable for short text generation.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Introduce Li Bai" }],
});

// The return value is the raw model response
console.log(res.choices[0].message.content);
console.log(res.usage); // { prompt_tokens, completion_tokens, total_tokens }
```
#### Return Value

```ts
interface GenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
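Consuming a response with this shape can be sketched as follows. The `firstChoice` helper and the mock response are illustrative, not part of the SDK, and the `finish_reason === "length"` truncation check follows the common chat.completion convention, so verify it against real responses.

```javascript
// Hypothetical helper (not part of the SDK): pull out the assistant
// text and flag truncation via finish_reason, following the common
// chat.completion convention ("stop" = complete, "length" = cut off).
function firstChoice(res) {
  const choice = res.choices[0];
  return {
    content: choice.message.content,
    truncated: choice.finish_reason === "length",
  };
}

// Mock response shaped like GenerateTextResponse above
const mockRes = {
  id: "cmpl-123",
  object: "chat.completion",
  created: 0,
  model: "hunyuan-turbos-latest",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "Li Bai was a Tang dynasty poet." },
      finish_reason: "stop"
    }
  ],
  usage: { prompt_tokens: 8, completion_tokens: 9, total_tokens: 17 }
};

console.log(firstChoice(mockRes).content); // Li Bai was a Tang dynasty poet.
console.log(firstChoice(mockRes).truncated); // false
```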
### streamText() - Streaming

Returns text incrementally as a stream; suitable for real-time conversations and long text generation. Note that, unlike `generateText()`, the request parameters are wrapped in a `data` object.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.streamText({
  data: {
    model: "hunyuan-turbos-latest",
    messages: [{ role: "user", content: "Introduce Li Bai" }],
  }
});

// Use textStream to read incremental text
for await (const text of res.textStream) {
  console.log("Text chunk:", text);
}
```
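The accumulation pattern in the loop above can be tried without a Mini Program environment by mocking `textStream` with an async generator (per the docs above, `textStream` yields plain string chunks; the mock and `collect` helper are illustrative only):

```javascript
// Mock stand-in for res.textStream: an async generator yielding
// string chunks, as textStream does.
async function* mockTextStream() {
  yield "Li Bai ";
  yield "was a ";
  yield "Tang dynasty poet.";
}

// Accumulate chunks the same way the for await loop above does
async function collect(stream) {
  let full = "";
  for await (const chunk of stream) {
    full += chunk;
  }
  return full;
}

collect(mockTextStream()).then((text) => console.log(text)); // Li Bai was a Tang dynasty poet.
```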
#### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `data` | object | Yes | Request parameters, containing `model` and `messages` |
| `data.model` | string | Yes | Model name |
| `data.messages` | array | Yes | Message list |
#### Return Value

| Property | Type | Description |
|---|---|---|
| `textStream` | AsyncIterable\<string> | Incremental text stream |
| `eventStream` | AsyncIterable\<object> | Raw event stream (includes metadata) |
#### Detecting Stream End

Use `eventStream` to read the full event objects. The stream ends when an event whose `data` equals `"[DONE]"` arrives:

```js
for await (const event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") {
    break;
  }
}
```
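The end-of-stream check can be sketched with a mock event stream. The `"[DONE]"` sentinel comes from the docs above; the delta payload shape inside each event is an assumption modeled on the common SSE chat-completion format, so verify it against real events:

```javascript
// Mock stand-in for res.eventStream. Each event carries a `data`
// field: either a JSON string (payload shape assumed here) or the
// literal "[DONE]" sentinel documented above.
async function* mockEventStream() {
  yield { data: JSON.stringify({ choices: [{ delta: { content: "Hello" } }] }) };
  yield { data: JSON.stringify({ choices: [{ delta: { content: " world" } }] }) };
  yield { data: "[DONE]" };
}

async function readUntilDone(stream) {
  const parts = [];
  for await (const event of stream) {
    if (event.data === "[DONE]") break; // end-of-stream sentinel
    const payload = JSON.parse(event.data);
    const delta = payload.choices[0].delta.content;
    if (delta) parts.push(delta);
  }
  return parts.join("");
}

readUntilDone(mockEventStream()).then((text) => console.log(text)); // Hello world
```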
## Complete Example

### Chat Page
```js
Page({
  data: {
    messages: [],
    inputValue: "",
    isLoading: false
  },

  async sendMessage() {
    const { inputValue, messages } = this.data;
    if (!inputValue.trim() || this.data.isLoading) return;

    // Add the user message and an empty assistant placeholder
    const userMessage = { role: "user", content: inputValue };
    const newMessages = [...messages, userMessage];
    this.setData({
      messages: [...newMessages, { role: "assistant", content: "" }],
      inputValue: "",
      isLoading: true
    });

    try {
      const model = wx.cloud.extend.AI.createModel("hunyuan-exp");
      let assistantContent = "";

      const res = await model.streamText({
        data: {
          model: "hunyuan-turbos-latest",
          messages: newMessages,
        }
      });

      // Use textStream to progressively update the UI
      for await (const text of res.textStream) {
        assistantContent += text;
        this.setData({
          messages: [
            ...newMessages,
            { role: "assistant", content: assistantContent }
          ]
        });
      }
    } catch (error) {
      console.error("Call failed:", error);
      wx.showToast({ title: "Request failed", icon: "error" });
    } finally {
      this.setData({ isLoading: false });
    }
  }
});
```
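The message bookkeeping the page performs (append the user turn, then rebuild the list as assistant chunks arrive) can be isolated into plain functions and tested without `Page` or `setData`. The helper names below are illustrative, not part of any SDK:

```javascript
// Append a user turn to the conversation history
function appendUserMessage(messages, content) {
  return [...messages, { role: "user", content }];
}

// Rebuild the display list with the assistant text received so far
function withAssistantSoFar(messages, assistantContent) {
  return [...messages, { role: "assistant", content: assistantContent }];
}

// Simulate the streaming loop from the page above
let history = appendUserMessage([], "Introduce Li Bai");
let partial = "";
let uiMessages = withAssistantSoFar(history, partial);
for (const chunk of ["Li Bai ", "was a Tang dynasty poet."]) {
  partial += chunk;
  uiMessages = withAssistantSoFar(history, partial); // setData would go here
}

console.log(uiMessages.length); // 2
console.log(uiMessages[1].content); // Li Bai was a Tang dynasty poet.
```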
## Differences from Web SDK

| Feature | Mini Program | Web SDK |
|---|---|---|
| Namespace | `wx.cloud.extend.AI` | `app.ai()` |
| Parameter format | `streamText` wrapped in a `data` object; `generateText` passed directly | Passed directly |
| Return value | Raw response `{ choices, usage }` | Wrapped `{ text, usage, messages }` |
| Image generation | Not supported | Supported |
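Code shared between platforms can bridge the return-value difference with a small adapter. This is a hypothetical sketch, not part of either SDK; in particular, the exact contents of the Web SDK's `messages` field are not documented here, so this version simply guesses it as the request history plus the assistant reply:

```javascript
// Hypothetical adapter: normalize a raw Mini Program response to a
// Web-SDK-like { text, usage, messages } shape so shared code can
// consume either platform's result. The messages composition is an
// assumption, not the Web SDK's documented behavior.
function toWebShape(rawRes, requestMessages) {
  const reply = rawRes.choices[0].message;
  return {
    text: reply.content,
    usage: rawRes.usage,
    messages: [...requestMessages, reply],
  };
}

// Example with a mock raw response
const raw = {
  choices: [
    { index: 0, message: { role: "assistant", content: "Hello" }, finish_reason: "stop" }
  ],
  usage: { prompt_tokens: 3, completion_tokens: 1, total_tokens: 4 }
};
const wrapped = toWebShape(raw, [{ role: "user", content: "Hi" }]);

console.log(wrapped.text); // Hello
console.log(wrapped.messages.length); // 2
```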