# Integrating AI into a WeChat Mini Program

Use CloudBase AI models in a WeChat Mini Program, with support for text generation and streaming responses.
## How to Use

See the "How to use Skills" guide for detailed usage instructions.
## Testing the Skill

You can test it with the following prompts:
- "Integrate a CloudBase AI model into a WeChat Mini Program to implement an intelligent chat feature"
- "Create a Mini Program app that uses a CloudBase AI model with streaming responses"
- "Use a Hunyuan model in a Mini Program to generate content and display it to the user"

Complete the MCP connection first, then pick a prompt to start your AI-native development journey.
## Installation

To install all CloudBase Skills, run:

```shell
npx skills add tencentcloudbase/cloudbase-skills
```

To install only this Skill, run:

```shell
npx skills add https://github.com/tencentcloudbase/skills --skill ai-model-wechat
```
## Standalone Install Note
If this environment only installed the current skill, start from the CloudBase main entry and use the published `cloudbase/references/...` paths for sibling skills.
- CloudBase main entry: `https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/SKILL.md`
- Current skill raw source: `https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/references/ai-model-wechat/SKILL.md`
Keep local `references/...` paths for files that ship with the current skill directory. When this file points to a sibling skill such as `auth-tool` or `web-development`, use the standalone fallback URL shown next to that reference.
## When to use this skill
Use this skill for **calling AI models in WeChat Mini Program** using `wx.cloud.extend.AI`.
**Use it when you need to:**
- Integrate AI text generation in a Mini Program
- Stream AI responses with callback support
- Call Hunyuan models from WeChat environment
**Do NOT use for:**
- Browser/Web apps → use `ai-model-web` skill
- Node.js backend or cloud functions → use `ai-model-nodejs` skill
- Image generation → use `ai-model-nodejs` skill (not available in Mini Program)
- HTTP API integration → use `http-api` skill
---
## Available Providers and Models
CloudBase provides these built-in providers and models:
| Provider | Models | Recommended |
|----------|--------|-------------|
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | ✅ `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | ✅ `deepseek-v3.2` |
---
## Prerequisites
- WeChat base library **3.7.1+**
- No extra SDK installation needed
---
## Initialization
```js
// app.js
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  },
});
```
---
## generateText() - Non-streaming
⚠️ **Different from the JS/Node SDK:** the return value is the raw model response.
```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "Hello" }],
});

// ⚠️ Return value is the RAW model response, NOT wrapped like the JS/Node SDK
console.log(res.choices[0].message.content); // Access via choices array
console.log(res.usage); // Token usage
```
---
## streamText() - Streaming
⚠️ **Different from the JS/Node SDK:** parameters must be wrapped in a `data` object, and callbacks are supported.
```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a `data` object
const res = await model.streamText({
  data: { // ⚠️ Required wrapper
    model: "hunyuan-2.0-instruct-20251111", // Recommended model
    messages: [{ role: "user", content: "hi" }],
  },
  onText: (text) => { // Optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => { // Optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => { // Optional: completion callback
    console.log("Done:", fullText);
  },
});

// Async iteration is also available
for await (const str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (const event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") { // ⚠️ Check for [DONE] to stop
    break;
  }
}
```
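When consuming `eventStream` manually, it can help to separate the accumulation logic from the iteration itself. Below is a minimal sketch of a pure reducer; `reduceEvent` is an illustrative helper, not part of the SDK, and the OpenAI-style `choices[0].delta.content` chunk shape is an assumption based on the OpenAI-compatible format `generateText()` returns, not something the stream API above guarantees.

```javascript
// Hypothetical helper: fold one raw eventStream event into accumulated state.
// Each event has shape { data: string }, where data is either the "[DONE]"
// sentinel or (assumed here) an OpenAI-style JSON chunk with a delta.
function reduceEvent(state, event) {
  if (event.data === "[DONE]") {
    return { ...state, done: true };
  }
  let delta = "";
  try {
    const parsed = JSON.parse(event.data);
    delta = parsed?.choices?.[0]?.delta?.content ?? "";
  } catch (e) {
    // Non-JSON payloads are ignored in this sketch
  }
  return { text: state.text + delta, done: false };
}
```

Inside the `for await` loop you would call `state = reduceEvent(state, event)` and break once `state.done` is true, then render `state.text` via `setData`.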
---
## Error Handling Pattern
```js
const model = wx.cloud.extend.AI.createModel("deepseek");

try {
  const res = await model.generateText({
    model: "deepseek-v3.2",
    messages: [{ role: "user", content: "Generate a short welcome message" }],
  });
  console.log(res.choices[0].message.content);
} catch (error) {
  console.error("Mini Program AI request failed", error);
}
```
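Transient network failures are common on mobile, so a retry wrapper can sit on top of the try/catch pattern above. This is a sketch, not part of the SDK; `withRetry` and `backoffDelay` are illustrative names, and the delay values are arbitrary defaults.

```javascript
// Exponential backoff delay in ms for a given attempt (0-based):
// 500, 1000, 2000, ...
function backoffDelay(attempt, baseMs = 500) {
  return baseMs * 2 ** attempt;
}

// Hypothetical wrapper: retry any async call up to maxAttempts times,
// waiting backoffDelay(attempt) between failures.
async function withRetry(callFn, maxAttempts = 3) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await callFn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
      }
    }
  }
  throw lastError;
}
```

Usage would look like `const res = await withRetry(() => model.generateText(params))`, keeping the final try/catch for the case where all attempts fail.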
---
## API Comparison: JS/Node SDK vs WeChat Mini Program
| Feature | JS/Node SDK | WeChat Mini Program |
|---------|-------------|---------------------|
| **Namespace** | `app.ai()` | `wx.cloud.extend.AI` |
| **generateText params** | Direct object | Direct object |
| **generateText return** | `{ text, usage, messages }` | Raw: `{ choices, usage }` |
| **streamText params** | Direct object | ⚠️ Wrapped in `data: {...}` |
| **streamText return** | `{ textStream, dataStream }` | `{ textStream, eventStream }` |
| **Callbacks** | Not supported | `onText`, `onEvent`, `onFinish` |
| **Image generation** | Node SDK only | Not available |
---
## Type Definitions
### streamText() Input
```ts
interface WxStreamTextInput {
  data: { // ⚠️ Required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void; // Incremental text callback
  onEvent?: (prop: { data: string }) => void; // Raw event callback
  onFinish?: (text: string) => void; // Completion callback
}
```
### streamText() Return
```ts
interface WxStreamTextResult {
  textStream: AsyncIterable<string>; // Incremental text stream
  eventStream: AsyncIterable<{ // Raw event stream
    event?: unknown;
    id?: unknown;
    data: string; // "[DONE]" when complete
  }>;
}
```
### generateText() Return
```ts
// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
---
## Best Practices
1. **Check base library version** - Ensure 3.7.1+ for AI support
2. **Use callbacks for UI updates** - `onText` is great for real-time display
3. **Check for [DONE]** - When using `eventStream`, check `event.data === "[DONE]"` to stop
4. **Handle errors gracefully** - Wrap AI calls in try/catch
5. **Remember the `data` wrapper** - streamText params must be wrapped in `data: {...}`
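For best practice 1, the base library version can be checked at runtime. The `compareVersion` helper below follows the version-comparison pattern from WeChat's official documentation; wiring it to `wx.getSystemInfoSync().SDKVersion` is shown in the comment, since `wx` is only available inside the Mini Program runtime.

```javascript
// Compare two dotted version strings: returns 1 if v1 > v2,
// -1 if v1 < v2, and 0 if they are equal.
function compareVersion(v1, v2) {
  const a = v1.split(".");
  const b = v2.split(".");
  const len = Math.max(a.length, b.length);
  while (a.length < len) a.push("0");
  while (b.length < len) b.push("0");
  for (let i = 0; i < len; i++) {
    const n1 = parseInt(a[i], 10);
    const n2 = parseInt(b[i], 10);
    if (n1 > n2) return 1;
    if (n1 < n2) return -1;
  }
  return 0;
}

// In a Mini Program you might guard AI calls like this:
// if (compareVersion(wx.getSystemInfoSync().SDKVersion, "3.7.1") < 0) {
//   wx.showModal({ title: "Please update WeChat to use AI features" });
// }
```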