Integrate AI Capabilities in WeChat Mini Program

Use CloudBase AI models in WeChat Mini Program, supporting text generation and streaming responses

How to Use

See the How to Use Skill guide for detailed usage.

Test the Skill

You can use the following prompts to test:

  • "Integrate CloudBase AI models in WeChat Mini Program to implement intelligent conversation"
  • "Create a Mini Program application using CloudBase AI models with streaming response support"
  • "Use Hunyuan models to generate content and display to users in Mini Program"

Finish the MCP setup first, then select a prompt to start your AI-native development journey.

Installation and Viewing

To install all CloudBase Skills, execute:

```bash
npx skills add tencentcloudbase/cloudbase-skills
```

To install only the current Skill, execute:

```bash
npx skills add https://github.com/tencentcloudbase/skills --skill ai-model-wechat
```

View the current Skill online: ai-model-wechat


Skill Rules Original Text

## Standalone Install Note

If only the current skill is installed in this environment, start from the CloudBase main entry and use the published `cloudbase/references/...` paths for sibling skills.

- CloudBase main entry: `https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/SKILL.md`
- Current skill raw source: `https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/references/ai-model-wechat/SKILL.md`

Keep local `references/...` paths for files that ship with the current skill directory. When this file points to a sibling skill such as `auth-tool` or `web-development`, use the standalone fallback URL shown next to that reference.

## Activation Contract

### Use this first when:

- The task is a WeChat Mini Program that calls AI models using `wx.cloud.extend.AI`.
- The user needs streaming text generation with `onText`, `onEvent`, `onFinish` callbacks.

### Read before writing code if:

- The code runs inside WeChat Mini Program.
- The AI integration point is `wx.cloud.extend.AI`, not `@cloudbase/js-sdk` or `@cloudbase/node-sdk`.

### Then also read:

- Web AI integration → `../ai-model-web/SKILL.md` (standalone fallback: `https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/references/ai-model-web/SKILL.md`)
- Node.js AI integration → `../ai-model-nodejs/SKILL.md` (standalone fallback: `https://cnb.cool/tencent/cloud/cloudbase/cloudbase-skills/-/git/raw/main/skills/cloudbase/references/ai-model-nodejs/SKILL.md`)

### Do NOT use for:

- Browser/Web apps (use `ai-model-web`)
- Node.js backend or cloud functions (use `ai-model-nodejs`)
- Image generation (not supported in Mini Program)

### Common mistakes / gotchas:

- Using `@cloudbase/js-sdk` imports in Mini Program code.
- Expecting the Web SDK's API shape (in the Web SDK, `streamText()` returns a `res` with a `.textStream` iterator).
- Forgetting that `streamText()` in Mini Program requires a `data` wrapper (see the contrast sketch below).
- Calling `generateText()` and expecting the same return shape as the Web SDK.
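
To make the `data` wrapper gotcha concrete, here is a minimal wrong-vs-right sketch (the model name and message content are placeholders):

```js
const model = ai.createModel("hunyuan-exp")
const params = {
  model: "hunyuan-2.0-instruct-20251111",
  messages: [{ role: "user", content: "Hello" }],
}

// ❌ Web SDK shape: does NOT work in Mini Program
// await model.streamText(params)

// ✅ Mini Program shape: wrap the parameters in `data`
const res = await model.streamText({ data: params })
```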

## When to use this skill

Use this skill when the AI integration target is **WeChat Mini Program**, and you need to:

- Call Hunyuan or DeepSeek models from Mini Program
- Implement streaming text generation with callbacks
- Handle AI responses in WeChat Mini Program environment

**Do NOT use for:**

- Web apps → use `ai-model-web`
- Node.js backend → use `ai-model-nodejs`
- Image generation → not supported in Mini Program

## Available Providers and Models

CloudBase provides these built-in providers and models:

| Provider | Models | Recommended |
|----------|--------|-------------|
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | `deepseek-v3.2` |
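
The two name columns play different roles in code: the provider name goes to `createModel()`, while the model name goes to the per-call `model` parameter. A short sketch, assuming the `ai` instance from the Initialization section below:

```js
const model = ai.createModel("deepseek") // provider name

const result = await model.generateText({
  model: "deepseek-v3.2", // model name (recommended for this provider)
  messages: [{ role: "user", content: "Hello" }],
})
```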

---

## Initialization

In WeChat Mini Program, use `wx.cloud.extend.AI()`:

```js
// In Mini Program Page or Component
const ai = wx.cloud.extend.AI()

// Models are created from the AI instance
const model = ai.createModel("hunyuan-exp")
```

**Important notes:**

- `wx.cloud.extend.AI()` is the entry point (not `app.ai()` as in the Web/Node SDKs)
- The user must be authenticated before using AI features
- Get the environment ID from the CloudBase console (see the init sketch below)
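
AI calls assume the cloud environment has already been initialized, typically in `app.js`. A minimal sketch; `your-env-id` is a placeholder for the environment ID from your CloudBase console:

```js
// app.js
App({
  onLaunch() {
    // Initialize CloudBase before any wx.cloud.extend.AI() usage.
    // `env` is your CloudBase environment ID (placeholder below).
    wx.cloud.init({
      env: "your-env-id",
    })
  }
})
```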

---

## generateText() - Non-streaming

```js
const model = ai.createModel("hunyuan-exp")

const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
})

console.log(result) // Raw response (different from Web SDK!)
// Access generated text from result structure
```

**Note:** The return value of `generateText()` in Mini Program differs from the Web SDK's. Inspect the actual response structure before extracting text.
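
Because the exact response shape is not documented here, a defensive pattern is to log the raw result once during development and adapt. The extraction paths below are assumptions for illustration, not the confirmed API:

```js
const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111",
  messages: [{ role: "user", content: "Hello" }],
})

// Log once during development to see the real structure
console.log(JSON.stringify(result))

// Hypothetical extraction paths; verify against the logged structure
const text =
  (result && result.text) ||
  (result && result.choices && result.choices[0] &&
   result.choices[0].message && result.choices[0].message.content) ||
  ''
```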

---

## streamText() - Streaming

```js
const model = ai.createModel("hunyuan-exp")

const res = await model.streamText({
  data: { // ⚠️ Mini Program requires the `data` wrapper
    model: "hunyuan-2.0-instruct-20251111",
    messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
  },
})

// Use callbacks for streaming
res.onText((text) => {
  console.log('Incremental text:', text)
})

res.onEvent((event) => {
  console.log('Event:', event)
})

res.onFinish((fullText) => {
  console.log('Complete text:', fullText)
})
```

**Key differences from the Web SDK:**

- Mini Program: `streamText({ data: {...} })` with the `data` wrapper
- Web SDK: `streamText({ model, messages })` without the wrapper
- Mini Program: uses the `onText`, `onEvent`, `onFinish` callbacks
- Web SDK: uses `for await (const text of res.textStream)`

---

## Error Handling Pattern

```js
const model = ai.createModel("deepseek")

try {
  const result = await model.generateText({
    model: "deepseek-v3.2",
    messages: [{ role: "user", content: "Generate a concise summary" }],
  })

  console.log(result)
} catch (error) {
  console.error("Failed to call CloudBase AI from Mini Program", error)
}
```
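
For transient failures, a retry wrapper can also help. This is a hypothetical helper, not part of the CloudBase SDK:

```js
// Hypothetical helper: retry generateText on failure, up to `attempts` extra tries.
async function generateWithRetry(model, params, attempts = 2) {
  let lastError
  for (let i = 0; i <= attempts; i++) {
    try {
      return await model.generateText(params)
    } catch (error) {
      lastError = error
    }
  }
  throw lastError
}

// Usage (values as in the example above):
// const result = await generateWithRetry(model, {
//   model: "deepseek-v3.2",
//   messages: [{ role: "user", content: "Generate a concise summary" }],
// })
```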

---

## Complete Example (Mini Program Page)

```js
// pages/ai-chat/ai-chat.js
Page({
  data: {
    inputText: '',
    messages: [],
    isStreaming: false
  },

  onLoad() {
    this.ai = wx.cloud.extend.AI()
  },

  async sendMessage() {
    const { inputText, messages } = this.data
    if (!inputText.trim()) return

    const newMessages = [...messages, { role: 'user', content: inputText }]
    this.setData({ messages: newMessages, inputText: '', isStreaming: true })

    try {
      const model = this.ai.createModel("hunyuan-exp")
      let fullText = ''

      const res = await model.streamText({
        data: {
          model: "hunyuan-2.0-instruct-20251111",
          messages: newMessages.map(m => ({
            role: m.role,
            content: m.content
          })),
        }
      })

      res.onText((text) => {
        fullText += text
        this.setData({
          messages: [...newMessages, { role: 'assistant', content: fullText }]
        })
      })

      res.onFinish(() => {
        this.setData({ isStreaming: false })
      })

    } catch (error) {
      console.error('AI call failed:', error)
      this.setData({ isStreaming: false })
    }
  }
})
```

---

## Best Practices

1. **Use streaming for better UX** - Users see responses in real-time (see the `setData` throttling sketch below)
2. **Handle errors gracefully** - Wrap AI calls in try/catch
3. **Check authentication** - User must be signed in before AI calls
4. **Use callbacks properly** - `onText`, `onEvent`, `onFinish` for streaming
5. **Wrap parameters in `data`** - Mini Program API requires `data` wrapper for `streamText()`
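
Practices 1 and 4 interact: calling `setData` on every `onText` chunk can trigger excessive re-renders on long responses. One possible mitigation, sketched against the complete example above (the 100 ms interval is an arbitrary choice, not an SDK requirement):

```js
// Inside sendMessage(), replacing the plain onText/onFinish handlers:
// buffer incoming chunks and flush to setData at most every 100 ms.
let pending = ''
let flushTimer = null

res.onText((text) => {
  pending += text
  if (!flushTimer) {
    flushTimer = setTimeout(() => {
      flushTimer = null
      this.setData({
        messages: [...newMessages, { role: 'assistant', content: pending }]
      })
    }, 100)
  }
})

res.onFinish((fullText) => {
  if (flushTimer) clearTimeout(flushTimer)
  this.setData({
    messages: [...newMessages, { role: 'assistant', content: fullText }],
    isStreaming: false
  })
})
```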