Mini Programs Integrating CloudBase AI Capabilities Guide

tip

CloudBase AI+ seamlessly integrates the DeepSeek and Hunyuan dual models for building AI agents, supporting Mini Programs, H5 pages, Official Accounts, WeChat Customer Service, and other scenarios.

New users get the first month's package free and receive 1 million tokens.

Try CloudBase AI capabilities now; click to view the complete guide.

This article describes how Mini Programs can quickly integrate CloudBase AI (full-powered DeepSeek) capabilities.

Unlock more of the CloudBase platform's core AI capabilities: intelligent Q&A bots, AI-powered enterprise knowledge bases, and AI-enhanced WeChat customer service. Click to view the Quick Guide.

Preparations

  • Register a WeChat Mini Program account and create a local Mini Program project
  • The Mini Program base library must be version 3.7.1 or above and provide the wx.cloud.extend.AI object
  • Activate CloudBase for your Mini Program: click the "CloudBase" button in the toolbar of the WeChat Developer Tools, then create an environment (note: first-time CloudBase users get the first month's package free)

Guide 1: Invoking Large Models for Text Generation

In Mini Programs, you can directly invoke a large model's text generation capability, which is the simplest form of AI integration. Here we use a simple demo that generates seven-character quatrains as an example:

Step 1: Initialize the CloudBase environment

In the Mini Program code, initialize the CloudBase environment via the following code:

wx.cloud.init({
  env: "<CloudBase environment ID>",
});

Replace "<CloudBase environment ID>" with the actual CloudBase environment ID. After successful initialization, you can use wx.cloud.extend.AI to invoke AI capabilities.

Step 2: Create an AI model and invoke text generation

In Mini Program base library version 3.7.1 or above, taking the invocation of the DeepSeek-R1 model as an example, the client-side code is as follows:

// Create a model instance; here we use the DeepSeek large model
const model = wx.cloud.extend.AI.createModel("deepseek");

// First set the system prompt for the AI, using seven-character quatrain generation as an example
const systemPrompt =
  "Please strictly adhere to the metrical requirements of seven-character quatrains or seven-character regulated verse when composing. Tonal patterns must follow the rules, rhymes should be harmonious and natural, with rhyming words belonging to the same rhyme group. Create content around the user-given theme. A seven-character quatrain has four lines with seven characters each; a seven-character regulated verse has eight lines with seven characters each, requiring precise parallelism in the third and fourth couplets. At the same time, incorporate vivid imagery, rich emotions, and elegant artistic conception to showcase the charm and beauty of classical poetry.";

// The user's natural language input, e.g. "Help me write a poem praising the Jade Dragon Snow Mountain"
const userInput = "Help me write a poem praising the Jade Dragon Snow Mountain";

// Pass the system prompt and user input to the large model
const res = await model.streamText({
  data: {
    model: "deepseek-r1", // specify the concrete model
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userInput },
    ],
  },
});

// Receive the large model's response.
// Since the response is streamed, receive the full text chunk by chunk in a loop.
for await (let str of res.textStream) {
  console.log(str);
}
// Example output:
// "# Ode to the Jade Dragon Snow Mountain\n"
// "Snow-capped ridges pierce the clouded height,\nJade-boned, ice-skinned, proud beneath heaven's light.\n"
// "Snow shadows and misty light enhance the splendid view,\nThe divine mountain's sacred realm with enduring charm anew.\n"

Clearly, with just a few lines of Mini Program code, you can directly invoke the large model's text generation capabilities through cloud development.
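Because the response arrives as a stream of chunks, applications usually accumulate it into one string before rendering. A minimal sketch (collectStream is a helper name we introduce, and the mock generator stands in for res.textStream; neither is part of the SDK):

```javascript
// Sketch: accumulate a streamed response into a single string.
// Works with any async iterable of string chunks, such as res.textStream.
async function collectStream(textStream) {
  let fullText = "";
  for await (const chunk of textStream) {
    fullText += chunk; // append each streamed chunk in arrival order
  }
  return fullText;
}

// Usage with a mock async generator standing in for model.streamText():
async function* mockStream() {
  yield "Snow-capped ridges ";
  yield "pierce the clouded height.";
}
collectStream(mockStream()).then((text) => console.log(text));
```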

Guide 2: Implementing Intelligent Dialogue via an Agent

By calling the large model's text generation interface, you can quickly implement a question-and-answer scenario. However, a complete dialogue feature needs more than the model's raw input and output: the large model must be turned into a complete Agent to engage in dialogue with users effectively.

CloudBase's AI capabilities provide not only raw large model access but also Agent access. Developers can define their own Agents on CloudBase, then invoke them directly from Mini Programs to hold conversations.

Step 1: Initialize the CloudBase environment

In the Mini Program code, initialize the CloudBase environment via the following code:

wx.cloud.init({
  env: "<CloudBase environment ID>",
});

Replace "<CloudBase environment ID>" with the actual CloudBase environment ID. After successful initialization, you can use wx.cloud.extend.AI to invoke AI capabilities.

Step 2: Create an Agent

Go to Cloud Development Platform - AI+ to create a new Agent.


Here, you can choose to create from a template or input custom prompts and welcome messages to create a customized Agent. For simplicity, we directly create a template:


Click "Copy ID" at the top of the page to obtain a bot-id, which is the unique identifier of the Agent and will be used in the following code.

Step 3: Implement conversations with the Agent in the Mini Program

We just created an Agent called "Mini Program Development Expert". Let's now try to have a conversation with it to see if it can handle common CloudBase error issues. In the Mini Program, use the following code to directly invoke the Agent we just created for conversation:

// Initialization
wx.cloud.init({
  env: "<CloudBase environment ID>",
});

// User input; here we use an error message as an example
const userInput =
  "What does this error in my Mini Program mean: FunctionName parameter could not be found";

const res = await wx.cloud.extend.AI.bot.sendMessage({
  data: {
    botId: "xxx-bot-id", // unique identifier of the Agent obtained in Step 2
    msg: userInput, // user input
    history: [], // chat history; since this is the first round, it can be omitted
  },
});
for await (let x of res.textStream) {
  console.log(x);
}
// Example output:
// "### Error Explanation\n"
// "**Error Message:** `FunctionName parameter could not be found`\n"
// "This error usually indicates that the specified function\n"
// "name parameter was not found when calling a function.\n"
// "It may be one of the following situations:\n"
// ...

We can also record the conversation content and repeatedly call the Agent's interface to achieve multi-turn conversations.

const res = await wx.cloud.extend.AI.bot.sendMessage({
  data: {
    botId: "xxx-bot-id", // unique identifier of the Agent obtained in Step 2
    msg: userInput, // user input
    history: [
      { role: "user", message: "What does this error mean?..." },
      { role: "bot", message: "### Error Explanation..." },
      { role: "user", message: "Then how should I proceed to fix it?" },
      // ...
    ],
  },
});
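Keeping the history array in the shape sendMessage expects can be factored into a small helper. A sketch under that assumption (appendTurn is a name we introduce, not part of the SDK):

```javascript
// Sketch: append one completed question/answer round to the chat history,
// using the { role, message } entry shape that sendMessage expects.
function appendTurn(history, userMsg, botReply) {
  return [
    ...history,
    { role: "user", message: userMsg },
    { role: "bot", message: botReply },
  ];
}

// After each round, record it before the next sendMessage call:
let history = [];
history = appendTurn(
  history,
  "What does this error mean?...",
  "### Error Explanation..."
);
// history now holds the previous round, ready to pass as `history`.
```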

Step 4: Implement More Robust Chat Functionality

Cloud Development provides a complete set of API interfaces for Agent (intelligent agent) integration in the SDK, including basic conversations, conversation history storage, feedback collection, and follow-up question recommendations.

Mini Program developers can create an Agent on the Cloud Development Platform, then directly call various interfaces under wx.cloud.extend.AI in the frontend code to interact with the Agent, including:

  • Get chat history
  • Send and retrieve user feedback
  • Get recommended follow-up questions

Here are some code samples:

Get Chat History

await wx.cloud.extend.AI.bot.getChatRecords({
  botId: "botId-xxx",
  pageNumber: 1,
  pageSize: 10,
  sort: "asc",
});

Pass in the botId, pagination information, and sorting method to retrieve the chat history of the specified Agent.
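To load a full history you can page through results until a short page signals the end. A minimal sketch with the page fetcher injected as a parameter (fetchAllRecords and fetchPage are names we introduce; in a Mini Program, fetchPage would wrap wx.cloud.extend.AI.bot.getChatRecords):

```javascript
// Sketch: collect all chat records by paging until the last page.
// fetchPage(pageNumber, pageSize) must resolve to an array of records.
async function fetchAllRecords(fetchPage, pageSize = 10) {
  const all = [];
  let pageNumber = 1;
  while (true) {
    const records = await fetchPage(pageNumber, pageSize);
    all.push(...records);
    if (records.length < pageSize) break; // short page => last page reached
    pageNumber += 1;
  }
  return all;
}
```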

Send and Retrieve Feedback

Send user feedback:

const res = await wx.cloud.extend.AI.bot.sendFeedback({
  userFeedback: {
    botId: "botId-xxx",
    recordId: "recordId-xxx",
    comment: "Excellent",
    rating: 5,
    tags: ["Graceful"],
    aiAnswer: "Fallen petals scatter in profusion",
    input: "Give me an idiom",
    type: "upvote",
  },
});

Get Recommended Follow-up Questions

const res = await wx.cloud.extend.AI.bot.getRecommendQuestions({
  data: {
    botId: "xxx-bot-id",
    msg: "Introduce the Python language",
  },
});
for await (let x of res.textStream) {
  console.log(x);
}

Set the botId and user message msg in the data parameter, and retrieve recommended questions by iterating through the textStream.

Guide 3: Use the CloudBase AI Chat Component to quickly integrate AI chat

To help developers quickly implement AI chat functionality in their Mini Programs, CloudBase provides an AI chat source-code component that can be used directly.

Step 1: Download the component package

Method 1: Directly download the component sample package, which includes the agent-ui source-code component and usage instructions.

Method 2: Create an agent-ui component template via the WeChat Developer Tools, then configure and use it according to the instructions.

Step 2: Add the component to your mini program project

  1. Copy the miniprogram/components/agent-ui component into your Mini Program project

  2. Register the component in the page's index.json configuration file

{
  "usingComponents": {
    "agent-ui": "/components/agent-ui/index"
  }
}

  3. Use the component in the page's index.wxml file

<view>
  <agent-ui agentConfig="{{agentConfig}}" showBotAvatar="{{showBotAvatar}}" chatMode="{{chatMode}}" modelConfig="{{modelConfig}}"></agent-ui>
</view>

  4. Write the configuration in the page's index.js file

Agent Integration:

data: {
  chatMode: "bot", // "bot" means using an Agent; "model" means using a large model directly
  showBotAvatar: true, // whether to display the avatar on the left side of the dialog box
  agentConfig: {
    botId: "bot-e7d1e736", // Agent ID
    allowWebSearch: true, // allow the client to enable web search
    allowUploadFile: true, // allow file upload
    allowPullRefresh: true, // allow pull-to-refresh
    allowUploadImage: true, // allow image upload
    showToolCallDetail: true, // whether to display tool call details
    allowMultiConversation: true, // whether to display the conversation list and the create-conversation button
  },
}

Integrating with Large Models:

data: {
  chatMode: "model", // "bot" means using an Agent; "model" means using a large model directly
  showBotAvatar: true, // whether to display the avatar on the left side of the dialog box
  modelConfig: {
    modelProvider: "hunyuan-open", // large model service provider
    quickResponseModel: "hunyuan-lite", // large model name
    logo: "", // model avatar
    welcomeMsg: "Welcome message", // model welcome message
  },
}
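The chatMode field determines which of the two configs takes effect. As a sketch of that selection logic (resolveChatConfig is a hypothetical helper we introduce for illustration, not part of agent-ui):

```javascript
// Sketch: pick the active configuration based on chatMode,
// mirroring the component's two documented modes.
function resolveChatConfig(data) {
  if (data.chatMode === "bot") {
    return { mode: "agent", config: data.agentConfig }; // Agent-backed chat
  }
  if (data.chatMode === "model") {
    return { mode: "model", config: data.modelConfig }; // direct model chat
  }
  throw new Error("chatMode must be 'bot' or 'model'");
}
```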

Step 3: Initialize the CloudBase environment

In app.js, within the onLaunch lifecycle, initialize the SDK:

// app.js
App({
  onLaunch: function () {
    wx.cloud.init({
      env: "<CloudBase environment ID>",
    });
  },
});

Then you can directly use the AI chat component in the page:


Summary

This article introduces three approaches to access large models in cloud development, each suitable for different scenarios:

  1. Directly invoking large models via the SDK: Suitable for general non-dialog scenarios, such as text generation, intelligent completion, intelligent translation, etc.
  2. Invoking Agent (intelligent agent) conversational capabilities via the SDK: This approach is suitable for dedicated AI conversational scenarios, supporting the configuration of capabilities required in conversations such as welcome messages, prompts, and knowledge bases.
  3. Using the AI chat component: This approach is more friendly to professional frontend developers, allowing them to quickly integrate AI conversational capabilities into mini programs based on the UI components provided by CloudBase.

CloudBase provides complete sample code for all three Mini Program AI integration approaches in its code repository for reference.

Beyond Mini Programs, CloudBase's AI capabilities also support invoking large models through Web applications, Node.js, and HTTP APIs; refer to the corresponding documentation.

tip

Unlock more core AI capabilities of CloudBase, click to view the Quick Guide

In the future, CloudBase plans to launch more AI capabilities such as tool calling, multi-Agent chaining, and workflow orchestration. Stay tuned for updates.