Overview
What is Large Model Integration
We provide services for quick access to the Tencent Hunyuan and DeepSeek large models, along with an AI SDK available on both Web and Mini Program platforms, so that a single codebase works across multiple platforms and models.
In addition, the WeChat Mini Program base library has built-in support for large model integration. Developers focused on Mini Programs can directly call relevant features provided by the AI module via Mini Program native code.
How to Enable Large Model Integration
Access TCB AI, choose to integrate AI large models, then enable any model in the large model list.
To enable a large model, obtain the API key or other authentication credentials for invoking the model from the model provider, and enter them into the model's configuration.
Instructions for obtaining the invocation keys, API keys, or other credentials for each available model can be found in that model's product documentation.
Once the large model is enabled, you can use the AI SDK to integrate the large model into your application.
Available Large Models for Integration
To see which large models are currently supported for integration, access TCB AI and view the large model list.
Why Use Large Model Integration
The large model integration service makes it faster to integrate large models into applications: it shields developers from the invocation details of individual models and makes it easy to switch between them. In addition, the AI SDK provides a concise text generation interface and supports streaming calls.
Cross-Platform Compatible Text Generation
The AI SDK runs on both the Web and Mini Program platforms and provides a unified interface for integrating large models. Whether your code runs on the Web or in a Mini Program, a single codebase is enough to implement AI integration.
In addition to the SDK, we also expose HTTP APIs, so you can integrate large models simply by issuing HTTP requests.
Here we integrate the Hunyuan large model, invoke it to generate an introduction to Li Bai, and print the result:
const model = ai.createModel("hunyuan-exp"); // create the model instance
const result = await model.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
console.log(result.text);
Easy-to-Use Streaming Text Generation Interface
The AI SDK provides an equally concise streaming text generation interface, which is especially useful in interactive scenarios such as building chatbots or streaming long generated text.
Here we will attempt to incrementally add the text streamed back by the large model to the web page:
const model = ai.createModel("hunyuan-exp"); // create the model instance
const result = await model.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});
for await (let str of result.textStream) {
  document.body.innerText += str;
}
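The `for await` loop above relies only on `textStream` being an async iterable of string chunks, so the same consumption pattern can be exercised with any stand-in stream. A minimal self-contained sketch (the `fakeStream` generator is hypothetical, for illustration only):

```javascript
// Stand-in for result.textStream: any async iterable of string chunks works.
async function* fakeStream() {
  yield "Li Bai ";
  yield "was a Tang ";
  yield "dynasty poet.";
}

// Accumulate chunks the same way the page example appends to innerText.
async function collect(stream) {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}

collect(fakeStream()).then((text) => console.log(text));
// → "Li Bai was a Tang dynasty poet."
```

In a real page, each chunk arrives as soon as the model produces it, which is why appending inside the loop gives the typewriter-style output users expect from chat interfaces.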
With the same level of simplicity, we have implemented streaming invocation for text generation!
Robust Tool Calling Support
The AI SDK supports tool calling for large models. Tool calling can enhance model reasoning or perform external operations, covering scenarios such as information retrieval, database operations, knowledge graph search and reasoning, operating on external systems, and triggering external actions.
Using the AI SDK, you can complete tool calling in just a few steps:
- Define the tool
- Register the tool
- Inform the large model of the available tools and converse with it
// Omit the initialization of the AI SDK...

// 1. Define a tool for getting the weather; see the FunctionTool type for details
const getWeatherTool = {
  name: "get_weather",
  description:
    "Returns weather information for a city. Call example: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `${city} weather is: crisp autumn weather!!!`, // define the tool's actual behavior here
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "City to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. When sending a message to the large model, tell it the weather tool is available
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool], // pass the weather tool here
  messages: [
    {
      role: "user",
      content: "Please tell me the weather conditions in Beijing",
    },
  ],
});
console.log(result.text);
With just these three extra steps, we have integrated a large model with tool calling. Behind the scenes, the AI SDK handled the following for us:
- Parse the tool invocation in the large model's response
- Execute the corresponding tool invocation
- Format the tool invocation result into a new message and send it back to the large model
- Return the final result to the caller
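Conceptually, the loop the SDK runs on our behalf resembles the following self-contained sketch. The `stubGenerate` function and the message shapes here are illustrative assumptions standing in for the real model and SDK internals, not the SDK's actual implementation:

```javascript
// Registered tools, keyed by name (mirrors step 2 above).
const tools = {
  get_weather: ({ city }) => `${city} weather is: crisp autumn weather!!!`,
};

// Stub model: first reply requests a tool call, second reply uses its result.
function stubGenerate(messages) {
  const last = messages[messages.length - 1];
  if (last.role === "tool") {
    return { text: `Answer based on tool result: ${last.content}` };
  }
  return { toolCall: { name: "get_weather", args: { city: "Beijing" } } };
}

// The loop: call the model, execute any requested tool, feed the result
// back as a new message, and repeat until the model returns plain text.
function runWithTools(messages) {
  for (;;) {
    const reply = stubGenerate(messages);
    if (!reply.toolCall) return reply.text; // 4. return the final result
    const { name, args } = reply.toolCall; // 1. parse the tool invocation
    const result = tools[name](args); // 2. execute the corresponding tool
    messages = messages.concat({ role: "tool", content: result }); // 3. new message
  }
}

console.log(runWithTools([{ role: "user", content: "Weather in Beijing?" }]));
// → "Answer based on tool result: Beijing weather is: crisp autumn weather!!!"
```

The key design point is that the model never executes anything itself: it only asks for a tool by name, and the SDK (here, our loop) is responsible for running the function and reporting back.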