Overview
What Large Model Access Is
We provide services for quickly accessing Tencent Hunyuan Large Model and DeepSeek. For developers, we offer an AI SDK available on both Web and Mini Program, so a single set of code works across multiple platforms and models. In addition, the WeChat Mini Program basic library has built-in support for large model access: developers focused on Mini Programs can call the relevant functions of the AI+ module directly from native Mini Program code.
How to Enable Large Model Access
Go to Cloud Development/AI+, choose to access AI large models, and enable any large model from the list.
When enabling a large model, you need to obtain the key or other authentication information required to invoke it from the model vendor and enter it into the large model configuration.
The API keys or other authentication details for each available model can be found in that vendor's product documentation.
After the large model is enabled, you can use the AI SDK to access large models in your application.
CloudBase AI+ seamlessly integrates DeepSeek
New users get a one-month free trial with 1 million tokens included.
What Large Models Can Be Accessed
The currently supported large models include Tencent Hunyuan and DeepSeek.
You can also go to Cloud Development/AI+ and view the supported models in the large model list.
Why Use Large Model Access
The large model access service makes it more efficient to integrate large models into applications: you are insulated from the invocation details of each vendor and can switch between models quickly. The AI SDK also provides a concise text generation interface and supports streaming invocation.
Cross-Platform, Unified Text Generation
The AI SDK can be used on both the Web and in Mini Programs and provides a unified interface for connecting to large models, so a single set of code implements AI access on either platform.
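For reference, only the way you obtain the ai instance differs per platform. A minimal initialization sketch, assuming the @cloudbase/js-sdk init pattern; "your-env-id" is a placeholder for your own environment ID:

// Web: initialize CloudBase and obtain the AI instance.
// "your-env-id" is a placeholder; use your own environment ID.
import cloudbase from "@cloudbase/js-sdk";
const app = cloudbase.init({ env: "your-env-id" });
const ai = app.ai();

// In a Mini Program, the basic library's built-in support would be used instead:
//   wx.cloud.init({ env: "your-env-id" });
//   const ai = wx.cloud.extend.AI;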
In addition to the SDK, we also provide HTTP APIs, so you can access large models by making HTTP requests directly.
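As a rough sketch of the HTTP route (the endpoint URL and authorization header below are placeholders, not real values; consult the HTTP API documentation for the actual ones):

// Sketch of calling a large model over HTTP. URL and credentials are placeholders.
const response = await fetch("https://<your-endpoint>/v1/ai/text-generation", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <your-access-token>", // placeholder credential
  },
  body: JSON.stringify({
    model: "hunyuan-lite",
    messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
  }),
});
console.log(await response.json());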
Back to the SDK: here we access the Hunyuan large model, ask it to generate an introduction of Li Bai, and print the result.
const model = ai.createModel("hunyuan-exp"); // Create a model instance
const result = await model.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
});
console.log(result.text);
Easy-to-Use Streaming Text Generation Interface
The AI SDK provides an equally concise streaming text generation interface, which is highly useful in scenarios requiring user interaction, such as building intelligent chatbots or streaming long generated text.
Here we incrementally append the text streamed back by the large model to the webpage:
const model = ai.createModel("hunyuan-exp"); // Create a model instance
const result = await model.streamText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "Hello, could you please introduce Li Bai?" }],
});
for await (let str of result.textStream) {
  document.body.innerText += str; // append each chunk as it arrives
}
With the same level of conciseness, we have implemented streaming text generation!
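Note that document.body is only available on the Web. In a Mini Program, you would accumulate the streamed chunks into page data instead; a minimal sketch, assuming this code runs inside a Page method whose WXML renders an answer field:

// Mini Program variant (sketch): accumulate chunks and re-render via setData.
let answer = "";
for await (let str of result.textStream) {
  answer += str;
  this.setData({ answer }); // assumes a Page with `answer` bound in the WXML
}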
Powerful Tool Invocation Support
The AI SDK supports tool invocation for large models. Tool invocation can improve the model's reasoning or trigger external operations, covering scenarios such as information retrieval, database operations, knowledge graph search and inference, and operating-system or other external actions.
Using the AI SDK, tool invocation can be completed in just a few steps:
- Define the tool
- Register the tool
- Inform the Large Model of the available tools and conduct a dialogue with it
// Omit the initialization operations of the AI SDK...

// 1. Define the weather retrieval tool; see the FunctionTool type
const getWeatherTool = {
  name: "get_weather",
  description:
    "Returns the weather information for a city. Call example: get_weather({city: 'Beijing'})",
  fn: ({ city }) => `The weather in ${city} is: crisp and clear autumn weather!!!`, // the tool's actual behavior goes here
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "City to query",
      },
    },
    required: ["city"],
  },
};

// 2. Register the tool we just defined
ai.registerFunctionTool(getWeatherTool);

// 3. When sending a message to the large model, tell it that a weather retrieval tool is available
const model = ai.createModel("hunyuan-exp");
const result = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool], // pass in the weather retrieval tool
  messages: [
    {
      role: "user",
      content: "Please tell me the weather conditions in Beijing",
    },
  ],
});
console.log(result.text);
With just these three steps, we have integrated a large model with tool invocation capabilities. The AI SDK handles the following for us:
- Parsing the tool call in the large model's response
- Executing the corresponding tool
- Packaging the tool's result into a new message and sending it back to the large model
- Returning the final result to the caller
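Because each tool is a plain object, you can test its fn in isolation, and you can pass several tools in a single call. A sketch reusing getWeatherTool from above; getTimeTool is a hypothetical second tool added only to illustrate the tools array:

// Test a tool's fn directly before wiring it into a conversation
console.log(getWeatherTool.fn({ city: "Beijing" }));

// A hypothetical second tool, defined and registered the same way
const getTimeTool = {
  name: "get_time",
  description: "Returns the current time for a city. Call example: get_time({city: 'Beijing'})",
  fn: ({ city }) => `The current time in ${city} is: ${new Date().toLocaleString()}`,
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City to query" },
    },
    required: ["city"],
  },
};
ai.registerFunctionTool(getTimeTool);

const result2 = await model.generateText({
  model: "hunyuan-turbo",
  tools: [getWeatherTool, getTimeTool], // multiple tools can be offered at once
  messages: [{ role: "user", content: "What are the weather and the current time in Beijing?" }],
});
console.log(result2.text);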