
Overview

CloudBase AI provides unified large model access, supporting calls from Web, Mini Programs, and Node.js, so one codebase can target multiple models.

Supported Models

| Model | Provider | Description |
| --- | --- | --- |
| Tencent Hunyuan | hunyuan.tencent.com | Tencent's proprietary large model |
| DeepSeek | deepseek.com | High-performance inference model |
| Custom Models | - | Any model compatible with the OpenAI protocol |

See the complete model list in the CloudBase AI Console.

Core Capabilities

| Capability | Description | Supported Platforms |
| --- | --- | --- |
| Text Generation | Non-streaming call; returns the complete result at once | Web / Mini Program / Node.js |
| Streaming Text Generation | Streaming return, suited to real-time conversation | Web / Mini Program / Node.js |
| Image Generation | Text-to-image | Node.js |
| Image Understanding | Image-to-text | Web / Node.js |
| Tool Calling | Function Calling; extends model capabilities | Web / Node.js |
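Since CloudBase AI speaks the OpenAI-compatible protocol, Tool Calling presumably uses the standard OpenAI function-calling schema; a sketch of a tool definition follows. The `getWeather` tool and its parameters are illustrative, not part of the CloudBase API — see the Tool Calling documentation for how to pass tools in a request.

```javascript
// Sketch of a tool definition in the OpenAI function-calling format.
// The getWeather tool and its schema are illustrative examples only.
const tools = [
  {
    type: "function",
    function: {
      name: "getWeather",
      description: "Look up the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" },
        },
        required: ["city"],
      },
    },
  },
];

console.log(tools[0].function.name);
```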

Calling Methods

Choose the appropriate calling method based on your development scenario:

| Calling Method | Use Case | Documentation |
| --- | --- | --- |
| Mini Program SDK | WeChat Mini Programs / Mini Games | Mini Program Calling |
| Web SDK | Browser-based web applications | Web SDK Calling |
| Node SDK | Cloud functions, CloudBase Run, Node.js services | Node SDK Calling |
| cURL / HTTP API | Backend services, scripts, any language | cURL Calling |
| OpenAI SDK | Projects migrating from or compatible with the OpenAI SDK | OpenAI SDK Calling |

Quick Start

1. Get Environment ID and API Key

  1. Visit CloudBase Console
  2. Go to Environment Configuration → API Key Configuration
  3. Create API Key (for HTTP API calls) or Publishable Key (for client SDK)

2. Configure Large Model

  1. Visit CloudBase AI Console
  2. Select the large model you want to access
  3. Fill in the API Key provided by the model provider

For detailed configuration instructions, see Large Model Configuration Guide

3. Choose Calling Method

Web example:

```js
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>",
});

const ai = app.ai();
const model = ai.createModel("hunyuan-exp");

// Streaming call: chunks arrive as they are generated
const result = await model.streamText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Hello" }],
});

for await (const text of result.textStream) {
  console.log(text);
}
```

Mini Program example:

```js
wx.cloud.init({ env: "<YOUR_ENV_ID>" });

const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// Non-streaming call: the full reply is returned at once
const result = await model.generateText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(result.choices[0].message.content);
```
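The `messages` array in both examples follows the OpenAI chat format, so multi-turn conversation is built by appending each reply to the history before the next call. A sketch — the assistant content here is a hard-coded stand-in for actual model output:

```javascript
// Maintain multi-turn history: append each assistant reply before
// the next user turn. The assistant content below is a hard-coded
// stand-in for a real model reply.
const messages = [{ role: "user", content: "Hello" }];

// After a call returns, record the assistant's reply...
messages.push({ role: "assistant", content: "Hi! How can I help?" });

// ...then add the next user turn and call the model again,
// passing the full messages array.
messages.push({ role: "user", content: "Tell me a joke" });

console.log(messages.map((m) => m.role).join(","));
// "user,assistant,user"
```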