Large Model Configuration Guide

This document describes how to integrate and configure large models in the cloud development platform.

CloudBase has pre-integrated Hunyuan and DeepSeek into the platform.

Integrating Large Models

Precautions

Custom model integration supports only large model APIs that are compatible with the OpenAI protocol.

Required Parameters

Integrating a custom model requires the following three parameters:

  • BaseURL: the request URL of the large model API (the interface must be compatible with the OpenAI protocol)
  • APIKey: the key used to access the large model API
  • Model Name: the specific model identifier, such as hunyuan-turbos-latest or deepseek-chat
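The three parameters map directly onto an OpenAI-protocol chat request. The sketch below (Python, standard library only; the BaseURL, key, and model name are placeholder values, not platform defaults) shows where each parameter goes:

```python
import json
import urllib.request

# Placeholder values -- substitute the three parameters for your provider.
BASE_URL = "https://api.deepseek.com/v1"  # BaseURL (OpenAI-compatible)
API_KEY = "sk-REPLACE_ME"                 # APIKey
MODEL_NAME = "deepseek-chat"              # Model Name

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-protocol chat-completions request from the three parameters."""
    payload = {
        "model": MODEL_NAME,  # Model Name selects which model serves the request
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",        # BaseURL + standard endpoint path
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # APIKey authenticates the call
        },
        method="POST",
    )

req = build_chat_request("Hello")
# urllib.request.urlopen(req) would actually send the request; omitted here.
```

Note that the BaseURL stops at /v1; the client appends the /chat/completions endpoint itself.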

Integration Steps

  1. Obtain BaseURL and API Key

The following provides integration examples for Hunyuan and DeepSeek. For other large models, refer to the corresponding model vendor's documentation to obtain these parameters.

  • Hunyuan (provider: Tencent Hunyuan Large Model)
    BaseURL example: https://api.hunyuan.cloud.tencent.com/v1
    Model name: refer to the Hunyuan Model List
    Billing: refer to the Billing Documentation
  • DeepSeek (provider: Tencent Cloud Intelligent Agent Platform / DeepSeek)
    BaseURL example: https://api.lkeap.cloud.tencent.com/v1
    Model name: refer to the DeepSeek Model List
    Billing: refer to the Billing Documentation (DeepSeek only supports postpaid billing)
  • DeepSeek (provider: DeepSeek Official)
    BaseURL example: https://api.deepseek.com/v1
    Model name: refer to the DeepSeek Model List
    Billing: refer to the DeepSeek Documentation
  2. Configure in the platform

    Modify Configuration Entry
  3. Fill in BaseURL and API Key

The Cloud Development platform has preconfigured the model names for Tencent Hunyuan Large Model and Tencent Cloud Intelligent Agent Platform/DeepSeek.

If you need to access other large models, please fill in the corresponding model name in the Model Name field.

API Key Configuration Page

Access Example

Official DeepSeek

  1. Obtain BaseURL and Model Name

    • Access DeepSeek API Documentation

    • Confirm BaseURL: https://api.deepseek.com/v1

    • View available Model Name

      • deepseek-chat (corresponds to DeepSeek-V3-0324)
      • deepseek-reasoner (corresponds to DeepSeek-R1-0528)
  2. Obtain API Key

  3. Configuration in Cloud Development Platform

    • Go to the CloudBase/ai+ console

    • Click the New Large Model button

    • Fill in the following information:

      • Group name (custom)
      • BaseURL
      • API Key
      • Model Name

  4. Wait for the configuration to take effect
    • The system requires a few minutes to complete the configuration
    • After successful configuration, you will see the following effect:

Custom Model Configuration Successful

Frequently Asked Questions

Q1: How do I determine whether my large model is compatible with the OpenAI protocol?

A: Large models compatible with the OpenAI protocol typically explicitly state this in their API documentation, or their API structure aligns with OpenAI's interface specifications (e.g., using similar endpoints and parameter structures).
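As a rough sanity check, you can compare a sample response from the model's chat endpoint against the OpenAI chat-completions shape. The helper below is a hypothetical sketch (`looks_openai_compatible` is not a platform API), checking only the minimal fields the protocol guarantees:

```python
def looks_openai_compatible(response: dict) -> bool:
    """Heuristic: does a chat response follow the OpenAI chat-completions shape?"""
    try:
        choice = response["choices"][0]
        # OpenAI-protocol responses nest the text under choices[i].message.content.
        return isinstance(choice["message"]["content"], str)
    except (KeyError, IndexError, TypeError):
        return False

# An OpenAI-style response passes; an ad-hoc shape does not.
openai_style = {
    "id": "chatcmpl-123",
    "choices": [{"index": 0, "message": {"role": "assistant", "content": "Hi"}}],
}
result_ok = looks_openai_compatible(openai_style)      # True
result_bad = looks_openai_compatible({"output": "Hi"}) # False
```

A passing check is necessary but not sufficient; the vendor's API documentation remains the authoritative source.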

Q2: What should I do if the large model fails to work properly after configuration?

A: Please check the following points:

  • Is the API Key correct and not expired?
  • Is the BaseURL correct?
  • Is the model name consistent with the provider's documentation?
  • Is the network connection normal?
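The first three checks can be partially automated before you save the configuration. The sketch below is a hypothetical helper (`check_model_config` is not a platform API) that catches the most common mistakes:

```python
from urllib.parse import urlparse

def check_model_config(base_url: str, api_key: str, model_name: str) -> list[str]:
    """Return a list of likely configuration problems (empty list = looks OK)."""
    problems = []
    parsed = urlparse(base_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append("BaseURL is not a valid http(s) URL")
    if base_url.rstrip("/").endswith("/chat/completions"):
        problems.append("BaseURL should stop at /v1, not include the endpoint path")
    if not api_key or api_key.strip() != api_key:
        problems.append("API Key is empty or has surrounding whitespace")
    if not model_name:
        problems.append("Model name is empty; copy it exactly from the provider docs")
    return problems

# A well-formed DeepSeek configuration produces no warnings.
ok = check_model_config("https://api.deepseek.com/v1", "sk-xxx", "deepseek-chat")
```

Whether the key is expired or the network is reachable can only be confirmed by an actual request.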

Q3: How much concurrency does the large model support?

A: When using tokens provided by the platform, one environment supports 5 concurrent requests.

You can add custom models, modify the configuration for deepseek/hunyuan, or enter a third-party API Key.

After performing the above configuration, the platform no longer imposes its own concurrency limit; you are instead subject to the limits set by the third-party model provider.