# Developing with LangChain

LangChain is an open-source framework for developing LLM applications, providing rich components and toolchains. CloudBase offers the `@cloudbase/agent-adapter-langchain` adapter, enabling LangChain agents to connect seamlessly with the AG-UI protocol.
## Prerequisites
- Node.js 18+
- An active CloudBase environment
- A configured LLM (see the LLM Configuration Guide)
- A created API Key
## Install Dependencies

```bash
npm install @cloudbase/agent-adapter-langchain @cloudbase/agent-server langchain @langchain/openai @langchain/langgraph express
```
## Quick Start

### 1. Create a LangChain Agent
```typescript
// agent.ts
import { createAgent as createLangchainAgent } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { clientTools } from "@cloudbase/agent-adapter-langchain";

const checkpointer = new MemorySaver();

export function createAgent() {
  // Use CloudBase's built-in LLM endpoint
  const model = new ChatOpenAI({
    model: process.env.TCB_AI_MODEL || "hunyuan-turbos-latest",
    apiKey: process.env.TCB_API_KEY!,
    configuration: {
      baseURL: `https://${process.env.TCB_ENV_ID}.api.tcloudbasegateway.com/v1/ai/hunyuan/v1`,
    },
  });

  // Create the agent with the clientTools middleware for client-side tool support
  return createLangchainAgent({
    model,
    checkpointer,
    middleware: [clientTools()],
  });
}
```
CloudBase provides a unified LLM HTTP endpoint supporting models such as Tencent Hunyuan and DeepSeek. The endpoint format is:

```
https://<ENV_ID>.api.tcloudbasegateway.com/v1/ai/<PROVIDER>/v1
```

where `PROVIDER` can be `hunyuan` or `deepseek`.
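As a sketch of how the two placeholders combine, the helper below assembles the endpoint URL from an environment ID and provider name. The function name is illustrative only; it is not part of any CloudBase SDK.

```typescript
// Illustrative helper (not part of the SDK): build the unified LLM endpoint URL.
type Provider = "hunyuan" | "deepseek";

function llmBaseURL(envId: string, provider: Provider): string {
  return `https://${envId}.api.tcloudbasegateway.com/v1/ai/${provider}/v1`;
}
```

The resulting string is what you would pass as `configuration.baseURL` to `ChatOpenAI`.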
### 2. Wrap and Deploy as an HTTP Service
```typescript
// index.ts
import { createExpressRoutes } from "@cloudbase/agent-server";
import { LangchainAgent } from "@cloudbase/agent-adapter-langchain";
import { createAgent as createLangchainAgent } from "./agent.js";
import express from "express";

function createAgent() {
  const lcAgent = createLangchainAgent();
  return {
    agent: new LangchainAgent({
      agent: lcAgent,
    }),
  };
}

const app = express();
createExpressRoutes({
  createAgent,
  express: app,
});

app.listen(9000, () => console.log("Listening on 9000!"));
```
### 3. Configure Environment Variables

Create a `.env` file:

```
TCB_ENV_ID=your-env-id              # CloudBase environment ID
TCB_API_KEY=your-api-key            # CloudBase API Key
TCB_AI_MODEL=hunyuan-turbos-latest  # Model name
```
For supported models, refer to the LLM Configuration Guide.
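Note that the service code above reads `process.env` but does not load the `.env` file itself; load it at startup (for example with the `dotenv` package or Node's `--env-file` flag). A minimal sketch for failing fast on missing variables follows; the `assertEnv` helper is hypothetical, not part of any package here.

```typescript
// Minimal sketch: fail fast when required variables are missing.
// Assumes .env has already been loaded (e.g. via `import "dotenv/config"`
// or `node --env-file=.env`).
const REQUIRED_VARS = ["TCB_ENV_ID", "TCB_API_KEY"];

function assertEnv(env: Record<string, string | undefined>): void {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}
```

Calling `assertEnv(process.env)` before creating the agent surfaces configuration mistakes immediately instead of as opaque API errors later.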
### 4. Start the Service

```bash
npx tsx src/index.ts
```
## Core API

### LangchainAgent
Wraps a LangChain agent as an AG-UI compatible agent:

```typescript
import { LangchainAgent } from "@cloudbase/agent-adapter-langchain";

const agent = new LangchainAgent({
  agent: lcAgent, // LangChain agent returned by createAgent()
});
```
Constructor parameters:

| Parameter | Type | Description |
|---|---|---|
| agent | `ReturnType<typeof createAgent>` | Return value from LangChain's `createAgent()` |
### clientTools()
Creates a middleware that allows clients to dynamically inject tools into agent calls:

```typescript
import { clientTools } from "@cloudbase/agent-adapter-langchain";

const agent = createLangchainAgent({
  model,
  checkpointer,
  middleware: [clientTools()],
});
```
Purpose:
- Allows frontend clients to define tools and pass them dynamically to the Agent at call time
- The Agent will merge client-side tools with server-side tools
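Conceptually, the merge behaves like combining two tool lists keyed by tool name. The sketch below only illustrates that idea; it is not the adapter's actual implementation, and which side wins on a name clash is an assumption here.

```typescript
// Conceptual sketch only: merge server-side and client-side tool definitions
// by name. The adapter's real merge logic may differ.
interface ToolDef {
  name: string;
  description: string;
}

function mergeTools(serverTools: ToolDef[], clientToolDefs: ToolDef[]): ToolDef[] {
  const byName = new Map<string, ToolDef>(serverTools.map((t) => [t.name, t]));
  for (const t of clientToolDefs) {
    byName.set(t.name, t); // assumption: the client definition wins on conflict
  }
  return Array.from(byName.values());
}
```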
## Advanced Usage

### Agent with Server-Side Tools
```typescript
import { createAgent as createLangchainAgent } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const checkpointer = new MemorySaver();

// Define server-side tools
const weatherTool = tool(
  async ({ city }) => {
    // In real projects, call a weather API here
    return JSON.stringify({
      city,
      temperature: "25°C",
      weather: "Sunny",
    });
  },
  {
    name: "get_weather",
    description: "Get weather information for a specified city",
    schema: z.object({
      city: z.string().describe("City name"),
    }),
  }
);

export function createAgent() {
  const model = new ChatOpenAI({
    model: process.env.OPENAI_MODEL!,
    apiKey: process.env.OPENAI_API_KEY!,
  });

  return createLangchainAgent({
    model,
    checkpointer,
    tools: [weatherTool],
  });
}
```
### Multi-Model Support
CloudBase has built-in support for multiple LLMs, accessible through a unified HTTP endpoint:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const envId = process.env.TCB_ENV_ID;
const apiKey = process.env.TCB_API_KEY;

// Tencent Hunyuan
const hunyuanModel = new ChatOpenAI({
  model: "hunyuan-turbos-latest", // Options: hunyuan-turbos-latest, hunyuan-t1-latest, hunyuan-2.0-thinking-20251109, hunyuan-2.0-instruct-20251111
  apiKey,
  configuration: {
    baseURL: `https://${envId}.api.tcloudbasegateway.com/v1/ai/hunyuan/v1`,
  },
});

// DeepSeek
const deepseekModel = new ChatOpenAI({
  model: "deepseek-r1-0528", // Options: deepseek-r1-0528, deepseek-v3-0324, deepseek-v3.2
  apiKey,
  configuration: {
    baseURL: `https://${envId}.api.tcloudbasegateway.com/v1/ai/deepseek/v1`,
  },
});
```
If you need to use external model APIs, you can configure them yourself:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// OpenAI (requires your own API Key)
const openaiModel = new ChatOpenAI({
  model: "gpt-4",
  apiKey: process.env.OPENAI_API_KEY,
});

// Anthropic (requires your own API Key)
const anthropicModel = new ChatAnthropic({
  model: "claude-3-opus-20240229",
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
### Streaming Output

`LangchainAgent` supports streaming output by default, requiring no additional configuration. Clients receive streaming `TEXT_MESSAGE_CONTENT` events.
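On the wire, these arrive as server-sent events whose `data:` lines carry JSON payloads. As a rough illustration of what a client-side consumer deals with, the parser below concatenates text deltas; the helper is not part of the adapter, and the assumption that `TEXT_MESSAGE_CONTENT` events carry their text chunk in a `delta` field follows the AG-UI event convention.

```typescript
// Illustrative SSE parser (not an adapter API). Assumes each "data:" line is a
// JSON event with a `type` field, and that TEXT_MESSAGE_CONTENT events carry
// the text chunk in `delta` (AG-UI convention).
interface AgUiEvent {
  type: string;
  delta?: string;
}

function collectText(sseChunk: string): string {
  return sseChunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)) as AgUiEvent)
    .filter((ev) => ev.type === "TEXT_MESSAGE_CONTENT")
    .map((ev) => ev.delta ?? "")
    .join("");
}
```

In a real client you would feed this from a streaming `fetch` response body rather than a single string.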
## Deployment

### Cloud Function Deployment

Refer to the HTTP Cloud Function Deployment guide.

### CloudRun Deployment

Refer to the CloudRun Deployment guide.
## Best Practices

### 1. Use Environment Variables
```typescript
const model = new ChatOpenAI({
  model: process.env.TCB_AI_MODEL || "hunyuan-turbos-latest",
  apiKey: process.env.TCB_API_KEY,
  configuration: {
    baseURL: `https://${process.env.TCB_ENV_ID}.api.tcloudbasegateway.com/v1/ai/hunyuan/v1`,
  },
});
```
### 2. Set Temperature Appropriately

- For deterministic output (e.g., tool calls): `temperature: 0`
- For creative output (e.g., writing): `temperature: 0.7`–`1.0`
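The value is passed directly in the model's constructor options. The presets below are just an illustration of the guidance above (the model name and the specific creative value of 0.9 are example choices, not requirements); spread either object into `new ChatOpenAI({...})`.

```typescript
// Illustrative constructor options mirroring the temperature guidance above.
const toolCallingOptions = { model: "hunyuan-turbos-latest", temperature: 0 };
const creativeOptions = { model: "hunyuan-turbos-latest", temperature: 0.9 }; // within 0.7–1.0
```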
### 3. Timeout Control

```typescript
const model = new ChatOpenAI({
  model: "gpt-4",
  timeout: 30000, // 30-second timeout
});
```
## Observability

CloudBase provides built-in observability capabilities for LangChain applications, configurable via environment variables or code.

### Environment Variable Method (Recommended)

```bash
# Enable console output
AUTO_TRACES_STDOUT=true
```
No code changes needed; the SDK will automatically read environment variables and enable tracing.
### Code Configuration Method

```typescript
import { createExpressRoutes } from "@cloudbase/agent-server";
import { ExporterType } from "@cloudbase/agent-observability/server";

createExpressRoutes({
  createAgent,
  express: app,
  observability: { type: ExporterType.Console },
});
```
For more configuration options (OTLP export, Serverless considerations, etc.), refer to the Agent Observability Guide.