Upgrade a batch-2 Chatbot to Vercel AI SDK 6
In one sentence: run `npx @ai-sdk/codemod upgrade v6` to auto-migrate from 4.x → 6.x, verify `streamText` / `useChat` still work, then rewrite using v6's new `ToolLoopAgent` and add tool approval so users can review sensitive tool calls.

Estimated time: 30 minutes (auto-migration + verification) | Difficulty: Advanced
Applicable Scenarios
- You have an existing chatbot built with Vercel AI SDK 4.x / 5.x and want to upgrade to 6.x for agent abstractions, tool approval, and multimodal support
- You saw the v6 release notes and want to know whether the upgrade is worth it and what pitfalls to watch for
- You completed Path C in `add-vercel-ai-sdk-streaming-chatbot` and now want to upgrade to v6
- Your backend runs on CloudBase Run / Web Cloud Functions, and your frontend is Next.js / Vue / React
Not applicable:
- Projects that have never used Vercel AI SDK — start from the v6 official docs directly, no migration needed
- Projects using the OpenAI Node SDK directly with no Vercel AI SDK dependency — this guide is not relevant
- WeChat Mini Program stack — refer to Path A (`cloudbase-agent-ui`) in batch-2 guide 08
v6 vs v5 / v4 Key Differences
| Dimension | v4.x | v5.x | v6.x |
|---|---|---|---|
| Streaming protocol | data stream v1 | data stream v2 (breaking) | compatible with v5 protocol |
| Frontend hook | ai/react | @ai-sdk/react | @ai-sdk/react |
| Agent abstraction | none | experimental | ToolLoopAgent (stable) |
| Tool approval | none | none | needsApproval + UI protocol |
| MCP integration | none | experimental | @ai-sdk/mcp stable |
| Dev tools | none | none | @ai-sdk/devtools |
| Upgrade path | — | manual migration | npx @ai-sdk/codemod upgrade v6 |
The Vercel AI SDK team describes v6 as having "no major breaking changes" — most 4.x → 6.x work can be handled by the codemod, making it gentler than the v4 → v5 migration. That said, the codemod is not perfect; places it cannot reach are called out below.
Prerequisites
| Dependency | Version (npm latest at time of writing) |
|---|---|
| Node.js | ≥ 20 (v6 dropped support for Node 18) |
| ai | ^6.0 (currently 6.0.x) |
| @ai-sdk/codemod | ^3.0 (currently 3.0.x) |
| @ai-sdk/openai / @ai-sdk/anthropic | ^3.0 |
| @ai-sdk/react | ^3.0 |
| @ai-sdk/devtools (optional) | ^0.0 (public beta) |
@cloudbase/cli (tcb) | latest |
For the exact patch version, run `npm view ai version`. Use `^6.0` / `latest` in your commands rather than pinning to a specific patch.
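For example, these commands only read from the registry; nothing is installed:

```bash
# Print the latest published version of each package (output changes over time)
npm view ai version
npm view @ai-sdk/react version
npm view @ai-sdk/codemod version
```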
Before You Start: Get the batch-2 Guide 08 Project Running
This guide assumes you have a project that follows the Path C structure from add-vercel-ai-sdk-streaming-chatbot:
chatbot/
├── app/
│ ├── api/
│ │ └── chat/
│ │ └── route.ts # streamText + toDataStreamResponse
│ └── chat/
│ └── page.tsx # useChat hook
├── package.json # ai@^4 + @ai-sdk/openai@^1
└── .env.local
Before upgrading, run npm run dev and confirm that chat and streaming output work. Do not upgrade a project that is already in a broken state — if something goes wrong after the migration, you need to be able to tell whether it was caused by the migration or was broken beforehand.
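If you also want a scriptable smoke test alongside the browser check, you can hit the route handler directly. This sketch assumes the v4 `useChat` wire format, where each message carries `role` and `content`; adjust the body if you customized the request:

```bash
# Expect a streamed body of data-stream chunks, not an empty 200
curl -N -X POST http://localhost:3000/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```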
Step 1: Run the Codemod
# Navigate to the project root
cd chatbot
# Create a backup branch
git checkout -b upgrade-ai-sdk-v6
# Run the official codemod — auto-updates import paths, API call shapes, and type signatures
npx @ai-sdk/codemod upgrade v6
The codemod scans all .ts, .tsx, .js, and .jsx files under src/, app/, and pages/. It handles:
- Import paths: deprecated exports from `'ai'` are renamed or moved (e.g., `experimental_*` prefixes removed)
- `streamText` / `generateText` option renames: field names adjusted to match v6's stricter conventions
- `toDataStreamResponse` → `toUIMessageStreamResponse`: the streaming response method has been unified under a new name (see the before/after sketch below)
- Provider wrappers: factory functions like `createOpenAI({ baseURL })` are mostly unchanged, but the internal protocol layer is updated
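As a concrete example of what to expect in the diff, here is the typical shape of the two most visible rewrites (a sketch; your exact call sites may differ):

```ts
// Before (ai@^4): old React entry point and old response method
// import { useChat } from 'ai/react';
// return result.toDataStreamResponse();

// After (ai@^6): dedicated React package and the unified response method
// import { useChat } from '@ai-sdk/react';
// return result.toUIMessageStreamResponse();
```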
After the codemod runs, check the diff:
git diff --stat
A clean migration should touch only a few files and a few dozen lines. If you see hundreds of lines changed, you were likely using experimental APIs — go back to the stable surface first, then upgrade.
Note: the codemod does not install new packages or update version numbers in package.json. Do that in the next step.
Step 2: Upgrade Dependencies and Verify the Existing Chatbot
# Upgrade core package to v6
npm install ai@^6.0
# Upgrade the React frontend hook
npm install @ai-sdk/react@^3.0
# Upgrade the provider (OpenAI or Anthropic — match what you used in guide 08)
npm install @ai-sdk/openai@latest
# Optional: install dev tools
npm install -D @ai-sdk/devtools
Then run:
npm run dev
Open http://localhost:3000/chat, send a message, and confirm streaming output still works. This step verifies:
- `streamText({ model, messages, system })` still returns a stream correctly
- The `useChat({ api: '/api/chat' })` hook still renders messages
- Error handling (the `error` field) and abort (the `stop` function) behavior are unchanged
If it does not start, here are the three most common failure modes:
| Symptom | Cause | Fix |
|---|---|---|
| `Cannot find module 'ai/react'` | v6 moved the React entry to a separate package | `import { useChat } from '@ai-sdk/react'` |
| Route handler throws `toDataStreamResponse is not a function` | The codemod missed this call inside a closure | Manually replace with `result.toUIMessageStreamResponse()` |
| Frontend receives no chunks; network shows 200 but empty body | Data stream protocol version mismatch between frontend and backend | Upgrade both backend `ai` to ^6 and frontend `@ai-sdk/react` to ^3 together |
At this point, your 4.x / 5.x chatbot is running on v6. If that is sufficient for your use case, the remaining steps are optional improvements.
Step 3: Rewrite with ToolLoopAgent (Cleaner)
v6's new ToolLoopAgent encapsulates the "declare tools, run the LLM loop, accumulate messages" pattern into a single object. The verbose streamText + tools array code in your route handler can be collapsed to just a few lines.
app/api/chat/route.ts:
import { ToolLoopAgent, tool } from 'ai';
import { z } from 'zod';
import { createOpenAI } from '@ai-sdk/openai';
export const runtime = 'nodejs';
export const maxDuration = 60;
const llm = createOpenAI({
baseURL: process.env.LLM_PROXY_URL,
apiKey: process.env.LLM_PROXY_TOKEN,
});
const weatherTool = tool({
description: 'Get the weather for a given city',
inputSchema: z.object({ city: z.string().describe('City name, e.g. Beijing, Shanghai') }),
execute: async ({ city }) => {
// In a real scenario, call a Cloud Function or third-party API — mocked here
return { city, temperature: 22, condition: 'sunny' };
},
});
const chatAgent = new ToolLoopAgent({
model: llm('gpt-4o-mini'),
instructions: 'You are a helpful assistant. For weather questions, use the weather tool.',
tools: { weather: weatherTool },
});
export async function POST(req: Request) {
const { messages } = await req.json();
const result = await chatAgent.stream({ messages });
return result.toUIMessageStreamResponse();
}
Key differences from the previous approach:
- The loose `streamText({ model, system, messages, tools })` call is now `chatAgent.stream({ messages })` (see the before/after sketch below)
- `instructions` replaces `system` (same semantics; the naming aligns with OpenAI Assistants and Anthropic system prompt conventions)
- Multi-turn tool calling loops run inside the SDK — no manual while-loop needed
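For comparison, here is a sketch of what the equivalent pre-rewrite handler body looked like under the v4/v5 pattern, reusing the `llm` and `weatherTool` definitions from above:

```ts
import { streamText } from 'ai';

// Pre-v6 pattern: model, system prompt, and tools wired up inside every request handler
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: llm('gpt-4o-mini'),
    system: 'You are a helpful assistant. For weather questions, use the weather tool.',
    messages,
    tools: { weather: weatherTool },
    // multi-step tool loops needed explicit opt-in here (e.g. maxSteps in v4)
  });
  // v4/v5 name; renamed to toUIMessageStreamResponse() in v6
  return result.toDataStreamResponse();
}
```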
The frontend useChat hook requires no changes, because the external protocol is unchanged.
ToolLoopAgent also supports prepareCall to dynamically inject configuration on each invocation (e.g., adjusting instructions based on the current user's account tier). That is an advanced capability not covered in this guide — see the official migration guide.
Step 4: Enable Tool Approval (Sensitive Tool Confirmation)
v6's tool() factory adds a needsApproval field. When it returns true, the SDK pauses tool execution, streams an "awaiting approval" event to the frontend, the frontend shows an Approve / Reject UI, and the user's choice is sent back to resume execution.
This is the right pattern for tools with side effects: writing to a database, sending email, charging a payment, deleting files — operations that should not be decided by the LLM alone.
Backend tool definition with approval:
const sendEmailTool = tool({
description: 'Send an email to a specified address',
inputSchema: z.object({
to: z.string().email(),
subject: z.string(),
body: z.string(),
}),
// Require approval for all emails; or use async ({ to }) => to.endsWith('@external.com') for external-only
needsApproval: async () => true,
execute: async ({ to, subject, body }) => {
// In production, call the resend email Cloud Function here
return { sent: true, to };
},
});
The frontend needs a new approval UI. useChat from @ai-sdk/react surfaces tool invocations with state: 'approval-requested' inside the message parts:
'use client';
import { useChat } from '@ai-sdk/react';
export default function ChatPage() {
const { messages, input, handleInputChange, handleSubmit, addToolApprovalResponse } = useChat({
api: '/api/chat',
});
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role === 'user' ? 'You' : 'Assistant'}:</strong>
{m.parts?.map((part, i) => {
if (part.type === 'text') return <span key={i}>{part.text}</span>;
if (part.type === 'tool-invocation' && part.toolInvocation.state === 'approval-requested') {
const { toolName, args, approval } = part.toolInvocation;
return (
<div key={i} style={{ border: '1px solid #ddd', padding: 12, marginTop: 8 }}>
<div>Assistant wants to call tool <code>{toolName}</code> with args:</div>
<pre>{JSON.stringify(args, null, 2)}</pre>
<button onClick={() => addToolApprovalResponse({ id: approval.id, approved: true })}>
Approve
</button>
<button onClick={() => addToolApprovalResponse({ id: approval.id, approved: false })}>
Reject
</button>
</div>
);
}
return null;
})}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
<button type="submit">Send</button>
</form>
</div>
);
}
Field names and exact protocol details may have changed in recent patches — tool approval is new in v6. Always refer to the current v6 docs.
Step 5 (Optional): Use Output.object for Structured Output
v6's Output.object lets an agent produce output that conforms to a zod schema while running tool calls — no more splitting structured output and tool calling across two separate requests.
import { ToolLoopAgent, Output } from 'ai';
import { z } from 'zod';
const reportAgent = new ToolLoopAgent({
model: llm('gpt-4o-mini'),
instructions: 'Query the weather and produce a structured daily report',
tools: { weather: weatherTool },
output: Output.object({
schema: z.object({
city: z.string(),
temperature: z.number(),
summary: z.string(),
suggestion: z.string(),
}),
}),
});
const result = await reportAgent.generate({ prompt: "What is the weather in Shanghai today?" });
console.log(result.output);
// { city: 'Shanghai', temperature: 18, summary: 'Partly cloudy', suggestion: 'Good day to go outside' }
This is particularly useful for "agent produces output that gets written to a database" scenarios — you get a typed object ready to insert, with no need to parse natural language on the frontend.
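For instance, the validated object can be written straight into a CloudBase collection. A minimal sketch using `@cloudbase/node-sdk` — the `weather_reports` collection name and the `TCB_ENV_ID` variable are made up for illustration:

```ts
import tcb from '@cloudbase/node-sdk';

// Initialize against your CloudBase environment
const app = tcb.init({ env: process.env.TCB_ENV_ID });
const db = app.database();

const result = await reportAgent.generate({ prompt: 'What is the weather in Shanghai today?' });

// result.output is already validated against the zod schema — no parsing step needed
await db.collection('weather_reports').add({
  ...result.output,
  createdAt: Date.now(),
});
```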
Deploy to CloudBase
The overall deployment path is the same as deploy-nextjs-to-cloudbase-run:
tcb cloudrun deploy --port 3000
Add the following in CloudBase Run under "Service Settings → Environment Variables":
- `LLM_PROXY_URL` (must end with `/v1` — `@ai-sdk/openai` appends `/chat/completions` internally)
- `LLM_PROXY_TOKEN`
For Web Cloud Function deployment:
tcb fn deploy chat-backend --httpFn -e your-env-id
Add the environment variables in the Console under "Cloud Functions". If you already deployed guide 08, a redeploy is all you need — but note that v6 requires Node 20 at runtime. If your CloudBase Run Dockerfile still has FROM node:18, update it to FROM node:20.
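If you maintain your own Dockerfile, the required change is a one-liner; the rest of this single-stage sketch is illustrative and should be adapted to your existing build steps:

```dockerfile
# v6 requires Node >= 20; node:18 images fail with syntax errors at startup
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```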
Verification Checklist
A successful upgrade meets all of the following:
| Check | Command / Action | Expected Result |
|---|---|---|
| Dependencies installed correctly | npm ls ai @ai-sdk/react @ai-sdk/openai | Major versions are 6 / 3 / 3 respectively |
| Codemod was clean | git diff --stat | Only import paths and a few API calls changed |
| Old chatbot still works | npm run dev → /chat, send a message | Streaming output works normally |
| ToolLoopAgent can call tools | Ask "What is the weather in Beijing?" | Response includes the weather tool result |
| Tool approval shows UI | Ask "Send an email to a@b.com" | Approve / Reject buttons appear in the frontend |
| Type check passes | npx tsc --noEmit | No errors |
The final step is mandatory — v6 type signatures are stricter than v5, and TypeScript users frequently miss spots the codemod could not reach.
Common Errors
| Symptom | Cause | Fix |
|---|---|---|
| codemod ran but still reports `Cannot find module 'ai/react'` | v6 moved the React hook to the `@ai-sdk/react` package; the codemod updates the import path but does not install the package | `npm install @ai-sdk/react@^3.0` and ensure all `from 'ai/react'` are changed to `from '@ai-sdk/react'` |
| `streamText` result's `toDataStreamResponse` is undefined | v6 renamed this to `toUIMessageStreamResponse`; the codemod misses it inside closures | Grep the project for `toDataStreamResponse` and replace all occurrences with `toUIMessageStreamResponse` |
| Frontend `useChat` throws `Failed to parse stream chunk` | Backend on v6 but frontend still on v5's `@ai-sdk/react` — data stream protocol mismatch | Upgrade both backend `ai` to ^6 and frontend `@ai-sdk/react` to ^3 at the same time |
| `ToolLoopAgent`'s `needsApproval` never fires; the tool executes directly | Tool was created with the v5 `tool()` factory — the `needsApproval` field is ignored | Confirm `import { tool } from 'ai'` is pulling v6's `tool`; also check package.json for leftover old versions |
| CloudBase Run deployment throws `SyntaxError: Unexpected token '?'` | Dockerfile is still using Node 18; v6 requires Node ≥ 20 | Change the Dockerfile to `FROM node:20-alpine` and redeploy with `tcb cloudrun deploy` |
| codemod changed hundreds of lines and the diff is unreadable | The project was using `experimental_*` APIs that v6 renamed or removed | Migrate to stable APIs first (the v5 release notes have a mapping), then upgrade to v6; do not try to do it in one step |
| `addToolApprovalResponse` throws `not a function` | `@ai-sdk/react` was not upgraded to the v6-compatible ^3 version | `npm install @ai-sdk/react@latest` — the method only exists in the updated hook return value |
The codemod is not perfect automation. Always do a manual review of the diff after it runs, and pay special attention to try/catch blocks, onFinish callbacks, and custom transform functions — the codemod frequently misses these inside closures.
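A hypothetical example of the kind of code to grep for — a stale method name buried in a try/catch, plus project code inside an `onFinish` callback (`saveTranscript` is a made-up helper standing in for your own persistence code):

```ts
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const llm = createOpenAI({
  baseURL: process.env.LLM_PROXY_URL,
  apiKey: process.env.LLM_PROXY_TOKEN,
});

// Hypothetical helper — replace with your own persistence logic
async function saveTranscript(text: string) { /* ... */ }

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: llm('gpt-4o-mini'),
    messages,
    // code inside callbacks is easy for both the codemod and reviewers to miss
    onFinish: async ({ text }) => {
      await saveTranscript(text);
    },
  });
  try {
    // ← stale v4/v5 name inside a try/catch; replace with toUIMessageStreamResponse()
    return result.toDataStreamResponse();
  } catch (err) {
    console.error('stream failed', err);
    throw err;
  }
}
```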
Related Documentation
- `add-vercel-ai-sdk-streaming-chatbot` — batch-2 guide 08, the starting point for this upgrade (the 4.x version)
- `add-ai-nextjs` (if published) / CloudBase AI Toolbox — an alternative path if you do not want to upgrade to v6 and prefer CloudBase's built-in AI capabilities
- `deploy-nextjs-to-cloudbase-run` — deploy Next.js to CloudBase Run after this upgrade
- `connect-openai-api-cloud-function` — isolate the LLM key to a Cloud Function proxy; all three implementation paths benefit
- Vercel AI SDK 6 release notes — complete v6 capability list
- v6 migration guide — official migration guide for spots the codemod cannot reach
- CloudBase Error Codes — deployment and runtime error reference
Next Steps
- `deploy-mastra-agent-to-cloudbase-run` — for a more complete agent framework with memory, workflows, and multi-step orchestration, move from Vercel AI SDK to Mastra, deployed as a CloudBase Run Docker container
- `add-rag-with-pgvector-cloudbase` — add RAG to the upgraded agent so it can answer from your own documents
- `secure-secrets-in-cloud-function` (if published) — layered management of `LLM_PROXY_TOKEN` across dev / staging / prod