# Migrate from Vercel AI Gateway

A guide to migrating from Vercel AI Gateway to LLM Gateway for more control and flexibility.
Vercel AI Gateway provides a unified interface for AI providers within the Vercel ecosystem. LLM Gateway offers similar functionality with additional features like response caching, detailed analytics, and self-hosting options.
## Quick Migration
Replace your Vercel AI SDK provider imports with the LLM Gateway provider:
```diff
- import { openai } from "@ai-sdk/openai";
- import { anthropic } from "@ai-sdk/anthropic";
+ import { generateText } from "ai";
+ import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

+ const llmgateway = createLLMGateway({
+   apiKey: process.env.LLM_GATEWAY_API_KEY
+ });

const { text } = await generateText({
-   model: openai("gpt-5.2"),
+   model: llmgateway("gpt-5.2"),
  prompt: "Hello!"
});
```
## Why Migrate to LLM Gateway?
| Feature | Vercel AI Gateway | LLM Gateway |
|---|---|---|
| AI SDK integration | Native | Native + OpenAI compat |
| Response caching | No | Yes |
| Detailed cost analytics | Limited | Comprehensive |
| Provider key management | Per-provider env vars | Centralized (Pro) |
| Self-hosting | No | Yes (AGPLv3) |
| Rate limiting | Platform-level | Customizable |
| Anthropic-compatible API | No | Yes (/v1/messages) |
| Smart routing | No | Yes (auto failover) |
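The Anthropic-compatible endpoint in the table can be exercised with plain `fetch`, with no SDK at all. The sketch below only assembles the request; the `x-api-key` and `anthropic-version` header names follow Anthropic's Messages API convention, and whether the gateway expects exactly these headers is an assumption to verify against the API documentation:

```typescript
// Sketch: build a request for the Anthropic-compatible /v1/messages endpoint.
// This only constructs the payload; actually sending it requires a real
// LLM Gateway API key.
function buildMessagesRequest(apiKey: string, model: string, prompt: string) {
  return {
    url: "https://api.llmgateway.io/v1/messages",
    init: {
      method: "POST",
      headers: {
        // Header names follow Anthropic's Messages API convention (assumption).
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model,
        max_tokens: 1024,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// To send it:
// const { url, init } = buildMessagesRequest(
//   process.env.LLM_GATEWAY_API_KEY!,
//   "anthropic/claude-3-5-sonnet-20241022",
//   "Hello!",
// );
// const res = await fetch(url, init);
```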
## Migration Steps
### 1. Get Your LLM Gateway API Key
Sign up at llmgateway.io/signup and create an API key from your dashboard.
### 2. Install the LLM Gateway AI SDK Provider
Install the native LLM Gateway provider for the Vercel AI SDK:
```shell
pnpm add @llmgateway/ai-sdk-provider
```
This package provides full compatibility with the Vercel AI SDK and supports all LLM Gateway features.
### 3. Update Your Code
#### Basic Text Generation
```typescript
// Before (Vercel AI Gateway with native providers)
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const { text: openaiText } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello!",
});

const { text: claudeText } = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  prompt: "Hello!",
});

// After (LLM Gateway - single provider for all models)
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { generateText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text: openaiText } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});

const { text: claudeText } = await generateText({
  model: llmgateway("anthropic/claude-3-5-sonnet-20241022"),
  prompt: "Hello!",
});
```
#### Streaming Responses
```typescript
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { streamText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { textStream } = await streamText({
  model: llmgateway("anthropic/claude-3-5-sonnet-20241022"),
  prompt: "Write a poem about coding",
});

for await (const text of textStream) {
  process.stdout.write(text);
}
```
#### Using in Next.js API Routes
```typescript
// app/api/chat/route.ts
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { streamText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: llmgateway("openai/gpt-4o"),
    messages,
  });

  return result.toDataStreamResponse();
}
```
#### Alternative: Using the OpenAI SDK Adapter
If you prefer not to install a new package, you can use `@ai-sdk/openai` with a custom base URL:
```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const llmgateway = createOpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});
```
### 4. Update Environment Variables
```shell
# Remove individual provider keys (optional - can keep as backup)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Add LLM Gateway key
export LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
```
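If you read the key in TypeScript, a tiny guard makes a missing variable fail fast at startup instead of surfacing later as an opaque 401. This is a sketch: `requireEnv` is a hypothetical helper, not part of any SDK.

```typescript
// Hypothetical helper: read a required environment variable or fail fast.
// The env parameter defaults to process.env but is injectable for testing.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup:
// const apiKey = requireEnv("LLM_GATEWAY_API_KEY");
```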
## Model Name Format
LLM Gateway supports two model ID formats:
**Root Model IDs** (without provider prefix) - uses smart routing to automatically select the best provider based on uptime, throughput, price, and latency:
```
gpt-4o
claude-3-5-sonnet-20241022
gemini-1.5-pro
```
**Provider-Prefixed Model IDs** - routes to a specific provider, with automatic failover if that provider's uptime drops below 90%:
```
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
google-ai-studio/gemini-1.5-pro
```
For more details on routing behavior, see the routing documentation.
### Model Mapping Examples
| Vercel AI SDK | LLM Gateway |
|---|---|
| `openai("gpt-4o")` | `llmgateway("gpt-4o")` or `llmgateway("openai/gpt-4o")` |
| `anthropic("claude-3-5-sonnet-20241022")` | `llmgateway("claude-3-5-sonnet-20241022")` or `llmgateway("anthropic/claude-3-5-sonnet-20241022")` |
| `google("gemini-1.5-pro")` | `llmgateway("gemini-1.5-pro")` or `llmgateway("google-ai-studio/gemini-1.5-pro")` |
Check the models page for the full list of available models.
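The mapping is mechanical, so if you are migrating many call sites it can be centralized in one place. `toGatewayModelId` below is a hypothetical helper (not part of the provider package) that builds a provider-prefixed ID, or a root ID when no provider is given:

```typescript
// Hypothetical helper: build an LLM Gateway model ID.
// Pass provider = null to emit a root ID and let smart routing
// pick a provider; pass a provider slug to pin the request to it.
function toGatewayModelId(provider: string | null, model: string): string {
  return provider === null ? model : `${provider}/${model}`;
}

// toGatewayModelId("openai", "gpt-4o")  -> pinned to OpenAI
// toGatewayModelId(null, "gpt-4o")      -> smart routing
```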
## Tool Calling
LLM Gateway supports tool calling through the AI SDK:
```typescript
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { generateText, tool } from "ai";
import { z } from "zod";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text, toolResults } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  tools: {
    weather: tool({
      description: "Get the weather for a location",
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return { temperature: 72, condition: "sunny" };
      },
    }),
  },
  prompt: "What's the weather in San Francisco?",
});
```
## Benefits After Migration
- **Unified API Key**: One API key for all providers instead of managing multiple keys
- **Response Caching**: Automatic caching reduces costs for repeated requests
- **Cost Analytics**: Track spending per model and per request, with detailed breakdowns
- **Smart Routing**: Automatic provider selection and failover for reliability
- **Self-Hosting**: Deploy on your own infrastructure for complete control
- **No Vendor Lock-in**: The OpenAI-compatible API works with any client
## Self-Hosting LLM Gateway
If you prefer self-hosting, LLM Gateway is available under AGPLv3:
```shell
git clone https://github.com/llmgateway/llmgateway
cd llmgateway
pnpm install
pnpm setup
pnpm dev
```
This gives you the same managed experience with full control over your infrastructure.
## Need Help?
- Browse available models at llmgateway.io/models
- Read the API documentation
- Contact support at contact@llmgateway.io