
OpenRouter vs LiteLLM vs Portkey: Best LLM Gateway (2026)

Using multiple LLM providers? You need a gateway. Route requests to the cheapest model, fail over when providers go down, track costs, and switch models without code changes.

OpenRouter, LiteLLM, and Portkey take different approaches. Here's how to choose.

Quick Comparison

| Feature | OpenRouter | LiteLLM | Portkey |
|---|---|---|---|
| Type | Managed proxy | Open-source proxy | Managed gateway |
| Self-host | No | Yes | Yes (enterprise) |
| Models available | 200+ via marketplace | 100+ providers | 15+ providers |
| Unified API | OpenAI-compatible | OpenAI-compatible | OpenAI-compatible |
| Fallbacks | Automatic | Configurable | Configurable |
| Load balancing | Automatic | Yes | Yes |
| Cost tracking | Basic | Basic | Advanced |
| Caching | Yes | Yes | Yes (semantic) |
| Guardrails | No | Basic | Yes |
| Observability | Basic | Basic (+ integrations) | Advanced (Langfuse-like) |
| Free tier | Pay-per-use | Free (self-host) | Free (10K requests/mo) |
| Pricing | Markup on model costs | Free + hosting costs | From $0 (free tier) |

OpenRouter: The LLM Marketplace

OpenRouter is a managed proxy that gives you access to 200+ models from dozens of providers through a single API endpoint.

Strengths

Model variety. Access Claude, GPT-4, Gemini, Llama, Mistral, and hundreds more through one API key. No individual provider accounts needed.

Automatic routing. OpenRouter can automatically choose the cheapest or fastest model for your request. Set preferences and let it optimize.

Dead simple setup. Change your base URL to https://openrouter.ai/api/v1, use your OpenRouter API key, and you're done. Drop-in replacement for OpenAI.

Free models. Some models are available for free (rate-limited). Great for testing and development.

Automatic fallbacks. If a provider is down, OpenRouter automatically routes to an alternative provider running the same model.
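Routing preferences and fallbacks are expressed in the request body itself. A minimal sketch of what that payload looks like — note that the `models` fallback list and `provider.sort` field names are assumptions based on OpenRouter's documented request options, so verify them against the current docs before relying on them:

```typescript
// Sketch of an OpenRouter request body with routing preferences.
// Field names ("models" fallback list, "provider.sort") are assumptions
// drawn from OpenRouter's request options — check the docs before use.
const body = {
  model: "anthropic/claude-3.5-sonnet", // primary model
  models: ["openai/gpt-4o", "meta-llama/llama-3.1-70b-instruct"], // ordered fallbacks
  provider: { sort: "price" },          // prefer the cheapest hosting provider
  messages: [{ role: "user", content: "Hello!" }],
};

// Sent as the JSON payload of POST https://openrouter.ai/api/v1/chat/completions
const payload = JSON.stringify(body);
```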

Weaknesses

  • Markup on pricing. OpenRouter adds a margin on top of provider costs (varies, typically small).
  • No self-hosting. You're sending all requests through OpenRouter's servers.
  • Limited observability. Basic usage tracking but no advanced analytics, traces, or debugging.
  • No guardrails. No built-in content moderation, PII detection, or output validation.
  • Latency overhead. Additional network hop through OpenRouter adds ~50-100ms.
  • Data privacy. Your prompts pass through OpenRouter's infrastructure.

Best For

Developers who want access to many models without managing multiple provider accounts. Great for experimentation and cost-sensitive routing.

LiteLLM: The Open-Source Proxy

LiteLLM is an open-source Python proxy that provides a unified API for 100+ LLM providers. Self-host it and route requests to any model.

Strengths

Open-source. Full source code access. Self-host with complete control over your data.

Provider coverage. Supports OpenAI, Anthropic, Google, Azure, AWS Bedrock, Cohere, Ollama, vLLM, and 100+ more. The broadest open-source LLM proxy.

Drop-in replacement. OpenAI SDK compatible. Change the base URL and model name — existing code works unchanged.

Fallbacks and routing. Configure model fallbacks, load balancing across providers, and rate limit handling.

# litellm config: two deployments share the name "gpt-4",
# so the router load-balances across them and retries on the
# other deployment when one fails
model_list:
  - model_name: "gpt-4"
    litellm_params:
      model: "openai/gpt-4"
      api_key: "sk-..."
  - model_name: "gpt-4"    # fallback deployment on Azure
    litellm_params:
      model: "azure/gpt-4"
      api_key: "..."
      # Azure deployments also need an api_base pointing at your resource

Budget management. Set spending limits per key, per user, or globally.

Caching. Redis or in-memory caching for repeated requests.
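Caching is switched on in the same config file. A hedged sketch — the key names follow LiteLLM's `litellm_settings` convention, but confirm them against the current docs:

```yaml
# Sketch: enable Redis-backed response caching in the LiteLLM proxy config.
# Key names are assumptions based on LiteLLM's litellm_settings block.
litellm_settings:
  cache: true
  cache_params:
    type: redis
    host: "localhost"
    port: 6379
```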

Weaknesses

  • Self-host complexity. You manage the infrastructure (Docker, database, Redis).
  • Limited UI. Dashboard exists but is basic compared to Portkey.
  • Python-centric. Proxy is Python-based (can be used from any language via HTTP, but native integration is Python).
  • Observability requires integration. Need to add Langfuse, Helicone, or similar for advanced tracing.
  • No built-in guardrails. Content filtering and safety checks require additional setup.

Best For

Teams that want open-source, self-hosted LLM routing with full data control. Ideal for companies with data privacy requirements.

Portkey: The Enterprise AI Gateway

Portkey provides a managed (and self-hosted) AI gateway with advanced observability, guardrails, and optimization features.

Strengths

Advanced observability. Full request/response logging, latency tracking, cost analytics, and error analysis. Dashboard shows everything happening with your LLM usage.

Guardrails. Built-in content moderation, PII detection, output validation, and custom rules. Prevent harmful or non-compliant outputs before they reach users.

Semantic caching. Cache not just identical requests but semantically similar ones. "What's the weather?" and "How's the weather?" return the same cached response.
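The idea behind semantic caching can be sketched in a few lines: embed each prompt, and on lookup return a cached answer when some stored prompt's embedding is close enough. This toy uses a bag-of-words vector in place of a real embedding model, and the similarity threshold is an illustrative choice:

```typescript
// Toy semantic cache: bag-of-words "embedding" + cosine similarity.
// A real gateway uses an embedding model; the lookup mechanism is the same.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

class SemanticCache {
  private entries: { vec: Map<string, number>; response: string }[] = [];
  constructor(private threshold = 0.6) {} // illustrative threshold

  get(prompt: string): string | undefined {
    const v = embed(prompt);
    return this.entries.find(e => cosine(e.vec, v) >= this.threshold)?.response;
  }

  set(prompt: string, response: string): void {
    this.entries.push({ vec: embed(prompt), response });
  }
}

const cache = new SemanticCache();
cache.set("What's the weather?", "Sunny, 22°C");
const hit = cache.get("How's the weather?");   // similar wording → cache hit
const miss = cache.get("Translate this text"); // unrelated → undefined
```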

Virtual keys. Create API keys with per-key rate limits, budgets, and model access restrictions. Perfect for multi-tenant applications.

Reliability features. Automatic retries, fallbacks, load balancing, and timeout handling with sophisticated configuration.
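The reliability pattern all three gateways implement — try the primary, retry a few times, then fall back to the next provider — looks roughly like this. A generic sketch, not Portkey's actual API, and synchronous for clarity (a real gateway does this with async calls and timeouts):

```typescript
// Generic retry-then-fallback sketch of what a gateway does server-side.
type Call = () => string;

function withFallback(providers: Call[], retries = 2): string {
  let lastError: unknown;
  for (const call of providers) {             // providers in priority order
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        return call();
      } catch (err) {
        lastError = err;                      // retry same provider, then move on
      }
    }
  }
  throw lastError;                            // every provider exhausted
}

// Stub providers: the first always fails, the second succeeds.
const flaky: Call = () => { throw new Error("provider down"); };
const healthy: Call = () => "ok from fallback";
const result = withFallback([flaky, healthy]);
```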

Analytics. Cost tracking by model, user, feature, and time period. Identify optimization opportunities.

Weaknesses

  • Newer platform. Smaller community than LiteLLM.
  • Pricing for advanced features. Free tier is limited. Advanced guardrails and analytics require paid plans.
  • Fewer provider integrations than LiteLLM (15+ vs 100+, though covers all major providers).
  • Additional latency. Like any proxy, adds a network hop.

Best For

Production applications that need reliability, observability, and governance around LLM usage. Enterprise teams and regulated industries.

When to Use Each

Use OpenRouter When:

  • You want to try many models quickly
  • You don't want to manage provider accounts
  • Cost optimization via automatic routing is important
  • You're prototyping or building hobby projects
  • Data privacy isn't a primary concern

Use LiteLLM When:

  • You need self-hosted infrastructure (data sovereignty)
  • You want the broadest provider support
  • You prefer open-source with full control
  • You have DevOps capacity to manage the proxy
  • Budget management per API key is needed

Use Portkey When:

  • You need production-grade observability
  • Guardrails and content safety are required
  • You want managed infrastructure without ops burden
  • You need detailed cost analytics and optimization
  • Enterprise compliance is a requirement

Cost Comparison

OpenRouter

  • No subscription. Pay per token at provider rates + small markup (varies by model, typically <5%).
  • Most cost-effective for low-volume, many-model usage.

LiteLLM

  • Free. Self-host and pay only for infrastructure (~$20-100/month for a VPS) + provider API costs.
  • Most cost-effective for high-volume, self-hosted deployments.

Portkey

  • Free: 10K requests/month
  • Growth: $49/month (100K requests)
  • Enterprise: Custom
  • Plus provider API costs.

Integration Example

All three support OpenAI-compatible APIs:

import OpenAI from 'openai';

// OpenRouter
const openrouter = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});

// LiteLLM (self-hosted)
const litellm = new OpenAI({
  baseURL: 'http://your-litellm-server:4000',
  apiKey: process.env.LITELLM_API_KEY,
});

// Portkey
const portkey = new OpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: process.env.PORTKEY_API_KEY,
  defaultHeaders: { 'x-portkey-virtual-key': 'your-virtual-key' },
});

// The same call shape works with all three — pick one of the clients above:
const response = await openrouter.chat.completions.create({
  model: 'anthropic/claude-3.5-sonnet', // exact model IDs vary slightly per gateway
  messages: [{ role: 'user', content: 'Hello!' }],
});

FAQ

Do I need an LLM gateway?

If you use one provider and one model, probably not. If you use multiple providers, need fallbacks, want cost tracking, or run production AI features — yes.

Can I use these with local models (Ollama, vLLM)?

LiteLLM: Yes, excellent local model support. Portkey: Yes, via custom endpoints. OpenRouter: No, cloud models only.

Which adds the least latency?

LiteLLM (self-hosted on same network): <5ms. OpenRouter and Portkey: 50-150ms depending on location.

Can I switch between these gateways later?

Yes. All three use OpenAI-compatible APIs. Switching means changing your base URL and API key.

The Verdict

  • OpenRouter for model access and experimentation. The easiest way to try 200+ models.
  • LiteLLM for self-hosted, open-source LLM routing. Maximum control and privacy.
  • Portkey for production applications needing observability, guardrails, and reliability.

For most production applications in 2026, start with Portkey for its observability and reliability features. Use LiteLLM if you need self-hosting. Use OpenRouter for development and experimentation.
