Library to easily interface with LLM API providers
🚅 LiteLLM
LiteLLM AI Gateway
Open Source AI Gateway for 100+ LLMs. Self-hosted. Enterprise-ready. Call any LLM in OpenAI format.
LiteLLM Proxy Server (AI Gateway) | Hosted Proxy | Enterprise Tier | Website
What is LiteLLM
LiteLLM is an open source AI Gateway that gives you a single, unified interface to call 100+ LLM providers — OpenAI, Anthropic, Gemini, Bedrock, Azure, and more — using the OpenAI format.
Use it as a Python SDK for direct library integration, or deploy the AI Gateway (Proxy Server) as a centralized service for your team or organization.
Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers
Why LiteLLM
Managing LLM calls across providers gets complicated fast — different SDKs, auth patterns, request formats, and error types for every model. LiteLLM removes that friction:
- Unified API — one interface for 100+ LLMs, no provider-specific SDK juggling
- Drop-in OpenAI compatibility — swap providers without rewriting your code
- Production-ready gateway — virtual keys, spend tracking, guardrails, load balancing, and an admin dashboard out of the box
- 8ms P95 latency at 1k RPS (benchmarks)
OSS Adopters
Netflix
Features
LLMs - Call 100+ LLMs (Python SDK + AI Gateway)
All Supported Endpoints - /chat/completions, /responses, /embeddings, /images, /audio, /batches, /rerank, /a2a, /messages and more.
Python SDK
```shell
uv add litellm
```
```python
from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# OpenAI
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hello!"}])

# Anthropic
response = completion(model="anthropic/claude-sonnet-4-20250514", messages=[{"role": "user", "content": "Hello!"}])
```
AI Gateway (Proxy Server)
Getting Started - E2E Tutorial - set up virtual keys and make your first request
```shell
uv tool install 'litellm[proxy]'
litellm --model gpt-4o
```
```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
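Because the gateway speaks the OpenAI wire format over plain HTTP, any HTTP client works, not just the openai SDK. A stdlib-only sketch using the same placeholder base URL and key as above (it assumes the quickstart proxy from the previous step is running locally):

```python
import json
import urllib.request

# Build an OpenAI-format /chat/completions request against the local gateway.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "http://0.0.0.0:4000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer anything",  # this quickstart proxy accepts any key
    },
    method="POST",
)

# Send it (requires the proxy to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```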
Agents - Invoke A2A Agents (Python SDK + AI Gateway)
Supported Providers - LangGraph, Vertex AI Agent Engine, Azure AI Foundry, Bedrock AgentCore, Pydantic AI
Python SDK - A2A Protocol
```python
from litellm.a2a_protocol import A2AClient
from a2a.types import SendMessageRequest, MessageSendParams
from uuid import uuid4

client = A2AClient(base_url="http://localhost:10001")

request = SendMessageRequest(
    id=str(uuid4()),
    params=MessageSendParams(
        message={
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello!"}],
            "messageId": uuid4().hex,
        }
    )
)

# inside an async function
response = await client.send_message(request)
```
AI Gateway (Proxy Server)
Step 1. Add your Agent to the AI Gateway
Step 2. Call Agent via A2A SDK
```python
from a2a.client import A2ACardResolver, A2AClient
from a2a.types import MessageSendParams, SendMessageRequest
from uuid import uuid4
import httpx

base_url = "http://localhost:4000/a2a/my-agent"  # LiteLLM proxy + agent name
headers = {"Authorization": "Bearer sk-1234"}    # LiteLLM Virtual Key

# inside an async function
async with httpx.AsyncClient(headers=headers) as httpx_client:
    resolver = A2ACardResolver(httpx_client=httpx_client, base_url=base_url)
    agent_card = await resolver.get_agent_card()

    client = A2AClient(httpx_client=httpx_client, agent_card=agent_card)
    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        )
    )
    response = await client.send_message(request)
```
MCP Tools - Connect MCP servers to any LLM (Python SDK + AI Gateway)
Python SDK - MCP Bridge
```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from litellm import experimental_mcp_client
import litellm

server_params = StdioServerParameters(command="python", args=["mcp_server.py"])

# inside an async function
async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # Load MCP tools in OpenAI format
        tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")

        # Use with any LiteLLM model
        response = await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "What's 3 + 5?"}],
            tools=tools
        )
```
AI Gateway - MCP Gateway
Step 1. Add your MCP Server to the AI Gateway
Step 2. Call MCP tools via /chat/completions
```shell
curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the latest open PR"}],
    "tools": [{
      "type": "mcp",
      "server_url": "litellm_proxy/mcp/github",
      "server_label": "github_mcp",
      "require_approval": "never"
    }]
  }'
```
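The same request body can be built in Python and sent with any HTTP client. A minimal sketch using only the values from the curl example above (the model, server URL, and label are those placeholders, not fixed names):

```python
import json

# The body the curl command sends, as a Python dict.
request_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the latest open PR"}],
    "tools": [
        {
            "type": "mcp",
            "server_url": "litellm_proxy/mcp/github",
            "server_label": "github_mcp",
            "require_approval": "never",
        }
    ],
}
body = json.dumps(request_body)
# POST `body` to http://0.0.0.0:4000/v1/chat/completions with your virtual key
# in the Authorization header.
```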
Use with Cursor IDE
```json
{
  "mcpServers": {
    "LiteLLM": {
      "url": "http://localhost:4000/mcp/",
      "headers": {
        "x-litellm-api-key": "Bearer sk-1234"
      }
    }
  }
}
```
Supported Providers (Website Supported Models | Docs)
Get Started
You can use LiteLLM through either the AI Gateway (Proxy Server) or the Python SDK. Both give you a unified interface to 100+ LLM providers. Choose the option that best fits your needs:
| | LiteLLM AI Gateway | LiteLLM Python SDK |
|---|---|---|
| Use Case | Central service (LLM Gateway) to access multiple LLMs | Use LiteLLM directly in your Python code |
| Who Uses It? | Gen AI Enablement / ML Platform Teams | Developers building LLM projects |
| Key Features | Centralized API gateway with authentication and authorization, multi-tenant cost tracking and spend management per project/user, per-project customization (logging, guardrails, caching), virtual keys for secure access control, admin dashboard UI for monitoring and management | Direct Python library integration in your codebase, Router with retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router, application-level load balancing and cost tracking, exception handling with OpenAI-compatible errors, observability callbacks (Lunary, MLflow, Langfuse, etc.) |
Stable Release: use Docker images with the -stable tag. These have undergone 12-hour load tests before being published. More information about the release cycle is available here.
Missing a provider or LLM platform? Raise a feature request.
Run in Developer Mode
Services
- Set up a `.env` file in the repo root
- Run dependent services:

```shell
docker-compose up db prometheus
```
Backend
- (In root) Create a virtual environment: `python -m venv .venv`
- Activate the virtual environment: `source .venv/bin/activate`
- Install dependencies: `uv sync --all-extras --group proxy-dev`
- Generate the Prisma client: `uv run prisma generate`
- Start the proxy backend: `python litellm/proxy/proxy_cli.py`
Frontend
- Navigate to `ui/litellm-dashboard`
- Install dependencies: `npm install`
- Run `npm run dev` to start the dashboard
Verify Docker Image Signatures
All LiteLLM Docker images published to GHCR are signed with cosign. Every release is signed with the same key introduced in commit 0112e53.
Verify using the pinned commit hash (recommended):
A commit hash is cryptographically immutable, so this is the strongest way to ensure you are using the original signing key:
```shell
cosign verify \
  --key https://raw.githubusercontent.com/BerriAI/litellm/0112e53046018d726492c814b3644b7d376029d0/cosign.pub \
  ghcr.io/berriai/litellm:<release-tag>
```
Verify using a release tag (convenience):
Tags are protected in this repository and resolve to the same key. This option is easier to read but relies on tag protection rules:
```shell
cosign verify \
  --key https://raw.githubusercontent.com/BerriAI/litellm/<release-tag>/cosign.pub \
  ghcr.io/berriai/litellm:<release-tag>
```
Replace <release-tag> with the version you are deploying (e.g. v1.83.0-stable).
Enterprise
For companies that need better security, user management, and professional support.
Get an Enterprise License | Talk to founders
This covers:
- ✅ Features under the LiteLLM Commercial License
- ✅ Feature Prioritization
- ✅ Custom Integrations
- ✅ Professional Support - Dedicated Discord + Slack
- ✅ Custom SLAs
- ✅ Secure access with Single Sign-On
Contributing
We welcome contributions to LiteLLM! Whether you're fixing bugs, adding features, or improving documentation, we appreciate your help.
Quick Start for Contributors
This requires uv to be installed.
```shell
git clone https://github.com/BerriAI/litellm.git
cd litellm
make install-dev    # Install development dependencies
make format         # Format your code
make lint           # Run all linting checks
make test-unit      # Run unit tests
make format-check   # Check formatting only
```
For detailed contributing guidelines, see CONTRIBUTING.md.
Code Quality / Linting
LiteLLM follows the Google Python Style Guide.
Our automated checks include:
- Black for code formatting
- Ruff for linting and code quality
- MyPy for type checking
- Circular import detection
- Import safety checks
All these checks must pass before your PR can be merged.
Support / talk with founders
- Schedule Demo 👋
- Community Discord 💭
- Community Slack 💭
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
Contributors
File details
Details for the file litellm-1.83.10.tar.gz.
File metadata
- Download URL: litellm-1.83.10.tar.gz
- Upload date:
- Size: 14.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `d54eaa98f93a1eb02decf593dbb525fa1ddd4cf03686c1d5c7bb69c2a9ba2a41` |
| MD5 | `398a1540c343e6b60168172cbcdffd58` |
| BLAKE2b-256 | `e216fc51935e406887079f0d7c20427b9a15b9fc1d558e68b4e7072331b70db4` |
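To confirm that a downloaded artifact matches a published digest, you can hash it locally. A small stdlib-only sketch using Python's hashlib (the file path in the comment is just the release filename from this page):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the hex SHA256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the SHA256 value in the table above, e.g.:
# sha256_of_file("litellm-1.83.10.tar.gz") == "d54eaa98f93a1eb02decf593dbb525fa1ddd4cf03686c1d5c7bb69c2a9ba2a41"
```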
Provenance
The following attestation bundles were made for litellm-1.83.10.tar.gz:
Publisher: publish-litellm.yml on BerriAI/project-releaser
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: litellm-1.83.10.tar.gz
- Subject digest: d54eaa98f93a1eb02decf593dbb525fa1ddd4cf03686c1d5c7bb69c2a9ba2a41
- Sigstore transparency entry: 1340083472
- Sigstore integration time:
- Permalink: BerriAI/project-releaser@e022d569e5c24183074f0a5390409f95992d2c28
- Branch / Tag: refs/heads/main
- Owner: https://github.com/BerriAI
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-litellm.yml@e022d569e5c24183074f0a5390409f95992d2c28
- Trigger Event: workflow_dispatch
File details
Details for the file litellm-1.83.10-py3-none-any.whl.
File metadata
- Download URL: litellm-1.83.10-py3-none-any.whl
- Upload date:
- Size: 16.3 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `55203a7b5551efec8f2fccde29ee045ba057e768591e0b6b9fe1d12f00685ff8` |
| MD5 | `afbda5334081d9fdd0dd37657626e92a` |
| BLAKE2b-256 | `6f588dd69d5b1ab11f206a9c9f21b6fd191bcdd4fb3ed90b8efbf3a1291fd47c` |
Provenance
The following attestation bundles were made for litellm-1.83.10-py3-none-any.whl:
Publisher: publish-litellm.yml on BerriAI/project-releaser
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: litellm-1.83.10-py3-none-any.whl
- Subject digest: 55203a7b5551efec8f2fccde29ee045ba057e768591e0b6b9fe1d12f00685ff8
- Sigstore transparency entry: 1340083473
- Sigstore integration time:
- Permalink: BerriAI/project-releaser@e022d569e5c24183074f0a5390409f95992d2c28
- Branch / Tag: refs/heads/main
- Owner: https://github.com/BerriAI
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-litellm.yml@e022d569e5c24183074f0a5390409f95992d2c28
- Trigger Event: workflow_dispatch