LiteLLM (Fallback)
A dedicated LiteLLM adapter for opentine is planned but not yet available. In the meantime, you can use LiteLLM's OpenAI-compatible proxy with the existing OpenAI adapter to access 100+ model providers through a single endpoint.
Status: Coming soon. A native LiteLLM adapter will provide tighter integration, proper cost tracking, and automatic model detection. Follow the opentine repository for updates.
How It Works
LiteLLM is a proxy server that provides an OpenAI-compatible API in front of 100+ model providers, including Anthropic, Google, Azure, AWS Bedrock, Cohere, and many more. Since it exposes the same API as OpenAI, opentine's OpenAI adapter works seamlessly with it.
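Concretely, a request to the proxy has the same shape as any OpenAI chat-completions request. A minimal sketch of the payload (the model name here is an example taken from the config shown below):

```python
import json

# Minimal OpenAI-style chat request. The LiteLLM proxy accepts this shape
# on its /v1/chat/completions endpoint and forwards it to whichever
# provider backs the requested model.
payload = {
    "model": "claude-sonnet",  # a model_name defined in litellm_config.yaml
    "messages": [{"role": "user", "content": "Hello"}],
}

print(json.dumps(payload, indent=2))
```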
1. Start the LiteLLM Proxy
Install LiteLLM and start the proxy server. You can configure it to route to any supported provider.
```shell
# Install LiteLLM
pip install litellm

# Start the LiteLLM proxy server
litellm --model anthropic/claude-sonnet-4-20250514

# Or with a config file for multiple models
litellm --config litellm_config.yaml
```
2. Configure Models
Create a LiteLLM config file to define your available models and their API keys.
```yaml
# litellm_config.yaml
model_list:
  - model_name: "claude-sonnet"
    litellm_params:
      model: "anthropic/claude-sonnet-4-20250514"
      api_key: "sk-ant-..."

  - model_name: "gpt-4o"
    litellm_params:
      model: "openai/gpt-4o"
      api_key: "sk-..."

  - model_name: "gemini-flash"
    litellm_params:
      model: "gemini/gemini-2.0-flash"
      api_key: "AIza..."
```
3. Set Environment Variables
Point the OpenAI adapter at your LiteLLM proxy by setting the base URL.
```shell
# Set the LiteLLM proxy as the OpenAI base URL
export OPENAI_API_BASE="http://localhost:4000"

# If your LiteLLM proxy requires a key
export OPENAI_API_KEY="sk-litellm-..."
```
4. Use the OpenAI Adapter
Create an OpenAI adapter instance and pass the model name from your LiteLLM config. The adapter will route requests through the LiteLLM proxy to the actual provider.
```python
from opentine import Agent
from opentine.models.openai import OpenAI

# Point the OpenAI adapter at the LiteLLM proxy.
# LiteLLM exposes an OpenAI-compatible API on port 4000 by default.
model = OpenAI(
    model="claude-sonnet",     # The model_name from your LiteLLM config
    api_key="sk-litellm-...",  # Your LiteLLM proxy key (if configured)
)

# Set the base URL via the OPENAI_API_BASE environment variable:
# export OPENAI_API_BASE="http://localhost:4000"

agent = Agent(model=model, tools=[...])
run = agent.run_sync("Summarize this document")
```
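If you would rather configure the base URL from Python than from the shell, you can set the environment variable before constructing the adapter. This sketch assumes the adapter reads `OPENAI_API_BASE` from the environment at construction time, as described above:

```python
import os

# Default to LiteLLM's standard local port when no base URL is set.
# setdefault() keeps any value already exported in the shell.
os.environ.setdefault("OPENAI_API_BASE", "http://localhost:4000")

print(os.environ["OPENAI_API_BASE"])
```

Do this before creating the `OpenAI(...)` instance so the adapter picks up the proxy address.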
Automatic Fallbacks
One of LiteLLM's most useful features is automatic fallbacks. If you define multiple providers under the same model name, LiteLLM will automatically try the next provider when one fails.
```yaml
# litellm_config.yaml with fallbacks
model_list:
  - model_name: "main-model"
    litellm_params:
      model: "anthropic/claude-sonnet-4-20250514"
      api_key: "sk-ant-..."

  - model_name: "main-model"
    litellm_params:
      model: "openai/gpt-4o"
      api_key: "sk-..."

# LiteLLM will automatically fall back to GPT-4o if Claude is unavailable
```
Your opentine agent code stays exactly the same — the fallback logic is handled entirely by LiteLLM.
```python
from opentine import Agent
from opentine.models.openai import OpenAI

# "main-model" routes through LiteLLM's fallback logic
model = OpenAI(model="main-model")

agent = Agent(model=model, tools=[...])

# If Claude is down, LiteLLM automatically falls back to GPT-4o;
# the agent code doesn't need to change at all.
run = agent.run_sync("Analyze the quarterly revenue data")
```
Limitations
When using the OpenAI adapter as a LiteLLM workaround, there are a few limitations to be aware of:
- Cost tracking: The `cost` field in responses may not accurately reflect actual costs, since pricing varies by provider and the OpenAI adapter uses OpenAI's pricing model.
- Thinking detection: The `supports_thinking` property is based on OpenAI's naming conventions (model name starts with `"o"`), so it may not correctly detect thinking support for non-OpenAI models routed through LiteLLM.
- Model naming: The `name` property will show `"openai/{model_name}"` rather than the actual provider name.
These limitations will be resolved when the dedicated LiteLLM adapter is released.
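Until then, one workaround for the thinking-detection limitation is to keep your own capability map keyed on the `model_name` values from your LiteLLM config, instead of relying on the adapter's name-based heuristic. A minimal sketch (the model names and capability flags below are placeholders; fill in your own config's entries):

```python
# Hypothetical capability map keyed on LiteLLM model_name values.
# The entries are placeholders -- list the models from your own
# litellm_config.yaml and whether each underlying model supports thinking.
THINKING_CAPABLE = {
    "claude-sonnet": True,
    "gpt-4o": False,
    "gemini-flash": False,
}

def supports_thinking(model_name: str) -> bool:
    """Explicit lookup instead of the 'starts with o' naming heuristic."""
    return THINKING_CAPABLE.get(model_name, False)
```

Unknown names default to `False`, which is the safer assumption when routing through a proxy.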
Next Steps
- Model Adapters — Overview of all available adapters
- OpenAI (GPT) — The adapter used for the LiteLLM workaround
- Ollama (Local) — Another option for running models locally