Model Adapters
opentine uses a unified Model protocol that every adapter implements. This means you can swap between Anthropic, OpenAI, Google, Ollama, or any compatible provider by changing a single line of code. Your agent logic, tools, and run tree structure stay exactly the same.
The Model Protocol
Every model adapter implements this protocol. If you're building a custom adapter, this is the interface you need to satisfy.
```python
from typing import AsyncIterator, Protocol

class Model(Protocol):
    @property
    def name(self) -> str: ...

    @property
    def supports_tools(self) -> bool: ...

    @property
    def supports_thinking(self) -> bool: ...

    async def complete(
        self,
        messages: list[dict],
        tools: list[dict] | None = None,
        system: str | None = None,
        temperature: float = 0.0,
    ) -> dict:
        # Returns: {"text": str, "tool_calls": list[dict], "cost": float}
        ...

    async def stream(
        self,
        messages: list[dict],
        tools: list[dict] | None = None,
        system: str | None = None,
        temperature: float = 0.0,
    ) -> AsyncIterator[dict]:
        # Yields: {"type": "text_delta", "text": str}
        ...
```
Properties
- `name` — A string identifier for the model (e.g. `"anthropic/claude-sonnet-4-20250514"`, `"ollama/llama3.1"`).
- `supports_tools` — Whether the model supports tool calling. All built-in adapters return `True`.
- `supports_thinking` — Whether the model supports extended thinking / chain-of-thought. This varies by model and provider.
Methods
- `complete()` — Send messages and get a full response back. Returns a dict with `text`, `tool_calls`, and `cost`.
- `stream()` — Send messages and get an async iterator of text deltas. Each chunk is a dict with `type` and `text`.
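If you're writing a custom adapter, the easiest way to check that it satisfies the protocol is to build a small stub and type it against `Model`. The sketch below defines a toy `EchoModel` (an illustrative name, not part of opentine) that echoes the last user message; a real adapter would call a provider SDK inside `complete()` and `stream()`.

```python
import asyncio
from typing import AsyncIterator, Protocol

class Model(Protocol):
    @property
    def name(self) -> str: ...
    @property
    def supports_tools(self) -> bool: ...
    @property
    def supports_thinking(self) -> bool: ...
    async def complete(self, messages, tools=None, system=None,
                       temperature=0.0) -> dict: ...
    def stream(self, messages, tools=None, system=None,
               temperature=0.0) -> AsyncIterator[dict]: ...

class EchoModel:
    """Toy adapter: echoes the last user message. Illustrative only."""

    @property
    def name(self) -> str:
        return "echo/echo-1"

    @property
    def supports_tools(self) -> bool:
        return False

    @property
    def supports_thinking(self) -> bool:
        return False

    async def complete(self, messages, tools=None, system=None,
                       temperature=0.0) -> dict:
        # Return the documented {"text", "tool_calls", "cost"} shape.
        return {"text": messages[-1]["content"], "tool_calls": [], "cost": 0.0}

    async def stream(self, messages, tools=None, system=None,
                     temperature=0.0):
        # Yield the documented {"type": "text_delta", "text": str} chunks.
        for word in messages[-1]["content"].split():
            yield {"type": "text_delta", "text": word + " "}

async def main() -> str:
    model: Model = EchoModel()  # structural check against the protocol
    result = await model.complete([{"role": "user", "content": "hello"}])
    return result["text"]

print(asyncio.run(main()))  # hello
```

Because `Protocol` uses structural typing, `EchoModel` never inherits from `Model`; it only has to match the shape.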
Swapping Models
Because every adapter implements the same protocol, swapping models is a one-line change. Your Agent, tools, and run tree format are completely model-agnostic.
```python
from opentine import Agent

# Just change the import and constructor — everything else stays the same

from opentine.models.anthropic import Anthropic
agent = Agent(model=Anthropic("claude-sonnet-4-20250514"), tools=[...])

# Swap to OpenAI
from opentine.models.openai import OpenAI
agent = Agent(model=OpenAI("gpt-4o"), tools=[...])

# Swap to Google
from opentine.models.google import Google
agent = Agent(model=Google("gemini-2.0-flash"), tools=[...])

# Swap to a local model via Ollama
from opentine.models.ollama import Ollama
agent = Agent(model=Ollama("llama3.1"), tools=[...])
```
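Since `name` identifiers follow a `"provider/model"` scheme, the model choice can also be driven by configuration instead of an import edit. The helper below is a hypothetical sketch, not part of opentine's API: it just splits the identifier so you can dispatch on the provider half.

```python
# Hypothetical helper: split a "provider/model" identifier (the format used
# by Model.name) into its two halves, so an adapter can be chosen from a
# config file or environment variable rather than a code change.
def parse_model_name(name: str) -> tuple[str, str]:
    """Split 'provider/model' into (provider, model)."""
    provider, sep, model = name.partition("/")
    if not sep or not model:
        raise ValueError(f"expected 'provider/model', got {name!r}")
    return provider, model

print(parse_model_name("ollama/llama3.1"))  # ('ollama', 'llama3.1')
```

From there, a small dict mapping `"anthropic"` to `Anthropic`, `"openai"` to `OpenAI`, and so on would build the right adapter at startup.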
Using Complete
The complete() method is the primary interface for getting a full response from a model. It returns the response text, any tool calls, and the cost of the request.
```python
from opentine.models.anthropic import Anthropic

model = Anthropic("claude-sonnet-4-20250514")

# Use complete() for a single response
result = await model.complete(
    messages=[{"role": "user", "content": "Explain quantum entanglement"}],
    system="You are a physics teacher.",
    temperature=0.0,
)

print(result["text"])        # The model's response text
print(result["tool_calls"])  # Any tool calls (empty list if none)
print(result["cost"])        # Cost in USD for this call
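When `tool_calls` is non-empty, you typically dispatch each call to the matching tool function. The exact shape of a tool-call dict isn't specified on this page, so the sketch below assumes `{"name": str, "arguments": dict}` and runs against a simulated `complete()` result; adjust the keys to match the actual format.

```python
# Sketch of dispatching tool calls from a complete() result.
# Assumption: each tool call looks like {"name": str, "arguments": dict}.
def run_tool_calls(result: dict, tools: dict) -> list:
    """Run each requested tool and collect its output."""
    outputs = []
    for call in result["tool_calls"]:
        fn = tools[call["name"]]           # look up the tool by name
        outputs.append(fn(**call["arguments"]))
    return outputs

# Toy tool plus a simulated model result:
def add(a: int, b: int) -> int:
    return a + b

fake_result = {
    "text": "",
    "tool_calls": [{"name": "add", "arguments": {"a": 2, "b": 3}}],
    "cost": 0.0,
}
print(run_tool_calls(fake_result, {"add": add}))  # [5]
```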
Using Stream
The stream() method returns an async iterator that yields text deltas as the model generates them. This is useful for real-time output or long responses.
```python
from opentine.models.openai import OpenAI

model = OpenAI("gpt-4o")

# Use stream() for incremental output
async for chunk in model.stream(
    messages=[{"role": "user", "content": "Write a haiku about Python"}],
):
    print(chunk["text"], end="", flush=True)
```
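If you also need the full response after streaming, accumulate the deltas as they arrive. The sketch below uses a stub async generator in place of a real `model.stream()` call, yielding the documented `{"type": "text_delta", "text": str}` chunks.

```python
import asyncio

# Stub standing in for model.stream(); yields the documented chunk shape.
async def fake_stream():
    for piece in ["Py", "thon ", "hai", "ku"]:
        yield {"type": "text_delta", "text": piece}

async def collect(stream) -> str:
    """Accumulate text deltas into the full response string."""
    parts = []
    async for chunk in stream:
        if chunk["type"] == "text_delta":
            parts.append(chunk["text"])
    return "".join(parts)

print(asyncio.run(collect(fake_stream())))  # Python haiku
```

The same `collect()` works unchanged with a real adapter's `stream()`, since both produce the same chunk dicts.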
Adapter Comparison
| Adapter | Default Model | Tools | Thinking | Env Variable |
|---|---|---|---|---|
| Anthropic | claude-sonnet-4-20250514 | Yes | Yes (Opus, Sonnet) | ANTHROPIC_API_KEY |
| OpenAI | gpt-4o | Yes | Yes (o-series) | OPENAI_API_KEY |
| Google | gemini-2.0-flash | Yes | Yes (thinking models) | GOOGLE_API_KEY |
| Ollama | llama3.1 | Yes | No | OLLAMA_HOST |
| LiteLLM | — | — | — | Coming soon |
Next Steps
- Anthropic (Claude) — Claude models with tool use and extended thinking
- OpenAI (GPT) — GPT-4o, o-series reasoning models
- Google (Gemini) — Gemini models via the google-genai SDK
- Ollama (Local) — Run models locally with zero API costs
- LiteLLM (Fallback) — Use LiteLLM's proxy for 100+ providers