Google (Gemini)
The Google adapter connects opentine to Gemini models via the `google-genai` SDK. It supports tool use across all Gemini models, and extended thinking on thinking-enabled model variants.
Installation
Install opentine with the Google extra to pull in the google-genai SDK.
```bash
pip install "opentine[google]"
```
Quick Start
```python
from opentine import Agent
from opentine.models.google import Google

# Uses GOOGLE_API_KEY env var by default
model = Google("gemini-2.0-flash")

agent = Agent(model=model, tools=[...])
run = agent.run_sync("Compare the top 3 JavaScript frameworks")
```
Constructor
```python
Google(
    model: str = "gemini-2.0-flash",
    api_key: str | None = None,  # Falls back to GOOGLE_API_KEY env var
)
```
- `model` — The Gemini model identifier. Defaults to `"gemini-2.0-flash"`.
- `api_key` — Your Google API key. If not provided, the adapter falls back to the `GOOGLE_API_KEY` environment variable.
Authentication
You can provide your API key either directly in the constructor or via an environment variable.
```python
from opentine.models.google import Google

# Explicit API key (overrides env var)
model = Google("gemini-2.0-flash", api_key="AIza...")

# Or set the environment variable
# export GOOGLE_API_KEY="AIza..."
```
Properties
```python
from opentine.models.google import Google

model = Google("gemini-2.0-flash")

print(model.name)              # "google/gemini-2.0-flash"
print(model.supports_tools)    # True
print(model.supports_thinking) # False
```
- `name` — Returns `"google/{model}"`.
- `supports_tools` — Always `True`. All Gemini models support tool use.
- `supports_thinking` — `True` if the model name contains `"thinking"`, `False` otherwise.
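The property rules above are simple enough to sketch as standalone functions. These are hypothetical helpers that mirror the documented behaviour, not the adapter's actual internals:

```python
def adapter_name(model: str) -> str:
    # Documented behaviour: name is "google/{model}".
    return f"google/{model}"

def supports_thinking(model: str) -> bool:
    # Documented heuristic: thinking variants carry "thinking" in the name.
    return "thinking" in model

print(adapter_name("gemini-2.0-flash"))                # google/gemini-2.0-flash
print(supports_thinking("gemini-2.0-flash-thinking"))  # True
print(supports_thinking("gemini-2.0-flash"))           # False
```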
Supported Models
| Model | Tools | Thinking | Notes |
|---|---|---|---|
| `gemini-2.0-flash` | Yes | No | Fast and cost-effective |
| `gemini-2.0-flash-thinking` | Yes | Yes | Flash with extended thinking |
| `gemini-2.0-pro` | Yes | No | Most capable Gemini model |
Any valid Gemini model identifier can be passed to the constructor.
Tool Use
Gemini models support native tool use. Pass your tools to the Agent and the model will call them as needed.
```python
from opentine import Agent
from opentine.models.google import Google
from opentine.tools.web import search, fetch

model = Google("gemini-2.0-flash")

agent = Agent(
    model=model,
    tools=[search, fetch],
    system="You are a helpful research assistant.",
)

run = agent.run_sync("Find the latest news about renewable energy")
```
Thinking Models
Gemini thinking models include built-in chain-of-thought reasoning. The adapter automatically detects these models by checking for "thinking" in the model name and enables extended reasoning.
```python
from opentine.models.google import Google

# Thinking is enabled when "thinking" appears in the model name
flash_thinking = Google("gemini-2.0-flash-thinking")
print(flash_thinking.supports_thinking)  # True

flash = Google("gemini-2.0-flash")
print(flash.supports_thinking)  # False
```
SDK Note
The Google adapter uses the newer `google-genai` SDK, not the older `google-generativeai` package. The correct SDK is installed automatically when you use the `[google]` extra.
```bash
# The Google adapter uses the google-genai SDK (not the older google-generativeai package).
# This is installed automatically with the [google] extra.
pip install "opentine[google]"

# The SDK is imported internally as:
# from google import genai
```
Next Steps
- Model Adapters — Overview of all available adapters
- Ollama (Local) — Run models locally with zero API costs
- Tools — Built-in tools for web, filesystem, and more