Google (Gemini)

The Google adapter connects opentine to Gemini models via the google-genai SDK. It supports tool use on all Gemini models and extended thinking on thinking-variant models.

Installation

Install opentine with the Google extra to pull in the google-genai SDK.

Terminal
pip install "opentine[google]"

Quick Start

agent.py
from opentine import Agent
from opentine.models.google import Google

# Uses GOOGLE_API_KEY env var by default
model = Google("gemini-2.0-flash")

agent = Agent(model=model, tools=[...])
run = agent.run_sync("Compare the top 3 JavaScript frameworks")

Constructor

Signature
Google(
    model: str = "gemini-2.0-flash",
    api_key: str | None = None,  # Falls back to GOOGLE_API_KEY env var
)
  • model — The Gemini model identifier. Defaults to "gemini-2.0-flash".
  • api_key — Your Google API key. If not provided, the adapter falls back to the GOOGLE_API_KEY environment variable.

Authentication

You can provide your API key either directly in the constructor or via an environment variable.

auth.py
from opentine.models.google import Google

# Explicit API key (overrides env var)
model = Google("gemini-2.0-flash", api_key="AIza...")

# Or set the environment variable
# export GOOGLE_API_KEY="AIza..."

Properties

properties.py
from opentine.models.google import Google

model = Google("gemini-2.0-flash")

print(model.name)               # "google/gemini-2.0-flash"
print(model.supports_tools)     # True
print(model.supports_thinking)  # False
  • name — Returns "google/{model}".
  • supports_tools — Always True. All Gemini models support tool use.
  • supports_thinking — True if the model name contains "thinking", False otherwise.

Supported Models

Model                        Tools   Thinking   Notes
gemini-2.0-flash             Yes     No         Fast and cost-effective
gemini-2.0-flash-thinking    Yes     Yes        Flash with extended thinking
gemini-2.0-pro               Yes     No         Most capable Gemini model

Any valid Gemini model identifier can be passed to the constructor.
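Because name, supports_tools, and supports_thinking are all derived from the model string alone, the same rules apply to any identifier you pass. A sketch of that derivation in plain Python, mirroring the documented property rules (not the adapter's source):

```python
def gemini_properties(model: str) -> dict:
    """Mirror the documented property rules for an arbitrary identifier."""
    return {
        "name": f"google/{model}",                 # name is always "google/{model}"
        "supports_tools": True,                    # all Gemini models support tools
        "supports_thinking": "thinking" in model,  # name-based detection
    }

print(gemini_properties("gemini-2.0-flash-thinking"))
```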

Tool Use

Gemini models support native tool use. Pass your tools to the Agent and the model will call them as needed.

tool_use.py
from opentine import Agent
from opentine.models.google import Google
from opentine.tools.web import search, fetch

model = Google("gemini-2.0-flash")

agent = Agent(
    model=model,
    tools=[search, fetch],
    system="You are a helpful research assistant.",
)

run = agent.run_sync("Find the latest news about renewable energy")

Thinking Models

Gemini thinking models include built-in chain-of-thought reasoning. The adapter automatically detects these models by checking for "thinking" in the model name and enables extended reasoning.

thinking.py
from opentine.models.google import Google

# Thinking is enabled when "thinking" appears in the model name
flash_thinking = Google("gemini-2.0-flash-thinking")
print(flash_thinking.supports_thinking)  # True

flash = Google("gemini-2.0-flash")
print(flash.supports_thinking)           # False

SDK Note

The Google adapter uses the newer google-genai SDK, not the older google-generativeai package. The correct SDK is installed automatically when you use the [google] extra.

Terminal
# The Google adapter uses the google-genai SDK (not the older google-generativeai package).
# This is installed automatically with the [google] extra.
pip install "opentine[google]"

# The SDK is imported internally as:
# from google import genai
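If you are unsure which SDK is present in an environment, you can check for the newer package's import path. A sketch (standard-library only; assumes only that the newer SDK is importable as google.genai, as noted above):

```python
import importlib.util

def has_google_genai() -> bool:
    """Return True if the newer google-genai SDK (import path google.genai) is importable."""
    try:
        return importlib.util.find_spec("google.genai") is not None
    except ModuleNotFoundError:
        # The google namespace package itself is absent
        return False

print(has_google_genai())
```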

Next Steps