OpenAI-Compatible Providers
opentine includes thin wrappers for cloud providers and local servers that speak an OpenAI-style chat completions API. All of these wrappers route through the shared OpenAI SDK code path, and each remains a scoped compatibility target until its provider-specific live gate passes.
Installation
Terminal
pip install "opentine[compat]"

Cloud Providers
Cloud wrappers set a provider base URL and read the provider-specific API key from the environment unless you pass one explicitly.
cloud_compat.py
from opentine import Agent
from opentine.models.compat import DeepSeek, GLM, Groq, Kimi, Mistral, Qwen, Together

agent = Agent(model=Kimi("moonshot-v1-8k"))  # KIMI_API_KEY
agent = Agent(model=DeepSeek("deepseek-chat"))  # DEEPSEEK_API_KEY
agent = Agent(model=Qwen("qwen-plus"))  # QWEN_API_KEY
agent = Agent(model=GLM("glm-4-flash"))  # GLM_API_KEY
agent = Agent(model=Groq("llama-3.1-70b-versatile"))  # GROQ_API_KEY
agent = Agent(model=Together("meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"))
agent = Agent(model=Mistral("mistral-large-latest"))  # MISTRAL_API_KEY
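The environment-variable fallback described above can be sketched as a small resolution helper. This is an illustrative sketch, not opentine's actual internals; the function name, the error message, and the `EXAMPLE_API_KEY` variable are all assumptions.

```python
import os


def resolve_api_key(explicit_key=None, env_var="EXAMPLE_API_KEY"):
    """Return the explicitly passed key if given, else fall back to the environment.

    Mirrors the documented behavior: an explicit key always wins, and a
    missing environment variable is a hard error rather than a silent None.
    """
    if explicit_key is not None:
        return explicit_key
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Set {env_var} or pass an API key explicitly")
    return key
```

Under this resolution order, `Kimi("moonshot-v1-8k", api_key="sk-...")` would bypass the environment entirely, while `Kimi("moonshot-v1-8k")` would require `KIMI_API_KEY` to be set.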
Local Endpoints
Local wrappers target OpenAI-compatible servers running on your machine or network. Endpoint-specific live gates are still required before treating a local server as validated in your environment.
local_compat.py
from opentine import Agent
from opentine.models.compat import Jan, LMStudio, LlamaCpp, LocalAI, Unsloth, VLLM

agent = Agent(model=LMStudio(model="local-model", host="http://localhost:1234"))
agent = Agent(model=VLLM(model="default", host="http://localhost:8000"))
agent = Agent(model=Unsloth(model="default", host="http://localhost:8000"))
agent = Agent(model=LlamaCpp(model="default", host="http://localhost:8080"))
agent = Agent(model=LocalAI(model="default", host="http://localhost:8080"))
agent = Agent(model=Jan(model="default", host="http://localhost:1337"))
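Before pointing a wrapper at a local server, it can help to confirm the server is actually reachable. OpenAI-compatible servers expose a `GET /v1/models` listing, so a stdlib-only probe is enough; the helper name below is an assumption, not part of opentine.

```python
import json
import urllib.error
import urllib.request


def list_local_models(host, timeout=2.0):
    """Query an OpenAI-compatible server's /v1/models endpoint.

    Returns a list of model ids on success, or None if the server is
    unreachable or returns something unparseable.
    """
    try:
        with urllib.request.urlopen(f"{host}/v1/models", timeout=timeout) as resp:
            payload = json.load(resp)
        return [m["id"] for m in payload.get("data", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None
```

For example, `list_local_models("http://localhost:1234")` against a running LM Studio instance would tell you which model id to pass to `LMStudio(...)`, while `None` signals the endpoint-specific live gate cannot pass yet.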
Custom Base URL
You can also use the base OpenAI adapter directly with a custom base_url.
custom_endpoint.py
from opentine.models.openai import OpenAI

model = OpenAI(
    model="custom-model",
    api_key="provider-key",
    base_url="https://api.example.com/v1",
)
Status
| Target | Status |
|---|---|
| Kimi API, DeepSeek, Qwen API, GLM, Groq, Together, Mistral | Scoped |
| LM Studio | Skipped in the prior gate (local server unavailable) |
| vLLM, Unsloth-compatible endpoints, llama.cpp server, LocalAI, Jan | Scoped |