Use Auriko as your LLM provider in LangChain with a drop-in ChatOpenAI replacement.

Prerequisites

An Auriko API key, exported as the AURIKO_API_KEY environment variable.

Installation

pip install "auriko[langchain]"

Use SDK adapter

Use the AurikoChatOpenAI adapter:
from auriko.frameworks.langchain import AurikoChatOpenAI

llm = AurikoChatOpenAI(model="gpt-5.4")

AurikoChatOpenAI extends LangChain’s ChatOpenAI with:
  • Automatic use_responses_api=False (LangChain >=1.1 auto-routes GPT-5/Codex to the Responses API, which Auriko doesn’t implement)
  • Routing injection via extra_body
  • OpenAI error mapping to typed Auriko error classes

A fuller example:
from auriko.frameworks.langchain import AurikoChatOpenAI

llm = AurikoChatOpenAI(model="gpt-5.4")

# Simple invoke
response = llm.invoke("What is 2+2?")
print(response.content)

# Streaming
for chunk in llm.stream("Count to 5"):
    print(chunk.content, end="", flush=True)

# With messages
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain quantum computing briefly."),
]
response = llm.invoke(messages)
print(response.content)
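
Because AurikoChatOpenAI subclasses ChatOpenAI, it also composes with LangChain primitives as usual. A minimal LCEL chain sketch (the prompt text is illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

from auriko.frameworks.langchain import AurikoChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = AurikoChatOpenAI(model="gpt-5.4")

# LCEL: prompt -> model -> plain-string output
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"text": "LangChain composes runnables with the | operator."}))
```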

Configure options

Parameter   Type                    Default                       Description
model       str                     (required, via parent)        Model ID
api_key     str | None              AURIKO_API_KEY env            API key
routing     RoutingOptions | None   None                          Routing configuration
base_url    str                     "https://api.auriko.ai/v1"    API base URL
**kwargs    —                       —                             Passed through to ChatOpenAI (e.g., temperature, max_tokens)
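
Because **kwargs are forwarded to ChatOpenAI, standard sampling parameters can be set directly on the adapter. A sketch assuming the usual ChatOpenAI parameter names:

```python
from auriko.frameworks.langchain import AurikoChatOpenAI

# temperature and max_tokens are passed through to ChatOpenAI unchanged
llm = AurikoChatOpenAI(
    model="gpt-5.4",
    temperature=0.2,
    max_tokens=256,
)
```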

Configure routing

Configure routing options:
from auriko.frameworks.langchain import AurikoChatOpenAI
from auriko.route_types import RoutingOptions

llm = AurikoChatOpenAI(
    model="gpt-5.4",
    routing=RoutingOptions(optimize="cost", max_ttft_ms=200),
)

response = llm.invoke("Hello!")
print(response.content)

Routing metadata is available through the generation info returned by generate():
from langchain_core.messages import HumanMessage

result = llm.generate([[HumanMessage(content="Hello!")]])
info = result.generations[0][0].generation_info
if info and "routing_metadata" in info:
    print(f"Provider: {info['routing_metadata']['provider']}")

Configure manually

If you prefer to use ChatOpenAI directly:
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key=os.environ["AURIKO_API_KEY"],
    base_url="https://api.auriko.ai/v1",
    use_responses_api=False,  # required for Auriko
)

Note: you must set use_responses_api=False manually, and routing options aren’t available without extra_body configuration.
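
With plain ChatOpenAI, routing options would have to be supplied through extra_body yourself. A sketch of what that could look like — the "routing" payload key below is an assumption, since the adapter's actual injection format isn't documented here:

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key=os.environ["AURIKO_API_KEY"],
    base_url="https://api.auriko.ai/v1",
    use_responses_api=False,  # required for Auriko
    # HYPOTHETICAL key name: check Auriko's API reference for the real payload
    extra_body={"routing": {"optimize": "cost", "max_ttft_ms": 200}},
)
```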

Notes

  • AurikoChatOpenAI inherits all ChatOpenAI capabilities: chains, agents, tool calling, async, streaming.
  • OpenAI API errors are automatically mapped to typed Auriko error classes (RateLimitError, BudgetExceededError, etc.).
  • The use_responses_api=False flag is set automatically — you don’t need to remember it.
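
A sketch of catching the mapped errors; the auriko.errors import path is an assumption based on the class names listed above:

```python
from auriko.errors import BudgetExceededError, RateLimitError  # import path assumed
from auriko.frameworks.langchain import AurikoChatOpenAI

llm = AurikoChatOpenAI(model="gpt-5.4")

try:
    response = llm.invoke("Hello!")
    print(response.content)
except RateLimitError:
    pass  # back off and retry later
except BudgetExceededError:
    raise  # surface budget exhaustion to the caller
```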