Use Auriko as your LLM provider in the OpenAI Agents SDK.

Prerequisites

An Auriko API key, exposed via the AURIKO_API_KEY environment variable.

Installation

pip install "auriko[agents]"

Use SDK adapter

Use the AurikoModel adapter:
from auriko.frameworks.agents import AurikoModel

model = AurikoModel(model="gpt-5.4")

AurikoModel replaces four lines of global client configuration with a single model parameter. It extends OpenAIChatCompletionsModel, adding routing injection, error mapping, and per-task metadata isolation via ContextVar.
import asyncio
from auriko.frameworks.agents import AurikoModel
from agents import Agent, Runner

model = AurikoModel(model="gpt-5.4")

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
    model=model,
)

async def main():
    result = await Runner.run(agent, input="What is the capital of France?")
    print(result.final_output)

asyncio.run(main())

Configure options

| Parameter  | Type                     | Default                     | Description           |
| ---------- | ------------------------ | --------------------------- | --------------------- |
| `model`    | `str`                    | (required)                  | Model ID              |
| `api_key`  | `str \| None`            | `AURIKO_API_KEY` env var    | API key               |
| `routing`  | `RoutingOptions \| None` | `None`                      | Routing configuration |
| `base_url` | `str`                    | `"https://api.auriko.ai/v1"` | API base URL         |
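As a sketch, all four constructor options from the table can be passed together. The api_key value here is a placeholder, not a real credential, and the base_url shown is simply the documented default made explicit:

```python
from auriko.frameworks.agents import AurikoModel
from auriko.route_types import RoutingOptions

model = AurikoModel(
    model="gpt-5.4",
    api_key="ak-...",                         # placeholder; defaults to AURIKO_API_KEY
    routing=RoutingOptions(optimize="cost"),  # optional routing configuration
    base_url="https://api.auriko.ai/v1",      # the documented default
)
```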

Configure routing

Configure routing options:
import asyncio
from auriko.frameworks.agents import AurikoModel
from auriko.route_types import RoutingOptions
from agents import Agent, Runner

model = AurikoModel(
    model="gpt-5.4",
    routing=RoutingOptions(optimize="cost"),
)

agent = Agent(name="assistant", instructions="You are helpful.", model=model)

async def main():
    result = await Runner.run(agent, input="Hello!")
    print(result.final_output)

asyncio.run(main())

Routing metadata is isolated per async task using ContextVar, so concurrent Runner.run() calls sharing the same AurikoModel instance don’t interfere with each other.
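To see why ContextVar gives this isolation, here is a minimal, self-contained sketch of the mechanism itself (illustrative only, not Auriko’s actual code): each asyncio task created by gather() gets its own copy of the context, so a set() in one task never leaks into the other, even though both touch the same module-level variable.

```python
import asyncio
import contextvars

# Module-level state shared by all tasks, like metadata on a shared model instance.
routing_metadata: contextvars.ContextVar[str] = contextvars.ContextVar(
    "routing_metadata", default="unset"
)

async def run_with_metadata(value: str) -> str:
    routing_metadata.set(value)  # visible only within this task's context
    await asyncio.sleep(0)       # yield so the two tasks interleave
    return routing_metadata.get()  # still this task's own value

async def main() -> list[str]:
    # Two concurrent "runs" sharing the same ContextVar, yet fully isolated.
    return await asyncio.gather(
        run_with_metadata("cost"),
        run_with_metadata("latency"),
    )

print(asyncio.run(main()))  # ['cost', 'latency']
```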

Configure manually

If you prefer to configure the SDK’s client directly:
import asyncio
import os
from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client, set_default_openai_api, set_tracing_disabled

set_default_openai_api("chat_completions")
set_tracing_disabled(True)

client = AsyncOpenAI(
    base_url="https://api.auriko.ai/v1",
    api_key=os.environ["AURIKO_API_KEY"],
)
set_default_openai_client(client, use_for_tracing=False)

agent = Agent(name="assistant", instructions="You are helpful.", model="gpt-5.4")

async def main():
    result = await Runner.run(agent, input="Hello!")
    print(result.final_output)

asyncio.run(main())

Note: set_default_openai_api("chat_completions") is required because Auriko implements the Chat Completions API, not the Responses API. Routing options, error mapping, and per-task metadata isolation aren’t available with manual configuration.

Notes

  • AurikoModel extends OpenAIChatCompletionsModel — it works with all Agents SDK features: tools, handoffs, streaming, guardrails.
  • OpenAI API errors are automatically mapped to typed Auriko error classes (RateLimitError, BudgetExceededError, etc.).
  • Concurrent agent runs using the same AurikoModel instance have isolated routing metadata via ContextVar.
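The error-mapping idea in the notes above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Auriko’s implementation: the class names mirror those named above, but the real module path, class hierarchy, and mapping logic may differ.

```python
# Illustrative only: translate provider HTTP errors into typed application errors.
class AurikoError(Exception):
    """Base class in this sketch; the real hierarchy may differ."""

class RateLimitError(AurikoError):
    pass

class BudgetExceededError(AurikoError):
    pass

# Hypothetical status-code mapping.
_STATUS_MAP: dict[int, type[AurikoError]] = {
    429: RateLimitError,
    402: BudgetExceededError,
}

def map_error(status_code: int, message: str) -> AurikoError:
    """Return a typed error for a status code, falling back to the base class."""
    return _STATUS_MAP.get(status_code, AurikoError)(message)

assert isinstance(map_error(429, "rate limited"), RateLimitError)
```

Typed errors let calling code catch a specific failure (for example, a budget overrun) without string-matching provider messages.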