Relay by Ai5labs

ollama/qwen2.5

Input: $0 / 1M tokens
Output: $0 / 1M tokens
Context: 131K
Speed

Capabilities

tools, streaming

Use qwen2.5 via Relay

Configure the model alias in YAML, then call it from Python.

YAML
# models.yaml
version: 1
models:
  qwen2.5:
    target: ollama/qwen2.5
    credential: $env.OLLAMA_API_KEY
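The `$env.OLLAMA_API_KEY` reference tells Relay to read the credential from the process environment rather than storing it in the YAML file. A minimal sketch of how that resolution could work, assuming a simple `$env.` prefix convention; the helper name `resolve_credential` is illustrative, not part of Relay's API:

```python
import os

def resolve_credential(ref: str) -> str:
    # "$env.NAME" means: read NAME from the process environment.
    prefix = "$env."
    if ref.startswith(prefix):
        name = ref[len(prefix):]
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"environment variable {name} is not set")
        return value
    # Anything else is treated as a literal credential string.
    return ref

os.environ["OLLAMA_API_KEY"] = "sk-local-demo"  # demo value for illustration
print(resolve_credential("$env.OLLAMA_API_KEY"))  # prints sk-local-demo
```

Keeping the key out of `models.yaml` means the config file can be committed to version control safely.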
Python
import asyncio

from relay import Hub

async def main():
    # Load the alias mapping from models.yaml and call the aliased model.
    async with Hub.from_yaml("models.yaml") as hub:
        resp = await hub.chat(
            "qwen2.5",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(resp.text, resp.cost_usd)

asyncio.run(main())
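With both rates listed at $0 per 1M tokens, `resp.cost_usd` comes out to zero for any request against this model. A sketch of the per-request arithmetic under those listed rates; the function name and the token counts are made up for illustration:

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_per_1m: float = 0.0,
                     output_per_1m: float = 0.0) -> float:
    # Cost is tokens in each direction times that direction's per-1M rate.
    return (input_tokens / 1_000_000) * input_per_1m \
         + (output_tokens / 1_000_000) * output_per_1m

print(request_cost_usd(1_200, 350))  # 0.0 at qwen2.5's $0 / $0 rates
```

The same formula gives nonzero costs for paid models, e.g. 1M input tokens at $3.00 / 1M costs $3.00.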

pip install ai5labs-relay · full docs on GitHub
