Relay by Ai5labs

groq/llama-3.1-8b-instant

Pricing and limits:
- Input: $0.05 / 1M tokens
- Output: $0.08 / 1M tokens
- Context window: 131K tokens
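At these rates, per-request cost is a simple linear function of token counts. A minimal sketch of the arithmetic (the `cost_usd` helper is illustrative, not part of the Relay API; prices are taken from the card above):

```python
INPUT_PER_M = 0.05   # USD per 1M input tokens
OUTPUT_PER_M = 0.08  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one llama-3.1-8b-instant call at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a call with 2,000 input tokens and 500 output tokens:
print(f"{cost_usd(2_000, 500):.6f} USD")  # 0.000140 USD
```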

Capabilities

tools, streaming

Use llama-3.1-8b-instant via Relay

Configure the model alias in YAML, then call it from Python.

YAML
# models.yaml
version: 1
models:
  llama:
    target: groq/llama-3.1-8b-instant
    credential: $env.GROQ_API_KEY
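The $env.GROQ_API_KEY reference resolves from the process environment, so the key must be exported before the script runs. A quick sketch (the key value is a placeholder, not a real credential):

```shell
# Make the Groq API key visible to Relay; replace the placeholder
# with your actual key before running the Python example.
export GROQ_API_KEY="your-api-key"
```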
Python
import asyncio

from relay import Hub

async def main() -> None:
    # Load the "llama" alias from models.yaml, send one chat message,
    # and print the reply text plus the metered cost in USD.
    async with Hub.from_yaml("models.yaml") as hub:
        resp = await hub.chat(
            "llama",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(resp.text, resp.cost_usd)

asyncio.run(main())

pip install ai5labs-relay · full docs on GitHub
