Relay by Ai5labs

groq/llama-3.3-70b-versatile

Aliases: llama-3.3-70b

Input: $0.59 / 1M tokens · Output: $0.79 / 1M tokens · Context: 131K · Speed: 280 t/s
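At these rates, per-request cost is linear in token counts. A minimal sketch using the pricing from the table above (token counts are illustrative):

```python
# Per-million-token pricing for groq/llama-3.3-70b-versatile (from the table above)
INPUT_PER_M = 0.59
OUTPUT_PER_M = 0.79

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 1,000-token prompt with a 500-token completion:
print(f"{request_cost(1000, 500):.6f}")  # → 0.000985
```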

Public benchmark scores

Sourced from each provider's published numbers. Verify before quoting.

Quality index: 56
MMLU: 86
GPQA: 50.5
HumanEval: 88.4
MATH: 77
SWE-bench: —
Arena Elo: —

Sources: meta-llama-3.3-blog

Capabilities

tools · json_mode · streaming

Use llama-3.3-70b-versatile via Relay

Configure the model alias in YAML, then call it from Python.

YAML
# models.yaml
version: 1
models:
  llama-3.3-70b:
    target: groq/llama-3.3-70b-versatile
    credential: $env.GROQ_API_KEY
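
The `credential: $env.GROQ_API_KEY` line resolves the key from the environment when the config is loaded, so export it before running (the value below is a placeholder, not a real key):

```shell
# Placeholder value — substitute your actual Groq API key
export GROQ_API_KEY="your-groq-api-key"
```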
Python
import asyncio

from relay import Hub

async def main() -> None:
    # Load the alias mapping from models.yaml and open the hub connection.
    async with Hub.from_yaml("models.yaml") as hub:
        resp = await hub.chat(
            "llama-3.3-70b",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(resp.text, resp.cost_usd)

asyncio.run(main())

pip install ai5labs-relay · full docs on GitHub
