meta-llama/llama-3.2-3b-instruct

Input / 1M: $0.05
Output / 1M: $0.34
Context: 80K
Speed: —

Capabilities: no capabilities flagged in the catalog.
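The per-token prices above make request cost easy to estimate. A minimal sketch (the `estimate_cost` helper below is illustrative, not part of Relay or the catalog):

```python
# Catalog prices for meta-llama/llama-3.2-3b-instruct, in USD per 1M tokens
INPUT_USD_PER_M = 0.05
OUTPUT_USD_PER_M = 0.34

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the catalog rates."""
    return (input_tokens / 1_000_000) * INPUT_USD_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_USD_PER_M

# e.g. a 2,000-token prompt with a 500-token completion:
# 2000/1M * $0.05 + 500/1M * $0.34 = $0.0001 + $0.00017 = $0.00027
print(f"${estimate_cost(2000, 500):.5f}")
```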
Use llama-3.2-3b-instruct via Relay
Configure the model alias in YAML, then call it from Python.
YAML

# models.yaml
version: 1
models:
  llama:
    target: meta-llama/llama-3.2-3b-instruct
    credential: $env.META-LLAMA_API_KEY

Python
import asyncio

from relay import Hub

async def main():
    # Load the alias table defined in models.yaml
    async with Hub.from_yaml("models.yaml") as hub:
        resp = await hub.chat(
            "llama",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(resp.text, resp.cost_usd)

asyncio.run(main())

pip install ai5labs-relay · full docs on GitHub