# Compare models
Comparing 2 models. Drop the URL into a doc — it's permalinked.
| Field | openai/gpt-4o-mini | groq/llama-3.3-70b-versatile |
|---|---|---|
| Provider | openai | groq |
| Model ID | gpt-4o-mini | llama-3.3-70b-versatile |
| Context | 128K tokens | 131K tokens |
| Max output | 16K tokens | 33K tokens |
| Input / 1M tokens | $0.15 | $0.59 |
| Output / 1M tokens | $0.60 | $0.79 |
| Cached input / 1M tokens | $0.07 | — |
| Avg cost / 1M tokens | $0.38 | $0.69 |
| Speed | 145 t/s | 280 t/s |
| Quality index | 60.0 | 56.0 |
| MMLU | 82.0 | 86.0 |
| GPQA | 40.2 | 50.5 |
| HumanEval | 87.2 | 88.4 |
| MATH | 70.2 | 77.0 |
| SWE-bench | — | — |
| Arena Elo | — | — |
| Tools | ✓ | ✓ |
| Vision | ✓ | — |
| Thinking | — | — |
| Streaming | ✓ | ✓ |
| JSON mode | ✓ | ✓ |
| Structured output | ✓ | — |
| Prompt cache | — | — |
Same data, in your terminal: `relay models compare 4o-mini llama-3.3-70b`
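The "Avg cost" row is consistent with a simple mean of the input and output prices per 1M tokens ($0.15/$0.60 → $0.38, $0.59/$0.79 → $0.69). A minimal sketch of that blend, assuming an even input/output token split (the exact weighting used by the tool is not stated here):

```python
def avg_cost(input_per_m: float, output_per_m: float) -> float:
    """Blended price per 1M tokens, assuming a 50/50 input/output split."""
    return (input_per_m + output_per_m) / 2

# Figures from the table above
print(round(avg_cost(0.15, 0.60), 2))  # gpt-4o-mini
print(round(avg_cost(0.59, 0.79), 2))  # llama-3.3-70b-versatile
```

If your workload is input-heavy (e.g. long prompts, short completions), a weighted blend will track your real spend more closely than this even split.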