# Model Comparison
Baseline scores + community votes · 37 models
| Model | Provider | Value ↕ | Overall | Complex | Reasoning | Coding | Speed | Est. Cost | Input $/1M |
|---|---|---|---|---|---|---|---|---|---|
| DeepSeek V3 ★ Best value | DeepSeek | 7.6 | 7.9 | 8.0 | 8.0 | 8.5 | 7.0 | $0.049 | $0.270 |
| Llama 4 Maverick | Together | 7.5 | 7.8 | 7.5 | 7.5 | 7.5 | 8.0 | $0.032 | $0.270 |
| Gemini 2.0 Flash | Google | 7.4 | 8.0 | 7.0 | 7.0 | 7.5 | 9.5 | $0.018 | $0.100 |
| Codestral | Mistral | 7.4 | 7.9 | 7.0 | 7.0 | 9.5 | 8.0 | $0.048 | $0.300 |
| GLM-4 Plus | Zhipu | 7.4 | 7.6 | 7.5 | 7.5 | 7.0 | 8.0 | $0.017 | $0.140 |
| Qwen Max | Alibaba | 7.3 | 7.7 | 8.0 | 8.0 | 7.5 | 7.0 | $0.064 | $0.400 |
| Qwen Plus | Alibaba | 7.3 | 7.5 | 7.0 | 7.0 | 7.0 | 8.0 | $0.013 | $0.080 |
| Llama 3.3 70B | Groq | 7.2 | 8.2 | 7.5 | 7.5 | 7.5 | 9.8 | $0.075 | $0.590 |
| Grok 3 Mini | xAI | 7.2 | 7.7 | 6.5 | 8.0 | 7.0 | 8.5 | $0.040 | $0.300 |
| MiniMax Text 01 | MiniMax | 7.0 | 7.5 | 7.5 | 7.5 | 7.0 | 7.5 | $0.042 | $0.200 |
| GPT-4o Mini | OpenAI | 7.0 | 7.6 | 6.5 | 6.5 | 7.0 | 9.0 | $0.027 | $0.150 |
| DeepSeek R1 | DeepSeek | 6.9 | 7.0 | 9.0 | 9.5 | 8.5 | 3.5 | $0.099 | $0.550 |
| Moonshot v1 128K | Moonshot | 6.8 | 7.4 | 7.5 | 7.0 | 7.0 | 7.5 | $0.098 | $0.820 |
| Moonshot v1 8K | Moonshot | 6.8 | 7.3 | 6.0 | 6.5 | 6.5 | 9.0 | $0.014 | $0.120 |
| DeepSeek R1 Distill 70B | Groq | 6.8 | 7.7 | 8.0 | 9.0 | 7.5 | 9.0 | $0.095 | $0.750 |
| Llama 3.3 70B Turbo | Together | 6.7 | 7.5 | 7.0 | 7.0 | 7.0 | 8.5 | $0.106 | $0.880 |
| Gemini 2.0 Flash Lite | Google | 6.6 | 7.4 | 5.5 | 5.5 | 6.5 | 9.8 | $0.013 | $0.075 |
| ABAB 6.5s | MiniMax | 6.6 | 7.1 | 6.0 | 6.0 | 6.0 | 9.0 | $0.012 | $0.100 |
| Qwen Turbo | Alibaba | 6.5 | 7.1 | 5.5 | 5.5 | 6.0 | 9.5 | $0.0032 | $0.020 |
| Mistral Small | Mistral | 6.3 | 6.9 | 5.5 | 5.5 | 6.0 | 9.0 | $0.016 | $0.100 |
| Sonar | Perplexity | 6.2 | 7.2 | 6.5 | 6.5 | 6.0 | 8.5 | $0.120 | $1.00 |
| Command R | Cohere | 6.1 | 6.7 | 6.0 | 6.0 | 6.0 | 8.0 | $0.027 | $0.150 |
| GLM-4 Flash | Zhipu | 6.1 | 6.8 | 5.0 | 5.5 | 5.5 | 9.5 | Free | Free |
| Llama 3.1 8B Instant | Groq | 6.1 | 6.9 | 5.0 | 5.5 | 5.5 | 10.0 | $0.0066 | $0.050 |
| o4-mini | OpenAI | 6.1 | 7.3 | 8.0 | 9.0 | 8.0 | 5.5 | $0.198 | $1.10 |
| Jamba 1.5 Mini | AI21 | 5.9 | 6.7 | 5.5 | 5.5 | 5.5 | 9.0 | $0.028 | $0.200 |
| Claude Haiku 4.5 | Anthropic | 5.7 | 7.6 | 6.0 | 6.0 | 7.0 | 9.5 | $0.160 | $0.800 |
| Gemini 2.5 Pro | Google | 5.5 | 8.2 | 9.0 | 9.0 | 8.5 | 6.5 | $0.325 | $1.25 |
| Mistral Large | Mistral | 5.4 | 7.7 | 7.5 | 7.5 | 7.5 | 8.0 | $0.320 | $2.00 |
| GPT-4o | OpenAI | 5.2 | 8.3 | 8.5 | 8.0 | 8.5 | 8.0 | $0.450 | $2.50 |
| Jamba 1.5 Large | AI21 | 4.8 | 7.3 | 7.5 | 7.0 | 7.0 | 7.5 | $0.360 | $2.00 |
| Claude Sonnet 4.5 | Anthropic | 4.5 | 8.2 | 8.5 | 8.5 | 8.5 | 7.0 | $0.600 | $3.00 |
| Grok 3 | xAI | 4.5 | 8.2 | 8.5 | 9.0 | 8.5 | 7.0 | $0.600 | $3.00 |
| Command R+ | Cohere | 4.5 | 7.3 | 7.5 | 7.0 | 7.0 | 7.5 | $0.450 | $2.50 |
| Sonar Pro | Perplexity | 4.1 | 7.6 | 8.0 | 7.5 | 7.0 | 7.0 | $0.600 | $3.00 |
| o3 | OpenAI | 2.3 | 7.2 | 9.5 | 9.8 | 9.0 | 3.0 | $1.80 | $10.00 |
| Claude Opus 4.5 | Anthropic | 1.6 | 7.9 | 9.5 | 9.5 | 9.0 | 4.5 | $3.00 | $15.00 |
Community votes shift scores ±0.1 per net vote · Value = quality ÷ (price × 0.1 + 1)
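The two scoring rules above can be sketched in a few lines of Python. This is a minimal illustration, not the site's actual implementation: the page does not specify which quality score or which price column feeds the Value formula, so treating `quality` as the Overall score and `price` as a dollar figure are assumptions here.

```python
# Sketch of the table's scoring rules.
# Assumptions (not stated on the page): "quality" is the Overall score
# and "price" is a dollar amount from one of the cost columns.

def value_score(quality: float, price: float) -> float:
    """Value = quality / (price * 0.1 + 1): a free model keeps its raw
    quality score; pricier models are progressively discounted."""
    return quality / (price * 0.1 + 1)

def vote_adjusted(base: float, net_votes: int) -> float:
    """Community votes shift a score by +/-0.1 per net vote
    (upvotes minus downvotes)."""
    return base + 0.1 * net_votes

print(round(value_score(8.0, 2.0), 2))  # → 6.67  (8.0 / 1.2)
print(round(vote_adjusted(7.5, 3), 2))  # → 7.8   (7.5 + 0.3)
```

Note the `+ 1` in the denominator keeps the formula defined for free models (price 0 leaves quality unchanged), and the `× 0.1` damping means price differences only dominate the ranking at the expensive end of the table.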