Current AI Benchmarks (Week 6, 2026)

Key Takeaways (Week 6):

  • The Value Leader: Liquid AI sweeps the top two spots. At $0.0125 per 1M tokens, its LFM2 models cost roughly 38-69% less than the other Top 5 entries, giving them the highest Efficiency Scores despite moderate latency (see the quick calculation after this list).

  • The Speed Demons: If latency is your priority, Ministral 3B (#5) and Llama Guard 3 8B (#4) are the clear winners, both clocking in under 0.20s.

  • Small is Big: Every model in the Top 5 comes in under 10B parameters. The era of massive, expensive models for everyday tasks is ending.
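
For concreteness, here is a quick back-of-the-envelope check of the cost gap described in the first takeaway. A minimal Python sketch; the prices come straight from the Top 5 table below and nothing else is assumed.

# Per-1M-token prices from the Top 5 table (USD).
lfm2_price = 0.0125  # LFM2 2.6B and LFM2 8B (Liquid AI)
other_prices = {
    "Llama 3.2 3B Instruct": 0.0200,
    "Llama Guard 3 8B": 0.0300,
    "Ministral 3B": 0.0400,
}

# Percent saved by picking an LFM2 model over each competitor.
for model, price in other_prices.items():
    savings = (1 - lfm2_price / price) * 100
    print(f"{model}: {savings:.1f}% cheaper")

# Output: 37.5%, 58.3%, and 68.8% cheaper, respectively.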

🏆 Top 5 Models (Ranked by Efficiency):

Rank | Model                  | Provider  | Cost (per 1M tokens) | Latency (s) | Efficiency Score
#1   | LFM2 2.6B              | Liquid AI | $0.0125              | 0.47        | 14,134
#2   | LFM2 8B                | Liquid AI | $0.0125              | 0.55        | 12,345
#3   | Llama 3.2 3B Instruct  | Meta      | $0.0200              | 0.32        | 11,961
#4   | Llama Guard 3 8B       | Meta      | $0.0300              | 0.18        | 11,737
#5   | Ministral 3B           | Mistral   | $0.0400              | 0.17        | 9,293

Updated every Monday at 8 AM EST. Want the raw CSV?
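
If you do pull the raw CSV, re-deriving the ranking takes only a few lines. A minimal sketch, assuming a hypothetical file name (compute_index_week6.csv) and column headers matching the table above; neither is confirmed here, so adjust both to whatever the published file actually uses.

import csv

# Hypothetical file name and headers; adjust to match the published CSV.
with open("compute_index_week6.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Sort descending by Efficiency Score (values formatted like "14,134").
rows.sort(key=lambda r: int(r["Efficiency Score"].replace(",", "")), reverse=True)

for rank, r in enumerate(rows[:5], start=1):
    print(f"#{rank} {r['Model']} ({r['Provider']}): {r['Efficiency Score']}")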
