Specs, pricing and head-to-head comparisons for the models that actually matter — refreshed whenever a new one ships.
The frontier releases from the last 60 days — specs, pricing and benchmarks on every page.
First full retrain since GPT-4.5. 1M context, $5/$30 pricing, 82.7% on Terminal-Bench 2.0. Shipped 23 Apr 2026.
Flagship · Apr 2026

2M-token context — the largest available. Native video, audio and text in one pass, no transcription step.
Multimodal · Apr 2026

Largest open-weights model of 2026 — 1.6T params, MIT-licensed, 80.6% SWE-bench, from $0.14/1M tokens.
Open weights · Apr 2026

Open Apache 2.0 family — four sizes from phone to workstation, 89.2% on AIME, runs on one GPU.
Open source · Apr 2026

Open-weight 1T-param coding model — ties GPT-5.5 on SWE-bench Pro at ~80% lower cost.
Coding · Apr 2026

AI code editor rebuilt around agent orchestration — the new Agents Window. Shipped 2 Apr 2026.
Dev tool · Apr 2026

87.6% on SWE-bench Verified — a 7-point jump in one release. 1M context, $5/$25 pricing.
Flagship · Apr 2026

Reportedly 6T parameters, trained on the Colossus 2 supercluster. Expected Q2 2026.
Upcoming · Q2 2026

The model match-ups people search for most — benchmarks, pricing and a clear verdict.
Which flagship wins on coding, context window, multimodal ability and price?
Updated May 2026

Open MIT weights vs closed flagship — an 8-10x price gap and who each suits.
Updated May 2026

Code quality vs agentic reliability — Opus leads SWE-bench Pro, GPT-5.5 owns Codex.
Updated May 2026

Two open-weight coding models head to head — context, price and licensing.
Updated May 2026

Code quality vs multimodal scale — 1M vs 2M context, and a 2x price gap.
Updated May 2026

New AI models ship almost every week, each with its own benchmarks, pricing tiers and marketing language. AI Model Hub cuts through it: every page gives you the specs, the real pricing, the benchmark numbers and a plain-English read on who the model is actually for. Independent, no affiliation with any model provider, and refreshed whenever something new lands.