max_tokens
Set completion limits to avoid unpredictable long-output spend.
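One way to pick a completion limit is to derive it from a per-request output budget. The sketch below assumes the $6.00 per 1M output tokens rate quoted on this page; the helper name and the 8,192-token hard cap are illustrative choices, not xAI defaults.

```python
# Sketch: derive a max_tokens cap from a per-request output budget.
# OUTPUT_RATE uses the $6.00 per 1M output tokens figure listed for
# Grok 4.20; adjust it if your contract rate differs.

OUTPUT_RATE_PER_TOKEN = 6.00 / 1_000_000  # USD per output token

def max_tokens_for_budget(budget_usd: float, hard_cap: int = 8_192) -> int:
    """Largest completion size that keeps one request under budget_usd."""
    affordable = int(budget_usd / OUTPUT_RATE_PER_TOKEN)
    return min(affordable, hard_cap)

# A $0.01 output budget buys at most 1,666 tokens at $6/1M;
# larger budgets fall back to the hard cap.
print(max_tokens_for_budget(0.01))  # 1666
print(max_tokens_for_budget(1.00))  # 8192
```

Passing the resulting value as the request's `max_tokens` bounds the worst-case spend of any single call.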
Grok 4.20 is a balanced model with ultra-long context support, suited to general reasoning and fast responses for enterprise teams.
Try Grok 4.20 with your team
Last reviewed: 2026-04-07

xAI lists Grok 4.20 as an ultra-long-context option priced at $2.00 per 1M input tokens and $6.00 per 1M output tokens, with text-to-text modality support for enterprise AI operations.
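The listed rates make per-request cost a simple linear sum. A minimal sketch, using the $2.00/1M input and $6.00/1M output figures above (the example token counts are illustrative):

```python
# Sketch: estimate one request's cost from the listed Grok 4.20 rates.

INPUT_RATE = 2.00 / 1_000_000   # USD per input token
OUTPUT_RATE = 6.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Total USD cost of a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 100k-token ultra-long-context prompt with a 2k-token answer:
print(round(request_cost(100_000, 2_000), 3))  # 0.212
```

Note how input dominates for long-context workloads: at these rates, the 100k-token prompt costs $0.20 while the 2k-token answer costs only $0.012.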
Explore adjacent model profiles for routing and benchmarking decisions.
Start Smaller
Choose your team and goals, then start with the AI use cases that fit best and carry the least risk.
You get
Recommended first use cases for your company.
Set completion limits to avoid unpredictable long-output spend.
Lower temperature for deterministic policy and compliance tasks.
Use tighter sampling for stable outputs in repeatable operations.
Prefer structured output where responses feed internal systems.
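The four tips above translate into a handful of request parameters. The sketch below uses an OpenAI-style chat-completions payload; the model identifier, field names, and `response_format` value are assumptions — check the xAI API reference for the exact shape your SDK expects.

```python
# Sketch: request parameters reflecting the tips above.
# "grok-4.20" is a hypothetical model identifier for illustration.

payload = {
    "model": "grok-4.20",
    "max_tokens": 1024,       # completion limit: cap long-output spend
    "temperature": 0.1,       # low temperature for policy/compliance tasks
    "top_p": 0.3,             # tighter sampling for repeatable operations
    "response_format": {"type": "json_object"},  # structured output
    "messages": [
        {"role": "system", "content": "Answer with a single JSON object."},
        {"role": "user", "content": "Summarize the data retention policy."},
    ],
}
```

Structured output (`response_format`) matters most when responses feed internal systems, since downstream parsers can then reject anything that is not valid JSON.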
Start Smaller
Test what can go wrong before teams start using AI loosely across the company.
You get
A short risk summary with the main gaps to close.
Use policy controls, role-based access, and budget guardrails before enabling advanced model tiers at scale.