xAI (Grok)
xAI provides the Grok family of models, designed for conversational AI with strong reasoning capabilities, large context windows (up to 2M tokens), and image generation via Grok Imagine.
Getting an API Key
- Visit console.x.ai
- Sign in or create an account
- Generate a new API key (starts with `xai-...`)
- Paste the key into AI Supreme Council under Settings > AI Model > xAI
API keys are stored locally in your browser (localStorage) and are never included in shared bot URLs.
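The local key handling described above can be sketched as follows. This is an illustrative sketch only: `saveXaiKey`, `loadXaiKey`, and the `"xai-api-key"` storage key name are made up for this example, not the platform's actual internals, and a minimal interface stands in for the browser's `localStorage` so the sketch runs anywhere.

```typescript
// Minimal stand-in for the browser localStorage API, so this runs outside a browser.
interface KeyStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "xai-api-key"; // hypothetical storage key name

// Validate the expected key prefix before persisting locally.
function saveXaiKey(store: KeyStore, apiKey: string): void {
  if (!apiKey.startsWith("xai-")) {
    throw new Error("xAI API keys start with 'xai-'");
  }
  store.setItem(STORAGE_KEY, apiKey);
}

// Read the key back from local storage; null if none was saved.
function loadXaiKey(store: KeyStore): string | null {
  return store.getItem(STORAGE_KEY);
}
```

Because the key never leaves local storage, sharing a bot URL shares only the configuration, not the credential.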
Supported Models
| Model | Context Window | Max Output | Input Price | Output Price | Capabilities |
|---|---|---|---|---|---|
| Grok 4.1 Fast | 2M | 16K | $0.20/MTok | $0.50/MTok | Vision, tools, reasoning, code |
| Grok 4 Fast | 2M | 16K | $0.20/MTok | $0.50/MTok | Tools, reasoning, code |
| Grok 4 | 256K | 16K | $3.00/MTok | $15.00/MTok | Tools, reasoning, code |
| Grok 3 | 131K | 131K | $3.00/MTok | $15.00/MTok | Tools, code |
| Grok 3 Mini | 131K | 131K | $0.30/MTok | $0.50/MTok | Tools, reasoning, code |
| Grok Code Fast | 256K | 16K | $0.20/MTok | $1.50/MTok | Tools, code |
Prices are per million tokens (MTok). Cached input pricing available on select models.
- Grok 4.1 Fast is the latest and recommended default -- it combines vision, reasoning, and a massive 2M context window at a low price.
- Grok 4 is the full-power reasoning model for complex tasks that justify the higher price.
- Grok 3 Mini is the budget-friendly option with reasoning support and high output capacity (131K tokens).
- Grok Code Fast is optimized specifically for code generation tasks.
Reasoning Support
Grok 3 Mini, Grok 4, Grok 4 Fast, and Grok 4.1 Fast all support reasoning. Since xAI uses the OpenAI-compatible API format, reasoning is controlled via the Reasoning Effort setting in the bot configuration panel:
| Setting | Behavior |
|---|---|
| `low` | Minimal reasoning -- fastest responses |
| `medium` | Balanced reasoning |
| `high` | Deep reasoning -- best quality |
Reasoning output from Grok models appears as `reasoning_content` in the SSE stream. The platform parses this and displays it in a collapsible thinking block above the main response, just like extended thinking from other providers.
Reasoning models may produce thinking tokens that are not billed separately by xAI but do consume time. Use low reasoning effort for quick tasks and high for problems that benefit from deeper analysis.
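A request with the effort setting applied could look like the sketch below. The field names follow the OpenAI Chat Completions format this page says xAI mirrors; whether every reasoning-capable Grok model accepts all three `reasoning_effort` values is an assumption here -- check the xAI docs for the values each model supports.

```typescript
// Assumed effort values, mirroring the settings table above.
type ReasoningEffort = "low" | "medium" | "high";

// Build an OpenAI-compatible chat completions body with reasoning effort set.
function buildReasoningRequest(model: string, prompt: string, effort: ReasoningEffort) {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
    reasoning_effort: effort, // OpenAI-format field name (assumption for xAI)
    stream: true,
  };
}
```

The bot configuration panel's Reasoning Effort setting would map directly onto the `effort` argument here.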
Vision Support
Grok 4.1 Fast supports vision input. You can:
- Paste images directly into the chat input (Ctrl+V / Cmd+V)
- Upload images using the camera button
- Drag and drop images into the chat area
Grok can analyze images, read text in screenshots, describe visual content, and reason about diagrams. Image data is sent in the OpenAI-compatible `image_url` content block format with the full data URL.
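A vision message in that format can be sketched like this; the content-part shapes follow the OpenAI-compatible format the page names, and the helper itself is illustrative.

```typescript
// Build a user message carrying both text and an inline image.
// The image is embedded as a full data URL, as described above.
function buildVisionMessage(prompt: string, base64Png: string) {
  return {
    role: "user" as const,
    content: [
      { type: "text" as const, text: prompt },
      {
        type: "image_url" as const,
        image_url: { url: `data:image/png;base64,${base64Png}` },
      },
    ],
  };
}
```

Pasting, uploading, or dropping an image all end up producing a message of this shape.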
For more details on vision input, see the Vision feature page.
Tool Calling
All Grok models support function/tool calling via the OpenAI-compatible format. Define tools in your bot configuration, and Grok will invoke them as structured function calls. The platform normalizes tool calls between Anthropic and OpenAI formats, so tools work consistently across all providers.
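A tool definition in the OpenAI function-calling format looks like the sketch below. The `get_weather` tool is a made-up example, not a platform built-in; only the wire format is taken from the page.

```typescript
// One tool in the OpenAI function-calling format: name, description,
// and a JSON Schema describing the arguments the model should produce.
const weatherTool = {
  type: "function" as const,
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. 'Oslo'" },
      },
      required: ["city"],
    },
  },
};

// Tools ride along in the chat completions request body under "tools".
function withTools(body: Record<string, unknown>, tools: unknown[]) {
  return { ...body, tools };
}
```

Grok then responds with structured `tool_calls` naming the function and its JSON arguments, which the platform normalizes across providers.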
Image Generation (Grok Imagine)
xAI supports image generation via Grok Imagine using the `grok-2-image` model. When you use the `/image` command or natural language triggers like "draw" or "generate an image of", the platform routes the request to xAI's images endpoint.
The image generation request is sent to `api.x.ai/v1/images/generations` with:
- Model: `grok-2-image`
- Response format: URL (falls back to base64 if the provider returns it)
- Authentication: Bearer token (your xAI API key)
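Putting those pieces together, the request could be built as sketched below. The endpoint, model, and bearer auth come from the list above; the `response_format` field name follows the OpenAI images API and is an assumption for xAI.

```typescript
// Assemble the image generation request described above.
function buildImageGenRequest(prompt: string, apiKey: string) {
  return {
    url: "https://api.x.ai/v1/images/generations",
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${apiKey}`, // your xAI API key
      "Content-Type": "application/json",
    },
    body: {
      model: "grok-2-image",
      prompt,
      response_format: "url", // OpenAI-style field; assumption for xAI
    },
  };
}
```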
For more details, see the Image Generation feature page.
OpenAI-Compatible API
xAI uses a fully OpenAI-compatible Chat Completions API at `api.x.ai/v1/chat/completions`. The platform uses the shared `openaiCompatible()` SSE streaming factory, which means:
- Standard `POST /v1/chat/completions` endpoint
- Bearer token authentication
- SSE streaming with `data:` events
- Same request/response format as OpenAI
- Reasoning content streamed via the `reasoning_content` delta field
No special configuration is needed -- the platform handles routing automatically for any model prefixed with `grok-*`.
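Handling one line of that SSE stream can be sketched as below, splitting regular content from `reasoning_content` as described. The chunk shape follows the OpenAI streaming format; real code would also buffer partial lines and handle malformed JSON.

```typescript
// Parse one "data: {...}" SSE line into content and reasoning deltas.
function parseSseLine(line: string): { text: string; reasoning: string } {
  const empty = { text: "", reasoning: "" };
  if (!line.startsWith("data:")) return empty;           // ignore non-data lines
  const payload = line.slice("data:".length).trim();
  if (payload === "" || payload === "[DONE]") return empty; // stream terminator
  const delta = JSON.parse(payload).choices?.[0]?.delta ?? {};
  return {
    text: typeof delta.content === "string" ? delta.content : "",
    reasoning: typeof delta.reasoning_content === "string" ? delta.reasoning_content : "",
  };
}
```

The `reasoning` part is what the platform accumulates into the collapsible thinking block; `text` feeds the main response.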
Pricing
xAI bills per input and output token. The Fast models (Grok 4 Fast, Grok 4.1 Fast, Grok Code Fast) are notably affordable:
| Model Tier | Input | Output | Best For |
|---|---|---|---|
| Fast (4.1 Fast, 4 Fast, Code Fast) | $0.20/MTok | $0.50-1.50/MTok | Daily use, code, long context |
| Standard (Grok 4, Grok 3) | $3.00/MTok | $15.00/MTok | Complex analysis, high-quality output |
| Budget (Grok 3 Mini) | $0.30/MTok | $0.50/MTok | Experimentation, reasoning on a budget |
Refer to x.ai for current pricing. AI Supreme Council tracks token usage per provider in Settings > Usage.
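As a quick sanity check on the tiers above, per-call cost is just token count divided by a million, times the rate; the helper below is illustrative, using the table's published rates.

```typescript
// Estimate USD cost for one call from token counts and per-MTok rates.
// (Rates change; check x.ai for current pricing.)
function estimateCostUsd(
  inputTokens: number,
  outputTokens: number,
  inputPerMTok: number,
  outputPerMTok: number,
): number {
  return (inputTokens / 1e6) * inputPerMTok + (outputTokens / 1e6) * outputPerMTok;
}

// e.g. Grok 4.1 Fast, 100K input + 2K output:
// (0.1 * $0.20) + (0.002 * $0.50) = $0.021
```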
Configuration
When creating a bot profile, select xAI as the provider and choose your preferred model. You can set a per-bot API key in the bot configuration panel to override the global key.
The xAI provider is registered with the `openaiCompatible()` factory pointing to `https://api.x.ai/v1/chat/completions`, so all standard features (streaming, tool calling, reasoning) work out of the box.
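What that registration might look like is sketched below. The `openaiCompatible()` factory name and the endpoint come from this page; the registry shape and `routeByModel` helper are illustrative, not the platform's actual code.

```typescript
type Provider = { endpoint: string; auth: "bearer" };

// Hypothetical factory: every OpenAI-compatible provider differs only
// by endpoint; streaming, auth, and parsing are shared.
function openaiCompatible(endpoint: string): Provider {
  return { endpoint, auth: "bearer" };
}

const providers: Record<string, Provider> = {
  xai: openaiCompatible("https://api.x.ai/v1/chat/completions"),
};

// Models prefixed grok-* route to the xAI provider automatically.
function routeByModel(model: string): Provider | undefined {
  return model.startsWith("grok-") ? providers.xai : undefined;
}
```

This design choice is why new Grok models tend to work without platform changes: any `grok-*` model id resolves to the same shared code path.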
Tips for Best Results
- Leverage the 2M context window. Grok 4 Fast and Grok 4.1 Fast have 2M-token context windows, among the largest available -- great for analyzing entire repositories or large document collections.
- Use Grok Code Fast for programming. It is specifically optimized for code generation and understanding, with output pricing at $1.50/MTok.
- Start with Grok 3 Mini for experimentation. At $0.30/$0.50 per MTok with reasoning support, it is one of the most cost-effective reasoning models available.
- Enable reasoning for hard problems. Set reasoning effort to `high` for math, logic, and multi-step problems to get significantly better results.
- Combine with councils. Grok models make excellent council members alongside Claude and GPT models, bringing a different perspective and the largest context window.