Chat

AI Supreme Council provides a full-featured chat interface for conversing with AI models directly in your browser. All communication goes straight from your browser to the AI provider -- no proxy server, no middleware.

Streaming Responses

Responses arrive in real time, token by token. As the model generates text, you see it appear incrementally in the chat area. This gives immediate feedback and lets you stop generation early if the response is heading in the wrong direction.

While a response is streaming, the Send button changes to Stop. Click it or press Escape to halt generation. Any text generated so far is kept in the conversation.
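Conceptually, streaming with an early stop looks like the loop below. This is only a sketch in Python using the OpenAI SDK (the app itself talks to the provider directly from the browser); the model name, the client setup, and the `user_pressed_stop()` helper are illustrative assumptions, not part of the app.

```python
# Sketch of token streaming with early stop, using the OpenAI Python SDK.
# Model name and the stop-check helper are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def user_pressed_stop() -> bool:
    return False  # placeholder for the Stop button / Escape handler


stream = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "Explain streaming in one sentence."}],
    stream=True,
)

text = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    text += delta                      # whatever has arrived so far is kept
    print(delta, end="", flush=True)   # tokens appear as they are generated
    if user_pressed_stop():
        break                          # stopping preserves the partial text
```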

tip

If you switch to a different bot while a response is streaming, the stream continues in the background. When you switch back, the completed response will be there waiting for you.

Markdown Rendering

Assistant responses are rendered as rich markdown by default. This includes:

  • Headings (H1 through H6)
  • Bold, italic, and strikethrough text
  • Bullet lists and numbered lists
  • Hyperlinks
  • Tables
  • Blockquotes
  • Inline code and fenced code blocks with syntax highlighting

Code Blocks

Fenced code blocks are rendered with syntax highlighting and a copy button. Click the clipboard icon in the top-right corner of any code block to copy its contents.

```python
def hello():
    print("Hello from AI Supreme Council")
```

info

You can disable markdown rendering in the config panel under Chat Settings if you prefer plain text output.

Message Actions

Hover over any message to reveal action buttons:

| Action | Available On | What It Does |
| --- | --- | --- |
| Copy | User and Assistant | Copies the message text to your clipboard |
| Regenerate | Assistant only | Re-sends the previous user message to get a new response |
| Edit | User only | Opens the message for editing, then re-sends it |
| Delete | User and Assistant | Removes the message from the conversation |
| Fork | User and Assistant | Creates a new conversation branching from this point |

Regenerate

When you regenerate an assistant message, the conversation is rolled back to the preceding user message, which is then re-sent to the model. This gives you a fresh response without retyping anything.
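In terms of the stored message list, regeneration can be pictured as popping the trailing assistant reply and re-sending what remains. A minimal sketch, with an assumed message format:

```python
# Illustrative only: regenerate = roll back to the last user message and resend.
messages = [
    {"role": "user", "content": "Summarize this article."},
    {"role": "assistant", "content": "The article argues that ..."},
]

while messages and messages[-1]["role"] == "assistant":
    messages.pop()  # drop the assistant reply being regenerated

# `messages` now ends with the preceding user message and is re-sent to the model.
```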

Edit

Editing a user message truncates the conversation at that point and re-sends your edited text. All messages after the edited message are removed. This is useful for refining your prompt without starting over.
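Seen the same way, editing is a truncate-and-resend operation. A minimal sketch; the index-based helper is illustrative, not the app's real API:

```python
# Illustrative only: editing message i removes everything after it and resends.
def edit_and_resend(messages, index, new_text):
    trimmed = messages[: index + 1]                          # drop all later messages
    trimmed[index] = {"role": "user", "content": new_text}   # apply the edit
    return trimmed                                           # re-sent to the model
```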

Fork

Forking creates a new conversation that contains all messages up to the forked point. The original conversation is unchanged. This lets you explore different conversation branches from the same starting point.
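As a data operation, forking simply copies the prefix of the thread into a new conversation. A minimal sketch, assuming the same message format as above:

```python
import copy

# Illustrative only: fork = copy messages up to and including the forked point
# into a new conversation; the original list is left untouched.
def fork_conversation(messages, index):
    return copy.deepcopy(messages[: index + 1])
```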

Context Window Management

AI models have a limited context window -- the number of tokens they can process in a single request. AI Supreme Council lets you control how much conversation history is sent with each message.

Set the Context Limit in the config panel under Chat Settings. For example, setting it to 20 means only the last 20 messages are sent to the model. Older messages are excluded from the API call but remain visible in your chat history.
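Put differently, only the tail of the visible history is included in the request. A sketch of that trimming, assuming a limit of 20 messages (the names are illustrative):

```python
CONTEXT_LIMIT = 20  # value of the Context Limit setting

def build_history(all_messages):
    sent = all_messages[-CONTEXT_LIMIT:]     # only the last 20 messages
    dropped = len(all_messages) - len(sent)  # count reported in the toast
    return sent, dropped
```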

note

When messages are excluded due to the context limit, you will see a toast notification indicating how many older messages were dropped.

Multi-Turn Conversations

Every message you send includes the conversation history (up to the context limit). The model sees the full thread of user and assistant messages, allowing it to maintain context across multiple exchanges. This enables:

  • Follow-up questions that reference earlier answers
  • Iterative refinement of ideas
  • Complex multi-step tasks like debugging or writing
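Concretely, each request body carries the thread as alternating user and assistant messages. A sketch of what the third turn of a conversation might send, assuming an OpenAI-style format and a placeholder model name:

```python
# Illustrative request body for a multi-turn conversation.
payload = {
    "model": "gpt-4o-mini",
    "stream": True,
    "messages": [
        {"role": "user", "content": "Write a haiku about autumn."},
        {"role": "assistant", "content": "Crisp leaves drift and fall..."},
        {"role": "user", "content": "Now translate it into French."},
    ],
}
```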

Stopping Generation

There are two ways to stop a response mid-stream:

  1. Click the Stop button -- the Send button becomes a Stop button during streaming
  2. Press Escape -- keyboard shortcut to halt generation immediately

Any text generated before stopping is preserved in the conversation. You can regenerate the response if you want a fresh attempt.

Chat Settings

The config panel (right sidebar) includes these chat-specific settings under Chat Settings:

| Setting | Default | Description |
| --- | --- | --- |
| Context Limit | Unlimited | Maximum number of messages sent to the model per request |
| Streaming | On | Toggle real-time token streaming on/off |
| Auto-title | Off | Automatically set the chat title from the first user message |
| Markdown Rendering | On | Render assistant responses as formatted markdown |
| Show Token Count | Off | Display token usage after each response |
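As a mental model, these settings map onto a simple configuration object like the one below; the field names are illustrative and not the app's actual storage format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatSettings:
    context_limit: Optional[int] = None  # None means Unlimited
    streaming: bool = True
    auto_title: bool = False
    markdown_rendering: bool = True
    show_token_count: bool = False
```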

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| Enter | Send message |
| Shift + Enter | Insert newline (without sending) |
| Ctrl + F (or Cmd + F on Mac) | Open conversation search |
| Escape | Stop generation / Close search |

Welcome Screen

When you start a new conversation, a welcome screen is displayed with quick-start information. It disappears as soon as you send your first message.

Background Streaming

If you switch to a different bot while a response is still streaming, the stream continues in the background. When you switch back to that bot, the completed (or still-streaming) response is reattached to the chat view. You do not lose any content by switching away.

warning

You cannot send a new message to a bot that has a background stream still running. Wait for it to finish or switch back to that bot and click Stop.

Error Handling

If an API call fails (network error, invalid key, rate limit), the error is displayed inline in the chat as a red error message. The conversation state is preserved -- you can fix the issue (such as adding an API key) and try again.
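The behavior amounts to: catch the failure, render it inline, and leave the history untouched so a retry can reuse it. A rough Python sketch of that flow; the `show_inline_error` helper, model name, and client setup are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def show_inline_error(message: str) -> None:
    print(f"[error] {message}")  # stands in for the red in-chat error message


def send(history):
    try:
        return client.chat.completions.create(model="gpt-4o-mini", messages=history)
    except Exception as exc:         # network error, invalid key, rate limit, ...
        show_inline_error(str(exc))
        return None                  # history stays intact, ready for a retry
```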