Smart Context Management

When working with Large Language Models (LLMs), there are limits to how much conversation history they can process at once. Goose provides smart context management features to help you maintain productive sessions even when reaching these limits. Here are the key concepts:

  • Context Length: The amount of conversation history the LLM can consider
  • Context Limit: The maximum number of tokens the model can process
  • Context Management: How Goose handles conversations approaching these limits
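To make these terms concrete, here is a minimal sketch of counting a conversation's tokens against a context limit. It assumes the tiktoken library and an illustrative 128,000-token limit; the actual tokenizer and limit depend on the model you have configured.

```python
# Illustrative only: the real tokenizer and context limit depend on your configured model.
import tiktoken

CONTEXT_LIMIT = 128_000  # example limit; actual models vary

def count_tokens(messages: list[str]) -> int:
    """Count tokens across all messages using an example tokenizer."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return sum(len(encoding.encode(m)) for m in messages)

conversation = [
    "Summarize the failing test output in ci.log.",
    "Here is the stack trace: ...",
]
used = count_tokens(conversation)
print(f"{used} of {CONTEXT_LIMIT} tokens used ({used / CONTEXT_LIMIT:.2%})")
```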

Smart Context Management Features

When a conversation reaches the context limit, Goose offers different ways to handle it:

Feature       | Description                                        | Best For                      | Impact
------------- | -------------------------------------------------- | ----------------------------- | ----------------------
Summarization | Condenses conversation while preserving key points | Long, complex conversations   | Maintains most context
Truncation    | Removes oldest messages to make room               | Simple, linear conversations  | Loses old context
Clear         | Starts fresh while keeping session active          | New direction in conversation | Loses all context
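As a rough illustration of how the three strategies differ in what they keep, consider the sketch below. The summarize_with_llm helper is a hypothetical placeholder for an LLM call; none of this is Goose's actual implementation.

```python
# Conceptual sketch of the three context-management strategies; not Goose's actual code.

def summarize_with_llm(messages: list[str]) -> str:
    # Hypothetical placeholder: a real implementation would ask an LLM to condense the messages.
    return f"{len(messages)} earlier messages about the task so far"

def summarize(messages: list[str]) -> list[str]:
    """Summarization: condense the history into a single summary message."""
    return [f"Summary of earlier conversation: {summarize_with_llm(messages)}"]

def truncate(messages: list[str], keep_last: int = 10) -> list[str]:
    """Truncation: drop the oldest messages and keep only the most recent ones."""
    return messages[-keep_last:]

def clear(messages: list[str]) -> list[str]:
    """Clear: start fresh while the session itself stays open."""
    return []

history = [f"message {i}" for i in range(1, 21)]
print(len(summarize(history)), len(truncate(history)), len(clear(history)))  # 1 10 0
```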

Using Smart Context Management

Goose Desktop manages context exclusively through summarization, preserving key information while reducing token usage.

When you reach the context limit in Goose Desktop:

  1. Goose will automatically start summarizing the conversation to make room.
  2. You'll see a message that says "Preparing summary...", followed by "Session summarized."
  3. Once complete, you'll have the option to "View or edit summary."
  4. You can then continue the session with the summarized context in place.
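Conceptually, this flow amounts to a check-and-summarize step like the sketch below. The threshold, the character-based token estimate, and the helper functions are illustrative assumptions, not Goose internals.

```python
# Conceptual sketch of an automatic summarization check; thresholds and helpers are illustrative.

CONTEXT_LIMIT = 128_000  # example value; the real limit depends on the model

def estimate_tokens(messages: list[str]) -> int:
    # Rough stand-in for a real tokenizer: assume ~4 characters per token.
    return sum(len(m) // 4 for m in messages)

def summarize_with_llm(messages: list[str]) -> str:
    # Hypothetical placeholder for an LLM call that condenses the conversation.
    return f"Summary of {len(messages)} earlier messages"

def maybe_summarize(messages: list[str]) -> list[str]:
    """When the conversation reaches the limit, replace older messages with a
    single summary and keep the most recent exchange."""
    if estimate_tokens(messages) < CONTEXT_LIMIT:
        return messages                       # enough room; nothing to do
    print("Preparing summary...")
    summary = summarize_with_llm(messages[:-2])
    print("Session summarized.")
    return [summary] + messages[-2:]          # summarized context plus the latest exchange
```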

Token Usage

After you send your first message to Goose, a colored circle appears next to the model name at the bottom of the session window. Its color indicates your token usage for the session at a glance.

  • Green: Normal usage - Plenty of context space available
  • Orange: Warning state - Approaching limit (80% of capacity)
  • Red: Error state - Context limit reached

Hover over this circle to display:

  • The number of tokens used
  • The percentage of available tokens used
  • The total available tokens
  • A progress bar showing your current token usage
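As an illustration only, the sketch below maps a usage percentage to the three states above and formats the hover details, including a simple text progress bar. The 80% threshold comes from the description above; the function names and bar format are assumptions.

```python
# Illustrative computation of a token-usage indicator; not Goose's actual code.

def usage_state(used: int, limit: int) -> str:
    """Map token usage to the three indicator colors described above."""
    pct = used / limit
    if pct >= 1.0:
        return "red"     # error state: context limit reached
    if pct >= 0.8:
        return "orange"  # warning state: approaching the limit
    return "green"       # normal usage

def usage_details(used: int, limit: int, width: int = 20) -> str:
    """Format the hover details: tokens used, percentage, total, and a progress bar."""
    pct = used / limit
    filled = min(width, round(pct * width))
    bar = "#" * filled + "-" * (width - filled)
    return f"{used:,} / {limit:,} tokens ({pct:.0%}) [{bar}]"

print(usage_state(105_000, 128_000))    # orange
print(usage_details(105_000, 128_000))  # 105,000 / 128,000 tokens (82%) [################----]
```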