Environment Variables
Goose supports a number of environment variables that let you customize its behavior. This guide lists the available variables, grouped by functionality.
Model Configuration
These variables control the language models and their behavior.
Basic Provider Configuration
These are the minimum required variables to get started with Goose.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_PROVIDER | Specifies the LLM provider to use | See available providers | None (must be configured) |
GOOSE_MODEL | Specifies which model to use from the provider | Model name (e.g., "gpt-4", "claude-3.5-sonnet") | None (must be configured) |
GOOSE_TEMPERATURE | Sets the temperature for model responses | Float between 0.0 and 1.0 | Model-specific default |
Examples
# Basic model configuration
export GOOSE_PROVIDER="anthropic"
export GOOSE_MODEL="claude-3.5-sonnet"
export GOOSE_TEMPERATURE=0.7
Advanced Provider Configuration
These variables are needed when using custom endpoints, enterprise deployments, or specific provider implementations.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_PROVIDER__TYPE | The specific type/implementation of the provider | See available providers | Derived from GOOSE_PROVIDER |
GOOSE_PROVIDER__HOST | Custom API endpoint for the provider | URL (e.g., "https://api.openai.com") | Provider-specific default |
GOOSE_PROVIDER__API_KEY | Authentication key for the provider | API key string | None |
Examples
# Advanced provider configuration
export GOOSE_PROVIDER__TYPE="anthropic"
export GOOSE_PROVIDER__HOST="https://api.anthropic.com"
export GOOSE_PROVIDER__API_KEY="your-api-key-here"
Lead/Worker Model Configuration
These variables configure a lead/worker model pattern where a powerful lead model handles initial planning and complex reasoning, then switches to a faster/cheaper worker model for execution. The switch happens automatically based on your settings.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_LEAD_MODEL | Required to enable lead mode. Name of the lead model | Model name (e.g., "gpt-4o", "claude-3.5-sonnet") | None |
GOOSE_LEAD_PROVIDER | Provider for the lead model | See available providers | Falls back to GOOSE_PROVIDER |
GOOSE_LEAD_TURNS | Number of initial turns using the lead model before switching to the worker model | Integer | 3 |
GOOSE_LEAD_FAILURE_THRESHOLD | Consecutive failures before fallback to the lead model | Integer | 2 |
GOOSE_LEAD_FALLBACK_TURNS | Number of turns to use the lead model in fallback mode | Integer | 2 |
A turn is one complete prompt-response interaction. Here's how it works with the default settings:
- Use the lead model for the first 3 turns
- Use the worker model starting on the 4th turn
- Fall back to the lead model if the worker model struggles for 2 consecutive turns
- Use the lead model for 2 turns and then switch back to the worker model
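The turn schedule above can be sketched in shell (illustrative only, not Goose's actual implementation):

```shell
# Sketch of the default schedule: with GOOSE_LEAD_TURNS unset (default 3),
# turns 1-3 use the lead model and later turns use the worker model.
LEAD_TURNS="${GOOSE_LEAD_TURNS:-3}"
for turn in 1 2 3 4 5; do
  if [ "$turn" -le "$LEAD_TURNS" ]; then model=lead; else model=worker; fi
  echo "turn $turn: $model model"
done
```

Fallback on worker failures follows the same idea, driven by GOOSE_LEAD_FAILURE_THRESHOLD and GOOSE_LEAD_FALLBACK_TURNS.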
The lead model and worker model names are displayed at the start of the Goose CLI session. If you don't export a GOOSE_MODEL for your session, the worker model defaults to the GOOSE_MODEL in your configuration file.
Examples
# Basic lead/worker setup
export GOOSE_LEAD_MODEL="gpt-4o"
# Advanced lead/worker configuration
export GOOSE_LEAD_MODEL="claude-opus-4"
export GOOSE_LEAD_PROVIDER="anthropic"
export GOOSE_LEAD_TURNS=5
export GOOSE_LEAD_FAILURE_THRESHOLD=3
export GOOSE_LEAD_FALLBACK_TURNS=2
Planning Mode Configuration
These variables control Goose's planning functionality.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_PLANNER_PROVIDER | Specifies which provider to use for planning mode | See available providers | Falls back to GOOSE_PROVIDER |
GOOSE_PLANNER_MODEL | Specifies which model to use for planning mode | Model name (e.g., "gpt-4", "claude-3.5-sonnet") | Falls back to GOOSE_MODEL |
Examples
# Planning mode with different model
export GOOSE_PLANNER_PROVIDER="openai"
export GOOSE_PLANNER_MODEL="gpt-4"
Session Management
These variables control how Goose manages conversation sessions and context.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_CONTEXT_STRATEGY | Controls how Goose handles context limit exceeded situations | "summarize", "truncate", "clear", "prompt" | "prompt" (interactive), "summarize" (headless) |
GOOSE_MAX_TURNS | Maximum number of turns allowed without user input | Integer (e.g., 10, 50, 100) | 1000 |
Examples
# Automatically summarize when context limit is reached
export GOOSE_CONTEXT_STRATEGY=summarize
# Always prompt user to choose (default for interactive mode)
export GOOSE_CONTEXT_STRATEGY=prompt
# Set a low limit for step-by-step control
export GOOSE_MAX_TURNS=5
# Set a moderate limit for controlled automation
export GOOSE_MAX_TURNS=25
# Set a reasonable limit for production
export GOOSE_MAX_TURNS=100
Context Limit Configuration
These variables allow you to override the default context window size (token limit) for your models. This is particularly useful when using LiteLLM proxies or custom models that don't match Goose's predefined model patterns.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_CONTEXT_LIMIT | Override context limit for the main model | Integer (number of tokens) | Model-specific default or 128,000 |
GOOSE_LEAD_CONTEXT_LIMIT | Override context limit for the lead model in lead/worker mode | Integer (number of tokens) | Falls back to GOOSE_CONTEXT_LIMIT or model default |
GOOSE_WORKER_CONTEXT_LIMIT | Override context limit for the worker model in lead/worker mode | Integer (number of tokens) | Falls back to GOOSE_CONTEXT_LIMIT or model default |
GOOSE_PLANNER_CONTEXT_LIMIT | Override context limit for the planner model | Integer (number of tokens) | Falls back to GOOSE_CONTEXT_LIMIT or model default |
Examples
# Set context limit for main model (useful for LiteLLM proxies)
export GOOSE_CONTEXT_LIMIT=200000
# Set different context limits for lead/worker models
export GOOSE_LEAD_CONTEXT_LIMIT=500000 # Large context for planning
export GOOSE_WORKER_CONTEXT_LIMIT=128000 # Smaller context for execution
# Set context limit for planner
export GOOSE_PLANNER_CONTEXT_LIMIT=1000000
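The documented fallback chain for these limits can be illustrated with standard shell parameter defaults (a sketch of the precedence, not code Goose runs):

```shell
# Planner limit resolution: GOOSE_PLANNER_CONTEXT_LIMIT, else
# GOOSE_CONTEXT_LIMIT, else the 128,000-token default.
export GOOSE_CONTEXT_LIMIT=200000
unset GOOSE_PLANNER_CONTEXT_LIMIT
planner_limit="${GOOSE_PLANNER_CONTEXT_LIMIT:-${GOOSE_CONTEXT_LIMIT:-128000}}"
echo "$planner_limit"   # 200000 here, since only GOOSE_CONTEXT_LIMIT is set
```

The lead and worker limits resolve the same way, substituting GOOSE_LEAD_CONTEXT_LIMIT or GOOSE_WORKER_CONTEXT_LIMIT at the front of the chain.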
Tool Configuration
These variables control how Goose handles tool permissions and their execution.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_MODE | Controls how Goose handles tool execution | "auto", "approve", "chat", "smart_approve" | "smart_approve" |
GOOSE_TOOLSHIM | Enables/disables tool call interpretation | "1", "true" (case insensitive) to enable | false |
GOOSE_TOOLSHIM_OLLAMA_MODEL | Specifies the model for tool call interpretation | Model name (e.g. llama3.2, qwen2.5) | System default |
GOOSE_CLI_MIN_PRIORITY | Controls verbosity of tool output | Float between 0.0 and 1.0 | 0.0 |
GOOSE_CLI_TOOL_PARAMS_TRUNCATION_MAX_LENGTH | Maximum length for tool parameter values before truncation in CLI output (not in debug mode) | Integer | 40 |
GOOSE_CLI_SHOW_COST | Toggles display of model cost estimates in CLI output | "true", "1" (case insensitive) to enable | false |
Examples
# Enable tool interpretation
export GOOSE_TOOLSHIM=true
export GOOSE_TOOLSHIM_OLLAMA_MODEL=llama3.2
export GOOSE_MODE="auto"
export GOOSE_CLI_MIN_PRIORITY=0.2 # Show only medium and high importance output
export GOOSE_CLI_TOOL_PARAMS_TRUNCATION_MAX_LENGTH=100 # Show up to 100 characters for tool parameters in CLI output
# Enable model cost display in CLI
export GOOSE_CLI_SHOW_COST=true
Enhanced Code Editing
These variables configure AI-powered code editing for the Developer extension's str_replace tool. All three variables must be set and non-empty for the feature to activate.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_EDITOR_API_KEY | API key for the code editing model | API key string | None |
GOOSE_EDITOR_HOST | API endpoint for the code editing model | URL (e.g., "https://api.openai.com/v1") | None |
GOOSE_EDITOR_MODEL | Model to use for code editing | Model name (e.g., "gpt-4o", "claude-3-5-sonnet") | None |
Examples
This feature works with any OpenAI-compatible API endpoint, for example:
# OpenAI configuration
export GOOSE_EDITOR_API_KEY="sk-..."
export GOOSE_EDITOR_HOST="https://api.openai.com/v1"
export GOOSE_EDITOR_MODEL="gpt-4o"
# Anthropic configuration (via OpenAI-compatible proxy)
export GOOSE_EDITOR_API_KEY="sk-ant-..."
export GOOSE_EDITOR_HOST="https://api.anthropic.com/v1"
export GOOSE_EDITOR_MODEL="claude-3-5-sonnet-20241022"
# Local model configuration
export GOOSE_EDITOR_API_KEY="your-key"
export GOOSE_EDITOR_HOST="http://localhost:8000/v1"
export GOOSE_EDITOR_MODEL="your-model"
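The all-three-must-be-set rule can be checked with a quick shell sketch (illustrative only, not Goose's actual activation logic):

```shell
# The feature activates only when all three variables are set and non-empty.
export GOOSE_EDITOR_API_KEY="sk-example"
export GOOSE_EDITOR_HOST="https://api.openai.com/v1"
export GOOSE_EDITOR_MODEL=""   # empty, so the feature stays off
if [ -n "$GOOSE_EDITOR_API_KEY" ] && [ -n "$GOOSE_EDITOR_HOST" ] && [ -n "$GOOSE_EDITOR_MODEL" ]; then
  status="active"
else
  status="inactive"
fi
echo "enhanced editing: $status"
```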
Tool Selection Strategy
These variables configure the tool selection strategy.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_ROUTER_TOOL_SELECTION_STRATEGY | The tool selection strategy to use | "default", "vector", "llm" | "default" |
GOOSE_EMBEDDING_MODEL_PROVIDER | The provider to use for generating embeddings for the "vector" strategy | See available providers (must support embeddings) | "openai" |
GOOSE_EMBEDDING_MODEL | The model to use for generating embeddings for the "vector" strategy | Model name (provider-specific) | "text-embedding-3-small" |
Examples
# Use vector-based tool selection with custom settings
export GOOSE_ROUTER_TOOL_SELECTION_STRATEGY=vector
export GOOSE_EMBEDDING_MODEL_PROVIDER=ollama
export GOOSE_EMBEDDING_MODEL=nomic-embed-text
# Or use LLM-based selection
export GOOSE_ROUTER_TOOL_SELECTION_STRATEGY=llm
Embedding Provider Support
The default embedding provider is OpenAI. If using a different provider:
- Ensure the provider supports embeddings
- Specify an appropriate embedding model for that provider
- Ensure the provider is properly configured with necessary credentials
Security Configuration
These variables control security-related features.
Variable | Purpose | Values | Default |
---|---|---|---|
GOOSE_ALLOWLIST | Controls which extensions can be loaded | URL for allowed extensions list | Unset |
GOOSE_DISABLE_KEYRING | Disables the system keyring for secret storage | Set to any value (e.g., "1", "true", "yes") to disable. The actual value doesn't matter, only whether the variable is set. | Unset (keyring enabled) |
When the keyring is disabled, secrets are stored here:
- macOS/Linux: ~/.config/goose/secrets.yaml
- Windows: %APPDATA%\Block\goose\config\secrets.yaml
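For example, on macOS/Linux you can disable the keyring and note where secrets will be written (the path below is taken from the list above, not computed by Goose here):

```shell
# Any value disables the keyring; only the variable being set matters.
export GOOSE_DISABLE_KEYRING=1
secrets_file="$HOME/.config/goose/secrets.yaml"
echo "secrets will be stored in: $secrets_file"
```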
Langfuse Integration
These variables configure the Langfuse integration for observability.
Variable | Purpose | Values | Default |
---|---|---|---|
LANGFUSE_PUBLIC_KEY | Public key for Langfuse integration | String | None |
LANGFUSE_SECRET_KEY | Secret key for Langfuse integration | String | None |
LANGFUSE_URL | Custom URL for Langfuse service | URL String | Default Langfuse URL |
LANGFUSE_INIT_PROJECT_PUBLIC_KEY | Alternative public key for Langfuse | String | None |
LANGFUSE_INIT_PROJECT_SECRET_KEY | Alternative secret key for Langfuse | String | None |
Notes
- Environment variables take precedence over configuration files.
- For security-sensitive variables (like API keys), consider using the system keyring instead of environment variables.
- Some variables may require restarting Goose to take effect.
- When using planning mode, if planner-specific variables are not set, Goose falls back to the main model configuration.