Supported LLM Providers
Goose is compatible with a wide range of LLM providers, allowing you to choose and integrate your preferred model.
Goose relies heavily on tool calling capabilities and currently works best with Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o (2024-11-20). The Berkeley Function-Calling Leaderboard can be a good guide for selecting models.
Available Providers
Provider | Description | Parameters |
---|---|---|
Amazon Bedrock | Offers a variety of foundation models, including Claude, Jurassic-2, and others. AWS environment variables must be set in advance, not configured through goose configure. | AWS_PROFILE, or AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, ... |
Amazon SageMaker TGI | Run Text Generation Inference models through Amazon SageMaker endpoints. AWS credentials must be configured in advance. | SAGEMAKER_ENDPOINT_NAME, AWS_REGION (optional), AWS_PROFILE (optional) |
Anthropic | Offers Claude, an advanced AI model for natural language tasks. | ANTHROPIC_API_KEY, ANTHROPIC_HOST (optional) |
Azure OpenAI | Access Azure-hosted OpenAI models, including GPT-4 and GPT-3.5. Supports both API key and Azure credential chain authentication. | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT_NAME, AZURE_OPENAI_API_KEY (optional) |
Databricks | Unified data analytics and AI platform for building and deploying models. | DATABRICKS_HOST, DATABRICKS_TOKEN |
Docker Model Runner | Local models running in Docker Desktop or Docker CE with OpenAI-compatible API endpoints. Because this provider runs locally, you must first download a model. | OPENAI_HOST, OPENAI_BASE_PATH |
Gemini | Advanced LLMs by Google with multimodal capabilities (text, images). | GOOGLE_API_KEY |
GCP Vertex AI | Google Cloud's Vertex AI platform, supporting Gemini and Claude models. Credentials must be configured in advance. | GCP_PROJECT_ID, GCP_LOCATION, and optionally GCP_MAX_RATE_LIMIT_RETRIES (5), GCP_MAX_OVERLOADED_RETRIES (5), GCP_INITIAL_RETRY_INTERVAL_MS (5000), GCP_BACKOFF_MULTIPLIER (2.0), GCP_MAX_RETRY_INTERVAL_MS (320_000) |
GitHub Copilot | Access to GitHub Copilot's chat models, including gpt-4o, o1, o3-mini, and Claude models. Uses a device code authentication flow for secure access. | Uses the GitHub device code authentication flow (no API key needed) |
Groq | High-performance inference hardware and tools for LLMs. | GROQ_API_KEY |
Ollama | Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. Because this provider runs locally, you must first download and run a model. | OLLAMA_HOST |
Ramalama | Local model runner using native OCI container runtimes and CNCF tools, supporting models as OCI artifacts. The Ramalama API is compatible with Ollama's, so it can be used with the Goose Ollama provider. Supports Qwen, Llama, DeepSeek, and other open-source models. Because this provider runs locally, you must first download and run a model. | OLLAMA_HOST |
OpenAI | Provides gpt-4o, o1, and other advanced language models. Also supports OpenAI-compatible endpoints (e.g., self-hosted LLaMA, vLLM, KServe). o1-mini and o1-preview are not supported because Goose uses tool calling. | OPENAI_API_KEY, OPENAI_HOST (optional), OPENAI_ORGANIZATION (optional), OPENAI_PROJECT (optional), OPENAI_CUSTOM_HEADERS (optional) |
OpenRouter | API gateway for unified access to various models with features like rate-limiting management. | OPENROUTER_API_KEY |
Snowflake | Access the latest models using Snowflake Cortex services, including Claude models. Requires a Snowflake account and a programmatic access token (PAT). | SNOWFLAKE_HOST, SNOWFLAKE_TOKEN |
Venice AI | Provides access to open-source models like Llama, Mistral, and Qwen while prioritizing user privacy. Requires an account and an API key. | VENICE_API_KEY, VENICE_HOST (optional), VENICE_BASE_PATH (optional), VENICE_MODELS_PATH (optional) |
xAI | Access to xAI's Grok models, including grok-3, grok-3-mini, and grok-3-fast, with a 131,072-token context window. | XAI_API_KEY, XAI_HOST (optional) |
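Most cloud providers are configured through goose configure, but some, like Amazon Bedrock, read credentials from the environment. A minimal sketch of setting those up in your shell before launching Goose (all values are placeholders):
export AWS_ACCESS_KEY_ID=AKIA...        # placeholder access key
export AWS_SECRET_ACCESS_KEY=...        # placeholder secret key
export AWS_REGION=us-east-1             # region hosting your Bedrock models
goose session                           # Goose picks the credentials up from the environment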
CLI Providers
Goose also supports special "pass-through" providers that work with existing CLI tools, allowing you to use your subscriptions instead of paying per token:
Provider | Description | Requirements |
---|---|---|
Claude Code (claude-code ) | Uses Anthropic's Claude CLI tool with your Claude Code subscription. Provides access to Claude with 200K context limit. | Claude CLI installed and authenticated, active Claude Code subscription |
Gemini CLI (gemini-cli ) | Uses Google's Gemini CLI tool with your Google AI subscription. Provides access to Gemini with 1M context limit. | Gemini CLI installed and authenticated |
CLI providers are cost-effective alternatives that use your existing subscriptions. They work differently from API providers as they execute CLI commands and integrate with the tools' native capabilities. See the CLI Providers guide for detailed setup instructions.
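For example, assuming the Claude CLI is already installed and authenticated, you could point a single session at the pass-through provider by its id from the table above (GOOSE_PROVIDER is an environment override supported by the Goose CLI):
GOOSE_PROVIDER=claude-code goose session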
Configure Provider
To configure your chosen provider or see available options, run goose configure in the CLI or visit the Settings page in Goose Desktop.
- Goose Desktop
- Goose CLI
To update your LLM provider and API key:
- Click the button in the top-left to open the sidebar
- Click the Settings button on the sidebar
- Click the Models tab
- Click Configure Providers
- Click Configure on the LLM provider to update
- Add additional configurations (API key, host, etc.) then press submit
To change the provider model:
- Click the button in the top-left to open the sidebar
- Click the Settings button on the sidebar
- Click the Models tab
- Click Switch models
- Select a provider from the drop-down menu
- Select a model from the drop-down menu
- Press Select Model

You can explore more models by selecting a provider name under Browse by Provider. A link will appear, directing you to the provider's website. Once you've found the model you want, return to step 6 and paste the model name.
- Run the following command:
goose configure
- Select Configure Providers from the menu and press Enter.
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
- Choose a model provider and press Enter.
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ● Anthropic (Claude and other models from Anthropic)
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ○ Ollama
│ ○ OpenAI
│ ○ OpenRouter
└
- Enter your API key (and any other configuration details) when prompted.
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Anthropic
│
◆ Provider Anthropic requires ANTHROPIC_API_KEY, please enter a value
│
└
- Enter your desired ANTHROPIC_HOST, or accept the default by pressing the Enter key.
◇ Enter new value for ANTHROPIC_HOST
│ https://api.anthropic.com (default)
- Enter the model you want to use, or accept the default by pressing the Enter key.
│
◇ Model fetch complete
│
◇ Enter a model from that provider:
│ claude-3-5-sonnet-latest (default)
│
◓ Checking your configuration...
└ Configuration saved successfully
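To verify what was saved, you can ask Goose where its configuration lives and inspect it; a sketch, assuming a recent CLI build and a Linux/macOS config path:
goose info                              # prints version and config/session/log locations
cat ~/.config/goose/config.yaml         # shows the saved provider and model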
Using Custom OpenAI Endpoints
Goose supports using custom OpenAI-compatible endpoints, which is particularly useful for:
- Self-hosted LLMs (e.g., LLaMA, Mistral) using vLLM or KServe
- Private OpenAI-compatible API servers
- Enterprise deployments requiring data governance and security compliance
- OpenAI API proxies or gateways
Configuration Parameters
Parameter | Required | Description |
---|---|---|
OPENAI_API_KEY | Yes | Authentication key for the API |
OPENAI_HOST | No | Custom endpoint URL (defaults to api.openai.com) |
OPENAI_ORGANIZATION | No | Organization ID for usage tracking and governance |
OPENAI_PROJECT | No | Project identifier for resource management |
OPENAI_CUSTOM_HEADERS | No | Additional headers to include in the request. Can be set via environment variable, configuration file, or CLI, in the format HEADER_A=VALUE_A,HEADER_B=VALUE_B. |
Example Configurations
- vLLM Self-Hosted
- KServe Deployment
- Enterprise OpenAI
- Custom Headers
If you're running LLaMA or other models using vLLM with OpenAI compatibility:
OPENAI_HOST=https://your-vllm-endpoint.internal
OPENAI_API_KEY=your-internal-api-key
For models deployed on Kubernetes using KServe:
OPENAI_HOST=https://kserve-gateway.your-cluster
OPENAI_API_KEY=your-kserve-api-key
OPENAI_ORGANIZATION=your-org-id
OPENAI_PROJECT=ml-serving
For enterprise OpenAI deployments with governance:
OPENAI_API_KEY=your-api-key
OPENAI_ORGANIZATION=org-id123
OPENAI_PROJECT=compliance-approved
For OpenAI-compatible endpoints that require custom headers:
OPENAI_API_KEY=your-api-key
OPENAI_ORGANIZATION=org-id123
OPENAI_PROJECT=compliance-approved
OPENAI_CUSTOM_HEADERS="X-Header-A=abc,X-Header-B=def"
Setup Instructions
- Goose Desktop
- Goose CLI
- Click the button in the top-left to open the sidebar
- Click the Settings button on the sidebar
- Next to Models, click the browse link
- Click the configure link in the upper right corner
- Press the + button next to OpenAI
- Fill in your configuration details:
  - API Key (required)
  - Host URL (for custom endpoints)
  - Organization ID (for usage tracking)
  - Project (for resource management)
- Press submit
- Run goose configure
- Select Configure Providers
- Choose OpenAI as the provider
- Enter your configuration when prompted:
  - API key
  - Host URL (if using custom endpoint)
  - Organization ID (if using organization tracking)
  - Project identifier (if using project management)
For enterprise deployments, you can pre-configure these values using environment variables or configuration files to ensure consistent governance across your organization.
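A minimal sketch of such pre-configuration via a shared shell profile (the file path is an illustrative assumption; the values mirror the examples above):
# /etc/profile.d/goose-openai.sh
export OPENAI_HOST=https://openai-gateway.example.internal
export OPENAI_ORGANIZATION=org-id123
export OPENAI_PROJECT=compliance-approved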
Using Goose for Free
Goose is a free and open source AI agent that you can start using right away, but not all supported LLM providers offer a free tier.
Below, we outline a couple of free options and how to get started with them.
These free options are a great way to get started with Goose and explore its capabilities. However, you may need to upgrade your LLM for better performance.
Groq
Groq provides free access to open source models with high-speed inference. To use Groq with Goose, you need an API key from Groq Console.
Groq offers several open source models that support tool calling:
- moonshotai/kimi-k2-instruct - Mixture-of-Experts model with 1 trillion parameters, optimized for agentic intelligence and tool use
- qwen/qwen3-32b - 32.8 billion parameter model with advanced reasoning and multilingual capabilities
- gemma2-9b-it - Google's Gemma 2 model with instruction tuning
- llama-3.3-70b-versatile - Meta's Llama 3.3 model for versatile applications
To set up Groq with Goose, follow these steps:
- Goose Desktop
- Goose CLI
To update your LLM provider and API key:
- Click the button in the top-left to open the sidebar.
- Click the Settings button on the sidebar.
- Click the Models tab.
- Click Configure Providers.
- Choose Groq as the provider from the list.
- Click Configure, enter your API key, and click Submit.
- Run:
goose configure
- Select Configure Providers from the menu.
- Follow the prompts to choose Groq as the provider.
- Enter your API key when prompted.
- Enter the Groq model of your choice (e.g., moonshotai/kimi-k2-instruct).
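To skip the interactive prompts entirely, you can also select the provider and model per invocation with environment overrides (GOOSE_PROVIDER and GOOSE_MODEL are assumptions based on the Goose CLI's environment support):
export GROQ_API_KEY=gsk_...             # placeholder key from the Groq Console
GOOSE_PROVIDER=groq GOOSE_MODEL=moonshotai/kimi-k2-instruct goose session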
Google Gemini
Google Gemini provides a free tier. To start using the Gemini API with Goose, you need an API key from Google AI Studio.
To set up Google Gemini with Goose, follow these steps:
- Goose Desktop
- Goose CLI
To update your LLM provider and API key:
- Click the button in the top-left to open the sidebar.
- Click the Settings button on the sidebar.
- Click the Models tab.
- Click Configure Providers.
- Choose Google Gemini as the provider from the list.
- Click Configure, enter your API key, and click Submit.
- Run:
goose configure
- Select Configure Providers from the menu.
- Follow the prompts to choose Google Gemini as the provider.
- Enter your API key when prompted.
- Enter the Gemini model of your choice.
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Google Gemini
│
◇ Provider Google Gemini requires GOOGLE_API_KEY, please enter a value
│▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
│
◇ Enter a model from that provider:
│ gemini-2.0-flash-exp
│
◇ Hello! You're all set and ready to go, feel free to ask me anything!
│
└ Configuration saved successfully
Local LLMs
Goose is a local AI agent, and by using a local LLM, you keep your data private, maintain full control over your environment, and can work entirely offline without relying on cloud access. However, please note that local LLMs require a bit more set up before you can use one of them with Goose.
Goose relies extensively on tool calling, so models without it can only do chat completion; if you use such a model, all Goose extensions must be disabled.
Here are some local providers we support:
- Ollama
- Docker Model Runner
- Ramalama
- DeepSeek-R1
- Other Models
- Download Ramalama.
- In a terminal, run any Ollama model that supports tool calling, or a GGUF-format Hugging Face model:
The --runtime-args="--jinja" flag is required for Ramalama to work with the Goose Ollama provider.
Example:
ramalama serve --runtime-args="--jinja" ollama://qwen2.5
- In a separate terminal window, configure with Goose:
goose configure
- Choose Configure Providers
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
- Choose Ollama as the model provider, since Ramalama is API compatible and can use the Goose Ollama provider
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
└
- Enter the host where your model is running
For the Ollama provider, if you don't provide a host, we set it to localhost:11434. When constructing the URL, we prepend http:// if the scheme is not http or https. Since Ramalama serves on port 8080 by default, set OLLAMA_HOST=http://0.0.0.0:8080
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://0.0.0.0:8080
└
- Enter the model you have running
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://0.0.0.0:8080
│
◇ Enter a model from that provider:
│ qwen2.5
│
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
│
└ Configuration saved successfully
If you notice that Goose is having trouble using extensions or is ignoring .goosehints, it is likely that the model's default context length of 2048 tokens is too low. Use ramalama serve with the --ctx-size (-c) option set to a higher value.
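For example, combining the required --runtime-args="--jinja" flag with a larger context window (8192 here is an arbitrary illustration; pick a value your hardware can handle):
ramalama serve --runtime-args="--jinja" --ctx-size 8192 ollama://qwen2.5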
The native DeepSeek-R1 model doesn't support tool calling; however, we have a custom model you can use with Goose. Note that this is a 70B model and requires a powerful device to run smoothly.
- Download Ollama.
- In a terminal window, run the following command to install the custom DeepSeek-r1 model:
ollama run michaelneale/deepseek-r1-goose
- In a separate terminal window, configure with Goose:
goose configure
- Choose Configure Providers
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
- Choose Ollama as the model provider
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
└
- Enter the host where your model is running
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
└
- Enter the installed model from above
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
│
◇ Enter a model from that provider:
│ michaelneale/deepseek-r1-goose
│
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
│
└ Configuration saved successfully
- Download Ollama.
- In a terminal, run any model that supports tool calling
Example:
ollama run qwen2.5
- In a separate terminal window, configure with Goose:
goose configure
- Choose Configure Providers
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
- Choose Ollama as the model provider
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
└
- Enter the host where your model is running
For Ollama, if you don't provide a host, we set it to localhost:11434. When constructing the URL, we prepend http:// if the scheme is not http or https. If you're running Ollama on a different server, you'll have to set OLLAMA_HOST=http://{host}:{port}.
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
└
- Enter the model you have running
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
│
◇ Enter a model from that provider:
│ qwen2.5
│
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
│
└ Configuration saved successfully
If you notice that Goose is having trouble using extensions or is ignoring .goosehints, it is likely that the model's default context length of 4096 tokens is too low. Set the OLLAMA_CONTEXT_LENGTH environment variable to a higher value.
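For example (8192 is an arbitrary illustration; larger contexts use more memory):
OLLAMA_CONTEXT_LENGTH=8192 ollama serve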
- Get Docker
- Enable Docker Model Runner
- Pull a model, for example from the Docker Hub AI namespace, Unsloth, or Hugging Face
Example:
docker model pull hf.co/unsloth/gemma-3n-e4b-it-gguf:q6_k
- Configure Goose to use Docker Model Runner via its OpenAI-compatible API endpoint:
goose configure
- Choose Configure Providers
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
- Choose OpenAI as the model provider:
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Amazon Bedrock
│ ○ Claude Code
│ ● OpenAI (GPT-4 and other OpenAI models, including OpenAI compatible ones)
│ ○ OpenRouter
- Configure the Docker Model Runner endpoint as the OPENAI_HOST:
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ OpenAI
│
◆ Provider OpenAI requires OPENAI_HOST, please enter a value
│ https://api.openai.com (default)
└
The default host-side port for Docker Model Runner is 12434, so the OPENAI_HOST value could be http://localhost:12434.
- Configure the base path:
◆ Provider OpenAI requires OPENAI_BASE_PATH, please enter a value
│ v1/chat/completions (default)
└
Docker Model Runner uses /engines/llama.cpp/v1/chat/completions for the base path.
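You can sanity-check the endpoint before configuring Goose by listing the models it exposes (a sketch, assuming the default port and the llama.cpp engine path above):
curl http://localhost:12434/engines/llama.cpp/v1/models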
- Finally, configure the model available in Docker Model Runner to be used by Goose (e.g. hf.co/unsloth/gemma-3n-e4b-it-gguf:q6_k, rather than the default gpt-4o):
│
◇ Enter a model from that provider:
│ hf.co/unsloth/gemma-3n-e4b-it-gguf:q6_k
│
◒ Checking your configuration...
└ Configuration saved successfully
Azure OpenAI Credential Chain
Goose supports two authentication methods for Azure OpenAI:
- API Key Authentication - Uses the AZURE_OPENAI_API_KEY for direct authentication
- Azure Credential Chain - Uses Azure CLI credentials automatically without requiring an API key
To use the Azure Credential Chain:
- Ensure you're logged in with az login
- Have appropriate Azure role assignments for the Azure OpenAI service
- Configure with goose configure and select Azure OpenAI, leaving the API key field empty
This method simplifies authentication and enhances security for enterprise environments.
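A minimal sketch of the credential chain flow from a shell (the endpoint and deployment values are illustrative assumptions):
az login                                                            # establish Azure CLI credentials
export AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com   # illustrative endpoint
export AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o                          # illustrative deployment
goose configure                                                     # select Azure OpenAI, leave the API key empty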
If you have any questions or need help with a specific provider, feel free to reach out to us on Discord or on the Goose repo.