
Supported LLM Providers

Goose is compatible with a wide range of LLM providers, allowing you to choose and integrate your preferred model.

Model Selection

Goose relies heavily on tool calling capabilities and currently works best with Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o (2024-11-20) models. The Berkeley Function-Calling Leaderboard can be a good guide for selecting models.

Available Providers

| Provider | Description | Parameters |
|---|---|---|
| Amazon Bedrock | Offers a variety of foundation models, including Claude, Jurassic-2, and others. AWS environment variables must be set in advance, not configured through goose configure. | AWS_PROFILE, or AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, ... |
| Anthropic | Offers Claude, an advanced AI model for natural language tasks. | ANTHROPIC_API_KEY, ANTHROPIC_HOST (optional) |
| Azure OpenAI | Access Azure-hosted OpenAI models, including GPT-4 and GPT-3.5. Supports both API key and Azure credential chain authentication. | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT_NAME, AZURE_OPENAI_API_KEY (optional) |
| Databricks | Unified data analytics and AI platform for building and deploying models. | DATABRICKS_HOST, DATABRICKS_TOKEN |
| Gemini | Advanced LLMs by Google with multimodal capabilities (text, images). | GOOGLE_API_KEY |
| GCP Vertex AI | Google Cloud's Vertex AI platform, supporting Gemini and Claude models. Credentials must be configured in advance; follow the instructions at https://cloud.google.com/vertex-ai/docs/authentication. | GCP_PROJECT_ID, GCP_LOCATION, and optional GCP_MAX_RETRIES (6), GCP_INITIAL_RETRY_INTERVAL_MS (5000), GCP_BACKOFF_MULTIPLIER (2.0), GCP_MAX_RETRY_INTERVAL_MS (320_000) |
| GitHub Copilot | Access to GitHub Copilot's chat models, including gpt-4o, o1, o3-mini, and Claude models. Uses a device code authentication flow for secure access. | GitHub device code authentication flow (no API key needed) |
| Groq | High-performance inference hardware and tools for LLMs. | GROQ_API_KEY |
| Ollama | Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. Because this provider runs locally, you must first download and run a model. | OLLAMA_HOST |
| OpenAI | Provides gpt-4o, o1, and other advanced language models. Also supports OpenAI-compatible endpoints (e.g., self-hosted LLaMA, vLLM, KServe). o1-mini and o1-preview are not supported because Goose uses tool calling. | OPENAI_API_KEY, OPENAI_HOST (optional), OPENAI_ORGANIZATION (optional), OPENAI_PROJECT (optional), OPENAI_CUSTOM_HEADERS (optional) |
| OpenRouter | API gateway for unified access to various models, with features like rate-limiting management. | OPENROUTER_API_KEY |
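
For example, since Amazon Bedrock reads AWS credentials from your environment rather than through goose configure, a minimal setup might look like this (values below are placeholders):

export AWS_ACCESS_KEY_ID=your-access-key        # or set AWS_PROFILE=your-profile instead
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1
goose configure                                 # then select Amazon Bedrock as the provider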

Configure Provider

To configure your chosen provider or see available options, run goose configure in the CLI or visit the Settings page in the Goose Desktop.

To update your LLM provider and API key:

  1. Click the gear on the Goose Desktop toolbar
  2. Click Advanced Settings
  3. Under Models, click Configure provider
  4. Click Configure on the LLM provider you want to update
  5. Add any additional configuration (API key, host, etc.), then press Submit

To change the provider model:

  1. Click the gear on the Goose Desktop toolbar
  2. Click Advanced Settings
  3. Under Models, click Switch models
  4. Select a provider from the drop-down menu
  5. Select a model from the drop-down menu
  6. Press Select Model

You can explore more models by selecting a provider name under Browse by Provider. A link will appear, directing you to the provider's website. Once you've found the model you want, return to the model selection step and paste the model name.

Using Custom OpenAI Endpoints

Goose supports using custom OpenAI-compatible endpoints, which is particularly useful for:

  • Self-hosted LLMs (e.g., LLaMA, Mistral) using vLLM or KServe
  • Private OpenAI-compatible API servers
  • Enterprise deployments requiring data governance and security compliance
  • OpenAI API proxies or gateways

Configuration Parameters

| Parameter | Required | Description |
|---|---|---|
| OPENAI_API_KEY | Yes | Authentication key for the API |
| OPENAI_HOST | No | Custom endpoint URL (defaults to api.openai.com) |
| OPENAI_ORGANIZATION | No | Organization ID for usage tracking and governance |
| OPENAI_PROJECT | No | Project identifier for resource management |
| OPENAI_CUSTOM_HEADERS | No | Additional headers to include in requests. Can be set via environment variable, configuration file, or CLI, in the format HEADER_A=VALUE_A,HEADER_B=VALUE_B |
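
For example, if your requests must pass through a gateway that expects extra headers, you could set the variable like this (the header names below are made up; use whatever your gateway requires):

export OPENAI_CUSTOM_HEADERS="X-Api-Version=2024-06-01,X-Team=platform"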

Example Configurations

If you're running LLaMA or other models using vLLM with OpenAI compatibility:

OPENAI_HOST=https://your-vllm-endpoint.internal
OPENAI_API_KEY=your-internal-api-key
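
On the server side, a vLLM deployment exposing the OpenAI-compatible API might be started roughly like this (a sketch; exact flags depend on your vLLM version, and the model name is only an example):

python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --port 8000 \
  --api-key your-internal-api-key   # should match the OPENAI_API_KEY you give Goose

Point OPENAI_HOST at the resulting endpoint and Goose will talk to it the same way it talks to api.openai.com.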

Setup Instructions

  1. Click ... in the upper right corner
  2. Click Advanced Settings
  3. Next to Models, click the browse link
  4. Click the configure link in the upper right corner
  5. Press the + button next to OpenAI
  6. Fill in your configuration details:
    • API Key (required)
    • Host URL (for custom endpoints)
    • Organization ID (for usage tracking)
    • Project (for resource management)
  7. Press submit

Enterprise Deployment

For enterprise deployments, you can pre-configure these values using environment variables or configuration files to ensure consistent governance across your organization.
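
For example, you might ship the governance-related values in a managed shell profile so every workstation picks them up automatically (paths and values below are placeholders):

# e.g. /etc/profile.d/goose.sh, distributed via your device management tooling
export OPENAI_HOST=https://llm-gateway.example.internal
export OPENAI_ORGANIZATION=org-your-org-id
export OPENAI_PROJECT=proj-your-project-id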

Using Goose for Free

Goose is a free and open source AI agent that you can start using right away, but not all supported LLM providers offer a free tier.

Below, we outline a couple of free options and how to get started with them.

Limitations

These free options are a great way to get started with Goose and explore its capabilities. However, you may need to upgrade your LLM for better performance.

Google Gemini

Google Gemini provides a free tier. To start using the Gemini API with Goose, you need an API key from Google AI Studio.

To set up Google Gemini with Goose, update your LLM provider and API key in Goose Desktop:

  1. Click on the three dots in the top-right corner.
  2. Select Provider Settings from the menu.
  3. Choose Google Gemini as provider from the list.
  4. Click Edit, enter your API key, and click Set as Active.
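
Alternatively, from the CLI (a sketch; the interactive prompts may differ slightly between versions):

export GOOGLE_API_KEY=your-gemini-api-key
goose configure        # choose Configure Providers, then Google Gemini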

Local LLMs (Ollama)

Ollama runs LLMs locally, which requires a bit more setup before you can use it with Goose.

  1. Download Ollama.
  2. Run any model supporting tool-calling:
Limited Support for models without tool calling

Goose extensively uses tool calling, so models without it (e.g. DeepSeek-r1) can only do chat completion. If using models without tool calling, all Goose extensions must be disabled. As an alternative, you can use a custom DeepSeek-r1 model we've made specifically for Goose.

Example:

ollama run qwen2.5
  3. In a separate terminal window, configure with Goose:
goose configure
  4. Choose to Configure Providers
┌   goose-configure 

◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension

  5. Choose Ollama as the model provider
┌   goose-configure 

◇ What would you like to configure?
│ Configure Providers

◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter

  6. Enter the host where your model is running
Endpoint

For Ollama, if you don't provide a host, we set it to localhost:11434. When constructing the URL, we prepend http:// if the scheme is not http or https. If you're running Ollama on port 80 or 443, you'll have to set OLLAMA_HOST=http://host:{port}

┌   goose-configure 

◇ What would you like to configure?
│ Configure Providers

◇ Which model provider should we use?
│ Ollama

◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434

  7. Enter the model you have running
┌   goose-configure 

◇ What would you like to configure?
│ Configure Providers

◇ Which model provider should we use?
│ Ollama

◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434

◇ Enter a model from that provider:
│ qwen2.5

◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!

└ Configuration saved successfully
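
Once the configuration is saved, you can start a session to confirm that Goose can reach your local model (assuming the standard goose CLI command):

goose session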

DeepSeek-R1

Ollama provides open source LLMs, such as DeepSeek-r1, that you can install and run locally. Note that the native DeepSeek-r1 model doesn't support tool calling; however, we have a custom model you can use with Goose.

warning

Note that this is a 70B parameter model and requires a powerful device to run smoothly.

  1. Download and install Ollama from ollama.com.
  2. In a terminal window, run the following command to install the custom DeepSeek-r1 model:
ollama run michaelneale/deepseek-r1-goose
  3. Click ... in the top-right corner.
  4. Navigate to Advanced Settings -> Browse Models and select Ollama from the list.
  5. Enter michaelneale/deepseek-r1-goose for the model name.
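
If you prefer the CLI, the flow mirrors the Ollama setup above (a sketch):

ollama run michaelneale/deepseek-r1-goose      # keep this running in one terminal
goose configure                                # Configure Providers -> Ollama, model: michaelneale/deepseek-r1-goose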

Azure OpenAI Credential Chain

Goose supports two authentication methods for Azure OpenAI:

  1. API Key Authentication - Uses the AZURE_OPENAI_API_KEY for direct authentication
  2. Azure Credential Chain - Uses Azure CLI credentials automatically without requiring an API key

To use the Azure Credential Chain:

  • Ensure you're logged in with az login
  • Have appropriate Azure role assignments for the Azure OpenAI service
  • Configure with goose configure and select Azure OpenAI, leaving the API key field empty

This method simplifies authentication and enhances security for enterprise environments.
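
For example, a credential chain setup might look like this (placeholder values; exact prompts may vary by version):

az login                                            # authenticate with your Azure account
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
export AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment
goose configure                                     # select Azure OpenAI and leave the API key empty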


If you have any questions or need help with a specific provider, feel free to reach out to us on Discord or on the Goose repo.