Tool Selection Strategy

Preview Feature

The Tool Selection Strategy is currently in preview. The Vector selection strategy is limited to Claude models served on Databricks.

When you enable an extension, you gain access to all of its tools. For example, the Google Drive extension provides tools for reading documents, updating permissions, managing comments, and more. By default, Goose loads all tools into context when interacting with the LLM.

Enabling multiple extensions gives you access to a wider range of tools, but loading a lot of tools into context can be inefficient and confusing for the LLM. It's like having every tool in your workshop spread out on your bench when you only need one or two.

Choosing an intelligent tool selection strategy helps avoid this problem. Instead of loading all tools for every interaction, Goose loads only the tools needed for your current task. Both the vector and LLM-based strategies ensure that only the functionality you need is loaded into context, so you can keep more of your favorite extensions enabled. These strategies provide:

  • Reduced token consumption
  • Improved LLM performance
  • Better context management
  • More accurate and efficient tool selection

Tool Selection Strategies

| Strategy | Speed | Best For | Example Query |
| --- | --- | --- | --- |
| Default | Fastest | Few extensions, simple setups | Any query (loads all tools) |
| Vector | Fast | Keyword-based matching | "read pdf file" |
| LLM-based | Slower | Complex, ambiguous queries | "analyze document contents" |

Default Strategy

The default strategy loads every tool from your enabled extensions into context. This works well with only a few extensions enabled; beyond that, switch to the vector or LLM-based strategy for intelligent tool selection.

Best for:

  • Simple setups with few extensions
  • When you want all tools available at all times
  • Maximum tool availability without selection logic

Vector Strategy

The vector strategy embeds your query and the descriptions of the available tools, then selects the tools whose embeddings are most similar to the query. This makes matching fast and works best when your wording overlaps with tool names or descriptions.

Best for:

  • Situations where fast response times are critical
  • Queries with keywords that match tool names or descriptions

Example:

  • Prompt: "read pdf file"
  • Result: Quickly matches with PDF-related tools based on keyword similarity
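
To make the matching step concrete, here is a minimal sketch of similarity-based tool selection. It substitutes a toy bag-of-words vector for a real embedding model such as text-embedding-3-small, and the tool names and descriptions are hypothetical; Goose's actual implementation differs.

```python
from collections import Counter
import math

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words vector.
    # Enough to illustrate similarity ranking, not semantic matching.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tool catalog; names and descriptions are illustrative.
tools = {
    "pdf_read": "read and extract text from pdf file documents",
    "drive_share": "update sharing permissions on google drive files",
    "calendar_create": "create a new calendar event",
}

query = "read pdf file"
q = embed(query)
ranked = sorted(tools, key=lambda name: cosine(q, embed(tools[name])), reverse=True)
print(ranked[0])  # -> "pdf_read"
```

A real deployment would replace `embed` with calls to an embedding model and cache the tool vectors, so each query needs only one new embedding plus cheap similarity lookups.
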
Embedding Model

The default embedding model is text-embedding-3-small. You can change it using environment variables.

LLM-based Strategy

The LLM-based strategy uses the model's natural language understanding to analyze your query and the available tools semantically, selecting tools based on the full meaning of your request rather than keyword overlap.

Best for:

  • Complex or ambiguous queries that require understanding context
  • Cases where exact keyword matches might miss relevant tools
  • Situations where nuanced tool selection is important

Example:

  • Prompt: "help me analyze the contents of my document"
  • Result: Understands context and might suggest both PDF readers and content analysis tools
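
As an illustration only, the sketch below asks a chat model to pick tools by name from a catalog. The endpoint (an OpenAI-compatible API), the model name (gpt-4o-mini), and the tool list are assumptions made for the example, not Goose's actual mechanism.

```python
from openai import OpenAI

# Hypothetical tool catalog; names and descriptions are illustrative.
TOOLS = {
    "pdf_read": "Read and extract text from PDF files",
    "text_summarize": "Summarize a block of text",
    "drive_share": "Update sharing permissions on Google Drive files",
}

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def select_tools(query: str) -> list[str]:
    catalog = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    prompt = (
        "Select the tools needed for the request below. "
        "Reply with one tool name per line and nothing else.\n\n"
        f"Tools:\n{catalog}\n\nRequest: {query}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    lines = reply.choices[0].message.content.splitlines()
    # Keep only names that actually exist in the catalog.
    return [name for name in (l.strip("- ").strip() for l in lines) if name in TOOLS]

print(select_tools("help me analyze the contents of my document"))
# Plausible output: ['pdf_read', 'text_summarize']
```

Because the model reads the full request rather than matching keywords, it can surface both a reader and an analysis tool for a query that never mentions either by name.
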

Configuration

  1. Click the gear icon ⚙️ on the top toolbar
  2. Click Advanced settings
  3. Under Tool Selection Strategy, select your preferred strategy:
    • Default
    • Vector
    • LLM-based