# Support `reasoning_effort` in LLM Config
## Summary
Discovered that Trailblaze only exposes `temperature` as a model-level inference parameter, but OpenAI's reasoning models (o-series, gpt-5+) use `reasoning_effort` to control how much internal reasoning the model performs. Koog 0.7.2 already has full support for this; we just aren't wiring it through.
## What We Learned
- `reasoning_effort` is the de facto standard name. OpenAI coined it; Google adopted the same name for Gemini 2.5. Values: `none`, `minimal`, `low`, `medium`, `high`.
- Anthropic is the outlier: they use `thinking.budget_tokens` (a token count, not an enum). Different enough that it likely needs its own field later.
- Koog 0.7.2 already has a `ReasoningEffort` enum and `OpenAIChatParams.reasoningEffort`. The enum maps directly to OpenAI's serialized values (`"none"`, `"minimal"`, `"low"`, `"medium"`, `"high"`).
- Trailblaze currently constructs plain `LLMParams` at `TrailblazeKoogLlmClientHelper.kt:300`, which only carries `temperature`. It would need to construct `OpenAIChatParams` instead to pass `reasoningEffort` through.
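To make the enum mapping concrete, here is a minimal self-contained sketch. Note the `ReasoningEffort` below is a local stand-in that mirrors the shape of Koog's enum (the real one lives in `ai.koog.prompt.executor.clients.openai.base.models`); the `fromStringOrNull` helper is an assumption about what the resolver would need, not an existing Koog API.

```kotlin
// Stand-in mirroring Koog's ReasoningEffort enum: each value's serialized
// form matches the string OpenAI accepts on the wire.
enum class ReasoningEffort(val serialized: String) {
    NONE("none"), MINIMAL("minimal"), LOW("low"), MEDIUM("medium"), HIGH("high");

    companion object {
        // Hypothetical helper: parse a YAML string like "medium" into the
        // enum, returning null for unknown values so the caller can report
        // a config error.
        fun fromStringOrNull(value: String): ReasoningEffort? =
            entries.firstOrNull { it.serialized == value.lowercase() }
    }
}
```

With this shape, `ReasoningEffort.fromStringOrNull("medium")` yields `MEDIUM`, and an unrecognized value like `"turbo"` yields `null` rather than throwing mid-resolve.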
## What Needs to Change
The plumbing already exists in Koog. Trailblaze needs to thread it through the config → resolve → execute pipeline:
- `LlmModelConfigEntry`: add a `reasoning_effort: String?` field (deserialized from YAML)
- `TrailblazeLlmModel`: add a `defaultReasoningEffort: String?` field
- `LlmConfigResolver`: map the config entry field to the resolved model
- `LlmConfigMerger.mergeModelEntry()`: add `reasoning_effort` to the field-level merge (same pattern as `temperature`)
- `TrailblazeKoogLlmClientHelper`: construct `OpenAIChatParams(reasoningEffort = ...)` instead of plain `LLMParams` when `reasoning_effort` is set
- Docs: update the `llm_configuration.md` schema reference
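The last code change above can be sketched as follows. The types here are simplified stand-ins for Koog's `LLMParams` and `OpenAIChatParams` (field names and constructors are illustrative, not Koog's exact signatures), and `buildParams` is a hypothetical helper showing the selection logic, not an existing function in `TrailblazeKoogLlmClientHelper`.

```kotlin
// Simplified stand-ins for Koog's parameter types, for illustration only.
open class LLMParams(val temperature: Double? = null)

class OpenAIChatParams(
    temperature: Double? = null,
    val reasoningEffort: String? = null,
) : LLMParams(temperature)

// Sketch of the decision the helper would make: only switch to
// OpenAIChatParams when the model actually configures reasoning_effort,
// so non-reasoning models keep today's plain LLMParams behavior.
fun buildParams(temperature: Double?, reasoningEffort: String?): LLMParams =
    if (reasoningEffort != null) {
        OpenAIChatParams(temperature = temperature, reasoningEffort = reasoningEffort)
    } else {
        LLMParams(temperature = temperature)
    }
```

Keeping plain `LLMParams` as the default path means the change is additive: existing configs without `reasoning_effort` behave exactly as before.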
## User-facing YAML
```yaml
llm:
  providers:
    openai:
      models:
        - id: gpt-5
          reasoning_effort: medium  # merges with built-in cost/context/etc.
```
The existing deep merge behavior means users can override just `reasoning_effort` on a model without re-specifying `cost`, `context_length`, or other fields.
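To illustrate the merge semantics (the built-in field values shown in comments are placeholders, not Trailblaze's actual defaults):

```yaml
# Built-in entry for gpt-5 already carries fields like cost and
# context_length (placeholder names from this doc, values omitted).
# A user config that supplies only the new field:
llm:
  providers:
    openai:
      models:
        - id: gpt-5
          reasoning_effort: medium
# After the field-level deep merge, the effective model entry keeps the
# built-in cost/context_length values and gains reasoning_effort: medium.
```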
## Open Questions
- Should `reasoning_effort` also be settable at the `defaults:` level (applying to all models)?
- How to handle Anthropic's `thinking.budget_tokens`: a separate field, or generalize into a provider-agnostic `reasoning:` block?
- Should we validate that `reasoning_effort` is only set on models/providers that support it, or pass it through and let the API error?
## Key Files
- `opensource/trailblaze-models/src/commonMain/kotlin/xyz/block/trailblaze/llm/config/LlmModelConfigEntry.kt`
- `opensource/trailblaze-models/src/commonMain/kotlin/xyz/block/trailblaze/llm/TrailblazeLlmModel.kt`
- `opensource/trailblaze-models/src/commonMain/kotlin/xyz/block/trailblaze/llm/config/LlmConfigMerger.kt`
- `opensource/trailblaze-models/src/commonMain/kotlin/xyz/block/trailblaze/llm/config/LlmConfigResolver.kt`
- `opensource/trailblaze-agent/src/main/java/xyz/block/trailblaze/agent/TrailblazeKoogLlmClientHelper.kt`
- Koog source: `ai.koog.prompt.executor.clients.openai.OpenAIParams` (`OpenAIChatParams` class)
- Koog source: `ai.koog.prompt.executor.clients.openai.base.models.ReasoningEffort` (enum)