goose Mobile Access and Native Terminal Support

We're excited to announce two new ways to interact with goose: a native iOS app for mobile access and native terminal integration. Both give you more flexibility in how and where you use your AI agent.

There is an emerging approach to MCP tool calling referred to as "sandbox mode" or "code mode". These ideas were initially presented by Cloudflare in their Code Mode: the better way to use MCP post and by Anthropic in their Code execution with MCP: Building more efficient agents post. Since the approach and its benefits are clearly laid out in those posts, I will only summarize them here.
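Roughly, the idea looks like the sketch below (everything in it is hypothetical and purely illustrative): rather than emitting one tool call at a time and pushing every intermediate result back through the model, the model writes a small program that calls the tools directly, and the agent executes that program in a sandbox.

```python
# Illustrative sketch only, not goose's implementation. In "code mode" the
# model writes a short script that calls tools as ordinary functions, and the
# agent runs that script in a sandbox, instead of round-tripping one JSON tool
# call per step through the model's context window. `call_tool` and the tool
# names below are hypothetical placeholders for a bridge to an MCP client.

def call_tool(name: str, **args):
    """Hypothetical bridge: forward a call to an MCP server, return its result."""
    raise NotImplementedError("wire this up to your MCP client")

def generated_script():
    """The kind of code the model might emit for the agent to execute."""
    # Intermediate data (the full ticket list) stays inside the sandbox;
    # only the short summary string goes back into the model's context.
    tickets = call_tool("tracker_search", query="status = open")
    stale = [t for t in tickets if t["days_since_update"] > 30]
    for ticket in stale:
        call_tool("tracker_comment", ticket_id=ticket["id"], body="Is this still relevant?")
    return f"Nudged {len(stale)} stale tickets"
```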

AI agents are often described as brilliant, overeager interns. They're desperate to help, but sometimes that enthusiasm leads to changes you never asked for. This is by design: the large language models powering agents are trained to be helpful. But in code, unchecked helpfulness can create chaos. Even with clear instructions and a meticulous plan, you might hear, "Let me just change this too…" A modification that's either unnecessary or, worse, never surfaced for review.
Sure, you can scour git diff to find and revert issues. But in a multi-step process touching dozens of files, untangling one small, unwanted change becomes a manual nightmare. I've spent hours combing through 70 files to undo a single "helpful" adjustment. Asking the agent to revert is often futile, as conversational memory isn't a snapshot of your codebase.

If you've been following MCP, you've probably heard about tools, which are functions that let AI assistants do things like read files, query databases, or call APIs. But there's another MCP feature that's less talked about and arguably more interesting: Sampling.
Sampling flips the script. Instead of the AI calling your tool, your tool calls the AI.
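To make that inversion concrete, here is a minimal sketch using the MCP Python SDK's FastMCP API (call signatures assumed from that SDK; the summarization tool itself is made up): a server-side tool that sends a sampling request back to the connected client, so the tool is the one asking the model for a completion.

```python
# Minimal sampling sketch, assuming the MCP Python SDK's FastMCP API
# (`mcp.server.fastmcp`); the summarize_file tool itself is hypothetical.
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("summarizer")

@mcp.tool()
async def summarize_file(path: str, ctx: Context) -> str:
    """Read a file, then ask the *client's* model to summarize it."""
    with open(path, encoding="utf-8") as f:
        text = f.read()

    # This is the role reversal: the tool sends a sampling request, and the
    # MCP client runs the completion with whatever model it controls.
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=f"Summarize this:\n\n{text}"),
            )
        ],
        max_tokens=300,
    )
    return result.content.text if result.content.type == "text" else ""

if __name__ == "__main__":
    mcp.run()
```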

You've heard the buzz: AI is reshaping our work. Maybe you've tinkered with ChatGPT, or your company is pushing you to "level up." But between the hype and the endless tutorials, a gnawing question remains: how do you move from theory to building something real?
The answer is practice. Not just following steps, but creating, problem-solving, and learning by doing.
That's why we're launching Advent of AI, a 17-day challenge series starting December 1st. Whether you're a beginner taking your first steps or an advanced developer exploring AI agents, this is for you. Each weekday, you'll get a new, hands-on project designed to transform you from an AI spectator into a confident builder.

Lately, I've seen more developers online starting to side-eye MCP. There was a tweet by Darren Shepherd that summed it up well:
"Most devs were introduced to MCP through coding agents (Cursor, VSCode) and most devs struggle to get value out of MCP in this use case... so they are rejecting MCP because they have a CLI and scripts available to them which are way better for them."
Fair. Most developers were introduced to MCPs through some chat-with-your-code experience, and sometimes it doesn't feel better than just opening your terminal and using the tools you know. But here's the thing...

Creating content is fun.
Promoting it (aka the most important part) drains my soul 😩
When I posted that on LinkedIn the other night, I realized I'm definitely not the only one who feels this way. You spend hours making this masterpiece, and then you have to remember to promote it across multiple platforms every single time.
It’s exhausting, so I decided to automate it.

"Migrate my app from x language to y language." You hit enter, watch your AI agent spin its wheels, and eventually every success story you've heard feels like a carefully orchestrated lie.
Most failures have less to do with the agent's capability and more to do with poor prompt and context strategy. Think about it: if someone dropped you into a complex, unfamiliar codebase and said "migrate this," you'd be lost without a plan. You'd need to explore the code, ask questions about its structure, and break the work into manageable steps.
Your AI agent needs the same approach: guided exploration, strategic questions, and decomposed tasks.

I code best when I sit criss-cross applesauce on my bed or couch with my laptop in my lap, a snack nearby, and no extra screens competing for my attention. Sometimes I keep the editor and browser side by side; other times, I make them full screen and switch between applications. I don't like using multiple monitors, and my developer environment is embarrassingly barebones.
This setup allows me to fall into a deep flow state, which is essential for staying productive as a software engineer. It gives me the focus to dig beneath the surface of a problem, trace its root cause, and think about how every fix or improvement affects both users and the system as a whole. While quick bursts of multitasking may work well for other fields, real productivity in engineering often comes from long stretches of uninterrupted thought.
Recently, my workflow changed.

My mom was doing her usual Sunday ritual: she had her pen, paper, calculator, and a pile of receipts. I’ve tried to get her to use every budgeting app out there, but she’s old school and always says the same thing:
“They’re all too complicated.”