Orchestrating 6 Subagents to Build a Collaborative API Playground for Kids

I built Postman meets Google Docs for 10-year-olds.
Cue record scratch.
Cue freeze frame.
Cue movie cliché.
You're probably wondering how I got here.

Whenever people talk about AI, they highlight the flashiest use cases like fully coded apps built by agents or cinematic video generation. Those things are certainly cool, but most days I'm just delegating mundane tasks to the bots.
Today, I didn't build an app. I didn't write a screenplay. I just got stuff done.
Here are 5 real, everyday tasks I gave to my AI agent, Goose, that saved me hours. None of them took more than one minute from prompt to result.

Over ten years ago, Docker came onto the scene and introduced developers en masse to the concept and practice of containers. These containers helped solve deployment and build-time problems, and in some cases, issues with development environments. They quickly became mainstream. The technology underlying containers included copy-on-write filesystems and lightweight, virtual-machine-like environments that helped isolate processes and simplify cleanup.
Dagger, the project and company founded by Docker’s creator Solomon Hykes, has furthered the reach of containers for developers.
One project that emerged from this work is Container Use, an MCP server that gives agents an interface for working in isolated containers and git branches. It supports clear lifecycles, easy rollbacks, and safer experimentation, without sacrificing the ergonomics developers expect from local agents.
Container Use brings containerized, git-branch-isolated development directly into your Goose workflow. While still early in its development, it's evolving quickly and already offers helpful tools for lightweight, branch-specific isolation when you need it.

Developers deserve to have fun. There was a time when the internet felt magical. I remember going to the library just to create a character on The Doll Palace. At home, I'd spend hours changing fonts with WordArt. But as I grew up, the industry did too. We've shifted away from marquees and glittery cursors. Grown-up me started using ones and zeros to build reliable systems for insurance, banking, and healthcare companies. There's pride in that, but it's harder to justify doing something just because it's fun.
That's why I tapped into my inner child and used Goose to build a UI that reacts to users' emotions.

Not every task needs a genius. And not every step should cost a fortune.
That's something we've learned while scaling Goose, our open source AI agent. The same model that's great at unpacking a planning request might totally fumble a basic shell command, or worse, burn through your token budget doing it.
So we asked ourselves: what if we could mix and match models in a single session?
Not just switching based on user commands, but building Goose with an actual system for routing tasks between different models, each playing to its strengths.
This is the gap the lead/worker model is designed to fill.
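To make the idea concrete, here's a minimal sketch of what lead/worker routing can look like. This is an illustration of the concept, not Goose's actual implementation; the model names, task kinds, and the call_model() helper are placeholders for whatever provider you use.

```python
# Sketch of lead/worker routing: an expensive "lead" model handles planning,
# while a cheaper "worker" model takes routine steps. Model names, task kinds,
# and call_model() are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Task:
    kind: str    # e.g. "plan", "shell", "summarize"
    prompt: str


# Hypothetical routing table: only work that needs deep reasoning goes to the lead.
ROUTES = {
    "plan": "lead-model-large",
    "shell": "worker-model-small",
    "summarize": "worker-model-small",
}


def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real LLM call; swap in your provider's client here.
    return f"[{model}] response to: {prompt}"


def route(task: Task) -> str:
    model = ROUTES.get(task.kind, "lead-model-large")  # default to the lead
    return call_model(model, task.prompt)


if __name__ == "__main__":
    print(route(Task("plan", "Outline a migration plan for the billing service")))
    print(route(Task("shell", "List the files changed in the last commit")))
```

The routing table itself isn't the point; the point is that the expensive model only sees the steps that actually need it.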

As Goose users, we have two main ways to provide persistent context to our AI assistant: the .goosehints file and the Memory Extension MCP server. Today, I'll share what's in my .goosehints file, why some of it should probably move to the Memory Extension, and how you can make that choice.

Detection engineering stands at the forefront of cybersecurity, yet it's often a tangled web of complexity. Writing detections the traditional way means painstaking manual work: understanding log formats and schemas, crafting intricate queries, modeling threats, and iteratively testing and refining each rule by hand. That process is slow and depends on specialized expertise, which can leave gaps in threat coverage and produce an overwhelming number of alerts. At Block, we face the relentless challenge of evolving threats and intricate systems. To stay ahead, we've embraced AI-driven solutions, notably Goose, Block's open-source AI agent, and Panther MCP, so that the broader organization can contribute high-quality detection rules grounded in their own areas of expertise. This post delves into how we're transforming complicated detection workflows into streamlined, AI-powered processes that are accessible to all stakeholders.
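To give a sense of the artifact these workflows produce, here's a minimal example in the Python-based detection format Panther uses: a module exposing rule(), plus optional helpers like title() and severity(). The event field names and logic here are hypothetical, not one of our production rules.

```python
# Sketch of a Panther-style Python detection. Panther evaluates rule() against
# each incoming log event; title() and severity() shape the resulting alert.
# The field names ("eventName", "mfaUsed", "sourceIp") are hypothetical.

def rule(event) -> bool:
    # Flag successful console logins that did not use MFA.
    if event.get("eventName") != "ConsoleLogin":
        return False
    return event.get("mfaUsed") is not True


def title(event) -> str:
    return f"Console login without MFA from {event.get('sourceIp', 'unknown IP')}"


def severity(event) -> str:
    return "HIGH"
```

Rules like this, small, contextual, and testable, are the kind of contribution Goose and Panther MCP make approachable for people outside the detection team.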

Goose is LLM-agnostic, meaning you can plug in the model of your choice. However, not every LLM is well suited to working with agents. Some are great at answering questions, but not at actually doing things. If you're considering which model to use with an agent, these 3 prompts can quickly give you a sense of the model's capabilities.

I'm perpetually drowning in open tabs. Yes, I do need Bluesky, ChatGPT, Claude, Goose, Cursor, Discord, Slack, Netflix, and Google Docs all open at the same time. I've learned that tab management isn't my only vice.
"Hi, my name is Rizel, and I'm a localhost ports hoarder. 👋🏿"

Goose has no hands, no eyes, and no spatial awareness, but it can drive a rover!
I came across a demo video from Deemkeen, where he used Goose to control a Makeblock mBot2 rover with natural language commands like "drive forward/backward," "beep," and "turn left/right," all powered by a Java-based MCP server and MQTT.
Inspired and excited to take it further, I taught the rover to spin, blink colorful lights, and help me take over the world!
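Each of those tricks ultimately travels over MQTT to the rover. Here's a small Python sketch of that leg of the pipeline; the broker host, topic name, and payload format are assumptions for illustration (the original demo drives this from a Java-based MCP server).

```python
# Sketch of the MQTT leg: once the agent interprets a natural language command,
# it becomes a small JSON message published to the rover's command topic.
# Broker host, topic, and payload schema are hypothetical.
import json

import paho.mqtt.publish as publish

BROKER_HOST = "localhost"          # assumed local MQTT broker
COMMAND_TOPIC = "mbot2/commands"   # hypothetical topic the rover subscribes to


def send_command(action: str, **params) -> None:
    payload = json.dumps({"action": action, **params})
    publish.single(COMMAND_TOPIC, payload, hostname=BROKER_HOST)


if __name__ == "__main__":
    send_command("forward", speed=50)
    send_command("spin", degrees=360)      # the new trick from this post
    send_command("led", color="rainbow")   # blink colorful lights
```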