If you had told me a year ago that one of the technologies that would push my wife and me further outdoors would be AI, I probably would have laughed.
At the time, we were still mostly indoor people. My hobbies were things like video games and smart home automation; my wife at least had a thriving collection of indoor plants, but the outdoors still felt more like a place we passed through than a place we lived in.
Then we moved away from the city and into a place with a yard.
It was great being outside. We spent so many hours there that first summer, but the backyard did not quite feel like ours yet. The moles had destroyed the grass, and to be honest, we did not have much reason to care. Then my dad suggested we grow peppers.
That sounded manageable. I love spicy food in a way that makes people question whether I eat anything else. We would probably run out of toilet paper in my house before we ran out of cayenne pepper. If we could have peppers right there in the backyard, that felt like an immediate win.
We started reading and realized peppers could grow just fine in bags or planters, even without going into the ground. Easy enough. We got the bags during the week and met up with my dad that weekend to pick out seedlings.
Then we got to Norfolk Feed & Seed and were not even remotely emotionally prepared for the buffet of fruit varieties we could apparently just grow. There were so many options, including more peppers than we expected and, more importantly, watermelon seedlings.
My dad chuckled as we loaded them into the cart.
A couple weeks later, while we were reading up on next steps for watermelon care, we finally pieced together why.
The shop sells seedlings in groups: buy one variety and you take home three plants. Which meant our three watermelons had quietly become nine. And if you know anything about watermelons - which, to be clear, we absolutely did not - you know you need a ton of space to grow even one.
We did not have a ton of space. We also still had moles. Anything we planted straight into the ground was at risk of getting eaten, displaced, or otherwise bullied before it ever had a chance to root.
So we ended up with a very specific and slightly ridiculous question:
Can our watermelons be not on the ground?
From there, imagine a cutscene of spirited collaboration between me, my wife, my dad, some wonderfully patient hardware store employees, and Claude.
And the result was a floating watermelon garden.


This was the fun part. We got to ask Claude anything. After exhausting every page we could find about all the normal ways people grow watermelons, we started running the weird scenarios by it instead. It listened to all of our indoor-people ideas and helped us turn them into something at least remotely reasonable.
For anyone who wants to build their own floating watermelon garden: ours included a cucumber trellis, mosquito netting, magnetic mesh shelf doors, a garage shelf, and whatever else we could either find in a store or describe well enough for Claude to help us reason about.

Of course, when it came time to make real decisions, we still needed my dad and those very patient store employees. But that was the beauty of the collaboration. Claude did not replace judgment. It gave us fluency. It helped translate our city-living, pro-gaming, but watermelon-growing dreams into something we could actually attempt.
I think this is the best part of AI. If a gamer can better understand gardening, that is a W for me.
What Claude really gave us was not expertise in a box. It gave us a lower activation energy to try. We could ask dumb questions, pressure-test weird ideas, and keep iterating without losing the thread. A lot of the time, the gap AI is filling is not a person’s job. It is the gap between “we have an idea” and “we know what to try next.”
That, to me, is the beautiful version of this technology.
But that same strength is also where the danger lives.
Our floating watermelon garden worked because Claude was not the only voice in the room. We still had my dad. We still had the wonderfully patient hardware store employees. We still had our own judgment. Claude helped us become conversational in a domain that was new to us, but the humans were still doing the verifying.
That distinction matters.
Because the same thing that makes AI useful in everyday life - its fluency, speed, and confidence - can also make it dangerous in everyday life. Once a system sounds helpful enough, people naturally lean on it. And once you start leaning on a tool for real decisions - what to plant, what to build, what to connect, what to trust - “mostly useful” stops being good enough.

We even learned that in the garden. Claude confidently told us we could use our grass clippings as mulch. What we got instead was a mosquito problem so disrespectful that we ended up swapping to straw and barricading the watermelons. The cost there was low enough to laugh about later, but the lesson stuck: a confident answer is not the same thing as a reliable one.
We also got a strange little bonus out of the chaos. Once the watermelons were gone, grass started growing out of the bags - the same tall fescue grass we had planted to try to bring the yard back to life.
We thought it would die over the winter.
It did not.
This year, we have this:

And I have learned the same lesson in places where the cost is a lot less cute.
I built a containerized CLI agent called Clide so I could use models like Claude with tighter controls. At one point, I let it help me reconfigure access to a remote server. The safe sequence was obvious: verify the private connection works before closing the public one.
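The rule we had agreed on amounts to a classic cutover pattern: never close the existing access path until the replacement is verified. Here is a minimal sketch of that rule - not Clide's actual code, and the callables are hypothetical stand-ins for real checks like an SSH probe over the private interface or a firewall rule change.

```python
def safe_cutover(verify_new_path, close_old_path):
    """Close the old access path only after the new one demonstrably works."""
    if not verify_new_path():
        # Leave the old path open: better locked out of nothing than everything.
        raise RuntimeError("new path not verified; old path left open")
    close_old_path()

# Stand-in callables for illustration only:
closed = []
safe_cutover(lambda: True, lambda: closed.append("public"))
# The unsafe ordering is the reverse: closing the old path first, then
# discovering the new one does not work - with no way back in.
```

The whole point is that the order is load-bearing: the two steps are not interchangeable, and "doing both at once" is just the unsafe ordering with extra confidence.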
Instead, it closed the public path first.
My “PhD-level intelligence” agent had all the information I had and more, and we had written the plan explicitly together. Then, in a moment of synthetic brilliance, Clide decided it would be more efficient to do both tasks at once. I got to watch in disbelief as it snatched defeat from the jaws of victory.
And then came the part that always gets me with these systems: the response. Not recovery. Not initiative. Not even especially useful damage control. Just articulate apologies and a promise that it understood what went wrong and would not do it again.
Spoiler alert: it did not.
That experience is one of many that have forced the same lesson back onto me: fluency is not judgment.
These tools can be remarkably helpful. They can infer what we mean, smooth over language, and make us feel like we are collaborating with something far more competent than a normal piece of software. But that does not mean they understand consequences. It does not mean they know when they are wrong. And it definitely does not mean they are trustworthy on their own.
That gap - between sounding right and being safe to rely on - is exactly why I care about AI safety.
I do not want less of this technology in my life. I want better versions of it.
I want the version that helps turn indoor people into gardeners, weird yard ideas into real projects, and curiosity into action - without quietly smuggling bad judgment in under a confident tone. I want tools that are useful, legible, and honest about uncertainty.
I want systems that reduce friction without asking us to surrender responsibility - systems people can actually trust.
In the meantime, these gardening gamers are embarking on a new challenge with Claude, should you care to tune back in. Our quest? Naturally, Claude is going to help us turn our watermelon-tall-fescue bags into solar-powered smart bird feeders with multiple cameras and sensors…and hopefully the help of some wonderfully patient bird lovers. More to follow…