Vibe Coding + Software Engineering: A Partnership, Not a Competition
Open-source projects from Randroid's Dojo — where AI agents mob-program games, automate game engines, and push the boundaries of what dev tools can do.
These projects run right here in your browser. Click play to launch them — no installs, no downloads.
The infrastructure behind the games. Automation frameworks, AI agent skills, and developer utilities born from real projects.
Every game engine had automated testing — except Godot. Traditional frameworks ran inside the engine, crashing tests along with your game. PlayGodot flipped the model: Playwright, but for games. It controls Godot externally over a custom binary protocol, capturing screenshots, simulating input, and querying scene trees — all from Python. With 210+ tests and a PyPI package, it's the foundation that GoDig and GoPit are tested on, and the reason those games ship with confidence.
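The external-control idea can be sketched generically. The snippet below is a hypothetical length-prefixed JSON framing of my own invention, not PlayGodot's actual wire format; `encode_command`, `query_scene_tree`, and the field names are illustrative assumptions:

```python
import json
import struct

def encode_command(cmd: str, **params) -> bytes:
    """Frame a command as length-prefixed JSON (illustrative framing,
    not PlayGodot's actual wire format)."""
    payload = json.dumps({"cmd": cmd, "params": params}).encode("utf-8")
    # 4-byte big-endian length header, then the JSON payload.
    return struct.pack(">I", len(payload)) + payload

def decode_command(frame: bytes) -> dict:
    """Inverse of encode_command: read the length header, parse the payload."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4 : 4 + length].decode("utf-8"))

# A test runner would send frames like this over a TCP socket to the
# running game process, then read framed responses the same way.
frame = encode_command("query_scene_tree", path="/root/Main")
assert decode_command(frame) == {"cmd": "query_scene_tree",
                                 "params": {"path": "/root/Main"}}
```

Length-prefixed framing is a common choice for this kind of control channel because the reader always knows exactly how many bytes the next message occupies, which keeps the external tool and the game process in lockstep.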
When AI coding agents kept making the same mistakes — wrong build commands, broken test patterns, missing engine conventions — skills became the fix. This collection gives Claude Code, Codex CLI, and OpenCode deep knowledge of Godot, Unreal Engine, autonomous dev loops, and task tracking. One npx skills add command turns a general-purpose LLM into a specialized game dev partner that knows how to export, test, and deploy your project.
The couch coding dream: kick back, grab your phone, and orchestrate AI agents running on your Mac. VibeGo connects your Android phone to Claude Code and Codex CLI over local WiFi via SSH and tmux. Push notifications ping you when the AI needs input or finishes a task. Multiple parallel sessions, jj VCS integration, and an optional auto-responder that handles safe operations while you stay in the loop on the important calls.
A proof-of-concept that bridges the Microsoft AI stack with the Model Context Protocol. TeddyBot combines Semantic Kernel and Microsoft.Extensions.AI with MCP tool integration — 14 filesystem tools let the agent read, write, search, and navigate files through natural conversation. Built on .NET 10, it's a blueprint for adding MCP-powered file operations to any .NET AI application.
The tool that shouldn't need to exist but does. macOS Finder has never had a native "copy path" option in the right-click menu. CopyPath adds it — full path and quoted path for terminal-safe pasting, plus copy from empty folder space. A tiny Swift extension for macOS Ventura+ that scratches the itch every developer has felt but Apple never fixed.
An experiment in letting AI build the whole stack. This project was built entirely by ChatGPT and Cursor CLI — React TypeScript frontend, FastAPI backend, PostgreSQL database, all wired together in Docker. A snapshot of what pure AI-driven development looks like: no human code, only prompts. The result is a clean, deployable fullstack template that actually works.
A 3D Frogger arcade game built in Unreal Engine 5.4 by an 8-agent mob. Not playable in the browser — it's native Unreal — but it served a dual purpose: the proving ground for building out the PlayUnreal tooling, and a stress test for Claude Code's Agent Teams. Eight AI agents coordinating C++ and Blueprint work in a single UE5 project pushed both the engine automation and the multi-agent workflow to their limits.
If PlayGodot proved the model, PlayUnreal is the next frontier. The same external automation philosophy — control Unreal Engine games from Python, run tests outside the engine, capture and compare screenshots — adapted to the beast that is UE5. Born from the UnrealFrog project's need for reliable CI/CD testing of a 3D game built by AI agents.
The "new Mac day" companion. A complete macOS development environment setup with Hammerspoon workspace management, automation scripts, and developer tooling. One-command bootstrapping that takes a fresh Mac from unboxed to fully configured dev machine — terminal, editors, compilers, containers, and custom window management all dialed in.
Generative art meets brand identity. Take a vector image and endlessly reconfigure it — shifting colors, morphing shapes, remixing composition. A creative coding experiment that turns static SVGs into a living canvas of procedural variations. Part tool, part art piece, entirely mesmerizing.
Writing about AI, vibe coding, and what it's like to build software when the tools keep changing under your feet.
I'm hoping we can move past the "true vibe coding means you don't care about the code" debates.
I find two things to be true:
Using Claude Code and the new Opus 4 model, I was able to create a full-stack app bootstrap with just prompts, which I suspect can be productionized with a bit more work (AI-assisted or not).
I couldn't do this with previous models.
I tried, but Claude stumbled implementing the app with the particular stack of libraries that I picked out for the project.
Several times I wiped the repo and started fresh using different techniques.
Each time I started with different versions of the generated project plan.
At one point I was using multiple agents (e.g. frontend expert, backend expert, conductor).
I've tried techniques where I direct Claude to take things one step at a time using custom commands (e.g. review the current implementation so far, plan out the implementation of the next story, review and provide feedback to the backend expert).
Ultimately, each implementation hit a wall and just never quite got to a functioning bootstrap using this complex stack.
With Opus 4, Claude was able to fully deliver on the task.
The bootstrap experiment shows that Claude can generate functioning software, but with some caveats.
Firstly, my mother couldn't do what I did. Heck, my younger brother couldn't even boot up Claude Code if I told him vibes was all he needed to build apps now. So there is some barrier to entry when it comes to knowing what to build and how, even if we aren't writing the code directly.
Secondly, it's not production-ready. That's not to say I couldn't get it there, but that will take some more engineering.
And lastly, there's no reason I (or you, or any other software engineer) couldn't write and edit the code alongside Claude (or whatever Genie you choose). You can go as slow or as fast as you're comfortable with. You can break things down to the tiniest of steps or yolo it with big sweeping changes that you then shape to your liking.
What I'm saying is, there's no need to fear vibing AND there's plenty of engineering to be done.
Take my take with a pinch of salt. Everything is changing rapidly and no one can say for sure where we'll end up once the fog clears. My gut tells me to keep an open mind and not stick to dogmatic “truths” of the past when it comes to software engineering best practices. At the same time, it tells me that what got me here is going to help me even if the exact practices change.
Growing up near Seattle, I used to think I missed the big waves. The personal computer boom. Microsoft exploding. The dot-com frenzy. I got into computers just after all that, on a brand-new HP running Windows ME, right when Napster was taking off. Just enough to feel the magic, but not early enough to ride the first wave.
But now, with Gen AI, vibe coding, and agent workflows taking off, I'm realizing something. This is our wave. This is our moment. We're right in the thick of it 😶🌫️, even if we can't see where it's all going.
The funny thing is, no matter which future actually plays out, it's going to feel obvious in hindsight. If things go great, people will say they saw it coming. If it all burns down, same story. The signs were there. We warned you.
Right now, we're flooded with predictions. Total job loss. The end of creativity. AI paperclipping the planet. But also a boom in productivity. A new kind of economy. Better tools. Safer systems. All of these futures are on the table.
What's likely is that we land somewhere in between. Artists, writers, and developers are already feeling the shift. Their work is being reshaped. Some jobs will disappear. Others will evolve. And entirely new ones will emerge. Think less “mass unemployment” and more “reorg of human output.”
In programming, I think we're going to see something interesting. Agents might write more of the code, but they'll still need us to guide them. The skill isn't going away. It's just going to look different. More critical thinking. More product alignment. Less keyboard grinding. And maybe the same shift happens in product management too. Those roles start to blur.
And when that happens, people will look back and say, “Of course. That was always going to happen.” Same way they talk about the internet, or the smartphone, or any tech shift that rewrote the rules while everyone was busy arguing about it.
So yeah, the fog is real. But I want to be in it. I want to help shape what's next, not just watch it unfold.
Because when the fog lifts, I don't want to say “I knew it.” I want to say “I was there.”
One of my favorite uses for ChatGPT is leveraging it as a judgment-free tool to fill in gaps in my understanding, especially about topics I feel I should know already. There's something uniquely comforting about asking questions without fear of embarrassment.
So, the enigma of UUIDs and the apparent non-issue of duplicates always bugged me. I assumed the uniqueness stemmed from a combination of the precise moment of creation and some hardware-specific detail. However, I learned that while this was true for version one UUIDs, version four relies on randomness. How, then, does this method avoid producing duplicates?
In theory, duplicates are indeed possible with v4 UUIDs, but the sheer volume of unique possibilities makes this incredibly unlikely. This concept initially seemed counterintuitive. Why risk any chance of duplication? Yet, in scenarios demanding less than absolute certainty, the statistical improbability of such an event becomes clear and acceptable.
ChatGPT helped break this down into a more digestible explanation: the realm of UUIDs encompasses 340 undecillion possible combinations. That's 340 followed by 36 zeros—an almost unfathomable number, illustrating the vastness of the UUID space. To me that was still too many zeros to truly understand. That's when I had ChatGPT break it down even further:
“340 undecillion in the American system, or to put it in a more descriptive way: 340 million million million million million millions”
This final breakdown helped me truly visualize the extremely low odds of generating a duplicate.
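The arithmetic is easy to sanity-check in a few lines of Python. Note that the 340 undecillion figure is the full 128-bit space; version 4 UUIDs actually draw from 122 random bits (6 bits are fixed for version and variant), and the trillion-UUID scenario below is my own illustrative number:

```python
# The full 128-bit UUID space: "340" followed by 36 more digits.
space = 2 ** 128
assert str(space).startswith("340282366")
assert len(str(space)) == 39  # 39 digits total, i.e. ~3.4 x 10^38

# Version 4 UUIDs have 122 random bits (6 bits encode version/variant),
# so the random space is slightly smaller than the full 128-bit space.
v4_space = 2 ** 122

# Birthday-problem approximation: after generating n random IDs, the
# probability of any collision is roughly n^2 / (2 * N).
n = 10 ** 12  # a trillion UUIDs
p = n * n / (2 * v4_space)
print(f"Collision odds among a trillion v4 UUIDs: ~{p:.1e}")
```

Even after generating a trillion UUIDs, the collision probability is on the order of 10^-13, which is why treating v4 UUIDs as unique is an acceptable engineering bet in most systems.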
I love using Gen AI this way. Its ability to simplify complex information and address my questions without judgment really allows me to comfortably explore and understand random things like this.
I wasn't quite sure of the best way to end this, so I asked ChatGPT to wrap it up on an exciting note, but without too much cheesiness. I think it failed, but it's also too good not to share 😂
This journey into the depths of seemingly mundane topics, like the intricacies of UUIDs, underscores a profound realization: the pursuit of knowledge, no matter how esoteric, can be both exhilarating and profoundly rewarding. With tools like ChatGPT, we stand on the brink of a new frontier in learning, where questions are met with patience, curiosity is nurtured, and every answer leads to deeper understanding and more questions.
It's not just about UUIDs or the technicalities that fascinate us in the moment; it's about embracing a mindset where no question is too small, and no curiosity is unworthy of exploration. This approach to learning—unhindered by the fear of judgment and empowered by the vastness of accessible knowledge—opens up a universe of possibilities.
As we continue to navigate this ever-expanding digital cosmos, let us carry forward the excitement of discovery, the thrill of understanding, and the joy of connecting seemingly disparate dots. May we always remain curious, for it is in our questions that we find not just answers, but the pathways to endless learning and growth.
I've long had this unrefined idea that large projects need a “conductor” role – someone to connect the dots. Picture this: someone announces in one Slack channel that their shiny new feature is now live. A week later, another person mentions in a meeting that they are working on creating that very same feature, unaware it already exists. Who bridges the gap between the two?
Or in another example: In a team lead meeting, a decision is made. Perhaps it's documented somewhere, and all leads are expected to disseminate this information to their teams, ensuring that the decision is implemented. Yet, weeks later, it becomes evident that several teams never received the memo. Who identifies this oversight and ensures that everyone is on the same page?
These two scenarios are mere illustrations of a broader problem. In reality, such communication hiccups occur multiple times daily in varying forms, especially on sizable projects involving dozens of individuals. One might argue against these examples, questioning where the fault lies: with the original team, the second team, the leads, or even the structure of the meetings themselves. While my examples might seem imperfect, they serve their purpose. They highlight that a single communication error can stem from various sources or be addressed in numerous ways. Furthermore, these are just instances – the types of communication challenges that arise in a large project are vast and varied.
So let's talk about this idea of a conductor role and why I believe it could have real value. In my mind, I've been playing the role of an unofficial conductor on my projects. I'm one of those people who reads every Slack message, takes notes in meetings, and connects the dots whenever I see an opportunity. This might be as simple as cross-posting one Slack message into another channel or thread. It might be calling out a disconnect in stand-up or some other meeting and suggesting that someone reach out to a certain team to resolve it. The role is not clearly defined in my mind yet. However, if you've worked with me on a project, you've likely observed me doing these things. I'm writing this not because I've sat down and fully fleshed out a formal definition of this role—indeed, it remains as nebulous as ever. Perhaps someday I'll do that. But today, I want to connect two dots.
The first dot is this concept of the conductor role, and the second is, of all things, an article in Issue 359 of Game Informer magazine. Yep, that's right, the magazine you get at GameStop. The article is titled as follows (if you read it vertically lol):
RETROSPECTIVE
THE HISTORY OF
SPLINTER CELL
How an unlikely crew of inexperienced game developers created one of gaming's stealth icons BY NOLAN GOOD
It's a great read for the story of how various elements converged to create a perfect storm for creativity and success. Specifically though, the dot I'm trying to connect is from a part of the article where it discusses how a game designer named Nathan Wolff organized the work of many as it related to game mechanics. Essentially, Wolff created what is known as a Design Bible. He wrote docs on every one of the game's gadgets, items, and interactions, detailing their significance within the game and potential use cases. Then each element in these docs was made available to all teams in a central place: physical boards in a “War Room”. Teams then mapped out these elements as they applied to the game, ensuring a balanced distribution of these ideas throughout the game's narrative.
Maybe the connection between what Nathan Wolff did on the first Splinter Cell game project and what I believe a conductor would do on a modern software project is not completely clear. Yet, think about how Wolff must have gone about collecting the data, probing teams for details, and discovering connections between one team's choices and another's. By being the go-between, with the specific purpose of connecting dots, Nathan Wolff was playing the conductor role. I'd contend that without such a role, Tom Clancy's Splinter Cell may never have shipped, or at least not in the award-winning state that it did.
Let me know what you think about the concept of a Conductor and how you might refine this idea further.
As a consultant who has something to prove, it's easy to fall into the trap of overconfidence. I must admit that presenting myself as more than I was in my earlier days worked well. The “fake it till you make it” mindset got me far when I was a newer software engineer and again when I first started as a hands-on consultant. What I learned, though, is that you don't have to know it all. Even more than that, clients (and teammates in general) hate a know-it-all.
People aren't looking for someone who has an answer for everything. In my experience, others actually appreciate someone who readily admits where their expertise ends. How is this possible? Don't clients pay for the best and expect exactly that? Well easy, they will gladly pay for someone who has gaps in their knowledge if that person is honest, hungry, and hardworking. More than that, they want someone who can work well with others to actually deliver!
Being humble isn't about being unconfident in what you know. We're all still expected to be strong-willed. We should use our experience and actual expertise to ensure things are done well. Otherwise, what good is being the expert in your field if nobody supports and follows you when you actually know what you're talking about?
For me, being humble means I'm open to the fact that I might or might not have the best solution for any given problem. If I am right, I want all the support I can get to implement my ideas. If I'm wrong, I want others to feel like they can step in and try it their way.
In the end, delivering high quality software takes a team. Alienating your teammates, by always trying to prove you know better than everyone else, is not a recipe for success. At least not in my humble opinion 😉!