From Idea to Launch in Two Days: Building a Product with AI
I've been using AI for coding for quite a while now, but mostly for isolated tasks: writing functions, debugging, explaining code. I wanted to see how far I could push it. Could I build an entire product from scratch?
The catch: I wanted to build something I'd dreamed about but never attempted, partly due to time constraints, but mostly because I lacked the skills. Specifically, I'm terrible at frontend development and design. I can barely draw a straight line, and translating what looks good in my head to something real has always been a struggle.
The result? I built and launched TalkPulse in two days, with an extra day for hosting, performance tuning, and refactoring. I can't remember the last time I enjoyed building something this much.
What I built
TalkPulse solves a problem I've had for years: getting feedback from audiences after presentations. There are plenty of tools out there (and AI actually helped me evaluate them), but I wanted something simple both for speakers and their audience. No friction, no complexity, just quick and useful feedback.
This wasn't just an exercise in using AI. I genuinely needed this tool for myself. And now it exists, it's live, and anyone can use it for free at talkpulse.app.
The joy of creating
What struck me most about this experience was how much fun I had.
As a backend developer with over a decade of experience in cloud-native technologies, ops, and open source, I rarely see the visual results of my work. My output is usually JSON responses, infrastructure configs, and CLI tools. Building something with a real UI, watching it come together visually, was genuinely exciting.
But the joy wasn't just about seeing pixels on screen. It was about being able to focus on creating rather than getting bogged down by technical hurdles or skill gaps. I felt in charge. Things were happening. I could see results quickly.
In the past, whenever I dipped my toes into frontend development, I'd spend hours fighting with CSS, confused by the latest JavaScript framework trends, struggling to make something that didn't look terrible. This time, I could describe what I wanted and watch it materialize. Features that Claude estimated would take "1-3 business days" were done in five minutes.
I pushed Claude Pro to its limits: I had to upgrade to Max after half a day just to keep going. That gives you a sense of how much back-and-forth was involved, but it was still orders of magnitude faster than doing it myself.
The tech stack and workflow
Here's what I used:
- Claude Code with Opus for the heavy lifting
- Lovable to generate the initial UI from a single (admittedly long) prompt
- TanStack Start as the full-stack framework (I migrated from Lovable's output)
- shadcn/ui for components
- Clerk for authentication (massive time saver)
- Cloudflare Workers for hosting
The migration from Lovable to TanStack Start happened early. Lovable is excellent at generating React frontends, but I needed a proper full-stack solution. TanStack Start is gaining traction, and the migration was surprisingly smooth.
Clerk deserves special mention. Authentication is one of those things that can eat up days of development time. With Clerk, it was basically plug and play. Cloudflare Workers was similarly painless: just a TOML file and some config tweaks.
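To give a sense of how little wiring that means, here's a minimal sketch using @clerk/clerk-react. The real app goes through Clerk's TanStack Start integration, so the details differ, and the env variable name below is just a placeholder:

```tsx
import type { ReactNode } from "react";
import {
  ClerkProvider,
  SignedIn,
  SignedOut,
  SignInButton,
  UserButton,
} from "@clerk/clerk-react";

// Wrap the app once with the provider, then gate UI on the auth state.
// VITE_CLERK_PUBLISHABLE_KEY is a placeholder env variable name.
export function AppShell({ children }: { children: ReactNode }) {
  return (
    <ClerkProvider publishableKey={import.meta.env.VITE_CLERK_PUBLISHABLE_KEY}>
      <SignedOut>
        <SignInButton />
      </SignedOut>
      <SignedIn>
        <UserButton />
        {children}
      </SignedIn>
    </ClerkProvider>
  );
}
```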
Evolving the workflow
My workflow evolved significantly over those two days.
Initially, I worked in a single Claude Code session. Then I discovered workmux, a tool that creates Git worktrees and runs coding agents in tmux sessions. This let me work on multiple features in parallel. While one agent was researching or planning, I could write prompts for another task.
As a bonus, it finally gave me a reason to learn tmux, which I'd wanted to do for years but never had the motivation for.
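I won't pretend to remember workmux's exact commands, but the manual equivalent of what it automates looks roughly like this (branch and session names are made up):

```sh
# One worktree and one tmux session per feature, each running its own agent
git worktree add ../talkpulse-feedback-form -b feedback-form
tmux new-session -d -s feedback-form -c ../talkpulse-feedback-form
tmux send-keys -t feedback-form 'claude' Enter

# Peek at progress, then detach again with Ctrl-b d
tmux attach -t feedback-form
```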
Eventually, I integrated the Claude Code GitHub Action and started working through issues. This slowed things down a bit, but gave me a record of all changes. More importantly, running the agent in a sandboxed environment made it easier to grant broader permissions. Worst case, I just closed the PR and started over.
Speaking of permissions: one big time-saver was granting read permissions upfront for everything. The agent only asked for confirmation when making changes. This small tweak dramatically reduced the friction of constant permission prompts.
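Claude Code can persist permission rules in a settings file checked into the repo. I'm not reproducing my exact setup, and the rules below are only illustrative of the idea, pre-approving read-only operations while leaving edits and other shell commands behind the usual confirmation prompt; check the current Claude Code docs for the exact rule syntax:

```json
{
  "permissions": {
    "allow": [
      "WebFetch",
      "Bash(git log:*)",
      "Bash(git diff:*)",
      "Bash(cat:*)"
    ]
  }
}
```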
Waking up from the dream
Here's the uncomfortable part.
As time went by, I stopped looking at the actual code. As long as it worked, I trusted the AI with larger and larger changes. I just accepted whatever it produced.
After two days, when I finally sat down to review what had been built, it felt like waking up from a dream. A bit disorienting. The codebase had evolved, the structure was different from what I remembered, and, as it turned out, it didn't always follow best practices.
Some issues I found:
- Separation of concerns was often missing
- Code structure was inconsistent
- Dead code everywhere (previously generated code that was no longer used but never cleaned up)
- Outdated dependencies and stale patterns that made the code confusing
I needed to re-learn the codebase I had supposedly just built and make substantial changes to satisfy my inner perfectionist. Which makes it even more remarkable that I managed to launch in just three days.
The limits of AI
AI excels at writing code, especially when given good specifications. But there are scenarios where it seriously struggles and can even lead you astray.
I spent about an hour debugging an issue on Cloudflare Workers, following the AI's suggestions. It kept pointing at Clerk configuration, PostHog integration, various external services. I followed its instructions dutifully. Nothing worked.
Finally, I gave up on the AI and followed my instincts. I found the problem in 10 minutes.
The culprit? A typo in the configuration: "treu" instead of "true".
The AI was guessing, looking at things that weren't the problem and that I'd already investigated. When it comes to debugging integrations, finding that needle in the haystack, it lacks the intuition that comes from experience. I've had similar experiences debugging Kubernetes issues: even with full cluster access, the AI gives up after the obvious things don't work.
I'm not saying AI is useless for debugging. Others have probably had better luck. But in my experience, there's room for improvement.
Trust, boundaries, and the waterfall question
This raises some interesting questions about trust.
If garbage goes in, garbage comes out, right? And I'll admit, I sometimes pushed Claude with vague, half-baked descriptions, sometimes just a few words. More often than not, it figured out what I wanted. So maybe AI can make sense of garbage. Sometimes.
But there's a pattern: the larger the task and the fewer constraints you provide, the higher the chance it goes off in an unexpected direction.
Is this the AI's fault? Probably not. The same would be true for a human developer. If you can't read someone's mind, you can't build exactly what they're imagining.
The danger is in how much we trust seemingly working code. How much of a dead end might we find ourselves in days, weeks, or months later when we actually need to understand and modify that code? Even after just two days, I had to turn back from some dead ends.
What happens to a startup that builds with AI for months, only to discover:
- Nobody on the team understands the codebase anymore
- The code has become so fragmented that even AI struggles to make changes correctly
I don't know the answers to these questions.
Here's another thought that keeps nagging at me: if we compensate by writing detailed specifications to keep the AI on track, aren't we just going back to waterfall? Writing massive specs before any code gets written? Wouldn't we lose the agility needed in a fast-paced world? Is there a chance we'd be faster just doing the work ourselves instead of writing bulletproof specifications?
I honestly don't know. And AI is evolving so fast that even if these concerns are valid today, they might be irrelevant in a month.
What I learned
A few takeaways from this experience:
Start over freely. One of the best things about this process is how quickly you can iterate. Don't like something? Throw it away and start fresh. You're not losing weeks of work, at most a few prompts. I deleted context and closed PRs multiple times when things went off track.
You'll get better at prompting. I can't objectively describe what makes a good prompt, but I could feel myself improving with each attempt. It's a skill that develops through practice.
Grant permissions strategically. Giving read access to everything while requiring approval for changes was a good balance. Running in sandboxed environments (like GitHub Actions) lets you be even more permissive.
AI broadens your horizons. This experience showed me what's now possible. There are things I know I can build now that I never would have attempted before. My mind is filled with ideas, and I can't even decide what to do next.
What's next
TalkPulse is live and free at talkpulse.app. I might put it on Product Hunt: what started as a joke ("I built a product in two days, might as well make it official") might actually be worth doing.
Whether it becomes a business, I don't know. If it gets popular or maintenance costs grow, I might add paid features. But there's something appealing about this as a way of life: building things that people can use so they don't have to build them themselves.
If I had to summarize: this was a genuinely great experience. I managed to build something from the ground up that I'd wanted to create for some time. I enjoyed the process, even with the setbacks. I finished something, which, let's be honest, is rare for side projects.
AI is an invaluable tool that keeps getting better. It fills in skill gaps that would otherwise stop us from building things. It's not perfect, and there are real questions about trust and code quality. But there's no getting around it anymore.
I'll definitely be doing this again.