AI Is Not There Yet
The Honest Truth From Someone Who Really Wants It to Be
As an AI consultant and automations expert, this is not the post I want to write.
But it’s true. AI is not there yet.
As much as I want it to be the solution to everything, it’s got its flaws. Real ones. Frustrating ones. The kind that make you want to throw your laptop across the room at 11pm when something that should work simply doesn’t.
I spend most of my time in this newsletter telling you how amazing AI is. And it is. But today, I owe you the full picture.
Let me tell you about my last two weeks.
The Lovable Problem
Let’s start with Lovable.
Wow. It’s one of the most amazing applications I’ve ever seen. It builds beautiful websites and applications that are live within minutes. Designed, fully functional, ready to use. I’ve recommended it in this newsletter multiple times.
But here’s what I’ve discovered: Google hates Lovable websites.
Not because they’re ugly. They’re gorgeous. Not because they’re slow. They’re fast.
It’s because of how Lovable builds them. Lovable generates single-page apps that render their content with JavaScript in the browser, so the raw HTML a crawler first receives is close to an empty shell. Google can index JavaScript-heavy pages, but it does so less reliably and more slowly than plain HTML, and the content often doesn’t get read the way it needs to.
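To make that concrete, here’s a toy Python sketch. This is not how Googlebot actually works, and the HTML strings are invented, but it shows the core problem: a crawler that doesn’t execute JavaScript can only read what’s in the raw HTML, and a client-rendered app’s raw HTML has almost nothing in it.

```python
# Two simplified responses a crawler might receive (illustrative only).
spa_shell = (
    '<html><body>'
    '<div id="root"></div>'          # content gets injected here by JavaScript, later
    '<script src="/app.js"></script>'
    '</body></html>'
)
static_page = (
    '<html><body>'
    '<h1>AI Consulting Services</h1>'  # content is right there in the HTML
    '<p>We build automations for small businesses.</p>'
    '</body></html>'
)

def crawler_sees_content(html: str, phrase: str) -> bool:
    """A crude stand-in for a non-JS crawler: it can only read the raw HTML."""
    return phrase.lower() in html.lower()

print(crawler_sees_content(spa_shell, "AI Consulting"))    # False: content arrives via JS
print(crawler_sees_content(static_page, "AI Consulting"))  # True: content is in the HTML
```

Same content, same site, but only one version is legible to a crawler that never runs your JavaScript.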
If you’re building a portfolio site, a demo, an internal tool, or something you don’t need to rank in search? Lovable is still incredible.
But if you want your website to show up when people Google your services? Lovable is great for creating the design, but you’ll need to convert that design to a more search-friendly format.
Based on my research, and sticking with our Pure Code theme, you may want to consider Astro: a framework built specifically for content-heavy sites that need to rank. It ships plain HTML by default, so search engines can read your pages without running any JavaScript.
(Please, someone build a Lovable-to-Astro converter soon. I’ll be your first client.)
The Vibe Coding Reality Check
Now let me tell you about my week in Vibe Coding hell.
Vibe Coding is still amazingly cool and fun. Until you realize what you built doesn’t work and you don’t know why.
Then come the hours. The days. Screenshots to Claude. Clarification questions. “Debugging.” Error messages that mean nothing to you. Stack traces that might as well be ancient hieroglyphics.
I’m currently working on a project I’m incredibly excited about. I won’t share the details yet, but I believe in this idea deeply.
So I did something different this time. Instead of my usual approach of giving Claude a few bullet points and asking it to build something, I invested heavily in the prep work.
Five hours of discussions with Claude. Brainstorming how every piece would work. I’d take breaks, hit the gym to clear my head, then jump back in with more ideas. I mapped out edge cases. I thought through user flows. I wanted the first build to be perfect.
(Just to clarify my process: I brainstorm with Claude in regular chat. When the idea is fully formed, I ask Claude to write me a detailed prompt that I’ll use in Claude Code to actually build the thing.)
Finally, I had it. The perfect prompt. Every detail accounted for. Every requirement specified.
I was working remotely that day and wanted to deploy it on my main computer at home so I could watch it come to life. Here’s how obsessed I was: on the drive home, I genuinely worried about getting in a car accident and this thing never getting made.
I get home. I deploy the prompt. Claude Code works for about 30 minutes, building everything. Then it tells me to set up the connections: Supabase, Mailgun, AWS, Stripe, Railway (it can’t do that itself, yet).
Fine. I’d used all these services before. I knew it wouldn’t be easy, but it should be easier than the first time.
It took four hours.
Four hours to get everything connected with green checkmarks. And it wasn’t smooth. Claude Code builds everything in Python and invents its own names for things like environment variables and database connections, but those names don’t always match what Railway expects or how Supabase formats them. So you try. It breaks. You screenshot. You chat. You fix. You try again.
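One trick that would have saved me some of those hours: make the app check its own configuration before it does anything else. Here’s a minimal Python sketch; the variable names are examples I made up, not the ones any particular service requires, so substitute whatever your deploy target actually expects.

```python
import os

# Hypothetical names -- replace with the exact variables your services require.
REQUIRED = [
    "SUPABASE_URL",
    "SUPABASE_SERVICE_ROLE_KEY",
    "MAILGUN_API_KEY",
    "STRIPE_SECRET_KEY",
]

def missing_vars(env, required=REQUIRED):
    """Return the required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

# Fail fast with one clear message instead of a cryptic crash deep in the app.
# Here we check a partial example config; in real code, pass os.environ.
example_env = {
    "SUPABASE_URL": "https://example.supabase.co",
    "STRIPE_SECRET_KEY": "sk_test_123",
}
print(missing_vars(example_env))  # ['SUPABASE_SERVICE_ROLE_KEY', 'MAILGUN_API_KEY']
```

A check like this at startup turns “it breaks somewhere, screenshot, chat, retry” into one readable list of exactly which names Railway, Supabase, or anything else didn’t get.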
Frustrating. But doable.
Finally: green checks everywhere. Successful builds. API keys locked in. Everything connected.
I take a slow breath. I launch it.
ERROR.
Basic functionality. Doesn’t work.
Welcome to My World
Now I understand what developers have been dealing with forever.
I’m sure there are a few reading this right now, laughing, saying: “Welcome to my world, buddy.”
Fair. I get it now.
The gap between “it built successfully” and “it actually works” is a canyon I didn’t fully appreciate until I was standing at the edge of it, staring down.
But Here’s the Thing
Despite all my frustrations, I need to keep this in perspective.
Six months ago, I could never have built the things I’ve built with AI.
Not even close.
I’ve never felt so free to create whatever comes to mind. Design something new. Build something just for fun. Test an idea without hiring a developer or learning to code for three years first.
And as I keep trying, I get better. I know what to expect. I recognize error patterns. I understand what Claude needs from me to help troubleshoot.
The work is working on me.
Making me better. Making me more likely to reach a place where this becomes easy.
And here’s the other thing: as I keep working, AI keeps improving. The models get smarter. The tools get more reliable. The error messages get clearer. The gap between “built” and “works” gets smaller.
Soon, AI will make fewer mistakes. And so will I. And it will hopefully keep amazing me in good ways instead of frustrating ones.
The Bottom Line
AI isn’t perfect.
It will waste your time. It will break things. It will confidently build something that doesn’t work and have no idea why.
But guess what? We aren’t perfect either.
And despite every frustration, every late night debugging session, every moment I wanted to give up...
AI is still the best damn guide I’ve ever had.
Sure, sometimes we wander off trail. Sometimes we get completely lost. But we keep moving.
And we’ll get there. Together.
-Scott
SmartOwner is published (almost) daily by Scott McIntosh at DigitalTreehouse. Want AI consulting or automation for your business? Reply to this email.


