Last year, a potential client approached me with an exciting opportunity: build a two-sided marketplace platform powered by AI. The budget was substantial, the technical challenge was interesting, and it would have been one of my biggest projects to date.
I said no.
Not because I couldn't do it. With tools like Bubble and AI integrations, building complex platforms has never been more accessible. The no-code revolution means you can prototype sophisticated applications in days, not months.
But here's what shocked my client when I explained why I turned down their project: if you're truly testing an idea, your MVP shouldn't take three months to build—it should take one day.
This conversation changed how I think about MVPs in 2025. While everyone's rushing to build AI-powered platforms, most founders are solving the wrong problem first. They're building beautiful, functional prototypes for ideas that haven't been validated.
In this playbook, you'll learn:
Why I recommend manual validation before any AI MVP development
The real purpose of MVPs in the age of no-code AI tools
My framework for deciding when to build vs. when to validate manually
How to use Bubble effectively for AI MVPs (when the time is right)
What happened to that client who went elsewhere
If you're considering an AI MVP, this might save you months of wasted development time. Let's dive into why SaaS validation should always come before building.
Industry Reality
What every founder believes about AI MVPs
Walk into any startup accelerator or scroll through entrepreneur Twitter, and you'll hear the same advice repeated endlessly: "Build fast, test faster." The conventional wisdom around AI MVPs goes something like this:
Use no-code tools like Bubble to rapidly prototype your AI idea
Integrate AI APIs to add intelligence to your platform
Launch to users within 2-3 months
Iterate based on feedback until you find product-market fit
Scale from there with a proven concept
This approach exists because the barrier to building has never been lower. Bubble can handle complex logic, AI APIs are accessible to non-technical founders, and success stories like Airbnb's "manual concierge" phase get misinterpreted as "build first, validate later."
The problem? This advice conflates "building capability" with "proving demand." Just because you can build an AI-powered marketplace in Bubble doesn't mean you should—at least not as your first step.
Most founders skip the most critical question: Do people actually want this solution badly enough to change their current behavior? Instead, they get seduced by the technical possibilities and the dopamine hit of seeing their idea come to life in a functional prototype.
The result? Beautiful, working MVPs that solve problems nobody was willing to pay for. I've seen this pattern so many times that I developed a different framework entirely.
When this client contacted me about their two-sided marketplace idea, they had everything mapped out. User personas, technical architecture, even wireframes. They'd identified a real problem in their industry and had a sophisticated solution involving AI matching algorithms.
Their request was straightforward: "We want to build an MVP to test if our idea works. Can you help us create this platform?"
Here's what they had: enthusiasm, budget, and a solid understanding of the technical requirements. Here's what they didn't have: a single validated customer on either side of their marketplace.
The red flag wasn't their idea—it was their approach.
They wanted to spend 3+ months building a platform to "see if people would use it." But marketplaces are notoriously difficult to validate because you need supply and demand simultaneously. They were essentially planning to build a complex matching system before proving that either side wanted to be matched.
I asked them a simple question: "Have you manually connected buyers with sellers in your industry?"
The answer was no. They'd done market research, surveys, and interviews. But they hadn't actually tried to solve the problem manually—the most reliable way to test if people will pay for a solution.
This is when I realized that in the age of AI and no-code tools, the constraint isn't building—it's knowing what to build and for whom. The real MVP isn't a product; it's a process that proves demand exists.
That's when I made my controversial recommendation: spend one month manually operating their marketplace idea before writing a single line of code.
Here's my playbook
What I ended up doing and the results.
Instead of building their AI-powered marketplace platform, I proposed what I called a "Manual MVP" approach. Here's the exact framework I recommended:
Day 1: Create a Simple Landing Page
Not a platform—just a single page explaining the value proposition. Use a tool like Framer or even a Notion page. The goal isn't to impress; it's to test if the core message resonates.
Week 1: Manual Outreach to Both Sides
Start reaching out to potential buyers and sellers separately. Don't mention a "platform" or "AI matching." Instead, position yourself as a connector who can help solve their specific problem.
Weeks 2-4: Manual Matching via Email/WhatsApp
When you find interested parties, connect them manually. Handle the entire process through direct communication. Track every interaction, conversion rate, and piece of feedback (a minimal tracking sketch follows this framework).
Month 2: Analyze What Actually Happened
Only after proving that people will engage with manual matching should you consider building automation. By this point, you'll know exactly which features matter and which don't.
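To make "track everything" concrete, here's a minimal sketch of the funnel analysis your manual phase should produce. It assumes you log every match attempt in a CSV; the column names and stage labels are illustrative assumptions, not part of the original framework, and a spreadsheet formula does the same job.

```python
# Minimal sketch: analyze a manual-matching log kept as a CSV.
# Assumed columns (illustrative only): date, buyer_id, seller_id, stage, deal_value
# where stage is one of: contacted, intro_made, deal_closed.

import csv
from collections import Counter

STAGES = ["contacted", "intro_made", "deal_closed"]

def funnel_report(path: str) -> None:
    counts = Counter()
    revenue = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["stage"]] += 1
            if row["stage"] == "deal_closed":
                revenue += float(row.get("deal_value") or 0)

    base = counts["contacted"] or 1  # avoid division by zero on an empty log
    for stage in STAGES:
        print(f"{stage:>12}: {counts[stage]:4d} ({counts[stage] / base:.0%} of contacted)")
    print(f"transaction volume: ${revenue:,.0f}")

if __name__ == "__main__":
    funnel_report("manual_matches.csv")
```

If the deal_closed rate holds up at manual scale, those same stages become the feature roadmap for the eventual build; if it doesn't, you've learned that for the cost of a spreadsheet, not a platform.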
The Key Insight: Your MVP should be your sales and marketing process, not your product.
If you can't make the business model work manually, adding AI and automation won't fix the fundamental issues. But if manual operations prove demand exists, then you have a clear roadmap for what to build.
For this specific client, I outlined how they could test their marketplace concept using existing communication tools, a simple CRM, and spreadsheets for tracking. The entire setup could be operational in days, not months.
The goal wasn't to scale manually forever—it was to prove the core assumptions before investing in platform development. Once validated, tools like Bubble become incredibly powerful for rapid prototyping of the proven concept.
Validation First
"If you can't make it work manually, automation won't save you. Start with proven demand before building any platform."
Manual Operations
"Track every interaction in your manual phase. This data becomes your product roadmap when you do start building."
AI Enhancement
"Add AI features only after you understand which human decisions need to be automated. Don't start with AI."
Platform Timing
"Build your Bubble MVP when you have validated demand and clear feature requirements, not before."
Three months later, I followed up with this client to see what path they'd chosen. They'd decided to work with another developer who would build their original platform concept.
The outcome was exactly what I'd predicted.
They spent 4 months and significant budget building a functional marketplace with AI matching capabilities. The platform worked technically—users could sign up, create profiles, and get matched based on their criteria.
But after 6 months of trying to gain traction, they had fewer than 50 active users and zero paying customers. The AI matching worked perfectly, but they'd built a sophisticated solution for a problem that wasn't urgent enough for people to change their existing behavior.
Meanwhile, I applied my Manual MVP framework to a different marketplace concept for another client. In 30 days, we manually connected over 200 buyer-seller matches and generated $15K in transaction volume through simple email coordination and a basic payment system.
That manual success gave us the confidence and data to build a proper platform. When we did eventually use Bubble to automate the successful manual processes, we had clear feature requirements and validated demand on both sides.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
The biggest lesson from this experience: In 2025, the bottleneck isn't building—it's knowing what's worth building.
Start with manual validation before any platform development
Your first MVP should be a process, not a product
AI and no-code tools are incredible for scaling proven concepts, terrible for testing unproven ones
Manual operations reveal which features actually matter to real users
Distribution and validation come before development, always
If you can't acquire customers manually, a platform won't solve that problem
Use Bubble for rapid prototyping after validation, not for initial market testing
The most expensive mistake founders make is building the wrong thing efficiently. Manual validation costs days or weeks. Building the wrong platform costs months and significant budget.
When you do reach the point where building makes sense, tools like Bubble become incredibly powerful. But by then, you'll know exactly what to build and why.
How you can adapt this to your Business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS startups considering an AI MVP:
Validate your core hypothesis manually before building any product
Use email automation and simple tools to test your business model first (see the outreach sketch after this list)
Focus on distribution strategy before product features
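As one concrete example of "email automation and simple tools," here's a minimal mail-merge sketch for the manual outreach phase. The CSV columns, SMTP host, credentials, and message template are all assumptions for illustration; any CRM or mail-merge tool achieves the same result.

```python
# Minimal mail-merge sketch for manual validation outreach.
# Assumes a prospects.csv with columns: email, name, problem (illustrative).
# SMTP host, credentials, and the template are placeholders.

import csv
import smtplib
from email.message import EmailMessage

TEMPLATE = """Hi {name},

I'm manually matching people who need help with {problem}
with vetted providers. Interested in a quick intro?
"""

def send_outreach(csv_path: str, host: str, user: str, password: str) -> None:
    with smtplib.SMTP_SSL(host, 465) as smtp:
        smtp.login(user, password)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                msg = EmailMessage()
                msg["From"] = user
                msg["To"] = row["email"]
                msg["Subject"] = "Quick question about " + row["problem"]
                msg.set_content(TEMPLATE.format(**row))
                smtp.send_message(msg)

if __name__ == "__main__":
    send_outreach("prospects.csv", "smtp.example.com", "me@example.com", "app-password")
```

The tooling is beside the point: if templated, personal emails can't start conversations on either side of your market, a platform won't start them either.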
For your E-commerce store
For e-commerce applications of this framework:
Test marketplace concepts through manual seller outreach and buyer validation
Use existing platforms (social media, email) to validate demand before building
Apply AI features only after understanding manual decision-making patterns