Growth & Strategy
Personas: SaaS & Startup
Time to ROI: Medium-term (3-6 months)
Last year, a potential client approached me with what seemed like a dream project: a substantial budget to build a two-sided marketplace platform using the latest AI and no-code tools. The technical challenge was exciting, and it would have been one of my biggest projects to date.
I said no.
Here's the thing: they wanted to "test if their idea worked" by building a complex AI-powered platform. No existing audience, no validated customer base, no proof of demand. Just an idea, enthusiasm, and a budget that could have kept me busy for months.
This experience taught me something crucial about AI product-market fit that most founders get completely wrong. Everyone's rushing to build AI MVPs when they should be validating demand manually first. The constraint isn't building anymore—it's knowing what to build and for whom.
In this playbook, you'll learn:
Why I turned down a lucrative AI project and what it taught me about true PMF validation
The counterintuitive approach to AI MVP development that actually works
A step-by-step framework for validating AI product ideas before writing a single line of code
Real examples of SaaS companies that found PMF through manual processes first
How to use AI as a scaling tool, not a validation crutch
This isn't another "build fast and iterate" story. It's about the discipline to validate before you build—especially when AI makes building feel deceptively easy.
Industry Reality
What the AI PMF playbooks won't tell you
Walk into any accelerator, read any startup blog, or browse Product Hunt, and you'll hear the same advice about AI product-market fit: build fast, ship early, let users tell you what works. The conventional wisdom goes like this:
Start with an AI MVP: Use no-code tools and AI to build a functional prototype quickly
Launch and iterate: Get it in front of users, gather feedback, improve rapidly
Scale with data: Let user behavior guide your product decisions
Find PMF through usage: PMF will emerge from product usage patterns and retention metrics
AI accelerates everything: Machine learning will help you understand users better than traditional research
This advice isn't necessarily wrong—it's just incomplete. The problem is that it assumes you already have some market understanding. Building an AI product is easy now. Knowing what AI problem to solve? That's the hard part.
Most AI startups I've observed fall into the same trap: they get seduced by the technology's possibilities instead of focusing on market reality. They build sophisticated machine learning models before validating basic demand. They automate processes that users haven't proven they want automated.
The conventional approach works when you're iterating on proven concepts. But for truly new AI applications—the ones that create new user behaviors rather than optimizing existing ones—this build-first mentality can lead you down expensive rabbit holes.
Here's what the standard PMF advice misses: your first "AI product" shouldn't use AI at all.
Consider me your business accomplice.
Seven years of freelance experience working with SaaS and ecommerce brands.
The client came to me through a referral, excited about building what they called "an AI-powered two-sided marketplace for service professionals." The budget was substantial—definitely enough to make this a priority project. They'd heard about tools like Bubble and Lovable that could build complex applications quickly, and they wanted to leverage AI for matching, recommendations, and automation.
During our initial conversations, I asked the standard discovery questions: Who's your target market? What problem are you solving? How do you know people want this? Their answers revealed the core issue.
They had done market research—surveys, competitor analysis, trend reports. Everything looked promising on paper. But when I dug deeper, I realized they had zero direct interaction with their supposed customers. No interviews with service providers who would supply the marketplace. No conversations with customers who would demand these services. Just assumptions based on "market gaps" they'd identified.
The red flag moment came when I asked: "If you couldn't use any technology, how would you connect these service providers with customers manually?" They hadn't considered it. The entire business model was built around the assumption that AI automation would solve problems they hadn't proven existed.
This is when I made the decision that surprised them—and initially disappointed me financially. Instead of taking the project, I recommended they spend one month manually connecting service providers with customers using nothing but spreadsheets, emails, and phone calls.
"But we want to test if AI can improve the matching process," they protested. "How can we do that without building the AI?"
My response changed how I think about AI product development: "If you can't make your marketplace work manually, AI won't save it. And if you can make it work manually, you'll understand exactly how AI should improve it."
Here's my playbook
What I ended up doing and the results.
Here's the framework I now use for every AI product idea, based on what I learned from turning down that project and observing successful AI companies:
Phase 1: Manual Market Validation (Weeks 1-4)
Before touching any AI tools, prove the core value proposition manually. Create what I call a "Wizard of Oz" version where you're the AI.
For the marketplace example: I recommended they create a simple landing page explaining their service, then manually match service providers with customers using phone calls and spreadsheets. Every "AI recommendation" would actually be them researching and making connections personally.
This approach forces you to answer three critical questions:
Do people actually want what you're building?
What does good matching/recommendation/automation actually look like?
Where are the real friction points in the process?
Phase 2: Document the Manual Process (Weeks 5-6)
Once you've manually served 10-50 customers, document every step of your process. This becomes your AI training data and feature specification. For every decision you made manually, ask: "Could an algorithm make this decision better, faster, or more consistently?"
Most importantly, identify which manual steps customers valued most. Often, what founders assume needs automation is actually what customers value about the human touch.
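If "document every step" feels abstract, here's a minimal sketch of what a decision log could look like for the marketplace example. Everything here, from the field names to the CSV file, is hypothetical rather than a prescribed schema; the point is that each row records one real decision you made by hand, plus why you made it and how it turned out.

```python
import csv
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical schema for one manual matching decision.
# Adapt the fields to whatever you actually weigh when matching by hand.
@dataclass
class MatchDecision:
    request_id: str                 # which customer request this was
    request_text: str               # what the customer actually asked for
    service_category: str           # how you categorized it yourself
    candidate_providers: List[str]  # everyone you considered
    chosen_provider: str            # who you actually recommended
    reason: str                     # why, in one plain sentence
    outcome: str                    # e.g. "booked", "ghosted", "refunded"

def append_to_log(decision: MatchDecision, path: str = "decision_log.csv") -> None:
    """Append one manual decision to a CSV that later doubles as training data."""
    row = asdict(decision)
    row["candidate_providers"] = "; ".join(decision.candidate_providers)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if f.tell() == 0:  # brand-new file: write the header first
            writer.writeheader()
        writer.writerow(row)
```

Fifty honest rows like this tell you which decisions were mechanical and which needed judgment, which is exactly the split Phase 3 automates around.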
Phase 3: Selective Automation (Weeks 7-12)
Now—and only now—start building AI features. But don't automate everything at once. Pick the most repetitive, time-consuming manual tasks that don't require nuanced judgment. Build these as isolated features that enhance your proven manual process.
For the marketplace, this might mean:
First AI feature: Automatically categorize incoming service requests
Second AI feature: Suggest potential matches based on past successful connections
Third AI feature: Optimize pricing recommendations based on market data
Each feature gets validated against your manual baseline. If the AI doesn't perform better than your manual process, don't ship it.
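To make that gate concrete: assuming you kept a decision log like the sketch above, the baseline check for the first feature (request categorization) can be this crude. The keyword-based ai_categorize below is only a stand-in for whatever model or API you eventually plug in, and the 90% bar is an arbitrary stand-in for "at least as consistent as I am by hand."

```python
import csv

def ai_categorize(request_text: str) -> str:
    """Stand-in for your real model call; swap in an LLM or classifier here."""
    text = request_text.lower()
    if "leak" in text or "pipe" in text:
        return "plumbing"
    if "wiring" in text or "outlet" in text:
        return "electrical"
    return "general"

def agreement_with_manual_baseline(log_path: str = "decision_log.csv") -> float:
    """Share of past requests where the AI picks the same category you chose by hand."""
    matches, total = 0, 0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if ai_categorize(row["request_text"]) == row["service_category"]:
                matches += 1
    return matches / total if total else 0.0

if __name__ == "__main__":
    score = agreement_with_manual_baseline()
    print(f"Agreement with manual decisions: {score:.0%}")
    if score < 0.90:  # arbitrary bar: tune it to your own consistency
        print("Not better than the manual process yet. Don't ship it.")
```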
Phase 4: Scale What Works (Month 4+)
By now, you have real PMF data. You know which parts of your process customers value, which parts you can automate without losing quality, and which parts might always need human oversight. This is when you can confidently build the full AI-powered platform.
The difference? You're building AI to scale a proven business model, not hoping AI will create a business model.
Manual Validation
"Be the AI" for 30 days to understand what good looks like before automating anything.
Documentation Phase
Map every decision point in your manual process—these become your AI feature specifications.
Selective Automation
Automate the boring stuff first. Keep the high-value human interactions until you're sure AI improves them.
Scale Strategically
Once AI enhances your proven process, scale the automation—but always with human oversight options.
I followed up with that potential client six months later. They had taken my advice (after initially going to another developer who built them a basic platform that got zero traction). The manual validation revealed something crucial: their target service providers didn't actually want a marketplace platform—they wanted direct client referrals.
They pivoted to a simple referral service, validating the concept manually first. Within three months of manual operation, they were generating $15K monthly recurring revenue with just email and spreadsheets. Now they're selectively automating parts of the process, but they understand exactly which parts add value.
Compare this to similar AI marketplace startups I've tracked: those that built AI-first typically spent 6-12 months and $50K-200K before discovering fundamental flaws in their assumptions. The manual-first approach cost them $200 in landing page tools and one month of time.
The broader lesson applies beyond marketplaces. I've seen this pattern across AI products:
AI writing assistants that succeed start with human writing services
AI customer support tools that work build on proven human support scripts
AI recommendation engines that convert are based on manual curation insights
The key metric isn't how sophisticated your AI is—it's whether customers would pay for your manual version first.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
The most important lesson from this experience: AI product-market fit requires human market fit first. You can't automate your way to PMF—you can only scale it once you've found it manually.
Here are the seven key insights that changed my approach to AI product development:
Manual processes reveal true user needs: What seems obvious when theorizing becomes complex when you're actually serving customers manually
AI amplifies existing value, rarely creates it: If your manual process isn't valuable, automation won't make it valuable
Customers buy outcomes, not technology: They don't care about your AI—they care about results
Speed of building isn't speed to market: Building the wrong thing fast is slower than building the right thing deliberately
Manual operations provide the best AI training data: Your own decision-making process becomes the model
Selective automation beats full automation: Keep human control where customers value it most
Validation is cheaper than iteration: One month of manual validation saves months of product iteration
This approach feels slower initially, but it's actually faster to sustainable revenue. More importantly, it builds better AI products because they're solving proven problems rather than hypothetical ones.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS founders building AI products:
Start with a manual service version of your AI product idea
Validate demand through personal customer interactions first
Document your manual process as AI feature specifications
Build AI to scale proven value, not create untested value
For your Ecommerce store
For ecommerce businesses exploring AI:
Test AI features manually first (personal recommendations, hands-on customer support)
Use manual processes to understand customer behavior patterns
Automate repetitive tasks while keeping human oversight for complex decisions
Scale AI features based on proven manual customer success metrics