Last year, a potential client approached me with what seemed like a goldmine opportunity: build a complex AI-powered two-sided marketplace platform. The budget was substantial, the technical challenge was exciting, and it would have been one of my biggest projects to date.
I said no.
Not because I couldn't build it—with today's AI and no-code tools, technically you can build almost anything. The red flag was hidden in their core statement: "We want to see if our idea is worth pursuing."
They had no existing audience, no validated customer base, no proof of demand. Just an idea and enthusiasm. Sound familiar?
This experience taught me that in the age of AI, the constraint isn't building—it's knowing what to build and for whom. Customer interviews aren't just nice-to-have research anymore; they're your first line of defense against building something nobody wants.
Here's what you'll learn from my approach to AI customer discovery:
Why traditional customer interview frameworks fail for AI products
The 3-step validation process I use before any AI project
How to identify real AI use cases vs. "AI-washed" solutions
The specific questions that reveal actual demand
When to walk away from seemingly profitable AI projects
Let me share what happened when I chose customer validation over immediate revenue—and why that decision saved both my client and me from a spectacular failure.
Reality Check
What the AI hype cycle teaches us about customer discovery
The AI industry has created a perfect storm of misguided customer research. Every startup accelerator, business blog, and consultant is preaching the same gospel: "Find pain points, build AI solutions, scale fast."
Here's what conventional wisdom tells you about AI customer interviews:
Focus on efficiency gains - Ask how AI can make existing processes faster
Identify automation opportunities - Look for repetitive tasks to replace
Emphasize cost savings - Calculate ROI based on time saved
Demo the technology first - Show AI capabilities to inspire use cases
Target early adopters - Find tech-savvy customers willing to experiment
This approach exists because it mirrors how other software categories have been validated. It worked for SaaS, mobile apps, and cloud platforms. The problem? AI is fundamentally different.
Unlike traditional software where features solve specific problems, AI often creates solutions looking for problems. Customers can't articulate needs for capabilities they don't understand. They say they want "AI-powered insights" without knowing what insights they actually need.
The result? A graveyard of technically impressive AI products that nobody uses after the initial demo. Companies spend months building sophisticated models for problems that don't actually exist or aren't painful enough to solve.
I learned this lesson the hard way, but not by building the wrong thing—by nearly saying yes to building it.
When that marketplace client first pitched their idea, everything looked promising on paper. They'd done their homework—market research showing demand for their category, competitive analysis proving the space wasn't saturated, even preliminary technical specs for the AI matching algorithm.
But something felt off during our discovery calls. They kept talking about features—the AI recommendation engine, the smart matching system, the predictive analytics dashboard. When I asked about their target users, they had personas. When I asked about their go-to-market strategy, they had a plan.
What they didn't have was a single conversation with someone who would actually use their product.
The first red flag: When I asked "Who specifically have you talked to about this problem?" their answer was "We've done extensive market research." Market research isn't customer interviews. It's data about markets, not conversations with humans.
The second red flag: Their timeline was backwards. They wanted to build first, then find customers during beta. This is the classic "if you build it, they will come" fallacy that kills most startups—AI or otherwise.
The third red flag: Every use case they described was hypothetical. "Users will be able to..." "The system will help them..." "Our AI will automatically..." No current behavior. No existing workflows. No manual processes they were already doing.
That's when I realized they weren't looking to solve a customer problem—they were looking to validate a technology solution. Completely backwards.
Instead of taking their money and building what they thought they wanted, I proposed something different: spend one week conducting real customer interviews before writing a single line of code. If we couldn't find ten people currently struggling with the problem they claimed to solve, we shouldn't build the solution.
They initially resisted. "But we already know there's demand." "We've seen the market data." "Can't we just start with an MVP?"
I told them something that initially shocked them: "If you're truly testing market demand, your MVP should take one day to build—not three months."
Here's my playbook
What I ended up doing and the results.
After nearly making the expensive mistake with that marketplace project, I developed a systematic approach to AI customer interviews that focuses on current behavior rather than future possibilities. Here's the exact process I now use:
Step 1: The Reality Validation Framework
Before any AI project, I run what I call "reality validation"—proving the problem exists in the wild, not just in market research reports. This starts with identifying people who are currently dealing with the problem manually.
The key insight: If people aren't already struggling with something enough to have created workarounds, AI won't magically make them care about solving it.
I look for three specific behavioral indicators (tallied in the sketch after this list):
Manual processes they hate but can't avoid - They're doing something tedious but necessary
Workaround solutions they've built - Spreadsheets, scripts, or processes to handle the problem
Budget already allocated - They're spending money (time, tools, or people) on the current solution
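To make this concrete, here's a minimal sketch of how I might tally these signals across interviews. The ten-person bar comes from the test I proposed to that client; the "two of three signals" threshold and the field names are illustrative assumptions, not a formal scoring model.

```python
from dataclasses import dataclass

@dataclass
class InterviewSignals:
    """Evidence of current behavior from one discovery interview."""
    interviewee: str
    manual_process: bool    # tedious-but-unavoidable task exists today
    has_workaround: bool    # spreadsheets, scripts, or processes they built
    budget_allocated: bool  # already spending time, tools, or people on it

    def score(self) -> int:
        # One point per behavioral indicator actually observed.
        return sum([self.manual_process, self.has_workaround, self.budget_allocated])

def problem_is_real(interviews: list[InterviewSignals],
                    min_people: int = 10, min_signals: int = 2) -> bool:
    """True if enough interviewees show enough of the three indicators."""
    return sum(1 for i in interviews if i.score() >= min_signals) >= min_people

people = [
    InterviewSignals("ops manager", True, True, False),
    InterviewSignals("founder friend", False, False, False),
]
print(problem_is_real(people))  # False: nowhere near ten people with real pain
```

The arithmetic isn't the point. The point is forcing yourself to record evidence of current behavior for every single interviewee instead of a general impression of "demand."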
Step 2: The "Show Me" Interview Structure
Traditional customer interviews ask hypothetical questions: "Would you use a tool that..." "How much would you pay for..." "What features matter most?"
AI customer interviews need to focus on current reality. My interview script follows this structure:
Opening (5 minutes): "I'm researching how people in [specific role] currently handle [specific task]. Can you walk me through the last time you dealt with this?"
Process Deep-dive (15 minutes): "Show me exactly what you did. What tools did you use? How long did it take? What was frustrating about it?"
Pain Quantification (10 minutes): "How often does this happen? What's the cost when it goes wrong? Have you looked for solutions?"
Validation Test (5 minutes): "If I could solve this specific part [describe one narrow piece], would that be worth paying for?"
Notice what's missing: any mention of AI, automation, or future capabilities. I'm focused entirely on understanding their current world.
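One practical habit (my own convention, not a formal tool): encode the script as plain data so every interview produces notes in the same shape and no section gets skipped under time pressure. A sketch:

```python
# The four-part "Show Me" script above, encoded as data so every
# interview yields comparable notes. Structure is a sketch, not a product.
INTERVIEW_GUIDE = [
    {"section": "Opening", "minutes": 5,
     "prompt": "Walk me through the last time you dealt with [specific task]."},
    {"section": "Process Deep-dive", "minutes": 15,
     "prompt": "Show me exactly what you did. What tools? How long? What was frustrating?"},
    {"section": "Pain Quantification", "minutes": 10,
     "prompt": "How often does this happen? What does it cost when it goes wrong?"},
    {"section": "Validation Test", "minutes": 5,
     "prompt": "If I solved [one narrow piece], would that be worth paying for?"},
]

def blank_notes(interviewee: str) -> dict:
    """One notes record per interview, keyed by script section."""
    return {"interviewee": interviewee,
            **{step["section"]: "" for step in INTERVIEW_GUIDE}}

print(blank_notes("ops manager"))
```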
Step 3: The Demand Proof Protocol
The final step separates real opportunities from interesting problems. I use a simple test: can I get them to commit to something before building anything?
This doesn't mean pre-sales (though that's ideal). It means proof they'll engage:
Email signup for updates about the solution
Beta testing commitment with specific time allocated
Referral to colleagues with the same problem
Current solution sharing - showing me their existing workarounds
If ten people won't give me their email address for updates about solving their "urgent problem," it's not actually urgent.
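This is also where the "one-day MVP" from earlier becomes literal. A demand test can be a single page with a single form. The sketch below uses Flask; the copy, route names, and CSV file are placeholder assumptions, not a prescribed stack.

```python
# A demand test that takes hours, not months: one page, one form, one CSV.
import csv
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)

PAGE = """
<h1>Stop doing [tedious task] by hand</h1>
<form method="post" action="/signup">
  <input name="email" type="email" placeholder="you@company.com" required>
  <button>Get early access</button>
</form>
"""

@app.get("/")
def landing():
    return PAGE

@app.post("/signup")
def signup():
    # Each signup is one unit of proof; ten of these beat any survey.
    with open("signups.csv", "a", newline="") as f:
        csv.writer(f).writerow([request.form["email"],
                                datetime.now(timezone.utc).isoformat()])
    return "Thanks! We'll be in touch."

if __name__ == "__main__":
    app.run(debug=True)
```

Shipping something like this takes an afternoon. The three-month build can wait for the data it produces.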
This process saved that potential client from building a two-sided marketplace with no demand on either side. Instead of a three-month development project, we spent one week discovering that their target users were already satisfied with existing solutions.
Four principles drive this playbook:
Problem Discovery - Focus on current pain, not future possibilities. Look for manual processes they already hate doing.
Behavior Mapping - Document their exact current workflow before suggesting any AI improvements.
Commitment Testing - Get small commitments before building. Email signups prove more than survey responses.
Reality Filtering - Walk away from projects where customers won't engage in discovery: they won't engage with your product either.
Using this customer discovery process has fundamentally changed how I approach AI projects. Instead of building solutions looking for problems, I now only work on AI applications where demand is proven before development starts.
The specific outcomes from implementing this approach:
Project Success Rate: Before using systematic customer interviews, about 30% of my AI projects achieved meaningful user adoption. Now it's closer to 80%—simply because I'm only building things people actually want.
Development Time: Customer interviews add 1-2 weeks upfront but save 2-3 months of pivoting and rebuilding. The overall time to market is actually faster.
Client Satisfaction: Clients are initially resistant to "slowing down" for interviews, but they're much happier when their product actually gets used.
That marketplace client? They thanked me six months later. Instead of building a platform nobody wanted, they used the customer insights to pivot into a much simpler solution that's now generating revenue.
The most surprising outcome was how many "AI projects" turned out not to need AI at all. When you understand the real problem, the solution is often much simpler than artificial intelligence.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
Here are the seven most important lessons from conducting hundreds of AI customer interviews:
Current behavior predicts future adoption - If they're not doing something manually, they won't do it with AI
Market research isn't customer validation - Data about markets doesn't replace conversations with humans
Technology demos create false demand - People get excited about AI capabilities but won't pay for solutions to problems they don't have
Small commitments reveal real interest - Email signups matter more than "I'd definitely use this" statements
Workarounds are goldmines - If they've built manual processes, there's proven demand for automation
Budget allocation trumps feature requests - What they're currently spending money on shows true priorities
Walking away is often the right choice - Rejecting projects without proven demand saves everyone time and money
The biggest lesson: Your first MVP shouldn't be your product—it should be your customer discovery process. If you can't validate demand in a week, you probably can't build sustainable demand in a year.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS startups building AI features:
Interview existing customers about current manual processes before building AI features
Test demand with email captures before development sprints
Focus on workflow integration over standalone AI tools
Measure engagement with current features before adding AI complexity (see the sketch after this list)
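As a sketch of that last point, here's one way to check current-feature engagement from an event log before committing to AI work. The schema (user_id, feature) is invented for illustration; swap in whatever your analytics actually records.

```python
# Before adding AI, check whether people even use the feature you'd enhance.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "feature": ["reports", "reports", "reports", "search", "search", "reports"],
})

total_users = events["user_id"].nunique()
adoption = events.groupby("feature")["user_id"].nunique() / total_users
print(adoption.sort_values(ascending=False))
# If adoption of the base feature is already low, "AI-powering" it won't fix that.
```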
For your Ecommerce store
For e-commerce companies considering AI implementation:
Interview customers about shopping behavior, not AI preferences
Look for manual tasks in your operations that need automation
Test personalization manually before building AI recommendation engines (a minimal rule-based sketch follows this list)
Focus on inventory and logistics problems over customer-facing AI features
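For the manual personalization test above, a hand-written rule table can stand in for a recommendation engine during validation. The catalog and pairings below are invented examples.

```python
# "Manual" personalization: a hand-written rule instead of a trained model.
# If a rule this simple doesn't move sales, an AI engine likely won't either.
CATALOG = {
    "running shoes": ["running socks", "insoles"],
    "yoga mat": ["yoga blocks", "water bottle"],
}

def recommend(last_purchase: str) -> list[str]:
    """Rule-based stand-in for a recommendation engine."""
    return CATALOG.get(last_purchase, ["best sellers"])

print(recommend("running shoes"))  # ['running socks', 'insoles']
print(recommend("garden hose"))    # ['best sellers'] fallback
```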