Growth & Strategy

How to Collect Feedback on Your AI MVP Without Building a "Perfect" Product First


Personas: SaaS & Startup

Time to ROI: Short-term (< 3 months)

When I started consulting for AI startups, I kept seeing the same pattern: founders spending months perfecting their AI models before showing them to anyone. They'd obsess over accuracy rates, fine-tune algorithms, and build elaborate UIs. Then they'd launch to… crickets.

The problem? They were solving the wrong problem beautifully. I've learned that collecting feedback on your AI MVP isn't about having a perfect product – it's about validating whether you're building something people actually want, even if it's clunky.

Last year, I turned down a massive AI platform project because the client wanted to "test if their idea works" by building a complete product first. Instead, I showed them how to validate demand in days, not months. The same principles apply whether you're building an AI chatbot, recommendation engine, or prediction tool.

Here's what you'll learn from my experience helping AI startups collect meaningful feedback:

  • Why your first MVP should be your marketing and sales process, not your product

  • How to test AI value propositions without writing a single line of code

  • The 3-tier feedback collection system I use with AI startups

  • Real examples of manual validation methods that work

  • How to structure feedback loops that actually improve your AI product

Industry Reality

What every AI founder thinks they need to do

The AI startup world is obsessed with technical perfection. Every accelerator, mentor, and blog post tells you the same thing:

  • Build a minimum viable product that demonstrates your AI capabilities

  • Focus on model accuracy before showing it to users

  • Create polished demos that showcase your technology

  • Collect feedback on features and user interface

  • Iterate based on user behavior within your app

This advice exists because it worked in the early days of software. Build something functional, put it in front of users, see what they do. The problem? AI products are fundamentally different from traditional software.

With regular software, users understand what they're getting. A project management tool manages projects. A CRM manages customer relationships. But AI? Most people don't understand what AI can or can't do for them.

When you put an "AI-powered" product in front of someone, they're not just evaluating your features – they're trying to understand if AI itself solves their problem. That's why traditional MVP feedback collection fails for AI products.

The conventional approach assumes people know what they want from AI. In reality, they need to experience the value before they can give meaningful feedback about the implementation.

Who am I

Consider me your business accomplice.

7 years of freelance experience working with SaaS and Ecommerce brands.

I learned this lesson when a client approached me with a brilliant AI idea: a tool that would analyze customer support tickets and automatically suggest responses. They'd spent three months building a prototype with decent accuracy, clean UI, and integration capabilities.

"We want to see if our idea is worth pursuing," they told me. They had no existing audience, no validated customer base, no proof of demand. Just an idea and a working prototype.

This is when I realized something critical: if you're truly testing market demand, your MVP should take one day to build, not three months.

Here's what happened when we tested their AI concept manually first. Instead of refining their algorithm, we created a simple landing page explaining the value proposition. Then we did something counterintuitive – we manually delivered the service they wanted to automate.

For two weeks, my client's team manually read support tickets and suggested responses. They sent these suggestions to potential customers via email. No AI involved. Just humans doing what the AI would eventually do.

The results were eye-opening. Customers didn't want suggested responses – they wanted the AI to write the responses completely. They didn't care about "suggestions" because they still had to review and edit everything. The real pain point wasn't finding the right response; it was the time spent writing responses at all.

This manual testing revealed something their prototype couldn't: the core value proposition was wrong. Customers wanted automation, not assistance. Without that two-week test, they would have spent three more months building the wrong thing before finding out.

My experiments

Here's my playbook

What I ended up doing and the results.

Based on this experience and similar situations with other AI startups, I developed a three-tier approach to collecting feedback on AI MVPs. The key insight? Your MVP should be your marketing and sales process, not your product.

Tier 1: Validate the Problem (Days 1-7)

Start with a simple landing page or Notion doc explaining your AI's value proposition. Don't mention the technology – focus on the outcome. "We help customer support teams respond 3x faster" instead of "Our AI analyzes tickets and suggests responses."

Create a basic signup form asking for email and one qualifying question about their current process. Then do manual outreach to potential users. Your goal isn't to sell anything – it's to confirm the problem exists and understand how people currently solve it.

I had one client test their AI recruitment tool this way. Instead of building algorithms, they manually matched candidates to job descriptions for a week. They learned that companies didn't want better matching – they wanted faster screening. Completely different product.

Tier 2: Manual Service Delivery (Weeks 2-4)

This is where the magic happens. Manually deliver what your AI would eventually automate. If you're building a content generation AI, write the content yourself. If it's a data analysis AI, analyze the data manually.

The feedback you get from manual delivery is invaluable because customers experience the actual value, not just a demo. They'll tell you what they really need, not what they think they should want from an AI tool.

One fintech AI startup I worked with manually processed loan applications for two weeks instead of building their algorithm first. They discovered that speed mattered more than accuracy – customers wanted decisions in hours, not days, even if the approval rate was slightly lower.

Tier 3: Wizard of Oz MVP (Month 2)

Only after proving demand through manual delivery should you build anything that looks like a product. But even then, don't build the AI yet. Create a simple interface where users submit requests, and you manually process them behind the scenes.

This "Wizard of Oz" approach lets you test user workflows, understand edge cases, and refine your value proposition before investing in complex algorithms. Users interact with what feels like an AI product, but humans are doing the work.

Feedback Collection Framework

At each tier, ask specific questions:

  • Problem validation: "How do you currently solve this? What's the biggest pain point?"

  • Solution validation: "Would this outcome be worth paying for? What would have to be true for you to switch?"

  • Product validation: "What would make this 10x better? What's missing?"

Problem Discovery

Focus on understanding the pain, not selling the solution. Ask about current workflows first.

Manual Validation

Deliver the service manually before building anything. This reveals what customers actually value.

Wizard Interface

Create a simple UI that feels like AI but runs on human intelligence behind the scenes.

Feedback Loops

Structure questions around problem-solution-product fit, not just feature requests.

Using this three-tier approach, I've seen AI startups save months of development time and thousands of dollars in unnecessary features. One client validated their AI writing assistant concept in two weeks instead of building for six months.

The manual delivery phase revealed that customers didn't want an AI writing assistant – they wanted an AI editor that improved their existing content. Completely different product, same target market.

Another client testing an AI sales forecasting tool learned that accuracy wasn't the key metric – explainability was. Sales managers needed to understand why the AI made certain predictions so they could make better decisions.

The most successful validation I've seen was a healthcare AI startup that manually processed medical data for doctors. They discovered that the real value wasn't in faster analysis – it was in catching errors that human reviewers missed. This insight led to a completely different product positioning that made them millions.

Timeline typically looks like this: 1 week for problem validation, 2-3 weeks for manual delivery, 1-2 weeks for wizard MVP. Total time to market validation: 4-6 weeks instead of 3-6 months of building.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

Here are the key lessons I've learned from implementing this approach with dozens of AI startups:

  1. Distribution beats product quality – Even the best AI is useless if you can't reach customers. Validate distribution first.

  2. Manual delivery reveals edge cases – You'll discover scenarios your AI needs to handle that you never considered.

  3. Customers can't articulate AI needs – They need to experience the value before they can give useful feedback.

  4. Speed trumps accuracy in early validation – Getting results fast matters more than getting perfect results.

  5. The real product is often different – Manual testing usually reveals that the core value proposition needs adjustment.

  6. Wizard MVPs catch workflow issues – User interface problems surface before you've built complex systems.

  7. Feedback quality improves with each tier – Early feedback is about problems, later feedback is about solutions.

The biggest mistake I see is skipping the manual delivery phase because it "doesn't scale." That's the point – you're not trying to scale yet, you're trying to learn. Scale comes after you've proven the concept works.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

For SaaS AI startups specifically:

  • Start with a simple landing page explaining the outcome, not the technology

  • Use email and spreadsheets to deliver manual service before building APIs

  • Focus on proving people will pay for the outcome, not the AI implementation

For your Ecommerce store

For Ecommerce applications:

  • Test AI recommendations manually by analyzing customer data yourself (see the sketch after this list)

  • Validate personalization value through manual email campaigns first

  • Use basic tools to simulate AI features before building complex algorithms
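
To make that first point concrete, here is a rough sketch of simulating "AI recommendations" by hand from exported order data. The file name and columns (orders.csv with order_id and product) are hypothetical, and a spreadsheet pivot table gets you the same answer; the code just shows how little technology the validation step actually needs.

```python
# A rough sketch of hand-rolled "frequently bought together" picks from an
# order export. Assumed (hypothetical) input: orders.csv with columns
# order_id and product, one row per purchased item.
from collections import Counter
from itertools import combinations

import pandas as pd

orders = pd.read_csv("orders.csv")

# Count how often each pair of products shows up in the same order.
pair_counts = Counter()
for _, items in orders.groupby("order_id")["product"]:
    for pair in combinations(sorted(set(items)), 2):
        pair_counts[pair] += 1


def recommend(product, top_n=3):
    """Return the products most often bought alongside the given one."""
    related = Counter()
    for (a, b), count in pair_counts.items():
        if a == product:
            related[b] += count
        elif b == product:
            related[a] += count
    return [name for name, _ in related.most_common(top_n)]


print(recommend("blue running shoes"))
```

Send those picks to a small customer segment by email and watch whether anyone clicks or buys. If manual recommendations move nothing, a recommendation model won't either.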

Get more playbooks like this one in my weekly newsletter