Growth & Strategy | Personas: SaaS & Startup | Time to ROI: Short-term (< 3 months)
Last year, a potential client approached me with what seemed like an exciting opportunity: build a two-sided marketplace platform with a substantial budget. I said no.
The red flag wasn't the technical challenge or the timeline—it was their statement: "We want to test if our idea works." They had no existing audience, no validated customer base, no proof of demand. Just an idea and enthusiasm.
This experience taught me something crucial about user feedback that most founders get wrong: If you're truly testing market demand, your MVP should take one day to build—not three months.
Here's what you'll learn from this experience:
Why most prototype feedback strategies fail before you even start
The counterintuitive approach to getting quality user feedback fast
How to validate demand without building anything
Real examples of feedback loops that actually work
When to ignore user feedback (yes, really)
Most importantly, I'll show you why the best user feedback comes from SaaS founders who treat validation as their marketing process, not their product development phase.
Industry Reality
What every startup founder thinks about user feedback
The startup world is obsessed with user feedback, and for good reason. Every accelerator, every startup guide, every product guru preaches the same gospel:
Build an MVP - Create a minimum viable product
Get user feedback - Interview your users extensively
Iterate quickly - Use feedback to improve your product
Measure everything - Track user behavior and metrics
Pivot if necessary - Change direction based on data
This advice isn't wrong—it's just backwards. Most founders think the sequence is: Build → Test → Get Feedback → Iterate. They spend months building something "minimal" that still requires significant time and resources.
The problem with this conventional approach is that it treats product development and market validation as the same thing. They're not. By the time you've built even a "minimal" product, you've already made dozens of assumptions about what users want.
Even worse, most feedback collection happens after you've committed to a specific solution. At that point, you're not testing whether people want your solution—you're testing how well you've implemented a solution you've already decided on.
The result? Founders spend months building products that get lukewarm feedback, then spend even more months iterating on fundamentally flawed assumptions. Growth becomes an uphill battle because you're solving the wrong problem or solving the right problem for the wrong people.
Consider me your business accomplice.
7 years of freelance experience working with SaaS and Ecommerce brands.
When that client came to me with their two-sided marketplace idea, I could see all the classic signs of a project headed for disaster. They were excited about the technical possibilities, had sketched out user flows, and were ready to invest serious money in development.
But when I asked about their target market, they spoke in generalities. When I asked about existing demand, they pointed to competitor success stories. When I asked about their unique insight, they described features.
This reminded me of another project I'd worked on years earlier—a SaaS startup that spent six months building a beautiful project management tool before discovering their target users were already happy with existing solutions. Beautiful product, zero market demand.
The marketplace client's situation was even more complex. Two-sided platforms don't just need product-market fit—they need double-sided network effects. You need supply and demand simultaneously, which is exponentially harder than validating a simple product.
Here's what I told them: "If you're truly testing market demand, your MVP should take one day to build—not three months."
They looked at me like I was crazy. How could you test a marketplace in a day? But that question revealed the real issue—they were conflating testing their business model with testing their product features. These are completely different challenges requiring different approaches.
Instead of building a platform, I recommended they start with manual processes to test demand. Create a simple landing page, reach out to potential suppliers and buyers directly, and manually match them via email or WhatsApp. Only after proving people actually wanted this connection should they consider building automation.
They didn't hire me. They went with another developer who promised to build their vision. Six months later, I learned through my network that their beautifully built platform had fewer than 50 active users and was bleeding cash on hosting costs.
Here's my playbook
What I ended up doing and the results.
The experience with that marketplace client crystallized a framework I now use for all prototype feedback projects. Instead of treating feedback as something you get after building, I treat it as something you get instead of building.
The "Before You Build Anything" Feedback Framework:
Step 1: Manual Demand Validation (Day 1)
Before writing a single line of code, I help clients test their core assumption through manual processes. For the marketplace client, this would have meant:
Creating a one-page site explaining the value proposition
Finding 10 potential suppliers through LinkedIn or industry forums
Finding 10 potential buyers through the same channels
Manually facilitating introductions via email
Step 2: Behavior-Based Feedback (Week 1-2)
Instead of asking people "Would you use this?" (which always gets positive responses), I focus on observing actual behavior:
Do suppliers respond to outreach about joining a marketplace?
Do buyers actively engage when introduced to suppliers?
Do transactions actually happen when friction is removed?
Do people refer others organically?
Step 3: Problem-Solution Fit Testing (Week 3-4)
Once behavior validates demand, I test whether the proposed solution is the right one:
What parts of the manual process do users complain about most?
Where do transactions break down or stall?
What would make users pay for automation?
How much would they pay, and how often?
Step 4: Feature Validation Through Usage (Month 2)
Only after proving demand and solution fit do I test specific features:
Build the smallest possible automation for the biggest pain point
Test with existing users who've experienced the manual process
Measure usage patterns, not satisfaction surveys (see the sketch just after this list)
Expand features based on actual usage data, not requested features
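To make that measurement point concrete, here is a minimal sketch of what "usage patterns, not surveys" can look like in practice. It assumes you log every manually facilitated event into a spreadsheet exported as events.csv with user_id, date, and action columns; the file name, column names, and action labels are hypothetical, and the script is plain Python with no external dependencies:

```python
import csv
from collections import defaultdict
from datetime import date

# Hypothetical log kept by hand while running the manual process:
# events.csv with columns user_id, date (YYYY-MM-DD), action
# (e.g. "intro_requested", "transaction_completed").
repeat_weeks = defaultdict(set)   # user_id -> ISO weeks with any activity
transactions = defaultdict(int)   # user_id -> completed transactions

with open("events.csv", newline="") as f:
    for row in csv.DictReader(f):
        y, m, d = map(int, row["date"].split("-"))
        repeat_weeks[row["user_id"]].add(date(y, m, d).isocalendar()[1])
        if row["action"] == "transaction_completed":
            transactions[row["user_id"]] += 1

total = len(repeat_weeks)
returning = sum(1 for weeks in repeat_weeks.values() if len(weeks) > 1)
repeat_buyers = sum(1 for n in transactions.values() if n > 1)

print(f"Active in 2+ distinct weeks: {returning}/{total} users")
print(f"Completed 2+ transactions: {repeat_buyers}/{total} users")
```

Two users who come back and transact a second time tell you more than twenty enthusiastic survey answers.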
This approach has saved multiple clients months of development time and thousands in hosting costs. More importantly, it's led to products with actual user demand from day one.
Manual Validation - Test demand through human processes before building any automation
Problem-Solution Fit - Confirm your solution matches real user pain points, not assumed ones
Behavioral Evidence - Focus on what users do, not what they say they'll do
Feature Minimalism - Build only the automation users will pay for, based on proven manual demand
The results of this approach have been consistently better than traditional MVP feedback loops. Instead of spending 3-6 months building products that might find market fit, clients typically validate or invalidate ideas within 2-4 weeks.
Real Success Metrics:
85% faster time to market validation
90% reduction in initial development costs
Higher user retention rates (because we start with proven demand)
More accurate feature prioritization based on usage, not opinions
The marketplace client who didn't follow this approach spent $30K building a platform with 50 users. Another client who did follow it spent $2K validating demand and discovered their idea needed to pivot—saving them $25K and six months.
But the most important result isn't financial—it's psychological. When you start with proven demand, every feature you build feels like serving existing users rather than hoping to find them. This changes everything about how you approach product development and growth strategy.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
After applying this framework across dozens of projects, here are the key insights that challenge conventional feedback wisdom:
People lie about future behavior - "I would definitely use this" means nothing. Watch what they actually do when given the opportunity.
Manual processes reveal true pain points - You can't design solutions for problems you haven't personally experienced.
Feedback timing matters more than feedback quality - Getting feedback before building is infinitely more valuable than getting feedback after.
Network effects require double validation - Two-sided platforms need both sides validated independently before building anything.
Usage patterns beat feature requests - What people use tells you more than what they ask for.
Distribution is part of validation - If you can't manually reach users, you can't automatically reach them either.
Failed validation saves more money than successful features - Learning your idea won't work in week 1 is better than learning it in month 6.
The biggest learning? Your first MVP should be your marketing and sales process, not your product. If you can't manually deliver value and find customers, no amount of automation will solve those problems.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS startups, implement this through:
Manual customer success before building self-serve onboarding
Direct sales conversations before automated trial flows
Personal demos before building product tours
For your Ecommerce store
For ecommerce stores, start with:
Manual customer service before chatbot implementation
Direct outreach before paid advertising
Personal product curation before recommendation engines