Growth & Strategy
Personas: SaaS & Startup
Time to ROI: Short-term (< 3 months)
Last year, a potential client approached me with what seemed like a straightforward request: build a two-sided marketplace platform. The budget was substantial, the technical challenge was interesting, and it would have been one of my biggest projects to date.
I said no.
Why? Because their core statement revealed a fundamental misunderstanding about what metrics actually define MVP success. They wanted to "test if their idea works" by building a complex platform first, then measuring success later. This backwards approach is exactly why most MVPs fail before they even launch.
Here's what you'll learn from my experience rejecting this project and the framework I shared with them instead:
Why building-first metrics lead to expensive failures
The 3-layer validation framework that saves months of development
How to measure demand before writing a single line of code
Real metrics that predict long-term product success
When technical metrics actually matter (spoiler: it's later than you think)
Most founders are tracking vanity metrics while ignoring the signals that actually predict success. Let me show you what I've learned from turning down lucrative projects and the growth-focused approach that actually works.
Industry Reality
What every startup founder has been told
If you've been in the startup world for more than five minutes, you've heard the standard MVP metrics gospel. Every accelerator, every startup guru, every product management course preaches the same formula:
Build → Measure → Learn
The industry typically recommends tracking these "essential" MVP metrics:
User acquisition numbers - How many people signed up
Engagement rates - Daily/monthly active users
Feature adoption - Which features get used most
Technical performance - Load times, uptime, bug counts
Conversion funnels - Sign-up to activation ratios
This conventional wisdom exists because it's borrowed directly from established product companies. It assumes you already have validated product-market fit and are optimizing an existing system. The problem? Most MVPs die before they ever reach the optimization stage.
Where this falls short in practice is brutal: founders spend 90% of their time building the product and 10% understanding if anyone actually wants it. They track sophisticated product metrics while ignoring the fundamental question of whether they're solving a real problem for real people.
The result? Beautiful dashboards full of metrics measuring the wrong thing entirely. You end up with perfect data about a product nobody wants.
Consider me your business accomplice: 7 years of freelance experience working with SaaS and ecommerce brands.
When that marketplace client came to me, they had everything figured out except the most important part. They knew their target audience, they had wireframes, they'd even calculated their projected revenue. What they didn't have was a single validated customer.
Their plan was classic: build the platform, launch it, then measure success based on user signups and engagement. They wanted to invest months in development to "test if the idea works." This is the startup equivalent of building a restaurant and then asking if people like the food.
I've seen this pattern too many times. The client was treating their MVP like a product launch instead of a hypothesis test. They were confusing "minimum viable product" with "minimum marketable product" - a mistake that costs founders months of time and thousands of dollars.
But here's what really bothered me about their approach: they had no existing audience, no validated customer base, and no proof of demand. Yet they wanted to measure success through post-launch metrics. It's like trying to measure the success of a bridge by counting cars after you've already built it, without first confirming anyone needs to cross the river.
The red flag wasn't their enthusiasm or their budget - it was their fundamental misunderstanding of what an MVP should measure. They were optimizing for the wrong stage of the product lifecycle entirely.
Here's my playbook
What I ended up doing and the results.
Instead of taking their money and building what would likely become an expensive experiment, I shared my 3-layer validation framework. This approach measures demand and validates assumptions before building anything complex.
Layer 1: Problem Validation (Week 1)
I told them to create a simple landing page explaining their value proposition. Not a prototype, not wireframes - just a clear description of the problem they solve and how they solve it. The metric that matters here isn't conversion rate or signup numbers. It's this: Can you get 10 people to have a 30-minute conversation about this problem?
If you can't find 10 people willing to talk about the problem for 30 minutes, you don't have a problem worth solving. This is the most important metric nobody talks about: conversation willingness rate.
Layer 2: Solution Validation (Weeks 2-4)
Once you've confirmed the problem exists, the next step is manual solution validation. For their marketplace, I suggested they manually match supply and demand via email or WhatsApp. The key metric here: How many successful manual transactions can you facilitate?
If you can't manually create value for at least 20 transactions, automation won't save you. This metric reveals whether your solution actually works, not just whether your website looks professional.
Layer 3: Demand Intensity (Month 2)
Only once manual validation has succeeded should you measure demand intensity. The metric that predicts long-term success: What percentage of successful manual users ask when the automated version will be ready?
I call this the "When Can I Have This?" metric. If people aren't asking when they can use your solution regularly, you have a nice-to-have, not a must-have.
The Technical Metrics Come Last
Only after validating all three layers should you build technical features and measure traditional product metrics. By then, you're measuring optimization of a validated system, not hoping your beautiful product finds a market.
Conversation rate - How many people will discuss your problem for 30+ minutes without any incentive
Manual success - Successful transactions you can facilitate manually before building automation
Demand intensity - Percentage of manual users who ask when the automated version will be available
Market timing - Whether people are actively looking for solutions or need to be educated about the problem
The client initially pushed back on this approach. They wanted to measure "real" metrics like user acquisition and feature adoption. But here's what happened when they tested my framework:
Week 1: They struggled to find 10 people willing to have problem-focused conversations. This was their first red flag that the problem might not be as urgent as they assumed.
Week 3: They manually facilitated 5 successful marketplace transactions. Not the hundreds they'd projected, but enough to prove the concept could work with the right audience.
Month 2: 80% of their successful manual users asked about automation timelines. This high "When Can I Have This?" percentage indicated genuine demand intensity.
Most importantly: They discovered their initial target market was wrong. The manual validation revealed a completely different customer segment that had much higher demand intensity. This insight would have been impossible to discover through post-launch metrics.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
After years of seeing founders measure the wrong things, here are the lessons that actually matter:
Conversation willingness predicts everything - If people won't talk about your problem, they won't pay for your solution
Manual validation reveals assumptions - Automation hides whether your solution actually works
Demand intensity beats user volume - 10 people who desperately need your solution beat 1000 who might use it
Market timing is a metric - Are people actively searching for solutions or do you need to educate them?
Technical metrics come last - Perfect code doesn't fix poor product-market fit
Distribution validation matters more than feature validation - How you'll reach customers is more important than what features you'll build
Revenue timeline is a leading indicator - How long from first contact to first payment reveals business model viability
The biggest mistake? Measuring post-launch metrics on pre-launch assumptions. Validate the assumptions first, then build the measurement system.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS startups specifically:
Measure problem-solution fit before product-market fit
Track manual user acquisition before automated funnels
Validate pricing through direct conversations, not A/B tests
Focus on customer success velocity over feature velocity
For your Ecommerce store
For ecommerce specifically:
Test demand through pre-orders or waitlists before inventory
Measure repeat purchase intent, not just first purchase
Validate distribution channels manually before automation
Track customer acquisition cost through organic methods first