Growth & Strategy

Why I Turned Down a $XX,XXX Platform Project (And What I Told the Client About Testing Instead)


Personas: SaaS & Startup

Time to ROI: Short-term (< 3 months)

Last year, a potential client approached me with an exciting opportunity: build a two-sided marketplace platform. The budget was substantial, the technical challenge was interesting, and it would have been one of my biggest projects to date. I said no.

Here's the thing - they came to me because they'd heard that AI tools and no-code platforms like Bubble could build anything quickly and cheaply. They weren't wrong technically - you can build a complex platform with these tools. But their core statement revealed the problem: "We want to see if our idea is worth pursuing."

They had no existing audience, no validated customer base, no proof of demand. Just an idea and enthusiasm. And they wanted to spend months building before testing a single assumption.

What I learned from this experience - and from watching too many founders make the same mistake - is that SaaS user testing isn't about perfecting your Bubble prototype. It's about validating demand before you build anything complex.

In this playbook, you'll discover:

  • Why most MVP testing strategies fail before they start

  • The one-day validation method I recommend instead of three-month builds

  • How to structure user testing that actually predicts market success

  • When Bubble prototyping makes sense (and when it doesn't)

  • The testing framework that saves months of development time

Industry Reality

What every startup founder believes about MVP testing

The startup world has created this mythology around MVPs that's honestly doing more harm than good. Walk into any accelerator or read any product blog, and you'll hear the same advice repeated like gospel:

"Build fast, test early, iterate quickly." Sounds great, right? But here's what this actually translates to in practice:

  1. Build a functional prototype - Even with no-code tools like Bubble, this takes weeks or months

  2. Get it in front of users - Usually friends, family, or whoever you can convince to test

  3. Collect feedback and iterate - Add features, fix bugs, rebuild sections

  4. Repeat until success - Keep building until something sticks

This approach exists because it feels productive. You're building, you're shipping, you're "failing fast." The problem? You're optimizing for the wrong thing entirely.

Most founders confuse product validation with idea validation. They think testing means putting a working prototype in someone's hands and watching them use it. But by the time you have a working prototype, you've already made dozens of assumptions about what people want.

The conventional wisdom fails because it starts with the solution (your product idea) instead of the problem (what users actually need). You end up testing your implementation of an idea rather than testing whether the idea solves a real problem people will pay for.

Even worse, most "user testing" at the MVP stage is really just usability testing in disguise. You're asking "How do I make this easier to use?" instead of "Should this exist at all?"

Who am I

Consider me your business accomplice.

Seven years of freelance experience working with SaaS and e-commerce brands.

So back to my client with the marketplace idea. When they explained their vision, I could see they'd already designed the entire user journey in their heads. They knew exactly how buyers and sellers would interact, what features were essential, how the platform would make money. They just needed someone to build it.

This is the classic founder trap - falling so in love with your solution that you skip validating the underlying problem. Their reasoning was logical: "If we build it and people don't use it, we'll know the idea doesn't work." But that's a $30,000+ experiment with a binary outcome.

The real issue wasn't technical - it was market validation. They wanted to test if their marketplace idea had legs, but they were approaching it backwards. Instead of starting with "Do people have this problem?" they jumped straight to "Will people use our solution?"

Here's what I discovered when I dug deeper into their assumptions:

  • They'd never talked to their target buyers about the current solutions they were using

  • They had no idea what sellers were currently paying for similar services

  • They'd never tested whether their value proposition resonated with either side

  • Most importantly - they had no existing relationships with potential users

The platform they wanted to build was essentially a bet that they could create supply and demand simultaneously. That's one of the hardest problems in business, and they wanted to test it by building the entire solution first.

I've seen this pattern repeatedly with growth-focused founders. They get excited about the tools (Bubble, no-code, AI) and think the ease of building means they should start building immediately. But easier building just means you can fail faster and more expensively if you're building the wrong thing.

What struck me most was their timeline pressure. They felt like they needed to move fast because competitors might enter the space. But they were optimizing for speed of building rather than speed of learning. Those are completely different things.

My experiments

Here's my playbook

What I ended up doing and the results.

Instead of taking their money to build a platform, I proposed a completely different approach - what I call "demand validation before development." It can be done in days, not months.

Day 1: Create the Landing Page
Instead of a Bubble app, we started with a simple landing page explaining the value proposition. Not a "coming soon" page - a page that positioned the service as if it already existed. The goal was to test messaging, not functionality.
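If you want to see what "testing messaging, not functionality" looks like in practice, here's a minimal sketch in Python/Flask - an illustrative choice on my part, since a Carrd or Webflow page with two variants does the same job. The headlines, routes, and event log below are all hypothetical:

```python
# Minimal messaging test: serve one of two value-proposition headlines
# at random and log impressions and signups per variant. Flask, the
# headlines, and the file name are illustrative, not what this client
# actually used.
import csv
import random
from datetime import datetime, timezone

from flask import Flask, redirect, request

app = Flask(__name__)

# Two hypothetical value propositions to test against each other.
HEADLINES = {
    "a": "Find a vetted seller in 24 hours - no back-and-forth.",
    "b": "Compare quotes from sellers who actually want your business.",
}

PAGE = """
<h1>{headline}</h1>
<p>Join the waitlist and we'll match you with your first seller this week.</p>
<form method="post" action="/signup">
  <input type="hidden" name="variant" value="{variant}">
  <input type="email" name="email" required placeholder="you@company.com">
  <button type="submit">Get matched</button>
</form>
"""

def log(event: str, variant: str, email: str = "") -> None:
    # One row per event so conversion can be computed per headline:
    # signups / impressions for each variant.
    with open("events.csv", "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), event, variant, email]
        )

@app.route("/")
def landing():
    variant = random.choice(list(HEADLINES))
    log("impression", variant)
    return PAGE.format(headline=HEADLINES[variant], variant=variant)

@app.route("/signup", methods=["POST"])
def signup():
    log("signup", request.form["variant"], request.form["email"])
    return redirect("/thanks")

@app.route("/thanks")
def thanks():
    return "<p>You're on the list. We'll be in touch this week.</p>"

if __name__ == "__main__":
    app.run(debug=True)
```

The point is that the only "feature" being built is the claim itself. If neither headline converts, no amount of platform functionality will fix that.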

Week 1: Manual Outreach to Both Sides
This is where most founders get squeamish, but it's the most important part. We identified 50 potential buyers and 50 potential sellers. Instead of building a platform to connect them, we reached out manually to understand their current process.

The questions weren't about our solution - they were about their existing problems:

  • "How do you currently handle [specific process]?"

  • "What's most frustrating about your current approach?"

  • "If there was a better way, what would that look like?"

  • "What would you pay for a solution that did X?"

Weeks 2-4: Manual Matching Process
Here's the key insight: before building a platform to automate connections, we tested whether the connections themselves had value. We manually matched buyers with sellers via email and WhatsApp.

This "concierge MVP" approach revealed everything we needed to know about market demand without writing a single line of code. We learned:

  • Which value propositions resonated with each side

  • What the actual friction points were in the process

  • Whether people would pay for the service

  • What the right pricing model should be
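Even at this manual stage, tracking outcomes beats anecdotes. We used nothing fancier than a spreadsheet updated after every email or WhatsApp exchange, but the sketch below (with invented records and fields) shows the demand signal worth extracting from every introduction:

```python
# Tally demand signal from a manual (concierge) matching log.
# The records and fields here are hypothetical - in practice this
# lived in a spreadsheet, one row per introduction.
from dataclasses import dataclass

@dataclass
class Match:
    buyer: str
    seller: str
    buyer_replied: bool            # did the buyer engage at all?
    completed: bool                # did the exchange actually happen?
    paid: bool                     # did anyone pay for the facilitation?
    price_accepted: float | None   # what the seller accepted, if completed

MATCHES = [
    Match("buyer01", "seller07", True,  True,  True,  450.0),
    Match("buyer02", "seller03", True,  False, False, None),
    Match("buyer03", "seller07", False, False, False, None),
    # ... one row per manual introduction
]

def funnel(matches: list[Match]) -> None:
    total = len(matches)
    replied = sum(m.buyer_replied for m in matches)
    completed = sum(m.completed for m in matches)
    paid = sum(m.paid for m in matches)
    print(f"introductions: {total}")
    print(f"buyer replies: {replied} ({replied / total:.0%})")
    print(f"completed matches: {completed} ({completed / total:.0%})")
    print(f"paid matches: {paid} ({paid / total:.0%})")
    prices = [m.price_accepted for m in matches if m.price_accepted is not None]
    if prices:
        print(f"avg accepted price: {sum(prices) / len(prices):.2f}")

funnel(MATCHES)
```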

The Bubble Testing Framework
Only after validating demand would I recommend building in Bubble. But when you do, here's the testing approach that actually works:

1. Feature Isolation Testing - Build one core feature at a time and test it independently. Don't build the entire platform and then test usability.

2. Behavioral Metrics Over Feedback - Track what users do, not what they say. Time spent, actions completed, and return visits matter more than survey responses.

3. Cohort-Based Testing - Test with small groups of real users who fit your target profile. Friends and family feedback is worse than useless.

4. Iterative Feature Addition - Add complexity only after the previous level is proven. Start with core value delivery before adding convenience features.
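To make point 2 concrete, here's a sketch of the kind of behavioral summary I mean, computed from a hypothetical event export. The schema and sample data are invented; in practice the events would come from Bubble's database or whatever analytics tool you've wired up:

```python
# Behavioral metrics from a raw event log: what users did, not what
# they said. The event schema and data are hypothetical.
from collections import defaultdict
from datetime import date

# (user_id, day, event) - invented sample data
EVENTS = [
    ("u1", date(2024, 3, 1), "created_listing"),
    ("u1", date(2024, 3, 1), "sent_message"),
    ("u1", date(2024, 3, 4), "sent_message"),   # a return visit
    ("u2", date(2024, 3, 2), "created_listing"),
    ("u3", date(2024, 3, 2), "page_view"),      # looked, never acted
]

CORE_ACTIONS = {"created_listing", "sent_message"}  # value-delivering events

def behavioral_summary(events):
    days_active = defaultdict(set)
    core_actions = defaultdict(int)
    for user, day, event in events:
        days_active[user].add(day)
        if event in CORE_ACTIONS:
            core_actions[user] += 1

    users = len(days_active)
    activated = sum(1 for u in days_active if core_actions[u] > 0)
    returned = sum(1 for d in days_active.values() if len(d) > 1)
    print(f"users seen: {users}")
    print(f"completed a core action: {activated} ({activated / users:.0%})")
    print(f"returned on a later day: {returned} ({returned / users:.0%})")

behavioral_summary(EVENTS)
```

If core-action completion and return visits stay flat while survey feedback stays glowing, believe the behavior.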

Problem Discovery

Test the problem before building the solution. Manual outreach reveals real pain points faster than any prototype.

Demand Validation

Manually facilitate the core value exchange. If you can't create value manually, automation won't help.

Feature Isolation

Build and test one core feature at a time. Complete platforms are impossible to debug when they don't work.

Behavioral Focus

Track actions, not opinions. What users do tells you more about market fit than what they say.

The outcome validated my approach completely. After four weeks of manual testing, we discovered something crucial: the buyers loved the idea, but the sellers weren't interested at the proposed price point.

This wasn't a usability problem or a feature gap - it was a fundamental market mismatch. The buyers expected to pay 30% less than sellers were willing to accept. No amount of platform optimization would have solved that.

More importantly, we learned this in one month for less than $2,000 in time and outreach costs. Building the full platform would have taken three months and $30,000+, only to discover the same economic reality.

The client initially felt disappointed, but then realized we'd saved them from a much more expensive mistake. They pivoted to a different model based on what we learned from the seller conversations - one that actually matched market economics.

The broader impact was even more valuable. This experience taught them the difference between building something people will use versus building something people will pay for. Those are completely different validation challenges.

When they did eventually build (six months later, with a different model), they already had a waiting list of validated customers. The Bubble development took half the time because they knew exactly what to build and for whom.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

This experience reinforced several principles that now guide every project recommendation I make:

1. Validate Market Economics Before Building Anything
The most beautiful Bubble app won't save you from bad unit economics. Test pricing and willingness to pay before you test usability.

2. Manual Processes Reveal More Than Automated Ones
When you manually facilitate the value exchange, you see every friction point and assumption. Automation hides these insights until it's too late.

3. User Testing ≠ Market Testing
People will use lots of things they won't pay for. Focus on testing purchase intent and actual behavior, not just engagement.

4. Speed of Learning > Speed of Building
No-code tools make building faster, but they don't make learning faster. The bottleneck is usually market validation, not development.

5. Distribution Comes Before Product
If you can't reach your target users for manual testing, you won't be able to reach them for product adoption either.

6. Platform Businesses Are Distribution Businesses First
Two-sided marketplaces fail on distribution problems, not product problems. Test your ability to attract both sides before building connection tools.

7. Concierge MVPs Scale Better Than Product MVPs
Starting with high-touch, manual processes teaches you what to automate and what to keep human. Starting with automation teaches you very little.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

For SaaS founders considering Bubble MVP development:

  • Start with manual validation of your core value proposition

  • Test pricing and purchase intent before building features

  • Build one feature at a time and measure actual usage

  • Focus on behavioral metrics over user feedback

For your Ecommerce store

For e-commerce businesses testing new products or features:

  • Test demand with pre-orders or waitlists before building

  • Use landing pages to validate messaging and pricing

  • Manual fulfillment can validate processes before automation

  • Focus on purchase intent testing over usability testing

Get more playbooks like this one in my weekly newsletter