Growth & Strategy
Personas: SaaS & Startup
Time to ROI: Medium-term (3-6 months)
Last month, I watched a startup founder spend €15,000 on Google Ads in two weeks, then immediately pivot to LinkedIn outreach when the ads "didn't work." Sound familiar?
Here's the thing - most founders treat channel testing like throwing spaghetti at the wall. They'll try Facebook ads for a week, SEO for a month, then jump to cold email because some podcast guest said it was "the secret." What they're missing is a systematic approach to actually testing channels rather than just trying them.
After working with dozens of clients and running my own channel experiments, I've seen the same pattern: companies that succeed have a framework for testing. Companies that fail just wing it. The difference isn't luck - it's process.
In this playbook, you'll learn:
Why most "channel testing" is actually just expensive guessing
The exact framework I use to test 3-5 channels simultaneously without burning cash
Real examples from client projects where this approach found winning channels
Common testing mistakes that waste months of effort
When to kill a channel vs. when to double down
This isn't theory - it's a battle-tested checklist that's helped identify winning channels for everyone from B2B SaaS startups to e-commerce stores. Let's dig in.
Industry Reality
What most growth experts won't tell you
Open any growth marketing blog and you'll see the same advice: "Test multiple channels!" "Find your channel-market fit!" "Use the bullseye framework!" All solid advice in theory. The problem? Nobody tells you how to actually test channels without going broke.
The conventional wisdom says:
Start with 3-5 channels and "see what sticks"
Give each channel "enough time" to work (without ever defining what that means)
Focus on the channels with the best ROI
Double down on what works
Track everything and optimize
Sounds logical, right? Here's why this approach kills most startups: it's not actually testing - it's just expensive experimenting without structure. Most founders end up spending weeks on SEO content that gets no traffic, burning through ad budgets on poorly optimized campaigns, or sending cold emails that land in spam folders.
The real issue? They're not testing apples to apples. They're comparing a well-executed paid campaign to a poorly planned SEO strategy, or a professional email sequence to amateur LinkedIn outreach. Without consistent testing methodology, you're not learning which channels work - you're learning which executions work.
This creates a dangerous cycle: try something half-heartedly, see poor results, blame the channel, move to the next shiny object. Meanwhile, your competitor finds success with the exact same channel because they approached it systematically.
Consider me your business accomplice.
Seven years of freelance experience working with SaaS and e-commerce brands.
Three years ago, I was working with a B2B SaaS client who was convinced their product wasn't finding traction because they were in a "difficult market." They'd tried Facebook ads (failed), content marketing (no results), and even hired a LinkedIn outreach agency (minimal response). Every channel seemed broken.
The reality? They weren't testing channels - they were just trying things randomly. Their Facebook ads ran for two weeks with no landing page optimization. Their "content marketing" was three blog posts written by an intern. The LinkedIn outreach was generic templates sent to anyone with a director title.
When I audited their approach, I realized they had all the classic symptoms of bad channel testing: no consistent time frames, no standardized success metrics, no minimum viable tests, and no systematic approach to optimization. They were essentially conducting experiments without any scientific method.
This is when I developed what I now call the "Minimum Viable Channel Test" framework. Instead of trying to make every channel perfect, we focused on creating fair, comparable tests that could actually tell us which channels had potential.
The breakthrough came when we realized most "failed" channels weren't actually failing - they were just being executed poorly or measured incorrectly. We needed a way to test a channel's potential separately from our execution quality.
Here's my playbook
What I ended up doing and the results.
Here's the exact framework I developed after that project, refined through dozens of subsequent tests:
Phase 1: Channel Selection & Hypothesis Formation
First, I create a hypothesis for each channel based on three factors: audience alignment, message-channel fit, and competitive landscape. For each potential channel, I write down exactly why I think it might work and what success would look like.
For the SaaS client, our hypotheses were:
LinkedIn: High-value B2B decision makers are active here, our product solves workflow problems
SEO: Buyers search for "project management software" - 2,400 monthly searches
Cold Email: Direct outreach to procurement teams who budget for tools like ours
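To keep these hypotheses comparable across channels, it helps to write them down in the same structure every time. Here's a minimal sketch in Python (the field names are my own illustration, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class ChannelHypothesis:
    channel: str
    audience_alignment: str   # why the target persona is reachable on this channel
    message_channel_fit: str  # why the message suits this channel's format
    success_looks_like: str   # what a "win" means, defined before the test starts

# Example based on the LinkedIn hypothesis above
linkedin = ChannelHypothesis(
    channel="LinkedIn",
    audience_alignment="High-value B2B decision makers are active here",
    message_channel_fit="Our product solves workflow problems this audience owns",
    success_looks_like="Meetings booked with the target persona",
)
```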
Phase 2: Minimum Viable Test Design
For each channel, I design the simplest possible test that can produce meaningful data. This isn't about perfection - it's about creating comparable experiments.
LinkedIn Test: 100 connection requests + 3-message sequence to target persona, tracked over 4 weeks
SEO Test: 5 optimized pages targeting our core keywords, tracked for 8 weeks
Cold Email Test: 500 emails across 3 different subject lines, tracked over 3 weeks
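If it helps to see the three tests side by side, here is the same design expressed as data (the numbers come from the tests above; the dictionary layout is just one convenient way to keep them comparable):

```python
# Minimum viable test specs; all figures are from the experiments described above.
mvt_specs = {
    "LinkedIn":   {"volume": 100, "unit": "connection requests + 3-message sequence", "weeks": 4},
    "SEO":        {"volume": 5,   "unit": "optimized pages on core keywords",          "weeks": 8},
    "Cold Email": {"volume": 500, "unit": "emails across 3 subject lines",             "weeks": 3},
}

for channel, spec in mvt_specs.items():
    print(f"{channel}: {spec['volume']} {spec['unit']}, tracked over {spec['weeks']} weeks")
```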
Phase 3: Execution Standards
This is where most people fail. Instead of trying to execute everything perfectly, I set "minimum viable execution" standards for each test. The goal is consistency, not perfection.
For example, all email sequences had to follow the same 3-email structure: problem identification, solution preview, specific ask. All LinkedIn messages had to include a personalized first line and specific call-to-action. All SEO pages had to target one primary keyword and include at least 1,500 words.
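One way to keep yourself honest here is a handful of pre-flight checks before a test goes live. This is only a sketch (the thresholds are from the standards above; the checks themselves are crude proxies I made up):

```python
def seo_page_ok(page_text: str, primary_keyword: str) -> bool:
    """At least 1,500 words and the primary keyword actually present."""
    return len(page_text.split()) >= 1500 and primary_keyword.lower() in page_text.lower()

def email_sequence_ok(sequence: list[str]) -> bool:
    """Same 3-email structure: problem identification, solution preview, specific ask."""
    return len(sequence) == 3

def linkedin_message_ok(message: str, first_name: str) -> bool:
    """Personalized first line plus a specific ask (very rough proxy)."""
    first_line = message.splitlines()[0] if message else ""
    return first_name in first_line and "?" in message
```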
Phase 4: Measurement & Decision Framework
Here's what most frameworks miss: you need different success metrics for different stages of channel maturity. A new LinkedIn outreach campaign can't be measured the same way as a 6-month-old SEO strategy.
I created three measurement phases:
- Week 1-2: Engagement metrics (open rates, response rates, click-through rates)
- Week 3-6: Lead quality metrics (meeting bookings, trial signups)
- Week 7+: Revenue metrics (closed deals, LTV)
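I usually encode these phases so everyone knows which numbers matter in a given week. A minimal sketch (the week boundaries are from the list above; the metric names are illustrative):

```python
def metrics_for_week(week: int) -> list[str]:
    if week <= 2:
        return ["open_rate", "response_rate", "click_through_rate"]  # engagement
    if week <= 6:
        return ["meetings_booked", "trial_signups"]                  # lead quality
    return ["closed_deals", "ltv"]                                   # revenue
```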
The Results Framework
After each test period, I score each channel on four dimensions: volume potential, conversion quality, execution complexity, and scalability. This creates a comparable "channel score" that removes personal bias from decision-making.
For this client, LinkedIn scored highest on conversion quality but lowest on volume. SEO scored medium on everything but highest on scalability. Cold email scored highest on volume but medium on quality. This data-driven approach helped us decide where to double down.
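If you want the channel score to be a number rather than a gut feeling, a simple aggregation does the job. This is one possible weighting, not the original scorecard (equal weights and a 1-5 scale are my assumptions; execution complexity is inverted so a harder-to-run channel scores lower):

```python
def channel_score(volume: int, quality: int, complexity: int, scalability: int) -> float:
    """All inputs on a 1-5 scale; higher complexity means harder to execute."""
    return (volume + quality + (6 - complexity) + scalability) / 4

# Made-up inputs, just to show the calculation
print(channel_score(volume=3, quality=4, complexity=2, scalability=4))  # 3.75
```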
Testing Hypothesis
Each channel needs a clear hypothesis about why it might work for your specific business and audience
Minimum Viable Tests
Design the simplest test that can produce meaningful data - consistency beats perfection
Measurement Phases
Different success metrics for different stages: engagement → lead quality → revenue
Decision Framework
Score channels on volume potential + conversion quality + execution complexity + scalability
The framework worked. Over 12 weeks, we identified that LinkedIn had the highest conversion rate (8% meeting-to-trial), but SEO had the best long-term scalability. Cold email generated the most immediate volume but lowest-quality leads.
Most importantly, we discovered that their "failed" Facebook ads actually had decent engagement metrics - they just hadn't tracked beyond initial click-through rates. When we retested Facebook with proper landing page optimization and extended the tracking period, it became their second-best channel for lead volume.
The framework eliminated the guesswork. Instead of jumping between channels randomly, we could make data-driven decisions about where to invest time and budget. Six months later, they had a profitable, predictable growth engine running across three channels.
But the real breakthrough wasn't finding the "winning" channel - it was developing a systematic approach they could use to test any new channel opportunity that emerged.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
Here's what I learned from implementing this framework across dozens of client projects:
Most "failed" channels are actually failed executions. Channel testing requires systematic methodology, not just trying things.
Time frames matter more than people think. B2B channels need 6-8 weeks minimum. B2C can show results in 2-4 weeks.
Conversion quality beats volume early-stage. 10 high-intent leads beat 100 tire-kickers every time.
Channel combinations often outperform individual channels. SEO + LinkedIn outreach = compound effects.
Execution complexity is a hidden cost. A "working" channel that requires 20 hours/week isn't scalable.
Personal bias kills good testing. Founders love channels they understand and avoid ones they don't.
Minimum viable tests prevent perfectionism paralysis. Done is better than perfect when you're testing hypotheses.
The biggest mistake I see? Trying to perfect one channel before testing others. By the time you've "perfected" SEO, you could have identified three other viable channels. Test first, optimize later.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS startups implementing this framework:
Start with LinkedIn + SEO + one experimental channel
Focus on lead quality over volume in early tests
Track trial-to-paid conversion by channel source
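For that last point, a few lines of pandas are usually enough once every trial signup carries its channel source. A sketch assuming a CSV export with hypothetical column names:

```python
import pandas as pd

# Assumes columns "channel" and "converted_to_paid" (0/1); names are placeholders.
signups = pd.read_csv("trial_signups.csv")
conversion_by_channel = (
    signups.groupby("channel")["converted_to_paid"]
    .mean()
    .sort_values(ascending=False)
)
print(conversion_by_channel)
```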
For your E-commerce store
For e-commerce stores testing channels:
Begin with paid social + SEO + email marketing
Measure customer lifetime value by acquisition channel (see the sketch after this list)
Test seasonal channels separately from evergreen ones
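For the lifetime-value measurement, a sketch in the same spirit (file and column names are placeholders, not a prescribed schema):

```python
import pandas as pd

# Assumes columns "customer_id", "acquisition_channel", and "order_value".
orders = pd.read_csv("orders.csv")
ltv_per_customer = orders.groupby(["acquisition_channel", "customer_id"])["order_value"].sum()
avg_ltv_by_channel = (
    ltv_per_customer.groupby(level="acquisition_channel").mean().sort_values(ascending=False)
)
print(avg_ltv_by_channel)
```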