Growth & Strategy

How I Stopped Tracking the Wrong Metrics and Finally Found My Winning Traction Channel


Personas

SaaS & Startup

Time to ROI

Medium-term (3-6 months)

I was drowning in analytics. Google Analytics showed thousands of visitors, Facebook Ads reported incredible reach, and our content was getting shared across LinkedIn. By every "industry standard" metric, we were crushing it. But our revenue was flatlining.

It took me six months and multiple failed experiments with both B2B SaaS clients and e-commerce stores to realize I was measuring the wrong things entirely. Most businesses track what's easy to measure, not what actually drives growth. They optimize for vanity metrics while their best traction channels go unnoticed.

Here's what I discovered: the metrics that make you feel good are rarely the metrics that make you money. Real traction measurement isn't about tracking everything – it's about tracking the right things in the right way.

In this playbook, you'll learn:

  • Why traditional attribution models lie to you (and what to track instead)

  • The three-layer measurement framework I use with every client

  • How to identify your "dark funnel" and measure unmeasurable channels

  • Real examples from my client work where obvious winners turned out to be losers

  • The counterintuitive approach that revealed hidden revenue sources

This isn't another "set up Google Analytics" guide. This is about developing the right mindset and frameworks to find your actual winning channels, even when the data lies to you. Let's get into how I learned this the hard way.

Industry Reality

What every growth expert tells you to measure

If you've read any growth marketing content in the last five years, you've seen the same measurement advice everywhere. It's become the accepted wisdom, and honestly, it makes logical sense on paper.

The traditional approach focuses on five key areas:

  1. Attribution tracking - First-touch, last-touch, multi-touch attribution models to "properly" credit each channel

  2. Channel-specific metrics - Cost per click for ads, open rates for email, engagement for social media

  3. Conversion funnels - Track users from awareness through purchase with clean, linear progression

  4. Cohort analysis - Measure retention and lifetime value by acquisition channel

  5. ROI calculations - Simple revenue divided by cost for each channel

This approach exists because it feels scientific and measurable. Growth teams love dashboards with clear numbers. Executives want to see which channels are "working" and which should get more budget. Everyone wants the comfort of data-driven decisions.

The problem? Real customer behavior is messier than attribution models can handle. People don't follow linear funnels. They research on Google, see your ad on Facebook, ask a friend, visit your website three times, read reviews, and maybe sign up after seeing your content on LinkedIn.

Traditional measurement tries to force this complex journey into neat, trackable boxes. But when you over-rely on attribution, you end up optimizing for the wrong things and missing your best channels entirely.

Who am I

Consider me your business accomplice.

Seven years of freelance experience working with SaaS and e-commerce brands.

The wake-up call came during a project with an e-commerce client who was heavily reliant on Facebook Ads. Their Facebook Ads Manager showed a 2.5 ROAS – respectable, profitable, totally fine. But I had a gut feeling something wasn't adding up with their overall growth trajectory.

I decided to run a complete SEO overhaul in parallel with their existing Facebook campaigns. Within a month of implementing the SEO strategy, their Facebook ROAS suddenly jumped from 2.5 to 8-9. Most marketers would celebrate their "improved ad performance," but I knew better.

What was actually happening? SEO was driving significant organic traffic and conversions, but Facebook's attribution model was claiming credit for organic wins. People were searching for the brand (because of broader visibility), landing on the site organically, but Facebook took credit because they had seen an ad at some point.

This revelation led me to completely rethink how I measure traction channels. I started testing a different approach with multiple clients – both B2B SaaS and e-commerce stores. Instead of trying to track and control every interaction, I focused on what I call "coverage measurement."

The key insight: most businesses oversimplify the customer journey. They want to believe it's linear: see ad → buy product. But real customer behavior is messy. A typical journey actually looks like multiple touchpoints across channels, often over weeks or months.

I learned to stop believing in "build it and they will come" and start believing in "distribute everywhere they already are." The goal isn't to track and control every interaction – it's to expand visibility across all possible touchpoints and measure the right signals.

My experiments

Here's my playbook

What I ended up doing and the results.

After testing this approach across multiple client projects, I developed what I call the "Coverage + Signal + Revenue" framework. It acknowledges that attribution is broken while still giving you actionable data for channel optimization.

Layer 1: Coverage Measurement

Instead of obsessing over attribution, I measure how well each channel is expanding overall brand visibility. For SEO, this means tracking brand search volume increases. For content marketing, it's measuring how often your brand gets mentioned or linked to organically. For paid ads, it's measuring whether overall organic traffic increases during campaign periods.

The insight: when a channel is truly working, it creates a halo effect that boosts other channels. If your Facebook ads are genuinely effective, you should see more people searching for your brand name, more direct traffic, and more organic social mentions.
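The halo check above boils down to simple before/after arithmetic. Here is a minimal sketch of how I'd compute it, assuming you can export weekly brand-search and direct-traffic counts from your analytics tool (all numbers below are hypothetical):

```python
# Minimal sketch: estimate the "halo" lift a campaign creates on
# brand-driven channels. All figures are invented for illustration.

def pct_lift(baseline, campaign):
    """Percent change of the campaign-period average over the baseline average."""
    base_avg = sum(baseline) / len(baseline)
    camp_avg = sum(campaign) / len(campaign)
    return round((camp_avg - base_avg) / base_avg * 100, 1)

# Weekly brand search volume: four weeks before vs four weeks during the campaign
brand_search_lift = pct_lift([1200, 1150, 1300, 1250], [1600, 1700, 1650, 1750])

# Weekly direct visits over the same windows
direct_traffic_lift = pct_lift([800, 820, 790, 810], [900, 950, 930, 940])

print(f"Brand search lift: {brand_search_lift}%")    # ~36.7%
print(f"Direct traffic lift: {direct_traffic_lift}%")  # ~15.5%
```

If both numbers move together while a campaign runs, that's the halo effect; if they stay flat, the ads may only be harvesting demand that already existed.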

Layer 2: Signal Detection

I track leading indicators that predict revenue rather than trying to attribute revenue perfectly. For B2B SaaS, this might be tracking which channels drive the highest-intent demo requests or bring in users who actually complete onboarding. For e-commerce, it's identifying which channels bring customers who browse multiple products or add items to wishlists.

For example, with one e-commerce client, I noticed that while Instagram ads had a lower immediate conversion rate than Google Ads, Instagram traffic had a 40% higher average order value and 60% better retention. Google Ads were bringing bargain hunters; Instagram was bringing brand enthusiasts.
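The comparison above is just a per-channel quality rollup. A rough sketch, with an invented order log standing in for your real export:

```python
# Hypothetical sketch: compare channel *quality* (average order value,
# repeat-purchase rate) instead of raw conversion counts.
from collections import defaultdict

orders = [
    # (customer_id, channel, order_value) -- invented data
    (1, "google_ads", 35), (1, "google_ads", 30),
    (2, "google_ads", 25),
    (3, "instagram", 80), (3, "instagram", 95),
    (4, "instagram", 70), (4, "instagram", 60),
]

by_channel = defaultdict(list)
for cust, channel, value in orders:
    by_channel[channel].append((cust, value))

for channel, rows in by_channel.items():
    values = [v for _, v in rows]
    aov = sum(values) / len(values)
    customers = {c for c, _ in rows}
    # A repeater is a customer with more than one order on this channel
    repeaters = sum(
        1 for c in customers
        if sum(1 for cc, _ in rows if cc == c) > 1
    )
    repeat_rate = repeaters / len(customers)
    print(f"{channel}: AOV ${aov:.0f}, repeat rate {repeat_rate:.0%}")
```

Even with this toy data, the cheaper-looking channel can lose once order value and repeat behavior enter the picture.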

Layer 3: Revenue Impact Measurement

Here's where I measure differently than most. Instead of trying to attribute every dollar, I run controlled experiments. I'll turn off a channel completely for two weeks and measure the impact on overall revenue. Or I'll dramatically increase spend in one channel and see if total revenue increases proportionally.

With the same e-commerce client, when we paused Facebook ads for two weeks, overall revenue dropped by only 15% – much less than expected based on Facebook's reported attribution. When we paused Google Ads for two weeks, revenue dropped 45%. This told us Google was more crucial than Facebook, despite similar reported ROAS.
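The pause test reduces to one ratio: how much revenue actually disappears when a channel goes dark, versus how much the platform claims. A sketch with hypothetical figures mirroring the numbers above:

```python
# Sketch of the pause-test arithmetic: compare what a channel *claims*
# (platform-attributed revenue share) with what actually vanishes when
# you switch it off. All figures are hypothetical.

def incrementality(baseline_revenue, paused_revenue):
    """Share of revenue that disappeared while the channel was paused."""
    return (baseline_revenue - paused_revenue) / baseline_revenue

weekly_baseline = 100_000  # normal weekly revenue

facebook_claimed_share = 0.40  # attribution report says 40% of revenue
facebook_observed = incrementality(weekly_baseline, 85_000)  # pause test: -15%

google_claimed_share = 0.35
google_observed = incrementality(weekly_baseline, 55_000)    # pause test: -45%

print(f"Facebook: claims {facebook_claimed_share:.0%}, drives ~{facebook_observed:.0%}")
print(f"Google:   claims {google_claimed_share:.0%}, drives ~{google_observed:.0%}")
```

When observed incrementality lands far below the claimed share, the platform is taking credit for conversions that would have happened anyway.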

The Integration Approach

I also started measuring cross-channel integration effects. How does email performance change when you're running paid social campaigns? Do organic conversions increase during periods of high content production? This holistic view revealed that our best "channel" was actually the combination of SEO content + email nurturing + targeted ads, not any single channel in isolation.

Coverage Metrics

Track brand visibility expansion, not just direct conversions from each channel

Signal Quality

Measure leading indicators and user behavior quality, not just volume

Revenue Testing

Run controlled experiments by pausing channels to measure true impact

Integration Effects

Monitor how channels boost each other rather than competing for attribution

The results from this approach were eye-opening across multiple client projects. The channels we thought were winning often weren't, and the channels we thought were failing were actually our best performers.

With the e-commerce client, we discovered that their email marketing – which showed mediocre open rates and click-through rates – was actually their highest-ROI channel. When we paused it for testing, customer lifetime value dropped significantly because email was the key to driving repeat purchases.

For a B2B SaaS client, we found that LinkedIn content (which showed low direct conversions) was crucial for closing enterprise deals. Sales teams reported that prospects frequently mentioned seeing the founder's LinkedIn posts during discovery calls. The content wasn't driving immediate signups, but it was building trust that led to larger deals.

Most importantly, this framework helped clients make better budget allocation decisions. Instead of shifting money to whatever showed the best last-click attribution, they could invest in channel combinations that actually drove business growth.

The measurement approach also revealed the power of consistency over optimization. Channels that performed steadily for months often outperformed channels that had occasional spike months but were inconsistent overall.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

Here are the seven most important lessons from implementing this measurement approach across multiple clients:

  1. Attribution lies, but patterns don't - Don't trust single-touch attribution, but look for consistent patterns across multiple data sources

  2. The best channels are often invisible - Your highest-impact channel might not show direct conversions but enables everything else to work better

  3. Measure customer quality, not just quantity - A channel that brings fewer but better customers usually wins long-term

  4. Test through subtraction, not addition - You learn more by turning things off than by adding new tracking

  5. Integration beats optimization - Channels working together usually outperform channels optimized in isolation

  6. Time delays matter - B2B channels especially show impact weeks or months after initial engagement

  7. Dark funnel is real - Accept that 30-50% of conversions will have unclear attribution, and that's normal

The biggest mindset shift: stop trying to track everything perfectly, and start tracking the right things imperfectly. Perfect measurement is a distraction from good decision-making.

This approach works best when you have multiple channels running simultaneously and enough volume to run meaningful tests. It's less useful for very early-stage startups that can only afford to test one channel at a time.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

  • Track trial-to-paid conversion rates by channel, not just signup volume

  • Measure feature adoption depth for users from different channels

  • Monitor enterprise deal velocity and size by content touchpoints

  • Test channel impact on customer lifetime value and expansion revenue
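The first bullet is the easiest to start with. A minimal sketch of trial-to-paid conversion by channel, using an invented signup log in place of your real data:

```python
# Hypothetical sketch: trial-to-paid conversion rate per channel,
# rather than raw signup volume. Signup records are invented.

signups = [
    # (channel, converted_to_paid)
    ("seo", True), ("seo", True), ("seo", False), ("seo", True),
    ("paid_social", True), ("paid_social", False),
    ("paid_social", False), ("paid_social", False),
]

channels = {}
for channel, converted in signups:
    total, paid = channels.get(channel, (0, 0))
    channels[channel] = (total + 1, paid + converted)

for channel, (total, paid) in channels.items():
    print(f"{channel}: {total} trials, {paid} paid, {paid / total:.0%} conversion")
```

A channel that floods you with trials but converts a quarter of them may deserve less budget than one that sends fewer, better-qualified users.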

For your e-commerce store

  • Compare average order value and repeat purchase rates by traffic source

  • Track cart abandonment recovery success by original acquisition channel

  • Measure seasonal channel performance for inventory and budget planning

  • Monitor brand search volume increases during paid campaign periods

Get more playbooks like this one in my weekly newsletter