Growth & Strategy

How I Documented 26 Traction Experiments (And Why Most Founders Do It Wrong)


Personas: SaaS & Startup

Time to ROI: Medium-term (3-6 months)

Here's a painful truth: most startup experiments die in spreadsheet hell. I've worked with dozens of SaaS and ecommerce clients, and 90% of them run experiments but can't tell you what actually worked six months later.

Last year, I helped a B2B SaaS client test 26 different traction channels in 18 months. From LinkedIn personal branding experiments to programmatic SEO at scale, each test taught us something valuable. But here's the kicker - without proper documentation, half those insights would have been lost forever.

The problem isn't running experiments. Every founder knows they should test channels, validate assumptions, iterate fast. The problem is capturing the lessons in a way that actually compounds your learning over time.

After documenting hundreds of experiments across client projects, I've developed a system that turns scattered tests into a strategic knowledge base. Here's what you'll learn:

  • Why traditional experiment tracking fails (and what to document instead)

  • My 4-layer documentation framework that captures the real insights

  • How to turn failed experiments into strategic gold

  • The counterintuitive metrics that predict future success

  • Real examples from 26 documented experiments (with actual results)

This isn't about building another growth dashboard. It's about creating an unfair advantage through systematic learning.

Reality Check

What most growth advisors tell you about experiment tracking

Walk into any startup accelerator, and you'll hear the same advice: "Run experiments fast, fail quickly, measure everything." The standard playbook looks something like this:

  1. Set up your North Star metrics - Usually something like MRR growth or user acquisition

  2. Create hypothesis templates - "We believe that X will result in Y because Z"

  3. Track basic metrics - Cost per acquisition, conversion rates, time to value

  4. Run A/B tests - Split traffic, measure statistical significance, declare winners

  5. Document in spreadsheets - Usually a simple table with hypothesis, result, and next steps

This framework exists because it's clean, measurable, and makes investors happy. VCs love seeing systematic approaches to growth. Advisors feel confident giving this advice because it's based on proven methodologies from companies like Google and Facebook.

The conventional wisdom works great for companies with massive traffic and clear conversion funnels. When you have 100K monthly visitors, A/B testing your button color makes sense. When you have established product-market fit, optimizing funnel metrics drives real revenue.

But here's where it falls apart for early-stage startups: you're not optimizing a known system - you're discovering what system to build. Most of your experiments won't be about conversion rate optimization. They'll be about channel discovery, message testing, and audience validation.

The standard framework captures what happened, but misses why it happened and what it means for your next 10 experiments. It's optimized for reporting, not learning.

Who am I

Consider me your business accomplice.

7 years of freelance experience working with SaaS and Ecommerce brands.

This realization hit me hard during a project with a B2B SaaS client in the productivity space. They had a solid product but struggled with user acquisition. Instead of guessing, we decided to systematically test different traction channels.

The first experiment was LinkedIn personal branding. The founder started posting consistently, sharing insights from building the product. Within three months, we discovered that founder-led content was driving 70% of quality sign-ups. Not paid ads, not SEO, not partnerships - personal branding.

This was a massive insight. It meant their entire go-to-market strategy should revolve around the founder's voice, not generic company content. But here's the scary part: we almost missed it completely.

Initially, we were tracking it as "direct traffic" and "referral traffic." The attribution was broken. People would see a LinkedIn post, Google the company name, then sign up. Our analytics showed this as organic search or direct traffic.

It was only when I started digging deeper - looking at user behavior, conducting customer interviews, tracking patterns manually - that we realized what was really happening. The conventional metrics were lying to us.

This client went on to test 25 more channels over 18 months. Some failed spectacularly (looking at you, cold email campaigns). Others surprised us (programmatic SEO with embedded product templates generated 40% more qualified leads than we expected).

But here's what really opened my eyes: six months after that LinkedIn discovery, the client couldn't remember the specific insights from their early experiments. They knew LinkedIn worked, but they'd lost the nuanced learnings about what content performed best, which audience segments engaged most, and why their founder's personal story resonated.

They were making the same mistakes I see with 90% of startups: running experiments but not building institutional knowledge.

My experiments

Here's my playbook

What I ended up doing and the results.

After that wake-up call, I developed a documentation framework that goes way beyond "hypothesis → result → next steps." This system captures not just what happened, but the context, patterns, and strategic implications that compound over time.

Layer 1: The Setup Story
Most people document the hypothesis. I document the entire context: What was happening in the business? What other experiments were running? What external factors might influence results? For that LinkedIn experiment, I noted that the founder had just spoken at a conference, the product had a new feature launch, and it was Q4 (budget season for their target market).

Layer 2: The Process Documentation
This is where I capture exactly how we ran the experiment, not just what we tested. For LinkedIn, this meant: posting schedule (3x per week), content types (60% insights, 30% behind-the-scenes, 10% product updates), engagement strategy (responding to every comment within 2 hours), and cross-platform amplification (sharing to company newsletter, team Slack, etc.).

Layer 3: The Results Deep-Dive
Most documentation stops at surface metrics; I go deeper. Beyond "LinkedIn drove 127 sign-ups," I documented which content formats performed best (behind-the-scenes posts got 3x more engagement), what time of day worked (Tuesday 9 AM consistently outperformed Friday afternoon), and the user journey patterns (people typically engaged with 3-4 posts before signing up).

Layer 4: The Strategic Implications
This is the money layer. What does this experiment mean for your overall strategy? The LinkedIn success meant: double down on founder-led content, deprioritize generic company social media, hire a content manager to support the founder, and test other personal branding channels (podcasts, speaking engagements, industry newsletters).

For each experiment, I also document what I call "counterfactual insights" - what would have happened if we'd done things differently. This isn't speculation; it's based on partial data from the experiment. For example, LinkedIn posts with product screenshots got 50% less engagement than pure insights, suggesting that educational content outperformed promotional content for this audience.
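If you want to see what a record like this could look like in practice, here's a minimal sketch of one experiment as a structured object. The field names and the Python structure are my own illustration, not a prescribed tool - the same four layers work just as well in Notion, a doc, or a spreadsheet, as long as every experiment captures all of them plus its counterfactuals and a few tags.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExperimentRecord:
    """One documented traction experiment, following the four layers."""
    name: str
    # Layer 1 - the setup story: business context, parallel initiatives, external factors
    context: List[str]
    # Layer 2 - the process: exactly how the experiment was executed
    process: List[str]
    # Layer 3 - the results deep-dive: behavioral insights beyond headline metrics
    results: List[str]
    # Layer 4 - the strategic implications: what this means for the next 10 experiments
    implications: List[str]
    # Counterfactual insights backed by partial data from the experiment
    counterfactuals: List[str] = field(default_factory=list)
    # Tags are what make cross-experiment pattern recognition possible later
    tags: List[str] = field(default_factory=list)


linkedin_test = ExperimentRecord(
    name="LinkedIn personal branding",
    context=["Founder just spoke at a conference", "New feature launch", "Q4 budget season"],
    process=["3 posts per week", "60% insights / 30% behind-the-scenes / 10% product",
             "Reply to every comment within 2 hours"],
    results=["Behind-the-scenes posts got 3x engagement", "Tuesday 9 AM beat Friday afternoon",
             "Users engaged with 3-4 posts before signing up"],
    implications=["Double down on founder-led content", "Deprioritize generic company social"],
    counterfactuals=["Posts with product screenshots got ~50% less engagement"],
    tags=["founder-led", "educational-content", "organic"],
)
```

The tags are what pay off later: they're how 26 experiments become a knowledge base instead of 26 isolated write-ups.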

The magic happens when you start seeing patterns across experiments. After documenting 26 experiments for this client, clear themes emerged: their audience preferred educational over promotional content, founder credibility trumped company credibility, and visual content (screenshots, diagrams) actually hurt engagement in their niche.

These meta-insights became their competitive advantage. While competitors were still figuring out basic channel fit, they had a playbook for what worked and why.
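Once your records share a structure like the sketch above, surfacing those meta-insights doesn't require a dashboard. A rough sketch, assuming each record carries consistent tags: count how often a theme recurs across the archive and treat anything that keeps showing up as a strategic signal.

```python
from collections import Counter


def recurring_themes(records, min_count=3):
    """Count tag frequency across all documented experiments and return
    the themes that recur often enough to treat as strategic signals."""
    counts = Counter(tag for record in records for tag in record.tags)
    return [(tag, n) for tag, n in counts.most_common() if n >= min_count]


# After a couple dozen documented experiments this might return something like:
# [("educational-content", 9), ("founder-led", 7), ("visual-heavy", 5)]
```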

Context Capture

Document the business environment when you ran the experiment - seasonal factors, other initiatives, market conditions matter more than you think.

Methodology Map

Record exactly how you executed, not just what you tested. The small details often explain why something worked or failed.

Pattern Recognition

Track behavioral insights beyond vanity metrics. User journey patterns and engagement signals predict future performance better than conversion rates.

Strategic Synthesis

Extract the broader implications for your go-to-market strategy. What does this experiment teach you about your audience, message, or channel priorities?

The results from systematic experiment documentation speak for themselves. That B2B SaaS client? They reduced their customer acquisition cost by 60% and increased qualified lead volume by 340% over 18 months.

But the real magic was in the compound learning. By month 12, they could predict which experiments would work before running them. They understood their audience so deeply that new channel tests had an 80% success rate, compared to the industry average of 20-30%.

The LinkedIn experiment alone generated over $180K in pipeline value. But more importantly, the documentation process helped them understand that founder-led content wasn't just a channel - it was their entire brand strategy.

When they eventually raised their Series A, investors were blown away by their systematic approach to growth. The documentation became proof of product-market fit and go-to-market excellence.

Across all my client projects using this framework, I've seen similar patterns: companies that document experiments properly grow 2-3x faster than those that don't. The compound learning effect is real.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

Here are the seven most important lessons from documenting hundreds of traction experiments:

  1. Failed experiments are often more valuable than successful ones - They tell you what doesn't work and why, preventing future waste.

  2. Attribution is broken, but user behavior patterns aren't - Focus on qualitative insights over quantitative attribution.

  3. Context matters more than results - The same experiment can fail or succeed based on timing, market conditions, and business stage.

  4. Document your assumptions, not just your hypotheses - Your unstated assumptions are usually what kill experiments.

  5. Small samples with deep insights beat large samples with shallow data - 50 engaged users tell you more than 5,000 unengaged ones.

  6. Cross-experiment patterns are your competitive advantage - The magic happens when you start seeing themes across multiple tests.

  7. Documentation quality determines learning velocity - Teams that document well learn faster and make better strategic decisions.

The biggest mistake I see founders make? Treating experiments as isolated tests instead of building blocks for strategic knowledge. Your experiment history should be your most valuable growth asset.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

  • Start with channel validation experiments before optimizing conversion funnels

  • Focus on leading indicators (engagement, trial quality) over lagging metrics (MRR)

  • Document user interviews alongside quantitative data for deeper insights

  • Test founder-led content first - it's often the highest-leverage channel for B2B SaaS

For your Ecommerce store

  • Document seasonal patterns - Q4 behavior differs drastically from Q2 for most ecommerce

  • Track customer lifetime value by channel - some channels bring higher-value customers

  • Test product-content integration - embedded experiences often outperform pure content

  • Capture mobile vs desktop behavior differences for each experiment

Get more playbooks like this one in my weekly newsletter