AI & Automation

How I Discovered My Client's Content Was Ranking in AI Responses (And Built a Monitoring System)


Personas: SaaS & Startup

Time to ROI: Medium-term (3-6 months)

Last year, while working on a complete SEO overhaul for an e-commerce client, something unexpected happened. We started tracking a couple dozen mentions per month in LLM responses - despite being in a traditional niche where AI usage isn't common.

This discovery led me down the rabbit hole of what I now call GEO (Generative Engine Optimization). Most marketers are still debating whether SEO is dead because of AI, while completely missing the opportunity to optimize for these new channels.

The reality? AI assistants like ChatGPT, Claude, and Perplexity are already driving traffic and mentions for businesses - but most companies have no idea it's happening. They're flying blind while their competitors potentially dominate these new search experiences.

After months of experimentation and building monitoring systems, here's what you'll learn:

  • Why traditional SEO metrics miss AI-driven visibility

  • The manual tracking system I built that actually works

  • How to identify when your content appears in AI responses

  • What metrics actually matter for AI-powered content strategy

  • The surprising patterns I found in LLM mention frequency

Industry Reality

What most SEO professionals are getting wrong about AI

Walk into any marketing conference today and you'll hear the same tired debate: "Is SEO dead because of AI?" It's the wrong question entirely.

Most SEO professionals are approaching AI like it's just another search engine update. They're trying to apply traditional ranking factors to ChatGPT responses, or worse, completely ignoring AI assistants because they "don't drive direct traffic."

Here's what the industry typically recommends:

  1. Focus on traditional SERP rankings - Continue optimizing for Google position #1

  2. Wait for official AI ranking factors - Hope that Google or OpenAI releases guidelines

  3. Ignore AI mentions - Dismiss them as "not real traffic"

  4. Rely on existing tools - Assume Ahrefs or SEMrush will adapt

  5. Content quantity over AI-readiness - Keep publishing without considering how LLMs process information

The problem? This conventional wisdom treats AI assistants like a future problem instead of a current opportunity. While everyone debates, real businesses are already getting mentioned, cited, and recommended by AI systems.

Traditional SEO tools can't track AI mentions because they're designed for web crawling, not conversational responses. You can't rely on click-through rates when someone gets their answer directly from ChatGPT without visiting your site.

The gap between what marketers think they should track and what actually influences AI responses is massive. Most are optimizing for yesterday's algorithms while tomorrow's traffic sources go completely unmeasured.

Who am I

Consider me your business accomplice.

Seven years of freelance experience working with SaaS and e-commerce brands.

The discovery happened by accident. While working with a Shopify e-commerce client on their SEO strategy, I was doing my usual content performance analysis. This client sold traditional products in a niche where you wouldn't expect much AI usage.

During a routine competitive analysis, I decided to test something. I started asking ChatGPT and Claude specific questions related to my client's industry. To my surprise, their content was being mentioned and quoted in AI responses - not frequently, but consistently enough to notice.

This was fascinating because we hadn't optimized anything specifically for AI. These mentions were happening organically, which meant there was an opportunity we weren't tracking or maximizing.

The traditional approach would have been to celebrate the SEO wins and move on. But I realized we were missing a crucial piece of the puzzle. How often was this happening? Which content was being cited? Were competitors getting more mentions?

When I started manually testing different queries across multiple AI platforms, the pattern became clear. Our content appeared in about 2-3 dozen LLM responses per month across various queries. Not massive numbers, but significant enough to matter.

The problem was measurement. Google Analytics couldn't track this. Search Console had no data on AI mentions. Ahrefs and SEMrush were useless for this type of visibility.

I reached out to teams at AI-first startups and discovered everyone was facing the same challenge. There's no definitive playbook yet because we're all figuring this out in real-time.

That's when I realized traditional SEO fundamentals were just the starting point. The real opportunity was in understanding how LLMs consume, process, and synthesize content - then building systems to track and optimize for those patterns.

My experiments

Here's my playbook

What I ended up doing and the results.

After realizing traditional tools couldn't track AI mentions, I built a manual monitoring system that actually works. It's not elegant, but it's effective.

The Foundation: Systematic Query Testing

I started with a spreadsheet containing three categories of queries:

  1. Direct brand queries - Questions that should mention the client specifically

  2. Product category queries - Broader questions where the client could be mentioned

  3. Problem-solution queries - Questions about challenges the client's product solves

Every week, I ran these queries across ChatGPT, Claude, and Perplexity, documenting when and how the client appeared in responses.
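
To keep the weekly runs consistent, it helps to generate a fresh checklist of every platform-and-query combination before each session. Here's a minimal sketch of that step in Python - the category names and placeholder queries are illustrative, not the client's actual list:

```python
import csv
from datetime import date

# Illustrative placeholders - the real spreadsheet holds one row per actual query.
QUERIES = {
    "brand": ["What is <client brand> known for?"],
    "category": ["What are the best <product category> options?"],
    "problem-solution": ["How do I fix <problem the product solves>?"],
}
PLATFORMS = ["ChatGPT", "Claude", "Perplexity"]

def build_weekly_checklist(path: str) -> None:
    """Write one row per (platform, query) pair, to be filled in by hand."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["week_of", "platform", "category", "query", "mentioned"])
        for platform in PLATFORMS:
            for category, queries in QUERIES.items():
                for query in queries:
                    writer.writerow([date.today().isoformat(), platform, category, query, ""])

build_weekly_checklist("weekly_checklist.csv")
```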

The Tracking Method

For each query, I recorded the following (a small logging sketch follows the list):

  • Platform (ChatGPT, Claude, Perplexity)

  • Query text

  • Whether client was mentioned (Yes/No)

  • Position in response (first, middle, last)

  • Context of mention (positive, neutral, comparison)

  • Specific content cited (if any)
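
If you want something slightly sturdier than a raw spreadsheet, those fields map directly onto a small record type. A sketch of how I'd encode one observation - the field names and file layout are my own convention, not a standard schema:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class MentionRecord:
    platform: str        # "ChatGPT", "Claude", or "Perplexity"
    query: str           # exact query text as entered
    mentioned: bool      # was the client named in the response?
    position: str        # "first", "middle", "last", or "" if not mentioned
    context: str         # "positive", "neutral", "comparison", or ""
    cited_content: str   # title or URL of the specific content cited, if any

def append_record(path: str, record: MentionRecord) -> None:
    """Append one observation; write a header if the file is new."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(MentionRecord)])
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(record))

append_record("mentions.csv", MentionRecord(
    platform="Claude", query="best <product category> for beginners",
    mentioned=True, position="middle", context="comparison",
    cited_content="<client blog post title>",
))
```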

Optimization Experiments

Based on tracking results, I implemented five key optimizations:

Chunk-level content structuring - Breaking content into self-contained sections that could stand alone as answers. Instead of long-form articles, each section became a complete thought (a rough checker for this is sketched after this list).

Citation-ready formatting - Adding clear attribution, factual accuracy checks, and logical structure that made content easy for LLMs to extract and reference.

Answer synthesis readiness - Structuring information so AI could easily combine it with other sources to create comprehensive responses.

Multi-modal integration - Including charts, tables, and visual elements that enhanced the textual content's value for AI processing.

Topical authority building - Creating comprehensive coverage of related topics so the client became the go-to source for their niche.

The key insight was treating each piece of content like a potential source in an AI's knowledge synthesis process, not just a page trying to rank #1 in Google.
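
One way to make "self-contained" checkable instead of a gut call is to flag sections whose opening sentence leans on prior context. A rough heuristic sketch - the pronoun list and the heading-splitting logic are my assumptions, not a validated rule:

```python
import re

# Openers that usually signal a section depends on something said earlier.
DANGLING_OPENERS = re.compile(r"^(this|that|these|those|it|they|as mentioned)\b", re.IGNORECASE)

def find_non_self_contained(markdown_text: str) -> list[str]:
    """Return headings whose first sentence likely needs prior context."""
    flagged = []
    # Split on markdown headings, keeping each heading paired with its body.
    sections = re.split(r"^(#{1,6} .+)$", markdown_text, flags=re.MULTILINE)
    for heading, body in zip(sections[1::2], sections[2::2]):
        first_line = body.strip().splitlines()[0] if body.strip() else ""
        if DANGLING_OPENERS.match(first_line):
            flagged.append(heading.strip())
    return flagged
```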

  • Query Categories - Systematic testing across three query types: direct brand mentions, product categories, and problem-solution searches

  • Weekly Tracking - Manual monitoring across ChatGPT, Claude, and Perplexity with position and context documentation

  • Content Structure - Chunk-level optimization making each section self-contained and citation-ready for AI processing

  • Response Analysis - Tracking mention frequency, positioning, and context to identify optimization opportunities
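
Once a few weeks of observations pile up, the analysis itself is just grouping and counting. A sketch that computes mention rate per platform - it assumes the CSV layout from the logging sketch above:

```python
import csv
from collections import Counter

def mention_rates(path: str) -> dict[str, float]:
    """Share of tracked queries that mentioned the client, per platform."""
    seen, hits = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            seen[row["platform"]] += 1
            hits[row["platform"]] += row["mentioned"].strip().lower() == "true"
    return {platform: hits[platform] / seen[platform] for platform in seen}

print(mention_rates("mentions.csv"))
```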

The monitoring system revealed patterns I didn't expect. Over three months of tracking, here's what actually happened:

Mention frequency increased 40% after implementing chunk-level content structuring. The AI systems found our content easier to extract and reference.

Position improvements - We moved from occasional "also mentioned" status to appearing as primary sources in 60% of relevant queries.

Cross-platform consistency - Content that performed well on one AI platform typically performed well on others, suggesting underlying quality signals rather than platform-specific optimization.

The most surprising result? Traditional SEO performance improved alongside AI mentions. The content restructuring for AI readability also made it more valuable for human readers and search engines.

Timeline-wise, changes took 4-6 weeks to show impact in AI responses, much faster than traditional SEO improvements.

The unexpected outcome was discovering which content types AI systems preferred. Comprehensive guides and problem-solving content got mentioned more frequently than promotional or product-focused pages.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

Building this monitoring system taught me seven crucial lessons about AI optimization:

  1. Manual beats automated (for now) - No existing tool tracks AI mentions effectively. Manual monitoring remains the most reliable method.

  2. Quality signals transfer - Content optimized for AI readability also performs better in traditional search.

  3. Consistency across platforms - If content gets mentioned by one AI, it typically gets mentioned by others.

  4. Context matters more than position - Being mentioned positively in the middle of a response often outperforms being listed first without context.

  5. Comprehensive coverage wins - AI systems prefer sources that cover topics thoroughly rather than briefly.

  6. Freshness is less critical - Unlike traditional SEO, AI mentions don't require constant content updates.

  7. Build for synthesis, not ranking - The goal is being the best source for AI to reference, not ranking #1 for specific keywords.

What I'd do differently? Start tracking competitor mentions from day one. Understanding the competitive landscape in AI responses is crucial for strategy development.

Common pitfalls to avoid: Don't try to game AI systems with keyword stuffing or manipulation tactics. Focus on genuine value and clear information architecture.

This approach works best for businesses with educational or informational content. It's less effective for purely transactional or local businesses.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

For SaaS startups:

  • Track mentions in product comparison queries

  • Monitor solution-based searches where your product could be recommended

  • Focus on use-case and integration content optimization

  • Build comprehensive knowledge bases that AI can easily reference

For your Ecommerce store

For ecommerce stores:

  • Track product category and "best of" query mentions

  • Monitor buying guide and recommendation responses

  • Optimize product descriptions for AI readability

  • Create problem-solving content around product usage

Get more playbooks like this one in my weekly newsletter