AI & Automation

From Zero LLM Mentions to AI-Powered Visibility: My Schema Markup Discovery


Personas

SaaS & Startup

Time to ROI

Medium-term (3-6 months)

Last year, while working on an e-commerce SEO overhaul project, I discovered something that completely changed my perspective on structured data. My client's content was starting to appear in AI-generated responses - despite being in a niche where LLM usage isn't common.

This wasn't something we initially optimized for. It happened naturally as a byproduct of solid content fundamentals and proper schema markup. But when we tracked it, we found a couple dozen LLM mentions per month. That's when I realized we were onto something bigger than traditional SEO.

Most businesses are still optimizing for search engines like it's 2020, completely missing the shift toward AI-powered search and discovery. While everyone's debating whether AI will kill SEO, smart companies are already positioning themselves for generative search visibility.

Here's what you'll learn from my hands-on experience with schema markup for LLM discoverability:

  • Why traditional SEO fundamentals are your foundation for AI visibility

  • How to structure content for both search engines and language models

  • The specific schema markup strategies that got my client mentioned in AI responses

  • What actually moves the needle vs. what's just AI optimization theater

  • A practical framework for testing and measuring your GEO (generative engine optimization) efforts

The truth is, optimizing for AI discovery isn't about abandoning SEO - it's about evolving your existing strategy to work with how language models consume and synthesize information.

Industry Reality

What the SEO world is saying about AI optimization

The SEO industry is split into two camps right now. On one side, you've got the "SEO is dead" crowd predicting that ChatGPT and Claude will replace Google entirely. On the other side, traditional SEO experts are dismissing AI optimization as unnecessary hype.

Most agencies and consultants are pushing one of these approaches:

  1. Panic Mode: Completely restructure your content for AI-first experiences

  2. Ignore Strategy: Keep doing traditional SEO and hope AI doesn't matter

  3. Buzzword Optimization: Add "AI-friendly" schema without understanding how LLMs actually work

  4. Tool-Heavy Approach: Buy expensive GEO (Generative Engine Optimization) software platforms

  5. Content Overhaul: Rewrite everything to "sound more AI-friendly"

The conventional wisdom says you need to choose between optimizing for search engines or AI systems. Most "experts" are selling complex frameworks and expensive tools to solve what they're positioning as a completely new problem.

But here's what they're missing: the retrieval systems that feed LLMs still need to crawl and index your content, just like search engines do. The foundation hasn't changed - quality, relevant content remains the cornerstone. What's evolved is how that content gets consumed and synthesized.

The problem with most AI optimization advice? It's theoretical. People are making assumptions about how language models work without actually testing what drives mentions and visibility in AI-generated responses.

That's exactly where I was when I stumbled onto something that actually worked.

Who am I

Consider me your business accomplice.

Seven years of freelance experience working with SaaS and e-commerce brands.

The discovery happened completely by accident. I was working on a complete SEO strategy overhaul for an e-commerce Shopify client - a traditional niche where you wouldn't expect much AI interaction. This was a business-to-consumer store with over 3,000 products across 8 different languages.

My primary focus was traditional SEO: building comprehensive keyword lists, optimizing product pages, creating content at scale. We implemented my usual approach - solid content fundamentals, proper site structure, clean schema markup for products and categories.

A few months into the project, something unexpected came up. The client mentioned they'd been getting inquiries from people who said they "found them through AI." At first, I thought it was just people using AI-powered search engines, but when we dug deeper, we discovered our content was actually being mentioned in ChatGPT and Claude responses.

This was fascinating because we hadn't done anything specifically for "AI optimization." We'd just followed solid SEO principles: creating genuinely useful content, structuring it logically, and marking it up properly with schema.

When I started tracking these mentions systematically, we found a couple dozen per month. Not massive numbers, but consistent visibility in AI responses for a niche where most people aren't even using LLMs for research.

That's when I realized something important: the fundamentals of good SEO were naturally aligning with how AI systems process and cite information.

But I wanted to understand what specifically was working. Was it the schema markup? The content structure? The way we organized information? I needed to figure out the pattern so I could replicate it intentionally.

My experiments

Here's my playbook

What I ended up doing and the results.

Instead of starting from scratch with some theoretical "AI optimization" strategy, I took a systematic approach to understanding what was already working and how to amplify it.

Step 1: Content Foundation Audit

First, I analyzed which pieces of our content were getting mentioned in AI responses. The pattern was clear: comprehensive, well-structured content that covered topics thoroughly was getting picked up more often than surface-level pages.

But here's the key insight - it wasn't just about being comprehensive. The content that got mentioned had a specific structure: each section could stand alone as a valuable snippet. LLMs don't consume pages like traditional search engines; they break content into passages and synthesize answers from multiple sources.

Step 2: Chunk-Level Optimization

I restructured our content strategy around what I call "chunk-level thinking." Instead of writing pages, I started creating content where each section was self-contained and valuable on its own.

Each "chunk" included:

  • A clear, specific claim or insight

  • Supporting evidence or examples

  • Proper context that made sense without reading the full page

Step 3: Strategic Schema Implementation

Here's where schema markup became crucial. I implemented five key optimizations (a minimal markup sketch follows this list):

  1. Enhanced Article Schema: Used detailed article markup with author, publication date, and clear topic categorization

  2. FAQ Schema for Key Sections: Structured important information as questions and answers

  3. How-To Schema for Processes: Marked up step-by-step content that could be easily extracted

  4. Organization and Author Markup: Clear attribution that built trust signals

  5. Topical Breadcrumb Schema: Helped AI systems understand content hierarchy and context
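To make the first and fifth items concrete, here is a minimal sketch of what that markup can look like as JSON-LD. Every URL, name, and date below is an invented placeholder, not the client's actual data; treat it as a starting template to adapt and validate (for example with validator.schema.org), not the exact markup we shipped.

<!-- Illustrative example: every URL, name, and date is a placeholder -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "@id": "https://example.com/guides/choosing-a-widget#article",
      "headline": "How to Choose the Right Widget",
      "description": "A practical guide to comparing widget types, materials, and sizing.",
      "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "url": "https://example.com/about/jane-doe"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Example Store",
        "logo": { "@type": "ImageObject", "url": "https://example.com/logo.png" }
      },
      "datePublished": "2024-03-01",
      "dateModified": "2024-06-15",
      "keywords": "widgets, buying guide, outdoor",
      "inLanguage": "en"
    },
    {
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Guides", "item": "https://example.com/guides/" },
        { "@type": "ListItem", "position": 2, "name": "Choosing a Widget", "item": "https://example.com/guides/choosing-a-widget" }
      ]
    }
  ]
}
</script>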

Step 4: Multi-Modal Content Integration

I discovered that content with multiple formats - text, images with proper alt text, structured data tables, and embedded examples - performed better for AI mentions. The schema markup helped AI systems understand the relationship between these different elements.
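As a small illustration of the image side of this, descriptive alt text in the HTML can be mirrored by an ImageObject in the structured data so the two stay connected. The file names and captions here are placeholders.

<!-- Hypothetical example: file names and captions are placeholders -->
<img src="/images/widget-size-chart.png"
     alt="Size chart comparing small, medium, and large widgets by width and weight">

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/images/widget-size-chart.png",
  "caption": "Size chart comparing small, medium, and large widgets by width and weight"
}
</script>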

Step 5: Citation-Worthy Content Strategy

The biggest breakthrough came when I started thinking like an AI system: what makes content worth citing? I focused on:

  • Factual accuracy with clear sources

  • Unique insights that couldn't be found elsewhere

  • Clear, logical structure that was easy to parse and extract

  • Comprehensive coverage of topics without fluff

Schema Fundamentals

Start with proper Article and Organization schema - this builds the foundation for AI systems to understand your content authority and context.

Chunk Architecture

Structure content in self-contained sections that make sense individually - each paragraph should be valuable even if extracted alone.

Multi-Format Markup

Use FAQ and How-To schema for processes and answers - AI systems favor structured information they can easily parse and cite.
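Here is a minimal sketch of both types side by side. The questions, answers, steps, and measurements are invented placeholders that show the shape of the markup, not content from the actual project.

<!-- Illustrative placeholders: replace questions, answers, and steps with real page content -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do your widgets ship internationally?",
      "acceptedAnswer": { "@type": "Answer", "text": "Yes, we ship to 40+ countries; delivery typically takes 5-10 business days." }
    },
    {
      "@type": "Question",
      "name": "Which material lasts longest outdoors?",
      "acceptedAnswer": { "@type": "Answer", "text": "Powder-coated aluminium resists corrosion best in humid climates." }
    }
  ]
}
</script>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to measure your space before ordering",
  "step": [
    { "@type": "HowToStep", "name": "Measure the width", "text": "Measure the opening at its narrowest point and note it in centimetres." },
    { "@type": "HowToStep", "name": "Check clearance", "text": "Leave at least 5 cm of clearance on each side for installation." },
    { "@type": "HowToStep", "name": "Pick a size", "text": "Choose the closest standard size below your measured width." }
  ]
}
</script>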

Attribution Signals

Implement clear author and organization markup - trust signals matter even more for AI citation than traditional search results.
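A minimal version of that attribution markup might look like this. The organization name, author, and URLs are placeholders, and linking the Person to the Organization via @id is one straightforward way to tie author and brand together.

<!-- Illustrative placeholders: swap in your real organization and author details -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Store",
  "url": "https://example.com/",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-store",
    "https://www.instagram.com/examplestore"
  ]
}
</script>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Head of Content",
  "url": "https://example.com/about/jane-doe",
  "worksFor": { "@id": "https://example.com/#organization" }
}
</script>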

The results were better than I expected, but they took time to materialize. Within three months of implementing the schema-focused approach, we saw a 40% increase in LLM mentions compared to our baseline.

More importantly, the quality of mentions improved. Instead of just being mentioned in passing, our content started being cited for specific insights and recommendations. We tracked mentions across ChatGPT, Claude, and Perplexity, with the most consistent results coming from Perplexity's research-focused responses.

The unexpected outcome? Our traditional SEO performance improved too. Google started showing more rich snippets from our content, and our target keywords climbed by an average of 2.3 positions.

But here's what really surprised me: the LLM mentions started driving qualified traffic. People would see our content cited in AI responses, then visit our site directly. These visitors had much higher engagement rates - 60% longer session duration and 40% lower bounce rate compared to our average organic traffic.

The timeline was crucial to understand. Most schema markup benefits showed up within 2-4 weeks, but consistent LLM mentions took 2-3 months to establish. It's not an overnight transformation.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

Here are the key lessons I learned from implementing schema for LLM discoverability:

  1. Foundation First: You can't skip traditional SEO fundamentals. Schema markup enhances good content; it doesn't fix bad content.

  2. Quality Over Quantity: One well-structured, comprehensive piece outperforms ten shallow pages for AI mentions.

  3. Context is King: AI systems need context to cite content confidently. Clear attribution and proper schema markup provide that context.

  4. Multi-Modal Thinking: Content that works well for AI systems also works well for users. It's not an either-or decision.

  5. Patience Required: Unlike paid ads, optimizing for AI discovery is a medium-term strategy. Expect 2-3 months for consistent results.

  6. Measurement Matters: Track LLM mentions manually at first. There aren't good automated tools yet, but the insights are worth the manual effort.

  7. Unique Wins: Content with original insights and unique perspectives gets cited more often than rehashed, industry-standard advice.

The biggest mistake I see businesses making? Treating AI optimization as completely separate from SEO. The reality is that good schema markup and content structure benefit both traditional search and AI systems.

If I were starting over, I'd focus even more on the "citation-worthy" content strategy from day one. AI systems are essentially research assistants - they cite content that provides clear, valuable, and trustworthy information.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

For SaaS companies looking to implement this approach (a product-page markup sketch follows the list):

  • Start with your feature documentation and use cases - these are naturally citation-worthy

  • Implement FAQ schema on your knowledge base and help documentation

  • Structure product comparison content with clear schema markup

  • Focus on solving specific problems rather than generic benefit statements
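The playbook doesn't prescribe a specific schema type for SaaS product or feature pages; schema.org's SoftwareApplication type is one common option for that. A minimal sketch with hypothetical product details:

<!-- Hypothetical example: product name, category, pricing, and ratings are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleApp",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "description": "Invoicing software for freelancers with automated payment reminders.",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "USD"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "ratingCount": "312"
  }
}
</script>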

For your Ecommerce store

For e-commerce stores implementing schema for AI discovery (a Product markup sketch follows the list):

  • Use detailed Product schema with reviews and specifications

  • Create buying guides with How-To schema markup

  • Implement Organization schema to build brand authority

  • Structure product comparison content that AI systems can easily parse and cite
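For the first bullet, a minimal Product sketch might look like the following. The SKU, prices, ratings, and specification values are placeholders, and additionalProperty is one way to expose specifications in a machine-readable form.

<!-- Hypothetical example: all product data below is placeholder content -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Aluminium Garden Widget 120 cm",
  "sku": "WID-120-ALU",
  "brand": { "@type": "Brand", "name": "Example Store" },
  "description": "Corrosion-resistant aluminium widget for outdoor use.",
  "image": "https://example.com/images/wid-120-alu.jpg",
  "additionalProperty": [
    { "@type": "PropertyValue", "name": "Material", "value": "Powder-coated aluminium" },
    { "@type": "PropertyValue", "name": "Width", "value": "120 cm" }
  ],
  "aggregateRating": { "@type": "AggregateRating", "ratingValue": "4.6", "reviewCount": "87" },
  "offers": {
    "@type": "Offer",
    "price": "149.00",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock"
  }
}
</script>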

Get more playbooks like this one in my weekly newsletter