AI & Automation · Personas: SaaS & Startup · Time to ROI: Medium-term (3-6 months)
Last year, while working on an e-commerce project that was scaling to 20,000+ pages across 8 languages, I stumbled upon something that completely changed how I think about content optimization. The content was starting to appear in ChatGPT responses despite being in a niche where LLM usage isn't common.
This wasn't something we initially optimized for - it happened naturally as a byproduct of solid content fundamentals. But when I tracked it, we were getting dozens of LLM mentions monthly. That's when I realized: the rules for AI visibility are fundamentally different from traditional SEO.
While everyone's obsessing over keyword density and backlinks, AI systems are consuming content in completely different ways. They break content into passages, synthesize answers from multiple sources, and prioritize chunk-level relevance over page-level authority.
Through conversations with teams at AI-first startups like Profound and Athena, I realized everyone is still figuring this out. There's no definitive playbook yet. But I've discovered patterns that work - and more importantly, why most traditional SEO tactics fall flat in the AI era.
Here's what you'll learn from my real-world experiments:
Why LLMs consume content in chunks, not full pages
The metadata strategies that actually get you mentioned by AI
How to structure content for both search engines and language models
The five optimization layers I use for dual visibility
Why traditional SEO foundations still matter in the AI age
This isn't theory - it's based on real implementations across thousands of pages and actual tracking of AI mentions.
Reality Check
What everyone's getting wrong about AI optimization
The SEO industry is in panic mode about AI, and frankly, most of the advice I'm seeing is complete nonsense. Everyone's either treating AI optimization like traditional SEO or abandoning SEO entirely for shiny new tactics.
Here's what the "experts" are telling you to do:
Focus on "AI-friendly" keywords - Whatever that means. Most tools claiming to identify these are just guessing.
Write "conversational" content - As if AI models care about your casual tone when they're processing billions of data points.
Abandon traditional SEO - The worst advice possible. LLM crawlers still need to discover, crawl, and ingest your content.
Optimize for featured snippets - A lazy assumption that what works for Google's snippets works for ChatGPT.
Use "AI prompts" as headings - This misunderstands how language models actually process information.
The problem with this conventional wisdom? It's based on speculation, not real data. Most "AI SEO experts" have never actually tracked their content's performance in language models. They're selling courses on strategies they've never tested.
Even worse, some are telling you to abandon proven SEO fundamentals. Here's the uncomfortable truth: quality, relevant content remains the cornerstone - whether you're optimizing for Google or GPT-4. The foundation hasn't changed, but there's a new layer we need to add on top.
The real challenge isn't choosing between traditional SEO and AI optimization. It's understanding how to do both simultaneously without compromising either strategy.
The discovery happened by accident. We were tracking a couple dozen LLM mentions per month for an e-commerce client - content appearing in ChatGPT and Claude responses despite being in a traditional retail niche where AI usage isn't common.
This wasn't something we initially optimized for. It happened naturally. But when I started digging deeper, I realized we'd stumbled onto something significant. The content that was getting mentioned by AI had specific structural patterns that differed from our traditional SEO content.
Here's the situation: We had built 20,000+ pages using AI-powered content generation across 8 languages. The primary goal was traditional SEO - rank on Google, drive organic traffic. But as language models became more prevalent, I started getting curious about a secondary effect.
The client operated in a niche where you wouldn't expect heavy LLM usage - traditional product categories, not tech or marketing. Yet when I manually tested various queries related to their products in ChatGPT and Claude, our content was appearing in responses more frequently than major competitors.
I started tracking this systematically. Every week, I'd run 50+ queries related to the client's product categories and document which sources the AI models referenced. What I found was fascinating: the content that performed well in AI responses had different characteristics than our top-performing SEO pages.
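To make that tracking concrete, here's a minimal sketch of the kind of weekly loop I mean, assuming the official openai Python client with an API key in the environment. The queries, brand terms, and model name are placeholders, not my client's actual setup; the point is simply to run the same questions on a schedule and log whether your brand or domain shows up in the answer.

```python
# Minimal sketch of a weekly LLM mention-tracking loop.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
# QUERIES and BRAND_TERMS are placeholders - swap in your own lists.
import csv
from datetime import date

from openai import OpenAI

client = OpenAI()

QUERIES = [
    "best waterproof hiking boots for wide feet",  # hypothetical query
    "how to choose a trail running shoe",
]
BRAND_TERMS = ["example-store.com", "Example Store"]  # hypothetical brand/domain

def run_tracking(queries, brand_terms, model="gpt-4o-mini"):
    # Use whichever model you actually want to monitor.
    rows = []
    for query in queries:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": query}],
        )
        answer = response.choices[0].message.content or ""
        mentioned = any(term.lower() in answer.lower() for term in brand_terms)
        rows.append({"date": date.today().isoformat(),
                     "query": query,
                     "mentioned": mentioned})
    return rows

if __name__ == "__main__":
    results = run_tracking(QUERIES, BRAND_TERMS)
    # Appends data rows only - write the header row once yourself.
    with open("llm_mentions.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "query", "mentioned"])
        writer.writerows(results)
```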
The AI-mentioned content shared these patterns:
Each section could stand alone as a complete answer
Information was structured in logical, sequential chunks
Facts were clearly attributed and verifiable
Content covered topics comprehensively from multiple angles
This led me down the rabbit hole of GEO (Generative Engine Optimization). Through conversations with teams at AI-first startups, I realized everyone is still figuring this out. There's no definitive playbook yet, but patterns were emerging.
The biggest insight? LLMs don't consume pages like traditional search engines. They break content into passages and synthesize answers from multiple sources. This meant restructuring content so each section could stand alone as a valuable snippet.
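To see why standalone sections matter, it helps to look at what a passage-level pipeline actually does to a page. Here's a minimal sketch, assuming markdown-style headings as section boundaries; real retrieval systems use more sophisticated splitters, but the effect is the same: each section gets scored on its own, stripped of everything around it.

```python
# Minimal sketch: split a page into heading-led passages - the unit a
# retrieval pipeline scores - rather than treating the page as one document.
import re

def split_into_chunks(markdown_text: str) -> list[dict]:
    """Return one chunk per heading-led section of a markdown page."""
    # re.split with a capture group keeps the matched headings at odd indices.
    parts = re.split(r"(?m)^(#{1,3} .+)$", markdown_text)
    return [
        {"heading": parts[i].lstrip("# ").strip(), "body": parts[i + 1].strip()}
        for i in range(1, len(parts), 2)
    ]

page = """# Choosing a hiking boot
Fit matters more than brand name for long-distance comfort.

## Waterproofing
Membrane liners keep feet dry but add weight and reduce breathability.
"""

for chunk in split_into_chunks(page):
    # Each chunk is embedded and retrieved on its own, with no access
    # to the rest of the page - so it must carry its own context.
    print(f"{chunk['heading']!r}: {len(chunk['body'].split())} words")
```

A section that leans on context from three paragraphs earlier simply loses that context at this stage - which is exactly why each chunk needs its own setup.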
Here's my playbook
What I ended up doing and the results.
Instead of abandoning traditional SEO for shiny new GEO tactics, I developed a layered approach that optimizes for both search engines and language models simultaneously. Here's the exact framework I implemented across 20,000+ pages:
Layer 1: Solid SEO Foundation
This is non-negotiable. LLM crawlers still need to discover and index your content. I started with traditional SEO best practices (a quick audit sketch follows this list):
Proper heading structure (H1, H2, H3 hierarchy)
Target keyword optimization
Internal linking strategy
Technical SEO fundamentals
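Here's the audit sketch mentioned above, assuming beautifulsoup4 is installed. It checks only the two structural rules that matter most here: exactly one H1, and no skipped heading levels.

```python
# Quick structural audit: one <h1> per page, no skipped heading levels.
# Assumes `beautifulsoup4` is installed; `html` is page source you fetched.
from bs4 import BeautifulSoup

def audit_headings(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    headings = soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])
    issues = []
    h1_count = sum(1 for h in headings if h.name == "h1")
    if h1_count != 1:
        issues.append(f"expected exactly one <h1>, found {h1_count}")
    previous_level = 0
    for h in headings:  # find_all returns headings in document order
        level = int(h.name[1])
        if previous_level and level > previous_level + 1:
            text = h.get_text(strip=True)[:40]
            issues.append(f"skipped level: <h{previous_level}> "
                          f"followed by <{h.name}> near {text!r}")
        previous_level = level
    return issues
```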
Layer 2: Chunk-Level Thinking
This is where AI optimization diverges from traditional SEO. I restructured content so each section could stand alone (see the heuristic sketch after this list):
Each paragraph contains complete thoughts with context
Sections include necessary background information
Key facts are repeated when relevant across chunks
Subheadings clearly indicate content scope
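The sketch below shows the kind of heuristic pass I mean. It can't judge quality, but it catches the two cheapest failures: a section that opens with a dangling pronoun, and one that never names the page's main topic. The 40-word floor is an arbitrary threshold for illustration, not a rule from the project.

```python
# Rough heuristics for "can this chunk stand alone?" - flags sections that
# open with a dangling pronoun or never mention the page's main topic.
DANGLING_OPENERS = ("this ", "that ", "it ", "these ", "those ", "they ")

def chunk_issues(heading: str, body: str, main_topic: str) -> list[str]:
    issues = []
    if body.strip().lower().startswith(DANGLING_OPENERS):
        issues.append("opens with a pronoun that needs earlier context")
    if main_topic.lower() not in f"{heading} {body}".lower():
        issues.append(f"never names the main topic ({main_topic!r})")
    if len(body.split()) < 40:  # arbitrary floor - tune to your content
        issues.append("probably too thin to be quoted as a complete answer")
    return issues
```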
Layer 3: Answer Synthesis Readiness
Language models synthesize information differently than humans browse. I optimized for this:
Logical information flow that supports multiple extraction points
Clear cause-and-effect relationships
Explicit connections between related concepts
Conclusion statements that summarize key points
Layer 4: Citation-Worthiness
AI models favor content they can confidently reference (a structured-data sketch follows this list):
Factual accuracy and clear attribution
Specific examples and concrete data points
Balanced perspectives on controversial topics
Clear sourcing and credibility signals
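The most direct machine-readable form of those sourcing signals is schema.org markup. Here's a minimal Article JSON-LD sketch, generated from Python for consistency with the other examples; every field value is a placeholder.

```python
# Minimal schema.org Article JSON-LD - one way to make authorship, dates,
# and citations machine-readable. All field values are placeholders.
import json

article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to choose a waterproof hiking boot",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2024-03-01",
    "dateModified": "2024-06-15",
    "citation": ["https://example.com/materials-study"],
}

print(f'<script type="application/ld+json">{json.dumps(article_jsonld)}</script>')
```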
Layer 5: Multi-Modal Support
While not always applicable, I integrated visual elements that support text understanding (an alt-text check follows this list):
Descriptive alt text that provides context
Tables and charts with comprehensive captions
Infographics that summarize key information
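And the companion check for that list, again assuming beautifulsoup4: flag any image that would ship without descriptive alt text.

```python
# Flag images with missing or empty alt text before a page goes live.
# Assumes `beautifulsoup4` is installed.
from bs4 import BeautifulSoup

def images_missing_alt(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "<no src>")
            for img in soup.find_all("img")
            if not (img.get("alt") or "").strip()]
```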
The Implementation Process:
I didn't retrofit all 20,000 pages at once. Instead, I:
Selected 100 high-traffic pages for testing
Applied the five-layer framework
Tracked AI mentions over 3 months
Measured traditional SEO impact
Scaled successful patterns across the site
The key insight: This isn't about choosing between traditional SEO and AI optimization. It's about building the right foundation first, then adding AI-specific layers on top.
Chunk Strategy
Making each content section self-contained with complete context and clear standalone value
Attribution Focus
Ensuring every claim is verifiable with clear sourcing and credibility signals
Multi-Angle Coverage
Addressing topics comprehensively from different perspectives and use cases
Testing Framework
Systematic approach to measuring AI mentions alongside traditional SEO metrics
The results validated my hypothesis about dual optimization. The 100 test pages showed improvements in both traditional SEO and AI visibility without compromising either strategy.
Traditional SEO metrics remained strong:
Organic traffic maintained its growth trajectory
Google rankings for target keywords didn't decline
Click-through rates actually improved due to better content structure
AI visibility showed measurable improvement:
LLM mentions increased from ~24 per month to ~60+ per month
Content appeared in more diverse query types
AI models referenced our content more frequently for authoritative information
The timeline was crucial: most AI optimization effects became apparent within 4-6 weeks, while traditional SEO benefits continued accumulating over months. This suggests the sources AI assistants draw on refresh more frequently than search engine rankings do - likely because answers increasingly lean on live retrieval rather than frozen training data.
Perhaps most importantly, the structural improvements made content more valuable for human readers too. Bounce rates decreased, time on page increased, and user engagement metrics improved across the board.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
After implementing this framework across thousands of pages and tracking results for over a year, here are the key lessons that will save you months of experimentation:
Don't abandon SEO fundamentals. The biggest mistake I see is treating AI optimization as a replacement for traditional SEO. Both are necessary.
AI models favor comprehensive, multi-angle coverage. Surface-level content rarely gets referenced, regardless of optimization.
Chunk-level optimization is more important than page-level optimization. Each section should provide complete value independently.
Testing is essential. AI behavior changes frequently, and what works today might not work tomorrow. Build measurement into your process.
Citation-worthiness trumps everything. AI models will only reference content they can confidently cite as authoritative.
The landscape evolves rapidly. Stay connected with teams actively working in this space rather than relying on static guides.
Quality still matters most. No optimization technique can make poor content perform well in AI responses.
If I were starting this project today, I'd spend more time on structured data implementation and less time on keyword-specific optimization. The future clearly favors semantic understanding over keyword matching.
The biggest pitfall to avoid? Optimizing exclusively for current AI models. The technology changes rapidly, but high-quality, well-structured content remains valuable regardless of how algorithms evolve.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS companies looking to implement this approach:
Start with product documentation and help articles - they're naturally chunk-friendly
Focus on use case pages that answer specific customer questions comprehensively
Ensure integration documentation can be referenced by AI for developer queries
Build measurement into your content workflow from day one
For your Ecommerce store
For ecommerce stores implementing AI-optimized metadata:
Product descriptions should answer comparison questions comprehensively
Category pages need buying guide content that stands alone
FAQ sections should be structured for easy AI extraction and citation (see the markup sketch after this list)
Focus on educational content around product usage and selection
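For the FAQ point above, schema.org's FAQPage type is the standard way to make question/answer pairs machine-readable. A minimal sketch with placeholder content:

```python
# Minimal schema.org FAQPage JSON-LD for an ecommerce FAQ.
# Question and answer text below is placeholder content.
import json

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Are these boots fully waterproof?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes - the upper uses a sealed membrane liner, "
                        "and the seams are taped at the factory.",
            },
        },
    ],
}

print(f'<script type="application/ld+json">{json.dumps(faq_jsonld, indent=2)}</script>')
```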