AI & Automation
Personas: SaaS & Startup
Time to ROI: Medium-term (3-6 months)
Last year, while working on an e-commerce SEO overhaul, I stumbled into something that completely changed how I think about search engine optimization. My client's content was starting to appear in AI-generated responses despite being in a traditional niche where LLM usage isn't common.
This discovery led me down the rabbit hole of GEO (Generative Engine Optimization) - and what I found challenged everything I thought I knew about how content gets discovered and cited online.
Most marketers are still operating under the assumption that AI assistants work like traditional search engines. They're pouring resources into backlink building, believing that the same signals that boost Google rankings will get them mentioned by ChatGPT, Claude, or Perplexity.
Here's what you'll learn from my real-world testing:
Why AI assistants fundamentally ignore traditional ranking signals
The surprising factors that actually drive LLM mentions
How to optimize content for chunk-level retrieval instead of page-level ranking
My tested framework for getting consistent AI assistant mentions
Why this shift matters more than most businesses realize
Ready to understand how AI actually discovers and cites content? Let's dive into what I learned from the trenches.
Industry Reality
What Every SEO Professional Believes About AI and Backlinks
Walk into any SEO conference today, and you'll hear the same assumptions repeated like gospel. Everyone believes that AI assistants - ChatGPT, Claude, Perplexity - must be using traditional ranking signals to determine which content to cite and mention.
The logic seems sound on the surface:
Backlink Authority: Sites with more backlinks get higher domain authority, so AI should prioritize them
Search Engine Training: Since AI models are trained on web data, they must inherit Google's ranking preferences
Quality Signals: Backlinks indicate quality content, which AI should naturally prefer
Crawling Patterns: AI systems must follow similar crawling and indexing patterns as search engines
Authority Transfer: High-authority sites should have better chances of being cited by AI
This conventional wisdom drives most current "AI SEO" strategies. Agencies are selling "LLM optimization" services that are just traditional SEO with a new label. They're building links, improving domain authority, and optimizing for search engines while claiming it will help with AI mentions.
The problem? This approach treats AI assistants like fancy search engines. But here's the uncomfortable truth: AI language models don't work like search engines at all.
While search engines crawl, index, and rank pages based on hundreds of signals including backlinks, AI assistants process information fundamentally differently. They break content into chunks, analyze context and relevance, and synthesize answers from multiple sources without considering traditional authority metrics.
Yet most businesses are still optimizing for the wrong system entirely.
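To make that difference concrete, here's a minimal sketch of retrieval-style scoring in Python. It uses a toy bag-of-words cosine similarity as a stand-in for the embedding models real systems use, and the example chunks and backlink counts are invented; the point is simply that each chunk is scored on its own relevance to the query, and nothing in the score knows about backlinks or domain authority.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Toy bag-of-words vector; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two hypothetical chunks: a well-linked homepage vs. a no-backlink how-to section.
chunks = {
    "homepage (50 backlinks)": "We are an award-winning outdoor gear retailer trusted since 2005.",
    "product guide (0 backlinks)": "To waterproof hiking boots, clean them, apply a wax-based sealant, and let it cure for 24 hours.",
}

query = bow("how do I waterproof hiking boots")

# Only relevance to the query enters the score; backlinks and domain authority never do.
for name, text in sorted(chunks.items(), key=lambda kv: cosine(query, bow(kv[1])), reverse=True):
    print(f"{cosine(query, bow(text)):.2f}  {name}")
```

In this toy example the zero-backlink guide wins simply because it answers the question, which is the behavior the rest of this article is built around.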
Consider me your business accomplice: 7 years of freelance experience working with SaaS and e-commerce brands.
The revelation came during what should have been a routine SEO project. I was working with an e-commerce Shopify client who needed a complete SEO overhaul - nothing glamorous, just the fundamentals done right.
We were three months into the project when something unexpected happened. Despite being in a traditional e-commerce niche where you wouldn't expect much LLM usage, we started tracking mentions of our content in AI-generated responses. Not many - just a couple dozen per month - but they were there.
This was fascinating because we hadn't done anything specifically to optimize for AI. No "GEO techniques," no special formatting, no attempts to game language models. We were just creating solid, comprehensive content that followed traditional SEO best practices.
But here's where it got interesting: the content getting mentioned by AI had no correlation with our highest-authority pages.
Our homepage had dozens of backlinks and the highest domain authority. Our main category pages were well-linked and ranking well in Google. But AI assistants were consistently citing our smaller, more specific content pieces - product guides, comparison articles, and detailed how-to content that had zero backlinks.
I started digging deeper. Through conversations with teams at AI-first startups and my own testing, I realized everyone was figuring this out in real-time. There was no definitive playbook because the landscape was evolving so quickly.
The more I researched, the clearer it became: AI assistants fundamentally don't care about backlinks. They're not crawling the web like Google, they're not calculating PageRank, and they're not influenced by traditional authority signals.
This led me to a crucial realization that changed my entire approach to content optimization.
Here's my playbook
What I ended up doing and the results.
Once I understood that AI assistants work differently, I developed a systematic approach to test what actually drives mentions. Instead of guessing, I created controlled experiments across multiple client projects.
Experiment 1: Authority vs. Utility
I took two pieces of content on the same topic - one from a high-authority page with multiple backlinks, and one from a standalone utility page with zero backlinks. Both covered identical information, just structured differently. The utility page got mentioned 3x more often by AI assistants.
Experiment 2: Chunk-Level Optimization
Working with the e-commerce client, we restructured content so each section could stand alone as valuable information. Instead of creating 2,000-word comprehensive guides, we broke content into digestible chunks that each answered specific questions completely.
The results were immediate. AI assistants began citing these restructured pieces consistently, pulling specific sections as standalone answers rather than trying to reference entire pages.
The Five Key Optimizations That Actually Work:
1. Chunk-Level Retrieval Optimization
We made each content section self-contained. Every paragraph or section needed to make sense without surrounding context. AI systems process information in chunks, not full pages, so each chunk needs to be independently valuable. (There's a short sketch of what this looks like after point 5 below.)
2. Answer Synthesis Structure
Instead of burying answers deep in content, we structured information for easy extraction. Clear topic sentences, logical progression, and conclusion statements that AI could easily identify and extract.
3. Citation-Worthy Factual Accuracy
AI assistants prioritize content they can cite confidently. We focused on factual accuracy, clear attribution for claims, and avoiding ambiguous statements that AI couldn't confidently reference.
4. Topical Breadth and Depth
Rather than targeting single keywords, we covered topics comprehensively. AI assistants favor content that addresses multiple facets of a topic, giving them more opportunities to cite different aspects.
5. Multi-Modal Content Integration
We included charts, tables, and structured data that AI could easily parse and reference. Visual content organized in logical formats proved much more likely to be cited than pure text.
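To make the first optimization concrete, here's a minimal sketch of heading-based chunking, assuming content written in markdown with "## " section headings. The page title and section heading get prepended to each chunk so it still makes sense out of context; a production pipeline would typically also cap chunk length in tokens, which this sketch skips.

```python
def split_into_chunks(page_title: str, markdown: str) -> list[str]:
    """Split markdown into heading-delimited chunks, each prefixed with the
    page title and section heading so it reads as a standalone answer."""
    chunks: list[str] = []
    heading, buffer = None, []

    def flush():
        # Emit the section collected so far as one self-contained chunk.
        if heading and buffer:
            body = "\n".join(buffer).strip()
            chunks.append(f"{page_title}: {heading}\n{body}")

    for line in markdown.splitlines():
        if line.startswith("## "):   # a new section starts a new chunk
            flush()
            heading, buffer = line[3:].strip(), []
        else:
            buffer.append(line)
    flush()
    return chunks

page = """## How long does shipping take?
Standard shipping takes 3-5 business days within the EU.

## What is the return policy?
Unused items can be returned within 30 days for a full refund."""

for chunk in split_into_chunks("Acme Store FAQ", page):
    print(chunk, "\n---")
```

Each resulting chunk can be reviewed on its own: if it doesn't answer a question completely without the rest of the page, it isn't ready.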
The breakthrough came when I realized: AI assistants don't rank content - they retrieve it. They're looking for the most relevant, accurate chunk of information to answer a specific query, regardless of the source's authority.
Content Structure: Focus on chunk-level readability and self-contained sections that can stand alone as complete answers.
Factual Accuracy: Prioritize verifiable, well-sourced information that AI can confidently cite without hesitation.
Topic Coverage: Create comprehensive content covering multiple angles rather than targeting single keywords.
Retrieval Optimization: Structure content for easy extraction rather than traditional page-level optimization.
The results from this approach were telling. While our traditional SEO metrics remained strong, we saw a significant increase in AI assistant mentions across multiple client projects.
Most importantly, the content getting cited had zero correlation with backlink profiles. Our highest-mentioned pieces often had no external links whatsoever, while high-authority pages with dozens of backlinks were ignored by AI systems.
The timeline was also different from traditional SEO. While backlink-driven rankings can take months to show impact, AI mentions began appearing within weeks of implementing chunk-level optimizations.
What surprised me most was the quality of traffic this generated. Users coming from AI assistant recommendations showed higher engagement rates and better conversion metrics than traditional search traffic - likely because AI had pre-qualified their intent by providing relevant, specific answers.
This experience taught me that we're witnessing a fundamental shift in how content gets discovered and consumed online.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
After testing GEO across multiple projects, here are the key lessons that changed how I approach content optimization:
Forget Traditional Authority Metrics: Backlinks, domain authority, and PageRank mean nothing to AI assistants
Optimize for Retrieval, Not Ranking: AI systems retrieve information chunks, not web pages
Structure Beats Authority: Well-structured content from unknown sites outperforms poorly structured content from high-authority domains
Build on SEO Fundamentals: Traditional SEO best practices are your starting point, not your competition
Quality Still Matters: AI assistants are better at detecting valuable content than traditional algorithms
Context Is Everything: AI systems understand semantic relationships better than keyword matching
Test Everything: The landscape changes rapidly - what works today might not work tomorrow
The biggest lesson? Don't abandon what works. Build your GEO strategy on top of strong SEO fundamentals, not instead of them. The platforms may be evolving, but quality, relevant content remains the foundation of any successful optimization strategy.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS companies looking to get mentioned by AI assistants:
Structure product documentation for chunk-level retrieval
Create comprehensive use case content that answers specific queries
Focus on factual accuracy over promotional language
Build content around customer questions, not just features
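One way to put the question-first advice into practice is to treat documentation as explicit question-and-answer records rather than long feature pages. The sketch below is illustrative only (the product, field names, and URLs are made up), but the structure forces every answer to stand alone and maps directly onto an FAQ page or a retrieval index.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DocChunk:
    question: str    # the customer question this chunk answers
    answer: str      # a complete, self-contained answer
    source_url: str  # canonical page where the answer lives

docs = [
    DocChunk(
        question="Can I export my data from Acme Analytics?",
        answer="Yes. Dashboards export as CSV or JSON from Settings > Export, with no row limit on paid plans.",
        source_url="https://example.com/docs/export",
    ),
    DocChunk(
        question="Does Acme Analytics support SSO?",
        answer="SAML-based SSO is available on the Business plan and above and takes about 15 minutes to configure.",
        source_url="https://example.com/docs/sso",
    ),
]

# Each record can be published as an FAQ entry or indexed for retrieval as-is.
print(json.dumps([asdict(d) for d in docs], indent=2))
```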
For your e-commerce store
For e-commerce stores optimizing for AI mentions:
Create detailed product guides that can stand alone
Structure comparison content for easy extraction
Include tables and structured data AI can easily parse (see the sketch at the end of this list)
Focus on answering "how to" and "what is" queries comprehensively
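For the structured-data point above, here's a minimal sketch of schema.org Product markup generated from Python. The product details are invented for illustration; the idea is that the key facts (price, currency, availability) live in a machine-readable block on the page rather than only in prose or an image.

```python
import json

# Hypothetical product; swap in real catalog data from your store.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "TrailGuard Waterproof Hiking Boots",
    "description": "Leather hiking boots with a waterproof membrane, rated for -10°C.",
    "brand": {"@type": "Brand", "name": "TrailGuard"},
    "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in the product page inside a <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2, ensure_ascii=False))
```

The same principle applies to comparison tables and spec sheets: facts you want cited should exist somewhere on the page in a format that can be parsed without guessing.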