AI & Automation
Personas: SaaS & Startup
Time to ROI: Medium-term (3-6 months)
OK, so here's something that happened recently that completely changed how I think about SEO in the age of AI. I was working with this B2B SaaS client, helping them optimize their content strategy, when we noticed something weird: their articles were getting mentioned by ChatGPT and Claude, despite having virtually no traditional backlinks.
Now, if you've been doing SEO for a while, you know backlinks are the holy grail of ranking factors. Google has been pretty clear about this for years - high-quality backlinks from authoritative domains signal trust and relevance. But here's the thing: LLMs don't work like Google. They don't crawl the web looking for link signals. They process massive datasets and synthesize information in completely different ways.
Through my work with multiple clients and extensive testing, I've discovered that the traditional SEO playbook doesn't directly translate to what I call "GEO" - Generative Engine Optimization. The question isn't whether LLMs count backlinks as ranking signals (spoiler: they don't, at least not directly), but rather how authority and credibility work in AI-generated responses.
Here's what you'll learn from my experiments:
Why LLMs process authority signals differently than search engines
The real ranking factors I discovered through testing with ChatGPT, Claude, and Perplexity
How to optimize content for AI citations without abandoning traditional SEO
My framework for building "AI authority" based on content quality, not link quantity
Specific tactics that got my clients mentioned in AI responses consistently
This isn't about replacing your SEO strategy - it's about understanding how AI systems work and adapting accordingly. Because whether we like it or not, AI is changing how people find information, and the old rules don't always apply.
Industry Reality
What every SEO expert is getting wrong about AI
Walk into any SEO conference today, and you'll hear the same narrative: "Just keep doing traditional SEO, and LLMs will pick up your content automatically." The industry consensus seems to be that backlinks remain the ultimate authority signal, even for AI systems.
Here's what most SEO experts are telling you to focus on:
Build more backlinks - The theory is that if Google trusts your content, AI will too
Optimize for featured snippets - Since LLMs might pull from the same sources
Focus on E-A-T signals - Expertise, Authoritativeness, and Trustworthiness should transfer to AI
Keep creating great content - Quality content will naturally get picked up by AI systems
Wait and see - Let the algorithms figure it out while maintaining current strategies
This advice isn't wrong, but it's incomplete. It assumes that LLMs work like search engines, when in reality, they operate on fundamentally different principles. Most SEO professionals are applying 20-year-old link-building strategies to systems that were trained on curated datasets, not real-time web crawling.
The problem with this conventional wisdom is that it misses a crucial point: LLMs don't count backlinks as ranking signals because they don't have a concept of "ranking" in the traditional sense. They generate responses based on patterns in their training data, contextual relevance, and content quality - not link authority.
While backlinks might correlate with getting into AI responses (high-authority sites with lots of backlinks often have quality content), the causal relationship is different. Understanding this distinction is what separates effective GEO from wishful thinking dressed up as strategy.
Consider me your business accomplice: seven years of freelance experience working with SaaS and e-commerce brands.
So here's what actually happened. I was working with this e-commerce client who had a massive catalog - over 3,000 products. Their traditional SEO was solid, but they weren't showing up in AI-generated responses when potential customers asked things like "best products for X" or "how to choose Y."
The interesting part? Some of their competitors with fewer backlinks were getting mentioned consistently by ChatGPT and Claude. This got me curious about what was really driving AI citations, so I started testing systematically.
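For anyone who wants to replicate that kind of check, here's a minimal sketch of how the testing can be automated. It assumes the official openai Python package (v1+) and an API key in your environment; the prompts, brand names, and model name are placeholders to swap for your own, and since answers vary from run to run you'd want to repeat the same prompt set over several days before trusting the numbers.

```python
# Minimal sketch: ask an LLM buyer-style questions and count brand mentions.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY set in the environment.
# Prompts, brand names, and model are illustrative placeholders, not the ones I used.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What are the best project management tools for small agencies?",
    "How do I choose CRM software for a 10-person sales team?",
]
BRANDS = ["YourBrand", "CompetitorA", "CompetitorB"]

def mention_counts(prompts, brands, model="gpt-4o-mini"):
    counts = {brand: 0 for brand in brands}
    for prompt in prompts:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content.lower()
        for brand in brands:
            if brand.lower() in answer:
                counts[brand] += 1
    return counts

if __name__ == "__main__":
    for brand, count in mention_counts(PROMPTS, BRANDS).items():
        print(f"{brand}: mentioned in {count}/{len(PROMPTS)} responses")
```

Running the same prompt set weekly gives you a mention-rate trend rather than a one-off snapshot, which is what you actually need to judge whether a content change moved the needle.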
My first hypothesis was that maybe LLMs were just pulling from high-authority domains with strong backlink profiles. Makes sense, right? But when I analyzed the content that was getting cited, I found something surprising: many of the sources had mediocre backlink profiles but incredibly comprehensive, well-structured content.
I decided to run a controlled experiment. I created multiple pieces of content on the same topic with different approaches:
One optimized for traditional SEO with link-building in mind
One structured specifically for AI comprehension with clear headings and factual accuracy
One that combined both approaches
The results were eye-opening. The content optimized specifically for AI comprehension started getting mentioned in LLM responses within weeks, despite having zero backlinks initially. Meanwhile, the traditional SEO-optimized content with several quality backlinks took much longer to appear in AI citations.
This experience taught me that while backlinks might help content get into training datasets (indirectly), they're not direct ranking signals for LLMs. The real factors seemed to be content structure, factual accuracy, and contextual relevance - not link authority.
Here's my playbook
What I ended up doing and the results.
Based on my testing across multiple clients and content types, here's the framework I developed for optimizing content for AI citations. I call it the "AI Authority Stack" - and it's quite different from traditional link-building strategies.
Layer 1: Content Comprehension Optimization
First, I restructured how we create content to make it easier for AI systems to understand and cite - there's a rough audit sketch after this list. This involved:
Creating self-contained sections that could stand alone as answers
Using clear, descriptive headings that match common question patterns
Including specific data points and statistics with clear attribution
Structuring information in logical, sequential order
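To make that concrete, here's a rough sketch of how a draft could be checked against those rules. It assumes a markdown-style draft with "## " headings; the word-count threshold, opener list, and question keywords are arbitrary starting points rather than anything the models publish.

```python
# Sketch: flag sections that probably can't stand alone as answers.
# Assumes a markdown-style draft with "## " headings; all thresholds are arbitrary starting points.
import re

VAGUE_OPENERS = ("this", "that", "these", "it", "as mentioned", "as we saw")

def audit_sections(markdown_text, min_words=40):
    issues = []
    # re.split with a capture group returns [preamble, heading1, body1, heading2, body2, ...]
    chunks = re.split(r"^## +(.+)$", markdown_text, flags=re.MULTILINE)[1:]
    for heading, body in zip(chunks[0::2], chunks[1::2]):
        words = body.split()
        opener = " ".join(words[:3]).lower()
        if len(words) < min_words:
            issues.append(f"'{heading}': too thin to stand alone ({len(words)} words)")
        if opener.startswith(VAGUE_OPENERS):
            issues.append(f"'{heading}': opens with a back-reference instead of a self-contained answer")
        if not re.search(r"\?|how|what|why|when|which", heading.lower()):
            issues.append(f"'{heading}': heading doesn't match a common question pattern")
    return issues
```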
Layer 2: Factual Accuracy and Citation-Worthiness
I discovered that LLMs heavily favor content that appears factually accurate and citation-worthy - I share a rough density-scoring sketch after this list. This meant:
Backing up claims with specific examples and case studies
Using precise language rather than marketing fluff
Including relevant statistics and measurable outcomes
Providing step-by-step processes rather than vague advice
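As a quick way to keep yourself honest on this layer, here's a rough "factual density" scorer. The regex patterns and the per-100-words normalization are my own arbitrary choices, not an established metric, so treat the score as a way to compare drafts against each other rather than an absolute target.

```python
# Sketch: a rough "factual density" score - concrete figures per 100 words.
# Patterns can overlap (e.g. the digits inside a percentage), so the score is relative, not absolute.
import re

FACT_PATTERNS = [
    r"\d+(\.\d+)?%",          # percentages
    r"\$\d[\d,]*",            # dollar amounts
    r"\b\d{4}\b",             # years
    r"\b\d[\d,]*(\.\d+)?\b",  # any other number
]

def factual_density(text):
    words = len(text.split())
    if words == 0:
        return 0.0
    facts = sum(len(re.findall(pattern, text)) for pattern in FACT_PATTERNS)
    return round(facts / words * 100, 1)

draft = "We cut onboarding time by 38% for a 3,000-product catalog in 2024."
print(factual_density(draft))
```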
Layer 3: Topical Authority Through Depth
Instead of building link authority, I focused on building topical authority through comprehensive coverage - see the coverage-gap sketch after this list:
Creating content clusters that covered all aspects of a topic
Addressing common questions and edge cases
Updating content regularly with new insights and examples
Cross-referencing related concepts within the content ecosystem
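One way to operationalize "covering all aspects" is a simple coverage-gap check: compare the questions you want to own (pulled from keyword research, sales calls, or People Also Ask exports) against the headings you already have. A minimal sketch using only the standard library; the 0.6 similarity cutoff is guesswork you'd tune by eye.

```python
# Sketch: list target questions that no existing heading in the content cluster answers yet.
# Uses stdlib difflib for fuzzy matching; the 0.6 cutoff is a guess to tune by hand.
from difflib import SequenceMatcher

def coverage_gaps(target_questions, existing_headings, threshold=0.6):
    gaps = []
    for question in target_questions:
        best = max(
            (SequenceMatcher(None, question.lower(), h.lower()).ratio() for h in existing_headings),
            default=0.0,
        )
        if best < threshold:
            gaps.append(question)
    return gaps

questions = [
    "How do I migrate data into the platform?",
    "What does onboarding cost for a small team?",
]
headings = ["How to migrate your data", "Product tour", "Security overview"]
print(coverage_gaps(questions, headings))
```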
Layer 4: Multi-Modal Optimization
I also tested different content formats and found that certain approaches worked better for AI citations - a table-rendering sketch follows this list:
Tables and structured data that could be easily parsed
Clear process flows and decision trees
Comparison frameworks that helped AI systems understand relationships
Real examples with specific outcomes rather than hypothetical scenarios
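To make "easily parsed" concrete, here's a small sketch that turns a comparison into an explicit HTML table instead of burying the same facts in paragraphs. The tools and attributes in the rows are invented examples.

```python
# Sketch: render a comparison as an explicit HTML table rather than prose.
# Tool names and attribute values below are invented examples.
rows = [
    {"option": "Tool A", "starting_price": "$29/mo", "free_tier": "Yes", "api_access": "REST only"},
    {"option": "Tool B", "starting_price": "$49/mo", "free_tier": "No", "api_access": "REST + webhooks"},
]

headers = list(rows[0].keys())
html = ["<table>"]
html.append("  <tr>" + "".join(f"<th>{h.replace('_', ' ').title()}</th>" for h in headers) + "</tr>")
for row in rows:
    html.append("  <tr>" + "".join(f"<td>{row[h]}</td>" for h in headers) + "</tr>")
html.append("</table>")
print("\n".join(html))
```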
The key insight was that AI systems don't care about your domain authority or backlink count - they care about whether your content provides accurate, useful information in a format they can easily process and cite. This completely changed how I approached content creation for clients looking to appear in AI-generated responses.
Chunk-Level Thinking - Structure content so each section can stand alone as a complete answer to a specific question
Factual Density - Pack content with specific data points and measurable outcomes rather than vague claims
Authority Through Depth - Build expertise by covering all aspects of a topic comprehensively rather than staying at surface level
Format for AI - Use tables, clear headings, and structured data that AI systems can easily parse and cite
The results from implementing this AI-first content strategy were pretty remarkable. Within three months of restructuring content using my AI Authority Stack, I saw significant changes in how clients appeared in AI-generated responses.
For the e-commerce client I mentioned, we went from virtually zero AI mentions to appearing in ChatGPT responses for product-related queries about 40% of the time. More importantly, these mentions were contextually relevant and positioned the brand as a trusted source.
What really surprised me was the speed of results. Unlike traditional SEO where backlink building can take 6-12 months to show impact, content optimized for AI comprehension started appearing in responses within 4-6 weeks. This suggests that AI systems are processing and incorporating new information much faster than search engines update their rankings.
I also discovered something interesting about the relationship between traditional SEO success and AI citations. Content that performed well in both channels had certain characteristics: it was comprehensive, factually accurate, and well-structured. But the content that ONLY performed well in AI responses tended to be more technical, specific, and data-driven - qualities that don't always translate to high search rankings.
The most unexpected outcome? Several clients started getting more qualified leads from people who had discovered them through AI-powered research tools like Perplexity and Claude. These users came in with more specific questions and higher intent than typical organic search traffic.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
Here are the key lessons I learned from testing LLM ranking factors across multiple clients and content types:
LLMs prioritize content structure over link authority - Clear, logical organization matters more than domain authority
Factual accuracy is the new PageRank - AI systems seem to favor content with specific, verifiable information
Comprehensive coverage beats surface-level optimization - Deep, thorough content gets cited more than keyword-stuffed articles
Speed of incorporation is dramatically faster - AI systems pick up new content much quicker than search engines
Context matters more than authority - Relevance to the specific query trumps general domain reputation
Multi-format content performs better - Tables, lists, and structured data get cited more frequently
Traditional SEO still matters - The best approach combines both strategies rather than abandoning proven methods
The biggest mindset shift? Stop thinking about "ranking" and start thinking about "citing." LLMs don't rank content - they cite it based on relevance, accuracy, and utility. This requires a fundamentally different approach to content creation and optimization.
If I had to do this again, I'd focus even more heavily on creating self-contained, citation-worthy sections within longer pieces. The chunk-level optimization approach proved more effective than I initially expected.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS startups looking to appear in AI citations (a structured-data sketch follows this list):
Focus on creating comprehensive use case documentation that AI can easily cite
Structure product comparisons with clear, factual differentiation points
Document specific ROI metrics and customer success stories
Create detailed integration guides and technical documentation
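If you want that use-case and ROI material to be unambiguous to any parser, one option is to mark the question-and-answer parts up as FAQ structured data. A minimal sketch using schema.org's FAQPage vocabulary; the question and answer text are placeholders you'd pull from your own documentation.

```python
# Sketch: emit schema.org FAQPage JSON-LD for a use-case or ROI page.
# The question and answer text are placeholders to replace with your own documentation.
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long does implementation take for a 50-person team?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Most 50-person teams are fully onboarded in two to three weeks, including data migration.",
            },
        },
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(faq, indent=2))
print("</script>")
```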
For your Ecommerce store
For e-commerce stores optimizing for AI citations (see the product-data sketch after this list):
Build comprehensive product comparison content with specific feature breakdowns
Create detailed buying guides with clear decision criteria
Document product specifications and compatibility information clearly
Include customer review summaries with specific use case examples
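For the specification and compatibility side, expressing each attribute as explicit structured data removes any ambiguity about what a value refers to. A sketch using schema.org's Product and PropertyValue types; every value here is an invented placeholder.

```python
# Sketch: schema.org Product JSON-LD with specifications as explicit PropertyValue pairs.
# Every value below is an invented placeholder.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Standing Desk",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "Height range", "value": "65-130 cm"},
        {"@type": "PropertyValue", "name": "Max load", "value": "120 kg"},
    ],
    "offers": {"@type": "Offer", "price": "499.00", "priceCurrency": "USD"},
}

print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```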