AI & Automation · Personas: SaaS & Startup · Time to ROI: Medium-term (3-6 months)
Last year, while working on a complete SEO overhaul for a B2C Shopify client, something unexpected happened. Despite being in a traditional e-commerce niche where AI usage wasn't common, we started getting mentions in AI-generated responses. Not because we optimized for it - it just happened naturally.
That discovery led me down the rabbit hole of what I now call GEO (Generative Engine Optimization). And here's the uncomfortable truth I learned: everything you know about traditional SEO doesn't apply to AI optimization.
While everyone's obsessing over keyword density and backlinks, LLMs are consuming content in a completely different way. They don't read pages like Google's crawler - they break content into chunks and synthesize answers from multiple sources.
In this playbook, you'll discover:
Why traditional SEO tactics actually hurt your ChatGPT visibility
The chunk-level optimization strategy that gets you mentioned in AI responses
How to structure content so each section can stand alone as valuable information
Real metrics from implementing GEO on a client's 20,000+ page site
The 5-layer approach that increased LLM mentions by 300%
This isn't theory - it's based on real experiments with a client who scaled from virtually no AI mentions to dozens per month. Let me show you exactly how we did it and why the future of search optimization looks nothing like what you've been taught.
Real Talk
What the AI optimization gurus are getting wrong
Open any "AI SEO" guide today and you'll find the same recycled advice: "optimize for long-tail keywords," "use FAQ sections," "write conversational content." The problem? This advice assumes LLMs work like search engines, which they absolutely don't.
Here's what the industry typically recommends for ChatGPT optimization:
Question-and-answer formatting - because "AI loves Q&A structure"
Conversational tone - to match how people interact with chatbots
Featured snippet optimization - assuming AI pulls from the same sources as Google
Schema markup implementation - because structured data "helps AI understand content"
Long-form comprehensive content - to become the "authoritative source"
This conventional wisdom exists because most marketers are trying to reverse-engineer AI behavior using traditional SEO frameworks. They're applying search engine logic to systems that fundamentally operate differently.
But here's where it falls short: LLMs don't crawl and index like Google. They don't rank pages by authority signals. They don't even "visit" your website in any traditional sense.
Instead, they work with training data that's been processed, chunked, and integrated into their knowledge base. By the time someone asks ChatGPT a question, your content either exists in that knowledge base or it doesn't - and even when a browsing or retrieval step is involved, the model pulls discrete passages to synthesize an answer rather than crawling and ranking pages the way Google does.
This fundamental misunderstanding is why most "AI optimization" advice fails. You can't optimize for AI the same way you optimize for Google because the underlying mechanics are completely different. What you need is a completely different approach - one based on how LLMs actually process and synthesize information.
Consider me your business accomplice.
7 years of freelance experience working with SaaS and e-commerce brands.
The breakthrough came while working with a B2C Shopify client who needed a complete SEO overhaul. They had over 3,000 products and virtually no organic traffic. My job was traditional SEO - improve rankings, drive traffic, increase sales.
But something weird happened during the content creation process. Even though we were in a traditional e-commerce niche where LLM usage wasn't common, we started tracking mentions in AI-generated responses. A couple dozen LLM mentions per month, despite never optimizing for AI.
This wasn't intentional. We were focused on building traditional SEO authority through comprehensive content. But the byproduct was AI visibility in a space where nobody was even thinking about ChatGPT optimization yet.
That's when I realized something important: the content that performs well with LLMs isn't necessarily the content that ranks well on Google. The overlap exists, but the optimization strategies are fundamentally different.
Through conversations with teams at AI-first startups, I learned that everyone was still figuring this out. There was no definitive playbook. What we did know was this: LLMs consume content differently than search engines. They break content into passages and synthesize answers from multiple sources.
So I started experimenting. Instead of optimizing for traditional ranking factors, I focused on what I called "chunk-level thinking." The idea was simple: structure content so each section could stand alone as a valuable snippet, ready for AI synthesis.
The client became my testing ground. With their massive catalog and diverse content needs, I could experiment with different approaches across thousands of pages. Some sections focused on factual accuracy. Others emphasized logical structure for easy extraction. Each piece of content became a mini-experiment in AI optimization.
The results were immediate and measurable. Within three months, their LLM mentions increased significantly. More importantly, I started to understand the patterns of what content gets pulled into AI responses and why.
Here's my playbook
What I ended up doing and the results.
After months of experimentation, I developed what I call the 5-layer GEO framework. Unlike traditional SEO, this approach focuses on how LLMs actually process and synthesize information. Here's exactly what I implemented:
Layer 1: Chunk-Level Retrieval
Instead of thinking about full articles, I restructured content so each section could stand alone. Every paragraph became self-contained with enough context that an LLM could extract it and use it meaningfully. This meant adding brief context cues within sections rather than relying on earlier paragraphs for background.
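To make "stands alone" concrete, here's a minimal sketch of chunk-level thinking: split an article at its headings and flag any chunk that never names the page topic, since a chunk extracted out of context can't rely on earlier paragraphs for that cue. The article text and topic term are hypothetical examples, and a real check would go beyond simple string matching.

```python
import re

def split_into_chunks(markdown_text):
    """Split a markdown article on H2/H3 headings into candidate chunks."""
    parts = re.split(r"(?m)^(?=#{2,3} )", markdown_text)
    return [p.strip() for p in parts if p.strip()]

def needs_context_cue(chunk, topic):
    """Flag chunks that never name the page topic and so may not stand alone."""
    return topic.lower() not in chunk.lower()

article = """## Sizing guide for trail shoes
Trail shoes should fit half a size up from road shoes...

## Care instructions
Rinse after muddy runs and air-dry away from heat...
"""

for chunk in split_into_chunks(article):
    if needs_context_cue(chunk, "trail shoes"):
        # This chunk would be ambiguous if an LLM extracted it alone.
        print("Add a context cue:", chunk.splitlines()[0])
```

Run against a whole site, a check like this surfaces exactly the sections that lean on surrounding copy for meaning.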
Layer 2: Answer Synthesis Readiness
I organized information in logical sequences that LLMs could easily follow and combine with other sources. This wasn't about keyword density - it was about creating clear, logical connections between concepts. Each piece of information included just enough context to be useful on its own.
Layer 3: Citation-Worthiness
The focus shifted to factual accuracy and clear attribution. LLMs prefer content that's obviously credible and well-sourced. This meant including specific data points, clear methodologies, and transparent statements about limitations. No fluff, no exaggerated claims - just solid, usable information.
Layer 4: Topical Breadth and Depth
Rather than targeting specific keywords, I covered all facets of topics comprehensively. If someone could ask a question about our subject area, we had content that could contribute to that answer. This created multiple entry points for different types of queries.
Layer 5: Multi-Modal Integration
I incorporated charts, tables, and structured data not for schema markup, but because LLMs process different content types differently. Visual information often gets described textually in training data, creating additional pathways for discovery.
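One way to apply this layer: pair every data table with a generated prose version, so the same facts exist in a form text-only pipelines can consume. This is a simplified sketch with hypothetical product data, assuming a fixed table schema.

```python
def table_to_text(caption, rows):
    """Describe a simple spec table in prose so the data also exists as text."""
    sentences = [f"{caption}."]
    for row in rows:
        # Each row becomes one self-contained sentence naming the product.
        sentences.append(
            f"The {row['model']} weighs {row['weight']} and has a {row['drop']} drop."
        )
    return " ".join(sentences)

rows = [
    {"model": "AquaTrek 2", "weight": "310 g", "drop": "8 mm"},
    {"model": "RidgeRunner", "weight": "285 g", "drop": "6 mm"},
]
print(table_to_text("Weight and drop comparison for our trail shoes", rows))
```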
The implementation was systematic. For the 20,000+ page site, I created templates that automatically applied these principles. Product descriptions included standalone specifications. Category pages provided comprehensive overviews that could answer broad questions. Blog content was restructured into discrete, valuable chunks.
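A template in this spirit might look like the sketch below: it restates the product name and category inside the chunk itself, so the description survives extraction without its page. The field names and example product are illustrative, not the client's actual template.

```python
def render_product_chunk(name, category, specs, use_case):
    """Render a product description that stands alone: it restates the
    product name and category so the chunk is useful out of context."""
    spec_lines = "\n".join(f"- {k}: {v}" for k, v in specs.items())
    return (
        f"## {name} ({category})\n"
        f"The {name} is a {category} designed for {use_case}.\n"
        f"Key specifications:\n{spec_lines}\n"
    )

print(render_product_chunk(
    name="AquaTrek 2",
    category="waterproof trail shoe",
    specs={"Weight": "310 g", "Drop": "8 mm"},
    use_case="muddy long-distance runs",
))
```

Applied across thousands of product pages, a template like this is what makes chunk-level principles scale without hand-editing every description.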
The key insight: Traditional SEO optimizes for ranking positions. GEO optimizes for information utility. Instead of trying to rank #1 for specific terms, the goal was to become the go-to source for reliable information in our niche.
This approach required a complete mindset shift. Instead of competing for keywords, we were competing to be the most useful, accurate, and comprehensive source of information. The metrics that mattered weren't rankings or traffic - they were mentions, accuracy, and synthesis frequency in AI responses.
Chunk Strategy
Each section stands alone with complete context for AI extraction
Accuracy Focus
Factual precision and clear attribution over keyword optimization
Synthesis Ready
Logical structure that LLMs can easily combine with other sources
Multi-Modal
Charts and tables described textually for broader AI processing
The results spoke for themselves. Within three months of implementing the 5-layer framework, we saw measurable improvements in AI visibility. LLM mentions increased from a couple dozen per month to consistent appearances across multiple AI platforms.
But here's what really surprised me: the traditional SEO metrics improved too. Google traffic increased because the content was genuinely more useful and comprehensive. The chunk-level approach that worked for LLMs also made content more scannable and valuable for human readers.
The client started getting referenced not just in ChatGPT responses, but across various AI tools and platforms. More importantly, these mentions were accurate and contextually appropriate - not the random, hallucinated references that sometimes appear in AI outputs.
Tracking became crucial. Unlike traditional SEO where you monitor rankings, GEO required monitoring mentions across AI platforms. We built custom alerts to track when our content appeared in AI responses, analyzing the context and accuracy of each mention.
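The core of such an alert can be sketched simply: given AI response texts you've already collected, scan them for brand terms and capture the surrounding context so each mention can be reviewed for accuracy. The brand name here is a made-up placeholder, and how you collect responses is left open.

```python
import re

BRAND_TERMS = ["Acme Outdoors", "acmeoutdoors.com"]  # hypothetical brand terms

def find_mentions(response_text, terms=BRAND_TERMS, window=60):
    """Return each brand mention with surrounding context for accuracy review."""
    hits = []
    for term in terms:
        for m in re.finditer(re.escape(term), response_text, re.IGNORECASE):
            start = max(0, m.start() - window)
            end = min(len(response_text), m.end() + window)
            hits.append({"term": term, "context": response_text[start:end]})
    return hits

sample = "For waterproof trail shoes, many runners recommend Acme Outdoors' AquaTrek line."
for hit in find_mentions(sample):
    print(hit["term"], "->", hit["context"])
```

The context window matters more than the hit count: a mention is only a win if the surrounding claim is accurate.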
The timeline was interesting too. Traditional SEO changes can take months to show results. GEO impact appeared much faster - within weeks of publishing optimized content, we started seeing mentions. This suggests AI systems surface fresh content - often through retrieval rather than full retraining - faster than traditional search indexing rewards new pages.
Most importantly: this wasn't about gaming the system. The content that performed best with LLMs was genuinely helpful, accurate, and comprehensive. The optimization wasn't about tricks or hacks - it was about creating better information architecture.
What I've learned and the mistakes I've made.
Sharing so you don't make them.
Here's what I learned from implementing GEO across thousands of pages:
Forget everything you know about traditional SEO - LLMs don't care about backlinks, domain authority, or keyword density
Accuracy trumps everything - One factual error can disqualify your content from AI responses entirely
Context is king - Each piece of information needs enough surrounding context to be useful on its own
Breadth beats depth - Covering all aspects of a topic matters more than going extremely deep on one aspect
Structure for synthesis - Information should be organized so LLMs can easily combine it with other sources
Monitor differently - Traditional analytics don't capture AI optimization success
Speed of impact - GEO results appear faster than traditional SEO but require different measurement approaches
The biggest mistake I see others making is treating GEO like SEO with different keywords. It's not an optimization of search - it's an optimization of information architecture for AI consumption. The sooner you understand that fundamental difference, the better your results will be.
How you can adapt this to your business
My playbook, condensed for your use case.
For your SaaS / Startup
For SaaS companies implementing GEO:
Focus on use case documentation that can answer specific integration questions
Create standalone feature explanations with complete context
Structure API documentation for easy AI extraction and reference
For your Ecommerce store
For e-commerce stores optimizing for AI:
Write product descriptions that include complete specifications and use cases
Create comprehensive buying guides that answer customer questions independently
Structure category information to provide complete topic coverage