AI & Automation

Does Claude Use Semantic Markup? My Experiment with GEO Optimization


Personas: SaaS & Startup

Time to ROI: Medium-term (3-6 months)

Last year, I discovered something that made me rethink everything about SEO strategy. While working on an e-commerce client's content overhaul, I noticed their articles were starting to appear in AI-generated responses - despite being in a traditional retail niche where LLM usage isn't common.

This discovery led me down the rabbit hole of GEO (Generative Engine Optimization) and a critical question: does Claude actually use semantic markup when processing content? Most SEO professionals are still optimizing for Google's crawlers, but the game is changing faster than we realize.

Through hands-on testing with my client's 20,000+ indexed pages, I learned that LLMs don't consume content the same way traditional search engines do - and this changes everything about how we should structure our content.

Here's what you'll discover in this playbook:

  • Why traditional SEO approaches miss the mark for AI optimization

  • How LLMs actually process and synthesize content from multiple sources

  • The 5 key optimizations that got us featured in AI responses

  • What semantic markup actually means for Claude and other LLMs

  • A practical framework for optimizing content for both search engines and AI

Check out our AI optimization strategies for more insights on this emerging field.

Industry Reality

What every marketer thinks they know about AI optimization

Most content creators and SEO professionals are still treating AI optimization like it's an extension of traditional search engine optimization. The conventional wisdom goes something like this:

  • Use structured data - Add schema markup to help AI understand your content

  • Focus on featured snippets - If it ranks well for Google, it'll work for Claude

  • Traditional keyword optimization - Target the same keywords you'd use for search

  • Meta tags matter - Keep optimizing titles and descriptions as usual

  • Link building still works - AI systems respect domain authority

This approach exists because it's comfortable. We know how to do traditional SEO, we have tools for it, and agencies can sell it easily. The problem? It's based on a fundamental misunderstanding of how LLMs actually work.

LLMs don't crawl the web the same way Google does. They don't look at your meta descriptions or count your backlinks. They process content in chunks, synthesize information from multiple sources, and generate responses based on patterns they've learned during training.

When I started testing this with my e-commerce client, I realized we were optimizing for the wrong system entirely. Traditional SEO tactics weren't just ineffective for AI optimization - they were actually counterproductive in some cases.

The real breakthrough came when I stopped thinking about "search optimization" and started thinking about "synthesis optimization" - making content that AI can easily understand, extract, and recombine with other sources.

Who am I

Consider me your business accomplice.

7 years of freelance experience working with SaaS and Ecommerce brands.

When I took on this e-commerce client's SEO overhaul, the brief seemed straightforward: improve their organic rankings and traffic. They had a solid product catalog but virtually no content strategy, which meant starting from scratch with their SEO approach.

What made this project unique was the scale - we were dealing with over 3,000 products across 8 different languages, which meant creating and optimizing tens of thousands of pages. But here's where it got interesting: even though this was a traditional retail niche where you wouldn't expect much AI usage, I started noticing their content appearing in Claude and ChatGPT responses.

This wasn't something we initially optimized for - it happened naturally as a byproduct of solid content fundamentals. But it raised a crucial question: if LLMs were already mentioning our content, what if we could optimize specifically for that?

The challenge became figuring out how LLMs actually process content. Through conversations with teams at AI-first startups and my own testing, I realized everyone is still figuring this out. There's no definitive playbook yet for GEO (Generative Engine Optimization).

My client was tracking a couple dozen LLM mentions per month organically, which gave me a baseline to work from. But I needed to understand: does semantic markup actually matter for Claude? Do traditional SEO signals translate to AI optimization? And most importantly, how can we structure content so it's more likely to be featured in AI-generated responses?

The traditional approach would have been to double down on schema markup and structured data, assuming AI systems would parse it the same way Google does. But my gut told me this was wrong - LLMs operate fundamentally differently than search engines.

My experiments

Here's my playbook

What I ended up doing and the results.

Instead of trying to game the system with traditional SEO tactics, I developed what I call "chunk-level optimization" - structuring content so each section can stand alone as a valuable snippet that AI can easily extract and synthesize.

The Foundation: Understanding How LLMs Actually Work

First, I had to accept that LLMs don't consume pages like traditional search engines. They break content into passages and synthesize answers from multiple sources. This meant restructuring our entire content approach around self-contained sections rather than page-level optimization.
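To make that concrete, here's roughly what passage-level chunking looks like. This is my own illustrative sketch in Python, not how Claude or any other AI system actually splits pages internally - the heading heuristic and the size limit are arbitrary assumptions.

```python
# Minimal sketch of how a retrieval pipeline might split a page into
# heading-scoped passages. Illustrative only - real AI search systems
# use their own (undisclosed) chunking logic.
def split_into_passages(page_text: str, max_chars: int = 800) -> list[dict]:
    """Split plain text on headings, keeping each heading with its body
    so every passage can be understood on its own."""
    passages: list[dict] = []
    current_heading = "Introduction"
    buffer: list[str] = []

    def flush():
        body = " ".join(buffer).strip()
        if body:
            passages.append({"heading": current_heading, "text": body[:max_chars]})

    for line in page_text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Treat short, title-cased lines as headings (a crude heuristic).
        if len(line) < 80 and line == line.title() and not line.endswith("."):
            flush()
            current_heading = line
            buffer = []
        else:
            buffer.append(line)
    flush()
    return passages
```

The point isn't the code - it's that each passage carries its own heading and context, which is exactly what the framework below is designed around.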

Here's the framework I developed:

1. Chunk-Level Retrieval
Every paragraph needed to be self-contained with enough context to be understood independently. Instead of writing flowing articles, I created modular content blocks that could stand alone.

2. Answer Synthesis Readiness
I structured information with logical hierarchies that make it easy for AI to extract and recombine. This meant using clear topic sentences, supporting evidence, and explicit conclusions in each section.

3. Citation-Worthiness
The content had to be factually accurate with clear attribution. AI systems are more likely to reference content that includes specific data points, methodologies, and credible sources.

4. Topical Breadth and Depth
Instead of targeting single keywords, I created comprehensive resources covering all facets of topics. This increased the chances of being referenced for related queries.

5. Multi-Modal Support
I integrated charts, tables, and structured data that could be easily extracted and referenced by AI systems.

The key insight: semantic markup matters less than semantic structure. Claude doesn't read your schema markup, but it does understand well-organized, logically structured content.
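Here's a rough way to sanity-check whether a content block can stand alone. The heuristics below (opening pronouns, missing entity name, no concrete numbers, too short) are illustrative proxies I'd use for a quick audit - they are my own assumptions, not rules any LLM is confirmed to apply.

```python
# Rough audit for "self-contained" content blocks. The heuristics are
# illustrative proxies only, not anything Claude is known to check.
import re

CONTEXT_DEPENDENT_OPENERS = ("it ", "this ", "that ", "they ", "these ", "as mentioned")

def audit_chunk(chunk: str, entity: str) -> list[str]:
    """Return a list of warnings for a single content block."""
    issues = []
    lowered = chunk.lower().strip()

    if lowered.startswith(CONTEXT_DEPENDENT_OPENERS):
        issues.append("Opens with a pronoun/back-reference - add explicit context.")
    if entity.lower() not in lowered:
        issues.append(f"Never names '{entity}' - the chunk can't stand alone.")
    if not re.search(r"\d", chunk):
        issues.append("No specific number or data point - harder to cite.")
    if len(chunk.split()) < 40:
        issues.append("Very short - may lack enough context to be synthesized.")
    return issues

# Hypothetical usage:
warnings = audit_chunk(
    "It also works with most devices and is very popular.", entity="AcmeWidget"
)
print(warnings)  # four warnings for this example block
```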

For implementation, I used AI itself to help scale this approach. I built content generation workflows that could create thousands of pages following these principles, ensuring consistency across our massive content catalog.
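I'm not sharing the client's actual pipeline, but a minimal version of this kind of workflow looks something like the sketch below. It assumes the Anthropic Python SDK and a made-up product record - swap in whatever model, prompt, and catalog fields you actually use.

```python
# Minimal sketch of a generation workflow that enforces the chunk
# principles above. Assumes the Anthropic Python SDK (pip install anthropic)
# and a hypothetical product dict - not the author's actual pipeline.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CHUNK_BRIEF = """Write a product guide section about {topic} for {product}.
Rules:
- Open with a topic sentence that names the product and the topic.
- Include at least one concrete specification or number.
- End with an explicit takeaway. The section must make sense on its own."""

def generate_chunk(product: dict, topic: str) -> str:
    prompt = CHUNK_BRIEF.format(topic=topic, product=product["name"])
    prompt += f"\n\nProduct data: {product}"
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # any current Claude model works
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Hypothetical usage across a catalog:
product = {"name": "Example Trail Jacket", "weight_g": 310, "languages": ["en", "de"]}
section = generate_chunk(product, topic="sizing and fit")
```

The important part is the brief, not the model call: every generated section is forced to name the product, include a number, and close with a takeaway, so it stays self-contained at scale.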

Learn more about scaling content with AI in our content automation playbook.

Content Structure

Focus on chunk-level optimization rather than page-level SEO. Each section should be self-contained and synthesis-ready.

Testing Method

Track mentions across different LLMs using brand monitoring tools. Set up alerts for your content appearing in AI responses.
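If you don't have a monitoring tool in place yet, you can prototype the same idea with a short script: fire a handful of buyer-style prompts at a model on a schedule and log whether your brand shows up. Everything here (prompts, brand, cadence) is hypothetical, and results vary run to run - treat it as a trend indicator, not a precise metric.

```python
# DIY mention check - a rough stand-in for dedicated brand monitoring
# tools. Prompts, brand name, and schedule are all hypothetical.
import csv
import datetime
from anthropic import Anthropic

client = Anthropic()
BRAND = "example-store.com"
TEST_PROMPTS = [
    "What are the best waterproof trail jackets under $200?",
    "Where can I compare sizing for European hiking brands?",
]

def check_mentions() -> None:
    today = datetime.date.today().isoformat()
    with open("llm_mentions.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for prompt in TEST_PROMPTS:
            reply = client.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=700,
                messages=[{"role": "user", "content": prompt}],
            ).content[0].text
            writer.writerow([today, prompt, BRAND.lower() in reply.lower()])

check_mentions()
```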

Implementation

Use clear topic sentences, logical hierarchies, and factual accuracy. Structure content for easy extraction rather than traditional flow.

Results Tracking

Monitor LLM mentions monthly and correlate with traditional SEO metrics to understand the relationship between approaches.
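Once you have a few months of data, the correlation itself is simple to compute - the numbers below are invented purely to show the calculation:

```python
# Toy correlation between monthly LLM mentions and organic sessions.
# The figures are made up for illustration - export your own from
# whatever analytics and monitoring tools you use.
from statistics import correlation  # Python 3.10+

llm_mentions =     [24, 31, 38, 47, 52, 61]                      # per month
organic_sessions = [18200, 18950, 19400, 21100, 21800, 23050]    # per month

r = correlation(llm_mentions, organic_sessions)
print(f"Pearson r = {r:.2f}")  # a high r means the metrics move together,
                               # not that one causes the other
```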

The results were enlightening, though they challenged some of my assumptions about AI optimization. Over three months of testing with the chunk-level optimization approach, we saw some interesting patterns emerge.

LLM Mention Growth: We went from a couple dozen monthly mentions to tracking consistent references across Claude, ChatGPT, and Perplexity. The increase wasn't dramatic, but it was steady and sustainable.

Traditional SEO Impact: Interestingly, the content structured for AI optimization also performed well in traditional search. Google seemed to appreciate the clear, well-organized structure even though it wasn't optimized specifically for Google's algorithms.

Quality Over Quantity: The mentions we achieved were contextually relevant and accurate, which is more valuable than high-volume but incorrect references.

What surprised me most was that the content appearing in AI responses wasn't necessarily our highest-ranking pages in Google. LLMs were pulling from mid-tier content that happened to be well-structured and factually dense.

The timeline was also different than traditional SEO. While Google rankings can take months to improve, LLM mentions started appearing within weeks of publishing optimized content.

Learnings

What I've learned and the mistakes I've made.

Sharing so you don't make them.

Through this experiment, I learned that optimizing for AI requires a fundamentally different mindset than traditional SEO. Here are the key insights:

  • Structure trumps markup - Semantic HTML matters less than logical content organization

  • Synthesis over optimization - Think about how content will be recombined, not just consumed

  • Factual density wins - AI systems prefer content with specific data points and clear attribution

  • Self-contained sections - Each paragraph should provide complete context independently

  • Multiple touchpoints - Cover topics comprehensively rather than targeting single keywords

  • Faster feedback loops - AI mentions happen quicker than search rankings

  • Quality correlation - Content that works for AI often improves traditional SEO too

The biggest lesson? Don't abandon traditional SEO fundamentals for AI optimization. Build your GEO strategy on top of strong SEO foundations, not instead of them. The landscape is evolving too quickly to bet everything on optimization tactics that might be obsolete in six months.

Most importantly, focus on creating genuinely useful content for humans first. The best AI optimization is content that naturally aligns with how AI systems process and synthesize information - and that usually means well-structured, comprehensive, factually accurate content.

How you can adapt this to your business

My playbook, condensed for your use case.

For your SaaS / Startup

For SaaS companies looking to optimize for AI mentions:

  • Structure product documentation for easy AI extraction

  • Create comprehensive use-case content covering all customer scenarios

  • Include specific metrics and data points in case studies

  • Build self-contained help articles that work as standalone references

For your Ecommerce store

For e-commerce stores targeting AI optimization:

  • Create detailed product guides with technical specifications

  • Structure category pages with comprehensive comparison data

  • Build buying guides that cover all decision factors

  • Include sizing, compatibility, and usage information in structured formats

Get more playbooks like this one in my weekly newsletter