What if the secret to Generative Engine Optimization (GEO) isn’t some new trick, but a return to the fundamentals? The technical, content, and tracking requirements for serving content to an AI bot look a lot like the best practices of technical SEO from a few years ago.
The industry can’t even agree on what to call it – GEO, AEO, LLMO – but we can all see the shift happening. What started as a few strange new bot visits in analytics has turned into a cultural moment that’s captured the attention of both retailers and consumers. But if you’ve been around long enough to remember the major SEO shakeups, this won’t feel new.
For retailers, GEO isn’t about ranking in prompts but about becoming the trusted source LLMs cite when describing your products, your quality, or your brand story. If you can’t control how your brand is being referenced, you’ve already lost.
The Crossovers Between GEO and SEO
LLMs didn’t create a new world, but they did create a new way to explore the one that already exists. And they’re relying on us to help them understand it.
That’s why those who have always cared about brand quality and SEO fundamentals will continue to win in the GEO era. Building an index of the internet is messy, expensive, and slow. Anyone who’s ever tried to make sense of modern search results knows it’s not as simple as “click the first link.” There’s missing context, fragmented information, and a lot of noise once you land on any given page of the internet.
If indexing and ranking were easy, we’d see more players in the search engine market. Honestly, I’m not convinced that the current wave of AI startups has the time or money to reinvent what Google spent decades refining. OpenAI has already admitted to using Bing’s API for information, and plenty of evidence suggests that models like ChatGPT are pulling directly from scraped Google results. So, for now, good SEO still matters: ranking well in traditional search is the starting point for being included in generative models.
Applying SEO Fundamentals to GEO
If you’ve been in SEO for any length of time, what’s happening with GEO should feel familiar. “Technical SEO” was once a buzzword, but the core idea – ensuring access and functionality before anything else – has never gone away. The same principle applies here. If a bot can’t reach your content, it can’t use it. Optimizing for LLMs is much more than just letting them into your site, but that is the first step in making sure you can show up at all. Think of this process as a spectrum of GEO readiness: Access → Structure → Content Depth → Visibility → Feedback. Let’s walk through that spectrum with an example: optimizing a collection of spring jackets we want to carry.
Start with access
Check whether LLM bots are blocked from your site; you’ll be looking for specific user agents like ChatGPT-User, OAI-SearchBot, Google-Extended, PerplexityBot, and Claude-SearchBot. If bots are getting blocked at the edge layer, they can’t crawl your content. Work with your development and security teams to unblock the verified bot IP ranges (Claude, ChatGPT, etc.) so they can crawl uninterrupted. Monitor crawl rates so they don’t impact site performance, and throttle if needed. Most major LLM crawlers do respect robots.txt, so make sure yours isn’t disallowing them either.
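If you want a quick way to spot-check this yourself, a small script can request a page with each crawler’s user agent and compare the responses. A minimal sketch, in Python: the URL is a hypothetical placeholder, and since many edge layers verify bots by IP range rather than user agent alone, a clean result here only rules out user-agent-based blocking.

```python
# Probe how the edge layer responds to LLM crawler user agents versus a
# normal browser. URL is a hypothetical placeholder.
import requests

URL = "https://www.example-retailer.com/collections/spring-jackets"

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "GPTBot": "GPTBot/1.0",
    "ChatGPT-User": "ChatGPT-User/1.0",
    "OAI-SearchBot": "OAI-SearchBot/1.0",
    "PerplexityBot": "PerplexityBot/1.0",
    "Claude-SearchBot": "Claude-SearchBot/1.0",
}

for name, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    # A 403/503 for a bot UA when the browser UA gets a 200 usually means
    # a WAF or bot-management rule is blocking at the edge.
    print(f"{name:16} status={resp.status_code} bytes={len(resp.content)}")
```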
Show them where to go
Whether optimizing for search engines or bots, sitemaps ensure efficient crawls and highlight your best content. Don’t bury top content deep in the taxonomy. Use HTML and/or XML sitemaps so bots hit your most important pages early, and keep them updated and easily accessible. For spring jacket prompts (“best spring jackets under $200,” “new jackets for spring”), ensure your new-release pages and relevant PLPs are included.
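Here’s a minimal sketch of what that looks like in practice: generating a small XML sitemap that puts the spring-jacket PLP and new-release pages front and center. The URLs and dates are hypothetical placeholders.

```python
# Sketch: build a minimal XML sitemap that surfaces priority pages.
from xml.etree.ElementTree import Element, SubElement, tostring

urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")

priority_pages = [
    ("https://www.example-retailer.com/collections/spring-jackets", "2025-03-01"),
    ("https://www.example-retailer.com/collections/new-arrivals", "2025-03-01"),
]

for loc, lastmod in priority_pages:
    url = SubElement(urlset, "url")
    SubElement(url, "loc").text = loc
    SubElement(url, "lastmod").text = lastmod  # freshness signal for crawlers

print(tostring(urlset, encoding="unicode"))
```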
Make your content available
Ensure your content isn’t hidden behind JavaScript or complex rendering—only Gemini can reliably parse heavy JS today. Rendering JavaScript is expensive and most LLM bots won’t bother (even Googlebot avoids doing it in real time). Keep key content simple, clean, and server-side rendered. Your visibility depends on what LLMs can comprehend.
Test by loading the page with JavaScript disabled in Chrome. If nothing loads, talk with your dev team immediately.
Critical data (descriptions, features, pricing, reviews) should be server-side. Non-critical elements can be client-side. If the LLM can’t see the jacket price, it can’t include it in “best spring jackets under $200” prompts.
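A rough way to verify this without opening DevTools: fetch the raw HTML the way a non-rendering crawler would (plain HTTP, no JavaScript) and check whether the critical fields appear. The URL and expected strings below are hypothetical; swap in a real PDP and values from your own catalog.

```python
# Fetch raw HTML (no JavaScript executes here, same as most LLM crawlers)
# and confirm critical PDP fields are present in the server-side response.
import requests

PDP_URL = "https://www.example-retailer.com/products/packable-rain-jacket"
html = requests.get(PDP_URL, timeout=10).text.lower()

critical_fields = {
    "price": "$128",
    "description": "water-resistant",
    "reviews": "customer reviews",
}

for field, needle in critical_fields.items():
    status = "ok" if needle.lower() in html else "missing (likely client-side rendered)"
    print(f"{field:12} {status}")
```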
Then focus on structure
Once bots can access pages and know where to go, they need to make sense of the content. Human prompts often include multiple layered questions, so LLMs perform “query fanout,” running multiple related searches to answer well. Structure becomes critical—headings, bullets, tables, and FAQs help both humans and crawlers quickly locate the right data (“chunking”). Bots won’t dig through dense paragraphs to find a material detail, and neither will your shoppers.
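To make “chunking” concrete, here’s a toy sketch of how a retrieval pipeline might split a PDP into heading-led sections. It assumes the beautifulsoup4 library, and the HTML fragment is made up. Pages with clear headings produce clean, self-contained chunks; walls of text don’t.

```python
# Split a page into heading-led chunks, the way a retrieval pipeline might.
from bs4 import BeautifulSoup

html = """
<h2>Materials</h2><ul><li>Recycled nylon shell</li><li>DWR finish</li></ul>
<h2>Fit and Sizing</h2><p>Runs true to size; hits at the hip.</p>
<h2>FAQ</h2><p>Is it waterproof? Water-resistant, not fully waterproof.</p>
"""

soup = BeautifulSoup(html, "html.parser")
for heading in soup.find_all("h2"):
    # Gather everything between this heading and the next h2.
    body = []
    for sibling in heading.find_next_siblings():
        if sibling.name == "h2":
            break
        body.append(sibling.get_text(" ", strip=True))
    print(f"[{heading.get_text(strip=True)}] {' '.join(body)}")
```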
Deliver rich information
To answer complex prompts effectively, bots need a wide range of detailed product information. Sparse PDPs force LLMs to infer—and often guess incorrectly. Just like classic Google optimization, richer data drives stronger visibility and ROI.
Example: Boston Proper saw a 16.4x ROI lift on organic and paid PLA campaigns by enriching product data with themes, occasions, hem lengths, neck styles, and materials using Stylitics. As LLMs become the next generation of search engines, providing detailed, structured data will determine how frequently—and how accurately—your brand appears in their results.
We’re starting to see what I will call the ‘GEO feedback loop’ where visibility in AI models pushes retailers to redesign their PDPs to be more readable, which then further improves model visibility. In other words, once brands see their products or attributes surfaced in LLM answers, they double down on clarity and structure and the cycle reinforces itself.
Think about what your customers actually want to know: product details, materials, sizing, price, occasions, how to pair it, and brand story. That information should be clear and easy to parse on the appropriate pages. Crawling, sorting, and parsing take effort, and the easier you make it, the more you’re rewarded by all the good kinds of bots.
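One concrete way to expose that information is schema.org Product markup embedded on the PDP as JSON-LD. The sketch below shows the general shape; the product, price, and attribute values are hypothetical, and your platform may already emit some of this automatically.

```python
# Sketch of schema.org Product markup: the structured layer that makes a
# PDP unambiguous to parsers. Embed the output in a
# <script type="application/ld+json"> tag on the page.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Packable Rain Jacket",
    "material": "Recycled nylon with DWR finish",
    "offers": {
        "@type": "Offer",
        "price": "128.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "hemLength", "value": "Hip length"},
        {"@type": "PropertyValue", "name": "neckStyle", "value": "Stand collar"},
        {"@type": "PropertyValue", "name": "occasion", "value": "Commute, travel"},
    ],
}

print(json.dumps(product, indent=2))
```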
Measuring GEO
Measurement is always the tricky part, and this is where GEO gets messy. Unlike SEO, there’s no rank to chase or universal “page one” to aim for. Generative results are personalized, contextual, and constantly changing. Two users can ask the same question and get completely different answers.
Some third-party tools retailers use, like SEMrush, are experimenting with ChatGPT APIs to simulate how results are served, but the reality is that personalization makes that data unreliable: change a few words, and you’re testing an entirely different prompt. Companies like Profound are developing persona-based instances that let you query directly for what you want to track, but these still lack the precision of a personal LLM conversation.
Overall visibility is a little easier to track. Enterprise retailers are experimenting with brand-mention tracking inside LLM outputs using tools like Perplexity’s citation data and custom GPT wrappers, and others are piloting share-of-voice analysis for branded prompts (e.g., how often a brand is named among ‘top luxury handbag’ queries). Without large-scale data directly from OpenAI or Anthropic, there’s no way to track visibility precisely, but these early experiments signal the shape of GEO analytics to come.
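To make share-of-voice analysis concrete, here’s a minimal sketch that counts brand mentions across a sample of saved LLM answers. The brands and answer texts are invented; in practice, the answers would come from whatever sampling pipeline or wrapper you’re piloting.

```python
# Crude share-of-voice: what fraction of sampled answers name each brand?
from collections import Counter

# Hypothetical sample: answers saved from repeated runs of the prompt
# "what are the top luxury handbag brands?"
answers = [
    "Top picks include BrandA and BrandB for timeless leather totes.",
    "BrandB and BrandC both make well-reviewed everyday bags.",
    "For luxury handbags, BrandA remains the most frequently cited choice.",
]
brands = ["BrandA", "BrandB", "BrandC"]

mentions = Counter()
for text in answers:
    for brand in brands:
        if brand.lower() in text.lower():
            mentions[brand] += 1

for brand, count in mentions.most_common():
    print(f"{brand}: {count}/{len(answers)} answers ({count / len(answers):.0%})")
```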
Monitor your traffic trends
Before you spend on new tools to understand where you stand, look at what you already have. Start with changes in referral and direct traffic. Some visits from LLMs are labeled correctly in analytics tooling with referrers, but others appear as direct traffic. If you see unexplained bumps in direct visits that don’t align with other marketing efforts, increased LLM visibility could be a contributing factor. You can also look for parameters like ?utm_source=chatgpt.com attached to your landing URLs and check how those visits are being categorized. It’s not perfect, but it’s data you already own. Try pairing it with your GA4, CDP, or CRM pipelines to correlate exposure with downstream traffic and conversion patterns and get a sense of who’s visiting.
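As an illustration, here’s a small sketch that classifies landing-page URLs from an analytics export by utm_source. The chatgpt.com value is the pattern noted above; the other sources in the set are assumptions you should verify against your own data.

```python
# Classify landing-page URLs by LLM-related utm_source values.
from urllib.parse import parse_qs, urlparse

# "chatgpt.com" appears in the wild; the others are assumed values to verify.
LLM_SOURCES = {"chatgpt.com", "perplexity", "copilot"}

def classify(url: str) -> str:
    source = parse_qs(urlparse(url).query).get("utm_source", [""])[0]
    return "llm-referred" if source in LLM_SOURCES else "other"

sample_visits = [
    "https://www.example-retailer.com/collections/spring-jackets?utm_source=chatgpt.com",
    "https://www.example-retailer.com/products/packable-rain-jacket",
]

for url in sample_visits:
    print(f"{classify(url):12} {url}")
```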
Bot traffic as an indication
The other thing to watch closely is bot activity. In SEO, we’ve long tracked Googlebot visits as a proxy for content value, and the same logic applies here. If LLM bots are crawling your site more frequently or revisiting certain pages, those pages likely hold information the models consider useful. LLMs value freshness and will revisit pages they deem worthwhile to make sure they have the most up-to-date information. Reviewing your server logs for which pages and topics the LLM bots hit most can show where you’re already getting visibility. Over time, those crawl patterns could become one of the best early indicators of your GEO visibility and value.
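If you have access to raw server logs, a few lines of scripting can turn them into a crude crawl-frequency report. This sketch assumes a combined-format access log at a hypothetical path; adapt the regex and bot list to your own setup.

```python
# Count LLM-bot hits per URL from an access log, as a rough proxy for
# which pages the models find valuable.
import re
from collections import Counter

LLM_BOTS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot", "ClaudeBot")
# Matches a combined-format line: request, status, size, referrer, user agent.
line_re = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) [^"]*" \d+ (?:\d+|-) "[^"]*" "(?P<ua>[^"]*)"'
)

hits = Counter()
with open("access.log") as log:  # hypothetical path; point at your own logs
    for line in log:
        match = line_re.search(line)
        if match and any(bot in match.group("ua") for bot in LLM_BOTS):
            hits[match.group("path")] += 1

# The most-revisited URLs hint at what the models consider worth refreshing.
for path, count in hits.most_common(10):
    print(f"{count:6d}  {path}")
```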
Experiment on your own
While you won’t be able to track visibility at scale, try some simple prompts to see whether your brand is even in the running. If you’re a retailer specializing in athleisure, ask the LLM to recommend some athleisure brands and explain why each is a good fit. Make note of which brands show up and what the LLMs say about each one. Then look at the citations to see where the information is coming from: is it the About Us pages of the sites, or third-party review sites? What kinds of pages are surfacing: PDPs, homepages, or something else entirely? Track the trends in what gets displayed, and start compiling the places where your brand may not have the right page, tone of voice, or authority to be shown. Don’t expect the results to be the same for everyone, but use this as a blueprint to learn what’s working for others.
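If you want to repeat this exercise a little more systematically, you can script it against a model API. A rough sketch, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; note that plain API responses won’t carry the personalization or browsing citations of a real ChatGPT session, so treat the output as directional only.

```python
# Sample the same prompt several times; generative answers vary run to run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Can you recommend some athleisure brands for everyday wear, "
    "and explain why each is a good fit?"
)

for i in range(3):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- run {i + 1} ---")
    print(resp.choices[0].message.content)
```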
Where We Go From Here
GEO today feels a lot like SEO did in its early years – unpredictable, full of noise, ever-changing, but rich with opportunity. We don’t have all the metrics yet, and much of the process is still opaque. GEO is not just SEO, but the playbook for winning likely hasn’t changed.
The brands that succeed will do what good SEOs have always done: make information clear, accessible, and genuinely valuable. Focus on content that answers real questions and a technical foundation that makes it easy to find. Retailers who invest in structured, data-rich PDPs will shape how AI describes them and, by extension, how consumers discover them. Build trust with both users and the machines learning from them.
For those of us who’ve been optimizing for visibility for years, GEO isn’t a new language, just a new interface. Retailers who treat GEO as part of their brand architecture and not just their search strategy will control how AI describes them. Everyone else will be summarized by someone else’s data.
Checklist
- Check that all LLM bots have access to the site at the edge layer and aren’t disallowed in robots.txt
- Ensure the sitemap is up to date with your most important content
- Load a few page types in Chrome with JavaScript disabled and ensure that important information like descriptions, feature bullets, and pricing renders on the page
- Identify your most important pages and check that they use proper heading tags and that content is well formatted with bullets, tables, or FAQs for easy scannability
- Check PDPs to ensure that all rich information is pulled into feature bullets or a specifications section, covering hem lengths, materials, neck styles, themes, occasions, etc.