ChatGPT is not one surface. It's two.
Most people think of ChatGPT as a single thing. It is technically two overlapping systems. The first is the underlying model — GPT-4, GPT-5, whichever is current — which has a frozen knowledge base from its training cutoff and answers from memory when the query doesn't require fresh information. The second is ChatGPT Search: a live-browse layer that fetches current sources via the OAI-SearchBot crawler and cites them in real time. Optimizing for ChatGPT means optimizing for both layers separately and simultaneously.
Winning the training-time layer
This is the slow, deep, compounding game. The model learns from a filtered snapshot of the open web plus carefully curated high-authority sources. To be remembered well, you need entity stability (the model has to know what "your brand" refers to without ambiguity), presence in sources the training pipeline weights heavily (Wikipedia, Wikidata, major publications, authoritative directories), and enough consistent phrasing around your brand's core claims that they become memorized associations.
The tactical moves: get your brand into Wikidata with a clean item, get a Wikipedia article if you're notable enough, earn citations in the 30–50 publications the training data oversamples, and ensure every mention of your brand across the open web uses the same entity phrasing. Boring. Slow. Decisive over a 12-month horizon.
Winning the browse-time layer
This is the fast game. When ChatGPT Search activates, OAI-SearchBot fetches candidate pages in real time, and the model cites the best few inline. To win here, your content needs to be crawlable by OAI-SearchBot (check your robots.txt — and read our LLM visibility audit guide first), structurally clear, and schema-rich, with an extractable TL;DR block that directly answers the priority buyer question.
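What "schema-rich with an extractable TL;DR" can look like in practice: a minimal sketch of a schema.org Article object rendered as JSON-LD, where the `description` field doubles as the TL;DR answer. All names, URLs, and field values below are illustrative placeholders, not a guaranteed citation recipe.

```python
import json

def article_schema(headline: str, tldr: str, url: str, org_name: str) -> dict:
    """Build a minimal schema.org Article object as a JSON-LD dict.

    Every value passed in here is a placeholder for illustration.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "description": tldr,  # doubles as the extractable TL;DR block
        "url": url,
        "author": {"@type": "Organization", "name": org_name},
    }

schema = article_schema(
    headline="How ChatGPT Search picks its citations",
    tldr="TL;DR: OAI-SearchBot fetches candidate pages live; clear structure wins.",
    url="https://example.com/chatgpt-search-citations",  # hypothetical URL
    org_name="Example Co",
)
# Emit the JSON-LD you would embed in a <script type="application/ld+json"> tag.
print(json.dumps(schema, indent=2))
```

The point is not the exact fields but that the machine-readable summary and the on-page answer say the same thing, so an extraction pass can lift either one.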
The browse layer moves in weeks, not months. Ship a fresh, extractable, schema-rich article on a priority topic, and you can see it cited in ChatGPT Search within the same week. Most brands never do, simply because they don't know the surface exists.
The robots.txt mistake that kills everything
Many brands block GPTBot in robots.txt without realizing it. Sometimes it's a deliberate, well-meaning "protect our content from AI training" decision. More often it's a robots.txt copy-pasted from a template someone found online. Either way, it's a self-inflicted wound: if GPTBot can't fetch your content, the model can't learn it at training time, and if OAI-SearchBot can't fetch your content, the model can't cite it at browse time. Check your robots.txt today and allow both agents. See our own llms.txt and robots.txt files for reference.
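The check can be automated with the standard library's robots.txt parser. A minimal sketch, assuming a robots.txt that explicitly allows both crawlers; the `ROBOTS_TXT` content below is a hypothetical example, so substitute your own file's text or live URL.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; replace with your site's actual file.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: *
Disallow: /private/
"""

def agent_allowed(robots_txt: str, agent: str, path: str = "/") -> bool:
    """Return True if `agent` may fetch `path` under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, path)

for bot in ("GPTBot", "OAI-SearchBot"):
    print(bot, "allowed:", agent_allowed(ROBOTS_TXT, bot))
```

Pointing `RobotFileParser.set_url()` at `https://yourdomain.com/robots.txt` and calling `read()` does the same check against the live file.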
Measuring what's working
Set up a rotating prompt panel of 30–100 queries specific to your category. Run them in ChatGPT, with and without browse as separate runs, on a regular cadence — weekly for browse, monthly for model memory. Track: was your brand named, in what position, with what context, and alongside which competitors. This is your only honest signal. Vanity metrics won't save you here.