Last weekend I picked 50 small business websites, ran 250 customer-style prompts through ChatGPT and Perplexity, and logged whether each business got named in the answer. Five trades, ten businesses each, five prompts apiece. Here is the experiment, the result, and the five things the named businesses had in common.
Ten plumbers, ten dentists, ten lawyers, ten roofers, ten HVAC contractors. All US, all in mid-sized cities, all owner-operator businesses with a website that has been live for more than three years. For each trade I wrote five prompts in the language a real customer would type. Not "best plumber SEO keywords", but "who is a good emergency plumber in Austin", "recommend a family dentist in downtown Salt Lake City who takes new patients", and so on. Each prompt ran twice, once on ChatGPT and once on Perplexity, and I logged whether each business was named in the answer, named with a link, or not mentioned at all.
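If you want to replicate the logging step, here is a minimal sketch. It assumes the openai Python package and keys for both services (Perplexity exposes an OpenAI-compatible endpoint); the model names, the placeholder business name, and the crude substring match are my illustrative choices, not the exact harness I used.

```python
# Minimal sketch of the logging harness. Model names are illustrative.
from openai import OpenAI

chatgpt = OpenAI()  # reads OPENAI_API_KEY from the environment
perplexity = OpenAI(api_key="YOUR_PPLX_KEY", base_url="https://api.perplexity.ai")

def run_prompt(client, model, prompt):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def log_citation(answer, business_name):
    # Crude check: was the business named at all, and named with a link?
    named = business_name.lower() in answer.lower()
    linked = named and "http" in answer
    return "named+link" if linked else "named" if named else "not mentioned"

prompt = "who is a good emergency plumber in Austin"
for client, model in [(chatgpt, "gpt-4o"), (perplexity, "sonar")]:
    answer = run_prompt(client, model, prompt)
    print(model, log_citation(answer, "Example Plumbing Co"))
```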
Total runs, 500. Total citations across all 50 businesses, 81. That is a 16% citation rate, on prompts that were directly within the business's category and town.
The 81 citations were not spread evenly. Twelve of the 50 businesses absorbed 64 of them. The other 38 businesses split the remaining 17 between them, and 24 of those 38 were never named once across all 10 of their own prompt runs.
So the question is: what did the 12 visible businesses do that the 38 invisible ones did not?
Eleven of the twelve cited businesses had FAQ markup, the small bit of code that tells an AI "this is a question and this is the answer", on the homepage itself. Not on a separate FAQ page. On the homepage. The exact text inside that code, plain customer questions like "do you take new patients" and "do you do same-day callouts", showed up almost verbatim in the AI answers.
Of the 38 invisible businesses, only four had FAQ markup at all, and three of those four had it buried on a help page the AI never reached during the test.
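For concreteness, this is what that markup looks like: a schema.org FAQPage block in JSON-LD, sitting in the homepage HTML. The question and answer text here is hypothetical.

```html
<!-- FAQPage JSON-LD, placed in the <head> or <body> of the homepage -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do you take new patients?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. We take new patients most weeks. Call the front desk to check this week's openings."
      }
    },
    {
      "@type": "Question",
      "name": "Do you do same-day callouts?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, for emergencies inside the metro area, seven days a week."
      }
    }
  ]
}
</script>
```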
Cited businesses had a clean match on Google Business Profile, Yelp, the trade directory for their category, and the homepage of their own site. Same address line, same phone number, same business name spelling.
The invisible group was messy. One plumber had three different phone numbers across four listings. A dentist used "Dr Susan Liang DDS" on Google, "Liang Family Dental" on the website, and "Susan Liang DMD PC" on the state board record. To a human those are clearly the same practice. To an AI that has to decide whether to name them, the inconsistency was disqualifying.
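One way to catch this kind of drift before an AI does is to normalize every listing to the same shape and diff them. A minimal sketch, with the listing values pasted in by hand; the normalization rules are my own simple choices, not a standard:

```python
import re

def normalize(nap):
    """Lowercase, strip punctuation, collapse whitespace, so cosmetic
    differences do not mask (or fake) a real mismatch."""
    name = re.sub(r"[^a-z0-9 ]", "", nap["name"].lower()).strip()
    phone = re.sub(r"\D", "", nap["phone"])  # keep digits only
    address = re.sub(r"\s+", " ", nap["address"].lower()).strip()
    return (name, phone, address)

listings = {
    "website": {"name": "Liang Family Dental",  "phone": "801.555.0142",
                "address": "900 S Main St, Salt Lake City"},
    "google":  {"name": "Dr Susan Liang DDS",   "phone": "(801) 555-0142",
                "address": "900 S Main St, Salt Lake City"},
}

baseline = normalize(listings["website"])
for source, nap in listings.items():
    if normalize(nap) != baseline:
        print(f"mismatch at {source}: {normalize(nap)}")
```

Run on the dentist above, it flags the Google listing immediately: the phone matches once the formatting is stripped, but the name does not.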
The cited businesses opened their homepage with a sentence a customer would actually say. "We are a family dental practice in north Salt Lake City. We have been here since 2008 and take new patients most weeks." That is the AI's training data, in the AI's preferred shape.
The invisible group opened with a marketing slogan. "Your smile is our passion." "Comfort, care, confidence." Pretty on a billboard. Useless to an AI trying to extract a fact about whether the practice takes new patients.
Every cited business cleared the 15-review mark, and most cleared it by a wide margin: counts ranged from 47 to 312. The star average mattered less than the count. A 4.7 with 60 reviews beat a 5.0 with three.
This makes sense if you watch how AI engines reason. They do not have access to the underlying review text the way Google Search does, but they do see the count, and a count that can be cross-referenced against other sources works as a credibility signal. Three reviews reads as untested.
Cited plumbers had pages titled "Tankless water heater repair" and "Sewer line camera inspection". Cited lawyers had pages titled "Texas non-compete review for healthcare workers". The page title matched the customer's actual search.
The invisible group used category labels. "Services". "Practice areas". "What we do". A category label tells the AI nothing about whether you are the right business for this specific question.
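In title-tag terms, with a hypothetical business name, the gap looks like this:

```html
<!-- Cited pattern: the title names the exact job -->
<title>Tankless Water Heater Repair in Austin | Example Plumbing Co</title>

<!-- Invisible pattern: the title names a category -->
<title>Services | Example Plumbing Co</title>
```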
Want to know which side your business is on?
Run the free 60-second checker. Three customer-style prompts, run against your business in ChatGPT, with the answer logged for you. No credit card. You see the result before you decide if you want to fix anything.
Some things I expected to matter did not.
Domain age. The oldest domain in the cited group was 22 years old. The youngest was 14 months. Age was not a signal.
Backlink count. Two of the cited businesses had fewer than ten referring domains by Ahrefs's count. Two of the invisible group had over 200. The link graph was not the deciding factor for these prompts.
Site speed. Half the invisible group loaded faster than the cited group. The AI does not care that your homepage scores 94 on PageSpeed if it cannot extract a clean answer from it.
Blog volume. The cited group averaged 12 blog posts. The invisible group averaged 38. Quantity was not the variable.
The cited businesses were the ones that had made it easy for an AI to extract a fact, cross-reference it with two or three independent sources, and feel confident enough to name them. Everything above is a version of that.
Three actions, in order of leverage.
1. Add question-and-answer code to your homepage. The five questions a customer would actually ask, with one-to-two-sentence answers, wrapped in the FAQPage JSON-LD shown earlier. Getting the format right takes about an hour, and the sketch after this list shows one way to confirm it landed.
2. Audit your name, address, and phone number across Google Business Profile, your top three directories for the trade, and your own homepage footer. Make them identical. Same spelling, same suite number, same phone formatting.
3. Rewrite your homepage opening paragraph as a plain-English statement of what you are, where you are, and who you serve. Cut the slogan. Save it for the about page if it matters to you.
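For step 1, it is worth confirming the markup actually made it into the served page; templates and caching plugins sometimes strip script tags. A minimal sketch, assuming the requests package and a placeholder URL; validator.schema.org does the same job with no code:

```python
# Quick check that the homepage actually serves FAQPage JSON-LD.
import json
import re

import requests

def has_faq_markup(url):
    html = requests.get(url, timeout=10).text
    # Pull every JSON-LD block and look for an FAQPage type.
    # (Sites that nest types under "@graph" need one more unwrapping step.)
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, re.DOTALL | re.IGNORECASE,
    )
    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        if any(isinstance(i, dict) and i.get("@type") == "FAQPage" for i in items):
            return True
    return False

print(has_faq_markup("https://www.example-plumber.com"))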
If those three steps land, the next round of testing will probably move you out of the invisible group. The 12 visible businesses in this experiment are not doing anything more sophisticated than that.
How many businesses did you test?
Fifty. Ten plumbers, ten dentists, ten lawyers, ten roofers, ten HVAC contractors. Five customer-style prompts per business, run on both ChatGPT and Perplexity. That is 500 prompt runs in total.

What kind of prompts?
Customer-language prompts in the form a real person would type. For example: who is a good emergency plumber in Austin, what dentist near downtown Salt Lake City takes new patients, recommend a family lawyer in Charlotte. No marketing keywords, no industry jargon.

What made the biggest difference?
Adding question-and-answer code (FAQPage JSON-LD) to the homepage. The cited businesses had it on the homepage, not buried on a help page. The non-cited ones either lacked it or only had it on internal pages the AI never visited in our test.

Do reviews matter?
Yes. Every cited business had at least 15 Google reviews. The 4.8 star average across the group was less important than the count. Three reviews was a flat no.

Does this apply outside the US?
The mechanics are the same. The test in this article focused on US small businesses because that is the audience we sell to. UK and Australian readers can run the same prompts, swapping in their own city names, and the structural patterns hold.

Where should I start?
Run the free 60-second checker at getseoforai.com/checker to see what ChatGPT says about your business right now. The result tells you whether you are starting from zero or have a baseline to build on.

Should I get the Toolkit or the Workbook?
Both are one-time purchases, no subscription. The Toolkit is the obvious choice for most owner-operators; the Workbook is the time-rich, money-poor option.
Questions? Hit reply at [email protected].
SEO for AI