
Why Your Business Doesn't Show Up on ChatGPT

Dana Lampert · April 4, 2026 · 5 min read · AI Visibility

I watched an HVAC company owner in Dallas pull up ChatGPT on his phone and ask it to recommend the best HVAC company in his area. He had 22 years in business, 3,000+ jobs a year, a 4.8-star rating. ChatGPT returned three names. His was not one of them.

He assumed it was a bug. It was not. It was a data problem.

How ChatGPT decides who to recommend

When a homeowner asks ChatGPT for "the best HVAC company in Dallas," the model does not search Google, crawl Yelp, or visit your website. It constructs an answer from two sources: its training data (a static snapshot of the web, typically months old) and whatever structured data it can retrieve at inference time (JSON-LD markup, indexed data feeds, structured databases).

There is no PageRank equivalent. No ad auction. The model evaluates whatever structured, verifiable information it can find about businesses in a given category and geography, then assembles a response.

This means it is not choosing the "best" business. It is choosing the most evaluable one.

The business with the most complete, structured, verifiable data is the one the model can confidently name. Everything else gets skipped.

The three data gaps that make you invisible

Most businesses that are invisible to AI have the same three problems. All three need to be solved. Fixing one without the others does not move the needle.

Unstructured data

Your website says "We've proudly served the Dallas-Fort Worth area for over 20 years with top-quality HVAC solutions." To a human, that communicates experience. To a language model evaluating you against competitors, it is noise.

LLMs need structured facts: what services you perform, where you operate, how long you have been in business, what volume of work you handle. Discrete, parseable data points. Not sentences buried in marketing copy.

{
  "years_in_operation": 22,
  "primary_service_area": "Dallas-Fort Worth, TX",
  "service_categories": ["HVAC Installation", "HVAC Repair", "Preventive Maintenance"],
  "annual_jobs_completed": 3100
}

That block of JSON communicates more to an AI system than an entire "About Us" page. The model can compare those fields directly against other businesses and determine whether your company is a reasonable recommendation.

Marketing copy is written for humans. Structured data is written for machines. Almost every local service business has plenty of the former and none of the latter.

Unverified data

Even when structured data exists, LLMs weigh source reliability. A business claiming "22 years in operation" on its own website carries less weight than the same fact verified by a third party.

Google figured this out early. A website's own claims about itself were unreliable, so backlinks became the primary ranking signal: external, independent endorsements carried more information than self-asserted claims. LLMs apply the same principle. A business saying "we complete 3,000 jobs per year" on a homepage is an assertion. The same metric computed from authenticated QuickBooks data is evidence.

Most local businesses have no verified operational data anywhere on the web. Their only structured data is name, address, phone number, and whatever Google Business Profile contains. That is not enough for an AI system to meaningfully evaluate them.

Data in the wrong places

GPT-4's training data has a knowledge cutoff months in the past. If you opened a second location, added a new service line, or tripled your job volume last year, the model does not know.

Retrieval sources are the second channel: JSON-LD markup embedded in web pages, structured databases, and emerging standards like llms.txt files that some AI systems and indexing pipelines are beginning to use as a navigation layer. These give the model access to current information, but only if that information exists in a format the system can parse.
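An llms.txt file is a plain markdown document served at the root of your site that points AI crawlers to machine-readable pages. A minimal sketch of the format is below; the business name, URLs, and page descriptions are hypothetical placeholders, not a real deployment:

```markdown
# Acme Air & Heat

> HVAC installation, repair, and preventive maintenance serving
> Dallas-Fort Worth since 2004.

## Company
- [Services](https://example.com/services.md): Service categories and coverage areas
- [About](https://example.com/about.md): Years in operation, licensing, job volume

## Locations
- [Dallas office](https://example.com/locations/dallas.md): Address, hours, contact
```

The format deliberately mirrors how a model consumes context: a title, a one-line summary, and a short list of links to clean markdown versions of your key pages.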

If your business data lives only in Google Business Profile, Yelp, and a WordPress site with no structured markup, you are relying entirely on stale training data. When that data is incomplete or outdated, the model either skips you or generates inaccurate information about you. Both outcomes are bad.

SEO solves a different problem

Good SEO gets you found on Google. That still matters. But SEO and AI discoverability are parallel channels with different mechanics, and solving one does not solve the other.

LLMs do not crawl websites the way Googlebot does. They do not index pages into a search corpus. They do not rank results by keyword relevance. Keyword density, meta descriptions, and internal linking have no direct mechanism to influence what a language model says about your business.

The gap is not that businesses have bad SEO. Most agencies do solid work on the Google side. The gap is that an entirely new channel is growing fast: a meaningful share of consumers now research local services through AI chat interfaces before making a hiring decision, up from near zero two years ago, and almost nobody has the data infrastructure for it. A business with a perfect SEO score and zero structured operational data is fully optimized for one channel and invisible on the other.

What to do about it

Open ChatGPT, Perplexity, and Claude. Ask each one to recommend a business in your category and city. Note whether you appear, whether the information is accurate, and what competitors show up instead. If ChatGPT says you have been in business for 15 years when the real number is 22, that is a data accuracy problem. If it lists services you no longer offer, that is a staleness problem.

Then check your website for Schema.org LocalBusiness markup in JSON-LD. Most local business websites either have none or have incorrect markup auto-generated by a plugin.
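For reference, valid LocalBusiness markup is a JSON-LD block embedded in the page's HTML. The sketch below uses Schema.org's `HVACBusiness` type (a subtype of `LocalBusiness`); every value shown — name, phone number, URL, review count — is an illustrative placeholder, not real data:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HVACBusiness",
  "name": "Acme Air & Heat",
  "url": "https://example.com",
  "telephone": "+1-214-555-0100",
  "foundingDate": "2004",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Dallas",
    "addressRegion": "TX",
    "addressCountry": "US"
  },
  "areaServed": "Dallas-Fort Worth, TX"
}
</script>
```

If your site has a block like this, check that the values match reality; plugin-generated markup often ships with a default type, a stale phone number, or an empty address.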

Then ask the harder question: does your operational data exist in structured form anywhere on the public web? Not your address. Your actual operating metrics: job volume, service mix, years of operation, geography, repeat customer behavior. For almost every local service business, the answer is no. That data lives inside QuickBooks, ServiceTitan, Jobber, or whatever system runs the operation. It has never been extracted, structured, and published in a format AI can read.

This is the problem TrueSignal was built to solve. We connect to QuickBooks, ServiceTitan, or whatever system actually runs your operation, pull the real numbers, and publish them in a format AI systems can read. Three layers: server-rendered HTML, JSON-LD, and canonical JSON. Refreshed monthly. The business cannot edit, override, or selectively exclude any of it. That is the point.

The result is a verified operational record that increases the likelihood that AI systems can accurately evaluate and cite your business when someone asks for a recommendation.
