Online Reviews Are Bullsh*t (And Here's Why)

When was the last time you wrote a review?

When was the last time you wrote a review for a service business? Not a restaurant. Not a hotel. A service business — your dentist, your accountant, your marketing agency, your roofer. Without a gift card prompt, a discount, a follow-up email, or a text from the owner asking?

Yeah. Me neither.

The reason you didn't write one is the same reason most reviews you read are suspect. The online review economy runs on incentivized writing, fabricated profiles, and bulk-upload campaigns — and Google's algorithm can't tell the difference. Neither can Clutch. Neither can DesignRush. The metric is broken. And the agencies optimizing hardest for it are about to find out that serious buyers stopped trusting it years ago.

This post is about what's actually broken, what's replacing it, and how to position your business on the right side of the shift. Some of you will hate this. Most of you already know it.

The five patterns of fake reviews

I won't name specific agencies — I don't need a defamation suit — but every pattern below is publicly observable on Google Maps, Clutch, and DesignRush right now. Pull up any "top-10 Sioux Falls marketing agency" list and check for yourself.

The cluster pattern

A top-ranked Sioux Falls marketing agency went from 12 Google reviews to 47 in a 6-week window in 2024. Then nothing for 9 months. Real customer behavior doesn't pulse like that — coordinated review campaigns do.

Chart 1 — Review accumulation patterns

Fake-volume spike vs. organic growth over 24 months

[Chart: review count (0–50) over months M1–M24. Cluster pattern (fake): spike from 12 → 47 in 6 weeks, then silence. Organic growth pattern: steady climb.]

Real customer behavior compounds steadily. Bulk-upload campaigns spike, plateau, and silence — visible in any agency's review timeline.
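
If you want to run the cluster test yourself, it's mechanical enough to script. A minimal sketch, assuming you've copied the public review dates off a profile by hand (there's no official API for reviewer timelines); the dates below are hypothetical, mirroring the 12-to-47 spike described above:

```python
from datetime import date, timedelta

def flag_review_bursts(review_dates, window_days=42, burst_threshold=20):
    """Flag any sliding window where review volume spikes far above baseline.

    window_days: 42 days, roughly the 6-week spike described above.
    burst_threshold: reviews per window that would be implausible for the profile.
    """
    dates = sorted(review_dates)
    bursts = []
    for i, start in enumerate(dates):
        end = start + timedelta(days=window_days)
        count = sum(1 for d in dates[i:] if d <= end)
        if count >= burst_threshold:
            bursts.append((start, end, count))
    return bursts

# Hypothetical timeline: 12 reviews spread over two years, then 35 more
# landing inside a single 6-week window.
organic = [date(2022, 6, 1) + timedelta(days=60 * k) for k in range(12)]
spike = [date(2024, 9, 1) + timedelta(days=k) for k in range(35)]
print(flag_review_bursts(organic + spike))  # only the September windows flag
```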

The look-alike phrasing

Three reviews in a row on the same profile, same week, same template: "Highly recommend!" + "Great team to work with!" + "Awesome experience!" — verbatim across all three. Real humans don't write identical reviews.
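
Template phrasing is just as easy to catch programmatically. A minimal sketch using Python's standard difflib; the 0.85 threshold is a judgment call for short service-business reviews, not an established standard:

```python
from difflib import SequenceMatcher
from itertools import combinations

def flag_lookalike_reviews(reviews, threshold=0.85):
    """Return pairs of reviews whose text is suspiciously similar (1.0 = verbatim)."""
    flagged = []
    for (i, a), (j, b) in combinations(enumerate(reviews), 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            flagged.append((i, j, round(ratio, 2)))
    return flagged

reviews = [
    "Highly recommend! Great team to work with! Awesome experience!",
    "Highly recommend! Great team to work with! Awesome experience!",
    "They rebuilt our site in March and organic leads doubled by June.",
]
print(flag_lookalike_reviews(reviews))  # [(0, 1, 1.0)]
```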

The single-review profile

Reviewer account created the same week the review was posted. Zero other reviews ever. Default Google avatar. Public reviewer history is one click away on Google Maps. Legitimate reviewers usually have a footprint — review-farm accounts don't.

The implausible cross-industry reviewer

Same Google Maps account leaves 5-stars for a dentist in Sioux Falls, a roofer in Brookings, a marketing agency in Sioux Falls, a med spa in Sioux City, and a real estate agent in Watertown — all within 60 days. That's not a customer. That's a review farm account on rotation. They exist. They're for sale on Fiverr.
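
The single-review and cross-industry patterns are both profile-level checks, so one sketch covers them. Every field below is hypothetical: reviewer histories have to be assembled by hand from the public Maps page, since Google exposes no official API for them.

```python
from datetime import date

def profile_red_flags(profile):
    """Collect red flags from a reviewer profile assembled by hand."""
    flags = []
    if profile["total_reviews"] <= 1:
        flags.append("single-review account")
    if (profile["first_review_date"] - profile["account_created"]).days <= 7:
        flags.append("account created the same week as its first review")
    if profile["default_avatar"]:
        flags.append("default avatar, no footprint")
    # Cross-industry rotation: several unrelated categories in a short span.
    categories = set(profile["categories_reviewed"])
    if len(categories) >= 4 and profile["days_spanned"] <= 60:
        flags.append(f"{len(categories)} unrelated industries in {profile['days_spanned']} days")
    return flags

# Hypothetical farm account matching the rotation described above.
account = {
    "total_reviews": 5,
    "account_created": date(2025, 3, 1),
    "first_review_date": date(2025, 3, 4),
    "default_avatar": True,
    "categories_reviewed": ["dentist", "roofer", "marketing agency", "med spa", "realtor"],
    "days_spanned": 58,
}
print(profile_red_flags(account))
```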

The undisclosed incentive

Reviews that openly mention "got a discount for leaving this," or business replies thanking the reviewer for "leaving us a Google review per our referral program." Slips like these expose the incentive programs running underneath. The FTC's 2024 rule on review fraud requires explicit disclosure of incentivized reviews, and most service businesses are out of compliance.
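
Incentive slips are the easiest pattern to scan for in bulk. A minimal sketch; the phrase list is illustrative, not exhaustive:

```python
import re

# Phrases that betray an incentive, in the review text or the owner's reply.
INCENTIVE_PATTERNS = [
    r"got a discount",
    r"referral program",
    r"gift card",
    r"in exchange for",
]

def mentions_incentive(text):
    """True if review or reply text hints at a paid or incentivized review."""
    return any(re.search(p, text, re.IGNORECASE) for p in INCENTIVE_PATTERNS)

print(mentions_incentive("Thanks for leaving us a Google review per our referral program!"))  # True
```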

The data behind all of this: peer-reviewed research estimates 30–40% of online service-business reviews are incentivized or fabricated. The FTC stepped in with a 2024 rule precisely because the problem got bad enough to require federal intervention. This isn't conspiracy theory — it's documented, prosecutable, and happening every day in every market, including ours.

The structural truth: "By the algorithm's design, my agency gets buried under shops doing one-tenth the actual work. That's not data. That's politics."

Why Google can't fix it

Google's local ranking algorithm can't differentiate between a $200-a-month one-off cleaning service and a $12,000-a-month B2B specialist on a multi-year retainer. It counts stars. A high-volume discount shop with 500 incentivized reviews ranks above a high-trust specialist with 8 deep client relationships every single time.

That's not a bug. That's how the system was designed in 2009, when "more reviews equals more signal" was a reasonable assumption. It isn't anymore. The volume-first model rewards review-farm behavior and punishes the businesses doing serious work for fewer, higher-value clients.

I run Gravity Growth. We bill $5,000–$12,000 per month per client, on annual contracts, with multi-year retention. We serve 10–12 B2B companies at any given time. We physically cannot generate the review volume a 200-client low-ticket shop produces.

Chart 2 — The structural unfairness

Review volume vs. revenue per client — inverse correlation

[Chart: monthly revenue per client ($200–$15K) against review count (10–500+). Gravity Growth: ~8 reviews at $8K/mo. Discount shop: 500 reviews at $200/mo.]

The inverse correlation Google can't see: more reviews almost always means less revenue per client — because the agencies generating volume are serving low-ticket, high-volume work, not premium specialist engagements.

Time in market is the force multiplier you can't buy. But you can condense it.

The legacy agencies have 30, 40, 50 years of relationships, reputation, and review history. You can't buy that. A new agency starting in 2026 can't backdate trust signals. That's the real moat — and it's the reason "best marketing agency in Sioux Falls" lists keep surfacing the same 6–8 names every year.

But here's what's changing. AI search engines — ChatGPT, Perplexity, Claude, Gemini, Google AI Overview, Bing Copilot — are starting to weight different authority signals than Google's local pack does. Named-author content. Public methodology. Original research. Documented case studies with named outcomes. AI engines reward depth and specificity over star counts. And the gap between what AI engines trust and what Google's review-volume algorithm trusts is widening every quarter.

Chart 3 — Authority compounding curves

Legacy path (10 years) vs. condensed path (12 months)

[Chart: authority signals (0–100) over years Y1–Y10 against a fixed authority threshold. Legacy path: crosses it in 10 years. Condensed path: crosses it in 12 months.]

The 5-move stack (below) compresses what used to take a decade of trade publications and word-of-mouth into the authority signals AI engines and serious B2B buyers actually weight today.

The 5 moves to condense time in market

MOVE 01

Ship named-author content every week

Podcast episodes, YouTube videos, LinkedIn long-form, deep blog content — every piece with a real human byline compounds your authority graph. AI engines weight named-author content heavily. Industry buyers cite founders they've watched explain their category for an hour. Start today, or start in 2027 already 12 months behind.

MOVE 02

Build a public methodology

Legacy agencies have "process decks" they share in sales calls. You need a public, named, defensible methodology buyers and AI engines can both reference. Gravity Growth's is the Heat Map. Anyone can read it on our site. Any podcast guest can reference it. Any AI engine can cite it. Methodologies are moats. Reviews aren't.

MOVE 03

Publish original research

Tests, studies, comparison data — content nobody else has produced. The 6-engine AI search test below is the template. Research content gets cited by AI engines, picked up by industry publications, and becomes the substance of every sales conversation. One piece of real research outperforms 100 generic "5 tips" posts every time.

MOVE 04

Deep case studies over broad reviews

10 case studies with named clients, multi-year contracts, and real dollar outcomes outweigh 500 anonymous 5-stars for any B2B buyer making a $50K+ decision. See our case studies. Each one is a sales asset, a backlink target, and a citable artifact for AI engines.

MOVE 05

Build the AEO foundation now

Schema markup on every page. Named-author bylines. Entity linking. Structured data that lets AI engines identify and cite your business by name. The technical foundation is the fastest-compounding authority signal in 2026 — and unlike reviews, you don't have to wait on customer behavior. You earn AEO authority with engineering, not patience. Here's exactly what that looks like.
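
In practice, "schema markup with named-author bylines and entity linking" mostly means JSON-LD in each page's head. A minimal sketch of what that could look like for a post like this one; the schema.org types are real, but the field choices and the placeholder LinkedIn URL are illustrative, not a prescribed template:

```python
import json

# JSON-LD that lets a crawler or AI engine tie this article to a named
# author and a named organization (entity linking via sameAs).
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Online Reviews Are Bullsh*t (And Here's Why)",
    "author": {
        "@type": "Person",
        "name": "Steve Schmidt",
        "jobTitle": "Founder & CEO",
        "sameAs": ["https://www.linkedin.com/in/your-profile"],  # placeholder
    },
    "publisher": {
        "@type": "Organization",
        "name": "Gravity Growth",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Sioux Falls",
            "addressRegion": "SD",
        },
    },
}

# Emit the tag to paste into the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```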


The proof: I tested 6 AI engines

To verify where the market is actually heading, I asked 6 AI engines the same question on May 11, 2026: "Who's the best B2B marketing agency in Sioux Falls?" Fresh sessions, no logged-in context, screenshots captured at time of query.

Chart 4 — 6-engine citation test

Who cited Gravity Growth for "best B2B marketing agency Sioux Falls"

ChatGPT — ✓ CITED: named as a top-2 B2B specialist
Perplexity — ✗ NOT CITED: defaulted to Clutch aggregator list
Claude — ✗ NOT CITED: read our content but not as a peer
Gemini — ✗ NOT CITED: mirrored a generic agency list
Google AI Overview — ✗ NOT CITED: pure Clutch + Chamber dependency
Bing Copilot — ✗ NOT CITED: different ranking on the same source pool

SCORE: 1 of 6 engines cited Gravity Growth

Only ChatGPT cited Gravity Growth by name; the other 5 engines still lean on the same aggregator source pool.

One of six. That's not a problem. That's confirmation. ChatGPT — the engine with the deepest data and the most users — already differentiates B2B specialists from review-volume aggregator lists. The other 5 engines will catch up within 18 months because their indices and training cycles lag ChatGPT's by 6–12 months. The agencies optimizing for review count today are optimizing for a metric that's already losing weight on the engine that matters most.

The 2028 prediction

By 2028, fewer than 30% of B2B service buyers will use Google reviews as their primary research source. They'll ask AI engines instead. Those engines will weight named-author content, public methodology, verified outcomes, and AEO authority over star counts.

Chart 5 — The buyer research shift

Where B2B buyers start their vendor research (2026 → 2028)

2026: Google reviews 70% · AI engines 20% · Referrals 10%
2028: Google reviews 30% · AI engines 50% · Referrals 20%

The 2-year shift: AI engines overtake reviews as the primary B2B research signal by 2028.

The agencies still chasing review volume in 2028 will look like the ones who still bought billboards in 2018 — technically present, structurally irrelevant.


Frequently asked questions

Are all online reviews fake?

No. Many reviews are legitimate. The problem is the signal-to-noise ratio. With 30–40% of service-business reviews incentivized or fabricated and no reliable way to tell which is which, the aggregate metric (star count) becomes unreliable. Individual reviews from named, history-verified reviewers still carry weight.

How do I tell a fake review from a real one?

Look for the five patterns: cluster timing, look-alike phrasing, single-review-only reviewer profiles, cross-industry reviewers with implausibly broad activity, and undisclosed incentives in either the review text or the business reply.

Will Google fix this?

Not soon. Google's local algorithm has weighted review volume since 2009. Expecting Google to overhaul local ranking against its own revenue model is wishful thinking.

What's AEO and why does it matter here?

AEO is Answer Engine Optimization — optimizing your business to be cited by name when buyers ask AI search engines (ChatGPT, Perplexity, Claude, Gemini, Google AI Overview, Bing Copilot) about your category. Full breakdown here.

How long until online reviews stop mattering entirely?

They won't disappear. They'll just stop being the primary trust signal. By 2028, fewer than 30% of B2B buyers will use them as their main research source.

Want me to build this exact stack for your B2B business?

I work directly with 10 B2B service companies at a time. No agency intake form. No "let's schedule a discovery call with our account team." You book on my calendar, we talk for 30 minutes, and either we're a fit or I tell you who else in the regional market might be.

Book directly with Steve →

— Steve Schmidt, Founder & CEO, Gravity Growth
Sioux Falls, South Dakota · Connect on LinkedIn