Here is something most hotel GMs already feel but rarely see in data: the difference between an 8.9 and a 9.0 on Booking.com is not a rounding error. It is real money.
Last year, I wrote about what a 0.1-point boost on Booking.com means for hotel profits. The math was clear, backed by Cornell University research. But it was theoretical. The obvious question remained: can you actually move the number?
We decided to find out.
What we did
We took all hotels that went live with an in-room guest experience solution in the past 24 months. For each hotel, we scraped up to 250 reviews from Google, Booking.com, and Expedia, then compared the average scores before and after the go-live date.
No cherry-picking. No filtering for "good" hotels. We excluded the installation month itself (construction noise is not a fair data point) and only included hotels with at least 30 reviews on each side. That left us with 84,176 reviews across three platforms.
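The comparison itself is simple to sketch. The following is an illustrative simplification, not our actual pipeline; the dataframe columns and sample values are hypothetical:

```python
import pandas as pd

# Illustrative review data: one row per review.
# Column names here (hotel_id, platform, score, review_date, go_live_date)
# are assumptions for the sketch.
reviews = pd.DataFrame({
    "hotel_id":     [1, 1, 1, 1],
    "platform":     ["booking", "booking", "booking", "booking"],
    "score":        [8.0, 9.0, 9.5, 10.0],
    "review_date":  pd.to_datetime(["2023-01-10", "2023-02-20",
                                    "2023-04-05", "2023-05-12"]),
    "go_live_date": pd.to_datetime(["2023-03-01"] * 4),
})

# Exclude the installation month itself (construction noise).
install_month = reviews["go_live_date"].dt.to_period("M")
review_month = reviews["review_date"].dt.to_period("M")
reviews = reviews[review_month != install_month].copy()

# Label each review as before or after go-live.
reviews["period"] = (reviews["review_date"] >= reviews["go_live_date"]).map(
    {False: "before", True: "after"}
)

# Average score per hotel, platform, and period. In the real analysis,
# hotels with fewer than 30 reviews on either side are dropped.
avg = (reviews.groupby(["hotel_id", "platform", "period"])["score"]
              .mean()
              .unstack("period"))
avg["delta"] = avg["after"] - avg["before"]
print(avg)
```

The real dataset adds the 30-reviews-per-side threshold and runs this per platform across all 151 hotels; the toy data above exists only to show the before/after split.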
The headline: OTA scores go up
Across all three platforms, the average review score improved after go-live:
- Expedia: +0.08 on a 10-point scale
- Google Maps: +0.05 on a 5-point scale, which equals +0.10 normalized to a 10-point scale
- Booking.com: +0.12 on a 10-point scale
The effect is consistent. Not dramatic, but clearly directional, and strongest where it matters most for bookings.
Worth noting: our hotels already start from a high base. The average Booking.com score across our portfolio is 9.05, and the average Expedia score is 8.98. These are not struggling properties looking for a turnaround. They are already among the best-rated hotels on the platform, and they still improve.
But the real story is in the bad reviews
Averages are interesting. What actually surprised us was what happened to the low scores.
When we looked at Booking.com reviews rated 6 out of 10 or below, the share dropped from 5.9% to 3.2% after go-live. That is a 46% reduction. For reviews rated 5 or below, it dropped from 3.1% to 2.1%.
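The 46% figure is simply the relative change in the share of low-scored reviews:

```python
# Share of Booking.com reviews scored 6/10 or below,
# before vs. after go-live (figures from the analysis above).
share_before = 0.059
share_after = 0.032

relative_reduction = (share_before - share_after) / share_before
print(f"{relative_reduction:.0%}")  # → 46%
```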
In plain terms: the worst reviews, the ones that actually scare potential guests browsing your listing, nearly halved. And for hotels already scoring above 8.0, these outliers are exactly what holds the average down. A single 4.0 review costs you more than five 9.0 reviews can recover.
Why? Because most bad reviews are not about the hotel being terrible. They are about a problem that nobody fixed in time. A broken AC. A missing towel. A noise complaint at 11 PM. If the guest can flag it and staff resolves it before checkout, the frustration dissipates. That review never gets written.
This is not a theory. We see it in our Quick Feedback feature, which lets guests rate their stay and flag issues directly from the in-room tablet: when you give guests a channel to complain in real time, the volume of public complaints drops.
What +0.12 means for your hotel
Cornell University studied 31,000 monthly hotel observations across 11 metro markets and found that every point on the 100-point GRI (Global Review Index, the industry standard for aggregating review scores) translates to:
- +0.89% in ADR
- +0.54% in occupancy
- +1.42% in RevPAR
Our measured +0.12 on Booking.com equals +1.2 GRI points. For a 150-room hotel running 65% occupancy at EUR 150 ADR, that works out to roughly EUR 60,000 in additional revenue per year, conservatively.
That is not a marketing claim. It is arithmetic, applied to peer-reviewed research and real review data from real hotels.
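The arithmetic is easy to reproduce from the Cornell coefficients. This sketch uses the example hotel named above and assumes "conservatively" means applying only the ADR coefficient, holding occupancy flat; the full RevPAR coefficient gives the upper end of the range:

```python
# Example hotel from the text.
rooms = 150
occupancy = 0.65
adr_eur = 150.0

# Cornell coefficients, per point on the 100-point GRI.
adr_lift_per_gri = 0.0089     # +0.89% ADR
revpar_lift_per_gri = 0.0142  # +1.42% RevPAR

# +0.12 on Booking.com's 10-point scale = +1.2 points on the 100-point GRI.
gri_points = 0.12 * 10

# Baseline annual room revenue.
annual_revenue = rooms * occupancy * adr_eur * 365

# Conservative estimate: price effect (ADR) only.
conservative = annual_revenue * adr_lift_per_gri * gri_points
# Full RevPAR effect: price and occupancy combined.
full = annual_revenue * revpar_lift_per_gri * gri_points

print(f"baseline:      EUR {annual_revenue:,.0f}")
print(f"ADR-only lift: EUR {conservative:,.0f}")  # ~ EUR 57,000
print(f"RevPAR lift:   EUR {full:,.0f}")          # ~ EUR 91,000
```

The ADR-only figure lands near the roughly EUR 60,000 quoted above; the full RevPAR effect lands near EUR 90,000, the upper end of the range discussed later in the piece.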
The counter-check: what happens when hotels remove the solution?
Fair question. If the in-room experience solution drives better reviews, hotels that stop using it should see the opposite effect.
We tested this. We took all hotels that ended their contracts in the past 6 to 30 months and ran the same analysis in reverse: scores with the solution versus scores after removal.
The result on Booking.com: churned hotels averaged -0.03 after removal. On Expedia: -0.15. Meanwhile, hotels that kept the solution gained +0.12 on Booking.com and +0.08 on Expedia over the same period.
That is a relative gap of +0.15 on Booking.com and +0.23 on Expedia between the two groups.
Google Maps told a different story: churned hotels gained +0.10, staying hotels gained +0.10 as well. That makes sense. Google aggregates reviews from all visitor types, not just overnight guests: restaurant visitors, spa guests, even the occasional employee review. Booking.com and Expedia reviews come exclusively from verified hotel stays, which is where the in-room experience shows up.
One more data point: the Shiji 2025 Guest Experience Benchmark reports that the global hotel industry improved its review scores by +0.5 GRI points year-over-year. Our hotels improved by +1.2 GRI points. That is 2.4 times the industry average.
The honest caveats
I could stop here, but you would be right to ask: is this actually causal?
The honest answer: we cannot prove that with certainty. We are comparing the same hotels before and after. Renovations happen. Staff changes happen. Market trends happen.
But four things make us confident the effect is real:
- The direction is positive on all three platforms, across 151 hotels in 20+ countries.
- Hotels that removed the solution stopped improving on OTAs while their peers kept gaining.
- The negative review reduction is large, consistent, and has a clear mechanism: real-time feedback intercepting complaints before they become public reviews. We see great examples of this. Every day.
- The baseline matters. Our hotels average 9.05 on Booking.com and 8.98 on Expedia before go-live. Score improvements at this level are harder to achieve than at lower baselines, where there is more room to grow. The fact that already high-performing hotels still move upward makes the effect more meaningful, not less.
If you are skeptical about the average score, focus on the bad reviews. That finding stands on its own.
What this means for your hotel
Most hotels still rely on post-checkout emails to collect feedback. By then, the guest is at the airport. The frustration has hardened into a Booking.com review.
The alternative is not complicated: give guests a way to tell you something is wrong while they are still in the room. Resolve it fast. The data says that alone can cut your worst reviews in half and lift your OTA scores.
Whether that translates to EUR 60,000 or EUR 90,000 in additional revenue depends on your hotel's size and rate. But the direction is the same for everyone.
Disclosure: the hotels in this study use SuitePad, the in-room guest experience platform I co-founded. This analysis is based on 84,176 reviews across Google Maps, Booking.com, and Expedia. Industry benchmark data from the Shiji 2025 Guest Experience Benchmark. Revenue calculations reference the Cornell University study by Chris Anderson (2012). Full methodology available on request.