Method
How we review.
Every review on this site follows the same process. We pay for our own subscriptions, test with real guests, and publish what we find. Nobody gets editorial input except the six of us.
How we pick what to review
We maintain a shared list of tools we've heard about, been pitched, or seen other hoteliers mention. Twice a year, when we meet in person, we agree on the next batch. The priority is usually whatever category is causing us the most frustration at work.
Vendors can suggest their product for review (see our contact page), but requesting a review doesn't guarantee one, and it doesn't affect the outcome.
How we test
Each tool gets installed in at least one real hotel. Not a test account, not a sandbox. A working property with real guests, real staff, and real problems. We typically run a tool for four to eight weeks before writing anything.
The reviewer is always someone who runs that type of property. A city hotel GM tests differently from a resort owner. We assign based on fit, not rotation.
What we look at
Every reviewer brings their own angle, but we cover the same ground:
- Setup and onboarding — how long it takes from sign-up to actually using it with guests
- Daily use — what it's like when the novelty wears off and you're using it at 7am with 40 check-ins ahead
- Staff adoption — can a seasonal hire figure it out without a training manual
- Integrations — does it actually connect to the systems we use, or is the integration list just marketing
- Pricing and value — what you really pay, including the costs they don't mention on the pricing page
- Data and privacy — where guest data goes, what the DPA says, and whether the company is transparent about it
- Support — what happens when something breaks on a Saturday night
How we score
Every review gets a single score out of 10. It's not an average of sub-scores or a weighted formula. It's one person's honest assessment of how useful this tool is for the type of hotelier they are.
As a rough guide:
- 8-10 — we'd recommend it to a friend without caveats
- 6-7 — good tool with real limitations, depends on your situation
- 4-5 — does some things well, but hard to justify given the competition
- 1-3 — significant problems, we'd steer you away from it
If other team members disagree with the score, their perspective appears in the review. We don't force consensus.
What we don't do
- Accept free accounts or trial extensions from vendors
- Share drafts with vendors before publishing
- Use affiliate links or earn referral fees
- Accept payment for reviews or placement
- Let vendors approve, edit, or influence content
Corrections and updates
Software changes. If a tool ships a major update that affects our conclusions, we'll revisit the review and note what changed. If we got a fact wrong, we'll correct it and mark the correction with a date. Anyone can flag an issue through our contact page.