Your hotel's AI runs on OpenAI. Do you know what that means?
I read data processing agreements the way some people read menus. Slowly, thoroughly, with particular attention to what’s hidden in the fine print. It’s a habit that has saved me from bad decisions more than once, and it’s the reason I started digging into something that most hoteliers haven’t thought about yet: which AI actually powers the chatbot answering your guests?
The answer, for almost the entire guest communication category, is the same. OpenAI. San Francisco. United States.
How we found this
Over the past few weeks, I went through the sub-processor lists, privacy policies, data processing agreements, and marketing materials of all ten guest communication tools we’ve reviewed as a team. I wasn’t looking for this pattern. I was updating our reviews with information about AI providers and expected to find variety. Instead, I found a monoculture.
Here’s what each tool uses to power its AI features:
Runnr.ai lists OpenAI LLC (San Francisco, US) as a sub-processor for “Generating AI answers.” It’s right there in their documentation. A Dutch company, EU-hosted on AWS Ireland, clean privacy credentials, but every AI-generated response to a guest passes through OpenAI in the US.
Duve lists OpenAI as a sub-processor for “generative artificial intelligence and natural language processing services.” The auto-translation that Elena relies on for her Greek resort, the smart replies, all of it processes through OpenAI.
chatlyn states openly in their marketing that their AI “leverages OpenAI’s advanced technology.” An Austrian company that positions itself as privacy-first, routing guest messages through American AI servers.
Bookboost markets a “hospitality GPT” feature. GPT is OpenAI’s model family. Their EU hosting in Stockholm covers the booking data and the inbox, but the AI layer goes through the US.
Quicktext doesn’t name their AI sub-processor directly. Their privacy policy refers to data transfers to an “LLC located in the United States.” Given what Q-Brain+ does and what the rest of the market uses, this is almost certainly OpenAI. A French company that sells European data sovereignty, with a US-shaped gap in the fine print.
Akia is a US company built by ex-Facebook engineers, marketing a “GPT Concierge.” No surprises here, but at least the jurisdiction is consistent. You know what you’re getting.
Canary Technologies doesn’t disclose their AI provider at all. Not in their privacy policy, not in their trust centre, not in their marketing materials. For a company processing passport scans and guest conversations through AI, this silence is itself a finding.
LIKE MAGIC uses D3x AI, a company based in Covina, California. D3x describes a “dynamic LLM-switching architecture,” meaning they use multiple AI models, but the company is American and the processing happens in the US.
HiJiffy is the one partial exception. Their Aplysia AI uses multiple providers: OpenAI, Anthropic, Google, and Mixtral (the open-source model from French company Mistral). It’s not fully European, but at least they’re not locked into a single US provider.
askng.it is the other exception, though for different reasons. Dirk Taselaar, the founder, has publicly committed to using Mistral, the French AI company. Their AI features are basic right now, but the principle is there: European AI for European hotels.
Why this matters for European hotels
When a guest sends a WhatsApp message asking about parking, and the AI generates a response, that message passes through the AI provider’s servers. With OpenAI, that means San Francisco. The guest’s question, the context from their booking, potentially their name and stay dates, all of it flows through US infrastructure to generate a reply.
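None of these vendors publish their integration code, so here is a hypothetical sketch of what such a request typically looks like when a vendor calls a chat-completion API. The function name, model string, and booking fields are all illustrative, not taken from any vendor’s actual implementation; the point is simply what ends up in the payload.

```python
# Hypothetical sketch: how a guest-messaging vendor might assemble an AI request.
# Every field name and value here is illustrative, not from any real vendor.

def build_ai_request(guest_message: str, booking: dict) -> dict:
    """Assemble a chat-completion payload. Everything in it leaves the EU
    when the request is sent to a US-based AI provider."""
    context = (
        f"Guest: {booking['name']}, staying {booking['check_in']} "
        f"to {booking['check_out']}, room {booking['room']}."
    )
    return {
        "model": "gpt-4o",  # an OpenAI model: the request targets US-controlled infrastructure
        "messages": [
            {"role": "system", "content": f"You are a hotel assistant. {context}"},
            {"role": "user", "content": guest_message},  # the raw WhatsApp message, verbatim
        ],
    }

payload = build_ai_request(
    "Is there parking at the hotel?",
    {"name": "Claire Dubois", "check_in": "2024-06-01",
     "check_out": "2024-06-05", "room": "212"},
)
# The payload now contains the guest's name, stay dates, and message,
# all of which would be transmitted to the AI provider to generate a reply.
```

The guest asked about parking; the request carries her name, dates, and room number anyway, because that context is what makes the AI’s answer useful. That bundling is exactly why the jurisdiction of the AI provider matters.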
This creates a gap that many hoteliers don’t see. Your hotel might use an EU-hosted platform. Your booking data might sit in Frankfurt or Stockholm. Your privacy policy might be perfectly correct. But the moment the AI handles a guest message, the conversation takes a detour through America. The CLOUD Act applies, which lets US authorities compel American companies to disclose data regardless of where it is physically stored. US law governs. Your guest’s data is, for that moment, subject to a legal framework your guest never agreed to and your privacy notice probably doesn’t mention.
It gets worse. Guests don’t just ask about parking. We’ve seen it across our own hotels: guests share passport photos, credit card details, ID numbers, home addresses, medical needs, and travel companion details via WhatsApp. They do it because WhatsApp feels private, like texting a friend. They have no idea that the AI processing their message routes it through servers in San Francisco. A French guest sends a photo of her passport to confirm her booking details. The hotel’s AI picks it up, processes it through OpenAI to generate a response, and that passport image has just travelled to the United States. The guest didn’t consent to that. The front desk probably doesn’t know it happened either.
This isn’t a theoretical scenario. It happens every day at hotels using these tools. The more comfortable guests become with WhatsApp messaging, the more personal data they share through it. And every message that touches the AI layer takes that data outside Europe, outside the guest’s expectations, and outside what most hotel privacy notices cover.
I want to be precise here: this doesn’t mean your data is being actively misused. OpenAI offers data processing agreements and business terms that include commitments not to train on customer data. The legal protections exist on paper. But the jurisdictional exposure exists too, and for hotels that have spent effort building a compliant privacy posture, it’s an uncontrolled exception sitting quietly in the middle of your data flow.
US tools get a pass. European ones don’t.
For American hotels using Akia or Canary, none of this is a problem. The data stays under US law, the guests are mostly American, and the privacy framework is consistent from end to end. OpenAI is a domestic provider. There’s no jurisdictional conflict. It makes complete sense for a US hotel to use a US AI platform.
European hotels don’t have that luxury. The gap between European and American privacy standards isn’t a technicality. It’s a fundamental difference in how personal data is treated. In Europe, privacy is a right. In the US, it’s a policy. The GDPR gives guests control over their data, requires explicit consent for processing, and limits cross-border transfers. US law allows government access to data held by American companies without notifying the data subject. These aren’t minor differences. They’re incompatible frameworks. When a European hotel routes guest data through OpenAI, it bridges that gap without telling anyone, least of all the guest.
The European tools have a credibility problem
This is what frustrates me most. The tools that market themselves as European, privacy-first, and GDPR-compliant, the ones I’m inclined to recommend, are the same ones with the most jarring contradiction. chatlyn calls itself privacy-first and then sends guest messages through OpenAI. Bookboost builds its entire identity on EU data residency and then processes AI features through San Francisco. Runnr.ai has clean Dutch credentials and an OpenAI sub-processor.
I understand why. Building your own AI is expensive and slow. OpenAI offers the best-performing models with the easiest integration. For a startup trying to ship features and keep up with competitors, it’s the obvious choice. But “obvious” and “consistent with your privacy positioning” are different things.
If your marketing says “EU-hosted” and “privacy-first,” and your AI routes through the US, you owe your customers a clear explanation. Not buried in a sub-processor list that nobody reads, but upfront, in the same place where you make the privacy claims.
What hoteliers should do
First: read your vendor’s sub-processor list. If they don’t have one published, ask for it. If they won’t provide it, that tells you something.
Second: understand what “AI-powered” means in your specific setup. Is the AI processing guest conversations? Generating responses? Translating messages? Each of these functions might pass data through a different provider.
Third: update your privacy notice. If guest data flows through OpenAI (or any US-based AI provider) as part of your hotel’s communication stack, your guests have a right to know. Most hotel privacy notices don’t mention this, because most hoteliers don’t know it’s happening.
Fourth: ask your vendor what they’re doing about it. Are they working on European AI alternatives? Do they offer an option to disable AI processing for guests who object? Can they route AI through a European provider if required?
The market will change
Mistral, the French AI company, is growing. Their models are competitive with OpenAI’s for many tasks, and they operate under EU jurisdiction. European cloud providers are investing in AI infrastructure. The technology exists to build hotel AI that stays in Europe. The question is whether vendors will make the switch before regulators force them to.
Marc would say this is about sovereignty. Sophie would say it’s about risk management. Thomas would say it’s about technical architecture. I say it’s about honesty. If you tell hoteliers their data stays in Europe, it should stay in Europe. All of it. Including the bits that pass through your AI.
Right now, almost nobody in this industry is doing that. The one company that publicly committed to European AI is the smallest player in our review set, with the fewest features and the least traction. That’s not a comfortable finding, but it’s an honest one. And being honest about uncomfortable findings is what this site is for.
- Anna, for the whole team