
AI security in hospitality: The conversation operators shouldn't ignore

In 2026, AI-powered solutions are set to become pivotal for the hospitality industry. More than 28% of UK and Ireland operators are already rolling out AI across multiple departments, while a further 20% are exploring what AI could do for them. 

AI is increasingly seen as the solution, with nearly half of operators (45%) saying it could help most with improving efficiency, while 36% expect it to help grow revenue and profit. 

Yet as more operators feed business-critical data into AI systems, a new concern is emerging: how that data is protected.

Posted 10/03/2026

Champa Magesh, Managing Director at Access Hospitality, explains this is a conversation businesses shouldn't ignore:

“In 2026, cybersecurity will become an integral part of hospitality operations, as companies become more mindful about the information and data they feed into LLMs, such as ChatGPT.

“With a greater focus on security, AI-powered platforms that ringfence and protect data will be the solution for hospitality operators that want to utilise AI insights.”

Access Hospitality surveyed 1,000 businesses and 8,000 consumers across six international markets to understand operators' and consumers' attitudes towards data privacy and secure AI adoption. 

UK and Ireland operators divided over sharing data with AI tools

While operators understand the benefits of AI, confidence in how the data is handled remains divided. 

| Country | Concerns about data security and privacy | Concerns over data protection regulations |
| --- | --- | --- |
| UK and Ireland | 51% | 38% |
| US | 49% | 36% |
| Indonesia | 53% | 53% |
| India | 39% | 42% |
| Australia | 57% | 39% |
| Germany | 33% | 50% |
| Austria | 50% | 17% |
| Switzerland | 25% | 38% |

Overall, 45% of operators in the UK and Ireland are concerned about sharing important company data with AI tools, highlighting a trust gap as adoption increases. Sentiment is consistent across sectors, with 45% of F&B businesses and 46% of hotels expressing concerns.

Company size significantly influences trust. Businesses with fewer than 25 venues are the most hesitant, with over half worried about sharing data, compared to just 33% of larger operators. This could suggest that access to internal expertise and frameworks plays a role in AI confidence.

Globally, concerns around sharing important company data are greater in other regions than in the UK and Ireland. On average, 63% of DACH hospitality operators are concerned, followed by 59% in the US and 54% in Indonesia.

For many operators, confidence around how data is stored, processed and governed remains limited. The two biggest concerns when it comes to sharing important data with AI tools are data privacy and data protection regulations. 

Data privacy is one of consumers' biggest concerns with increased AI use in hospitality

| Country | Privacy and data security – fear of my personal information or habits being misused | Feeling constantly monitored or surveilled |
| --- | --- | --- |
| UK | 28.42% | 20.28% |
| US | 30.33% | 22.59% |
| Indonesia | 44.80% | 17.40% |
| India | 36.50% | 26.60% |
| Australia | 37.10% | 19.40% |
| Germany | 22.69% | 22.39% |
| Austria | 30.93% | 23.72% |
| Switzerland | 24.62% | 20.72% |

Nearly one-third of global consumers are worried about their personal information and habits being misused, and a further 21% are worried about feeling monitored. 

Fears differ by country: consumers in Indonesia and Australia are most concerned about their personal information being misused, followed by those in India.

Champa Magesh adds:

“The message for operators is clear. With 36% of global consumers sceptical about the increased use of AI in hospitality and nearly one third worried about their privacy, businesses must prioritise transparency and security to build trust.”

Three tips to earn guests' trust with AI

1. Ensure data protection is visible to consumers

With 45% of operators concerned about sharing company data and a third of consumers worried about their personal data being misused, cybersecurity needs to be a top priority for businesses. 

Operators should prioritise platforms that ringfence and protect data, as well as put an internal AI policy in place so staff can be educated and feel confident with using AI safely. These safeguards must be clearly communicated to both staff and customers to foster trust and transparency. 

2. Deliver clear guest value

AI should be used when it provides clear value to consumers and improves the guest experience. Our survey found that consumers want AI to make their experiences smoother and faster: 35% want faster ordering and payment processing, and 29% want real-time wait time updates and queue management.

3. Be transparent about how AI is used

Transparency is key to building trust with consumers. Clearly label AI-generated recommendations and explain how data is analysed to put consumers at ease. Offering simple opt-outs can significantly reduce consumers' feelings of being watched. 

The steps businesses can take to protect their data while using AI 

1. Put a clear AI policy in place and inform all your colleagues

Create a formal AI policy that outlines what data can and cannot be entered into AI systems, which tools are approved and who is responsible for oversight. 

Make sure this is clearly communicated to all staff members and confirm they understand what the policy means. 

Connor Whelan, CIO at The Access Group, comments "Most data breaches aren't the result of sophisticated attacks - they come from everyday gaps that any organisation can fall prey to. When it comes to AI, having a clear policy is important, but it has to be backed by the right technical controls. People need to know what they can and can't put into an AI system, and the technology should enforce those boundaries, not just rely on good intentions. That combination - policy, controls, and regular reassessment - is how businesses meaningfully reduce risk."
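One way to back a written policy with the kind of technical control described above is an automated pre-submission check that flags sensitive data before a prompt ever reaches a public AI tool. The sketch below is purely illustrative: the categories and patterns are assumptions, not part of any Access product, and a real deployment would need far broader coverage.

```python
import re

# Illustrative sensitive-data categories; patterns here are simplified
# assumptions, not a production-grade detection rule set.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk phone number": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt.

    An empty list means the prompt passes this (minimal) policy check;
    a non-empty list means the submission should be blocked or redacted.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

A guard like this enforces the policy boundary automatically rather than relying on staff remembering the rules, which is the point Connor Whelan makes above about combining policy with technical controls.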

2. Educate and train staff members

Educate and train employees on what constitutes sensitive business information, the risks of entering this information into AI systems and the best practices for using AI. 

3. Use secure AI platforms

Avoid entering sensitive information into publicly available AI tools that do not guarantee how information will be used and stored. Public tools such as OpenAI's ChatGPT can pose serious risks to confidentiality and data security when used by individuals who do not know how these systems process the information.

Choose secure platforms that protect data and ensure it remains secure and under your control. 

Diego Baldini, Certified Chief Information Security Officer at The Access Group, comments "The moment you enter sensitive business data into a publicly available AI tool, you lose control of it. That's not a risk worth taking. The businesses getting AI right are choosing platforms built with security at the core, where data is ringfenced, stays under their ownership, and isn't being used to train models they have no visibility into. That's the standard operators should be demanding."

Access Evo gives you the confidence to use AI software safely and securely 

Champa Magesh, Managing Director, Access Hospitality, comments: 

“AI adoption is accelerating across the hospitality industry, but security must move at the same pace. 

“Nearly half of businesses in the UK and Ireland are cautious about the safety of incorporating AI solutions into their day-to-day operations, while 28% of UK consumers fear their personal information could be misused. 

“As adoption grows this year, security should be the top priority for operators. By choosing AI platforms that protect data at every level, operators can benefit from AI insights while keeping business, employee and guest data private and under their control.”

View more about Access Evo's security.

Ready to see how AI can transform your business?