[Image: A guest browses a digital room service menu]

September 04, 2025 • 5 min read

Psychological Insights into AI in Hospitality


AI in hospitality is reshaping the industry, but the real challenge isn’t just technical; it’s human. How guests and employees feel about AI often matters more than what the technology can actually do. This article explores the psychological dynamics that influence AI adoption in hospitality and offers practical ways for leaders to ensure that technology strengthens, rather than erodes, the essence of human service.

AI and Human Psychology

The hospitality industry is experiencing a major transformation, driven by advances in technology and in artificial intelligence (AI) in particular. What once sounded futuristic (e.g., robots at reception desks) is now a reality in some hospitality establishments.

AI is changing the hospitality business by enabling immersive guest experiences, personalization, and contactless interactions. AI-powered robots are being deployed to support service, and smart technologies are enhancing operational efficiency. Predictive analytics are further used for demand forecasting and operations optimization, and AI-powered guest feedback analysis and employee management tools are now available as well. Such innovations respond to the industry’s objectives of delivering more personalized, and thus better, experiences and smoother operations.

AI has the potential not only to elevate guest satisfaction but also to improve employee well-being by automating repetitive tasks and making processes simpler and faster. By reducing stress and workload, AI can allow employees to focus on the more relational aspects of their roles.

[Image: Hotel self check-in stations]

 

Psychological Barriers to AI Adoption in Hospitality

While the technological capabilities of AI continue to expand, people’s perceptions remain complex. The very characteristics that make AI powerful also create unease among guests and employees, who question whether AI can truly understand them and their jobs. Staff may perceive AI as a threat to their professional expertise and careers, for example, if AI-driven decision-making reduces opportunities to apply their skills. Guests may feel uncomfortable when technology crosses into spaces that require empathy, trust, and privacy, for example, if AI handles sensitive complaints or if facial recognition is used without clear consent.

The barriers to AI adoption in hospitality are less about trust in AI’s technical capability and more about people’s psychology. The balance between efficiency and empathy, automation and autonomy, is central to how AI is accepted in the industry. People judge AI not only on what it does but also on how it makes them feel. Understanding the psychological responses of guests and employees is therefore important, as these affect their willingness to accept and engage with the technology. Below, I propose four areas that hospitality leaders should consider, as these strongly shape AI’s success in the industry:

1. Lack of Empathy

Despite AI’s capabilities, many people remain skeptical and unwilling to use it. Research shows that people fear being reduced to “mere numbers” and often perceive technology as lacking empathy and emotional sensitivity (e.g., Dawes, 1979). This perception is reinforced by the belief that technology does not account for personal differences.

What practitioners can do:

  • Humanize the interface: Use AI that mimics natural communication styles and focuses on empathetic language, tone, and context. If relevant, design visuals (e.g., avatars, facial expressions) that are more human-like to foster relatability and trust in interactions.
  • Mix human and AI touchpoints: Use AI as a first responder, but make sure that guests can easily contact a human when they want emotional reassurance. Equip employees to step in and personalize interactions that AI cannot handle well, showing both employees and guests that technology is there to support rather than replace the human element.

2. Responsibility

Research shows that AI is perceived as lacking goals of its own, which affects how errors made by AI are judged (e.g., Garvey et al., 2023; Kim & Duhachek, 2020). Interestingly, when AI makes mistakes, brands experience less reputational harm than when a human makes the same mistake. However, people abandon AI quickly after errors unless they are given the ability to make adjustments.

What practitioners can do:

  • Communicate transparently: Clearly communicate how AI makes suggestions (e.g., “We recommended this room because you previously stayed in similar ones”).
  • Enable corrections: Allow guests and employees to override, adjust, or provide feedback on AI recommendations.
  • Clarify roles in decision-making: Position AI as an advisor and keep ultimate decision-making and responsibility with the employee or guest.

3. Control

To personalize the company’s offer to a guest, AI systems can narrow the set of choices presented (e.g., Dietvorst et al., 2015; Zwebner & Schrift, 2020). This, however, can make individuals feel that their freedom is constrained because they are not shown all possible options. The effect is amplified when people feel they are constantly being surveilled by technology.

What practitioners can do:

  • Preserve choices: Present AI recommendations as options (e.g., “Guests like you enjoyed these three dining experiences” instead of “This is your dinner option”).
  • Be transparent about data use: Clearly state what data is being collected and for what purpose.
  • Collaborate with employees: Involve staff in co-designing AI system outputs so they see the technology as empowering their expertise rather than threatening it.

4. Disclosure

Finally, individuals may be more willing to disclose sensitive information to AI in low-emotion contexts. However, in emotionally sensitive situations, humans remain the preferred choice (e.g., Kim et al., 2022; Pickard et al., 2016).

What practitioners can do:

  • Segment AI vs. human contexts: Use AI primarily for functional, data-driven inquiries (e.g., dietary restrictions, booking logistics), and keep human-led service for emotional, relational, or conflict situations.
  • Ensure safety: Reassure guests that any information shared with AI is handled securely and ethically. Give them control over what they choose to disclose.

 

[Image: Human interaction]

AI to Empower Rather than Control

These insights illustrate that while AI holds significant potential to optimize and improve the hospitality business, its success depends on how it is perceived and experienced by guests and employees. Perceptions of fairness, empathy, and control are therefore fundamental to fostering trust and acceptance.

Looking ahead, the true opportunity may not lie in using more AI but in using it more wisely (see also the book Hospitality Vibes: The Positive Energy When People Interact With Like-Minded People). To realize AI’s promise, hospitality leaders need to carefully balance technological efficiency with human-centricity. AI that complements rather than replaces human decision-making, and that empowers guests and employees rather than controlling them, is more likely to be accepted. Although AI can help improve service delivery, it cannot fully substitute for the relational elements that define hospitality. The way forward is therefore not about choosing between technology and humanity, but about blending them correctly.

By acknowledging and addressing individuals’ psychological responses, hospitality leaders can build greater trust, increase adoption, and deliver better experiences that remain human at their core. AI’s greatest potential lies not in its ability to automate but in its capacity to support more personalized and human-centered guest journeys. As such, I see the future of hospitality as being less about technology replacing people and more about technology enabling people to be more human.

 
Written by

Assistant Professor at EHL Hospitality Business School

Chen, M.-M. (2024). Hospitality Vibes: The Positive Energy When People Interact With Like-Minded People. https://www.amazon.com/Hospitality-Vibes-Positive-Interact-Like-Minded/dp/2839943786

Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34(7), 571–582. https://doi.org/10.1037/0003-066X.34.7.571

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033

Garvey, A. M., Kim, T., & Duhachek, A. (2023). Bad news? Send an AI. Good news? Send a human. Journal of Marketing, 87(1), 10–25. https://doi.org/10.1177/00222429211066972

Kim, T. W., & Duhachek, A. (2020). Artificial intelligence and persuasion: A construal-level account. Psychological Science, 31(4), 363–380. https://doi.org/10.1177/0956797620904985

Kim, T. W., Jiang, L., Duhachek, A., Lee, H., & Garvey, A. (2022). Do you mind if I ask you a personal question? How AI service agents alter consumer self-disclosure. Journal of Service Research, 25(4), 649–666. https://doi.org/10.1177/10946705221120232

Pickard, M. D., Roster, C. A., & Chen, Y. (2016). Revealing sensitive information in personal interviews: Is self-disclosure easier with humans or avatars and under what conditions? Computers in Human Behavior, 65, 23–30. https://doi.org/10.1016/j.chb.2016.08.004

Zwebner, Y., & Schrift, R. Y. (2020). On my own: The aversion to being observed during the preference-construction stage. Journal of Consumer Research, 47(4), 475–499. https://doi.org/10.1093/jcr/ucaa016
