
EFFECTIVE COMMUNICATION

When AI Support Lies to Your Customers

May 6, 2025


Nienke van Aardt

VoyagerNetz Delta LLC


AI is rapidly transforming customer support, offering businesses the promise of increased efficiency and reduced costs. However, this powerful technology comes with significant risks, particularly the phenomenon known as AI "hallucinations" or confabulations.

AI Confabulations: A Real Threat to Brand Trust

AI models, especially large language models (LLMs), are trained on vast amounts of data. They excel at identifying patterns and generating plausible-sounding text, but they don't "understand" information the way humans do. When faced with a query they don't have a direct answer for, some AI models will essentially "fill in the gaps" creatively, prioritizing a confident, coherent response over admitting uncertainty or saying "I don't know." The result is false or misleading information presented as fact.

This is more than just a technical glitch; it's a serious threat to brand trust. Customers often can't tell whether they're speaking to a human or a model, and when the AI is wrong, the business takes the hit. Recent incidents show just how costly that can be.

For example, in 2024, Air Canada was ordered by a tribunal to honor a refund policy completely invented by its own support chatbot. The tribunal rejected Air Canada's argument that the chatbot was a separate entity responsible for its own mistakes. A similar incident occurred at the AI code editor company Cursor, where an AI support agent invented a policy that didn't exist, leading to frustrated users and a public relations scramble (see "AI Support Fail: Cursor Bot Invents Policy, Causes User Uproar," DigiAlps LTD).

The Importance of Disclosure and Oversight

These incidents highlight the very real risks of AI-powered customer support when human checks and transparency aren't in place. Transparency and responsible frameworks are essential for building trust in AI, ensuring its use is fair, safe, and inclusive while maximizing its benefits.

  • Transparency is Crucial: Always clearly label AI agents. Users should know if they are interacting with a bot or a human.
  • Human Oversight is Necessary: AI should assist, not replace, human support, especially for complex or sensitive issues.
  • Understand AI Limitations: Be aware of confabulations. AI can generate plausible falsehoods. Don't treat AI responses as infallible truth.
  • Human Agent Escalation: Give customers a clear path to a human agent whenever the AI is uncertain or the issue is sensitive (see the sketch after this list).
  • Test Rigorously: Thoroughly test AI support systems in various scenarios before deploying them to customers.
  • Monitor Performance: Continuously monitor AI interactions for accuracy and customer satisfaction.
  • Own the Output: Remember, your company is responsible for the information provided by its AI tools, correct or not.
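To make the disclosure and escalation points concrete for teams building their own AI support flows, here is a minimal sketch of those two guardrails. It is illustrative only, not a VoyagerNetz product or a specific vendor's API: the names (SupportReply, answer_from_knowledge_base, CONFIDENCE_THRESHOLD), the threshold value, and the sensitive-topic list are hypothetical placeholders you would replace with your own retrieval step, scoring, and routing.

```python
# A minimal sketch of two guardrails from the list above: clearly labeling the AI,
# and escalating to a human instead of letting the model guess.
# All names and values here are hypothetical placeholders.
from __future__ import annotations
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75                      # below this, hand off rather than guess
SENSITIVE_TOPICS = {"refund", "legal", "cancellation"}

@dataclass
class SupportReply:
    text: str
    escalated_to_human: bool

def answer_from_knowledge_base(question: str) -> tuple[str | None, float]:
    """Stand-in for a retrieval step that only answers from approved policy
    documents and returns a confidence score. Replace with your own lookup."""
    approved_answers = {
        "what are your support hours": ("Support is available 9am-5pm ET, Monday to Friday.", 0.95),
    }
    return approved_answers.get(question.strip().lower().rstrip("?"), (None, 0.0))

def handle_question(question: str) -> SupportReply:
    answer, confidence = answer_from_knowledge_base(question)
    needs_human = (
        answer is None
        or confidence < CONFIDENCE_THRESHOLD
        or any(topic in question.lower() for topic in SENSITIVE_TOPICS)
    )
    if needs_human:
        # Admit uncertainty and route to a person instead of inventing a policy.
        return SupportReply(
            text="I'm an AI assistant and I'm not certain about this one. "
                 "I'm connecting you with a human agent who can give you a definitive answer.",
            escalated_to_human=True,
        )
    # Clearly label the AI and only repeat answers grounded in approved content.
    return SupportReply(text=f"[AI assistant] {answer}", escalated_to_human=False)

if __name__ == "__main__":
    print(handle_question("What are your support hours?").text)
    print(handle_question("Can I get a refund for my bereavement fare?").text)
```

The design choice that matters is that the assistant only repeats answers grounded in approved content and otherwise admits uncertainty and hands off, rather than improvising a policy the way the chatbots in the incidents above did.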

VoyagerNetz: Integrating AI Safely and Strategically

At VoyagerNetz, we believe AI should empower your business, not create unexpected liabilities. Whether you're using AI for productivity, automation, or support, smart implementation starts with custom-built solutions that are transparent, efficient, and designed for your business needs.

Let's talk about how to integrate AI into your business, safely and strategically.