February 28, 2024

AI on Trial: How Air Canada's Chatbot Case Redefines Digital Accountability

“My chatbot did it, not me.”

Companies won’t be able to hide behind this excuse anymore. The verdict of a Canadian tribunal in the recent case of Jake Moffatt vs Air Canada is a milestone in the question of who is liable for misinformation provided by AI chatbots.

It also raises a broader question: can we trust AI responses and information?

What happened between Jake Moffatt and Air Canada’s AI chatbot?

When done right, adopting an AI chatbot can bring you success: Bank of America’s Erica chatbot and Sephora’s Virtual Artist redefined customer engagement and experience.

However, Air Canada ran into a glitch with the AI chatbot on its website, as the verdict in Jake Moffatt vs Air Canada revealed in February 2024.

It started in November 2022, when Jake Moffatt booked a round trip from Vancouver to Toronto after his grandmother passed away. He asked the Air Canada chatbot about the bereavement discount policy, and the chatbot informed him that to claim the discount he needed to submit a refund application form within 90 days of his date of travel.

However, this wasn’t actual company policy. In reality, Moffatt was supposed to submit the application before taking his flight; the discount could not be claimed retroactively.

On returning from his trip, Moffatt filed for a partial refund of about USD 326. A human customer service representative informed him that he wasn’t eligible for the refund because he had filed too late. That’s when he found out that the chatbot had given him incorrect information. This resulted in:

  • Weeks of email exchanges between Moffatt and the airline, with no resolution, although Air Canada did say in its emails that it would update the chatbot’s policy information.
  • In February 2023, Moffatt took the matter to the Civil Resolution Tribunal (CRT), a quasi-judicial body within British Columbia’s public justice system that handles minor civil disputes.

The case: Jake Moffatt vs Air Canada

During the proceedings, Air Canada argued that the chatbot “is a separate legal entity and responsible for its own actions.” Tribunal member Christopher C. Rivers responded:

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

Finally, in this unprecedented case, the tribunal ruled in favor of Jake Moffatt. Air Canada was ordered to pay him USD 483, plus interest and tribunal fees, the refund Moffatt had been fighting for nearly a year and a half.

In summary, Rivers’ decision found that:

  • Air Canada is responsible for all the information on its website, whether it comes from a static page or a chatbot.
  • Air Canada did not take reasonable care to ensure that its chatbot’s responses were accurate.
  • Air Canada gave no reasonable explanation for why customers should have to double-check information provided by the chatbot against other parts of the same website.

Perhaps the truest verdict came from Air Canada itself, which appears to have decommissioned the chatbot on its website, making this one of the first cases where an AI has been replaced by humans.

Lessons for brands using AI chatbots on websites and in customer service

According to one survey, 44% of consumers appreciate AI chatbots as shopping assistants that help them find product information before purchase. However, the Air Canada case highlights the importance of AI guidelines that ensure accountability and accurate information.

Below are the points that brands must address before employing AI chatbots on websites or in customer service:

  • Company accountability: Brands remain accountable for the actions of their AI chatbots. As the tribunal made clear, a company cannot distance itself by arguing that the chatbot is a separate entity. Brands must own their chatbots’ responses ethically while ensuring that unassuming customers understand a chatbot’s limitations.
  • Human intervention: When an AI chatbot provides incorrect information, customers should have an easy path to a human who can resolve the issue. This is especially true for complicated policies, offerings, and services, where access to a human agent prevents the kind of false promise made to Jake Moffatt.
  • Monitoring and improvement: Deploying an AI tool like a chatbot is not a set-it-and-forget-it exercise. Responses need continuous monitoring and updating for accuracy as customer needs, company policies, and the underlying technology evolve. A simple safeguard combining these last two lessons is sketched below.
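
To make the last two lessons concrete, here is a minimal Python sketch of one such safeguard: the bot answers only from vetted policy text and escalates to a human agent whenever a question doesn’t match that text confidently. Everything here (the POLICY_SNIPPETS dictionary, the keyword-overlap matching, and the MIN_OVERLAP threshold) is a simplified assumption standing in for a real retrieval and confidence system; it is not a description of Air Canada’s actual chatbot.

```python
import re
from dataclasses import dataclass

# Vetted policy snippets maintained by the policy team. The topics and
# wording below are illustrative assumptions, not real airline policy.
POLICY_SNIPPETS = {
    "bereavement fares": (
        "Bereavement fares must be requested before travel; "
        "they cannot be claimed retroactively after the flight."
    ),
    "checked baggage": (
        "Each passenger may check one bag of up to 23 kg on domestic flights."
    ),
}

MIN_OVERLAP = 2  # minimum keyword overlap before we trust a match


@dataclass
class BotReply:
    text: str
    escalate_to_human: bool


def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))


def answer(question: str) -> BotReply:
    """Answer only from vetted policy text; otherwise hand off to a human."""
    question_words = tokens(question)
    best_topic, best_score = None, 0
    for topic, snippet in POLICY_SNIPPETS.items():
        score = len(question_words & tokens(topic + " " + snippet))
        if score > best_score:
            best_topic, best_score = topic, score
    if best_topic is None or best_score < MIN_OVERLAP:
        # Low confidence: never improvise policy. Escalate instead.
        return BotReply(
            text=("I want to make sure you get accurate policy information, "
                  "so let me connect you with a human agent."),
            escalate_to_human=True,
        )
    return BotReply(text=POLICY_SNIPPETS[best_topic], escalate_to_human=False)


if __name__ == "__main__":
    reply = answer("Can I claim a bereavement fare after my flight?")
    print(reply.text)
    print("Escalate to human:", reply.escalate_to_human)
```

In production, the keyword overlap would be replaced by proper retrieval over the company’s policy documents, but the design choice stays the same: the bot only repeats vetted text and hands the conversation to a human whenever it cannot.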

What are your thoughts on using AI chatbots in customer service? Let us know.
