On my daily read about GenAI, I came across an article outlining how Air Canada’s AI chatbot “got the company in hot water”: the company tried to disown a recommendation the bot had made, arguing that the bot acted as an “independent third party.”
Whaat???
Ok, I guess the official term used was “separate legal entity”, but you get the drift. “We are not responsible for what our AI application (wrote or) said on our website.”
There are things that AI can do perfectly and autonomously. There are things that AI should do with left and right guardrails, and perhaps with a human element. Evaluating a limited set of factors in an internal production process may be an example of the former. Handling an infinite set of factors in an external and boundless process may be an example of the latter, and we are just a bit too early in the technology adoption curve for that.
I like to think that at this stage of the AI technology maturity curve, aiming for an 80/20 balance would be a great place to start. In scenarios similar to the one in this article, AI could:
Engage with the customer via chat, collecting or referring to a myriad of data and information surrounding the individual and situation.
Research and summarize the current policy or policies that apply and are relevant to the specific situation.
Hand off the customer service case to a human (and for the love of all that is good, maybe not make the customer repeat themselves) for resolution.
Be used to document the resolution of the case and the corresponding level of customer satisfaction.
Develop a deep trend analysis and identify the policy variations that would produce an optimal balance of customer satisfaction and profitability, to be approved by a human agent.
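The five steps above could be sketched, very roughly, as staged functions where the AI owns the before-and-after work and a human owns the resolution. This is a minimal illustration only; every name and data shape here is hypothetical, not taken from any real chatbot system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CustomerCase:
    """Hypothetical container for one customer-service case."""
    customer_id: str
    transcript: list = field(default_factory=list)
    policy_summary: str = ""
    resolution: str = ""
    satisfaction: Optional[int] = None

def ai_collect_context(case: CustomerCase, message: str) -> CustomerCase:
    # Step 1 (AI): engage via chat, gathering the customer's situation.
    case.transcript.append(("customer", message))
    return case

def ai_summarize_policy(case: CustomerCase, policies: list) -> CustomerCase:
    # Step 2 (AI): keep only the policies relevant to this situation.
    relevant = [p["text"] for p in policies if p["applies"]]
    case.policy_summary = "; ".join(relevant)
    return case

def hand_off_to_human(case: CustomerCase) -> dict:
    # Step 3 (human): the agent receives the full context up front,
    # so the customer never has to repeat themselves.
    return {"transcript": case.transcript,
            "policy_summary": case.policy_summary}

def ai_document_resolution(case: CustomerCase,
                           resolution: str,
                           satisfaction: int) -> CustomerCase:
    # Step 4 (AI): record the human's decision and the outcome,
    # feeding the trend analysis in step 5.
    case.resolution = resolution
    case.satisfaction = satisfaction
    return case
```

A usage sketch: the AI collects and summarizes, the human resolves, and the AI documents the outcome, so the human handles only the 20% of the work that carries most of the value:

```python
case = CustomerCase("C-123")
ai_collect_context(case, "My flight was cancelled; what refund applies?")
ai_summarize_policy(case, [{"applies": True, "text": "Full refund within 24h"},
                           {"applies": False, "text": "Baggage policy"}])
packet = hand_off_to_human(case)          # human agent works from this
ai_document_resolution(case, "Refund approved by agent", satisfaction=5)
```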
The skeleton workflow above incorporates before-and-after actions for AI and keeps the connection to the human… handing the human specifically the 20% of the work that carries 80% of the value.
Finding this balance is not easy or obvious. Most people think about AI in terms of “faster” and “more efficient” point-to-point solutions, but sometimes systemic effectiveness and intangible results are more important.
Can we change the way we operate and be better, and more profitable?
Why would we do, or not do, something at all?
Where are the hidden pitfalls in the operation?
Sometimes the problem lies neither in the “legacy” technology nor in the “new shiny” one like GenAI. Technology is not the problem here.
The problem I see here is poor or careless integration of technology and business. Someone wanted to implement AI and thought they were handling a genie; AI is a great tool, but thankfully it is not self-aware just yet. (Be nice to us, Skynet!)
And by the way… if you are going to use AI, own it. Be accountable for its use in your systems, both responsibly and ethically.
Never mind the fact that in this case, it seems the “separate legal [AI] entity” was kinder and more logical than the human(s) who drafted and approved the policy. I hope the policy in the article has since been adjusted, so that we don’t have AI showing us up with a savvier and kinder approach to customer service.
Have you used an AI chatbot (as a client or in your company), and what has been your experience?
Original article here: