About the Company
British American Tobacco (BAT) is one of the world's largest manufacturers of tobacco and nicotine-containing products. Headquartered in London, the company has a presence in 180 countries and dominates the market in 50 of them. BAT has operated successfully in Ukraine since 1993.
Aiming to expand into the next-gen tobacco products niche, BAT launched a new heated-tobacco device under the glo brand. The company kicked off a massive marketing campaign to deliver a frictionless customer experience and promote the health benefits the new product offers.
But to run the campaign smoothly, the company’s salesforce needed careful supervision. And this is where Eve, the AI-powered voice robot, took the lead.
BAT introduced a special offer, providing new customers with a free glo tobacco heating device. Retail store managers kept a list of device recipients, and these customers later had to be contacted to verify their data and complete a short survey on both the product and the customer service.
The company needed to supervise its sales representatives as well – the experts who give on-site product presentations and enrich the sales funnel with leads.
In essence, BAT wanted to find out how its staff training practices affect customer satisfaction. This called for a large-scale survey leveraging key customer feedback metrics such as CSAT (customer satisfaction score) and CES (customer effort score).
However, collecting this massive amount of data would have been too costly and time-consuming for the company’s call center. At the same time, the survey had to be finely crafted so that it became another enjoyable consumer experience.
So we set out to conduct a series of CATI (computer-assisted telephone interviewing) surveys to gauge customer satisfaction at several different touchpoints:
1. To get detailed statistics on the retail campaign: how much consumer data the employees managed to gather, how many people received a promo device, and whether they were satisfied with it.
2. To collect feedback on the work of BAT’s sales reps: whether they contacted a client, managed to meet them personally, and helped them choose a device. Respondents had to rate their level of satisfaction on a scale of 1 to 10.
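To make the metrics concrete: CSAT is commonly computed as the share of respondents whose rating falls in the top of the scale. The sketch below is our own illustration of that formula on 1-to-10 ratings; the function name, threshold, and sample data are assumptions, not BAT's or Eve's actual pipeline.

```python
def csat(ratings, scale_max=10, top_box=2):
    """CSAT: percentage of respondents whose rating lands in the
    top `top_box` points of the scale (e.g. 9-10 on a 1-10 scale)."""
    satisfied = [r for r in ratings if r > scale_max - top_box]
    return 100 * len(satisfied) / len(ratings)

# Hypothetical batch of survey ratings collected during one call campaign
print(csat([10, 9, 7, 10, 4, 8, 9, 10]))  # 62.5
```

The "top-box" cut-off is a design choice: a stricter cut-off (only 10s) yields a harsher score, while a looser one (8-10) inflates it, so the threshold should stay fixed across campaigns for results to be comparable.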
Let's take a look at how we handled the first task.
Here is an example of an actual conversation, in which Eve asks the customer which device she received, whether she figured out how to use it, and whether she found a convenient place to buy glo sticks:
Eve: Hi, my name is Anya, I am a support service agent at glo. Can I have a few minutes of your time?
Customer: Yes, but will it take long? Because I'm at work... yeah, go ahead.
E: You recently received a complimentary gift from British American Tobacco - a glo tobacco heater, is that correct?
C: That's right.
E: Thank you for your answer! Which device did you receive - hyper or pro?
E: Great! Did you figure out how to use it?
C: I did, thank you!
E: Have you already started to use the device?
C: Yeah, of course, I have.
E: Very well. And have you figured out how to clean the device?
C: Sure, I clean it with a brush.
E: Did you manage to find a store where you can buy sticks?
E: Great! After you register your device on our website, the loyalty program will be activated.
Within 12 months, you will be able to exchange the device if something goes wrong. You will also have an opportunity to receive special offers from the company. In addition, I am sending you a promo code, which takes 25% off your next order on our website. And don’t forget that you can order sticks on our website or by dialing 8008. Thanks for your time, have a nice day!
As for the second task, we broke it into several pieces to identify customer satisfaction at each interaction stage. Our team crafted a set of scenarios: some for those who had already made a purchase, and others for those who had just taken an interest.
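The branching described above can be pictured as a simple lookup from funnel stage to question list. This is a toy sketch of how such routing might be wired; the question wording and stage names are illustrative stand-ins, not Eve's actual scripts.

```python
# Illustrative scenario routing: each scenario is a list of questions,
# selected by the respondent's stage in the sales funnel.
SCENARIOS = {
    "purchased": [
        "Did a sales representative contact you?",
        "Did you manage to meet in person?",
        "On a scale of 1 to 10, how helpful was the meeting?",
    ],
    "interested": [
        "Did a sales representative contact you?",
        "Were you offered a device for a test drive?",
    ],
}

def pick_scenario(has_purchased: bool) -> list[str]:
    """Select the question list matching the respondent's funnel stage."""
    return SCENARIOS["purchased" if has_purchased else "interested"]

for question in pick_scenario(has_purchased=True):
    print(question)
```

Keeping each stage's questions in a separate scenario is what lets the same robot survey buyers and merely-interested prospects without asking anyone an irrelevant question.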
Here's an example of a dialog with a customer. Eve asks if he has already spoken to a sales representative and met him in person. Then she asks whether the customer found this meeting fruitful:
And in this case, the robot asks if the company representative gave the customer a device for a test drive:
Eve vs. the call center
This time Eve got the chance to challenge real call center operators and won hands down: she outperformed their KPIs significantly while cutting call center expenses.
Here is just one example: single-day figures from the promotional campaign survey.
Eve made 1.5 times more call attempts and held 1.5 times more conversations while cutting costs by more than half.
Needless to say, Eve really cracked it: we automated performance monitoring of both the retail and non-retail salesforce.
Key results for our 3-month cooperation with BAT include:
- over 30 call campaigns
- 26,400 calls
- a 60% conversation success rate
- an average call duration of 43 seconds, meaning Eve keeps the conversation light and breezy!
So, another company was able to reap the full range of benefits the AI assistant offers. BAT entrusted us with the task of tracking customer satisfaction to effectively oversee the work of its salesforce.
We broke down this project into several subtasks and developed a separate practical approach for each. Our team created a whole series of scenarios specifically tailored for each subtask.
As a result, we collected a large amount of accurate CX data more efficiently than the call center could have.
Our collaboration with BAT has so far yielded excellent results. The AI solution allowed the company to ramp up its customer service automation while preserving unparalleled service quality.