Typical of our irrational human minds, we believe robots are going to take over the world before they can even walk. We also think chatbots are going to replace websites, apps, and everything in between before they can even hold up a conversation.
That’s not to say it won’t happen. The stats certainly make it look that way: with over a third of users preferring chatbots over apps, and a third of the world’s population — over two and a half billion people — already using messenger apps, chatbots will certainly be a big part of our future. But just how big a part of it they will be is very much up for debate.
Chatbots are essentially computer programs that automate certain tasks through conversational interfaces. They’re very much in their early stages, and, as we’ll see, they face many stumbling blocks which will have a big say in how soon or how long they take to actually become worthwhile investments for businesses.
We’re going to dive into the current chatbot landscape by assessing them across the three major stages of the inbound marketing funnel: connecting, understanding, and delivering.
Connect: Engaging users in a conversation
If there’s one part of the funnel where chatbots really deliver, it’s in getting users engaged. Chatbots have immense potential in allowing businesses to automate one-to-one interactions, not least for small businesses that lack the capacity to hire a team of customer service reps.
What this essentially comes down to is the speed at which chatbots can grab users’ attention and then, hopefully, keep it. It’s marketing 101 that the quicker you can engage your users, the better. The difference with chatbots, though, is that you’re engaging them in a way they associate with personal, one-on-one interactions.
The result is that customers feel more attended to; and since conversational interfaces also offer less friction and more seamless information delivery, users are much less likely to bounce mid-chat and much more likely to stick around.
At least in theory. Speed is certainly a virtue of the chatbot, but it can quickly be trumped if there is a lack of transparency in the service.
To put it simply, bots don’t work when they pretend to be human. Yes, you’re having a human-like conversation, but that doesn’t mean the fact that you’re talking to a bot should be masked. Doing so sets up high expectations, which, when the bot fails to interact as naturally as a human would, can come crashing down hard and cause an irreversible loss of trust.
Understand: Having an actual conversation
With a name like chatbot, you wouldn’t be crazy to think that you could have an actual conversation with one of these things. And you will be able to, eventually. What’s possible today, though, is another matter, and as there are endless examples of failed conversations with chatbots, we’ll just say there’s hope for the future.
But why do chatbots currently suck so much? Well, although some bots with meticulously written if-then scripts or sophisticated natural language capabilities can mimic human-to-human interaction, it only takes a few minutes of chatting to begin to see through the charade.
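To make the if-then approach concrete, here is a minimal sketch of how such a scripted bot works. The rules, replies, and function name are invented for illustration, not taken from any production bot:

```python
import re

def rule_based_reply(message: str) -> str:
    """Match whole-word keywords against a fixed rule list and return a canned reply."""
    tokens = set(re.findall(r"[a-z']+", message.lower()))
    # Each rule pairs trigger keywords with a pre-written response.
    rules = [
        ({"book", "reservation"}, "Sure - what dates are you looking at?"),
        ({"price", "cost", "rate"}, "Rooms start at $89 per night."),
        ({"hello", "hi"}, "Hi there! How can I help you today?"),
    ]
    for keywords, reply in rules:
        if keywords & tokens:
            return reply
    # The charade breaks here: anything off-script gets a generic fallback.
    return "Sorry, I didn't catch that. Could you rephrase?"
```

The fallback line is where users see through the bot: any question the script didn’t anticipate collapses into the same generic response.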
Let’s say you’re talking to a bot about booking a stay in a hotel for you and your parents. Past the dates and the basic facilities, you want to know if the elevators are working well and if the restaurant offers smaller portions. Such context-specific queries are where bots tend to fail most; as they can only carry information for one or two chat bubbles, they have a very hard time deducing the user’s intent and delivering anything other than generic, pre-set responses.
The big takeaway for bots here is that they are effective when applied toward fulfilling one task, say, arranging a booking. But they fall down when we expect them to do everything. To bring context into a conversation, chatbots need not only advanced AI and natural language technology, but also to be hooked up to a business’s larger ecosystem. In this way, they can operate in a more human-like manner, drawing in information from other parts of the business, learning from past interactions, and knowing more and more about who they’re actually having conversations with.
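A rough sketch of what “hooked up to the ecosystem” could look like: intent detection combined with a lookup into the business’s customer records. The intents, the `CRM` store, and every name here are hypothetical; a real deployment would use an NLU service and live APIs rather than keyword matching and a dictionary:

```python
from dataclasses import dataclass, field

@dataclass
class GuestContext:
    name: str
    past_requests: list = field(default_factory=list)

# Stand-in for the wider business systems a bot could draw on.
CRM = {"alice": GuestContext("alice", past_requests=["ground-floor room"])}

def classify_intent(message: str) -> str:
    """Crude keyword-based intent detection; a real bot would use an NLU model."""
    text = message.lower()
    if "book" in text or "reserve" in text:
        return "make_booking"
    if "elevator" in text or "portion" in text:
        return "facility_question"
    return "unknown"

def reply(user: str, message: str) -> str:
    intent = classify_intent(message)
    context = CRM.get(user)
    if intent == "make_booking" and context and context.past_requests:
        # Context turns a generic reply into a personalised one.
        return f"Happy to book that. Last time you asked for a {context.past_requests[0]} - again?"
    if intent == "facility_question":
        return "Let me check with the hotel and get back to you."
    return "Could you tell me a bit more?"
```

The point of the sketch is the shape, not the matching logic: once the bot can pull in a guest’s history, the same intent produces a personalised answer instead of a pre-set one.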
Deliver: Giving the user what they came for
As mentioned above, chatbots are great at achieving simple things, like ordering a taxi or getting movie recommendations without leaving your messaging app. And since they’re not human, nor would we ever want them to be, this is likely where they’ll continue to thrive in the future.
The fact is, when it comes to complex and higher-risk interactions, we want to speak with humans. Or, at the very least, when it comes to fulfilling the action, we want to do it with one of our own. A good example is buying expensive goods online; according to Salesforce, in such cases only nine percent of respondents found a chatbot useful, with thirty percent worrying that the bot would make a mistake.
That said, we don’t always want to talk to a human the moment something gets a little complex. When we have problems with banking, for instance, we generally just want them sorted out as quickly as possible — as LV found when its chatbot reduced calls by over ninety percent. Likewise when doing taxes, paying bills, and handling anything else that’s less of an experience and more of a chore: whether chatbot or human, we don’t care how it’s fulfilled, as long as it is.
Chatbots that know their place and their limitations are warmly welcomed by users and businesses alike. Anyone would much prefer a bot that does one thing really well to one that does many things poorly. Unfortunately, faced with the task of anticipating so many potential use cases and inputs, even completing simple tasks without hiccups is beyond where most bots are today.
This doesn’t mean chatbots should be avoided at all costs until AI reaches a certain level, but it does mean we need to use our very human creativity and discernment to work out how to get the best out of them and make them better.