Everyone involved in the tech industry has an idea of what a computer-based development team looks like.
- Developers write the code that makes tech projects work, whether they’re websites, apps or platforms
- DevOps engineers set up and manage the technical infrastructure, along with security and architecture planning
- UX designers make sure the websites, apps and platforms are easy to use, logically organised and actually help users complete their tasks in a sensible way
Over the years, this has worked well. We tell computers what to do and, provided we’ve put enough thought and effort into what we’re telling them, they do it. The output depends entirely on the input – there’s no dialogue involved and it’s very much a workman/tools relationship.
But that’s all about to change – advances in cognitive computing and the growing popularity of chatbots and AI have caused a shift in how we work with computers. Now it’s a two-way conversation with a coworker rather than a servant, and it needs a specific set of skills which aren’t usually found in traditional development teams.
More dialogue coach than developer
While all the usual disciplines still apply for developing cognitive projects – the fundamentals of development still need to be there before any of the ‘smart’ stuff can happen – the fact that we’re dealing with ‘learning’ machines brings in a whole new set of requirements.
It’s a question of conversation – for smart apps, chatbots and so on to access the right data from the huge quantities of information available to them, you need to ask them the right questions. This involves using ‘soft’ skills to identify all the nuances of human language to ensure you build the chatbot to be able to deliver a contextually correct, useful response. Using a delivery chatbot as an example, here are some of the issues to consider:
- Learning the language – working out how people might prefer to ‘talk’ to chatbots, from slang to emojis. Syntactically, there’s a lot of difference between ‘Plz can u tell me where my package no.0721 is’ and ‘I was due a delivery today at 4.30 – the number is 0721. Can you update me on progress?’ – though they both mean the same thing and would get the same answer.
- Context – though there have been massive advances in the capabilities of chatbots, they aren’t yet at the stage where they can pass the Turing test on their own. They need to be educated in what words mean within the specific set of functions they have to perform – for example, ‘where’s my apple delivery’ will mean two entirely different things to a delivery assistant for a grocer and one for a technical hardware store.
- Sentiment – a delivery chatbot should have no problems dealing with straightforward queries about delivery progress, drop-off times, order updates and so on. But there will be times when users will express a feeling or make a request which is outside their capabilities. The ability to recognise sentiment – differentiating between an enquiry about a package and a complaint about poor service, for example – will enable the chatbot to hand off users to human customer service staff rather than having to say ‘sorry, I don’t understand’, which can be incredibly frustrating for an already-dissatisfied customer.
We need a new breed of UX specialists. Somewhere between copywriter and UX designer, they will not only understand the nuts and bolts of working with cognitive systems, but also have the linguistic capabilities and insight into user behaviour to develop meaningful conversations with smart computers. It will always matter how your site or app looks to your users – but increasingly it will matter how it talks to them, too.