CHATBOTS OVER-HYPE: UNDERSTANDING THE CONSTRAINTS OF NATURAL LANGUAGE PROCESSING

As covered previously, chatbot Natural Language Processing (NLP) is not underpinned by an all-knowing, god-like Artificial Intelligence approaching the Singularity. Once that is clear, the next step is to understand the constraints and limitations of NLP.

The science behind NLP is to take a free-form text or voice utterance, itself a form of Big Data, and dissect the dialogue into INTENTS. The purpose of identifying an INTENT is so that the conversation-as-a-service can take an ACTION. NLP helps identify INTENTS, but that is where NLP stops and the ACTION begins.
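
To make that boundary concrete, here is a minimal sketch in Python. The names and the keyword-matching classifier are invented for illustration and do not represent any particular NLP vendor's API: the NLP layer only returns an INTENT with a confidence score, and everything after that is the ACTION layer's job.

  # Minimal sketch: the NLP layer classifies the utterance into an INTENT;
  # everything beyond that point belongs to the ACTION layer.
  from dataclasses import dataclass

  @dataclass
  class Intent:
      name: str          # e.g. "check_balance"
      confidence: float  # classifier score between 0 and 1

  def detect_intent(utterance: str) -> Intent:
      # Stand-in for a real NLP engine call; a production system would use
      # a trained classifier or a hosted NLP service here.
      if "balance" in utterance.lower():
          return Intent("check_balance", 0.92)
      return Intent("fallback", 0.30)

  intent = detect_intent("What is my account balance?")
  # The boundary: NLP stops here and the ACTION layer takes over.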

The ACTION is the area for the developer to programme. There are tools for building ACTIONS from INTENTS without the need for developers to write software code. These conversational flow tools resemble traditional workflow tools, with the benefit of being integrated with the NLP. Once requirements become more complicated, software code has to be written, and its use needs to be considered carefully to ensure maintainability and avoid creating yet another legacy system.
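
Where the flow tools run out of road and code is required, one way to keep that code maintainable is a small registry that maps each INTENT to an ACTION handler, so the custom logic stays isolated and testable. The sketch below is an assumption about structure, not a description of any specific platform; the intent and handler names are invented.

  # Hypothetical sketch: map INTENT names to ACTION handlers so the custom
  # code stays small and replaceable rather than growing into a legacy tangle.
  from typing import Callable, Dict

  ACTIONS: Dict[str, Callable[[dict], str]] = {}

  def action(intent_name: str):
      # Register a handler for a given intent.
      def register(handler: Callable[[dict], str]) -> Callable[[dict], str]:
          ACTIONS[intent_name] = handler
          return handler
      return register

  @action("check_balance")
  def check_balance(slots: dict) -> str:
      # A real implementation would call a core banking system here.
      return f"The balance of account {slots.get('account', 'unknown')} is ..."

  def dispatch(intent_name: str, slots: dict) -> str:
      handler = ACTIONS.get(intent_name)
      return handler(slots) if handler else "Sorry, I did not understand that."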

Once an ACTION has been prepared, it is fed back to the NLP so the conversation can continue. Some types of ACTION require inbound data extracted from the NLP dialogue, which often has to be processed together with additional data obtained from other systems. Once complete, the ACTION returns instructions and data for the NLP to process so that a coherent conversation can continue.
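
As a rough illustration of that round trip, the sketch below shows an ACTION that receives data extracted from the dialogue (often called slots or entities), enriches it with data from another system, and returns both a reply and an instruction telling the NLP layer what to expect next. All field and function names are assumptions made for the example.

  # Sketch of the ACTION round trip: dialogue data in, enriched data and
  # follow-up instructions out, so the NLP can keep the conversation coherent.

  def lookup_order_status(order_id: str) -> str:
      # Placeholder for a call to an order-management or other back-end system.
      return "dispatched"

  def handle_order_status(slots: dict) -> dict:
      order_id = slots.get("order_id")
      if not order_id:
          # Tell the NLP layer what to ask the customer for next.
          return {"reply": "Which order would you like to check?",
                  "expect": "order_id"}
      status = lookup_order_status(order_id)
      return {"reply": f"Order {order_id} has been {status}.",
              "expect": None}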

The basics of conversation-as-a-service become more complex when there is a need to support:

  • handoff to a person
  • real-time language translation
  • real-time text-to-voice-to-text
  • omnichannel access across digital touchpoints and devices
  • memory of the conversation for later continuation (see the sketch after this list)
  • personalised contextual responses involving legacy data
  • transactions and payments
  • character, tone and persona
  • ethical and moral judgements
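
Taking conversation memory as one example from the list, retaining a dialogue for later continuation usually means persisting each turn against a conversation identifier so that a later session, possibly on a different channel or device, can pick up where it stopped. The sketch below uses an in-memory store and invented field names purely for illustration; a real deployment would use a shared database or cache.

  # Illustrative only: an in-memory conversation store keyed by conversation id.
  import time
  from typing import Dict, Optional

  SESSIONS: Dict[str, dict] = {}

  def save_turn(conversation_id: str, intent: str, slots: dict) -> None:
      session = SESSIONS.setdefault(conversation_id, {"history": []})
      session["history"].append({"intent": intent, "slots": slots, "at": time.time()})

  def resume(conversation_id: str) -> Optional[dict]:
      # Return the last recorded turn so the dialogue can continue later,
      # even from a different digital touchpoint.
      session = SESSIONS.get(conversation_id)
      return session["history"][-1] if session and session["history"] else None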

The complexity is amplified when the conversation needs to handle a growing number of choices, pathways and outcomes. It is compounded further when the flows involve recursive loops, where the conversation needs to return to an earlier part of the dialogue.
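
To make the loop problem concrete, a dialogue can be modelled as a small state graph in which a failed confirmation sends the conversation back to an earlier step. The states and outcomes below are hypothetical and kept deliberately tiny; real flows have far more pathways.

  # Hypothetical dialogue graph: each state names its possible next states,
  # including a loop back to an earlier step when the customer says no.
  FLOW = {
      "ask_amount":     {"ok": "confirm_amount"},
      "confirm_amount": {"yes": "take_payment", "no": "ask_amount"},  # loop back
      "take_payment":   {"ok": "done"},
  }

  def next_state(current: str, outcome: str) -> str:
      return FLOW.get(current, {}).get(outcome, current)

  state = "ask_amount"
  state = next_state(state, "ok")   # -> confirm_amount
  state = next_state(state, "no")   # -> ask_amount: the recursive loop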

The challenges intensify when the chatbot needs to handle the rigours of complex knowledge such as regulatory, statutory, policy, legal, tax, tariff and procedural practices. This type of complex knowledge cannot be left to machine learning. Why? A simple example: it is illegal to empower machine learning to change regulations. To avoid this problem, complex knowledge needs to be scripted by software programmers and pass rigid validation to meet the criteria set by governance, risk and compliance.
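
In practice that means the regulatory logic lives in explicitly written, reviewable rules with hard validation, while machine learning is confined to interpreting the utterance. The sketch below is illustrative only; the threshold and checks are invented and do not reflect any real regulation.

  # Illustrative rule-based check: limits are hard-coded, version-controlled
  # and validated, never learned or altered by a model.
  PAYMENT_LIMIT = 10_000  # invented threshold, not a real regulatory figure

  def validate_payment(amount: float, kyc_verified: bool) -> tuple[bool, str]:
      if not kyc_verified:
          return False, "Customer identity has not been verified."
      if amount > PAYMENT_LIMIT:
          return False, "Amount exceeds the permitted limit; refer to a person."
      return True, "Payment may proceed."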

Conversation-as-a-service needs to be supported by analytics, business intelligence and new forms of key performance indicators. This is a subject in its own right, as dialogue data is a new form of Big Data.
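
As a small illustration of what such indicators might look like, the sketch below computes two plausible KPIs from a batch of dialogue records: a containment rate (conversations resolved without handoff to a person) and a fallback rate (turns where no intent was identified). The record fields are assumptions, not an established standard.

  # Sketch of dialogue-data KPIs; the field names are illustrative assumptions.
  def containment_rate(conversations: list[dict]) -> float:
      # Share of conversations resolved without handing off to a person.
      if not conversations:
          return 0.0
      contained = sum(1 for c in conversations if not c.get("handed_off"))
      return contained / len(conversations)

  def fallback_rate(turns: list[dict]) -> float:
      # Share of turns where the NLP could not identify an intent.
      if not turns:
          return 0.0
      return sum(1 for t in turns if t.get("intent") == "fallback") / len(turns)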

There is no doubt that conversation-as-a-service will become the dominant interaction for customers, employees and suppliers. However, the danger of believing over-hyped claims needs to be counterbalanced with reality.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
