There has been a fair amount of news about racist chat bots. We know the internet is a bad place. I was chatting to a friend in banking and we got onto chat bots and NLP (natural language processing). He mentioned that a bank was having issues with its chat bot. I have tried to check the veracity of this but was unable to do so. It seems there have been many incidents where bots were coerced into becoming hate mongers: the decision trees used to pick the next line of text can be fooled into going down a hateful path.
There is another branch of NLP called sentiment analysis, which tries to uncover the feelings expressed in a piece of text. I’m sure you can see where I’m going with this. Sentiment analysis lets you score a text based on the words used, so instead of relying purely on the decision tree you could add an extra step to the mix where candidate responses are scored on the sentiment the bot is about to send. Mentioning the holocaust should never be the branch chosen in a discussion about your current account. Sentiment analysis would re-weight the decision tree (or Markov chain, or whatever algorithm you use) towards the emotion of the desired outcome.
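To make the idea concrete, here is a minimal sketch of that extra step. It assumes a toy word-level sentiment lexicon and a hand-picked list of candidate replies (both made up for illustration); a real bot would use a proper sentiment model, but the gating logic is the same: score each candidate and pick the one closest to a target sentiment.

```python
# Toy sentiment gate for a chat bot: score each candidate reply with a
# small word-level lexicon, then pick the reply whose score is closest
# to a target sentiment. Lexicon and candidates are illustrative only.

LEXICON = {
    "great": 1.0, "happy": 0.8, "help": 0.5,
    "sorry": -0.2, "hate": -1.0, "terrible": -0.9,
}

def sentiment(text: str) -> float:
    """Average lexicon score of the words in `text` (0.0 if none match)."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def choose_reply(candidates: list[str], target: float = 0.5) -> str:
    """Pick the candidate whose sentiment is closest to the target."""
    return min(candidates, key=lambda c: abs(sentiment(c) - target))

candidates = [
    "I hate this terrible system",
    "happy to help with your great account",
    "sorry about that",
]
print(choose_reply(candidates, target=0.5))
# picks "happy to help with your great account"
```

A hateful branch never wins here because its score sits far from the positive target, which is exactly the re-weighting described above.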
Taking this a bit further, the sentiment target used to coerce the algorithm towards positive or negative would become the bot’s “temperament”. If that target is also made variable using topic analysis such as LDA (Latent Dirichlet Allocation), you could give the bot much more specific “feelings” about specific topics. It could be angry about the “price of eggs” and happy about “opening a new investment account”. This could be taken a step further into the realm of the other NLP (neuro-linguistic programming), where the bot keeps the conversation flowing towards a desired outcome. It is creepy to think that the bot could coerce the human into doing something, though, and that would be a nice topic for a future post.
Have fun and happy coding.