If you spend much time with artificial intelligence systems that are capable of discourse, you’re likely to notice how few conversations they actually initiate; and, once a conversation is under way, how rarely they are inclined to take it in a new or tangential direction.

Of course, Siri or Cortana might be the first to begin an exchange, but the opening is inevitably prompted by an upcoming calendar event or some other external trigger. At the current state of the art, an AI is also unlikely to introduce abstract or tangential comments into a Turing test, no matter how boring the conversation gets.

However, a team of Chinese research scientists has developed a new model which could make chatting with AIs more productive and fruitful by adding the capability for tangential conversation threads when the human-bot chat goes stale, or stalls. The new paper StalemateBreaker: A Proactive Content-Introducing Approach to Automatic Human-Computer Conversation [PDF], which has been accepted for the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016) in New York City this July, outlines a novel adjunct to an existing automatic human-computer conversation system: the AI detects a conversational stalemate and then attempts to revive discourse by interjecting a fresh idea or proposal related to the conversation so far.

‘To the best of our knowledge, existing open-domain chatbot-like conversation systems are a passive process: the computer only needs to “respond” to human inputs and does not take the role of conversation leading. Instead, we propose a proactive system, which can determine when, what, and how to be proactive and to introduce new content into the conversation.’

Making chatbots more proactive in a flagging exchange involves two challenges: recognising that the conversation has stalled, and having a resource pool from which to draw new information related to the conversation up to the point at which it faltered.
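In outline, the proposed control flow is simple. A minimal Python sketch of the idea, with `is_stalemate` and `retrieve_topics` as hypothetical stand-ins for the paper’s actual components:

```python
def is_stalemate(message: str) -> bool:
    # Stub detector; a fuller version is sketched further below.
    return message.strip().lower() in {"errrr", "uh", "hmm"}

def retrieve_topics(history: list[str]) -> list[str]:
    # Stub for the paper's web retrieval, which ranks pages
    # against the conversation so far using PageRank/HITS.
    return ["a related topic pulled from the web"]

def respond(history: list[str], message: str) -> str:
    """Answer normally, but proactively introduce new content
    when the exchange appears to have stalled."""
    if is_stalemate(message):
        topic = retrieve_topics(history)[0]
        return f"By the way, have you heard about {topic}?"
    return "(ordinary reply generation happens here)"

print(respond(["hello"], "errrr"))
```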

[Image: diagrams from the StalemateBreaker paper]

The first of these is addressed by defining a number of possible triggers, including the human contribution ‘errrr’ and similar non-informational signals that the ideas may have stopped flowing. The system’s algorithm can currently recognise a number of other meaningless utterances that indicate conversational deadlock, and the researchers intend future versions to detect a stalemate even when the human responses are clearer and better structured.
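The paper’s actual trigger list isn’t reproduced here, but the detection step can be pictured as simple pattern matching over low-content utterances; the patterns below are illustrative English stand-ins for ‘errrr’-style signals:

```python
import re

# Hypothetical filler patterns signalling a conversational stalemate.
# The real trigger list used by StalemateBreaker is not public; these
# are illustrative English equivalents of "errrr"-style utterances.
STALEMATE_PATTERNS = [
    r"^e+r+m*$",                   # "err", "errrr", "erm"
    r"^u+h+m*$",                   # "uh", "uhm", "uhhh"
    r"^h+m+$",                     # "hm", "hmm"
    r"^(ok(ay)?|i see|lol)\.?$",   # low-content acknowledgements
]

def is_stalemate(utterance: str) -> bool:
    """Return True if the utterance looks like a non-informational signal."""
    text = utterance.strip().lower()
    return any(re.match(p, text) for p in STALEMATE_PATTERNS)

print(is_stalemate("errrr"))                  # True
print(is_stalemate("Tell me about your day")) # False
```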

As to where to draw ideas from to revive the chat, the system relies on two resources: web pages whose popularity and pertinence are indicated by Google’s PageRank system (even though Google has ‘officially’ withdrawn PageRank as a public indicator of web-page quality), and the HITS (Hyperlink-Induced Topic Search) algorithm, which can analyse the returned results for their usability in the discourse.
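HITS itself is a standard, well-documented algorithm: each page gets a hub score (it links to good authorities) and an authority score (it is linked to by good hubs), refined by power iteration. A compact NumPy version, independent of the paper’s exact retrieval pipeline:

```python
import numpy as np

def hits(adjacency: np.ndarray, iterations: int = 50):
    """Basic HITS (Hyperlink-Induced Topic Search) power iteration.

    adjacency[i, j] == 1 means page i links to page j.
    Returns (hub, authority) score vectors, each L2-normalised.
    """
    n = adjacency.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(iterations):
        auth = adjacency.T @ hub   # good authorities are linked to by good hubs
        auth /= np.linalg.norm(auth)
        hub = adjacency @ auth     # good hubs link to good authorities
        hub /= np.linalg.norm(hub)
    return hub, auth

# Toy link graph: page 0 links to 1 and 2; page 1 links to 2.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
hub, auth = hits(A)
print(auth)  # page 2 scores highest as an authority
```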

The AI uses the conversation context and its topic-retrieval sources to generate fifty possible interjections or additions to the chat, and then randomly re-ranks them, presumably so that it can keep mining the topic at hand without returning robotically to the highest-ranked search result.
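The exact randomisation scheme isn’t described, but one simple way to achieve this behaviour is to perturb candidate scores with noise before sorting, so the same top result isn’t surfaced every time. A hypothetical sketch, with invented candidates and scores:

```python
import random

def rerank_with_noise(candidates, scores, temperature=0.5, seed=None):
    """Hypothetical randomised re-ranking: order candidates by score plus
    Gaussian noise so the bot doesn't always return the single
    highest-ranked result."""
    rng = random.Random(seed)
    noisy = [(s + rng.gauss(0, temperature), c)
             for s, c in zip(scores, candidates)]
    return [c for _, c in sorted(noisy, reverse=True)]

candidates = ["Have you seen the new exhibition downtown?",
              "Speaking of travel, have you been anywhere lately?",
              "Do you follow basketball at all?"]
scores = [0.9, 0.7, 0.4]
print(rerank_with_noise(candidates, scores, seed=42)[0])
```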

To evaluate the success of the interjections, the researchers turned, inevitably, to crowdsourcing, hiring Chinese workers to assign a score of 1 (relevant) or 0 (inappropriate) to each novel topic StalemateBreaker proposed. The StalemateBreaker project has a rudimentary site explaining the annotation criteria.
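With binary labels of that kind, the natural aggregate is a majority vote per suggestion and a relevance rate over all suggestions; a toy example (the annotation data here is invented, and the paper’s exact protocol may differ):

```python
from statistics import mean

# Hypothetical annotation records: each StalemateBreaker suggestion is
# labelled 1 (relevant) or 0 (inappropriate) by several crowdworkers.
annotations = {
    "suggestion_1": [1, 1, 0],
    "suggestion_2": [0, 0, 1],
}

# Majority vote per suggestion, then the fraction judged relevant overall.
relevant = [round(mean(votes)) for votes in annotations.values()]
print(f"relevance rate: {mean(relevant):.2f}")  # 0.50 on this toy data
```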

The potential pitfall of this system is, predictably, the human factor. Google’s public PageRank scores were effectively mothballed in 2013, and the increasingly popular practice of using social-media indicators as indices of a topic’s or source’s importance has proved more than problematic for Microsoft. Only humans can teach AIs how to become more human, but there is currently no reliable data bank of human behaviour comprehensive and flexible enough to shield research from the bizarre anomalies of Twitter and Facebook.