School of Engineering


Pre-processing techniques for end-to-end trainable RNN-based conversational AI

Spoken dialogue system interfaces are gaining increasing attention, with examples including Apple’s Siri, Amazon’s Alexa, and numerous other products. Yet most existing solutions remain heavily data-driven and face limitations in integrating and handling data semantics: they rely mainly on statistical co-occurrences in the training dataset and lack a deeper integration of semantically structured information such as knowledge graphs. This paper evaluates the impact of performing knowledge base integration (KBI) to regulate the dialogue output of a deep learning conversational system. More specifically, it evaluates whether integrating dependencies between the data, obtained through semantic linking against an external knowledge base (KB), helps improve conversational quality. To do so, we compare three conversation preprocessing methods: i) No KBI: conversational data with no external knowledge integration; ii) All Predicates KBI: conversational data in which each dialogue pair is augmented with all of its linked predicates from the domain KB; and iii) Intersecting Predicates KBI: conversational data in which dialogue pairs are augmented only with their intersecting predicates (to filter out potentially useless or redundant knowledge). We also vary the amount of dialogue history considered, ranging from 0% (the last dialogue pair only) to 100% (all dialogue pairs, from the beginning of the dialogue). To our knowledge, this is the first study to evaluate knowledge integration in the preprocessing phase of conversational systems. Results are promising and show that knowledge integration, with an amount of history ranging between 10% and 75%, generally improves conversational quality.
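The three preprocessing variants and the history knob can be sketched as follows. This is a minimal illustration only: the abstract does not specify the paper's entity-linking pipeline, so the string-match `linked_predicates` linker, the function names, and the augmentation format (appending predicates to the user turn) are all hypothetical assumptions.

```python
# Hypothetical sketch of the three KBI preprocessing variants from the abstract:
# "none" (no KBI), "all" (All Predicates KBI), "intersect" (Intersecting Predicates KBI),
# combined with truncation of the dialogue history to a given percentage.
from typing import Dict, List, Set, Tuple

DialoguePair = Tuple[str, str]  # (user utterance, system response)


def truncate_history(pairs: List[DialoguePair], history_pct: float) -> List[DialoguePair]:
    """Keep the last dialogue pair plus history_pct of the preceding pairs
    (0.0 -> last pair only, 1.0 -> the whole dialogue)."""
    if not pairs:
        return []
    history = pairs[:-1]
    keep = round(len(history) * history_pct)
    return history[len(history) - keep:] + [pairs[-1]]


def linked_predicates(utterance: str, kb: Dict[str, Set[str]]) -> Set[str]:
    """Toy entity linker (assumption): collect the KB predicates of every
    entity whose name appears verbatim in the utterance."""
    preds: Set[str] = set()
    for entity, entity_preds in kb.items():
        if entity.lower() in utterance.lower():
            preds |= entity_preds
    return preds


def preprocess(pairs: List[DialoguePair], kb: Dict[str, Set[str]],
               mode: str = "none", history_pct: float = 1.0) -> List[DialoguePair]:
    """Augment each kept dialogue pair with linked predicates according to mode."""
    out: List[DialoguePair] = []
    for user, system in truncate_history(pairs, history_pct):
        if mode == "none":
            out.append((user, system))
            continue
        p_user = linked_predicates(user, kb)
        p_sys = linked_predicates(system, kb)
        # "all" takes the union of linked predicates; "intersect" keeps only
        # predicates shared by both turns, filtering out redundant knowledge.
        preds = (p_user | p_sys) if mode == "all" else (p_user & p_sys)
        out.append((user + " " + " ".join(sorted(preds)), system))
    return out
```

For example, with a toy KB mapping `"paris"` to `{"capitalOf", "locatedIn"}`, `mode="intersect"` augments a pair only with predicates linked from both the user and system turns, while `history_pct=0.0` reduces the input to the last dialogue pair, matching the 0% history setting described above.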


Copyright 1997–2021 Lebanese American University, Lebanon.