Data-Driven Dialogue State Tracking using (Specialised) Word Embeddings

Ivan Vulic

One of the core components of modern spoken dialogue systems is the dialogue state tracker, which estimates the user's goal at every step of the dialogue. However, most current approaches require large amounts of domain-specific training data and therefore have difficulty scaling to larger, more complex dialogue domains, as well as to other languages. In this talk, we show how to leverage collections of word embeddings pre-trained on large general-domain corpora to facilitate dialogue state tracking across different domains and languages. We present the recent Neural Belief Tracking (NBT) framework, which overcomes data sparsity problems by building on recent advances in representation learning. NBT models reason over pre-trained, semantically specialised word embeddings, learning to compose them into distributed representations of user utterances and dialogue context. We also analyse how the properties of the underlying embedding spaces impact model performance, and how the fact that the proposed model operates purely over word embeddings allows immediate deployment of dialogue state tracking models in other languages. Finally, we present recent improvements over the initial NBT framework which 1) allow integration of language-specific morpho-syntactic knowledge into the belief tracker for enhanced tracking performance in morphologically richer languages, 2) offer better support for modelling infrequent slot-value pairs, and 3) replace the hand-crafted, rule-based update of the user's goals with a learned one, resulting in a fully statistical neural dialogue state tracker.
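To make the central idea concrete, the sketch below illustrates, in simplified form, how a belief tracker can operate purely over pre-trained word embeddings: word vectors are composed into an utterance representation and matched against slot-value candidates. This is a toy illustration under invented assumptions (4-dimensional hand-made vectors, mean pooling, additive slot-value composition, cosine scoring), not the actual NBT implementation, which uses semantically specialised pre-trained embeddings and learned composition and decision layers.

```python
import numpy as np

# Toy embedding table; real NBT models use large pre-trained, semantically
# specialised word vectors rather than these hand-made 4-dimensional ones.
EMBEDDINGS = {
    "cheap":       np.array([0.9, 0.1, 0.0, 0.0]),
    "inexpensive": np.array([0.8, 0.2, 0.1, 0.0]),
    "restaurant":  np.array([0.0, 0.9, 0.1, 0.2]),
    "price":       np.array([0.7, 0.0, 0.3, 0.1]),
    "north":       np.array([0.0, 0.1, 0.9, 0.3]),
}

def embed_utterance(tokens):
    """Compose word vectors into an utterance representation (here: mean pooling)."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

def score_candidate(utterance_vec, slot, value):
    """Score a (slot, value) candidate by cosine similarity with the utterance vector."""
    cand_vec = EMBEDDINGS[slot] + EMBEDDINGS[value]  # simple additive composition
    denom = np.linalg.norm(utterance_vec) * np.linalg.norm(cand_vec)
    return float(utterance_vec @ cand_vec / denom) if denom else 0.0

utterance = "i want an inexpensive restaurant".split()
u = embed_utterance(utterance)
for value in ("cheap", "north"):
    print(f"price={value}: {score_candidate(u, 'price', value):.3f}")
```

Because the only link between the tracker and the input language is the embedding table, swapping in embeddings for another language (or a cross-lingual embedding space) is, in principle, enough to port the tracker, which is the property the talk exploits for multilingual deployment.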

