Knowledge-Grounded Response Generation with Deep Attentional Latent-Variable Model

This paper was published in the 7th Dialog System Technology Challenge (DSTC7) in the Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019 - DSTC7).

NOTE: Also published in Computer Speech and Language (Journal, 2020)

Full paper: Here

End-to-end dialogue generation has achieved promising results without handcrafted features or attributes specific to each task and corpus. However, a serious drawback of such approaches is that they tend to generate uninformative utterances, which limits their use in real-world conversational applications. This paper attempts to generate diverse and informative responses with a variational generation model that contains a joint attention mechanism conditioned on information from both dialogue contexts and external knowledge.
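The core idea of the joint attention mechanism can be illustrated with a minimal sketch: the decoder state attends separately over encoded dialogue-context tokens and encoded knowledge snippets, and the two attended summaries are fused to condition generation. All function names, dimensions, and the concatenation-based fusion below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(query, memory):
    # scaled dot-product attention: query (d,), memory (n, d)
    scores = memory @ query / np.sqrt(query.shape[-1])
    weights = softmax(scores)
    return weights @ memory, weights

def joint_attention(decoder_state, context_mem, knowledge_mem):
    # attend over dialogue context and external knowledge separately,
    # then fuse the two summaries (concatenation is a hypothetical choice)
    c_ctx, _ = attend(decoder_state, context_mem)
    c_kb, _ = attend(decoder_state, knowledge_mem)
    return np.concatenate([c_ctx, c_kb])

rng = np.random.default_rng(0)
d = 8
state = rng.normal(size=d)           # current decoder hidden state
ctx = rng.normal(size=(5, d))        # encoded dialogue-context tokens
kb = rng.normal(size=(3, d))         # encoded knowledge snippets
fused = joint_attention(state, ctx, kb)
print(fused.shape)  # (16,)
```

The fused vector would then feed the decoder at each step, letting generation draw on both the conversation history and grounding knowledge.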


Shang-Yu Su
PhD candidate at National Taiwan University; research interests cover Natural Language Processing and Dialogue Systems.