
Dynamically Context-Sensitive Time-Decay Attention for Dialogue Modeling

This paper was published in the Proceedings of the 44th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2019), Brighton, U.K., May 12-17, 2019.

Full paper: arXiv

Spoken language understanding (SLU) is an essential component of conversational systems. Because dialogue contexts provide informative cues for understanding, conversation history can be leveraged for contextual SLU. However, most prior work attended only to the related content of history utterances and ignored their temporal information. In dialogues, the most recent utterances are intuitively more important than earlier ones, so time-aware attention should decay with temporal distance. Therefore, this paper allows the model to automatically learn a time-decay attention function based on the content of each role's contexts, which effectively integrates both content-aware and time-aware perspectives and demonstrates remarkable flexibility for complex dialogue contexts. Experiments on the benchmark Dialogue State Tracking Challenge (DSTC4) dataset show that the proposed role-based context-sensitive time-decay attention mechanisms significantly improve over the state-of-the-art model on contextual understanding.
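To make the idea concrete, the PyTorch sketch below illustrates one way such a mechanism could look: a decay rate is predicted from the current utterance encoding, a convex decay curve over temporal distance is derived from it, and the decayed scores modulate ordinary content-based attention over history utterances. This is a minimal sketch under my own assumptions, not the paper's exact formulation; the class name, the softplus parameterization, and the multiplicative combination of content and decay scores are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextSensitiveTimeDecayAttention(nn.Module):
    """Illustrative sketch: attention over history utterances whose
    time-decay rate is predicted from the current utterance content.
    (Hypothetical implementation, not the authors' released code.)"""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Predicts a positive decay rate from the current utterance encoding.
        self.decay_net = nn.Linear(hidden_dim, 1)

    def forward(self, cur: torch.Tensor, history: torch.Tensor,
                distances: torch.Tensor) -> torch.Tensor:
        # cur:       (batch, hidden)     encoding of the current utterance
        # history:   (batch, T, hidden)  encodings of T history utterances
        # distances: (batch, T)          temporal distance of each history utterance
        rate = F.softplus(self.decay_net(cur))                # (batch, 1), strictly positive
        decay = 1.0 / (distances * rate + 1.0)                # convex decay in distance
        content = torch.einsum("bh,bth->bt", cur, history)    # content relevance scores
        weights = F.softmax(content * decay, dim=-1)          # fuse content- and time-aware cues
        return torch.einsum("bt,bth->bh", weights, history)   # weighted context summary
```

Because the decay rate is a function of the current utterance rather than a fixed hyperparameter, the model can flatten or sharpen the decay curve per dialogue turn, which is the "dynamically context-sensitive" behavior the paper's title refers to; a fuller version would learn separate attention for each speaker role.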


Shang-Yu Su, researcher in natural language processing.