Regularizing Dialogue Generation by Imitating Implicit Scenarios
Abstract
Human dialogues are scenario-based, and appropriate responses generally relate to the latent context knowledge entailed by the specific scenario. To enable responses that are more meaningful and context-specific, we propose to improve generative dialogue systems from the scenario perspective, where both the dialogue history and the future conversation are taken into account to implicitly reconstruct the scenario knowledge. More importantly, the conversation scenarios are further internalized through an imitation learning framework: the conventional dialogue model, which has no access to future conversations, is effectively regularized by transferring the scenario knowledge contained in hierarchical supervising signals from the scenario-based dialogue model, so that the future conversation is not required at inference time. Extensive evaluations show that our approach significantly outperforms state-of-the-art baselines on diversity and relevance, and expresses scenario-specific knowledge.
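As a rough illustration of the regularization idea (not the paper's actual implementation), the student objective can be sketched as a cross-entropy term on the reference response plus an imitation term toward the scenario-based teacher's token distribution. The function names and the weighting factor `alpha` are assumptions introduced here for clarity:

```python
import math

def kl_divergence(p, q):
    # KL(p || q) between two discrete probability distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def student_loss(student_probs, teacher_probs, target_idx, alpha=0.5):
    # Hypothetical sketch: the conventional (student) model fits the
    # reference token via cross-entropy while imitating the distribution
    # of the scenario-based teacher, which was trained with access to
    # future turns. `alpha` balances the two terms (an assumption).
    ce = -math.log(student_probs[target_idx])       # fit the gold response
    imitation = kl_divergence(teacher_probs, student_probs)  # match teacher
    return ce + alpha * imitation
```

At inference, only the student is used, so no future conversation is needed.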