dc.description.abstract | In recent years, Natural Language Processing (NLP) has made great progress. Where early systems could only perform word-by-word translation, modern models can process an entire article and understand the meaning of its sentences. The most common dialogue generation technique is "sequence to sequence," which generates the best response to a single user input. In reality, however, most human dialogues consist of multiple turns of questions and responses rather than a single question–response pair. To respond appropriately, a person attends not only to the last question but to the whole scenario, including previous turns of the conversation. In other words, dialogue generation can be more complete if it incorporates information from preceding utterances. In addition, responses should be customized: they should reflect the current dialogue theme and the characteristics of the questioner, so that the model produces different responses for different users and themes even when the questions are the same.
To achieve these goals, our research proposes a hybrid hierarchical mechanism to improve multi-turn dialogue generation. We further propose a method, based on the self-attention mechanism, to customize the generated responses. Our experiments show that this approach effectively improves dialogue generation. | en_US |