NCU Institutional Repository (中大機構典藏) - theses, past exam papers, journal articles, and research projects: Item 987654321/89827


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89827


    Title: GAN^2: Fuse IntraGAN with OuterGAN for Text Generation
    Authors: Chuang, Kai-Chih (莊凱智)
    Contributors: Department of Information Management (資訊管理學系)
    Keywords: Deep Learning; Generative Adversarial Network; Natural Language Generation
    Date: 2022-07-21
    Upload time: 2022-10-04 12:01:15 (UTC+8)
    Publisher: National Central University
    Abstract: Natural language generation (NLG) has flourished in recent research and has practical commercial applications, such as automatic text descriptions for images on social media and templates for news reports. NLG research therefore concentrates on improving the quality of generated text and on producing sequences that resemble a human writing style. However, NLG suffers from four issues: training instability, reward sparsity, mode collapse, and exposure bias. These issues degrade text quality and prevent models from accurately learning a writing style. We therefore propose GAN^2, a novel two-level model that combines IntraGAN and OuterGAN on top of the generative adversarial network (GAN) framework. IntraGAN serves as the generator of OuterGAN and uses beam search together with IntraGAN's discriminator to optimize the generated sequences. The sequences generated by IntraGAN are then passed to OuterGAN, where an improved comparative discriminator computes the reward; this strengthens the signal guiding generator updates and makes update information easier to propagate. The models are refined continuously through iterative adversarial training. In addition, we introduce a memory mechanism that stabilizes the training process and improves training efficiency. We evaluate the model on three datasets with three evaluation metrics. The results show that our model outperforms well-known state-of-the-art baselines with excellent generation quality, and our experiments confirm that each technique adopted in the architecture contributes to that quality. Finally, we discuss the influence of the model's parameters and identify the best configuration for improving the generated results.
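The abstract's two-level flow (an inner generator refined by beam search against an inner discriminator, whose output is then scored relatively by an outer comparative discriminator) can be illustrated with a toy sketch. This is not the thesis's implementation: `intra_discriminator` and `comparative_reward` below are illustrative stand-ins (repetition penalty and token-overlap proxy) for the learned networks, and the vocabulary and sequences are hypothetical.

```python
import random

random.seed(0)
VOCAB = ["the", "cat", "sat", "on", "mat"]

def intra_discriminator(seq):
    # Toy stand-in for IntraGAN's learned discriminator:
    # scores sequences higher when adjacent tokens differ.
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def beam_search(beam_width=2, length=4):
    # IntraGAN step: expand partial sequences and keep the top
    # `beam_width` candidates ranked by the inner discriminator.
    beams = [[]]
    for _ in range(length):
        candidates = [b + [w] for b in beams for w in VOCAB]
        candidates.sort(key=intra_discriminator, reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

def comparative_reward(generated, real):
    # OuterGAN step: a comparative discriminator scores the generated
    # sequence *relative to* a real one (token-overlap proxy here).
    overlap = len(set(generated) & set(real)) / len(set(real))
    return overlap

# One pass of the two-level pipeline on a hypothetical real sentence.
real_sentence = ["the", "cat", "sat", "on"]
generated = beam_search()
reward = comparative_reward(generated, real_sentence)
print(generated, round(reward, 2))
```

In the actual model this reward would feed back into generator updates over iterative adversarial training; the sketch only shows a single generate-and-score pass.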
    Appears in Collections: [Graduate Institute of Information Management] Master's and Doctoral Theses

    Files in This Item:

    File | Description | Size | Format | Views
    index.html | | 0Kb | HTML | 51 | View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

