Thesis 108423002: Detailed Record




Name: Wen-Han Yang (楊文瀚)    Department: Information Management
Thesis Title: Unsupervised Paraphrasing with Named Entity Constraints
(Chinese title: 以命名實體作為約束非監督式釋義生成)
Related Theses
★ An Empirical Study of Multi-Label Text Classification: Comparing Word Embeddings with Traditional Techniques
★ Network Protocol Correlation Analysis Based on Graph Neural Networks
★ Learning Shared Representations Across and Within Modalities
★ Hierarchical Classification and Regression with Feature Selection
★ Symptom-Based Sentiment Analysis of Patient-Authored Diaries
★ An Open-Domain Dialogue System Based on Attention Mechanisms
★ Applying Commonsense-Based BERT Models to Domain-Specific Tasks
★ Analyzing Text Sentiment Intensity Based on Differences in Social Media Users' Hardware Devices
★ On the Effectiveness of Machine Learning and Feature Engineering for Monitoring Anomalous Cryptocurrency Transactions
★ Optimal Maintenance-Time Alerts for Metro Switch Machines Using LSTM Networks and Machine Learning
★ Network Traffic Classification Based on Semi-Supervised Learning
★ ERP Log Analysis: A Case Study of Company A
★ Enterprise Information Security: An Exploratory Study of Network Packet Collection, Analysis, and Network Behavior
★ Applying Data Mining to Customer Relationship Management: A Case Study of Bank C's Digital Deposits
★ On the Usability and Efficiency of Face Image Generation and Augmentation
★ Synthetic Text Data Augmentation for Imbalanced Text Classification
Files: Full text viewable in the repository after 2026-07-26 (EndNote RIS and BibTeX citation formats available).
Abstract (Chinese) Paraphrase generation has long been one of the important tasks in natural language processing (NLP). The goal is to preserve a sentence's meaning while changing its syntactic structure. Approaches to the task can broadly be grouped into three settings: supervised, semi-supervised, and unsupervised learning. Supervised methods have already achieved notable results, performing well on all common evaluation metrics, whereas semi-supervised and unsupervised methods are still at the research stage, so comparatively few studies address them. For this reason, this study investigates unsupervised approaches.
Among paraphrase-generation methods, some studies investigate controllable generation, whose main purpose is to keep certain important words in the sentence unchanged so that its meaning is preserved. For example, in "Trump has a dog.", "Trump" and "dog" are words that cannot be changed; replacing "Trump" with "Hillary" would change the meaning of the whole sentence. Controllable generation can be achieved in several ways: some methods exploit syntactic structure, while others rely on auxiliary models. To study controllable generation in an unsupervised model, we modify the Transformer architecture by adding the notion of named entities (NEs), because prior work indicates that words carrying named entities are usually irreplaceable within a sentence. In our experiments we treat words with NE tags as irreplaceable. We therefore expect the model to focus on these words during training, and we add an NE-tag embedding at the input layer, combining it with each word's positional information.
From the experimental results, we propose a method to judge whether named entities are effectively preserved: we compute the recall of named entities to check whether NE-bearing words are correctly recovered. Our results show that this recall is better than the baseline model's. We also compare the baseline's main evaluation metric, iBLEU. iBLEU is an extension of BLEU, which measures how much of the target sentence's semantics the generated sentence retains; iBLEU is BLEU with a penalty mechanism. In our results the vast majority of iBLEU scores are better than the baseline's, which indirectly shows that named entities have a potential influence on the model.
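For reference, the iBLEU metric mentioned above follows Sun and Zhou (2012); a common formulation is the one below, where s is the generated paraphrase, r the reference, c the source sentence, and α weights similarity to the reference against a penalty for copying the source:

```latex
% iBLEU (Sun & Zhou, 2012): reward n-gram overlap with the reference
% while penalizing overlap with the source sentence itself.
\[
\mathrm{iBLEU}(s, r, c) \;=\; \alpha \cdot \mathrm{BLEU}(s, r) \;-\; (1 - \alpha) \cdot \mathrm{BLEU}(s, c)
\]
```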
Abstract (English) Paraphrase generation is one of the important tasks in natural language processing (NLP). The goal is to produce a sentence that retains the same meaning but has a different syntactic structure. Approaches to the task can be classified as supervised, semi-supervised, or unsupervised learning. Supervised learning has produced several promising results, achieving good performance on various metrics, while semi-supervised and unsupervised learning are still at the research stage and have received comparatively little attention. For this reason, this research explores unsupervised paraphrasing.
In addition, whether in supervised or unsupervised paraphrase generation, some researchers have explored controllable generation, whose main purpose is to preserve the important words in a sentence and thereby avoid changing its meaning. For example, in "Trump has a dog.", "Trump" and "dog" are words that cannot be changed: if "Trump" is replaced with "Hillary Clinton", the meaning of the entire sentence changes. There are several ways to control generation: some methods use syntactic structure, while others modify the model itself. In our research, we modified the structure of the Transformer model by introducing the concept of named entities (NEs), on the grounds that words carrying NEs are usually irreplaceable within a sentence. In this study, we treat words with NE tags as irreplaceable. We expect the model to learn these words in particular, so during training we combine an NE-tag embedding with the positional encoding and the input token embedding.
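As a concrete illustration of this input layer, the following is a minimal PyTorch sketch, not the thesis's actual code; the class name, vocabulary sizes, and dimensions are illustrative assumptions:

```python
import math
import torch
import torch.nn as nn

class NEAwareInput(nn.Module):
    """Sums token, positional, and NE-tag embeddings at the input layer.

    A sketch of the idea in the abstract; the hyperparameter values
    below (vocab_size, num_ne_tags, d_model) are illustrative only.
    """

    def __init__(self, vocab_size=32000, num_ne_tags=10, d_model=512, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        # One embedding per NE tag, e.g. O, PER, LOC, ORG, MISC, ...
        self.ne = nn.Embedding(num_ne_tags, d_model)
        # Fixed sinusoidal positional encoding (Vaswani et al., 2017).
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, token_ids, ne_tag_ids):
        # token_ids, ne_tag_ids: (batch, seq_len), aligned token by token.
        seq_len = token_ids.size(1)
        return self.tok(token_ids) + self.ne(ne_tag_ids) + self.pe[:seq_len]
```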
From the experimental results, we propose a method to judge whether entities are effectively retained: we calculate the recall of named entities. This recall score is better than that of the baseline model. We also compare iBLEU, the main evaluation metric of the baseline model. iBLEU is an extension of BLEU, which measures how much of the target sentence's semantics the generated sentence retains; iBLEU is BLEU with a penalty mechanism. In our results, most iBLEU scores are better than the baseline's, which shows that our NE constraints have a potential influence on the model.
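The NER-Recall idea can be sketched as follows; this is a minimal, assumed implementation in which extract_entities is a hypothetical hook that any NER tagger (e.g. spaCy) could fill:

```python
from typing import Callable, Iterable, List

def ner_recall(sources: Iterable[str],
               generated: Iterable[str],
               extract_entities: Callable[[str], List[str]]) -> float:
    """Fraction of source-side entity mentions recovered in the output.

    `extract_entities` is a hypothetical hook returning the entity
    strings found in a sentence; any NER tagger could supply it.
    """
    recovered, total = 0, 0
    for src, gen in zip(sources, generated):
        gold = extract_entities(src)
        total += len(gold)
        gen_lower = gen.lower()
        recovered += sum(1 for ent in gold if ent.lower() in gen_lower)
    return recovered / total if total else 0.0

# Usage with spaCy as the tagger (an assumed choice, not the thesis's):
#   import spacy
#   nlp = spacy.load("en_core_web_sm")
#   score = ner_recall(srcs, outs, lambda s: [e.text for e in nlp(s).ents])
```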
Keywords (Chinese) ★ paraphrase generation (釋義生成)
★ named entity recognition (命名實體辨識)
★ deep learning (深度學習)
★ autoencoder Transformer (自編碼變換器)
Keywords (English) ★ paraphrase generation
★ named entity recognition
★ deep learning
★ autoencoder transformer
Table of Contents
Abstract (Chinese) i
Abstract ii
Acknowledgement iii
Table of Contents iv
List of Figures v
List of Tables vi
1. Introduction 1
1.1. Overview 1
1.2. Motivation 2
1.3. Objectives 2
1.4. Thesis Organization 3
2. Related Work 4
2.1. Paraphrase Generation 4
2.2. Controllable Generation 5
2.3. Unsupervised Paraphrase Generation 6
2.4. Named Entity Recognition (NER) 10
2.5. NER Tag Embedding 11
2.5.1. Word2vec 11
2.5.2. GloVe 13
2.5.3. Bidirectional Encoder Representation from Transformers (BERT) 13
2.6. The Baselines of the Task 14
2.6.1. Transformers 14
2.6.2. Variational AutoEncoder (VAE) 15
3. Methodology 17
3.1. Overview 17
3.2. Flow Chart 18
3.3. Datasets 21
3.4. Experiment Setup 21
3.4.1. Preprocessing 21
3.4.2. NER Mechanism 22
3.4.3. Paraphrase Generation Model 23
3.5. Experiment Design 24
3.5.1. Experiment I: Different Ways of NE Constraints 24
3.5.2. Experiment II: Comparing the Impact of Different Word Embedding Models 25
3.6. Automatic Evaluation Metrics 25
3.6.1. BLEU & iBLEU 26
3.6.2. ROUGE 27
3.6.3. NER-Recall 27
4. Experiment Results 29
4.1. Experiment Objectives Analysis 29
4.1.1. Experiment I 29
4.1.2. Summary of Experiment I 32
4.1.3. Case Study 35
4.1.4. Experiment II 36
4.1.5. Summary of Experiment II 42
4.2. Summary of the experiment analysis 42
5. Conclusion 44
5.1. Summary 44
5.2. Limitations 44
5.3. Contributions 44
5.4. Future Work 45
References 46
References

Ba, J.L., Kiros, J.R., Hinton, G.E., 2016. Layer Normalization. arXiv:1607.06450 [cs, stat].

Bannard, C., Callison-Burch, C., 2005. Paraphrasing with bilingual parallel corpora, in: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics - ACL ’05. Presented at the the 43rd Annual Meeting, Association for Computational Linguistics, Ann Arbor, Michigan, pp. 597–604. https://doi.org/10.3115/1219840.1219914

Bos, J., Basile, V., Evang, K., Venhuizen, N., Bjerva, J., 2017. The Groningen Meaning Bank. pp. 463–496. https://doi.org/10.1007/978-94-024-0881-2_18

Bowman, S.R., Angeli, G., Potts, C., Manning, C.D., 2015. A large annotated corpus for learning natural language inference, in: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2015, Association for Computational Linguistics, Lisbon, Portugal, pp. 632–642. https://doi.org/10.18653/v1/D15-1075

Bowman, S.R., Vilnis, L., Vinyals, O., Dai, A., Jozefowicz, R., Bengio, S., 2016. Generating Sentences from a Continuous Space, in: Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Presented at the CoNLL 2016, Association for Computational Linguistics, Berlin, Germany, pp. 10–21. https://doi.org/10.18653/v1/K16-1002

Buck, C., Bulian, J., Ciaramita, M., Gajewski, W., Gesmundo, A., Houlsby, N., Wang, W., 2018. Ask the Right Questions: Active Question Reformulation with Reinforcement Learning. Presented at the International Conference on Learning Representations.

Callison-Burch, C., Osborne, M., Koehn, P., 2006. Re-evaluating the Role of Bleu in Machine Translation Research, in: 11th Conference of the European Chapter of the Association for Computational Linguistics. Presented at the EACL 2006, Association for Computational Linguistics, Trento, Italy.

Chen, M., Tang, Q., Wiseman, S., Gimpel, K., 2019. Controllable Paraphrase Generation with a Syntactic Exemplar, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2019, Association for Computational Linguistics, Florence, Italy, pp. 5972–5984. https://doi.org/10.18653/v1/P19-1599

Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y., 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Presented at the EMNLP 2014, Association for Computational Linguistics, Doha, Qatar, pp. 1724–1734. https://doi.org/10.3115/v1/D14-1179

Dadashov, E., Sakshuwong, S., Yu, K., 2017. Quora Question Duplication.

Devlin, J., Chang, M.-W., Lee, K., Toutanova, K., 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Presented at the NAACL-HLT 2019, Association for Computational Linguistics, Minneapolis, Minnesota, pp. 4171–4186. https://doi.org/10.18653/v1/N19-1423

Dolan, B., Quirk, C., Brockett, C., 2004. Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources, in: COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics. Presented at the COLING 2004, COLING, Geneva, Switzerland, pp. 350–356.

Dong, L., Mallinson, J., Reddy, S., Lapata, M., 2017. Learning to Paraphrase for Question Answering, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2017, Association for Computational Linguistics, Copenhagen, Denmark, pp. 875–886. https://doi.org/10.18653/v1/D17-1091

Fader, A., Zettlemoyer, L., Etzioni, O., 2013. Paraphrase-Driven Learning for Open Question Answering, in: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Presented at the ACL 2013, Association for Computational Linguistics, Sofia, Bulgaria, pp. 1608–1618.

Finch, A., Hwang, Y.-S., Sumita, E., 2005. Using Machine Translation Evaluation Techniques to Determine Sentence-level Semantic Equivalence, in: Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Presented at the IJCNLP 2005.

Fu, Y., Feng, Y., Cunningham, J.P., 2019. Paraphrase Generation with Latent Bag of Words, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.

Fu, Z., Tan, X., Peng, N., Zhao, D., Yan, R., 2018. Style Transfer in Text: Exploration and Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence 32.

Sutton, C., McCallum, A., 2007. An Introduction to Conditional Random Fields for Relational Learning, in: Getoor, L., Taskar, B. (Eds.), Introduction to Statistical Relational Learning. The MIT Press. https://doi.org/10.7551/mitpress/7432.003.0006

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y., 2014. Generative Adversarial Nets, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.

Goyal, T., Durrett, G., 2020. Neural Syntactic Preordering for Controlled Paraphrase Generation, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2020, Association for Computational Linguistics, Online, pp. 238–252. https://doi.org/10.18653/v1/2020.acl-main.22

Gupta, A., Agarwal, A., Singh, P., Rai, P., 2018. A Deep Generative Framework for Paraphrase Generation. Proceedings of the AAAI Conference on Artificial Intelligence 32.

Iyyer, M., Wieting, J., Gimpel, K., Zettlemoyer, L., 2018. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Presented at the NAACL-HLT 2018, Association for Computational Linguistics, New Orleans, Louisiana, pp. 1875–1885. https://doi.org/10.18653/v1/N18-1170

Joachims, T., 1998. Text categorization with Support Vector Machines: Learning with many relevant features, in: Nédellec, C., Rouveirol, C. (Eds.), Machine Learning: ECML-98, Lecture Notes in Computer Science. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 137–142. https://doi.org/10.1007/BFb0026683

Kingma, D.P., Mohamed, S., Jimenez Rezende, D., Welling, M., 2014. Semi-supervised Learning with Deep Generative Models, in: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (Eds.), Advances in Neural Information Processing Systems 27. Curran Associates, Inc., pp. 3581–3589.

Kingma, D.P., Welling, M., 2014. Auto-Encoding Variational Bayes. arXiv:1312.6114 [cs, stat].

Kumar, A., Ahuja, K., Vadapalli, R., Talukdar, P., 2020. Syntax-Guided Controlled Generation of Paraphrases. Transactions of the Association for Computational Linguistics 8, 329–345. https://doi.org/10.1162/tacl_a_00318

Lan, W., Qiu, S., He, H., Xu, W., 2017. A Continuously Growing Dataset of Sentential Paraphrases, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2017, Association for Computational Linguistics, Copenhagen, Denmark, pp. 1224–1234. https://doi.org/10.18653/v1/D17-1126

Lee, J., Yoon, W., Kim, Sungdong, Kim, D., Kim, Sunkyu, So, C.H., Kang, J., 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics btz682. https://doi.org/10.1093/bioinformatics/btz682

Li, Y., 2017. Deep Reinforcement Learning: An Overview. arXiv:1701.07274 [cs].

Li, Z., Jiang, X., Shang, L., Liu, Q., 2019. Decomposable Neural Paraphrase Generation, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2019, Association for Computational Linguistics, Florence, Italy, pp. 3403–3414. https://doi.org/10.18653/v1/P19-1332

Li, Z., Kiseleva, J., de Rijke, M., 2019. Dialogue Generation: From Imitation Learning to Inverse Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 6722–6729. https://doi.org/10.1609/aaai.v33i01.33016722

Lin, C.-Y., 2004. ROUGE: A Package for Automatic Evaluation of Summaries, in: Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, pp. 74–81.

Liu, X., Mou, L., Meng, F., Zhou, H., Zhou, J., Song, S., 2020. Unsupervised Paraphrasing by Simulated Annealing, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2020, Association for Computational Linguistics, Online, pp. 302–312. https://doi.org/10.18653/v1/2020.acl-main.28

Madnani, N., Dorr, B.J., 2010. Generating Phrasal and Sentential Paraphrases: A Survey of Data-Driven Methods. Computational Linguistics 36, 341–387. https://doi.org/10.1162/coli_a_00002

Mallinson, J., Sennrich, R., Lapata, M., 2017. Paraphrasing Revisited with Neural Machine Translation, in: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Presented at the EACL 2017, Association for Computational Linguistics, Valencia, Spain, pp. 881–893.

McKeown, K.R., 1983. Paraphrasing Questions Using Given and New Information. American Journal of Computational Linguistics 9(1), 1–10.

Meteer, M., Shaked, V., 1988. Strategies for Effective Paraphrasing, in: Coling Budapest 1988 Volume 2: International Conference on Computational Linguistics. Presented at the COLING 1988.

Miao, N., Zhou, H., Mou, L., Yan, R., Li, L., 2019. CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling. Proceedings of the AAAI Conference on Artificial Intelligence 33, 6834–6842. https://doi.org/10.1609/aaai.v33i01.33016834

Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J., 2013. Distributed Representations of Words and Phrases and their Compositionality, in: Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (Eds.), Advances in Neural Information Processing Systems 26. Curran Associates, Inc., pp. 3111–3119.

Nadeau, D., Sekine, S., 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes 30, 3–26. https://doi.org/10.1075/li.30.1.03nad

Och, F.J., Ney, H., 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation, in: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.3115/1073083.1073133

Pang, B., Knight, K., Marcu, D., 2003. Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences, in: Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Presented at the HLT-NAACL 2003, pp. 181–188.

Papineni, K., Roukos, S., Ward, T., Zhu, W.-J., 2002. Bleu: a Method for Automatic Evaluation of Machine Translation, in: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2002, Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pp. 311–318. https://doi.org/10.3115/1073083.1073135

Pennington, J., Socher, R., Manning, C., 2014. Glove: Global Vectors for Word Representation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Presented at the EMNLP 2014, Association for Computational Linguistics, Doha, Qatar, pp. 1532–1543. https://doi.org/10.3115/v1/D14-1162

Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L., 2018. Deep Contextualized Word Representations, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Presented at the Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Orleans, Louisiana, pp. 2227–2237. https://doi.org/10.18653/v1/N18-1202

Prakash, A., Hasan, S.A., Lee, K., Datla, V., Qadir, A., Liu, J., Farri, O., 2016. Neural Paraphrase Generation with Stacked Residual LSTM Networks, in: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Presented at the COLING 2016, The COLING 2016 Organizing Committee, Osaka, Japan, pp. 2923–2934.

Provilkov, I., Emelianenko, D., Voita, E., 2020. BPE-Dropout: Simple and Effective Subword Regularization, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2020, Association for Computational Linguistics, Online, pp. 1882–1892. https://doi.org/10.18653/v1/2020.acl-main.170

Quinlan, J.R., 1986. Induction of decision trees. Mach Learn 1, 81–106. https://doi.org/10.1007/BF00116251

Rabiner, L., Juang, B., 1986. An introduction to hidden Markov models. IEEE ASSP Magazine 3, 4–16. https://doi.org/10.1109/MASSP.1986.1165342

Rezende, D.J., Mohamed, S., Wierstra, D., 2014. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, in: International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 1278–1286.

Schweter, S., Akbik, A., 2020. FLERT: Document-Level Features for Named Entity Recognition. arXiv:2011.06993 [cs].

See, A., Liu, P.J., Manning, C.D., 2017. Get To The Point: Summarization with Pointer-Generator Networks, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Presented at the ACL 2017, Association for Computational Linguistics, Vancouver, Canada, pp. 1073–1083. https://doi.org/10.18653/v1/P17-1099

Hochreiter, S., Schmidhuber, J., 1997. Long Short-Term Memory. Neural Computation 9, 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735

Siddique, A.B., Oymak, S., Hristidis, V., 2020. Unsupervised Paraphrasing via Deep Reinforcement Learning. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 1800–1809. https://doi.org/10.1145/3394486.3403231

Su, Y., Yan, X., 2017. Cross-domain Semantic Parsing via Paraphrasing, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2017, Association for Computational Linguistics, Copenhagen, Denmark, pp. 1235–1246. https://doi.org/10.18653/v1/D17-1127

Sun, C., Yang, Z., 2019. Transfer Learning in Biomedical Named Entity Recognition: An Evaluation of BERT in the PharmaCoNER task, in: Proceedings of The 5th Workshop on BioNLP Open Shared Tasks. Association for Computational Linguistics, Hong Kong, China, pp. 100–104. https://doi.org/10.18653/v1/D19-5715

Sun, H., Zhou, M., 2012. Joint Learning of a Dual SMT System for Paraphrase Generation, in: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Presented at the ACL 2012, Association for Computational Linguistics, Jeju Island, Korea, pp. 38–42.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I., 2017. Attention is All you Need, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.

Wang, X., Jiang, Y., Bach, N., Wang, T., Huang, Z., Huang, F., Tu, K., 2021. Automated Concatenation of Embeddings for Structured Prediction, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online.

Wiseman, S., Shieber, S., Rush, A., 2018. Learning Neural Templates for Text Generation, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2018, Association for Computational Linguistics, Brussels, Belgium, pp. 3174–3187. https://doi.org/10.18653/v1/D18-1356

Yu, A.W., Dohan, D., Luong, M.-T., Zhao, R., Chen, K., Norouzi, M., Le, Q.V., 2018. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Presented at the International Conference on Learning Representations.

Zeng, D., Zhang, H., Xiang, L., Wang, J., Ji, G., 2019. User-Oriented Paraphrase Generation With Keywords Controlled Network. IEEE Access 7, 80542–80551. https://doi.org/10.1109/ACCESS.2019.2923057

Zhou, C., Neubig, G., 2017. Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Presented at the ACL 2017, Association for Computational Linguistics, Vancouver, Canada, pp. 310–320. https://doi.org/10.18653/v1/P17-1029
Advisor: Shih-Wen Ke (柯士文)    Date of Approval: 2021-08-26