Master's/Doctoral Thesis 109423024: Complete Metadata Record

DC field | Value | Language
dc.contributor | Department of Information Management (資訊管理學系) | zh_TW
dc.creator | 李佳穎 | zh_TW
dc.creator | Chia-Ying Li | en_US
dc.date.accessioned | 2022-07-16T07:39:07Z
dc.date.available | 2022-07-16T07:39:07Z
dc.date.issued | 2022
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=109423024
dc.contributor.department | Department of Information Management (資訊管理學系) | zh_TW
dc.description | National Central University (國立中央大學) | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | With the development of information technology, enterprise operations and services are gradually shifting to digital business models, and in the process they generate volumes of data that traditional manual methods can hardly handle; put to good use, this data can take an enterprise's operations to the next level. This study designs a system around a real enterprise's user problem feedback: historical data previously collected by the company and curated by its experts is applied to a deep learning model to build an automatic classification system for user problem reports. The system incorporates a selective mechanism architecture to strengthen the existing model's ability to filter textual meaning, further improving its performance on text classification and thereby reducing the archiving cost the enterprise would otherwise spend on similar data in the future. Validated on a real case, the model developed in this study (a selective mechanism architecture combined with BERT_base) improves on the original BERT_base by 4% in accuracy, 3% in precision, 4% in recall, and 4% in F1-score, confirming that the selective mechanism does improve BERT_base's performance on text classification. The experiments also show that, under an imbalanced data distribution, text augmentation still lets the model converge within fewer epochs and avoids overfitting as the number of epochs grows. Finally, the model is connected to a web-based information system, which is evaluated by in-house experts from the same domain as the data; the results indicate that most users hold a positive attitude toward the system. | zh_TW
dc.description.abstract | Nowadays, a growing number of enterprises and agencies use e-services in their daily operations, so far more digital data is generated than before. Dealing with this data can cost substantial human effort and time if it is still sorted manually, as it used to be. To address this problem, this study designs a system that classifies online user problems automatically by applying a real enterprise's historical data to a deep learning model. The study then improves the existing model's ability to filter textual meaning by combining a selective mechanism architecture with the model, which also improves its performance on the text classification task and reduces the cost for enterprises of handling similar data in the future. Experiments were designed to verify the proposed model: BERT_base combined with the selective mechanism obtains a 4% improvement in accuracy, 3% in precision, 4% in recall, and 4% in F1-score over plain BERT_base, verifying that the selective mechanism improves BERT_base's effectiveness on text classification. The study also verifies that applying text augmentation to the imbalanced dataset lets the model converge within fewer epochs and reduces overfitting as the number of epochs increases. Finally, the proposed model is connected to a web-based application, and the system is evaluated through survey feedback from company experts with background knowledge of the same domain as the dataset. The results show that most users have a positive attitude towards the system. | en_US
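
The abstracts describe the model only at a high level. As a point of reference, the sketch below shows one plausible way a selective gate can sit on top of BERT_base token representations before a classification head, assuming a PyTorch / Hugging Face Transformers setup; the class name SelectiveBertClassifier, the bert-base-chinese checkpoint, the [CLS]-based sentence state, and the mean pooling are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch: selective gate over BERT_base token states for text
# classification. All names and hyper-parameter choices are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel

class SelectiveBertClassifier(nn.Module):
    def __init__(self, num_labels: int, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Selective gate: each token state is combined with the sentence-level
        # state and squashed into an element-wise filter in (0, 1).
        self.gate_token = nn.Linear(hidden, hidden)
        self.gate_sentence = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens = outputs.last_hidden_state              # (B, L, H)
        sentence = tokens[:, 0]                         # [CLS] as sentence state
        gate = torch.sigmoid(
            self.gate_token(tokens)
            + self.gate_sentence(sentence).unsqueeze(1)
        )                                               # (B, L, H)
        filtered = tokens * gate                        # down-weight irrelevant tokens
        # Mean-pool the gated token states over non-padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (filtered * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.classifier(pooled)                  # (B, num_labels)
```

In this reading, the gated states replace the raw token states before pooling, which is one concrete interpretation of "filtering textual meaning": tokens judged irrelevant to the sentence-level representation are suppressed before classification.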
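
The abstracts likewise mention text augmentation on an imbalanced dataset without naming the operations used. The following is an assumed illustration of one common approach (random swap and random deletion, in the spirit of EDA) that upsamples each minority class to the majority class size; it is not the thesis's reported method.

```python
# Hypothetical illustration: upsample minority classes with simple token-level
# augmentation (random swap / random deletion). Operations are assumptions.
import random
from collections import Counter

def random_swap(tokens, n=1):
    tokens = tokens[:]
    for _ in range(n):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    kept = [t for t in tokens if random.random() > p]
    return kept or tokens  # never return an empty sample

def augment_to_balance(samples):
    """samples: list of (token_list, label); returns a class-balanced copy."""
    counts = Counter(label for _, label in samples)
    target = max(counts.values())
    augmented = list(samples)
    for label, count in counts.items():
        pool = [toks for toks, lab in samples if lab == label]
        for _ in range(target - count):
            op = random.choice([random_swap, random_deletion])
            augmented.append((op(random.choice(pool)), label))
    return augmented
```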
dc.subject | Bidirectional Encoder Representations from Transformers | zh_TW
dc.subject | Natural Language Processing (自然語言處理) | zh_TW
dc.subject | Text classification (文本分類) | zh_TW
dc.subject | Selective mechanism | zh_TW
dc.subject | Text augmentation | zh_TW
dc.subject | BERT (Bidirectional Encoder Representations from Transformers) | en_US
dc.subject | Natural Language Processing (NLP) | en_US
dc.subject | Text classification | en_US
dc.subject | Selective mechanism | en_US
dc.subject | Text augmentation | en_US
dc.title | Applying Selective Mechanism and BERT to Assist Enterprises in Automatically Classifying Consumer Problem Feedback | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Applying BERT and Selective Mechanism to Assist Enterprise in Online User-Problem Classification | en_US
dc.type | Master's/doctoral thesis (博碩士論文) | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
