

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89794


Title: Disambiguate clinical abbreviation by one-to-all classification
    Authors: 陳重諺;Chen, Chong-Yan
Contributors: Department of Information Management
Keywords: abbreviation expansion; text mining; word sense disambiguation
    Date: 2022-07-13
    Issue Date: 2022-10-04 11:59:55 (UTC+8)
Publisher: National Central University
    Abstract: With the growth of artificial intelligence, more and more researchers have brought machine learning to the medical field, and natural language processing is among its most active topics. Text-mining models support applications such as computer-assisted diagnosis, prognosis tracking, and medical service chatbots.
    However, the clinical texts these studies depend on contain large numbers of ambiguous abbreviations. Unless those abbreviations are first disambiguated to their intended senses, downstream applications of the text are limited. This study therefore focuses on expanding abbreviations in clinical text.
    Previous work resolved abbreviations by training a separate classifier for each term, which indirectly increases the complexity of modifying, maintaining, and even using the resulting system. We instead share a single classifier across all terms, building a more generalizable architecture and algorithm on top of pre-trained BERT, with the aim of improving the model's usability in clinical settings.
    The proposed simplified architecture reduces deployment complexity, raises accuracy by roughly 1 to 3 percentage points over the traditional multi-model approach, offers greater flexibility and maintainability, and avoids the retraining that the traditional architecture requires.
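    To make the one-to-all idea concrete, the following is a minimal sketch, not the thesis's actual code, of a single abbreviation-sense classifier shared across all terms and built on a pre-trained BERT encoder. It assumes Hugging Face transformers with PyTorch; the model name, the toy sense inventory, and the [abbr] marking scheme are illustrative assumptions.

        # A minimal sketch of a "one-to-all" abbreviation-sense classifier:
        # one shared softmax over the union of every abbreviation's candidate
        # senses, instead of one classifier per term.
        import torch
        from torch import nn
        from transformers import AutoModel, AutoTokenizer

        MODEL_NAME = "bert-base-uncased"          # illustrative choice
        SENSES = ["atrial fibrillation",          # e.g. expansions of "AF"
                  "amniotic fluid",
                  "mitral stenosis",              # e.g. expansions of "MS"
                  "multiple sclerosis"]           # one label space for all terms

        class OneToAllDisambiguator(nn.Module):
            """A single classifier shared by every abbreviation: the target
            abbreviation is marked in the input text and the [CLS]
            representation is mapped onto the shared sense space."""
            def __init__(self, num_senses: int):
                super().__init__()
                self.encoder = AutoModel.from_pretrained(MODEL_NAME)
                self.head = nn.Linear(self.encoder.config.hidden_size, num_senses)

            def forward(self, input_ids, attention_mask):
                out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
                cls = out.last_hidden_state[:, 0]   # [CLS] token embedding
                return self.head(cls)               # logits over all senses

        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        model = OneToAllDisambiguator(num_senses=len(SENSES))

        # Mark the target abbreviation so the model knows which token to resolve.
        text = "Patient with chronic [abbr] AF [/abbr] on warfarin."
        batch = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(batch["input_ids"], batch["attention_mask"])
        print(SENSES[logits.argmax(dim=-1).item()])  # head is untrained here, so
                                                     # the predicted sense is arbitrary

    The maintainability argument of the abstract is visible in this shape: supporting a new abbreviation means extending one shared label space and dataset, rather than training and deploying yet another per-term model.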
    Appears in Collections:[Graduate Institute of Information Management] Electronic Thesis & Dissertation


    All items in NCUIR are protected by copyright, with all rights reserved.
