NCU Institutional Repository (中大機構典藏): Item 987654321/9323


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/9323


    Title: Chinese Word Segmentation Using a Specialized Hidden Markov Model (基於特製隱藏式馬可夫模型之中文斷詞研究)
    Authors: Qian-Xiang Lin (林千翔)
    Contributors: Graduate Institute of Computer Science and Information Engineering
    Date: 2004-07-10
    Issue Date: 2009-09-22 11:45:25 (UTC+8)
    Publisher: National Central University Library (國立中央大學圖書館)
    Abstract: Chinese word segmentation is a fundamental and important preprocessing step in Chinese natural language processing. Although the problem has been studied for decades and many segmentation algorithms have been proposed, it remains unsolved and continues to attract attention. Recent systems tend to rely on statistical machine-learning methods such as the hidden Markov model (HMM), which has been applied successfully in related fields such as POS tagging and shallow parsing. A standard HMM, however, reaches an F-measure of only about 80% on Chinese word segmentation, so most studies turn to external resources or combine the HMM with other machine-learning algorithms. The goal of this work is to improve the HMM's accuracy with the simplest possible method and without any external resources. We apply the concept of specialization: information about the two main difficulties of Chinese segmentation, segmentation ambiguity and unknown words, is brought into the HMM, without modifying the model's training or testing procedures, through two specialization stages that extend first the observation symbols and then the state symbols. In the first stage, we combine the maximum matching (longest-word-first) heuristic with a masking method so that ambiguity and unknown-word information is encoded in the observations, giving the model more segmentation evidence to learn from; experiments show that this extension of the observation symbols raises the F-measure of the resulting M-HMM from 0.812 to 0.953. In the second stage, we apply lexicalization, adding new state symbols for observation symbols that are highly frequent or frequently mistagged; experiments confirm that this further improves performance, raising the F-measure of the Lexicalized M-HMM from 0.953 to 0.963. (Minimal sketches of both stages are given below, after the collection information.)
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation
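    To make the first specialization stage concrete, here is a minimal Python sketch. It assumes a BMES-style character tag set, a toy dictionary, and a maximum word length of four; all three are illustrative assumptions, and the thesis's masking method for unknown words is not reproduced. Forward maximum matching (longest-word-first) segments the sentence, and each character is then paired with the tag the heuristic proposes; these (character, heuristic tag) pairs are the extended observation symbols the abstract describes.

```python
# Minimal sketch of stage-one specialization: extend each HMM observation
# from a bare character to (character, tag proposed by maximum matching).
# DICTIONARY, the BMES tag set, and MAX_WORD_LEN are illustrative
# assumptions, not the thesis's actual resources; the masking method for
# unknown words is omitted.

DICTIONARY = {"中文", "斷詞", "自然", "語言", "處理", "自然語言"}
MAX_WORD_LEN = 4

def maximum_matching(sentence):
    """Forward longest-word-first segmentation against DICTIONARY."""
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(MAX_WORD_LEN, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in DICTIONARY:
                words.append(candidate)
                i += length
                break
    return words

def bmes_tags(words):
    """Convert a word sequence to per-character B/M/E/S tags."""
    tags = []
    for w in words:
        tags.extend(["S"] if len(w) == 1 else ["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def extended_observations(sentence):
    """Pair each character with the heuristic's tag: the enlarged observation symbol."""
    return list(zip(sentence, bmes_tags(maximum_matching(sentence))))

print(extended_observations("中文斷詞處理"))
# [('中', 'B'), ('文', 'E'), ('斷', 'B'), ('詞', 'E'), ('處', 'B'), ('理', 'E')]
```

    Note that only the observation alphabet changes; the HMM trains and decodes exactly as before, matching the abstract's claim that the model's training and testing procedures are untouched.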

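    A matching sketch of the second stage, lexicalization, follows. Since the abstract does not spell out the selection rule, this assumes the simplest reading: the most frequent observation symbols in the training data each get character-specific copies of the generic states (frequently mistagged symbols could be chosen the same way from a development set).

```python
from collections import Counter

# Minimal sketch of stage-two specialization (lexicalization): observation
# symbols that are very frequent (or often mistagged) in training data have
# each generic state (B/M/E/S) split into a character-specific state such
# as "S_的". The top_k threshold and the selection rule are assumptions.

def select_lexicalized(training_pairs, top_k=10):
    """Pick the top_k most frequent characters to lexicalize."""
    counts = Counter(char for char, _tag in training_pairs)
    return {char for char, _ in counts.most_common(top_k)}

def lexicalize_states(training_pairs, lexicalized_chars):
    """Relabel (char, tag) pairs so selected chars get their own states."""
    return [(char, f"{tag}_{char}" if char in lexicalized_chars else tag)
            for char, tag in training_pairs]

# Toy training data: (character, gold BMES tag) pairs.
pairs = [("的", "S"), ("中", "B"), ("文", "E"), ("的", "S")]
print(lexicalize_states(pairs, select_lexicalized(pairs, top_k=1)))
# [('的', 'S_的'), ('中', 'B'), ('文', 'E'), ('的', 'S_的')]
```

    Enlarging the state set this way lets the model learn transition and emission probabilities specific to the troublesome characters while, again, leaving the training and decoding algorithms unchanged.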

    All items in NCUIR are protected by copyright, with all rights reserved.

