Chinese word segmentation is a fundamental and essential preprocessing step for Chinese natural language processing. Although the problem has been studied for decades and many segmentation algorithms have been proposed, it remains an active research topic and continues to attract attention. Recent systems tend to adopt statistical machine learning approaches, such as the hidden Markov model (HMM), which has been applied successfully to many tasks, e.g., POS tagging and shallow parsing. A standard HMM, however, achieves an F-measure of only about 80% on Chinese word segmentation, so many studies rely on external resources or combine the HMM with other machine learning algorithms. As is widely recognized, segmentation ambiguity and unknown words are the two main difficulties in Chinese word segmentation.

The goal of this work is to improve the HMM's accuracy by the simplest possible means and without any external resources. We apply the concept of specialization to bring ambiguity and unknown-word information into the model: leaving the training and decoding procedures entirely unchanged, a two-stage specialization first extends the observation symbols and then the state symbols, which substantially improves segmentation accuracy. In the first stage, we combine the maximum matching (longest word first) heuristic with a masking method to encode segmentation ambiguity and unknown-word information, giving the model richer segmentation cues to learn from. Experiments show that even this simplest heuristic helps considerably: by extending the observation symbols, the resulting M-HMM raises the F-measure from 0.812 to 0.953. In the second stage, we apply lexicalization, adding new state symbols for high-frequency and frequently mistagged observation symbols. Experimental results confirm that this refinement further improves the system: the lexicalized M-HMM raises the F-measure from 0.953 to 0.963.
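To make the baseline concrete, below is a minimal sketch of character-based HMM segmentation with B/M/E/S states (begin, middle, end of a word, and single-character word) and Viterbi decoding. The toy log-probabilities (`start_p`, `trans_p`, the empty `emit_p`, and the -20.0 floor for unseen events) are illustrative assumptions, not parameters estimated in the thesis.

```python
# Minimal character-based HMM segmenter with Viterbi decoding (a sketch,
# assuming toy log-probabilities; a real model estimates them from a corpus).
import math

STATES = ["B", "M", "E", "S"]  # begin / middle / end of word, single char

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most probable B/M/E/S tag sequence for the characters."""
    V = [{s: start_p[s] + emit_p[s].get(obs[0], -20.0) for s in STATES}]
    path = {s: [s] for s in STATES}
    for ch in obs[1:]:
        V.append({})
        new_path = {}
        for s in STATES:
            # Pick the best previous state for each current state.
            prob, prev = max(
                (V[-2][p] + trans_p[p].get(s, -20.0) + emit_p[s].get(ch, -20.0), p)
                for p in STATES
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(STATES, key=lambda s: V[-1][s])
    return path[best]

def tags_to_words(chars, tags):
    """Cut the character sequence wherever a word ends (tag E or S)."""
    words, word = [], ""
    for ch, tag in zip(chars, tags):
        word += ch
        if tag in ("E", "S"):
            words.append(word)
            word = ""
    if word:
        words.append(word)
    return words

# Toy parameters (assumed for illustration only).
start_p = {"B": math.log(0.6), "S": math.log(0.4), "M": -20.0, "E": -20.0}
trans_p = {
    "B": {"M": math.log(0.3), "E": math.log(0.7)},
    "M": {"M": math.log(0.3), "E": math.log(0.7)},
    "E": {"B": math.log(0.6), "S": math.log(0.4)},
    "S": {"B": math.log(0.6), "S": math.log(0.4)},
}
emit_p = {s: {} for s in STATES}  # back off to -20.0 for unseen characters

chars = "天氣很好"
print(tags_to_words(chars, viterbi(chars, start_p, trans_p, emit_p)))
# With these toy transitions the decoder prefers two-character words:
# ['天氣', '很好']
```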
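The longest-word-first heuristic combined in the first specialization stage can be sketched as follows; the toy lexicon and the maximum word length of 4 are assumptions for illustration, not the dictionary used in the experiments.

```python
# A sketch of maximum matching ("longest word first") segmentation,
# assuming a toy lexicon; the thesis's actual dictionary is not shown here.

def maximum_match(sentence, lexicon, max_len=4):
    """Greedily take the longest dictionary word at each position."""
    words, i = [], 0
    while i < len(sentence):
        # Try the longest candidate first, shrinking until a match is found;
        # a single character always matches, so the loop always advances.
        for size in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + size]
            if size == 1 or candidate in lexicon:
                words.append(candidate)
                i += size
                break
    return words

lexicon = {"自然", "語言", "自然語言", "處理"}   # toy dictionary (assumed)
print(maximum_match("自然語言處理", lexicon))   # ['自然語言', '處理']
```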
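One plausible reading of "extending the observation symbols" is to pair each character with the B/M/E/S tag that maximum matching assigns to it, so the HMM can learn when to trust or override the heuristic's guess. The encoding below is an assumption for illustration; the thesis's exact symbol design, including how the masking method marks unknown words, may differ.

```python
# A sketch of first-stage observation extension: each observation becomes a
# (character, heuristic-tag) pair. The pairing scheme is assumed, not the
# thesis's exact encoding.

def mm_tags(words):
    """Convert a maximum-matching segmentation into per-character BMES tags."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

words = ["自然語言", "處理"]   # e.g. the output of the longest-match sketch above
chars = "".join(words)
print(list(zip(chars, mm_tags(words))))
# [('自', 'B'), ('然', 'M'), ('語', 'M'), ('言', 'E'), ('處', 'B'), ('理', 'E')]
```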
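Likewise, a hedged sketch of the second-stage lexicalization: for a few selected high-frequency or frequently mistagged characters, each B/M/E/S state is split into a character-specific variant, so the model keeps separate transition statistics around those characters. The selection set and the state-naming scheme here are assumptions for illustration.

```python
# A sketch of second-stage lexicalization: split states for chosen characters.
# Which characters are chosen, and how specialized states are named, are
# assumptions; the thesis selects them by frequency and tagging-error counts.

LEXICALIZED = {"的", "了"}  # characters picked for specialization (assumed)

def lexicalize_state(char, tag):
    """Map a (character, tag) pair to a possibly character-specific state."""
    return f"{tag}_{char}" if char in LEXICALIZED else tag

chars = ["他", "的", "書"]
tags  = ["S", "S", "S"]
print([lexicalize_state(c, t) for c, t in zip(chars, tags)])
# ['S', 'S_的', 'S']
```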