Master's/Doctoral Thesis 110522158 — Complete Metadata Record

DC Field: Value (Language)
dc.contributor: Department of Computer Science and Information Engineering (zh_TW)
dc.creator: 孫潤德 (zh_TW)
dc.creator: Ren-Der Sun (en_US)
dc.date.accessioned: 2023-09-05T07:39:07Z
dc.date.available: 2023-09-05T07:39:07Z
dc.date.issued: 2023
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=110522158
dc.contributor.department: Department of Computer Science and Information Engineering (zh_TW)
dc.description: National Central University (zh_TW)
dc.description: National Central University (en_US)
dc.description.abstract: Legal Judgment Prediction (LJP) aims to predict judgment results (such as the applicable article, charge, and penalty) based on the criminal facts of a case. Most prior research in this field uses as model input the criminal facts recorded in court verdicts. However, the facts recorded in each verdict are in practice extensions of the content of the indictment. Moreover, most previous studies aimed to serve judges as an auxiliary tool in their work, whereas in practice more demand arises after a prosecutor files a case: will the case be dismissed or not accepted by the judge? If the defendant is not punished, what is the reason? If punished, is the penalty imprisonment or a fine, and which article and charge were violated? In this study, we therefore define three novel LJP tasks to assist prosecutors: prosecution outcome prediction (LJP#1), criminal fine prediction (LJP#2), and criminal imprisonment prediction (LJP#3). This work is built on a multi-task learning architecture in which the subtasks of each task are interdependent; that is, the prediction of one subtask also influences the predictions of the others. We apply different topologies over the subtasks, such as IMN (Iterative Message Passing Network) and TopJudge, as well as a topology proposed in this thesis that combines IMN and TopJudge, together with different language models such as Word2Vec, BERT, and Lawformer, and compare how the subtask topologies and language models affect LJP performance. In addition, due to the huge number of parameters in large language models, the cost of full-tuning for each LJP task becomes increasingly expensive. To solve this problem, we adopt LoRA (Low-Rank Adaptation), a Parameter-Efficient Fine-Tuning (PEFT) technique, to reduce the number of trained parameters and save computational cost and training time. Experiments show that fine-tuning with LoRA not only reduces training time (by 45\%) but even improves performance on some LJP tasks (by 2.5\% Macro F1). (zh_TW)
dc.description.abstract: Legal Judgment Prediction (LJP) aims to predict judgment results (such as the article, charge, and penalty) based on the criminal facts of a case. Most previous research in this field was based on criminal fact statements from court verdicts. However, each verdict is actually based on the content of the indictment. For prosecutors, will a case be dismissed or processed? If the case is accepted, is the penalty a jail sentence or a fine? Which charge and article were violated? In this study, we therefore define three novel LJP tasks for prosecutors: prosecution outcome prediction (LJP#1), fine prediction (LJP#2), and imprisonment prediction (LJP#3). Due to the huge number of parameters in a large language model, the cost of full-tuning for each LJP task will become increasingly expensive. To solve this problem, we adopt LoRA (Low-Rank Adaptation), a Parameter-Efficient Fine-Tuning (PEFT) technique, to reduce the number of tuned parameters and save computational cost and time. The experiments show that using LoRA for fine-tuning not only reduces training time (by 45\%) but also brings a performance improvement (2.5\% F1) on some LJP tasks. (en_US)
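The LoRA idea summarized in the abstracts — freezing the pretrained weights and training only a low-rank additive update — can be sketched minimally as follows. This is an illustrative sketch with made-up layer dimensions, not the thesis's actual implementation; the names `W`, `A`, `B`, `alpha`, and `rank` are assumptions for illustration.

```python
import numpy as np

# Minimal LoRA (Low-Rank Adaptation) sketch for a single frozen linear layer.
# Dimensions are illustrative only; real LLM layers are much larger.
d_in, d_out, rank = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight (not trained)
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable; zero init => no change at start
alpha = 16.0                                  # LoRA scaling hyperparameter

def lora_forward(x):
    """y = W x + (alpha / rank) * B (A x); only A and B are updated in training."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

# Why this saves cost: only A and B are tuned instead of all of W.
full_params = W.size               # 589,824 parameters under full-tuning
lora_params = A.size + B.size      # 12,288 parameters under LoRA (~2% of full)
print(full_params, lora_params)

x = rng.standard_normal(d_in)
# With B initialized to zero, the LoRA layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because `B` starts at zero, fine-tuning begins from exactly the pretrained model's behavior, and the per-task trainable state is only the small `A`/`B` pair — which is what makes maintaining one adapter per LJP task cheap.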
dc.subject: Criminal Judgment Prediction (zh_TW)
dc.subject: Large Language Model (zh_TW)
dc.subject: LoRA (zh_TW)
dc.subject: PEFT (zh_TW)
dc.subject: Legal Judgement Prediction (en_US)
dc.subject: Large Language Model (en_US)
dc.subject: LoRA (en_US)
dc.subject: PEFT (en_US)
dc.title: A Study on Judgment Prediction Assistance in Criminal Practice (zh_TW)
dc.language.iso: zh-TW (zh-TW)
dc.title: On the practical legal judgement prediction from prosecutor indictments and court verdicts (en_US)
dc.type: Master's/doctoral thesis (zh_TW)
dc.type: thesis (en_US)
dc.publisher: National Central University (en_US)
