Graduate Thesis 108826014: Detailed Record




Name: Kai-Tsun Chan (詹凱淳)   Graduate Institute: Institute of Systems Biology and Bioinformatics
Thesis Title: Development of DQN models to improve academic performance of college students - an exercise for deep reinforcement learning in precision healthcare
(深度強化學習用於精準醫療-以DQN選課模型的開發來提高大學生成績為例子)
Related theses
★ Development of a non-restriction-enzyme method for genome-wide analysis of regulatory factors
★ A study of Chinese herbal prescriptions for common complex diseases using the National Health Insurance Research Database
★ Cases in which subjectivity affects healing and a discussion of the importance of subjectivity in medicine
★ Differences in DNA methylation networks between schizophrenia patients and normal controls
★ Network analysis of sperm DNA methylation in bipolar disorder patients
★ Cloud-R: a biostatistics web application based on R and cloud technology
★ The relationship between the properties of Chinese herbal medicines and their phylogenetic tree
★ Analysis of human normal-tissue-specific genes using hierarchical clustering and different classification methods
★ Analysis of DNase I and histone modifications from the ENCODE project
★ Hair detection and removal in skin mole images
★ Chinese medicine cancer prescriptions consist mostly of remedies for abscesses and ulcers, harmonizing formulas, and cold formulas, and their composition changes as the temperature drops
★ Application and implementation of principal component analysis and cluster analysis in the pre-processing of DNA microarray data
★ Identification of environmental and socioeconomic variables related to Chinese medicine prescriptions
★ Identification and analysis of networks of socioeconomic variables related to Chinese medicine prescriptions
★ Development of a CNN model to predict student dropout: an exercise in building AI models to detect SNPs from NGS short-read data
★ Deep Q-network learning for sepsis treatment in the intensive care unit
Files: full text viewable in the thesis system after 2031-08-16
Abstract (Chinese) Artificial intelligence has developed rapidly in recent years, and PyTorch is one of the major deep reinforcement learning frameworks: with the reinforcement learning algorithms in its library, a user can quickly build a model, train it, and use it to predict a target of interest. In short, the idea is to learn about future outcomes from past data. We believe this technology can also be of practical help in a school system. This thesis describes how DQN techniques are applied to student grade data and then used to build a model that, based on the compulsory courses and grades of the first semester, selects elective courses for the following semesters. The goal is to pick, from the many electives offered by each department, the courses in which students are likely to obtain high grades, so that the DQN model gives students more information when choosing courses.
The data were first pre-processed with R: the course codes and grades of all undergraduate students were arranged semester by semester, and the department with the largest number of students was chosen as the reference training target for the model. Python and the DQN algorithm were then used to train the model, and its parameters were tuned step by step to find the best "DQN course selection" model. Although the target model was successfully built, it only predicts the future from past data and its feasibility has not been checked in an actual simulation, so considerably more work is needed to show whether it can be used for real course selection.
Finally, the paper Deep Reinforcement Learning and Simulation as a Path Toward Precision Medicine is cited; it uses simulation to verify the practical feasibility of deep reinforcement learning and applies it to precision medicine. That work shows that the approach can not only explore new treatment strategies beyond clinical practice and the available data, but also, over time, capture differences between patients and the inherent randomness of disease progression within a single patient.
Abstract (English) In recent years, artificial intelligence has developed rapidly, and PyTorch is one of the important deep reinforcement learning frameworks: users can take the reinforcement learning algorithms in its library, build a model, train it, and finally use it to predict a target of interest. In simple terms, the concept is to infer future outcomes from past data. We believe that applying this technology to a school system can provide real benefits. This thesis describes how DQN techniques are applied to student grade data in order to build a model that, based on the compulsory courses and grades of the first semester, selects elective courses for the following semesters. The goal is to pick, from the many electives offered by each department, the courses in which students can easily obtain high grades; the purpose of this DQN model is to give students more information when selecting courses.
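The abstract does not spell out the model architecture, so the following is only a minimal PyTorch sketch of the kind of Q-network described above. The state is assumed to be a vector of first-semester compulsory-course grades and each action is one candidate elective; all names, layer sizes, and course counts here are hypothetical and not taken from the thesis.

import torch
import torch.nn as nn

class CourseQNetwork(nn.Module):
    # Maps a vector of first-semester grades to one Q-value per candidate elective.
    def __init__(self, n_compulsory, n_electives, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_compulsory, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_electives),  # one Q-value per elective course
        )

    def forward(self, grades):
        return self.net(grades)

# Hypothetical sizes: 8 compulsory-course grades, 20 candidate electives.
q_net = CourseQNetwork(n_compulsory=8, n_electives=20)
state = torch.rand(1, 8)                  # grades normalized to [0, 1]
recommended = q_net(state).argmax(dim=1)  # greedy elective recommendation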
The data were first pre-processed with R: the course codes and grades of all undergraduate students were arranged semester by semester, and the department with the largest number of students was selected as the reference training target for the model. Python and the DQN algorithm were then used to train the model, and the model parameters were tuned step by step to find the best "DQN course selection" model. Although the target model was successfully established, it only predicts the future from past data and has not been validated in an actual simulation, so substantial additional work is needed to show whether it can be used for real course selection.
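The training procedure is only summarized above, so the sketch below illustrates a standard DQN update (experience replay, target network, epsilon-greedy exploration) of the kind the text refers to. The network, replay-buffer contents, and hyperparameters are hypothetical placeholders rather than the thesis's actual settings, and a plain nn.Sequential is used so the sketch stands on its own.

import random
from collections import deque

import torch
import torch.nn as nn

n_grades, n_electives = 8, 20   # hypothetical problem sizes
q_net = nn.Sequential(nn.Linear(n_grades, 64), nn.ReLU(), nn.Linear(64, n_electives))
target_net = nn.Sequential(nn.Linear(n_grades, 64), nn.ReLU(), nn.Linear(64, n_electives))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10000)    # stores (state, action, reward, next_state) tuples
gamma, epsilon, batch_size = 0.99, 0.1, 32

def choose_elective(state):
    # Epsilon-greedy: occasionally explore a random elective instead of the greedy one.
    if random.random() < epsilon:
        return random.randrange(n_electives)
    with torch.no_grad():
        return q_net(state).argmax().item()

def dqn_update():
    # One gradient step on a minibatch sampled from the replay buffer.
    if len(replay) < batch_size:
        return
    states, actions, rewards, next_states = zip(*random.sample(replay, batch_size))
    states = torch.stack(states)
    next_states = torch.stack(next_states)
    actions = torch.tensor(actions).unsqueeze(1)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    # Bellman target: grade-based reward plus discounted best Q-value of the next state.
    with torch.no_grad():
        target = rewards + gamma * target_net(next_states).max(dim=1).values
    predicted = q_net(states).gather(1, actions).squeeze(1)
    loss = nn.functional.smooth_l1_loss(predicted, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()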
Finally, the paper Deep Reinforcement Learning and Simulation as a Path Toward Precision Medicine is cited; it uses simulation to verify the practical feasibility of deep reinforcement learning and applies it to precision medicine. Its experiments show that the approach can not only explore new treatment strategies beyond clinical practice and the available data, but also, over time, capture the differences between patients and the inherent randomness of disease progression within a single patient.
Keywords ★ neural network
★ course selection prediction model
★ deep reinforcement learning
Table of Contents  Abstract (Chinese) i
Abstract (English) ii
Table of Contents iv
List of Figures vi
1. Introduction 1
1-1 Artificial Intelligence 1
1-2 Machine Learning 1
1-2-1 Types of Machine Learning 2
1-3 Deep Learning 3
1-3-1 Deep Neural Network 4
1-3-2 Loss Function 5
1-4 Convolutional Neural Network 5
1-5 Reinforcement Learning 8
1-5-1 Reward Function 9
1-5-2 Q-Learning 11
1-6 Deep Q-Networks 12
1-7 Analysis of Student Grade Data 16
1-8 Research Motivation 17
2. Research Content and Methods 19
2-1 Data Pre-processing 19
2-2 PyTorch 21
2-2-1 Installing PyTorch 22
2-3 Reinforcement Learning Model 22
2-3-1 Building the Model 23
2-3-2 Training the Model 25
3. Results 26
3-1 RL Model Training Results 27
3-2 RL Model Loss Function 29
4. Conclusion 32
5. Future Work 33
References 36
Appendix 1: R code for data pre-processing 37
Appendix 2: PyTorch code for the model 40
References 1. Beyond the hype: AI, machine learning and deep learning explained. Available from: https://www.smart-digital.de/en/beyond-the-hype-ai-machine-learning-and-deep-learning-explained/
2. Types of machine learning algorithms. Available from: https://www.7wdata.be/himss/types-of-machine-learning-algorithms/
3. Gill, J.K. Automatic Log Analysis using Deep Learning and AI. 2018 [cited 2020 July 1]. Available from: https://www.xenonstack.com/blog/log-analytics-deep-machine-learning/
4. Miao, S., Wang, Z.J., and Liao, R. A CNN regression approach for real-time 2D/3D registration. IEEE Transactions on Medical Imaging, 2016, 35(5): 1352-1363.
5. JamesLearningNote. Available from: https://medium.com/jameslearningnote
6. Reinforcement Learning: What is, Algorithms, Applications, Example. Available from: https://www.guru99.com/reinforcement-learning-tutorial.html
7. Mnih, V., Kavukcuoglu, K., et al. Playing Atari with Deep Reinforcement Learning.
8. Mnih, V., Kavukcuoglu, K., et al. Human-level control through deep reinforcement learning.
9. PyTorch dominates the top event: CVPR papers account for 4 times the proportion of TensorFlow. Available from: https://www.programmersought.com/article/36664404330/
10. Andrychowicz, M., et al. Hindsight Experience Replay. NIPS 2017.
11. Petersen, B.K., Yang, J., Grathwohl, W.S., Cockrell, C., Santiago, C., An, G., and Faissol, D.M. Deep Reinforcement Learning and Simulation as a Path Toward Precision Medicine.
Advisor: Sun-Chong Wang (王孫崇)   Review date: 2021-08-31
