Master's/Doctoral Thesis 107221017: Detailed Record




Author: Ting-Hsuan Yang (楊庭瑄)    Department: Mathematics
Thesis title: The Application of Machine Learning to the Data Analysis of Pneumonia
(Chinese title: 機器學習在肺炎資料分析中的應用)
Related theses:
★ A study of inviscid standing waves of airflow passing through a discontinuous tube
★ An Iteration Method for the Riemann Problem of Some Degenerate Hyperbolic Balance Laws
★ Application of image-blurring methods in a neural network for butterfly recognition
★ Existence of generalized solutions to the Riemann problem for a single nonlinear balance law
★ Existence and uniqueness of solutions to two-point boundary value problems for systems of nonlinear second-order ordinary differential equations
★ Construction of interval approximate solutions to the Cauchy problem for the compressible Euler equations with near-sonic flow
★ Properties of solutions to some degenerate quasilinear wave equations
★ Global Lipschitz continuous solutions to quasilinear wave equations with piecewise-linear initial data
★ Advection-diffusion-reaction equations for equilibrium models in hydrogeology
★ Classical solutions of perturbed Riemann problems for nonlinear conservation laws
★ Periodicity of solutions to initial-boundary value problems for the BBM and KdV equations
★ Classical solutions of perturbed Riemann problems for resonant conservation laws
★ Behavior of shock-wave solutions of the slightly viscous Euler equations in compressible flow
★ Existence of global weak solutions to initial-boundary value problems for inhomogeneous systems of hyperbolic conservation laws
★ Global weak solutions to the Cauchy problem for nonlinear balance laws
★ Some lemmas on the global existence of entropy solutions to the Cauchy problem for a single hyperbolic conservation law
  1. The author has agreed to make the electronic full text of this thesis available immediately.
  2. The released electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): With the rapid development of technology, artificial intelligence has come to play a very important role. The advantage of machines is that they can repeat work without becoming fatigued, and in recent years many researchers have studied how to give machines human-like intelligence, which has driven rapid progress in this field. Machine learning plays a very important role in perception and estimation; it can be divided into four types: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Among these approaches, neural networks play a particularly important role in machine learning. Perception and estimation use known information to infer further information about the future.
Image recognition is an important application of artificial intelligence, for example animal recognition, handwriting recognition, and license-plate recognition. The main purpose of using deep learning is to extract features automatically and reduce cost, but building a good classifier is not easy, because many factors interact with one another, for example the computing hardware, parameter settings, choice of optimizer, and model architecture.
The pneumonia images used in this experiment come from
https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia
A convolutional neural network is used to build the pneumonia-recognition model, and several factors that may affect recognition performance are selected for discussion and analysis. The experimental results show that the dropout ratio, the optimization method, transfer learning, freezing parameters, and the number of convolutional layers all affect the performance of the model.
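The following is a minimal Keras sketch of the kind of convolutional network described above, with the dropout ratio and the optimizer exposed as tunable factors; the layer widths, image size, and default values are illustrative assumptions and not the exact architecture used in the thesis.

# Minimal Keras sketch of a pneumonia-classification CNN (illustrative only;
# not the thesis's exact architecture). The dropout rate and the optimizer
# are the factors varied in the experiments.
from tensorflow.keras import layers, models

def build_cnn(dropout_rate=0.5, optimizer="adam", input_shape=(150, 150, 1)):
    # Assumed input: grayscale chest X-ray images resized to 150x150.
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(dropout_rate),           # factor: dropout ratio
        layers.Dense(2, activation="softmax"),  # two classes: NORMAL / PNEUMONIA
    ])
    model.compile(optimizer=optimizer,          # factor: optimizer, e.g. "adam", "sgd", "rmsprop"
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

Calling build_cnn with different dropout_rate and optimizer arguments, while keeping the rest of the pipeline fixed, is one way to reproduce the kind of factor-by-factor comparison reported in the results.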
Abstract (English): With the development of science and technology, artificial intelligence has come to play a very important role. The advantage of machines is that they can repeat work without fatigue, and in recent years many people have begun to study how to give machines human-like intelligence, which has driven the rapid development of this field. Machine learning plays a very important role in perception and estimation. It can be divided into four types: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Among these approaches, neural networks play a particularly important role in machine learning. Perception and estimation can use known information to deduce more information about the future.
Image recognition plays an important role in artificial intelligence, for example in animal recognition, handwriting recognition, and license-plate recognition. The main purpose of using deep learning is to extract features and reduce costs, but it is not easy to build a good classifier, because many factors affect one another, for example computer equipment, parameter settings, optimizer selection, and model architecture.
The pneumonia images in this experiment come from Kaggle. A convolutional neural network is used to establish the pneumonia-identification model, and several factors that may affect pneumonia identification are selected as the objects of discussion and analysis. The experimental results show that the dropout ratio, the optimization method, transfer learning, freezing parameters, and the number of convolutional layers all affect the performance of the model. Keywords: machine learning, convolutional neural network, deep learning, reinforcement learning, optimizer, filter, transfer learning.
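As a companion sketch, the transfer-learning and parameter-freezing factors mentioned in the abstract can be illustrated as follows; the choice of ResNet50 as the pretrained backbone, the input size, and the classification head are assumptions for illustration only, not the thesis's confirmed setup.

# Hypothetical transfer-learning sketch: reuse a pretrained backbone and
# freeze its parameters, training only a small classification head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Assumed backbone; the thesis may use a different pretrained model.
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # "freezing parameters": pretrained weights are not updated

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                    # dropout ratio, as in the experiments
    layers.Dense(2, activation="softmax"),  # NORMAL / PNEUMONIA
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Setting base.trainable back to True (with a small learning rate) would correspond to fine-tuning instead of freezing, which is the contrast the experiments examine.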
Keywords (Chinese): ★ 機器學習 (machine learning)
★ 卷積神經網路 (convolutional neural network)
★ 深度學習 (deep learning)
★ 強化學習 (reinforcement learning)
★ 優化器 (optimizer)
★ 過濾器 (filter)
Keywords (English): ★ machine learning
★ convolutional neural network
★ deep learning
★ reinforcement learning
★ optimizer
★ filter
Table of Contents
Chinese Abstract i
Abstract ii
Acknowledgments iii
Contents iv
List of Figures vi
List of Tables ix
Chapter I Introduction 1
1.1 Research Motivation 1
1.2 Research Goal 1
1.3 Research Approach 2
1.4 Research Object 2
Chapter II Deep Learning Method 3
2.1 Introduction to CNN 3
2.2 Activation Function 4
2.3 Softmax 7
2.4 Application of CNN 7
2.5 Optimizer 7
2.6.1 Introduction to SVM (Support Vector Machine) 9
2.6.2 Dual problem 10
2.6.3 Nonlinear Support Vector Machine 12
2.7.1 RNN (Recurrent Neural Network) 14
2.7.2 RNN mathematical model 14
2.7.3 RNN forms 17
2.8 LSTM (Long Short-Term Memory) 19
2.9 GRU (Gated Recurrent Units) 21
Chapter III Experiment model and results 22
3.1 Introduction to experimental framework 22
3.2 Introduction to image library and image preprocessing 23
3.3 Data set production 24
3.4 Implementation process 24
3.5 The model structure 25
3.6 Result and discussion 26
3.6.1 The effect of the dropout ratio 26
3.6.2 Different optimizer approaches in the model performance 34
Chapter IV Conclusion and future outlook 55
4.1 Conclusion 55
4.2 Future outlook 56
Bibliography 57
Advisor: Meng-Kai Hong (洪盟凱)    Date of approval: 2020-07-17
