Master's/Doctoral Thesis 107221017: Detailed Record


The Application of Machine Learning to the Data Analysis of Pneumonia


https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia

Image recognition plays an important role in artificial intelligence, with applications such as animal recognition, handwriting recognition, and license plate recognition. The main purpose of using deep learning is to extract features automatically and reduce costs, but building a good classifier is not easy: many factors interact with one another, for example the computing hardware, the parameter settings, the choice of optimizer, and the model architecture.
The pneumonia images in this experiment come from Kaggle. A convolutional neural network is used to build a pneumonia identification model, and several factors that may affect pneumonia identification are selected for discussion and analysis. The experimental results show that the dropout ratio, the optimization method, transfer learning, parameter freezing, and the number of convolutional layers all affect the performance of the model.
Keywords: machine learning, convolutional neural network, deep learning, reinforcement learning, optimizer, filter, transfer learning.

★ Convolutional neural network
★ Deep learning
★ Reinforcement learning
★ Optimizer
★ Filter
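
To make the experimental setup described in the abstract concrete, the following is a minimal sketch, assuming a TensorFlow/Keras environment, of a CNN classifier for the Kaggle chest X-ray data in which the dropout ratio, the optimizer, and transfer learning with frozen parameters can be varied. The layer sizes, image dimensions, backbone choice, and default values are illustrative assumptions, not the configuration actually used in the thesis.

```python
# Illustrative sketch only: a small CNN whose dropout ratio and optimizer can be
# varied, plus a transfer-learning variant with an optionally frozen backbone.
# All sizes, rates, and the ResNet50 backbone are assumptions, not the thesis's setup.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_cnn(dropout_rate=0.5, optimizer="adam"):
    """Plain CNN: the dropout ratio and optimizer are the variables under study."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(150, 150, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(dropout_rate),           # dropout ratio varied in the experiments
        layers.Dense(2, activation="softmax"),  # normal vs. pneumonia
    ])
    model.compile(optimizer=optimizer,          # e.g. "sgd", "rmsprop", "adam"
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def build_transfer_model(freeze_base=True, optimizer="adam"):
    """Transfer learning: a pretrained backbone whose parameters can be frozen."""
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet",
        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = not freeze_base            # "freezing parameters" as in the abstract
    model = models.Sequential([
        base,
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In this kind of study each run keeps the data fixed and changes one factor at a time (dropout rate, optimizer, frozen vs. trainable backbone, number of convolutional layers), then compares the resulting accuracy curves.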

Abstract ii
Acknowledgments iii
Contents iv
List of Figures vi
List of Tables ix
Chapter I Introduction 1
1.1 Research Motivation 1
1.2 Research Goal 1
1.3 Research Approach 2
1.4 Research Object 2
Chapter II Deep Learning Method 3
2.1 Introduction to CNN 3
2.2 Activation Function 4
2.3 Softmax 7
2.4 Application of CNN 7
2.5 Optimizer 7
2.6.1 Introduction to SVM (Support Vector Machine) 9
2.6.2 Dual problem 10
2.6.3 Nonlinear Support Vector Machine 12
2.7.1 RNN (Recurrent Neural Network) 14
2.7.2 RNN mathematical model 14
2.7.3 RNN forms 17
2.8 LSTM (Long Short-Term Memory) 19
2.9 GRU (Gated Recurrent Units) 21
Chapter III Experiment model and results 22
3.1 Introduction to experimental framework 22
3.2 Introduction to image library and image preprocessing 23
3.3 Data set production 24
3.4 Implementation process 24
3.5 The model structure 25
3.6 Result and discussion 26
3.6.1 The effect of the dropout ratio 26
3.6.2 Different optimizer approaches in the model performance 34
Chapter IV Conclusion and future outlook 55
4.1 Conclusion 55
4.2 Future outlook 56
Bibliography 57

[1]. Keiron O’Shea and Ryan Nash, “An Introduction to Convolutional Neural Networks”, arXiv:1511.08458v2 (2015).
[2]. Chigozie Enyinna Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall, “Activation Functions: Comparison of Trends in Practice and Research for Deep Learning”, arXiv:1811.03378v1 (2018).
[3]. Ghadeer Al-Bdour, Raffi Al-Qurran, Mahmoud Al-Ayyoub and Ali Shatnawi, “A Detailed Comparison Study of Open Source Deep Learning Frameworks”, arXiv:1903.00102v2 (2020).
[4]. Eric Jang, Shixiang Gu, and Ben Poole, “Categorical Reparameterization with Gumbel-Softmax”, arXiv:1611.01144v5 (2017).
[5]. Ashwin Bhandare, Maithili Bhide, Pranav Gokhale, and Rohan Chandavarkar, “Applications of Convolutional Neural Networks”, (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 7 (5), (2016), pp. 2206-2215.
[6]. Sebastian Ruder, “An overview of gradient descent optimization algorithms”, Jan. 19, (2016).
[7]. Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin, “A Practical Guide to Support Vector Classification”, May 19, (2016).
[8]. Chih-Chung Chang and Chih-Jen Lin, “LIBSVM: A Library for Support Vector Machines”, Nov. 29, (2019).
[9]. Gang Chen, “A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation” arXiv:1610.02583v3 (2018).
[10]. colah’s blog, “Understanding LSTM Networks”, Aug. 27, (2015).
(https://colah.github.io/posts/2015-08-Understanding-LSTMs/)
[11]. Rafal Jozefowicz, Wojciech Zaremba and Ilya Sutskever, “An Empirical Exploration of Recurrent Network Architectures”, ICML (2015).
[12]. Osvaldo Simeone, “A Very Brief Introduction to Machine Learning With Applications to Communication Systems”, arXiv:1808.02342v4 (2018).
[13]. Vincent François-Lavet, Riashat Islam, Joelle Pineau, Peter Henderson and Marc G. Bellemare, “An Introduction to Deep Reinforcement Learning”, arXiv:1811.12560v2 (2018).
[14]. Tommy Huang, “Machine Learning: Ensemble Learning (Bagging, Boosting, and AdaBoost)” (blog post, in Chinese), Jun. 20, (2018).
[15]. Branko Markoski, Zdravko Ivanković, Ladislav Ratgeber, Predrag Pecev and Dragana Glušac, “Application of AdaBoost Algorithm in Basketball Player Detection ” Acta Polytechnica Hungarica Vol. 12, No. 1, (2015).
[16]. Zhihu (知乎), “Reinforcement Learning Basics (Value Iteration)” (in Chinese), Feb. 25, (2018).
(https://zhuanlan.zhihu.com/p/33229439)
[17]. Zhihu (知乎), “Reinforcement Learning Basics (Policy Iteration)” (in Chinese), Feb. 28, (2018).
(https://zhuanlan.zhihu.com/p/34006925)
[18]. Christopher J. C. H. Watkins and Peter Dayan, “Technical Note: Q-Learning”, Machine Learning, 8, pp. 279-292, (1992).
[19]. Yuxi Li, “Reinforcement Learning Applications”, arXiv:1908.06973v1 (2019).
[20]. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra and Martin Riedmiller, “Playing Atari With Deep Reinforcement Learning” arXiv:1312.5602v1 (2013).
[21]. Hado van Hasselt, “Double Q-learning”, Advances in Neural Information Processing Systems 23 (NIPS 2010).
[22]. Sagar Sharma, “Monte Carlo Tree Search: MCTS For Every Data Science Enthusiast”, Towards Data Science, Aug. 1, (2018).
(https://towardsdatascience.com/monte-carlo-tree-search-158a917a8baa)
[23]. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville and Yoshua Bengio, “Generative Adversarial Nets”, arXiv:1406.2661v1 (2014).
[24]. Rico Jonschkowski, Divyam Rastogi and Oliver Brock, “Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors”, arXiv:1805.11122v2 (2018).
[25]. 棒棒生 (blog), “A Unified Framework: Bayes Filter” (in Chinese), May 10, (2017).
(https://bobondemon.github.io/2017/05/10/Bayes-Filter-for-Localization/)
[26]. Pieter Abbeel, “Bayes Filters”, UC Berkeley EECS, (2014).
[27]. M. Sanjeev Arulampalam, Simon Maskell, Neil Gordon and Tim Clapp, “A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking”, IEEE Transactions on Signal Processing, Vol. 50, No. 2, Feb. (2002).
[28]. Jizhong Xiao, “Probabilistic Robotics: Particle Filter / Monte Carlo Localization”.
[29]. Sharath Srinivasan, “Particle Filter: A Hero in the World of Non-Linearity and Non-Gaussian”, Towards Data Science, Aug. 14, (2019).
(https://towardsdatascience.com/particle-filter-a-hero-in-the-world-of-non-linearity-and-non-gaussian-6d8947f4a3dc)
[30]. Shashank Joisa, “Kalman Filter Based GPS Signal Tracking”, Dec. 14, (2017).
(https://medium.com/viithiisys/kalman-filter-based-gps-signal-tracking-cf76e9c40834)
[31]. 拾人牙慧 (blog), “Kalman Filter”, Dec. 14.
(https://silverwind.pixnet.net/blog/post/167680859)
[32]. Youngjoo Kim and Hyochoong Bang, “Introduction to Kalman Filter and Its Applications” Sep.4,(2018).
[33]. Chadaporn Keatmanee, Junaid Baber and Maheen Bakhtyar, “Simple Example of Applying Extended Kalman Filter”, Mar. 11, (2015).
[34]. M. W. M. G. Dissanayake, P. Newman, S. Clark, H. F. Durrant-Whyte and M. Csorba, “A Solution to the Simultaneous Localization and Map Building (SLAM) Problem”, (2001).
[35]. Sinno Jialin Pan and Qiang Yang , “A Survey on Transfer Learning” IEEE (2009).
[36]. Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun, “Deep Residual Learning for Image Recognition” arXiv:1512.03385v1(2015).
[37]. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Dragomir Anguelov, Scott Reed, Dumitru Erhan, Vincent Vanhoucke and Andrew Rabinovich, “Going Deeper with Convolutions”, arXiv:1409.4842v1 (2014).
[38]. Sebastian Ruder, “Transfer Learning: Machine Learning’s Next Frontier”, Mar. 31, (2017).
(https://ruder.io/transfer-learning/)
[39]. Andrej Karpathy (blog), “The Unreasonable Effectiveness of Recurrent Neural Networks”, May 21, (2015).