

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81473


    Title: Exploring Effects of Optimizer Selection and Their Hyperparameter Tuning on Performance of Deep Neural Networks for Image Recognition
    Author: Chen, Jing-Wun (陳靖玟)
    Contributor: Department of Mathematics
    Keywords: Deep learning
    Date: 2019-05-09
    Uploaded: 2019-09-03 15:56:38 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, deep learning has flourished, and people have begun using it to solve problems. Deep neural networks can be used for speech recognition, image recognition, object detection, face recognition, driverless vehicles, and more. The most basic neural network is the multilayer perceptron (MLP), which consists of multiple layers of nodes with adjacent layers fully connected to each other. The biggest drawback of the MLP is that it ignores the shape and ordering of the data: when image data are flattened into one dimension, important spatial information is lost. The convolutional neural network (CNN) was developed to address this. Compared with a traditional neural network, a CNN has additional convolution and pooling layers, which are used to preserve and extract image features.

    After feeding data into a neural network, we want its output to be close to the true values, which requires an optimizer to minimize the error between predictions and ground truth. Optimizers in deep learning are usually refinements of gradient descent, and choosing a suitable learning rate is a difficult problem. Prediction accuracy also depends on many other factors, such as the network architecture and the cost function. The goal of this work is to investigate the effects of optimizer selection and hyperparameter tuning on the performance of deep neural networks for image recognition. We use three data sets (the MNIST handwritten digits, CIFAR-10, and train-route scenes) and two network architectures (MLP and CNN), combined with six optimizers (gradient descent, Momentum, the adaptive gradient algorithm (Adagrad), Adadelta, root mean square propagation (RMSProp), and Adam). Our numerical results show that Adam is a good choice because of its efficiency and robustness.
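The convolution and pooling layers described in the abstract can be illustrated with a minimal pure-Python sketch. This is a generic illustration, not code from the thesis; the function names `conv2d` and `max_pool2x2` and the toy input values are our own:

```python
# Minimal sketch of the two layer types a CNN adds over an MLP:
# a valid (no-padding) 2D convolution and 2x2 max pooling.
# Generic illustration only; shapes and values are made up.

def conv2d(image, kernel):
    """Valid 2D convolution of a 2D list `image` by a 2D list `kernel`."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Sum of elementwise products over the kernel window.
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling with stride 2 (keeps the strongest response per window)."""
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
kernel = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]          # picks out the centre of each 3x3 window
feature_map = conv2d(image, kernel)   # -> [[6, 7], [10, 11]]
pooled = max_pool2x2(feature_map)     # -> [[11]]
```

Because the convolution slides a window over the 2D grid instead of flattening it, neighbouring pixels stay neighbours, which is exactly the spatial information an MLP's one-dimensional input loses.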
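Three of the six optimizers compared in this work (gradient descent, Momentum, and Adam) can be sketched as update rules on a toy one-dimensional quadratic loss. This is a generic, self-contained illustration with our own function names and hyperparameter defaults, not the experimental code used in the thesis:

```python
import math

def grad(x):
    """Gradient of the toy loss f(x) = (x - 3)^2, whose minimum is at x = 3."""
    return 2.0 * (x - 3.0)

def sgd(x, lr=0.1, steps=100):
    """Plain gradient descent: step against the gradient, scaled by lr."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum(x, lr=0.1, beta=0.9, steps=300):
    """Momentum: accumulate a velocity so consistent gradients speed up."""
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x -= lr * v
    return x

def adam(x, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=1000):
    """Adam: per-parameter step sizes from bias-corrected moment estimates."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g        # first moment (mean of gradients)
        v = b2 * v + (1 - b2) * g * g    # second moment (mean of squares)
        m_hat = m / (1 - b1 ** t)        # bias correction for zero init
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# All three should approach the minimizer x = 3 starting from x = 0.
results = {name: f(0.0) for name, f in
           [("sgd", sgd), ("momentum", momentum), ("adam", adam)]}
```

The learning-rate sensitivity mentioned in the abstract is easy to see even on this toy loss: for `sgd`, any `lr` greater than 1.0 makes each step overshoot the minimum by more than the previous error, so the iterates diverge, while Adam rescales its steps by the running gradient statistics.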
    Appears in Collections: [Graduate Institute of Mathematics] Master's and Doctoral Theses

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0 KB  HTML    245


    All items in NCUIR are protected by copyright, with all rights reserved.

