Tuning hyperparameters is a crucial step for deep learning algorithms, since different hyperparameter settings directly affect model performance; consequently, hyperparameter tuning has received great attention. Bayesian optimization has long been a popular method for this task: it proceeds sequentially, iteratively updating a prior into a posterior distribution to search for the best hyperparameter configuration. This thesis combines Bayesian optimization with the concept of robust parameter design to propose a new hyperparameter optimization method. During optimization, the method takes both control factors and noise factors (e.g., the initial weights and the random splitting of training samples) into account, with the aim of locating the optimal hyperparameter configuration more accurately. In simulation studies and empirical examples covering different types of problems, the proposed method finds hyperparameter settings closer to the true configuration than conventional Bayesian optimization does.
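To make the sequential procedure concrete, the following is a minimal sketch of a Bayesian optimization loop in which each candidate hyperparameter is evaluated under several noise-factor replicates (random seeds standing in for initial weights or data splits) and the replicate average is modeled, in the spirit of robust parameter design. This is an illustration only, not the thesis's actual algorithm: the toy objective `objective`, the helper `robust_eval`, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(lr, seed):
    # Hypothetical noisy validation loss as a function of learning rate `lr`.
    # `seed` plays the role of a noise factor (e.g., initial weights or a
    # random train/validation split); the noiseless minimum is at lr = 1e-2.
    rng = np.random.default_rng(seed)
    return (np.log10(lr) + 2.0) ** 2 + 0.1 * rng.standard_normal()

def robust_eval(lr, n_rep=5):
    # Average the objective over noise-factor replicates, so the surrogate
    # models performance that is robust to the noise factors.
    return float(np.mean([objective(lr, s) for s in range(n_rep)]))

grid = np.logspace(-4, 0, 200).reshape(-1, 1)   # candidate learning rates
X = np.array([[1e-4], [1e-2], [1.0]])           # initial design points
y = np.array([robust_eval(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                             # sequential BO iterations
    gp.fit(np.log10(X), y)                      # surrogate on log scale
    mu, sigma = gp.predict(np.log10(grid), return_std=True)
    imp = y.min() - mu                          # improvement for minimization
    z = imp / (sigma + 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]                # most promising candidate
    X = np.vstack([X, x_next])
    y = np.append(y, robust_eval(x_next[0]))

best_lr = X[np.argmin(y)][0]
print(best_lr)  # expected to land near 1e-2, the noiseless minimizer
```

Averaging over replicates before fitting the surrogate is one simple way to fold noise factors into the loop; the thesis's method treats control and noise factors more formally than this sketch does.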