References
Garrido-Merchán, E.C. and Hernández-Lobato, D. (2020). Dealing with Categorical and Integer-valued Variables in Bayesian Optimization with Gaussian Processes, Neurocomputing, 380, 20-35.
Greenhill, S., Rana, S., Gupta, P., Vellanki, P., and Venkatesh, S. (2020). Bayesian Optimization for Adaptive Experimental Design: A Review, IEEE Access, 8, 13937-13948.
Hertel, L., Baldi, P., and Gillen, D.L. (2021). Reproducible Hyperparameter Optimization, Journal of Computational and Graphical Statistics, 31, 84-99.
Johnson, R.A. and Wichern, D.W. (2007). Applied Multivariate Statistical Analysis, Pearson, 149-208.
Joseph, V.R. and Delaney, J.D. (2007). Functionally Induced Priors for the Analysis of Experiments, Technometrics, 49, 1-11.
Khaw, J.F.C., Lim, B.S., and Lim, L.E.N. (1995). Optimal design of neural networks using the Taguchi method, Neurocomputing, 7(3), 225-245.
Kim, Y.S. and Yum, B.J. (2004). Robust design of multilayer feedforward neural net-
works: an experimental approach, Engineering Applications of Artificial Intelligence,
17(3), 249-263.
Luong, P., Gupta, S., Nguyen, D., Rana, S., and Venkatesh, S. (2019). Bayesian Optimization with Discrete Variables, In Australasian Joint Conference on Artificial Intelligence, 473-484.
Midilli, Y.E. and Elevli, S. (2019). Optimization of Neural Networks with Response
Surface Methodology: Prediction of Cigarette Pressure Drop, 60th International Scien-
tific Conference on Information Technology and Management Science of Riga Technical
University (ITMS).
Murugan, P. (2017). Hyperparameters optimization in Deep Convolutional Neural Net-
work / Bayesian Approach with Gaussian Process Priors, arXiv:1712.07233.
Nazghelichi, T., Aghbashlo, M., and Kianmehr, M.H. (2011). Optimization of
an artificial neural network topology using coupled response surface methodology and
genetic algorithm for fluidized bed drying, Computers and Electronics in Agriculture,
75(1), 84-91.
Packianather, M.S., Drake, P.R., and Rowlands, H. (2000). Optimizing the
parameters of multilayered feedforward neural networks through Taguchi design of ex-
periments, Quality and Reliability Engineering International, 16(6), 461-473.
Santner, T.J., Williams, B.J., and Notz, W.I. (2003). The Design and Analysis of Computer Experiments, Springer, 216-225.
Sato, R., Tanaka, M., and Takeda, A. (2021). A Gradient Method for Multilevel Optimization, NeurIPS, arXiv:2105.13954.
Sukthomya, W. and Tannock, J. (2005). The training of neural networks to model manufacturing processes, Journal of Intelligent Manufacturing, 16(1), 39-51.
Sukthomya, W. and Tannock, J. (2005). The optimization of neural network parameters using Taguchi's design of experiments approach: an application in manufacturing process modelling, Neural Computing and Applications, 14, 337-344.
Tarik, M.H.M., Omar, M., Abdullah, M.F., and Ibrahim, R. (2018). Optimization of Neural Network Hyperparameters for Gas Turbine Modeling Using Bayesian Optimization, 5th IET International Conference on Clean Energy and Technology (CEAT2018), 1-5.
Tsai, J.-T., Chou, J.-H., and Liu, T.-K. (2006). Tuning the Structure and Parameters of a Neural Network by Using Hybrid Taguchi-Genetic Algorithm, IEEE Transactions on Neural Networks, 17(1), 69-80.
Wang, L., Dernoncourt, F., and Bui, T. (2020). Bayesian optimization for Selecting
Efficient Machine Learning Models, CIKM MoST-Rec Workshop.
Yang, S.M. and Lee, G.S. (1999). Neural Network Design by Using Taguchi Method, Journal of Dynamic Systems, Measurement, and Control, 121(3), 560-563.
Zhang, X., Chen, X., Yao, L., Ge, C., and Dong, M. (2019). Deep Neural Network Hyperparameter Optimization with Orthogonal Array Tuning, International Conference on Neural Information Processing (ICONIP), 287-295.