Advances in deep learning algorithms, together with growing software and hardware computing power, have made neural network models far easier to train, producing a surge of related research and industrial applications. In a standard neural network, each input is associated with a weight, and an activation function maps the weighted inputs to an output. In most models, however, every weight is a single point estimate: no matter how complex the architecture, the weights corrected by backpropagation remain fixed values, and the output is computed solely from those values, which limits the model's robustness. A neural network equipped with Bayesian inference can instead be viewed as a conditional distribution model, in which each weight becomes a distribution rather than a fixed value, and predictions are obtained by averaging over the posterior distribution of the weights. Because computing the prediction of every possible network and then averaging is computationally intractable, we adopt Monte Carlo Dropout as an approximation to Bayesian inference. A sketch of this idea follows below.
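As a rough illustration of the approximation just described, the sketch below keeps dropout active at prediction time and averages several stochastic forward passes to estimate the posterior predictive mean, with the sample standard deviation serving as an uncertainty estimate. It is a minimal sketch assuming PyTorch; the network `MCDropoutNet`, the helper `mc_dropout_predict`, and all parameter values are hypothetical choices for illustration, not taken from the original text.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """Small feed-forward network with a dropout layer (hypothetical architecture)."""
    def __init__(self, in_dim=10, hidden=64, out_dim=1, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # the layer kept stochastic at prediction time
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=100):
    """Monte Carlo Dropout: average n_samples stochastic forward passes.

    Each pass samples a different dropout mask, i.e. a different "thinned"
    network, so the average approximates the posterior predictive mean and
    the standard deviation gives a simple uncertainty estimate.
    """
    model.train()  # keep dropout layers active instead of switching to eval()
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

# Hypothetical usage on random data
model = MCDropoutNet()
x = torch.randn(8, 10)
mean, std = mc_dropout_predict(model, x, n_samples=50)
```

Averaging the stochastic passes is the Monte Carlo counterpart of the posterior average described above: a larger number of samples gives a better approximation at proportionally higher cost, which is still far cheaper than averaging over every possible network.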