Abstract: This study is based on a 14-channel illuminator available in the laboratory
 and aims to develop a spectral generator capable of rapidly producing spectra that
 match given input optical parameters. A genetic algorithm is employed to generate
 datasets under different illuminance levels, which are then used to train a neural
 network. The genetic algorithm is also used to optimize the hyperparameters in
 order to identify high-performing models, which are subsequently implemented
 in the spectral generator.
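The hyperparameter search described above can be pictured as a small genetic algorithm over two hypothetical hyperparameters, hidden-layer count and neurons per layer. This is a minimal sketch, not the study's implementation: the search ranges, genetic operators, and the `fitness` function (a stand-in for the validation error of a trained network) are all assumptions.

```python
import random

# Hypothetical search space: hidden-layer count and neurons per layer.
SEARCH_SPACE = {"layers": list(range(1, 6)), "neurons": list(range(8, 129, 8))}

def fitness(ind):
    """Stand-in for validation error; lower is better (placeholder, not real)."""
    layers, neurons = ind
    return abs(layers - 4) + abs(neurons - 32) / 8

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [(rng.choice(SEARCH_SPACE["layers"]),
            rng.choice(SEARCH_SPACE["neurons"])) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                 # crossover: mix parents' genes
            if rng.random() < 0.2:               # mutation on the layer gene
                child = (rng.choice(SEARCH_SPACE["layers"]), child[1])
            children.append(child)
        pop = parents + children                 # elitist: best half survives
    return min(pop, key=fitness)

best_layers, best_neurons = evolve()
```

In the actual workflow, `fitness` would train a candidate network and return its validation error rather than this synthetic score.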
 The proposed spectral generator is designed in two configurations: one
 without extended inputs and one with extended inputs. In the non-extended mode,
 the generation time is approximately 0.05-0.06 seconds. In the extended input
 mode, although the generation time increases to about 1.2-1.3 seconds, the model
 can predict over a wider spectral range.
 The datasets are generated using a genetic algorithm under an illuminance
 (Ev) of 500 lux and correlated color temperatures (CCTs) of 3000 K, 4000 K,
 5000 K, and 6500 K. Each dataset includes the corresponding 14-channel weights
 and optical parameters such as CCT, color deviation (Delta u-v, Duv), general
 color rendering index (Ra), and melanopic daylight efficacy ratio (mel-DER).
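One way to picture a single dataset sample: the genetic algorithm's decision variables are the 14 channel weights, the mixed spectrum is their weighted sum over the per-channel spectra, and the associated optical parameters are stored alongside. The channel spectra and all parameter values below are made-up placeholders, not measurements from the illuminator.

```python
N_CHANNELS = 14

def mix_spectrum(weights, channel_spectra):
    """Weighted sum of per-channel spectra sampled on a common wavelength grid."""
    n_points = len(channel_spectra[0])
    return [sum(w * s[i] for w, s in zip(weights, channel_spectra))
            for i in range(n_points)]

# Toy channel spectra: 14 channels on a 5-point wavelength grid, each channel
# emitting at a single grid point (purely illustrative shapes).
channel_spectra = [[0.0] * 5 for _ in range(N_CHANNELS)]
for c in range(N_CHANNELS):
    channel_spectra[c][c % 5] = 1.0

weights = [1.0 / N_CHANNELS] * N_CHANNELS      # GA decision variables
sample = {
    "weights": weights,
    "spectrum": mix_spectrum(weights, channel_spectra),
    # Optical parameters of the mix (placeholder values, not computed here):
    "Ev": 500.0, "CCT": 4000.0, "Duv": 0.0, "Ra": 95.0, "mel_DER": 0.8,
}
```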
 Before training the neural network, the dataset is divided into training,
 validation, and test sets. Two-dimensional oversampling of Ra and mel-DER is
 applied to the training and validation sets to enhance data balance, and the
 network is trained on these sets. To minimize prediction errors, this study
 compares various data configurations and hyperparameter combinations.
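The two-dimensional oversampling step can be sketched as joint binning on Ra and mel-DER followed by resampling sparse bins up to the size of the largest bin. The bin widths, field names, and resampling-with-replacement policy here are illustrative assumptions, not the study's exact procedure.

```python
import random
from collections import defaultdict

def oversample_2d(samples, ra_step=5.0, der_step=0.1, seed=0):
    """Balance samples across joint (Ra, mel-DER) bins by resampling sparse bins."""
    rng = random.Random(seed)
    bins = defaultdict(list)
    for s in samples:
        key = (int(s["Ra"] // ra_step), int(s["mel_DER"] // der_step))
        bins[key].append(s)
    target = max(len(b) for b in bins.values())   # size of the fullest bin
    balanced = []
    for b in bins.values():
        balanced.extend(b)                        # keep every original sample
        balanced.extend(rng.choices(b, k=target - len(b)))  # top up sparse bins
    return balanced

# Tiny synthetic example with uneven (Ra, mel-DER) coverage:
data = [{"Ra": 80 + (i % 3) * 7, "mel_DER": 0.6 + (i % 2) * 0.2} for i in range(10)]
balanced = oversample_2d(data)
```

After balancing, every occupied bin holds the same number of samples, which keeps rare Ra/mel-DER combinations from being underrepresented during training.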
 The results indicate that as CCT increases, mel-DER values also tend to rise.
 Cross-validation during network training helps reduce prediction errors. A deeper
 network architecture with fewer neurons per layer can achieve better training
 errors, albeit at the cost of longer training times. However, if the network becomes
 too deep and narrow, both the error and training time increase significantly.