Gaussian process (GP) models are widely used for emulating large-scale computer experiments because of their strong predictive performance. However, fitting a GP model to massive data is computationally intensive, since parameter estimation involves inverting a large correlation matrix. Recent technological advances have made data collection easier than ever, which in turn has intensified the need for data reduction. The goal of this thesis is to lessen the computational burden of GP model fitting through data reduction. The proposed method reduces the data while preserving the characteristics of the model parameters and simultaneously improving predictive performance. Moreover, rather than requiring the size of the reduced data to be specified in advance, the method exploits the characteristics of the data during the reduction process to determine an appropriate size. Several simulated examples illustrate the advantages of the method. Finally, through the connection between GPs and the multivariate normal distribution, we show that the proposed method shares features with Mallows's Cp, a model selection criterion for linear regression, and we compare the two approaches.