Graduate Thesis 110522121: Detailed Record




Name: Chia-Ju Huang (黃家茹)    Department: Computer Science and Information Engineering
Thesis title: 透過圖像修補技術填補空間中的PM2.5值:以台灣為例
(Imputing spatial PM2.5 values via image inpainting: A case study in Taiwan)
Related theses
★ Predicting users' personal information and personality traits from web browsing logs ★ Predicting changes in users' browsing behavior before special holidays via multi-target matrix factorization
★ A study of dynamic multi-model fusion analysis ★ Extending clickstreams: analyzing user behaviors missing from clickstreams
★ Associated learning: decomposing end-to-end backpropagation with autoencoders and target propagation ★ A click prediction model that fuses multi-model rankings
★ Analyzing intentional, unintentional, and missing user behaviors in web logs ★ Adjusting word embeddings with synonym and antonym information using a non-directional sequence encoder based on self-attention
★ Exploring when to use deep learning versus simple models for click-through rate prediction ★ Fault detection for air quality sensors: an anomaly detection framework based on deep spatio-temporal graph models
★ An empirical study of how word embeddings adjusted with synonym/antonym lexicons affect downstream natural language tasks ★ A semi-supervised model combining spatio-temporal data, applied to anomaly detection for PM2.5 air pollution sensors
★ Training neural networks by adjusting DropConnect drop probabilities according to weight gradient magnitudes ★ Detecting low-activity anomalous accounts on PTT with graph neural networks
★ Generating personalized trend lines for individual users from a small number of trend-line samples ★ Two new probabilistic clustering models based on bivariate and multivariate beta distributions
1. The electronic full text of this thesis is authorized for immediate open access.
2. The open-access electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): This study aims to predict and impute PM2.5 concentrations across Taiwan, and it differs clearly from existing approaches. Unlike traditional methods, we treat PM2.5 concentrations as images and apply image inpainting techniques to restore missing values. In addition, we emphasize using only geographic coordinates (latitude and longitude) and PM2.5 concentrations as input features, without relying on any other meteorological features.

To this end, we collected a large amount of PM2.5 concentration data, together with the corresponding latitude and longitude information, from locations throughout Taiwan. We first converted the PM2.5 concentration data into an image representation in which each pixel corresponds to the PM2.5 value of one observation station. We then applied image inpainting to predict and fill in the missing values of a target area from the surrounding stations' observations. This image-based approach captures spatial proximity and correlation, which improves the quality of the imputed values and clearly distinguishes the method from traditional approaches.

To validate the effectiveness of our method, we conducted a series of experiments and comparisons. The results show that the proposed image-inpainting-based method is promising for predicting and imputing PM2.5 concentrations. Moreover, because it relies only on latitude, longitude, and PM2.5 concentrations as input features, the method keeps data requirements and computational complexity relatively low. The findings of this study are expected to provide a valuable reference and guidance for air pollution monitoring and environmental protection.
Abstract (English): This study aims to predict and interpolate PM2.5 concentrations in Taiwan while distinguishing itself from other approaches. In comparison to traditional methods, we treat PM2.5 concentrations as images and utilize image inpainting techniques for missing value restoration. Additionally, we specifically emphasize the use of only geographical information (latitude and longitude) and PM2.5 concentrations as input features, excluding other meteorological factors.

To achieve this objective, we collected a substantial amount of PM2.5 concentration data and corresponding geographical information (latitude and longitude) from various locations in Taiwan. Firstly, we transformed the PM2.5 concentration data into image representations, where each pixel represents the PM2.5 value at an observation station. Then, using image inpainting techniques, we predicted and filled in the missing values in the target areas based on the surrounding PM2.5 observation station data. This image-based approach allows us to capture spatial proximity and correlations, thereby improving the effectiveness of missing value interpolation. By treating PM2.5 concentrations as images and applying image inpainting techniques, our approach distinguishes itself from traditional methods.
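The station-to-image conversion described in this paragraph can be sketched in a few lines of Python. This is not the thesis implementation: the grid resolution, the bounding box for Taiwan, and the rasterize function below are illustrative assumptions; the sketch only shows how scattered station readings might be placed on a pixel grid together with a mask of observed locations.

```python
# Minimal sketch (not the thesis code): rasterize station readings
# (latitude, longitude, PM2.5) onto a fixed grid so that the PM2.5 field can be
# treated as an image with a mask of observed pixels. The grid size and the
# bounding box for Taiwan below are illustrative assumptions.
import numpy as np

LAT_MIN, LAT_MAX = 21.9, 25.3    # assumed latitude range covering Taiwan
LON_MIN, LON_MAX = 120.0, 122.1  # assumed longitude range covering Taiwan
HEIGHT, WIDTH = 128, 128         # assumed image resolution

def rasterize(stations):
    """stations: iterable of (lat, lon, pm25) tuples -> (image, mask)."""
    image = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    mask = np.zeros((HEIGHT, WIDTH), dtype=bool)
    for lat, lon, pm25 in stations:
        row = int((LAT_MAX - lat) / (LAT_MAX - LAT_MIN) * (HEIGHT - 1))
        col = int((lon - LON_MIN) / (LON_MAX - LON_MIN) * (WIDTH - 1))
        if 0 <= row < HEIGHT and 0 <= col < WIDTH:
            image[row, col] = pm25   # each observed pixel holds one station's value
            mask[row, col] = True    # mark the pixel as observed
    return image, mask

# Example with three hypothetical stations (roughly Taipei, Taichung, Kaohsiung)
img, observed = rasterize([(25.05, 121.55, 18.0),
                           (24.15, 120.65, 35.5),
                           (22.63, 120.30, 42.0)])
```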

To validate the effectiveness of our method, we conducted a series of experiments and comparisons. The results demonstrate the potential of our proposed image inpainting-based method in predicting and interpolating PM2.5 concentrations. Furthermore, our method's reliance solely on geographical information (latitude and longitude) and PM2.5 concentrations as input features simplifies data requirements and computational complexity. The outcomes of this research are expected to provide valuable references and guidance for air pollution monitoring and environmental protection.
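The experimental comparison mentioned here can be made concrete with a hold-out style check: hide a fraction of the observed pixels, impute them, and measure the error at the hidden locations. The sketch below is an assumption-laden illustration, not the thesis code; it uses a simple diffusion-style fill (iterative neighbour averaging) as a stand-in for the inpainting models actually compared, and the function names, hide ratio, and MAE metric are choices made only for this example.

```python
# Minimal sketch (not the thesis code): hold-out evaluation for spatial imputation.
# A fraction of the observed pixels is hidden, filled back in with a simple
# diffusion-style baseline, and scored with mean absolute error (MAE).
import numpy as np

def diffuse_inpaint(image, mask, iterations=500):
    """Fill unobserved pixels (mask == False) by repeatedly averaging the four
    neighbours while keeping observed pixels fixed (a crude harmonic fill)."""
    filled = np.where(mask, image, image[mask].mean())  # initialise holes with the mean
    for _ in range(iterations):
        neighbour_mean = (np.roll(filled, 1, axis=0) + np.roll(filled, -1, axis=0) +
                          np.roll(filled, 1, axis=1) + np.roll(filled, -1, axis=1)) / 4.0
        filled = np.where(mask, image, neighbour_mean)  # observed values stay fixed
    return filled

def holdout_mae(image, mask, hide_ratio=0.2, seed=0):
    """Hide a random subset of observed pixels, re-impute them, and return the MAE."""
    rng = np.random.default_rng(seed)
    observed = np.argwhere(mask)
    hidden = observed[rng.random(len(observed)) < hide_ratio]
    train_mask = mask.copy()
    train_mask[hidden[:, 0], hidden[:, 1]] = False   # pretend these stations are missing
    reconstruction = diffuse_inpaint(image, train_mask)
    truth = image[hidden[:, 0], hidden[:, 1]]
    pred = reconstruction[hidden[:, 0], hidden[:, 1]]
    return float(np.abs(pred - truth).mean())
```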
Keywords (Chinese) ★ PM2.5
★ 空氣污染 (air pollution)
★ 插值 (interpolation)
★ 圖像修補 (image inpainting)
★ 缺失值修復 (missing value restoration)
Keywords (English) ★ PM2.5
★ Air pollution
★ Interpolation
★ Image inpainting
★ Missing value restoration
Thesis outline
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
1.1 Research motivation
1.2 Research objectives
1.3 Thesis organization
2. Related Work
2.1 PM2.5 (fine particulate matter)
2.2 Image inpainting
3. Dataset Setup and Processing
3.1 Data sources and setup
3.1.1 Civil IoT Taiwan
3.2 Kriging
4. Models and Methods
4.1 RFR
4.2 Comparison methods
4.2.1 Auto-Encoder
4.2.2 Deep Neural Network
5. Experimental Results
5.1 Evaluation criteria
5.1.1 EPA stations
5.1.2 Land areas
5.2 Experimental results
5.2.1 Prediction accuracy
5.2.2 Prediction completeness
6. Conclusion
6.0.1 Conclusions
6.0.2 Future work
References
Appendix A: Experiment code
A.1 Github Link
References
[1] T.-H. Lin, S.-C. Tsay, W.-H. Lien, N.-H. Lin, and T.-C. Hsiao, “Spectral derivatives of optical depth for partitioning aerosol type and loading,” Remote Sensing, vol. 13, no. 8, p. 1544, 2021.

[2] X. Li, L. Peng, X. Yao, S. Cui, Y. Hu, C. You, and T. Chi, “Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation,” Environmental Pollution, vol. 231, pp. 997–1004, 2017.

[3] U. Pak, J. Ma, U. Ryu, K. Ryom, U. Juhyok, K. Pak, and C. Pak, “Deep learning-based PM2.5 prediction considering the spatiotemporal correlations: A case study of Beijing, China,” Science of the Total Environment, vol. 699, p. 133561, 2020.

[4] A. P. Tai, L. J. Mickley, and D. J. Jacob, “Correlations between fine particulate matter (PM2.5) and meteorological variables in the United States: Implications for the sensitivity of PM2.5 to climate change,” Atmospheric Environment, vol. 44, no. 32, pp. 3976–3984, 2010.

[5] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, “Image inpainting for irregular holes using partial convolutions,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 85–100, 2018.

[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020.

[7] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401–4410, 2019.

[8] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851, 2020.

[9] A. Lugmayr, M. Danelljan, A. Romero, F. Yu, R. Timofte, and L. Van Gool, “Repaint: Inpainting using denoising diffusion probabilistic models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11461–11471, 2022.

[10] “Civil IoT Taiwan.” https://ci.taiwan.gov.tw/dsp/.

[11] “ERSSLE/ordinary_kriging.” https://github.com/ERSSLE/ordinary_kriging.

[12] J. Li, N. Wang, L. Zhang, B. Du, and D. Tao, “Recurrent feature reasoning for image inpainting,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7760–7768, 2020.

[13] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747, 2016.

[14] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” arXiv preprint arXiv:1511.07289, 2015.

[15] R. Suvorov, E. Logacheva, A. Mashikhin, A. Remizova, A. Ashukha, A. Silvestrov, N. Kong, H. Goka, K. Park, and V. Lempitsky, “Resolution-robust large mask inpainting with Fourier convolutions,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2149–2159, 2022.

[16] W. Chen, Y. Li, B. J. Reich, and Y. Sun, “DeepKriging: Spatially dependent deep neural networks for spatial prediction,” arXiv preprint arXiv:2007.11972, 2020.
Advisor: Hung-Hsuan Chen (陳弘軒)    Date of approval: 2023-07-25
