Abstract (English) |
Bathymetric maps are crucial for various applications, such as ocean-related research and navigation safety. However, retrieving accurate water depths remains a challenging task. Traditionally, water depth is measured by shipborne sonar systems, but shallow waters are difficult for ships to access, and the method is also constrained by its limited swath. Advanced airborne LiDAR systems can cover shallow water more widely than sonar, but they incur high costs and must contend with air-traffic restrictions. Recently, the spaceborne LiDAR sensor onboard ICESat-2 has also provided water depth measurements, but its spatial resolution is relatively low. Deriving water depth from optical satellite imagery therefore becomes a potential alternative: satellite images offer periodic, wide coverage at lower cost, overcoming the limitations of the other methods.
Deriving water depth from satellite imagery is not straightforward, owing to the complicated nonlinear relationships among factors such as water depth, water quality, and seafloor type. Given this complexity, machine learning (ML)-based models have demonstrated effective capabilities. In this study, three models, a Neural Network (NN), an Adjacent-Pixel Multilayer Perceptron (APMLP), and a Convolutional Neural Network (CNN), were adopted to estimate water depth in the Dongsha Atoll. Two datasets were built from Sentinel-2 and PlanetScope satellite imagery, with ground truth obtained from LiDAR measurements. The three models were then trained on each dataset separately and their results analyzed. Additionally, we investigated the impact of the amount of training data and the number of hidden layers on model performance.
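The per-pixel mapping from spectral bands to depth that the NN learns can be sketched as a small forward pass. This is a minimal illustration only: the band count, layer sizes, and random weights below are assumptions for demonstration, not the configuration used in the thesis.

```python
import numpy as np

def mlp_depth_forward(bands, W1, b1, W2, b2):
    """One-hidden-layer MLP: per-pixel band reflectances -> estimated depth (m).

    bands: (n_pixels, n_bands) array of surface reflectances.
    """
    h = np.maximum(0.0, bands @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                    # linear output: one depth per pixel

rng = np.random.default_rng(0)
n_bands, n_hidden = 4, 8  # e.g. blue/green/red/NIR; sizes are illustrative
W1 = rng.normal(size=(n_bands, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1)) * 0.1
b2 = np.zeros(1)

pixels = rng.uniform(0.0, 0.2, size=(5, n_bands))  # hypothetical reflectances
depths = mlp_depth_forward(pixels, W1, b1, W2, b2)
print(depths.shape)  # (5, 1): one depth estimate per pixel
```

In practice the weights would be fitted against LiDAR ground-truth depths; an APMLP additionally feeds in the reflectances of neighboring pixels, and a CNN replaces the dense layers with convolutions over an image patch.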
The experimental results showed that the NN had the largest errors among the three models, while APMLP outperformed the others when configured with multiple hidden layers (MAE = 0.78 m; RMSE = 1.57 m). Furthermore, this study examined feature importance to assess the influence of each spectral band on the trained models. Regardless of the satellite imagery used, all models identified the green band as the most important feature for depth retrieval. This behavior is consistent with the optical properties of shallow seawater: although blue light penetrates water most strongly, it is easily scattered by the atmosphere, so in practice the green band reaches the sea bottom most readily. This consistency underscores the reliability and accuracy of the models′ estimations. |
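The reported error metrics, MAE (mean absolute error) and RMSE (root mean square error), can be computed as follows; the depth values here are hypothetical examples, not the study's data:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of depth errors, in meters."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error: penalizes large depth errors more heavily."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

truth = np.array([2.0, 5.0, 10.0])  # LiDAR reference depths (m), illustrative
pred = np.array([2.5, 4.0, 11.0])   # model estimates (m), illustrative
print(round(mae(truth, pred), 3))   # 0.833
print(round(rmse(truth, pred), 3))  # 0.866
```

RMSE is always at least as large as MAE; a gap between the two (as in the reported 0.78 m vs. 1.57 m) indicates that a minority of pixels carry disproportionately large errors.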