Abstract: In high-quality color image enhancement, it is important to keep the hue component unchanged while the intensity or saturation component is enhanced, so perceptual color models such as HSI, HSL, and HSV are commonly used. The Hue-Saturation-Intensity (HSI) model is a popular choice on which many color applications are based; however, after the intensity and saturation components are modified in HSI space, the transformation back to the RGB model frequently produces out-of-gamut colors. Moreover, the saturation component simply increases or decreases along with the intensity component, regardless of the saturation range that is actually attainable. The Hue-Saturation-Value (HSV) model is widely used for color image enhancement and segmentation, but each constant-value surface of the HSV space consists of squares parallel to the three faces of the RGB cube that meet at white, and its area grows as the value increases. The value histogram of an image is therefore concentrated in the high-value portion, and because the value of a pixel grows with its saturation at a fixed intensity, two pixels with similar hue and intensity but clearly different saturations are pushed far apart by value enhancement such as histogram equalization or histogram stretching.

In this paper we propose corrected formulas for the color transformation between RGB and HSI, called the exact HSI (eHSI) color model, which resolve the out-of-gamut problem in color image enhancement and adapt the saturation automatically; that is, the saturation component is increased or reduced according to the attainable maximum saturation range. In the experiments we demonstrate how, based on the proposed eHSI model, a little contrast can be sacrificed to improve the saturation of an image. We also propose an improved HSV color model, iHSV, which preserves the Gaussian-like distribution that the intensity histogram exhibits in the RGB space, because the attainable saturation range is small at both ends of the value axis and large in its central region. We again demonstrate how the saturation of an image can be improved by sacrificing a little contrast in the iHSV model.
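To make the out-of-gamut behaviour concrete, the sketch below uses the standard textbook HSI formulas, not the proposed eHSI model: raising the saturation and intensity of a pixel near the bright end of the RGB cube and converting back to RGB pushes the red channel above 1. The bisection helper only illustrates the idea of limiting saturation to the attainable maximum at a given hue and intensity; the eHSI transformation formulas themselves are not reproduced here.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard RGB -> HSI conversion (textbook form); components in [0, 1]."""
    r, g, b = rgb
    i = (r + g + b) / 3.0
    s = 1.0 - min(r, g, b) / i if i > 0 else 0.0
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = theta if b <= g else 2.0 * np.pi - theta
    return h, s, i

def hsi_to_rgb(h, s, i):
    """Standard HSI -> RGB conversion; the result can leave [0, 1] after edits."""
    h = h % (2.0 * np.pi)
    sector = int(h // (2.0 * np.pi / 3.0))          # 0: RG, 1: GB, 2: BR sector
    h -= sector * (2.0 * np.pi / 3.0)
    # Textbook sector formulas: one channel at i*(1-s), one from the cosine
    # term, and the third chosen so the three channels average to i.
    x = i * (1.0 - s)
    y = i * (1.0 + s * np.cos(h) / np.cos(np.pi / 3.0 - h))
    z = 3.0 * i - (x + y)
    return [(y, z, x), (x, y, z), (z, x, y)][sector]

def max_in_gamut_saturation(h, i, steps=30):
    """Bisection for the largest saturation whose RGB result stays in [0, 1].
    This only illustrates adapting saturation to the attainable range; it is
    not the eHSI transformation itself."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if all(0.0 <= c <= 1.0 for c in hsi_to_rgb(h, mid, i)):
            lo = mid
        else:
            hi = mid
    return lo

# A moderately saturated pixel near the bright end of the RGB cube.
h, s, i = rgb_to_hsi((0.9, 0.6, 0.3))

# Naive enhancement in HSI: raise saturation and intensity independently.
s2, i2 = min(1.0, 1.4 * s), min(1.0, 1.3 * i)
rgb = hsi_to_rgb(h, s2, i2)
print(rgb, "out of gamut:", any(c < 0.0 or c > 1.0 for c in rgb))

# The saturation actually attainable at this hue and intensity is much smaller.
print("attainable saturation:", max_in_gamut_saturation(h, i2))
```

The example shows why clipping the RGB result after the fact distorts hue, whereas limiting the saturation to its attainable range, as the eHSI model does analytically, keeps the converted pixel inside the cube.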
In practice, we can trade intensity enhancement against saturation enhancement, according to the characteristics of each image, to obtain the best overall quality. Furthermore, partial cloud cover is a serious problem in optical remote sensing images; it can largely be removed by mosaicking the cloud-free areas of multi-temporal images. We propose a multidisciplinary method that generates cloud-free mosaic images from multi-temporal satellite images in three steps. First, all original images are enhanced in intensity based on the proposed eHSI color model. Second, intensity thresholding combined with a difference comparison between the temporal images is used to extract all cloud-covered regions; the image with the least thin-cloud cover is chosen as the base image and divided into grid zones, and the thin-cloud and cloud-shadow zones are located among the eight neighbors of each thick-cloud zone according to the relative position and elevation angle of the sun, so that broken clouds and the shadows cast by oblique sunlight are covered as well. Finally, the cloud and cloud-shadow zones of the base image are replaced by cloud-free zones at the same locations in the other temporal images, and a pyramid-based multi-scale fusion method is used to generate the cloud-free satellite image. With the proposed complete approach, the fused images not only recover the cloud-covered areas but also have proper brightness and saturation, even when the source images vary in brightness.
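As a rough illustration of the detection and replacement steps, the sketch below flags bright grid zones as cloud, grows each flag to its eight neighbouring zones, and blends a cloud-free donor acquisition into the base image with a Laplacian-pyramid merge. It is a minimal sketch under stated assumptions only: the grid size, threshold, coverage ratio, and pyramid depth are placeholder values, the temporal difference comparison and the sun-elevation-based shadow search are omitted, and NumPy/OpenCV resampling stands in for whatever implementation the thesis uses.

```python
import numpy as np
import cv2  # used only for pyrDown / pyrUp / resize / dilate

# Illustrative parameters, not the settings used in the thesis.
GRID = 64        # grid-zone size in pixels
CLOUD_T = 0.85   # global intensity threshold for thick cloud
LEVELS = 4       # pyramid depth

def cloud_zone_mask(intensity, grid=GRID, thresh=CLOUD_T):
    """Flag grid zones containing bright, cloud-like pixels, then grow each
    flag to its eight neighbouring zones so broken cloud and nearby cloud
    shadow are replaced as well.  `intensity` is a float image in [0, 1]."""
    h, w = intensity.shape
    zones = np.zeros((h // grid, w // grid), np.uint8)
    for y in range(zones.shape[0]):
        for x in range(zones.shape[1]):
            block = intensity[y * grid:(y + 1) * grid, x * grid:(x + 1) * grid]
            zones[y, x] = (block > thresh).mean() > 0.05  # illustrative ratio
    zones = cv2.dilate(zones, np.ones((3, 3), np.uint8))   # 8-neighbour growth
    return cv2.resize(zones.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_NEAREST)

def laplacian_pyramid(img, levels=LEVELS):
    """Band-pass (Laplacian) levels plus the coarsest Gaussian level."""
    pyr, gauss = [], img
    for _ in range(levels):
        down = cv2.pyrDown(gauss)
        pyr.append(gauss - cv2.pyrUp(down, dstsize=gauss.shape[1::-1]))
        gauss = down
    return pyr + [gauss]

def pyramid_blend(base, donor, mask, levels=LEVELS):
    """Replace the masked zones of `base` with `donor`, merging every pyramid
    level separately so the seams between grid zones stay smooth."""
    lap_b = laplacian_pyramid(base, levels)
    lap_d = laplacian_pyramid(donor, levels)
    out = None
    for lb, ld in zip(reversed(lap_b), reversed(lap_d)):
        m = cv2.resize(mask, lb.shape[1::-1], interpolation=cv2.INTER_LINEAR)
        if lb.ndim == 3:
            m = m[:, :, None]
        layer = lb * (1.0 - m) + ld * m
        if out is None:
            out = layer
        else:
            out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
    return out

# Sketch of use: `base` has the least cloud cover, `donor` is another
# co-registered acquisition of the same scene; both are float32 in [0, 1].
# mask = cloud_zone_mask(base_intensity)
# cloud_free = pyramid_blend(base, donor, mask)
```

Blending each pyramid level separately, rather than pasting zones directly, keeps the grid-zone boundaries from appearing as sharp brightness steps when the two acquisitions differ in illumination.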