Master's and Doctoral Theses: Detailed Record for Thesis 965402028




Author: Yu-Shuo Liu (劉于碩)    Department: Computer Science and Information Engineering
Thesis title: The Study of Fusion and Defect Detection Techniques for Special-Spectral Images
(特殊頻譜影像的融合與瑕疵檢測技術研究)
Related theses
★ Video error concealment for large-area losses and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically loaded multiresolution terrain modeling for flight simulation systems
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation based on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. The author has agreed to make this electronic thesis openly accessible immediately.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): Special-spectral images are images formed by electromagnetic radiation outside the visible-light spectrum. They overcome the limits of unaided human vision and provide additional information for observing a scene and enhancing the features of interest. In this study we discuss two applications of special-spectral images: image fusion and defect detection. The two topics are described separately below.
For image fusion, images of the same scene captured by sensors of different spectra each carry different information, so the purpose of image fusion is to merge the information from several images of different spectra into a single image, making more information available in one image for convenient and rapid target detection and scene interpretation. With existing fusion methods, if the gray-level distributions of the input images differ, the fused image may end up with reduced contrast; if the input images contain considerable noise, the fused image may fail to retain the most salient information of the inputs. This study therefore proposes a region-based image fusion method for fusing visible-light and infrared images that improves the contrast of the fused image while preserving the salient features of the input images. Our method consists of the following five steps: 1. Decompose each input image with a bilateral filter into a detail layer and a base layer. 2. Segment the base layer into regions of roughly uniform gray level. 3. For each region of the base layer, compute the difference between its mean gray level and that of each neighboring region; a larger difference indicates stronger contrast, and this difference serves as the weight of the region's importance. The weights of all regions form a decision map. 4. Using the same decision map, fuse the detail layers and the base layers with separate fusion rules to obtain one fused detail layer and one fused base layer. 5. Combine the fused detail and base layers into a single fused image. In the experiments, our method is compared with the discrete wavelet transform, the multiscale directional bilateral filter, and the visual weight map extraction methods. The results show that, compared with the other three fusion methods, our method not only improves the entropy and weighted fusion quality index scores but also yields fused images with higher contrast that preserve the salient features of the input images.
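To make step 1 concrete, the following is a minimal sketch of the base/detail decomposition with a bilateral filter, written with OpenCV and NumPy; the function name, the filter parameters, and the placeholder file names are illustrative assumptions, not the settings used in the dissertation.

    import cv2
    import numpy as np

    def decompose(image, d=9, sigma_color=50, sigma_space=7):
        """Split a grayscale image into a base layer and a detail layer.

        The base layer is the bilateral-filtered image (edges preserved, fine
        texture smoothed); the detail layer is the residual, so that
        base + detail reproduces the input.  Parameter values are illustrative.
        """
        img = image.astype(np.float32)
        base = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
        detail = img - base
        return base, detail

    # Placeholder file names; any spatially registered visible/infrared pair works.
    visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
    infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
    base_v, detail_v = decompose(visible)
    base_i, detail_i = decompose(infrared)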
For defect detection, our inspection target is the solar cell. Defect inspection before cells leave the factory is essential. Solar cell defects fall into two main categories: intrinsic defects, introduced during material manufacturing, and extrinsic defects, introduced during cell processing. Among extrinsic defects, finger interruptions and micro cracks strongly degrade the power conversion efficiency and the durability of solar panels. These defects are too small to be observed directly with the naked eye, so electroluminescence (EL) images are used to make them visible. In EL images of multicrystalline silicon solar cells, automatic detection of finger interruptions is very difficult because of interference from intrinsic defects and the low contrast of the interruptions. In this study we propose a method for automatically detecting finger interruptions in EL images. Our method consists of the following five steps: 1. Delimit several regions of interest (ROIs) in the EL image. 2. Locate the regularly arranged vertical fingers within each ROI. 3. Scan each ROI horizontally and, for every pixel on a finger, take the gray-level variation with respect to the two neighboring fingers as its feature. 4. Using features extracted from both interrupted and non-interrupted fingers, apply spectral clustering so that the features are represented as points in a spectral embedding space, and split these points into two clusters with k-means: interrupted fingers and non-interrupted fingers. 5. In the testing stage, locate the fingers in the EL image, map the extracted features into the spectral embedding space, and classify them with nearest-neighbor classification. Compared with the Fourier image reconstruction method, our method finds finger interruptions more effectively and with fewer false detections, regardless of the locations of intrinsic defects or the contrast of the interruptions. Under low-contrast interruptions and interference from intrinsic defects, the average accuracy of interruption detection is 99.64%. The proposed approach thus effectively locates finger interruptions in solar cells.
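As an illustration of step 3, the sketch below computes, for every pixel on a detected finger, a gray-level variation feature relative to the background between it and its two neighboring fingers; the function name, the mid-gap sampling, and the half_gap parameter are assumptions made for illustration, not the dissertation's exact feature definition.

    import numpy as np

    def finger_features(roi, finger_cols, half_gap=3):
        """Per-finger feature vectors for one ROI of an EL image.

        roi         : 2-D gray-level array (rows x columns).
        finger_cols : column indices of the detected vertical fingers.
        Returns an array of shape (num_fingers, roi_height); each row holds,
        for every image row, the difference between the finger pixel and the
        mean of the background pixels midway toward the two neighboring fingers.
        """
        roi = roi.astype(np.float32)
        features = []
        for k, col in enumerate(finger_cols):
            left = finger_cols[k - 1] if k > 0 else max(col - 2 * half_gap, 0)
            right = finger_cols[k + 1] if k + 1 < len(finger_cols) else min(col + 2 * half_gap, roi.shape[1] - 1)
            mid_left = (left + col) // 2      # background column toward the left neighbor
            mid_right = (col + right) // 2    # background column toward the right neighbor
            background = 0.5 * (roi[:, mid_left] + roi[:, mid_right])
            features.append(roi[:, col] - background)
        return np.stack(features)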
Abstract (English): A special-spectral image is an image formed by electromagnetic radiation outside the visible-light spectrum. Special-spectral images can be used to detect and enhance the features of target objects in environments that are invisible to the naked eye. In this dissertation, we study two application issues of special-spectral images: image fusion and defect detection. The two research issues are presented as follows.
The objective of image fusion is to obtain a fused image that contains the most significant information from all input images captured from the same scene by different sensors. The fused image may have reduced contrast if the gray-level distributions of the two input images differ; it may also miss the most significant information if the input images contain considerable noise. Therefore, in this study we propose a region-based image fusion method that fuses spatially registered visible and infrared images while improving contrast and preserving the significant features of the input images. The proposed method consists of the following five steps: (1) The input images are decomposed into base layers and detail layers using a bilateral filter. (2) The base layers of the input images are segmented into numerous regions of uniform gray level. (3) A region-based decision map is generated to represent the importance of each region; it is obtained by calculating the weight of each region from the gray-level difference between that region and its neighboring regions in the base layers. (4) The detail layers and base layers are fused separately, according to different fusion rules, using the same decision map. (5) The fused base and detail layers are combined to obtain the final fused image. In the experiments, we compare the proposed method with the discrete wavelet transform, multiscale directional bilateral filter, and visual weight map extraction methods. The results show that the proposed method achieves the highest entropy and weighted fusion quality index among the compared methods; beyond the quantitative comparison, its fused images also show the best gray-level contrast and feature preservation.
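Continuing the decomposition sketch given after the Chinese abstract, the following illustrates steps (3) to (5) with a simplified region-based decision map and fusion rules (a per-region contrast weight for the base layers and a maximum-absolute rule for the detail layers); these particular rules and names are stand-ins chosen for illustration and are not claimed to match the dissertation's exact formulation.

    import numpy as np

    def fuse_with_decision_map(base_v, base_i, detail_v, detail_i, labels):
        """Fuse decomposed visible/infrared layers with a region-based decision map.

        labels : integer segmentation map of the base layers (one label per region).
        For each region, the weight of an input image is the absolute difference
        between the region's mean gray level and the mean gray level of the rest
        of the image; the image with the stronger local contrast dominates there.
        """
        decision = np.zeros(base_v.shape, dtype=np.float32)
        for lab in np.unique(labels):
            mask = labels == lab
            outside = ~mask
            w_v = abs(base_v[mask].mean() - base_v[outside].mean())
            w_i = abs(base_i[mask].mean() - base_i[outside].mean())
            decision[mask] = w_v / (w_v + w_i + 1e-6)   # weight of the visible image
        fused_base = decision * base_v + (1.0 - decision) * base_i
        # Detail layers: keep the coefficient with the larger magnitude.
        fused_detail = np.where(np.abs(detail_v) >= np.abs(detail_i), detail_v, detail_i)
        return fused_base + fused_detail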
On the topic of defect detection, we address defect detection in the production of solar cells. Defects in solar cells should be identified at every production stage. The defects can be divided into two categories, intrinsic and extrinsic, which are caused by the material production and the processing of solar cells, respectively. Extrinsic defects include micro cracks and finger interruptions, which substantially reduce the lifetime and efficiency of solar cells. Extrinsic defects are not visible to the naked eye but can be inspected using electroluminescence (EL) images. Automatic extrinsic defect detection in EL images of multicrystalline silicon solar cells is typically difficult because of disturbances caused by intrinsic defects and the low contrast of fingers. In this study, we propose an automatic method to detect finger interruptions in EL images of multicrystalline solar cells. The proposed method consists of the following five steps: (1) Several regions of interest (ROIs) are clipped from an EL image for defect detection. (2) The regularly arranged vertical fingers in each ROI are located as candidates for detection. (3) Each ROI is scanned horizontally, and the gray-level variation of every finger pixel with respect to the two adjacent fingers is extracted. (4) A set of features extracted from interrupted and non-interrupted fingers is recorded; these features are represented as points in a spectral embedding space produced by a spectral clustering method and are classified into two clusters using k-means clustering. (5) During testing, we first detect the positions of the fingers in an EL image and then obtain features from each finger; the features of each finger, combined with the known features, are represented as points in the spectral embedding space and classified using nearest-neighbor classification. The proposed method was compared with the Fourier image reconstruction method. The experimental results reveal that the proposed method detects finger interruptions more effectively than the compared methods, regardless of the locations of intrinsic defects or the contrast of the fingers. In cases of low-contrast interrupted fingers and interference from intrinsic defects, the average accuracy is 99.64%. The results show that the proposed method effectively detects finger interruptions in a set of EL images of solar cells.
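For steps (4) and (5), the sketch below uses scikit-learn's SpectralEmbedding, KMeans, and a one-nearest-neighbor classifier as stand-ins for the clustering and classification pipeline, operating on per-finger feature vectors such as those produced by the earlier finger_features sketch; the new fingers are embedded together with the known fingers because the spectral embedding used here has no out-of-sample mapping. The class and parameter choices are assumptions for illustration, not the dissertation's implementation.

    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def split_training_fingers(train_features, n_components=2):
        """Training: embed known finger features and split them into two clusters
        (interrupted vs. non-interrupted) with k-means."""
        points = SpectralEmbedding(n_components=n_components).fit_transform(train_features)
        return KMeans(n_clusters=2, n_init=10).fit_predict(points)

    def classify_fingers(known_features, known_labels, test_features, n_components=2):
        """Testing: embed known and new finger features together, then label each
        new finger by its nearest labelled neighbor in the embedding space."""
        all_features = np.vstack([known_features, test_features])
        points = SpectralEmbedding(n_components=n_components).fit_transform(all_features)
        n_known = len(known_features)
        nn = KNeighborsClassifier(n_neighbors=1).fit(points[:n_known], known_labels)
        return nn.predict(points[n_known:])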
Keywords (Chinese): ★ image fusion
★ finger interruption
★ multicrystalline silicon solar cell
★ defect detection
Keywords (English): ★ image fusion
★ finger interruption
★ multicrystalline silicon solar cell
★ defect detection
Table of Contents
Abstract (in Chinese) i
Abstract iii
Acknowledgements vii
Contents viii
List of Figures x
List of Tables xviii
Chapter 1 Introduction 1
1.1. Motivation 1
1.2. Overview of this study 3
1.2.1. Image fusion with contrast improving and feature preserving 3
1.2.2. Automatic finger interruption detection in electroluminescence images of multicrystalline solar cells 4
1.3. Organization of this dissertation 4
Chapter 2 Related works 5
2.1. Region-based image fusion 5
2.2. Defect detection in multicrystalline solar cells 23
Chapter 3 Image fusion with contrast improving and feature preserving 32
3.1. Bilateral filters 32
3.2. The proposed fusion method 34
3.2.1. Image decomposition 36
3.2.2. Image segmentation 37
3.2.3. Integrated region map 38
3.2.4. Decision map 39
3.2.5. Fusion rules 44
3.2.6. Image construction 47
3.3. Experiments 48
Chapter 4 Automatic finger interruption detection in electroluminescence images of multicrystalline solar cells 54
4.1. Spectral clustering algorithm 54
4.2. The proposed method 55
4.2.1. ROI location 56
4.2.2. Finger detection 58
4.2.3. Feature extraction 59
4.2.4. Finger classification 61
4.3. Experiments 62
Chapter 5 Conclusions 74
5.1. Image fusion with contrast improving and feature preserving 74
5.2. Automatic finger interruption detection in electroluminescence images of multicrystalline solar cells 76
References 77
Advisor: Din-Chang Tseng (曾定章)    Date of approval: 2015-7-20
