Abstract: | This study presents a design for a long-range telescope reconnaissance optical system consisting of one monocentric objective lens and 2,132 sets of identical eyepieces and digital camera lenses. The design approach uses the optical design software CODE V to design the monocentric objective lens, the eyepieces, and the camera lenses individually, after which they are assembled: the monocentric objective lens and an eyepiece are first combined into a Kepler telescope system, which is then integrated with a camera lens to complete one set of the long-range telescope reconnaissance optical system.
The system comprises one larger monocentric spherical objective lens and a set of identical eyepieces paired with digital camera lenses; each eyepiece–camera pair is referred to as a "microcamera." The proposed system therefore consists of one monocentric objective lens and 2,132 microcameras. The visible-light system is required to detect a J-20 fighter aircraft, with a target size of 4.69 m × 21.2 m, at a distance of 125 kilometers. The detection band spans 400 nm to 700 nm, and the overall field of view is 120° horizontal × 72° vertical, covered by the 2,132 cameras for a total pixel count of 30.7 gigapixels. The selected CMOS sensor is the Onsemi MT9F002, with an active pixel array of 4,384 (H) × 3,288 (V), a pixel size of 1.4 μm × 1.4 μm, and a resolution of about 14 megapixels. Based on the Johnson criteria, detecting the J-20 at 125 kilometers with 95% accuracy requires 2 line pairs across the target, from which a total system focal length of 149.25 mm is derived. Each set of the system then has a field of view of 2.352° horizontal × 1.764° vertical, and the angular resolution per pixel, the instantaneous field of view (IFOV), is 9.363 μrad (about 0.032 arc minutes). At 125 kilometers the J-20 subtends approximately 169.6 μrad, corresponding to about 18.11 pixels on the CMOS sensor; the smaller the IFOV, the finer the detail that can be resolved. Compared with the human eye's resolution of 1 arc minute, the visible-light system resolves about 31 times finer detail. A comparison of microcamera layouts showed that a hexagonal arrangement offers 1.1547 times the space utilization of a square arrangement.
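The detection geometry quoted above can be re-derived from the abstract's own inputs (2 line pairs = 4 pixels across the 4.69 m critical dimension at 125 km, 1.4 μm pixel pitch); a minimal sketch, with small last-digit differences from the abstract attributable to rounding:

```python
import math

# All input values are quoted from the abstract; this sketch only re-derives
# the stated focal length, IFOV, and pixel coverage.
pixel = 1.4e-6          # m, MT9F002 pixel pitch
target_w = 4.69         # m, J-20 critical dimension used for detection
target_len = 21.2       # m, J-20 length
distance = 125_000.0    # m

# Johnson criteria: detection at 95% accuracy needs 2 line pairs = 4 pixels
# across the critical dimension, which fixes the required IFOV and focal length.
ifov = target_w / (4 * distance)              # rad per pixel, ≈ 9.38 µrad
focal = pixel / ifov                          # m
print(f"focal length = {focal*1e3:.2f} mm")   # ≈ 149.25 mm

angle = target_len / distance                 # rad subtended by the J-20
print(f"target subtense = {angle*1e6:.1f} µrad, "
      f"≈ {angle/ifov:.2f} pixels")           # ≈ 169.6 µrad, ≈ 18.1 px
```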
The hexagonal arrangement was therefore adopted: it shortens the row-to-row spacing, yielding a more compact and space-efficient arrangement on the sphere. To simulate the conversion efficiency of the long-range visible-light telescope reconnaissance optical system under real-world conditions, a model was built in the optical software LightTools. The maximum solar illuminance, measured with an illuminometer, was 166,100 lux (166,100 lm/m^2). With a first-surface diameter of 72.447 mm for the monocentric spherical objective lens, the incident surface area is 1.648 × 10^(-2) m^2, giving a total incident luminous flux of 2,738.782 lm. The flux actually received by the sensors is 998.764 lm, with an average deviation of 0.232, yielding a conversion efficiency of 36.467%. The design results of this thesis are compared with the AWARE series of gigapixel cameras, which also use a monocentric multi-scale optical design but serve a different purpose. Both use the same CMOS sensor, but because the objective here is to detect a J-20 fighter aircraft at 125 kilometers, the system has a longer focal length, so each CMOS sensor covers a smaller angle and the IFOV resolves finer detail. Since the detection task covers both sky and sea, the overall field of view must be wide; with each camera covering only a small angle, more cameras are needed, and the total pixel count grows accordingly. Overall, the optical system designed in this study outperforms the previously designed monocentric multi-scale optical systems of the AWARE series. |
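The packing-gain and conversion-efficiency figures quoted above can also be checked numerically. A minimal sketch, assuming the hexagonal gain is the standard circle-packing ratio 2/√3 and that the quoted incident area corresponds numerically to πd² for the stated first-surface diameter:

```python
import math

# Hexagonal vs. square packing of identical circles: the row pitch shrinks
# by sqrt(3)/2, so areal density improves by a factor of 2/sqrt(3).
hex_gain = 2 / math.sqrt(3)
print(f"hex/square packing gain = {hex_gain:.4f}")   # 1.1547

d = 72.447e-3                # m, first-surface diameter of the objective
area = math.pi * d**2        # m^2; numerically matches the quoted 1.648e-2
illum = 166_100.0            # lux = lm/m^2, measured peak solar illuminance
flux_in = illum * area       # lm entering the system, ≈ 2,738.78 lm
flux_out = 998.764           # lm reaching the sensors (LightTools result)
print(f"flux in = {flux_in:.1f} lm, "
      f"efficiency = {100 * flux_out / flux_in:.2f} %")  # ≈ 36.47 %
```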