DC field / Value / Language

dc.contributor: 資訊工程學系 (Department of Computer Science and Information Engineering) [zh_TW]
dc.creator: 駱鍇頡 [zh_TW]
dc.creator: Kai-Jie Lo [en_US]
dc.date.accessioned: 2018-07-12T07:39:07Z
dc.date.available: 2018-07-12T07:39:07Z
dc.date.issued: 2018
dc.identifier.uri: http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=105522008
dc.contributor.department: 資訊工程學系 (Department of Computer Science and Information Engineering) [zh_TW]
dc.description: 國立中央大學 [zh_TW]
dc.description: National Central University [en_US]

dc.description.abstract [zh_TW; translated]:
Astronomical observational data are enormous; data accumulated over long-term observations often reach the petabyte (PB) scale or beyond. This not only burdens astronomers during analysis but also makes the analysis itself extremely time-consuming. Although computer specifications keep improving, an ordinary computer still cannot process all of the data on its own, because it may run into insufficient memory or disk space and lengthy computation. This thesis therefore proposes a method based on distributed computing algorithms for processing astronomical data, with results that conform to the data's sequential (time-ordered) property, so that the data can be processed efficiently and accurately. The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) serves as the experimental data source.

The Hadoop Distributed File System is used as the storage layer, chosen for its scalability and reliability. Apache Spark serves as the distributed computing framework, so that distributed algorithms can search for asteroid trajectories in the sky more efficiently. To integrate Spark more tightly with the Hadoop system, Hadoop YARN is used as the cluster resource manager.

In the preprocessing stage, k-d tree range searches are performed on the raw data to remove noise. Next, a distributed Hough transform algorithm finds candidate trajectory lines, which serve as the condition for the first grouping. Based on the first grouping result, detections are paired in time order to find pairs of points that may form trajectory segments, and each pair's velocity and direction are computed as the condition for the second grouping. The second grouping result is then fed into an adapted Floyd-Warshall computation of the transitive closure to obtain the maximal patterns of the trajectories. Finally, before the trajectories are output, the similarity of the maximal patterns is checked so that duplicate trajectories produced by the Hough transform's discrete sampling can be removed.

dc.description.abstract [en_US]:
The amount of astronomical observational data is growing rapidly; the data accumulated by long-term surveys now reaches the petabyte (PB) scale. This burdens astronomers during analysis and makes the analysis itself extremely time-consuming. Although computer hardware continues to improve, an ordinary personal computer still cannot process the full dataset on its own, because it runs into insufficient memory or disk space and prohibitive computation time. This thesis therefore studies the processing of astronomical observational data with distributed computing, with results that respect the data's sequential (time-ordered) property; the data come from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS). The Hadoop Distributed File System (HDFS) is used for storage, chosen for its scalability and reliability. Apache Spark is adopted as the distributed computing framework, so that distributed algorithms can be used effectively to explore asteroid trajectories; Hadoop YARN serves as the cluster resource manager.

The approach consists of seven stages. First, it runs range queries on k-dimensional (k-d) trees to filter noise. Second, it runs a distributed Hough transform algorithm to find candidate trajectory lines for grouping. Third, it filters detections by the standard deviation of their magnitudes. Fourth, it pairs detections in time order and computes each pair's velocity and direction as conditions for the next grouping stage. Fifth, it groups the pairs by the Hough transform's rho and theta and by velocity and direction. Sixth, it applies an adapted Floyd-Warshall algorithm to compute the transitive closure and obtain maximal patterns. Finally, it deduplicates the asteroid trajectories before outputting the result.

dc.subject: 大數據 (Big Data) [zh_TW]
dc.subject: 分散式運算 (Distributed Computing) [zh_TW]
dc.subject: 小行星軌跡 (Asteroid Trajectory) [zh_TW]
dc.subject: Hough Transform [zh_TW]
dc.subject: Transitive Closure [zh_TW]
dc.subject: Big Data [en_US]
dc.subject: Distributed Computing [en_US]
dc.subject: Asteroid Trajectory [en_US]
dc.subject: Hough Transform [en_US]
dc.subject: Transitive Closure [en_US]
dc.title: 基於分散式運算架構探索時序性小行星軌跡 [zh_TW]
dc.language.iso: zh-TW
dc.title: Exploration of Sequential Asteroid Trajectories with a Distributed Computing System [en_US]
dc.type: 博碩士論文 (master's/doctoral thesis) [zh_TW]
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
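The noise-filtering stage described in the abstract uses k-d tree range searches. The following is a minimal single-machine sketch of that idea, assuming a density-based noise criterion: detections with too few neighbours inside a small box are dropped. The `radius` and `min_neighbors` parameters are illustrative assumptions, not the thesis's actual values.

```python
# Minimal 2-D k-d tree with axis-aligned range search, used to count the
# neighbours of each detection; sparse detections are treated as noise.
# NOTE: the noise criterion and its parameters are illustrative assumptions.

def build(points, depth=0):
    """Recursively build a k-d tree; each node is (point, left, right, axis)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1),
            axis)

def range_search(node, lo, hi, found):
    """Collect every point p with lo[i] <= p[i] <= hi[i] on both axes."""
    if node is None:
        return
    point, left, right, axis = node
    if all(lo[i] <= point[i] <= hi[i] for i in range(2)):
        found.append(point)
    if lo[axis] <= point[axis]:        # range may overlap the left subtree
        range_search(left, lo, hi, found)
    if point[axis] <= hi[axis]:        # range may overlap the right subtree
        range_search(right, lo, hi, found)

def filter_noise(detections, radius=1.0, min_neighbors=2):
    """Keep detections that have at least `min_neighbors` other detections
    inside a box of half-width `radius` (the point itself is excluded)."""
    tree = build(list(detections))
    kept = []
    for x, y in detections:
        found = []
        range_search(tree, (x - radius, y - radius),
                     (x + radius, y + radius), found)
        if len(found) - 1 >= min_neighbors:
            kept.append((x, y))
    return kept
```

For example, `filter_noise([(0, 0), (0.5, 0.5), (1, 1), (10, 10)])` discards the isolated detection at `(10, 10)` and keeps the dense cluster.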
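The first grouping stage rests on Hough transform voting in (rho, theta) space. A minimal non-distributed sketch of the voting step follows; the discretisation steps (`n_theta`, `rho_step`) are illustrative assumptions, and the thesis's Spark-distributed version is not reproduced here.

```python
import math
from collections import defaultdict

def hough_vote(points, n_theta=180, rho_step=1.0):
    """Accumulate votes in (rho, theta) space: each point votes for every
    discretised line rho = x*cos(theta) + y*sin(theta) through it.
    The bin sizes are illustrative assumptions."""
    acc = defaultdict(list)
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)].append((x, y))
    return acc

def best_line(points, **kw):
    """Return the (rho_bin, theta_bin) cell with the most supporting points."""
    acc = hough_vote(points, **kw)
    return max(acc.items(), key=lambda kv: len(kv[1]))
```

Note that several adjacent (rho, theta) cells can collect the same collinear points; this is exactly the discrete-sampling duplication that the final deduplication stage of the thesis removes.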
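The pairing stage matches detections in time order and derives each pair's velocity and direction, which then become keys for the second grouping. A minimal sketch, assuming each detection is a `(t, x, y)` tuple and assuming a hypothetical `max_dt` cutoff on the time gap between paired detections:

```python
import math

def pair_detections(detections, max_dt=2.0):
    """Pair detections in time order; each pair's (velocity, direction)
    serves as a condition for the second grouping stage.
    The `max_dt` cutoff is an illustrative assumption."""
    dets = sorted(detections)                 # sort by observation time
    pairs = []
    for i in range(len(dets)):
        for j in range(i + 1, len(dets)):
            t1, x1, y1 = dets[i]
            t2, x2, y2 = dets[j]
            dt = t2 - t1
            if dt == 0:
                continue                      # simultaneous: not a motion pair
            if dt > max_dt:
                break                         # later detections are even farther in time
            dx, dy = x2 - x1, y2 - y1
            velocity = math.hypot(dx, dy) / dt            # sky-plane speed
            direction = math.degrees(math.atan2(dy, dx))  # heading in degrees
            pairs.append(((dets[i], dets[j]), velocity, direction))
    return pairs
```

Pairs whose velocity and direction agree (within tolerance) are candidates for belonging to the same asteroid trajectory.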
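The sixth stage chains compatible pairs into full trajectories via transitive closure. Below is a minimal sketch using the standard boolean Floyd-Warshall (Warshall) closure; the thesis's adapted, distributed variant is not reproduced here, and the "maximal pattern" extraction shown (start a chain at any detection with no incoming edge) is an illustrative simplification.

```python
def transitive_closure(n, edges):
    """Boolean Floyd-Warshall: reach[i][j] is True when detection j is
    reachable from detection i through a chain of compatible pairs."""
    reach = [[False] * n for _ in range(n)]
    for i, j in edges:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

def maximal_patterns(n, edges):
    """Collect each full detection chain, starting from detections that
    have no predecessor (a simplified notion of 'maximal')."""
    reach = transitive_closure(n, edges)
    has_pred = [any(reach[i][j] for i in range(n)) for j in range(n)]
    patterns = []
    for i in range(n):
        if not has_pred[i] and any(reach[i]):
            patterns.append([i] + [j for j in range(n) if reach[i][j]])
    return patterns
```

For instance, pair edges `(0, 1)`, `(1, 2)`, `(3, 4)` over five detections yield the two maximal chains `[0, 1, 2]` and `[3, 4]`.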