Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77522


Title: Exploration of Sequential Asteroid Trajectories with a Distributed Computing System
Author: Lo, Kai-Jie (駱鍇頡)
Contributor: Department of Computer Science and Information Engineering
Keywords: Big Data; Distributed Computing; Asteroid Trajectory; Hough Transform; Transitive Closure
Date: 2018-07-12
Upload date: 2018-08-31 14:46:54 (UTC+8)
Publisher: National Central University
Abstract: Astronomical surveys produce enormous volumes of observational data; detections accumulated over long observation periods can reach the petabyte (PB) scale. This not only burdens astronomers during analysis but also makes the processing itself extraordinarily time-consuming. Although computer hardware keeps improving, an ordinary computer still cannot process all of the data on its own, since it may exhaust its memory or disk space and the computation takes too long. This thesis therefore proposes processing astronomical data with distributed algorithms while preserving the sequential (time-ordered) property of the results, so that the data can be handled both efficiently and precisely. The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) serves as the experimental data source.
The Hadoop Distributed File System (HDFS) is used as the storage layer, chosen for its scalability and reliability. Apache Spark serves as the distributed computing framework, allowing distributed algorithms to search the observations for asteroid trajectories efficiently, and Hadoop YARN acts as the cluster resource manager so that Spark integrates tightly with the Hadoop system.
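As a rough illustration of this setup, the sketch below starts a Spark session on YARN and reads detection records from HDFS. The HDFS path and the CSV schema are hypothetical placeholders, not details taken from the thesis.

```python
# Minimal PySpark sketch: Spark on YARN with HDFS as the storage layer.
# The HDFS path and column layout below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("asteroid-trajectory-exploration")
    .master("yarn")  # Hadoop YARN as the cluster resource manager
    .getOrCreate()
)

# Load Pan-STARRS-style detections (e.g., ra, dec, epoch, magnitude).
detections = spark.read.csv(
    "hdfs:///data/panstarrs/detections.csv",
    header=True,
    inferSchema=True,
)
detections.show(5)
```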
The approach is split into seven stages. First, it runs range queries on a k-dimensional (k-d) tree to filter noise from the raw detections. Second, it applies a distributed Hough transform to find candidate lines, which serve as the first grouping condition. Third, it filters detections by the standard deviation of their magnitudes. Fourth, it pairs detections two at a time under the sequential (time-ordered) property and computes each pair's velocity and direction as conditions for the next grouping stage. Fifth, it groups pairs by the Hough transform's rho and theta together with velocity and direction. Sixth, it runs an adapted Floyd-Warshall algorithm to compute the transitive closure and obtain maximal patterns, i.e., complete trajectories. Finally, before output, it compares the similarity of the maximal patterns and removes duplicate trajectories introduced by the Hough transform's discretized sampling.
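The following sketches illustrate several of these stages in Python; all identifiers, thresholds, and data layouts are illustrative assumptions rather than the thesis's actual implementation. For stage one, SciPy's cKDTree stands in for the k-d tree, and the rule that densely repeated positions are noise (e.g., stationary sources) is an assumed criterion:

```python
# Sketch of stage 1: k-d tree range queries to filter noise.
# The radius and the "stationary source" criterion are assumptions
# for illustration; the thesis's exact rule may differ.
import numpy as np
from scipy.spatial import cKDTree

def filter_noise(points, radius=0.0003, max_neighbors=10):
    """points: (N, 2) array of (ra, dec) detections across all epochs.
    Drop detections with many neighbors inside `radius`, treating
    dense clumps as stationary sources rather than moving objects."""
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in points])
    return points[counts <= max_neighbors]

rng = np.random.default_rng(0)
sky = rng.uniform(0, 1, size=(1000, 2))    # sparse moving detections
star = np.tile([[0.5, 0.5]], (50, 1))      # one stationary source, 50 epochs
kept = filter_noise(np.vstack([sky, star]))
print(len(kept))  # the repeated stationary detections are removed
```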
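For stage two, collinear detections can be grouped by having each detection vote in a discretized (rho, theta) space; a minimal, non-distributed sketch of that voting step, with assumed bin resolutions:

```python
# Sketch of stage 2: Hough-transform voting so collinear detections
# group on a shared (rho, theta) line. Bin resolutions are assumptions.
import numpy as np
from collections import defaultdict

def hough_group(points, n_theta=180, rho_step=0.01):
    """Assign each (x, y) detection to the (rho, theta) bins it votes for."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    groups = defaultdict(list)
    for i, (x, y) in enumerate(points):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        for t_idx, rho in enumerate(rhos):
            groups[(t_idx, int(round(rho / rho_step)))].append(i)
    return groups

pts = np.array([[0.1, 0.1], [0.2, 0.2], [0.3, 0.3], [0.9, 0.1]])
groups = hough_group(pts)
print(max(groups.values(), key=len))  # the three collinear detections peak
```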
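For stage four, each time-ordered pair of detections yields a velocity and a direction, which stage five then combines with rho and theta for the second grouping. A sketch with an assumed record layout:

```python
# Sketch of stage 4: pair detections in time order and compute each
# pair's velocity and direction. The field layout is an assumption.
import math
from itertools import combinations

def make_pairs(detections, max_dt=1.0):
    """detections: list of (epoch, ra, dec), assumed on one Hough line.
    Return (i, j, velocity, direction) for time-ordered pairs."""
    ordered = sorted(enumerate(detections), key=lambda d: d[1][0])
    pairs = []
    for (i, (t1, x1, y1)), (j, (t2, x2, y2)) in combinations(ordered, 2):
        dt = t2 - t1
        if 0 < dt <= max_dt:  # sequential property: later epoch second
            velocity = math.hypot(x2 - x1, y2 - y1) / dt
            direction = math.atan2(y2 - y1, x2 - x1)
            pairs.append((i, j, velocity, direction))
    return pairs

dets = [(0.0, 0.10, 0.10), (0.5, 0.15, 0.15), (1.0, 0.20, 0.20)]
for p in make_pairs(dets):
    print(p)
```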
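For stage six, linking compatible pairs into complete trajectories amounts to a transitive closure over the pair graph. The thesis adapts Floyd-Warshall for the distributed setting; the textbook single-machine version is sketched here as a stand-in:

```python
# Sketch of stage 6: transitive closure over the "pair links" graph in
# the textbook Floyd-Warshall style (the thesis uses an adapted,
# distributed variant). reach[i][j] = True if j is reachable from i.
def transitive_closure(n, edges):
    reach = [[False] * n for _ in range(n)]
    for i in range(n):
        reach[i][i] = True
    for i, j in edges:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Detections 0-1-2 chain into one trajectory; detection 3 stands alone.
reach = transitive_closure(4, [(0, 1), (1, 2)])
print(reach[0][2], reach[0][3])  # True False
```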
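For the final stage, one simple way to drop the near-duplicate trajectories produced by the Hough transform's discretized sampling is a set-similarity check; the Jaccard measure and threshold below are assumptions, not necessarily the thesis's similarity definition:

```python
# Sketch of the final stage: remove near-duplicate trajectories.
# Jaccard similarity over detection sets is an illustrative choice.
def dedup(trajectories, threshold=0.8):
    kept = []
    for traj in sorted(trajectories, key=len, reverse=True):
        s = set(traj)
        if all(len(s & set(k)) / len(s | set(k)) < threshold for k in kept):
            kept.append(traj)
    return kept

print(dedup([[1, 2, 3, 4], [1, 2, 3, 4, 5], [7, 8, 9]]))
# [[1, 2, 3, 4, 5], [7, 8, 9]]
```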
Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations
