    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95810


    Title: Test-time Domain Adaptation And Scene-Aware 3D Human Pose Reconstruction
    Author: 詹幃傑 (ZHAN, WEI-JIE)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: 3D Human Pose Estimation
    Date: 2024-08-14
    Upload time: 2024-10-09 17:17:58 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, the development of 3D technology based on deep learning has been progressing at an extremely rapid pace, with technology expanding from 2D planar domains to 3D spatial dimensions. As 3D research advances, many ideas have emerged that leverage the unique capabilities of 3D to enhance visual representation and applications. For example, there are now techniques to quickly generate corresponding 3D models from human images, which can be used to realistically depict human movements and poses, and 3D reconstruction technology can be used to recreate the people and objects in an image.
    However, in the field of deep learning, large amounts of data are often required for AI models to learn effectively, and the quantity and diversity of datasets greatly influence the subsequent performance and applicability of those models. This issue is particularly severe in 3D deep learning: unlike 2D images or audio, for which abundant datasets are available, 3D data is often scarce. Moreover, because 3D space is more complex than 2D, a single 2D image is usually insufficient to accurately reconstruct the actual 3D environment; the most common challenge is how to converge the results to an accurate 3D domain.
    To address these issues, this thesis constructs a method for building corresponding 3D objects from 2D images. It uses multiple AI models for data processing, incorporates a model with domain-adaptation capabilities, and finally employs loss functions to further constrain the generated results, ensuring that the outputs are, within a certain range, similar or close to their real-world counterparts.
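The abstract's final idea, constraining a generated 3D result with a loss at test time, can be illustrated with a minimal sketch. This is not the thesis's actual pipeline or models; all names, the orthographic camera, and the joint count are illustrative assumptions. It refines a rough 3D pose by gradient descent on a 2D reprojection loss, the same pattern of "optimize the output until it agrees with the 2D evidence":

```python
import numpy as np

# Hypothetical sketch: refine a 3D pose at test time by minimizing a
# 2D reprojection loss under a simple orthographic camera model.
# Everything here (function names, 17 joints, the camera) is illustrative.

def reprojection_loss(pose_3d, keypoints_2d):
    """Squared error between the orthographic projection (x, y) and 2D detections."""
    proj = pose_3d[:, :2]  # orthographic projection: drop the depth coordinate
    return float(np.sum((proj - keypoints_2d) ** 2))

def refine_pose(pose_3d, keypoints_2d, lr=0.1, steps=100):
    """Gradient-descent refinement of the pose's x/y coordinates; depth is untouched."""
    pose = pose_3d.copy()
    for _ in range(steps):
        grad = np.zeros_like(pose)
        grad[:, :2] = 2.0 * (pose[:, :2] - keypoints_2d)  # d(loss)/d(x, y)
        pose -= lr * grad
    return pose

# Toy data: "detected" 2D keypoints and a deliberately offset initial 3D pose.
rng = np.random.default_rng(0)
keypoints_2d = rng.standard_normal((17, 2))  # e.g. 17 COCO-style joints
init_pose = np.concatenate([keypoints_2d + 0.5, np.ones((17, 1))], axis=1)

refined = refine_pose(init_pose, keypoints_2d)
print(reprojection_loss(init_pose, keypoints_2d), "->",
      reprojection_loss(refined, keypoints_2d))
```

In the thesis's setting the loss would constrain a learned model's output rather than raw joint coordinates, but the test-time optimization loop has this same shape.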
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations
