Graduate Thesis 109523040: Detailed Record




Author: Hong-Lin Zeng (曾鴻林)    Department: Communication Engineering
Title: LARR: Delay Compensation via Local Positioning for MR Remote Rendering
Related theses
★ 基於馬賽克特性之低失真實體電路佈局保密技術
★ 多路徑傳輸控制協定下從無線區域網路到行動網路之無縫換手
★ 感知網路下具預算限制之異質性子頻段分配
★ 下行服務品質排程在多天線傳輸環境下的效能評估
★ 多路徑傳輸控制協定下之整合型壅塞及路徑控制
★ Opportunistic Scheduling for Multicast over Wireless Networks
★ 適用多用戶多輸出輸入系統之低複雜度比例公平性排程設計
★ 利用混合式天線分配之 LTE 異質網路 UE 與 MIMO 模式選擇
★ 基於有限預算標價式拍賣之異質性頻譜分配方法
★ 適用於 MTC 裝置 ID 共享情境之排程式分群方法
★ Efficient Two-Way Vertical Handover with Multipath TCP
★ 多路徑傳輸控制協定下可亂序傳輸之壅塞及排程控制
★ 移動網路下適用於閘道重置之群體換手機制
★ 使用率能小型基地台之拍賣式行動數據分流方法
★ 高速鐵路環境下之通道預測暨比例公平性排程設計
★ 用於行動網路效能評估之混合式物聯網流量產生器
Files: Full text available for viewing in the repository system after 2024-08-31.
Abstract (Chinese) A wireless mixed reality (MR) system needs to give users an immersive experience in which they can interact with virtual objects promptly and accurately. To achieve this with limited on-device computing resources, existing approaches offload the computation to an edge server and rely on prediction to mitigate the mismatch, caused by motion-to-photon (MTP) latency, between the displayed frame and the user's current view, rendering the predicted field of view ahead of time and transmitting it. Because prediction is not error-free, objects may be displayed in the wrong position, so users cannot interact with them accurately. To solve this problem, we propose a streaming architecture that uses the precise object positions available on the local device as a positioning aid, and we develop a streaming-data optimization algorithm based on the proposed architecture to avoid redundant transmission. Experimental results show that, under the same MTP latency, the proposed streaming architecture outperforms existing methods in both virtual-object positioning and the amount of data transmitted over the network.
Abstract (English) Wireless mixed reality (MR) systems must provide users with an immersive experience in which they can interact with virtual objects accurately and in real time. To achieve this goal under limited hardware computing resources, existing methods offload the computation to an edge server and use field-of-view (FoV) prediction to mitigate the mismatch, caused by motion-to-photon (MTP) latency, between the displayed image and the user's current view, rendering the corresponding video in advance for transmission. Since predictions are not error-free, objects may be displayed in the wrong position, preventing users from interacting with them accurately. To solve this problem, we propose a streaming architecture that uses the absolute positions of objects known on the MR device as a positioning aid. We then develop streaming-data optimization algorithms based on the proposed architecture to avoid redundant data transmission. Experimental results show that, under the same MTP latency, the proposed streaming architecture outperforms existing methods in both virtual-object positioning and the volume of data transmitted over the network.
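To illustrate the compensation idea described in the abstracts, here is a minimal client-side sketch in Python. It is not the thesis implementation: the function name compensate_anchor, the NumPy-based vector representation, and the example coordinates are assumptions made only for illustration. It shows the step the abstract implies: the edge server renders an object against a predicted position, while the MR device's local tracking knows where the object actually is, so the client can shift the streamed render by the measured offset before display.

```python
import numpy as np

def compensate_anchor(predicted_pos, local_pos, rendered_anchor):
    """Shift a streamed object's anchor by the difference between the
    position assumed at render time (predicted_pos) and the position
    reported by on-device tracking (local_pos), cancelling the placement
    error introduced by MTP latency and prediction error.
    (Illustrative sketch only, not the thesis code.)"""
    offset = np.asarray(local_pos, dtype=float) - np.asarray(predicted_pos, dtype=float)
    return np.asarray(rendered_anchor, dtype=float) + offset

if __name__ == "__main__":
    predicted = [1.00, 0.00, 2.00]   # position the server rendered against (m), hypothetical
    local = [1.05, 0.02, 1.96]       # position measured by local tracking (m), hypothetical
    corrected = compensate_anchor(predicted, local, rendered_anchor=predicted)
    print("corrected anchor:", corrected)   # -> [1.05 0.02 1.96]
```

Applying such a correction per object also suggests where transmission can be saved: objects whose offset is negligible or that lie outside the current view need not be updated every frame, which is the kind of redundancy the proposed streaming-data optimization aims to avoid.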
Keywords (Chinese)
★ 混合實境
Keywords (English)
★ Mixed Reality
★ Low-latency communication
★ Remote rendering
★ Delay compensation
Table of Contents
1 Introduction
1.1 Background
1.2 Motivation
1.3 Contribution
1.4 Framework
2 Related Works
2.1 Volumetric Video Format
2.2 Remote Rendering System
2.3 Low-Latency Streaming
3 MR Remote Rendering and Performance Issues
3.1 Remote Streaming of Mixed Reality
3.2 Position Offset
4 Localization-Assisted Remote Rendering
4.1 Server Architecture
4.2 Client Architecture
4.3 Object Streaming Camera
5 Optimization of Streaming Data
5.1 Visibility Adaptation
5.2 Clustering
5.3 Occlusion
5.4 Optimal Streaming Object Size
6 Experimental Results
6.1 Simulation Setup
6.2 Hardware Performance
6.3 Transmission Performance
6.4 Position Error Performance
7 Conclusion and Future Work
7.1 Conclusion
7.2 Future Work
Bibliography
Advisor: Chih-Wei Huang (黃志煒)    Date of approval: 2022-08-25