Master's/Doctoral Thesis 110522111: Detailed Record




Name: Chia-Yu Lin (林家佑)   Department: Computer Science and Information Engineering
Thesis Title: TCN-MAML: A TCN-based Model with Model-agnostic Meta-learning for Human-to-Human Interaction Recognition
Related Theses
★ Real-time Human Activity Recognition using WiFi Channel State Information
Files: Full text available in the repository from 2025-07-05
Abstract (Chinese) In recent years, the problem of human activity recognition (HAR) has attracted considerable attention, especially in Wi-Fi-based applications such as healthcare (e.g., breathing and heart-rate monitoring), security (e.g., authentication and intrusion detection), and elderly care. This growing interest in Wi-Fi-based HAR stems from the potential benefits and versatility it offers across different domains.
However, Wi-Fi signals are susceptible to interference and fluctuate across time, environments, and subjects. Moreover, large and rich datasets are rare in the Wi-Fi-based field, which makes it difficult to train a general model that recognizes new human activities from Wi-Fi signals. One major solution is meta-learning, which enables a model to adapt to new tasks in only a few steps.
To address these challenges, we propose a new method that combines a Temporal Convolution Network (TCN) with Model-Agnostic Meta-Learning (MAML). Notably, the proposed method demonstrates excellent computational efficiency while achieving higher accuracy, even on datasets with a limited number of samples. Through rigorous experiments on a publicly accessible dataset, our method achieves remarkable results, reaching an accuracy of 98.35% while adapting effectively to new subjects, which highlights its versatility and resilience across different scenarios.
Abstract (English) The issue of human activity recognition (HAR) has garnered significant attention in recent years, especially in Wi-Fi-based applications such as healthcare (e.g., monitoring breathing and heart rate), security, elderly care, and more. This growing interest stems from the potential benefits and versatility offered by Wi-Fi-based HAR in diverse domains. However, Wi-Fi signals fluctuate across time, environments, and subjects. Also, there is hardly any extensive dataset in the Wi-Fi-based field, which makes it difficult to train a general model to recognize new human activities from Wi-Fi signals. One main solution is meta-learning, which enables the model to adapt to new tasks in only a few steps. In order to address the aforementioned challenges, we present a novel approach that combines the Temporal Convolution Network (TCN) with Model-Agnostic Meta-Learning (MAML). Notably, our proposed approach demonstrates remarkable computational efficiency while achieving improved accuracy, even when dealing with datasets that have a limited number of samples. By conducting rigorous experiments on a publicly accessible dataset, we have obtained remarkable results with our approach, showcasing an impressive accuracy of 98.35% while adapting effectively to new subjects and highlighting its versatility and robustness in handling varying scenarios.
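To make the TCN-plus-MAML combination summarized above more concrete, the sketch below shows, under stated assumptions, how a small dilated causal convolution block could be meta-trained with a MAML-style inner/outer loop in PyTorch. It is only an illustration, not the thesis's TCN-MAML implementation: the layer sizes, the 30-channel CSI-like input, the five-class output, the first-order gradient simplification, and the maml_outer_step helper are all hypothetical choices made for the example.

    # Minimal sketch (requires PyTorch >= 2.0 for torch.func); all sizes are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyTCN(nn.Module):
        """One dilated causal convolution block followed by a linear classifier."""
        def __init__(self, in_ch=30, hid=64, n_classes=5, kernel=3, dilation=2):
            super().__init__()
            self.left_pad = (kernel - 1) * dilation      # pad the past only -> causal
            self.conv = nn.Conv1d(in_ch, hid, kernel, dilation=dilation)
            self.fc = nn.Linear(hid, n_classes)

        def forward(self, x):                            # x: (batch, channels, time)
            x = F.pad(x, (self.left_pad, 0))             # left-only (causal) padding
            x = F.relu(self.conv(x))
            return self.fc(x.mean(dim=-1))               # average pooling over time

    def maml_outer_step(model, tasks, inner_lr=0.01, inner_steps=1):
        """Mean query loss after first-order MAML adaptation on each task.
        `tasks` is a list of ((x_support, y_support), (x_query, y_query)) pairs."""
        meta_loss = 0.0
        for (xs, ys), (xq, yq) in tasks:
            # Inner loop: adapt a temporary copy of the parameters on the support set.
            fast = {name: p.clone() for name, p in model.named_parameters()}
            for _ in range(inner_steps):
                support_loss = F.cross_entropy(
                    torch.func.functional_call(model, fast, (xs,)), ys)
                grads = torch.autograd.grad(support_loss, list(fast.values()))
                fast = {name: p - inner_lr * g
                        for (name, p), g in zip(fast.items(), grads)}
            # Outer loop: score the adapted parameters on the query set.
            meta_loss = meta_loss + F.cross_entropy(
                torch.func.functional_call(model, fast, (xq,)), yq)
        return meta_loss / len(tasks)

In a meta-training loop the returned loss would be backpropagated into the shared initialization (meta_loss.backward() followed by a step of an outer optimizer); full second-order MAML, as in Finn et al. [11], would additionally pass create_graph=True to torch.autograd.grad in the inner loop.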
Keywords (Chinese) ★ Wireless signals
★ Activity recognition
★ Temporal convolution network
★ Few-shot learning
Keywords (English) ★ Wi-Fi
★ Channel State Information
★ Human Activity Recognition
★ Temporal Convolution Network
★ Few-shot learning
Table of Contents
1. Introduction
2. Related Work
2.1. CNN-based approach
2.2. Seq2Seq-based approach
2.3. Few-shot learning based approach
3. Background
3.1. CSI dataset of human-to-human interaction (HHI)
3.2. Problem Setup
3.3. Model-agnostic Meta-Learning
4. Methods & Model
4.1. Preprocessing
4.2. Augmentation
4.2.1. Dropout
4.2.2. Mix samples with different labels
4.2.3. Mix samples with the same labels
4.3. TCN Model
4.3.1. Causal Convolutions
4.3.2. Dilated Convolutions
4.3.3. Fully Connected Network
4.4. Modified outer-loop optimization in MAML
4.5. Variables, Symbols, and Hyper-parameters
5. Experiments
5.1. Comparison with our previous work (TCN-AA)
5.2. Ablation Study
5.2.1. Dropout rate
5.2.2. Augmentation
6. Conclusion
7. References
Appendix A
Detailed information on splitting subject-pairs
References
[1] I. M. Shafiqul, M. K. A. Jannat, J.-W. Kim, S.-W. Lee, and S.-H. Yang, “HHI-AttentionNet: An enhanced human-human interaction recognition method based on a lightweight deep learning model with attention network from CSI,” Sensors, vol. 22, no. 16, 2022. [Online]. Available: https://www.mdpi.com/1424-8220/22/16/6018
[2] L. Minh Dang, K. Min, H. Wang, M. Jalil Piran, C. Hee Lee, and H. Moon, “Sensor-based and vision-based human activity recognition: A comprehensive survey,” Pattern Recognition, vol. 108, p. 107561, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0031320320303642
[3] D. R. Beddiar, B. Nini, M. Sabokrou, and A. Hadid, “Vision-based human activity recognition: A survey,” Multimedia Tools Appl., vol. 79, no. 41–42, p. 30509–30555, nov 2020. [Online]. Available: https://doi.org/10.1007/s11042-020-09004-3
[4] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, “Deep learning for sensor-based activity recognition: A survey,” Pattern Recognition Letters, vol. 119, pp. 3–11, mar 2019. [Online]. Available: https://doi.org/10.1016%2Fj.patrec.2018.02.010
[5] S. Münzner, P. Schmidt, A. Reiss, M. Hanselmann, R. Stiefelhagen, and R. Dürichen, “CNN-based sensor fusion techniques for multimodal human activity recognition,” in Proceedings of the 2017 ACM International Symposium on Wearable Computers, ser. ISWC ’17. New York, NY, USA: Association for Computing Machinery, 2017, pp. 158–165. [Online]. Available: https://doi.org/10.1145/3123021.3123046
[6] S. K. Yadav, K. Tiwari, H. M. Pandey, and S. A. Akbar, “A review of multimodal human activity recognition with special emphasis on classification, applications, challenges and future directions,” Knowledge-Based Systems, vol. 223, p. 106970, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0950705121002331
[7] R. Alazrai, A. Awad, B. Alsaify, M. Hababeh, and M. I. Daoud, “A dataset for Wi-Fi-based human-to-human interaction recognition,” Data in Brief, vol. 31, p. 105668, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S235234092030562X
[8] M. H. Uddin, J. M. K. Ara, M. H. Rahman, and S. H. Yang, “A study of real-time physical activity recognition from motion sensors via smartphone using deep neural network,” in 2021 5th International Conference on Electrical Information and Communication Technology (EICT), 2021, pp. 1–6.
[9] Y. Ma, G. Zhou, and S. Wang, “WiFi sensing with channel state information: A survey,” ACM Comput. Surv., vol. 52, no. 3, jun 2019. [Online]. Available: https://doi.org/10.1145/3310194
[10] C.-Y. Lin, Y.-T. Liu, C.-Y. Lin, and T. K. Shih, “TCN-AA: A Wi-Fi-based temporal convolution network for human-to-human interaction recognition with augmentation and attention,” 2023.
[11] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” 2017.
[12] M. H. Kabir, M. H. Rahman, and W. Shin, “CSI-IANet: An inception attention network for human-human interaction recognition based on CSI signal,” IEEE Access, vol. 9, pp. 166624–166638, 2021.
[13] Z. Chen, L. Zhang, C. Jiang, Z. Cao, and W. Cui, “WiFi CSI based passive human activity recognition using attention based BLSTM,” IEEE Transactions on Mobile Computing, vol. 18, no. 11, pp. 2714–2724, 2019.
[14] S. Yousefi, H. Narui, S. Dayal, S. Ermon, and S. Valaee, “A survey on behavior recognition using WiFi channel state information,” IEEE Communications Magazine, vol. 55, no. 10, pp. 98–104, 2017.
[15] B. Li, W. Cui, W. Wang, L. Zhang, Z. Chen, and M. Wu, “Two-stream convolution augmented transformer for human activity recognition,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 1, pp. 286–293, May 2021. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/16103
[16] M. Abdel-Basset, H. Hawash, N. Moustafa, and N. Mohammad, “H2HI-Net: A dual-branch network for recognizing human-to-human interactions from channel-state information,” IEEE Internet of Things Journal, vol. 9, no. 12, pp. 10010–10021, 2022.
[17] Z. Zhou, F. Wang, J. Yu, J. Ren, Z. Wang, and W. Gong, “Target-oriented semi-supervised domain adaptation for WiFi-based HAR,” in IEEE INFOCOM 2022 - IEEE Conference on Computer Communications, 2022, pp. 420–429.
[18] Y. Zhang, Y. Chen, Y. Wang, Q. Liu, and A. Cheng, “CSI-based human activity recognition with graph few-shot learning,” IEEE Internet of Things Journal, vol. 9, no. 6, pp. 4139–4151, 2022.
[19] Y. Ma, G. Zhou, S. Wang, H. Zhao, and W. Jung, “SignFi: Sign language recognition using WiFi,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 2, no. 1, mar 2018. [Online]. Available: https://doi.org/10.1145/3191755
[20] D. Wang, J. Yang, W. Cui, L. Xie, and S. Sun, “CAUTION: A robust WiFi-based human authentication system via few-shot open-set recognition,” IEEE Internet of Things Journal, vol. 9, no. 18, pp. 17323–17333, 2022.
[21] D. Halperin, W. Hu, A. Sheth, and D. Wetherall, “Tool release: Gathering 802.11n traces with channel state information,” SIGCOMM Comput. Commun. Rev., vol. 41, no. 1, p. 53, jan 2011. [Online]. Available: https://doi.org/10.1145/1925861.1925870
[22] S. Bai, J. Z. Kolter, and V. Koltun, “An empirical evaluation of generic convolutional and recurrent networks for sequence modeling,” 2018.
[23] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” 2016.
Advisors: Timothy K. Shih (施國琛), Chih-Yang Lin (林智揚)   Date of Approval: 2023-07-11
