National Central University Master's/Doctoral Thesis Record 106582608: Detailed Information
Author: Ervin Yohannes (歐文尼斯)    Graduate Department: Computer Science and Information Engineering
Thesis Title: 透過基於深度學習的單眼視頻偵測器增強車輛功能
(Enhancing Vehicle Features through Deep Learning-based Detectors in Monocular Videos)
Related Theses
★ A Grouping Mechanism Based on Social Relationships in edX Online Discussion Boards
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ An Assessment System for Smart Classrooms Built with Kinect
★ An Intelligent Metropolitan Route-Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analysis of Key Motion Correlations
★ A Seam Carving System That Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Built on an Open Online Community Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism with Skin-Color Preservation
★ A Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Order Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Reconstruction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Genetic-Algorithm Grouping Mechanism Based on Complementarity and Social Network Analysis
★ A Virtual Musical Instrument Performance System Based on Real-Time Hand Tracking
★ A Real-Time Virtual Musical Instrument Performance System Based on Neural Networks
1. This electronic thesis is authorized for immediate open access.
2. The open-access electronic full text is licensed for users to search, read, and print only for personal, non-profit academic research purposes.
3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): Vehicles are widely studied objects because they have many intriguing features that call for enhancement. Important aspects of a vehicle, such as defining its direction, calculating its distance, and determining its speed, play a vital role in its overall functionality. Although these features can be obtained directly from monocular videos, they often face challenges related to calibration and occlusion. Vehicle speed estimation is a notable challenge in intelligent transportation systems (ITS) research. Although both conventional and deep learning methods have shown promise in this area, their progress has been hindered by the high cost of deploying hardware for data collection through sensors such as lidar, radar, and magnetic sensors. In this dissertation, we propose a model consisting of two main components. The first is a vehicle detection and tracking module designed to accurately detect and track specific objects while addressing calibration issues. The second is a homography transformation regression network for monocular videos that estimates vehicle speed efficiently and accurately while resolving occlusion problems. Through experimental evaluations on multiple datasets, we demonstrate that our proposed method outperforms state-of-the-art approaches, improving the mean square error (MSE) metric by approximately 51.00%.
Abstract (English): The vehicle is an extensively studied object due to its various intriguing features that require enhancement. Important aspects of a vehicle, such as defining its direction, calculating its distance, and determining its speed, play a vital role in its overall functionality. Although these features can be derived directly from monocular videos, they often face challenges related to calibration and occlusion. Vehicle speed estimation poses a notable challenge within the realm of intelligent transportation systems (ITS) research. Although conventional and deep learning methods have exhibited potential in this domain, their progress has been hindered by the substantial expense of deploying hardware for data gathering via sensors such as lidar, radar, and magnetic sensors. In this dissertation, we propose a model that consists of two main components. The first component is a vehicle detection and tracking module, designed to accurately detect and track specific objects while addressing calibration issues. The second component is a homography transformation regression network, which efficiently and accurately estimates vehicle speed while addressing occlusion issues in monocular videos. Through experimental evaluations on multiple datasets, we demonstrate that our proposed method outperforms state-of-the-art approaches, achieving a significant improvement of approximately 51.00% in the mean square error (MSE) metric.
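The abstract's second component maps detected vehicles from the image plane onto a bird's-eye view of the road so that pixel displacements become metric displacements, from which speed follows directly. Below is a minimal, self-contained sketch of that geometric idea only, not the dissertation's implementation: it assumes a hand-specified homography H_IMG2ROAD and frame rate FPS (both hypothetical values), whereas the thesis learns this mapping with a deep homography transformation regression network; the mse helper simply mirrors the mean square error metric reported above.

    import numpy as np

    # Hypothetical calibration: a 3x3 homography mapping image pixels (u, v)
    # to road-plane coordinates in meters. The dissertation learns this
    # mapping; here it is simply assumed to be known.
    H_IMG2ROAD = np.array([
        [0.02, 0.00, -12.0],
        [0.00, 0.05, -30.0],
        [0.00, 0.001,  1.0],
    ])

    FPS = 50.0  # assumed camera frame rate (frames per second)

    def to_bird_eye(points_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
        """Project Nx2 pixel coordinates onto the road plane (bird's-eye view)."""
        ones = np.ones((points_uv.shape[0], 1))
        homog = np.hstack([points_uv, ones]) @ H.T  # apply the homography
        return homog[:, :2] / homog[:, 2:3]         # dehomogenize

    def speed_kmh(track_uv: np.ndarray, H: np.ndarray, fps: float) -> float:
        """Average speed of one tracked vehicle from its per-frame pixel positions."""
        road_xy = to_bird_eye(track_uv, H)          # positions in meters
        steps_m = np.linalg.norm(np.diff(road_xy, axis=0), axis=1)
        return float(steps_m.mean() * fps * 3.6)    # m/frame -> km/h

    def mse(pred, gt) -> float:
        """Mean square error, the metric reported in the abstract."""
        pred, gt = np.asarray(pred), np.asarray(gt)
        return float(np.mean((pred - gt) ** 2))

    if __name__ == "__main__":
        # Toy track: bottom-center of one vehicle's bounding box over 5 frames,
        # e.g. as produced by a detection-and-tracking module.
        track = np.array([[640.0, 400.0], [641.0, 396.0], [642.5, 392.0],
                          [644.0, 388.0], [645.0, 384.0]])
        print(f"estimated speed: {speed_kmh(track, H_IMG2ROAD, FPS):.1f} km/h")
        print(f"MSE example:     {mse([62.1, 58.4], [60.0, 59.0]):.2f}")

The dissertation's evaluation is carried out on the BrnoCompSpeed and AIC18 datasets listed in the table of contents; the toy numbers above exist only to make the sketch runnable.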
Keywords (Chinese): ★ monocular video
★ direction
★ distance
★ speed
★ deep learning
★ bird's-eye view
Keywords (English): ★ monocular videos
★ direction
★ distance
★ speed
★ deep learning
★ bird's-eye view
Table of Contents
Abstract (in Chinese) i
Abstract ii
Acknowledgment iii
Table of Contents iv
List of Figures vii
List of Tables ix
Abbreviations x
Chapter I Introduction 1
1.1 Background 1
1.2 Problem Definition 4
1.3 Contribution 4
1.4 Limitation 4
1.5 Thesis Overview 5
Chapter II Literature Review 6
2.1 Deep Learning Methods 6
2.2 Detection and Tracking in Deep Learning Methods 7
2.3 Bird's-Eye View of Deep Learning Methods 8
2.4 Vehicle Speed Estimation 9
2.5 Comprehensive Review: Hardware/Sensor vs. Deep Learning Approaches 10
Chapter III Monocular Video System 12
3.1 Detection System 12
3.1.1 Mask R-CNN 13
3.2 Distance Measurement 15
3.3 Speed Perception 17
3.4 Concept of Direction 18
3.5 Pretrained Model 19
Chapter IV Deep Homography Transformation Regression Network 21
4.1 Deep Learning Methods 21
4.1.1 Homography Transformation Network 22
4.1.2 Design of Vehicle Tracking and Detection Network 24
4.1.3 Distance Measurement Using Line Detection 27
4.1.4 Vehicle Speed and Regression Network 29
4.2 The Datasets 31
4.2.1 BrnoCompSpeed 31
4.2.2 AI City Challenge 2018 (AIC18) 32
Chapter V Experimental Results on Monocular Video System 34
5.1 Requirements and Results of Mask R-CNN 34
5.2 Analysis 35
5.2.1 Analysis of Distance 35
5.2.2 Analysis of Velocity and Direction 36
5.3 Results 37
Chapter VI Experimental Results on Deep Homography Transformation Regression Network 38
6.1 Trials 38
6.1.1 Computation and Database/Storage Cost Analysis 38
6.1.2 Evaluation 42
6.1.3 Bird's-Eye View Results Using Homography Transformation 43
6.1.4 Results of Speed Estimation on AIC18 and BrnoCompSpeed 46
6.1.5 State-of-the-Art Results on AIC18 and BrnoCompSpeed 49
6.2 Ablation Study 59
Chapter VII Conclusion and Future Work 67
References 68
Advisor: Timothy K. Shih (施國琛)    Date of Approval: 2023-07-26