References
[1] Badminton shuttlecock collecting robot, https://www.bilibili.com/s/video/BV1gZ4y197sM, May 2022.
[2] M. Iqbal and R. Omar, "Automatic Guided Vehicle (AGV) Design Using an IoT-based RFID for Location Determination," 2020 International Conference on Applied Science and Technology (iCAST), Padang, Oct. 2020, pp. 489-494.
[3] S. Quan and J. Chen, "AGV Localization Based on Odometry and LiDAR," 2019 2nd World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), Shanghai, Nov. 2019.
[4] R. Chakma et al., "Navigation and Tracking of AGV in Warehouse via Wireless Sensor Network," 2019 IEEE 3rd International Electrical and Energy Conference (CIEEC), Beijing, Sep. 2019, pp. 1686-1690.
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Proc. IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun. 2014, pp. 580-587.
[6] R. B. Girshick, "Fast R-CNN," Proc. IEEE International Conference on Computer Vision, Santiago, Dec. 2015, pp. 1440-1448.
[7] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137-1149, 2017.
[8] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: unified, real-time object detection," arXiv preprint, arXiv:1506.02640, 2015.
[9] G. Jocher, A. Stoken, J. Borovec, NanoCode012, ChristopherSTAN, L. Changyu, Laughing, tkianai, yxNONG, A. Hogan, et al., "ultralytics/yolov5," 2022, doi: 10.5281/zenodo.3908559.
[10] L. Jianguo, L. Weidong, G. Li-e, and L. Le, "Detection and localization of underwater targets based on monocular vision," in Proc. 2nd International Conference on Advanced Robotics and Mechatronics (ICARM), Hefei, Aug. 2017, pp. 100-105.
[11] X. Li and L. Wang, "A monocular distance estimation method used in video sequence," in Proc. International Conference on Information and Automation, Shenyang, Jun. 2012, pp. 390-394.
[12] D. Bao and P. Wang, "Vehicle distance detection based on monocular vision," in Proc. IEEE International Conference on Progress in Informatics and Computing, Shanghai, Dec. 2016, pp. 187-191.
[13] Robot end effector – Wikipedia
https://en.wikipedia.org/wiki/Robot_end_effector, June 2019.
[14] A. Khan, C. Xiangming, Z. Xingxing and W. L. Quan, "Closed form inverse kinematics solution for 6-DOF underwater manipulator," in Proc. International Conference on Fluid Power and Mechatronics (FPM), Harbin, 2015, pp. 1171-1176.
[15] J.-J. Kim and J.-J. Lee, "Trajectory optimization with particle swarm optimization for manipulator motion planning," IEEE Transactions on Industrial Informatics, vol. 11, pp. 620-631, Mar. 2015.
[16] P. Beeson and B. Ames, "TRAC-IK: An open-source library for improved solving of generic inverse kinematics," IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, 2015, pp. 928-935.
[17] S. Kumar, N. Sukavanam and R. Balasubramanian, "An optimization approach to solve the inverse kinematics of redundant manipulator," International Journal of Information and System Sciences (Institute for Scientific Computing and Information), vol. 6, no. 4, pp. 414-423, 2010.
[18] J. Vannoy and J. Xiao, "Real-time adaptive motion planning (RAMP) of mobile manipulators in dynamic environments with unforeseen changes," IEEE Transactions on Robotics, vol. 24, pp. 1199-1212, Oct. 2008.
[19] J. J. Kuffner Jr and S. M. LaValle, "RRT-Connect: An efficient approach to single-query path planning," in Proc. IEEE International Conference on Robotics and Automation, San Francisco, Apr. 2000, pp. 995-1001.
[20] I. H. Choi and Y. G. Kim, "Head pose and gaze direction tracking for detecting a drowsy driver," International Conference on Big Data and Smart Computing, Bangkok, 2014, pp. 241-244.
[21] P. I. Corke, "A Simple and Systematic Approach to Assigning Denavit–Hartenberg Parameters," in IEEE Transactions on Robotics, vol. 23, no. 3, pp. 590-594, June 2007, doi: 10.1109/TRO.2007.896765.
[22] NVIDIA® Jetson AGX Xavier
https://www.nvidia.com/zh-tw/autonomous-machines/embedded-systems/jetson-agx-xavier/, June 2022.
[23] ROW0146
https://shop.playrobot.com/products/robot-row0146, June 2022.
[24] Logitech C310 Webcam
https://www.logitech.com/zh-tw/products/webcams/c310-hd-webcam.960-000631.html, June 2022.
[25] AX-12 Motor
https://emanual.robotis.com/docs/en/dxl/ax/ax-12a/, June 2022.
[26] CJSCOPE RZ760
https://www.cjscope.com.tw/product/detail/117, June 2022.
[27] Techman Robot (TM5 Series Datasheet)
https://www.valin.com/sites/default/files/asset/document/Omron-Collaborative-Robots-TM5-Series-Datasheet.pdf, June 2022.
[28] Intel® RealSense™ Depth Camera D435i
https://www.intelrealsense.com/zh-hans/depth-camera-d435i/, June 2022.
[29] Logitech C920
https://www.logitech.com/zh-tw/products/webcams/c920e-business-webcam.960-001360.html, June 2022.
[30] ROS
http://wiki.ros.org/ROS/Tutorials, June 2022.
[31] MoveIt
https://ros-planning.github.io/moveit_tutorials/, June 2022.
[32] YOLOv5
https://docs.ultralytics.com/, June 2022.
[33] YOLOv5 - Train Custom Data
https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data, June 2022.
[34] roLabelImg - GitHub
https://github.com/cgvict/roLabelImg, June 2022.
[35] H. Cantzler, "Random Sample Consensus (RANSAC)," Institute for Perception, Action and Behaviour, Division of Informatics, University of Edinburgh, 1981.
[36] Canny edge detector
https://en.wikipedia.org/wiki/Canny_edge_detector, June 2022.
[37] Camera Calibration and 3-D Vision - MATLAB & Simulink
https://www.mathworks.com/help/vision/ref/cameracalibrator-app.html, June 2022.
[38] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[39] Base control in ROS
http://wiki.ros.org/pr2_controllers/Tutorials/Using%20the%20base%20controller%20with%20odometry%20and%20transform%20information, June 2022.
[40] E. Olson, "AprilTag: A robust and flexible visual fiducial system," 2011 IEEE International Conference on Robotics and Automation, Shanghai, 2011, pp. 3400-3407, doi: 10.1109/ICRA.2011.5979561.
[41] Techman Robot
https://www.tm-robot.com/zh-hant/regular-payload/, May 2020.
[42] Intel RealSense Help Center D400 Series
https://support.intelrealsense.com/hc/en-us/community/posts/360037076293-Align-color-and-depth-images, June 2022.
[43] J. Cho, S. Park and S. Chien, "Hole-Filling of RealSense Depth Images Using a Color Edge Map," in IEEE Access, vol. 8, pp. 53901-53914, 2020, doi: 10.1109/ACCESS.2020.2981378.
[44] L. Xiao, "A review of solutions for perspective-n-point problem in camera pose estimation," Journal of Physics: Conference Series, vol. 1087, Ancona, Sep. 2018.
[45] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, 1981.
[46] Intel RealSense Help Center D400 Series
https://support.intelrealsense.com/hc/en-us/community/posts/360037076293-Align-color-and-depth-images, June 2022.