Abstract (English)
This paper presents an interactive home robot. The robot system covers the design of the robot architecture, image processing, the kinematics of the robot arms, and the establishment of communication protocols. The system offers six operating modes. Mode 1: record/play a message. Mode 2: follow a person. Mode 3: drinking interaction. Mode 4: pour and drink. Mode 5: event reminders. Mode 6: remote monitoring.
In Mode 1, the user records a message and sets its playback time with an App on a mobile phone. In Mode 2, the robot head tracks a person who is within the robot's field of view. In Mode 3, the robot asks the user to pour a drink into the cup held by the robot hand; once the drink in the cup reaches a preset weight, the robot drinks it. In Mode 4, the robot asks the user to place a bottle in its hand; once the bottle is detected in the hand, the robot holds the bottle and drinks. In Mode 5, the user sets times and events, and the robot reminds nearby people at the scheduled times. In Mode 6, the user remotely controls and monitors the robot head through the App. Composing information from separate services or local resources can be time-consuming, costly, and inconvenient; this research therefore integrates the communication protocols into a single effective, efficient, and easy-to-use mobile App.
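The abstract does not specify the exact protocol between the App and the robot; as one plausible sketch, the robot could expose a small HTTP endpoint that the App posts JSON commands to. The route layout and command names below are hypothetical, chosen only to mirror the six modes described above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical command names, one per mode described in the abstract.
COMMANDS = {"record", "follow", "drink_pour", "drink_bottle", "remind", "monitor"}

def handle_command(payload: dict) -> dict:
    """Validate a command sent by the App and build the robot's JSON reply."""
    cmd = payload.get("command")
    if cmd not in COMMANDS:
        return {"ok": False, "error": f"unknown command: {cmd}"}
    return {"ok": True, "command": cmd}

class RobotHandler(BaseHTTPRequestHandler):
    """Robot-side endpoint: the App POSTs {"command": ...} as JSON."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        reply = handle_command(json.loads(self.rfile.read(length) or b"{}"))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(reply).encode())

if __name__ == "__main__":
    # Listen on all interfaces so a phone on the same network can reach the robot.
    HTTPServer(("0.0.0.0", 8080), RobotHandler).serve_forever()
```

In a design like this, each mode maps to one command string, and the App only needs a generic HTTP POST helper rather than per-mode networking code.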
This research focuses on robot system integration. On the Raspberry Pi, the tasks include sending motor-control packets, face and bottle recognition, and hosting the web server. The user connects to the web server through the App.
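The abstract mentions sending motor-control packets but not their layout. Assuming the arms use Dynamixel-style servos such as the AX-12 (a common choice for this class of robot), a minimal sketch of building a Protocol 1.0 WRITE_DATA packet looks like this; the register address 30 is the AX-12 Goal Position register.

```python
def ax12_write_packet(servo_id: int, address: int, value: int) -> bytes:
    """Build a Dynamixel Protocol 1.0 WRITE_DATA packet for a 2-byte register.

    Layout: 0xFF 0xFF | ID | LENGTH | INSTRUCTION | PARAMS... | CHECKSUM
    where LENGTH = number of params + 2 and CHECKSUM = ~(sum of ID..params) & 0xFF.
    """
    params = [address, value & 0xFF, (value >> 8) & 0xFF]  # little-endian value
    length = len(params) + 2                               # instruction + checksum
    body = [servo_id, length, 0x03] + params               # 0x03 = WRITE_DATA
    checksum = (~sum(body)) & 0xFF
    return bytes([0xFF, 0xFF] + body + [checksum])

# Example: command servo 1 to the centre position (512) via Goal Position (30).
pkt = ax12_write_packet(1, 30, 512)
```

On the real robot this byte string would be written to the servo bus through the Raspberry Pi's serial port (e.g. with `pyserial`); the sketch only shows the packet construction and checksum.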