Earthquakes occur frequently in Taiwan, and during an earthquake some people with mobility impairments and some elderly people may fall or be pinned under heavy objects and become unable to move. If no one finds and rescues them promptly, the situation can end in tragedy. The purpose of this research is therefore to develop a smart home robot that automatically patrols after an earthquake and carries out emergency response. The robot consists of a mobile platform carrying a six-axis robotic arm. Its main controller is a Jetson TX2 AI embedded platform, integrated with Arduino and Mbed microcontrollers and coordinated through the Robot Operating System (ROS). When the earthquake warning receiving device obtains an earthquake quick report, it immediately sends a message to the robot through its wireless communication module, while a matching module on the robot waits for that message. On receiving the earthquake message, the buzzer and LED warning light mounted on the robot are activated, the indoor map previously built with SLAM is loaded, and the robot begins a fixed-point indoor patrol. During the patrol, the Jetson TX2 runs a deep learning model together with an RGB-D depth camera for real-time image recognition and tracking.
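The alert-then-patrol flow described above can be sketched as a small state machine. This is a minimal illustration with hypothetical class and method names (the actual system runs on ROS with real hardware for the buzzer, LED, and navigation stack): an earthquake alert switches the robot from idle to patrol, the robot cycles through fixed waypoints on the SLAM map, and a person detection switches it to an approach state.

```python
# Hypothetical sketch of the earthquake-response control flow:
# IDLE -> (earthquake alert) -> PATROL -> (person detected) -> APPROACH.

class PatrolController:
    """Simplified state machine for the post-earthquake patrol."""

    def __init__(self, waypoints):
        self.waypoints = list(waypoints)  # fixed patrol points on the SLAM map
        self.state = "IDLE"
        self.buzzer_on = False
        self.led_on = False
        self.target = None

    def on_earthquake_alert(self):
        # The wireless module delivered an earthquake quick report:
        # start the warning indicators and the fixed-point patrol.
        self.buzzer_on = True
        self.led_on = True
        self.state = "PATROL"

    def next_waypoint(self):
        # Cycle through the predefined indoor patrol points.
        if self.state != "PATROL" or not self.waypoints:
            return None
        wp = self.waypoints.pop(0)
        self.waypoints.append(wp)
        return wp

    def on_person_detected(self, location):
        # Detection from the vision pipeline: head toward the person.
        self.target = location
        self.state = "APPROACH"


# Usage: an alert arrives, the patrol starts, and a fallen person is found.
ctrl = PatrolController([(1.0, 0.5), (2.5, 1.0), (0.0, 2.0)])
ctrl.on_earthquake_alert()
first = ctrl.next_waypoint()
ctrl.on_person_detected((2.1, 0.8))
```

In the real system the transitions would be driven by ROS topics (the wireless receiver, the navigation stack, and the detector node) rather than direct method calls.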
When the system detects a fallen person, the robot navigates to that location, and its movement can be monitored through the ROS monitoring interface by a caregiver. Because the fallen person may be unable to move yet still conscious, the six-axis robotic arm, guided by depth information from the camera and a pre-trained Mobilenet-SSD deep learning model, identifies the position of the person's facial features and moves within the person's reach to offer appropriate food or medicine while they wait for rescue.
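Turning a facial-feature detection into a target for the arm requires combining the detected pixel with the camera's depth reading. A hedged sketch of that step, using the standard pinhole back-projection (the intrinsics `fx`, `fy`, `cx`, `cy` below are illustrative values, not the actual camera's calibration):

```python
# Back-project a detected pixel (e.g. the mouth center from the
# Mobilenet-SSD detection) plus its measured depth into a 3D point in
# the camera frame, which can then be transformed into the arm's frame.
# Intrinsics here are illustrative, not real calibration values.

def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Standard pinhole-model back-projection of pixel (u, v) at
    depth depth_m (meters) into camera-frame coordinates (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: mouth center detected at pixel (320, 240), depth 0.8 m.
point = pixel_to_camera_point(320, 240, 0.8, fx=525.0, fy=525.0,
                              cx=319.5, cy=239.5)
```

The resulting camera-frame point would still need a hand-eye transform into the arm's base frame before the six-axis arm can move the food or medicine into reach.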