Abstract:

As modern lifestyles grow increasingly fast-paced, coffee has become an essential beverage in many busy mornings. Capsule coffee machines are widely popular for their ease of use and variety of flavors; however, most automated coffee systems on the market focus on espresso brewing workflows, and few offer integrated automatic recognition and handling for coffee capsules. This thesis therefore designs and implements an automated capsule coffee brewing system that combines image processing with robotic arm control, achieving end-to-end automation from user interface operation to completed brewing and improving everyday convenience and operating efficiency.

The system comprises four modules: a user interface, a capsule type recognition module, a capsule pose analysis module, and a six-axis robotic arm control module. Users select the desired capsule type through a web page; the system then captures an image of the storage tray with a camera and applies an object detection algorithm to identify the type and position of each capsule. For the selected target capsule, the system crops the image and applies further processing, including grayscale conversion, bilateral filtering, and edge detection, to analyze the capsule's pose in the tray and convert it into the gripping angle and position required by the gripper.

To handle cases where crowded capsules cannot be grasped directly, the system grades the surrounding conditions by the spacing between bounding boxes and selects an appropriate strategy: grasping directly, approaching and slightly nudging neighboring capsules aside, or using a pusher rod to clear surrounding capsules, thereby creating enough space to grasp the target. The entire workflow runs automatically without manual intervention. In addition to capsule selection, the simple web interface supports natural language input and recommends suitable capsule types based on the stated preferences, improving the interactive experience.

Experiments evaluate the system's recognition and operating performance. The object detection model achieves 99.5% precision and 99.8% recall on the test data; pose analysis reaches 97.6% accuracy; and 73 of 80 automated brewing runs complete successfully, for an overall success rate of 91.25%. The system is stable and usable, with potential for deployment in homes or self-service settings. Future work may incorporate depth cameras, fixed lighting, and automatic strategy adjustment to further improve performance and adaptability.
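The pose-analysis steps named above (grayscale conversion, bilateral filtering, edge detection) can be illustrated with a minimal OpenCV sketch. This is not the thesis's implementation: the function name `estimate_capsule_pose`, the filter and edge thresholds, and the rotated-rectangle fit are assumptions for illustration, and the mapping from image coordinates to gripper coordinates is left to a separate calibration step.

```python
import cv2


def estimate_capsule_pose(tray_image, bbox):
    """Sketch: estimate one capsule's in-tray orientation from its detection box.

    tray_image : BGR image of the capsule tray captured by the camera.
    bbox       : (x, y, w, h) bounding box from the object detector.
    Returns (cx, cy, angle_deg) in image coordinates; the caller would map
    these into gripper coordinates with its own camera-to-robot calibration.
    """
    x, y, w, h = bbox
    crop = tray_image[y:y + h, x:x + w]

    # Grayscale conversion, bilateral filtering, and edge detection,
    # mirroring the processing steps described in the abstract.
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    smooth = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    edges = cv2.Canny(smooth, 50, 150)

    # Fit a rotated rectangle to the largest contour as a rough proxy for
    # the capsule's pose (an assumption; the thesis may use another method).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (rx, ry), _, angle = cv2.minAreaRect(largest)

    # Convert the center back to full-image coordinates.
    return (x + rx, y + ry, angle)
```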
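Likewise, the spacing-based strategy selection can be sketched as a tiered check over detector bounding boxes. The thresholds `CLEAR_GAP` and `TIGHT_GAP` and the helper names are hypothetical placeholders; the abstract does not state the actual spacing tiers or units used by the system.

```python
from enum import Enum


class GraspStrategy(Enum):
    DIRECT_GRASP = "grasp the target directly"
    NUDGE_NEIGHBOR = "approach and slightly nudge neighboring capsules"
    PUSH_ROD = "clear surrounding capsules with the pusher rod"


# Illustrative thresholds in pixels (assumed values, not from the thesis).
CLEAR_GAP = 40
TIGHT_GAP = 15


def min_gap_to_neighbors(target_box, other_boxes):
    """Smallest edge-to-edge distance between the target box and any other box."""
    tx, ty, tw, th = target_box
    gaps = []
    for (x, y, w, h) in other_boxes:
        dx = max(x - (tx + tw), tx - (x + w), 0)
        dy = max(y - (ty + th), ty - (y + h), 0)
        gaps.append((dx ** 2 + dy ** 2) ** 0.5)
    return min(gaps) if gaps else float("inf")


def choose_strategy(target_box, other_boxes):
    """Grade crowding around the target capsule and pick a grasping strategy."""
    gap = min_gap_to_neighbors(target_box, other_boxes)
    if gap >= CLEAR_GAP:
        return GraspStrategy.DIRECT_GRASP
    if gap >= TIGHT_GAP:
        return GraspStrategy.NUDGE_NEIGHBOR
    return GraspStrategy.PUSH_ROD
```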