The support vector machine (SVM) is an effective binary classifier with excellent performance across various applications. However, computing limitations and data privacy concerns can hinder SVM's performance when handling large datasets or data from distributed sources. In this thesis, we study the distributed consensus SVM, which offers two primary advantages: it preserves data privacy by allowing each worker to derive a more generalized hyperplane without sharing data, and it decomposes large-scale problems into manageable subproblems, improving processing speed through distributed computing. Nonetheless, the distributed consensus SVM is challenging to solve because its objective function is non-differentiable. We adopt the smoothing SVM approach to address this issue, incorporating a smoothing function that makes the objective function differentiable. This leads to the so-called distributed consensus smoothing SVM, which employs the 1-norm for the penalty term to improve both efficiency and accuracy. Finally, we validate the performance of this algorithm through several numerical experiments.
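To make the two ingredients of the abstract concrete, the sketch below shows the standard consensus formulation and the smoothing device commonly used in the smoothing SVM literature (Lee and Mangasarian's SSVM). The notation here ($m$ workers, local weight vectors $w_j$, consensus variable $z$, smoothing parameter $a$) is an illustrative assumption, not necessarily the thesis's own formulation or its exact use of the 1-norm penalty.

% Illustrative sketch (assumed notation): worker j holds local data D_j
% and fits its own hyperplane w_j, constrained to agree with a shared
% consensus variable z, so no raw data is ever exchanged.
\[
  \min_{w_1,\dots,w_m,\; z} \;\sum_{j=1}^{m}
    \Bigl( \tfrac{1}{2}\,\lVert w_j \rVert^2
      + C \sum_{i \in \mathcal{D}_j} \bigl( 1 - y_i\, w_j^{\top} x_i \bigr)_+ \Bigr)
  \quad \text{subject to } w_j = z,\; j = 1,\dots,m.
\]
% The plus function (x)_+ = max{x, 0} in the hinge loss is not
% differentiable at 0. The smoothing SVM approach replaces it with the
% smooth approximation (smoothing parameter a > 0)
\[
  p(x, a) \;=\; x + \frac{1}{a}\,\log\bigl( 1 + e^{-a x} \bigr),
\]
% which converges to (x)_+ as a tends to infinity, yielding a
% differentiable objective amenable to gradient- or Newton-type methods.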