Machine learning (ML) is a subset of artificial intelligence (AI) in which classification rules or models are learned from large amounts of data and experience; when new data arrives, the learned rules or models are used to make predictions. Machine learning is advancing rapidly across many fields, and hardware accelerators for it are being extensively researched. These accelerators typically operate for long periods, which accelerates aging effects. Aging increases computational delay and, in severe cases, causes functional errors; in machine learning it manifests as a loss of accuracy, which is unacceptable in practical applications such as autonomous driving and medical models. Addressing this problem is therefore urgent.
Machine learning requires a large number of multiply-accumulate operations, and multiplication is far more time-consuming than addition. Modern multipliers are commonly built from compressors, and approximate compressor design is an active research area: given the error tolerance of machine learning, some precision can be traded for speed and power. Aging-induced delay can likewise be compensated by sacrificing precision to shorten the computation time, but existing work lacks optimizations targeted specifically at aging. In this paper, we propose an efficient, dynamically reconfigurable 4:2 approximate compressor that computes exactly before aging sets in and switches to approximate computation afterward, reducing computation time to compensate for the aging-induced delay. Experimental results show that our method keeps the accuracy of the approximate computation unchanged over 10 years of aging.
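To make the compressor terminology concrete, the sketch below models an exact 4:2 compressor at the bit level (two cascaded full adders) alongside one simple illustrative approximation that ignores the incoming carry and fixes the second carry output to zero. The approximate design shown is an assumption chosen for illustration only, not the reconfigurable compressor proposed in this paper.

```python
# Bit-level model of a 4:2 compressor. An exact compressor satisfies
#   x1 + x2 + x3 + x4 + cin == sum + 2*(carry + cout)
# for all single-bit inputs.

def exact_4to2(x1, x2, x3, x4, cin):
    """Exact 4:2 compressor built from two cascaded full adders."""
    s1 = x1 ^ x2 ^ x3                      # first full adder: sum
    cout = (x1 & x2) | (x3 & (x1 ^ x2))    # first full adder: carry
    s = s1 ^ x4 ^ cin                      # second full adder: sum
    carry = (s1 & x4) | (cin & (s1 ^ x4))  # second full adder: carry
    return s, carry, cout

def approx_4to2(x1, x2, x3, x4, cin):
    """Illustrative approximate 4:2 compressor (an assumption, not the
    design proposed here): cin is ignored and cout is fixed to 0, which
    shortens the critical path to a single gate level at the cost of a
    bounded arithmetic error."""
    s = (x1 ^ x2) | (x3 ^ x4)
    carry = (x1 & x2) | (x3 & x4)
    return s, carry, 0
```

Exhaustively checking all 16 combinations of x1..x4 (with cin = 0) shows this approximation errs on 5 of the 16 input patterns, with a maximum error magnitude of 2. This is the kind of trade-off exploited when a circuit switches from exact to approximate mode to offset aging-induced delay.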