| Abstract: | Non-intrusive load monitoring (NILM) aims to infer the power usage of individual appliances from a single aggregate measurement. With the growth of smart grids and the IoT, NILM enables users to understand their detailed energy consumption and improve energy efficiency. However, conventional NILM approaches face challenges such as overlapping appliance signals, limited labeled data, and non-stationary load characteristics. To address these issues, this thesis proposes a deep learning architecture based on a discrete codebook for data augmentation and load disaggregation in NILM. We first perform event-based segmentation of appliance activation periods and build a single-appliance data generator that combines a Vector Quantized Variational Autoencoder (VQ-VAE) with a Masked Generative Transformer (MaskGIT). The generator learns a discrete codebook of appliance power patterns and can synthesize realistic appliance load sequences for data augmentation. For load disaggregation, we then design a multi-appliance model consisting of a shared wavelet-transform encoder, appliance-specific adaptation layers and quantizers, and classifiers for appliance-state prediction.
By leveraging shared time-frequency feature extraction and enforcing appliance-specific codebook constraints, the proposed model achieves accurate disaggregation of the total load. Experiments on UK-DALE, REDD, and REFIT household datasets demonstrate that our method outperforms traditional baseline models in appliance recognition accuracy, achieving about 10–15% higher F1-scores, and maintains robust performance under limited data. Ablation studies further confirm the contribution of the discrete codebook, data augmentation, and wavelet encoder to the model’s effectiveness. In summary, this research shows that integrating discrete representation learning with generative models can effectively enhance data augmentation and disaggregation in NILM, offering a novel solution for smart energy analytics. |
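The discrete-codebook idea at the core of the proposed generator is the VQ-VAE quantization step: each latent frame produced by the encoder is snapped to its nearest codeword, yielding a sequence of discrete tokens that a MaskGIT-style transformer can then model. A minimal numpy sketch of that nearest-codeword lookup is shown below; all sizes, names, and the random toy data are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-codeword lookup: map each encoder output vector to the
    closest entry in a learned codebook (the VQ step of a VQ-VAE)."""
    # z: (T, d) latent frames; codebook: (K, d) codeword embeddings
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K) squared distances
    idx = d2.argmin(axis=1)           # one discrete code index per frame
    return codebook[idx], idx         # quantized vectors and their token ids

# Toy example: K=8 codewords of dimension 4, 16 latent frames (hypothetical sizes).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))
z = rng.normal(size=(16, 4))
zq, tokens = quantize(z, codebook)    # `tokens` is what MaskGIT would model
```

In a full VQ-VAE, the codebook is trained jointly with the encoder/decoder (with a straight-through gradient through the non-differentiable `argmin`); the sketch shows only the inference-time lookup.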
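The disaggregation architecture described above (shared time-frequency encoder, then per-appliance adaptation, quantization, and state classification) can be sketched as a small numpy pipeline. The Haar-like two-band split stands in for the wavelet encoder, and all weights here are random placeholders; this is a shape-level illustration of the data flow under stated assumptions, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_encoder(x):
    # Stand-in for the shared wavelet-transform encoder: a one-level
    # Haar-like split of the aggregate signal into approx/detail bands.
    a = (x[0::2] + x[1::2]) / 2.0     # low-frequency (approximation) band
    d = (x[0::2] - x[1::2]) / 2.0     # high-frequency (detail) band
    return np.stack([a, d], axis=-1)  # (T/2, 2) time-frequency features

def appliance_head(h, W_adapt, codebook, W_cls):
    # Appliance-specific adaptation layer -> discrete quantizer -> classifier.
    z = h @ W_adapt                                     # adapt shared features
    d2 = ((z[:, None, :] - codebook[None]) ** 2).sum(-1)
    zq = codebook[d2.argmin(axis=1)]                    # nearest-codeword quantization
    logits = zq @ W_cls
    return logits.argmax(-1)                            # predicted state per frame

# Toy run: one 32-sample aggregate window, one appliance head (hypothetical sizes).
x = rng.normal(size=(32,))
h = shared_encoder(x)
states = appliance_head(h,
                        W_adapt=rng.normal(size=(2, 4)),
                        codebook=rng.normal(size=(8, 4)),  # appliance-specific codebook
                        W_cls=rng.normal(size=(4, 2)))     # binary on/off classifier
```

One head per target appliance would share `shared_encoder` while keeping its own adaptation weights and codebook, which is the weight-sharing/constraint structure the abstract describes.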