Thesis/Dissertation 103582008 — Complete Metadata Record

DC Field  Value  Language
dc.contributor  Department of Computer Science and Information Engineering (資訊工程學系)  zh_TW
dc.creator  Sih-Huei Chen (陳思卉)  zh_TW
dc.creator  Sih-Huei Chen  en_US
dc.date.accessioned  2018-08-17T07:39:07Z
dc.date.available  2018-08-17T07:39:07Z
dc.date.issued  2018
dc.identifier.uri  http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=103582008
dc.contributor.department  Department of Computer Science and Information Engineering (資訊工程學系)  zh_TW
dc.description  National Central University (國立中央大學)  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  [English translation of the Chinese abstract] This dissertation proposes three representation learning methods based on probabilistic latent variable models, covering both discrete and continuous latent spaces. For discrete latent variables, a hierarchical representation based on the Gaussian hierarchical latent Dirichlet allocation (GhLDA) is proposed to capture the latent characteristics of low-level features. We learn latent representations of data by developing a tree-structured hierarchical mixture model that adapts its own architecture, which models the subtle differences among classes well. For continuous latent variables, this dissertation proposes two representation learning methods. First, a complex-valued Gaussian process latent variable model (CGPLVM) is proposed to learn complex-valued representations of data. Its main idea is to assume that the complex-valued data are a function of corresponding low-dimensional latent variables, where this function is drawn from a complex Gaussian process. Furthermore, to preserve both the global and local structures of the data while encouraging the learned representation to be discriminative, we augment the original CGPLVM objective with a locality-preserving term and a discriminative term designed for complex-valued data. Second, a deep collaborative learning method based on a variational autoencoder (VAE) and a Gaussian process classifier (GPC) is proposed. The GPC is incorporated into the VAE so that class information is taken into account while the representation is learned and the classifier is trained simultaneously. The proposed representation distinguishes data variations among classes well and increases the discriminative power of the original VAE-based representation. The performance of the developed methods is evaluated on multimedia data; the experimental results demonstrate the superior performance of the proposed methods, especially when only a small amount of training data is available.  zh_TW
dc.description.abstract  Probabilistic frameworks have emerged as a powerful technique for representation learning. This dissertation proposes probabilistic latent variable model-based representation learning methods that involve both discrete and continuous latent spaces. For a discrete latent space, a hierarchical representation based on the Gaussian hierarchical latent Dirichlet allocation (G-hLDA) is proposed for capturing the latent characteristics of low-level features. The representation is learned by constructing an infinitely deep and branching tree-structured mixture model, which effectively models the subtle differences among classes. For a continuous latent space, a novel complex-valued latent variable model, named the complex-valued Gaussian process latent variable model (CGPLVM), is developed for discovering a compressed complex-valued representation of complex-valued data. The key concept of CGPLVM is that complex-valued data are approximated by a low-dimensional complex-valued latent representation through a function that is drawn from a complex Gaussian process. Additionally, we attempt to preserve both global and local data structures while promoting discrimination: a new objective function that incorporates a locality-preserving term and a discriminative term for complex-valued data is presented. Then, a deep collaborative learning framework based on a variational autoencoder (VAE) and a Gaussian process (GP) is proposed to represent multimedia data with greater discriminative power than previously achieved. A Gaussian process classifier is incorporated into the VAE to guide the VAE-based representation, which distinguishes variations of data among classes and achieves the dual goals of reconstruction and classification. The developed methods are evaluated using multimedia data. The experimental results demonstrate the superior performance of the proposed methods, especially in situations with only a small amount of training data.  en_US
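The deep collaborative objective described in the abstract combines reconstruction, a KL regularizer, and a supervised classification term on the latent code. The following is a minimal sketch of that joint objective under simplifying assumptions not taken from the dissertation: linear maps stand in for the deep encoder/decoder, and a softmax classifier stands in for the Gaussian process classifier; all weights are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples of dimension D, K classes, L latent dimensions
N, D, K, L = 8, 5, 2, 3
x = rng.normal(size=(N, D))
y = rng.integers(0, K, size=N)

# Hypothetical (untrained) linear encoder, decoder, and classifier weights
W_mu = rng.normal(size=(D, L))
W_logvar = rng.normal(size=(D, L)) * 0.1
W_dec = rng.normal(size=(L, D))
W_cls = rng.normal(size=(L, K))

# Encoder: q(z|x) = N(mu, diag(exp(logvar)))
mu = x @ W_mu
logvar = x @ W_logvar

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder reconstruction (Gaussian likelihood -> squared-error term)
recon = z @ W_dec
recon_loss = np.mean(np.sum((x - recon) ** 2, axis=1))

# KL divergence between q(z|x) and the standard-normal prior (closed form)
kl = np.mean(0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))

# Supervised term on the latent code (softmax stand-in for the GP classifier)
logits = z @ W_cls
logits -= logits.max(axis=1, keepdims=True)          # numerical stability
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
cls_loss = -np.mean(log_probs[np.arange(N), y])      # negative log-likelihood

# Collaborative objective: reconstruction + KL + classification
total = recon_loss + kl + cls_loss
print(np.isfinite(total))
```

Minimizing such a joint objective pushes the latent code to satisfy both goals named in the abstract: reconstructing the input and separating the classes.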
dc.subject  Latent Variable Model (潛在變數模型)  zh_TW
dc.subject  Gaussian Process (高斯過程)  zh_TW
dc.subject  Deep Learning (深度學習)  zh_TW
dc.subject  Latent Variable Model  en_US
dc.subject  Gaussian Process  en_US
dc.subject  Deep Learning  en_US
dc.title  Probabilistic Latent Variable Model for Learning Data Representation (機率型潛在變數模型於資料表示法學習)  zh_TW
dc.language.iso  zh-TW
dc.title  Probabilistic Latent Variable Model for Learning Data Representation  en_US
dc.type  Thesis/Dissertation (博碩士論文)  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
