Master's/Doctoral Thesis 105481014: Detailed Record




Author: Yen-Huei Ko (柯彥輝)    Department: Business Administration
Title: Distinguishing Customer Satisfaction with Vocal Data (運用語音數據判別客戶滿意度)
Related theses
★ Interactive recommendation on social networking sites and the effect of user behavior on its performance
★ Determining the weights of supplier-selection criteria for major server brands with AHP
★ Key factors in operations-center location selection for the smartphone industry: an AHP study
★ Performance evaluation of the solar photovoltaic industry using data envelopment analysis
★ A model for comparing the competitiveness of national solar cell industries
★ Exploring the relationship between economic indicators and import/export values with sequence mining
★ The effect of ERP project team composition on performance
★ Recommending journal articles to suitable subject categories
★ Analysis and comparison of brand stories: the traditional-taste food industry as an example
★ Comparing the factors that attract consumers to Starbucks and Cama with means-end chains
★ The entrepreneurial value of creative shops: Chifeng Street and Minsheng Community as examples
★ Predicting changes in corporate short- and long-term borrowing with leading indicators
★ Key factors in selecting keyboard suppliers for gaming laptops with the analytic hierarchy process
★ The influence of trust on knowledge sharing from the perspectives of reciprocity and altruism
★ Recommending movies with neural networks combining personality traits and poster dominant colors
★ The relationship between data-visualization charts and topics
Files: full text is permanently restricted (never open access)
Abstract (Chinese): According to research on customer purchasing behavior, customer satisfaction clearly affects repurchase intention, product sales performance, and business growth. Typically, measuring satisfaction requires customers to spend extra time filling out a post-purchase questionnaire. In the telemarketing industry, however, questionnaires still face practical limitations, so it is well worth developing a more convenient way to collect data and a tool that can assess customer satisfaction directly, without asking customers to fill out a questionnaire after purchase.
This study attempts to verify whether vocal-data analysis can be used to distinguish customer satisfaction. The study accomplished the following tasks: designed an experimental procedure to collect recordings of customers expressing satisfaction, together with ground-truth data against which those recordings can be validated; clustered participants into satisfied, neutral, and dissatisfied groups based on the collected data; extracted MFCCs (Mel-frequency cepstral coefficients) as features from the recordings; used an auto-encoder to further reduce the feature dimensionality, since the collected dataset was limited; fed the extracted MFCCs and prosodic features into LSTM-RNNs (long short-term memory recurrent neural networks) and an SVM (support vector machine) to build models that distinguish customer satisfaction; and trained and evaluated the models with nested cross-validation. The average accuracies of the SVM and LSTM models reached 73.97% and 71.95%, respectively.
Abstract (English): Customer repurchase behavior is manifestly influenced by satisfaction, and the degree of customer satisfaction affects business growth and enterprise performance. Typically, measuring satisfaction requires customers to spend additional time filling in a post-purchase survey. A more convenient way to collect data is to ask customers to express their degree of satisfaction while they are actually using the product or service they have purchased. This study strove to verify whether voices can be used to distinguish customer satisfaction. An experiment was set up to collect voices and satisfaction surveys right after participants consumed the beverage offered. Participants were clustered into satisfied, neutral, and dissatisfied groups according to the collected questionnaires. MFCCs (Mel-frequency cepstral coefficients) were extracted from the voices as features. Because the dataset was limited, an auto-encoder was used to reduce the dimensionality of the voice features. The extracted MFCCs and prosodic features were fed into LSTM-RNN (long short-term memory recurrent neural network) and SVM (support vector machine) models to distinguish customer satisfaction. Nested cross-validation was used to train and evaluate the models. The average accuracies of the SVM and LSTM models reached 73.97% and 71.95%, respectively.
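The feature-extraction step named in the abstract, MFCCs computed from voice recordings, can be illustrated with a minimal from-scratch sketch. The code below is not the thesis's pipeline (dedicated packages such as python_speech_features are commonly used instead); the frame length, filterbank size, and coefficient count are common defaults assumed for illustration, and the input is a synthetic tone standing in for a voice clip.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_fft=512, frame_len=0.025, frame_step=0.01,
         n_filt=26, n_cep=13):
    """Minimal MFCC extraction: frame -> window -> power spectrum
    -> mel filterbank -> log -> DCT."""
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + max(0, (len(signal) - flen) // fstep)
    frames = np.stack([signal[i * fstep: i * fstep + flen]
                       for i in range(n_frames)])
    frames = frames * np.hamming(flen)                 # taper each frame
    pspec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_filt + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filt, n_fft // 2 + 1))
    for j in range(1, n_filt + 1):
        l, c, r = bins[j - 1], bins[j], bins[j + 1]
        fbank[j - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    feat = np.log(pspec @ fbank.T + 1e-10)             # log filterbank energies
    return dct(feat, type=2, axis=1, norm='ortho')[:, :n_cep]

# One second of a synthetic 440 Hz tone stands in for a voice clip
t = np.arange(16000) / 16000
coeffs = mfcc(np.sin(2 * np.pi * 440 * t))
print(coeffs.shape)
```

With a 25 ms window and 10 ms step, one second of 16 kHz audio yields 98 frames of 13 coefficients each, one feature vector per frame.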
Keywords (Chinese) ★ 顧客滿意度 (customer satisfaction)
★ 梅爾頻率倒譜係數 (Mel-frequency cepstral coefficients)
★ 長短期記憶模型-遞歸循環神經網路 (long short-term memory recurrent neural network)
★ 支援向量機 (support vector machine)
Keywords (English) ★ Customer satisfaction
★ MFCCs
★ LSTM
★ SVM
Table of Contents
Chinese Abstract
Abstract
Preface
Table of Contents
List of Figures
List of Tables
Chapter I Introduction
Chapter II Related works
2-1 Expectancy confirmation theory
2-2 Applications of MFCCs
Chapter III Methodology
3-1 Research design
3-2 Construction of satisfaction dataset
3-3 MFCC processing
3-4 Auto-Encoder neural networks
3-5 LSTM model construction
3-6 SVM model construction
Chapter IV Experimental results and analysis
4-1 Experimental data collection and analysis
4-1-1 Statistical analysis and validation
4-1-2 Acoustic extraction
4-1-3 Auto-Encoder for feature reconstruction
4-2 Experimental results
4-2-1 Nested cross validation
4-2-2 Classification
Chapter V Conclusion
5-1 Implications
5-2 Limitation and further research
Acknowledgment
Reference
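Section 3-4 of the outline covers dimensionality reduction with an auto-encoder. As a schematic of the idea, compressing features through a narrow hidden layer and learning to reconstruct them, here is a minimal linear auto-encoder trained with plain gradient descent in numpy; the layer sizes, data, and training settings are illustrative assumptions, not the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 26))           # stand-in for 26-dim acoustic features
X = X - X.mean(axis=0)                   # center the data

d_in, d_hid, lr = 26, 8, 0.01
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))   # encoder weights: 26 -> 8
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))   # decoder weights: 8 -> 26

loss0 = np.mean((X @ W1 @ W2 - X) ** 2)  # reconstruction error before training
for _ in range(3000):
    H = X @ W1                           # encode
    E = H @ W2 - X                       # reconstruction error
    gW2 = H.T @ E / len(X)               # gradient w.r.t. decoder weights
    gW1 = X.T @ (E @ W2.T) / len(X)      # gradient w.r.t. encoder weights
    W1 -= lr * gW1
    W2 -= lr * gW2

loss = np.mean((X @ W1 @ W2 - X) ** 2)
code = X @ W1                            # reduced 8-dim representation
print(code.shape, loss < loss0)
```

After training, the 8-dimensional code vectors (rather than the raw 26-dimensional features) would be what gets passed to a downstream classifier, which is the role the auto-encoder plays in the pipeline described in the abstract.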

Advisor: Ping-Yu Hsu (許秉瑜)    Date of approval: 2023-01-13

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.