Detailed Record of Thesis 995205006




Name: Chen-Wei Fan (范振維)   Graduate Institute: Graduate Institute of Software Engineering
Thesis Title: CRUnit - Capture / Replay Based Unit Testing
Related Theses
★ Locating Interested Code by Program Execution Paths with Debugger
★ An OpenStack Based Testing as a Service Platform
★ Visualize Ripple Effect with Analyzing Object-Oriented Design Relationship
★ Change History Tracing Tool for Arbitrary Programming Language
★ Virtual Objects for Program Visualization in xDIVA
★ Enhance Stress Testing Power by Synchronizing JMeter Test Scripts
★ Implementing XThreadDebugger-Linux (XTD-Linux) with GDB to Provide Multi-Threaded Debugging on the Linux Platform
★ A Document Authoring Tool with Version Control System Support
★ Korat: An O.S.-independent Capture/Replay Test Automation System
★ GUI Component Detection for Cross-Platform Applications - Using Input Device and Image Change Synergistic Detection Method
★ Applying Design Patterns to RPA Software: Implementing a Low-Maintenance Automatic Property-Panel Correction Feature
★ Detecting Race Conditions in JavaScript Applications with Static Analysis
★ Refactoring and Optimizing xDiva Based on Object-Oriented and Clean Code Concepts
★ An xDIVA-Based Visualization Tool for Animating 3D Objects with Keyframes and Saving Them in Real Time
★ CoolPCB: An Automated Control-Point-Based Method for Drawing PCB Cutting Shapes
★ A Tool that Uses the Software UI to Add Extended Functionality and Overcome Source-Code Restrictions
  1. This electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) When an xUnit testing framework is introduced into the software development process, a great deal of effort is usually required to refactor the SUT (System Under Test) before unit testing can proceed smoothly. Although both are forms of software testing, unit testing, unlike system integration testing, cannot simply be automated with tools; instead, it requires developers to hand-write a large amount of additional test code. Unfortunately, this hand-written test code, just like the system code itself, easily falls into a code-maintenance nightmare.
In this thesis, we propose a new approach to the problem of the extra cost developers must pay when adopting an xUnit testing framework, and based on this approach we build a tool called CRUnit (Capture / Replay based Unit Testing). CRUnit is an extension of the JUnit tool in the Eclipse Integrated Development Environment (IDE). With the help of a debugger, it divides unit testing into two phases, capture and replay, to replace the large volume of assertion code that a traditional xUnit testing framework requires. CRUnit breaks through the limitation of traditional xUnit testing frameworks, which must treat the SUT as a black box for testing: with the debugger's help, it can probe and verify the internal state of the SUT. As a result, we no longer need to write lengthy and tedious assertions in the test code, nor do we need to violate the black-box principle by breaking the encapsulation of the CUT (Class Under Test) for testing purposes. We exploit the natural human talent for judging and verifying what is seen with the eyes and the brain, together with the help of visualization tools (visualizers), to accomplish this kind of semi-automated unit testing.
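To make the idea of probing a CUT's private state without breaking its encapsulation concrete, the sketch below uses Java reflection as a stand-in for what a debugger does at a breakpoint. This is illustrative only: CRUnit itself works through the Eclipse debug API, not reflection, and the `Counter` class here is hypothetical.

```java
// Illustrative sketch (not the thesis's implementation): observing a CUT's
// private state from the outside, the way a debugger can, without adding
// test-oriented accessors to the class. All names here are hypothetical.
import java.lang.reflect.Field;

public class ProbeSketch {
    static class Counter {
        private int count;           // private: invisible to a black-box test
        void increment() { count++; }
    }

    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        c.increment();
        c.increment();

        // Read the private field reflectively, analogous to how a debugger
        // inspects internal state when execution is suspended at a breakpoint.
        Field f = Counter.class.getDeclaredField("count");
        f.setAccessible(true);
        int observed = f.getInt(c);

        System.out.println("observed count = " + observed);  // prints 2
    }
}
```

The point of the sketch is that the state becomes observable without adding a public getter to `Counter`, which is exactly the encapsulation-preserving property the abstract attributes to debugger-assisted probing.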
Abstract (English) Adopting an xUnit testing framework in software development often requires a lot of refactoring of the system under test (SUT). In contrast to system testing, which can be performed by tools, xUnit testing is a coding activity that produces test code as its deliverable. However, test code often suffers from the same maintenance problems as system code.
In this paper, a prototype tool called CRUnit is proposed to reduce the test overhead of adopting an xUnit testing framework. CRUnit is an extension to the JUnit module in the Eclipse IDE that replaces hand-crafted assertions with a Capture / Replay process, with help from debuggers. In contrast to xUnit testing frameworks, which treat an SUT as a black box, CRUnit probes the internal states of an SUT, so that complicated hand-crafted assertions can be excluded from test methods and class encapsulation principles are no longer compromised. This semi-automated process is achieved by combining the verification power of the human brain and eyes with help from the "visualizers".
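The hand-crafted assertion overhead that both abstracts describe can be sketched as follows. This example is not from the thesis; the `BoundedStack` class and its checks are hypothetical, and a tiny `check` helper stands in for a JUnit assertion so the sketch is self-contained. It shows how a conventional black-box unit test must spell out every expected property explicitly against the CUT's public interface.

```java
// Illustrative sketch (not from the thesis): the kind of hand-written
// assertion code that conventional xUnit-style testing requires.
import java.util.ArrayDeque;
import java.util.Deque;

public class StackTestSketch {
    // A trivial CUT (Class Under Test): a bounded stack backed by a Deque.
    static class BoundedStack {
        private final Deque<Integer> items = new ArrayDeque<>();
        private final int capacity;
        BoundedStack(int capacity) { this.capacity = capacity; }
        void push(int v) {
            if (items.size() >= capacity) throw new IllegalStateException("full");
            items.push(v);
        }
        int pop() { return items.pop(); }
        int size() { return items.size(); }
        // The backing Deque stays private: a black-box test can observe
        // the stack only through push/pop/size.
    }

    public static void main(String[] args) {
        BoundedStack s = new BoundedStack(2);
        s.push(1);
        s.push(2);
        // Hand-crafted assertions: every property the tester cares about
        // must be written out explicitly against the public interface.
        check(s.size() == 2, "size after two pushes");
        check(s.pop() == 2, "LIFO order: last pushed popped first");
        check(s.pop() == 1, "LIFO order: first pushed popped last");
        check(s.size() == 0, "empty after popping everything");
        System.out.println("all checks passed");
    }

    // Stand-in for a JUnit assertEquals/assertTrue call.
    private static void check(boolean cond, String what) {
        if (!cond) throw new AssertionError("failed: " + what);
    }
}
```

Even for this four-line CUT, the verification logic dominates the test method; the thesis's capture/replay approach targets exactly this category of code.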
Keywords (Chinese): ★ 單元測試 (unit testing)
Keywords (English): ★ Capture / Replay ★ unit testing
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
List of Figures and Tables iv
1. Introduction 1
2. Background 4
2-1. Overview of the xUnit Testing Framework 5
2-2. Test Oracle in the xUnit Testing Framework 6
2-3. Test Overhead 7
2-3-1. Improving the Testability of the CUT 7
2-3-2. Refactoring Hard-to-Test CUTs 8
2-3-3. Summary of Test Overhead 9
2-4. Techniques for Overcoming Test Overhead 10
2-5. Capture / Replay Techniques 11
3. Methodology 13
3-1. Assertion Code Constrained by the CUT Interface 13
3-2. Test-Oriented Methods 16
3-3. Capture / Replay-Style Unit Testing 18
3-3-1. Visualization: Seeing Is Believing 18
3-3-2. CRUnit in Capture Mode 19
3-3-3. CRUnit in Replay Mode 22
4. System Architecture and Implementation 23
4-1. Implementation Platform 23
4-2. Capture Mode Architecture Overview 23
4-3. Replay Mode Architecture Overview 24
4-4. Implementation Details 25
4-4-1. Obtaining Debug Information from the Eclipse Platform 25
4-4-2. Visualizer Extension Architecture 28
5. Evaluation 30
5-1. Approach 30
5-2. Results and Discussion 31
5-3. Validity 32
6. Conclusion 33
References 35
References
[1] K. Beck, Test-Driven Development: By Example. The Addison-Wesley Signature Series, Addison-Wesley, 2003.
[2] K. Beck, JUnit – pocket guide: quick lookup and advice, O’Reilly, 2004.
[3] S. Fraser, D. Astels, K. Beck, B. W. Boehm, J. D. McGregor, J. Newkirk, and C. Poole, “Discipline and practices of TDD: (test driven development),” in OOPSLA Companion (R. Crocker and G. L. S. Jr., eds.), pp. 268–270, ACM, 2003.
[4] G. Meszaros, xUnit Test Patterns: Refactoring Test Code. Pearson Education, 2007.
[5] M. Hevery, “Tutorial: How to write hard to test code and what to look for when reviewing other people’s hard to test code,” in OOPSLA Companion (S. Arora and G. T. Leavens, eds.), ACM, 2009.
[6] R. Osherove, The Art of Unit Testing: With Examples in .Net. Manning Pubs Co Series, Manning, 2009.
[7] D. J. Richardson, S. L. Aha, and T. O. O’Malley, “Specification-based test oracles for reactive systems,” in ICSE (T. Montgomery, L. A. Clarke, and C. Ghezzi, eds.), pp. 105–118, ACM Press, 1992.
[8] D. Peters and D. L. Parnas, “Generating a test oracle from program documentation: work in progress,” in Proceedings of the 1994 ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA ’94, (New York, NY, USA), pp. 58–65, ACM, 1994.
[9] Y.-P. Cheng, H.-Y. Tsai, C.-S. Wang, and C.-H. Hsueh, “xDIVA: automatic animation between debugging break points,” in SOFTVIS (A. Telea, C. Görg, and S. P. Reiss, eds.), pp. 221–222, ACM, 2010.
[10] Y.-P. Cheng, J.-F. Chen, M.-C. Chiu, N.-W. Lai, and C.-C. Tseng, “xDIVA: a debugging visualization system with composable visualization metaphors,” in OOPSLA Companion (G. E. Harris, ed.), pp. 807–810, ACM, 2008.
[11] E. Horowitz, S. Sahni, and S. Anderson-Freed, Fundamentals of Data Structures in C. Silicon Press, 2007.
[12] Y. Cheon and G. T. Leavens, “A simple and practical approach to unit testing: The JML and JUnit way,” in ECOOP (B. Magnusson, ed.), vol. 2374 of Lecture Notes in Computer Science, pp. 231–255, Springer, 2002.
[13] Y. Cheon, M. Kim, and A. Perumandla, “A complete automation of unit testing for java programs,” in Software Engineering Research and Practice (H. R. Arabnia and H. Reza, eds.), pp. 290–295, CSREA Press, 2005.
[14] T. Xie and D. Notkin, “Tool-assisted unit-test generation and selection based on operational abstractions,” Autom. Softw. Eng., vol. 13, no. 3, pp. 345–371, 2006.
[15] S. Thummalapenta, M. R. Marri, T. Xie, N. Tillmann, and J. de Halleux, “Retrofitting unit tests for parameterized unit testing,” in FASE (D. Giannakopoulou and F. Orejas, eds.), vol. 6603 of Lecture Notes in Computer Science, pp. 294–309, Springer, 2011.
[16] N. Tillmann and J. de Halleux, “Pex - white box test generation for .NET,” in TAP (B. Beckert and R. Hähnle, eds.), vol. 4966 of Lecture Notes in Computer Science, pp. 134–153, Springer, 2008.
[17] J. Steven, P. Chandra, B. Fleck, and A. Podgurski, “jRapture: A capture/replay tool for observation based testing,” in ISSTA, pp. 158–167, 2000.
[18] S. G. Elbaum, H. N. Chin, M. B. Dwyer, and M. Jorde, “Carving and replaying differential unit test cases from system test cases,” IEEE Trans. Software Eng., vol. 35, no. 1, pp. 29–45, 2009.
[19] D. F. Redmiles, T. Ellman, and A. Zisman, eds., 20th IEEE/ACM International Conference on Automated Software Engineering (ASE 2005), November 7-11, 2005, Long Beach, CA, USA, ACM, 2005.
[20] A. Orso and B. Kennedy, “Selective capture and replay of program executions,” ACM SIGSOFT Software Engineering Notes, vol. 30, no. 4, pp. 1–7, 2005.
[21] “Notes on the Eclipse Plug-in Architecture.” Retrieved June 14, 2012, from Eclipse.org: http://www.eclipse.org/articles/Article-Plug-in-architecture/plugin_architecture.html
[22] WinRunner. Retrieved December 27, 2010, from: http://www.loadtest.com.au/Technology/winrunner.htm
[23] Rational Tester. (2010). Retrieved December 27, 2010, from: http://www-01.ibm.com/software/awdtools/tester/functional/
[24] Squish. (2010). Retrieved December 28, 2010, from froglogic - Squish: http://www.froglogic.com/products/index.php
[25] Test Complete. (2010). Retrieved December 28, 2010, from Test Complete version 8: http://www.automatedqa.com/products/testcomplete/
[26] Graphviz. (2012). Retrieved May 1, 2012, from Graphviz.org: http://www.graphviz.org/
[27] M. Feathers, Working Effectively with Legacy Code. Prentice Hall PTR, 2004.
[28] Unit Testing Tools by Typemock. (2012). Retrieved June 20, 2012, from Typemock Isolator: http://www.typemock.com/typemock-isolator-product3
[29] A developer testing toolkit for Java. (2012). Retrieved June 20, 2012, from JMockit: http://code.google.com/p/jmockit/
[30] Supported mscorlib types. (2012). Retrieved June 21, 2012, from Typemock Isolator: http://www.typemock.com/mscorlib-types
[31] Examples of Test Oracles. (2012). Retrieved June 23, 2012, from Center for Software Testing Education & Research: http://www.testingeducation.org/k04/OracleExamples.htm
[32] Platform Debug Model. (2012). Retrieved June 24, 2012, from Eclipse Documentation: http://help.eclipse.org/galileo/index.jsp?topic=/org.eclipse.platform.doc.isv/guide/debug_model.htm
[33] DebugEvent. (2012). Retrieved June 25, 2012, from Eclipse Platform API Specification: http://help.eclipse.org/indigo/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/debug/core/DebugEvent.html
Advisor: Yung-Pin Cheng (鄭永斌)   Date of Approval: 2012-07-12

For questions about this thesis, please contact the Extension Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.