During software development, load testing is commonly used to verify that a system can sustain the load capacity stated in its specification, while stress testing checks whether the system still behaves correctly once the workload exceeds that capacity. Both kinds of testing require simulating a large number of concurrent users.

The Software Testing Portal is a TaaS (Test as a Service) platform developed by our team to provide third-party testing services. The platform uses the open source testing tool JMeter to run load/stress tests that simulate concurrent users accessing the system under test. To generate a large number of concurrent users, this study designs a distributed architecture for automated stress testing, so that the TaaS platform can automatically provision and allocate multiple servers to testers to produce the concurrent users a load/stress test requires, without testers having to consider the underlying design or configuration details.

During stress testing, testers usually imitate how real users interact with the system in order to locate potential bottlenecks. For a large-scale system, however, the number of concurrent users to simulate is enormous and consumes considerable testing resources. This study therefore proposes a method that observes the resource consumption of the system under test while the workload is gradually increased, and then uses the resulting resource-consumption curves to estimate how much of each resource the system is likely to use. The goal is to identify the system's probable stress threshold with a minimum of testing equipment and thereby help the testing team trace the causes of potential bottlenecks.

In order to ensure that a software system can survive a large number of concurrent users, load testing and stress testing are the two major approaches to verifying such a non-functional requirement, and both require mimicking many concurrent users with stress testing tools. In the past, a TaaS (Test as a Service) portal was built to support third-party independent testing services. This portal adopts the open source stress testing tool “JMeter” to build a transparent service that supports a large number of concurrent users while hiding the setup and implementation details from the testers. To simulate numerous concurrent users, the TaaS portal can automatically assign many testing servers to execute load/stress tests at the same time. Moreover, to uncover a system's scalability problems, emulating a number of concurrent users close to the real scenario is the most faithful solution, but it is expensive and may require a lot of computing resources. In this thesis, we propose an approach that avoids this problem: instead of simulating the full number of concurrent users, we install various sensors on the system under test. By gradually increasing the workload and inspecting the system's resource-consumption history, we can observe the growth rate of resource usage and predict what will happen when the expected number of concurrent users is reached.
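As a rough illustration of how the portal's automatic server allocation could drive a JMeter distributed run, the sketch below launches a test plan on a set of remote load generators. The host addresses, file names, and the `run_distributed_test` helper are hypothetical assumptions, not the portal's actual implementation; only the standard JMeter command-line options (`-n`, `-t`, `-R`, `-l`) are taken from the JMeter CLI itself.

```python
import subprocess

def run_distributed_test(test_plan, remote_hosts, results_file):
    """Launch a JMeter test plan in non-GUI mode on a set of
    remote load-generator hosts (JMeter distributed mode)."""
    cmd = [
        "jmeter",
        "-n",                          # non-GUI mode
        "-t", test_plan,               # .jmx test plan to execute
        "-R", ",".join(remote_hosts),  # remote jmeter-server hosts
        "-l", results_file,            # sample log written locally
    ]
    subprocess.run(cmd, check=True)

# Hypothetical hosts the TaaS portal might have allocated to a tester.
run_distributed_test(
    test_plan="login_stress.jmx",
    remote_hosts=["10.0.0.11", "10.0.0.12", "10.0.0.13"],
    results_file="results.jtl",
)
```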
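To make the resource-growth estimation idea concrete, here is a minimal sketch assuming CPU utilisation is sampled by a sensor at a few workload levels, fitted with a simple linear curve, and extrapolated to the target concurrency. The sample numbers, the linear growth model, and the 100% threshold check are illustrative assumptions, not measurements or the thesis's actual estimation model.

```python
import numpy as np

# Observed samples: (concurrent users, CPU utilisation %) collected by
# sensors on the system under test while the workload was increased.
users = np.array([100, 200, 400, 800, 1600])
cpu = np.array([8.0, 15.5, 31.0, 63.0, 95.0])

# Fit a simple growth curve (here: a straight line) to the observations.
slope, intercept = np.polyfit(users, cpu, deg=1)

# Extrapolate to the expected production concurrency.
expected_users = 3000
predicted_cpu = slope * expected_users + intercept
print(f"Predicted CPU at {expected_users} users: {predicted_cpu:.1f}%")

# A prediction beyond 100% suggests the stress threshold lies below the
# expected concurrency, pointing the test team at a CPU bottleneck.
if predicted_cpu > 100:
    print("CPU is a likely bottleneck before the target load is reached.")
```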