KPIs in Performance Testing
In performance testing, Key Performance Indicators (KPIs) play a pivotal role in assessing and ensuring the optimal functioning of applications under various conditions. This topic explains what KPIs are, their significance in performance testing, and how they contribute to delivering a seamless user experience.
What are KPIs in Performance Testing?
KPIs are measurable metrics that quantify the performance and effectiveness of a system. In the context of performance testing, these indicators serve as benchmarks to evaluate the behavior of applications under different loads, ensuring they meet predefined performance criteria. KPIs provide insights into the performance, reliability, and efficiency of applications. Some of the most common KPIs are:
- Response Time
- Throughput
- Error Rate
- Concurrent Users
- Transactions Per Second (TPS)
- Resource Utilization (CPU, Memory, Disk I/O)
- Latency
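To make these metrics concrete, the following is a minimal sketch of how a few common KPIs could be derived from raw request samples. It is not BlazeMeter-specific; the sample data, field names, and the `Sample`/`compute_kpis` helpers are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One request result from a hypothetical load-test run."""
    elapsed_ms: float   # response time in milliseconds
    success: bool       # whether the request completed without error

def compute_kpis(samples: list[Sample], test_duration_s: float) -> dict:
    """Derive common KPIs (response time, throughput, error rate) from raw samples."""
    total = len(samples)
    errors = sum(1 for s in samples if not s.success)
    avg_rt = sum(s.elapsed_ms for s in samples) / total
    return {
        "responseTime.avg (ms)": avg_rt,
        "throughput (req/s)": total / test_duration_s,
        "errors.percent (%)": 100.0 * errors / total,
        "errors.rate (errors/s)": errors / test_duration_s,
    }

# Illustrative data: three fast successes and one slow failure over a 2-second window.
samples = [Sample(120, True), Sample(95, True), Sample(110, True), Sample(850, False)]
print(compute_kpis(samples, test_duration_s=2.0))
```

In a real test, the samples would come from the load generator's result log rather than hard-coded values, but the derivation of each KPI is the same.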
Why are KPIs Important in Performance Testing?
KPIs help testers with the following:
- Performance Benchmarking: KPIs set the standard for performance by establishing benchmarks that define acceptable response times, throughput, and resource utilization. These benchmarks serve as a reference point for evaluating the system's efficiency (see the sketch after this list).
- Early Detection of Issues: Monitoring KPIs during performance testing allows early identification of potential bottlenecks, latency issues, or resource constraints. This early detection enables proactive measures to be taken before these issues impact end-users.
- User Experience Optimization: KPIs are instrumental in assessing the user experience. Metrics such as response time, error rates, and transaction rates directly impact how end-users perceive the application's performance. By optimizing these KPIs, organizations enhance user satisfaction.
- Capacity Planning: KPIs aid in capacity planning by helping organizations understand how the system performs under different workloads. This information is invaluable for scaling resources to accommodate increasing user loads and ensuring system stability.
- Resource Utilization Analysis: Throughput, resource consumption, and transaction rates are critical KPIs for analyzing how efficiently system resources, such as CPU, memory, and network, are utilized. This analysis guides optimizations for better resource management.
- Objective Performance Evaluation: KPIs provide an objective and quantifiable measure of performance. This objectivity is crucial for performance testing teams, developers, and stakeholders to align their expectations and goals based on concrete data.
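The benchmarking and early-detection points above come down to comparing measured KPI values against agreed thresholds. The sketch below illustrates that comparison; the threshold values, KPI names, and the `evaluate` helper are assumptions chosen for the example, not BlazeMeter defaults.

```python
# Hypothetical benchmark thresholds; real values come from your own SLAs/SLOs.
THRESHOLDS = {
    "responseTime.avg (ms)": 300.0,   # fail if average response time exceeds 300 ms
    "errors.percent (%)": 1.0,        # fail if more than 1% of requests error out
}

def evaluate(kpis: dict, thresholds: dict = THRESHOLDS) -> list[str]:
    """Return human-readable failures for every KPI that breaches its threshold."""
    failures = []
    for name, limit in thresholds.items():
        value = kpis.get(name)
        if value is not None and value > limit:
            failures.append(f"{name}: measured {value:.1f} exceeds limit {limit:.1f}")
    return failures

# Illustrative measured values, e.g. produced by a KPI computation like the earlier sketch.
measured = {"responseTime.avg (ms)": 293.75, "errors.percent (%)": 25.0}
for failure in evaluate(measured):
    print("FAIL:", failure)
```

Running such a check during the test, rather than only at the end, is what enables the early detection described above.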
BlazeMeter KPIs
In BlazeMeter, you set KPIs in the Failure Criteria section of the test configuration page.
Name | Description | When to Use |
---|---|---|
connectTime.avg (ms) | Average time taken for establishing a connection in milliseconds. | Use to evaluate the efficiency of connection establishment in the system. |
duration.count (s) | Total duration of the performance test in seconds. | Use to understand the overall time taken for the performance test execution. |
errors.count (number) | Total count of errors observed during the performance test. | Use to identify and analyze the number of errors impacting the system.
errors.percent (%) | Percentage of errors in relation to the total number of requests. | Use to assess the impact of errors on the overall performance of the system. |
errors.rate (errors/s) | Rate of errors observed per second during the performance test. | Use to monitor the frequency of errors over time. |
hits.avg (hits/s) | Average number of hits (successful requests) per second. | Use to evaluate the rate of successful interactions with the system. |
hits.count (number) | Total count of successful hits during the performance test. | Use to assess the overall success rate of interactions with the system.
hits.rate (hits/s) | Rate of successful hits (interactions) per second. | Use to monitor the frequency of successful interactions over time. |
latency.avg (ms) | Average latency time, measuring the time it takes for the system to respond to a request. | Use to assess the overall responsiveness of the system. |
responseTime.avg (ms) | Average response time in milliseconds. | Use to assess the overall responsiveness of the application. |
responseTime.max (ms) | Maximum response time observed during the performance test. | Use to identify the worst-case scenario for response time. |
responseTime.min (ms) | Minimum response time observed during the performance test. | Use to identify the best-case scenario for response time. |
responseTime.percentile.0 (ms) | 0th percentile response time represents the minimum time taken for a request. | Use to identify the best-case scenario for response time. |
responseTime.percentile.25 (ms) | 25th percentile response time represents the time below which 25% of requests fall. | Use to understand response time distribution and identify performance outliers. |
responseTime.percentile.50 (ms) | 50th percentile response time represents the median time taken for a request. | Use to understand the central tendency of response times. |
responseTime.percentile.90 (ms) | 90th percentile response time represents the time below which 90% of requests fall. | Use to identify response time for a majority of requests and assess system performance. |
responseTime.percentile.95 (ms) | 95th percentile response time represents the time below which 95% of requests fall. | Use to identify response time for a significant majority of requests and assess system performance. |
responseTime.percentile.99 (ms) | 99th percentile response time represents the time below which 99% of requests fall. | Use to identify response time for nearly all requests and to detect long-tail latency.
responseTime.std (ms) | Standard deviation of response time, providing insights into the variability of response times. | Use to assess the consistency or variability in the system's response times. |
size.avg (bytes/s) | Average rate of response payload data transferred, in bytes per second. | Use to assess the efficiency of data transfer in the system.
size.count | Total count of responses during the performance test. | Use to understand the overall volume of responses generated by the system. |
size.rate (bytes/s) | Rate of data transfer (response payload size) per second. | Use to monitor the efficiency of data transfer over time. |
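The responseTime.percentile.* rows describe the shape of the response-time distribution rather than a single average. As a rough sketch of the idea, the example below derives several of these percentile KPIs from a list of response times using the simple nearest-rank convention, which may differ slightly from the exact method a given tool applies; the data is illustrative.

```python
import math
import statistics

def percentile(values_ms: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest sample value that at least p% of samples fall at or below."""
    ordered = sorted(values_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

response_times_ms = [95, 102, 110, 118, 121, 130, 145, 160, 240, 850]  # illustrative data

print("responseTime.percentile.50 (ms):", percentile(response_times_ms, 50))
print("responseTime.percentile.90 (ms):", percentile(response_times_ms, 90))
print("responseTime.percentile.95 (ms):", percentile(response_times_ms, 95))
print("responseTime.percentile.99 (ms):", percentile(response_times_ms, 99))
print("responseTime.std (ms):", round(statistics.pstdev(response_times_ms), 1))
```

Note how the single slow request (850 ms) barely moves the median but dominates the 95th and 99th percentiles and the standard deviation, which is why these KPIs are useful for spotting tail latency that averages hide.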