Performance Test Definitions
Updated 2017-01-04 - Johan Sandberg
1. Non-Functional Requirements
The non-functional requirements for performance explain how an IT system should perform in a production-like environment.
A typical requirement is the response time per page or per transaction at a specific load.
2. Performance Risk Assessment
A performance risk assessment is a task that is done to determine if there are any performance risks with a new release, architecture change, or increase in the number of users.
The task is often done as a workshop with IT architects, developers, DBAs, operations, test management, and other stakeholders.
The output from a performance risk assessment is a report that outlines the risks (if any) and the recommended performance tests to minimize any risks.
The report might also include requirements on test environments, test data and an estimation of the time/costs to perform the tests.
3. Performance Test
A performance test is done to verify that an IT system performs according to the non-functional requirements.
If non-functional requirements are not available, a performance test can be done to measure the actual performance before a release.
4. Virtual User – VU
A virtual user is a simulated real user used in a performance test.
5. Infrastructure Monitoring
Infrastructure monitoring is monitoring of server and middleware performance metrics.
Examples of metrics are CPU utilization, memory usage, disk usage, and the number of idle threads in a thread pool.
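As an illustration, a few such metrics can be sampled with the Python standard library. This is only a minimal sketch; a production setup would use a monitoring agent (for example collectd, Telegraf, or a node exporter), and the function name here is ours, not from any particular tool.

```python
import os
import shutil

def infrastructure_snapshot(path="/"):
    """Collect a few basic infrastructure metrics using only the
    standard library. Illustrative only; real monitoring uses an
    agent that samples continuously and ships metrics to a server."""
    disk = shutil.disk_usage(path)
    return {
        "cpu_cores": os.cpu_count(),                  # logical CPU count
        "disk_total_bytes": disk.total,               # total disk capacity
        "disk_used_pct": 100.0 * disk.used / disk.total,
        # 1-minute load average where the platform supports it
        "load_avg_1m": os.getloadavg()[0] if hasattr(os, "getloadavg") else None,
    }
```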
6. Application Performance Management – APM
Application Performance Management is monitoring and management of application performance.
The performance of an application can be monitored at each level from the client (end user monitoring) through distributed transactions and including the platform.
The monitored distributed transaction flow can be used to visualize the flow through each tier and to break down end-user response time.
Example products are AppDynamics, DynaTrace and New Relic.
7. Performance Test Scenario
A performance test scenario is a container for everything that relates to a performance test.
One of the benefits of a performance test scenario is that everything is handled in one place, making it easy to rerun a test or use a scenario as a template. A scenario typically includes:
- One or more performance test scripts
- Parameter files
- Parameters that control how the load is applied
- Infrastructure monitor agent configuration
- APM integration configuration
- Test result name
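The components listed above could be modeled as a simple data structure. This is a hypothetical sketch (the class and field names are ours); real load-test tools store the same information in their own project or scenario files.

```python
from dataclasses import dataclass, field, replace

@dataclass
class PerformanceTestScenario:
    """Hypothetical container mirroring the scenario components above."""
    name: str
    scripts: list            # one or more performance test scripts
    parameter_files: list    # parameter files used by the scripts
    virtual_users: int       # parameters that control how load is applied
    ramp_up_seconds: int
    duration_seconds: int
    monitor_agents: dict = field(default_factory=dict)  # infra monitor config
    apm_integration: dict = field(default_factory=dict) # APM integration config
    result_name: str = ""                               # test result name

baseline = PerformanceTestScenario(
    name="release-2.3-baseline",
    scripts=["login_search.py"],
    parameter_files=["users.csv"],
    virtual_users=50,
    ramp_up_seconds=60,
    duration_seconds=600,
)

# Using a scenario as a template: copy it and change only what differs.
rerun = replace(baseline, name="release-2.4-baseline")
```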
Test Case Definitions
1. Test Case
A test case can be functional or non-functional and describes in detail each step and interaction with an application. If the application has a user interface, the test case usually describes the most common interaction that a real user would perform. A functional test case can often be adapted into a non-functional test case by removing steps that are not directly related to performance.
2. Test Case Transaction
A test case transaction is used in performance test scripts to monitor single or multiple HTTP/S calls.
Note: A test case transaction is independent of the calls that make up a web page. Test case transactions can also be used for applications without user interfaces, such as web services.
3. Test Case Iteration Time
The time it takes to perform one iteration of all the steps in a test case including optional loops and think-time.
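The iteration time can be expressed as a simple sum. The sketch below is a simplified model with names of our own choosing: all step times plus any think time, repeated for optional loops.

```python
def iteration_time(step_times, think_times=(), loops=1):
    """Time for one test-case iteration: the sum of all step times
    and any think times, repeated for optional loops (simplified model,
    hypothetical helper)."""
    once = sum(step_times) + sum(think_times)
    return once * loops
```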
Performance Test Definitions
1. Transaction
A transaction is a single call/response using a specific protocol such as HTTP/S.
2. Performance Test Script
A performance test script is a script that generates transactions. A performance test script can be simple with a single transaction or complex with a large number of transactions.
3. Transaction Response Time
The time it takes for a single transaction, from sending the request until the complete response is received.
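A minimal performance test script that records transaction response times could look like the sketch below. The function names are ours, and the stand-in transaction is a sleep; a real script would issue an HTTP/S call and time it the same way.

```python
import time

def run_script(transaction, iterations):
    """Minimal performance test script: issue a transaction repeatedly
    and record each response time. 'transaction' is any callable that
    performs one call and returns once the full response is received."""
    response_times = []
    for _ in range(iterations):
        start = time.perf_counter()   # monotonic high-resolution clock
        transaction()
        response_times.append(time.perf_counter() - start)
    return response_times

# Stand-in transaction (a real script would make an HTTP/S call here):
times = run_script(lambda: time.sleep(0.01), iterations=5)
```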
4. Page Response Time Server Side
The time it takes for all calls that are made to render a web page. Note that calls can be made in serial or in parallel.
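The serial/parallel distinction matters for how call times combine. In this simplified model (hypothetical helper names), serial calls add up while fully parallel calls are dominated by the slowest one:

```python
def page_time_serial(call_times):
    """Calls issued one after another: the times add up."""
    return sum(call_times)

def page_time_parallel(call_times):
    """Calls issued concurrently: the slowest call dominates
    (ignores connection limits and other real-world overhead)."""
    return max(call_times)
```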
5. Page Response Time Client Side
The time it takes for all calls in a specific web page to complete, including the time it takes to execute client-side code and render the complete web page.
6. Think Time
Think time is the time from one user interaction to the next interaction that in turn creates a new call to the server.
If the application does not have a user interface it is the time between two calls or between two test case transactions.
7. Background Load
Background load consists of transactions that are generated to put additional load on the system so that, together with the transactions under test, the total load is similar to what is expected in production.
8. Peak Load
Peak load is the highest expected load on a system during a short time period.
The peak load is defined as transactions, pages, iterations or test cases per second.
9. Throughput
Throughput is the number of transactions of a specific type per second; the unit is transactions per second (TPS).
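Both quantities are straightforward to compute from test output. The sketch below (hypothetical helper names) derives average throughput from a count and elapsed time, and finds the peak load as the highest number of transactions in any sliding one-second window of a sorted timestamp list:

```python
def throughput_tps(transaction_count, elapsed_seconds):
    """Average throughput in transactions per second (TPS)."""
    return transaction_count / elapsed_seconds

def peak_load(timestamps, window_seconds=1.0):
    """Highest number of transactions observed in any sliding window
    of the given length. 'timestamps' are in seconds, sorted ascending;
    the window is half-open (left edge excluded)."""
    peak = 0
    j = 0  # index of the oldest timestamp still inside the window
    for i, t in enumerate(timestamps):
        while timestamps[j] <= t - window_seconds:
            j += 1
        peak = max(peak, i - j + 1)
    return peak
```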
10. Response Time Analysis
A response time analysis is done to determine the minimum, average, and maximum response times for a specific throughput of a transaction, a test case transaction, or a complete web page.
This analysis can also be done to find the point (if any) where the response time increases due to an increase in throughput.
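The basic statistics of such an analysis can be computed with the standard library; the function name below is ours. Repeating this per throughput level, and looking for the level where the average or maximum starts to climb, locates the knee point mentioned above.

```python
import statistics

def analyze_response_times(samples):
    """Min/avg/max response times for one throughput level
    (illustrative helper; a real analysis would also look at
    percentiles such as the 95th)."""
    return {
        "min": min(samples),
        "avg": statistics.mean(samples),
        "max": max(samples),
    }
```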
Performance Test Type Definitions
1. Load Test
A test that generates transactions against a targeted application with the intention of verifying how the application reacts to a specific load.
2. Baseline Test
A load test with a specific level of load to be used as a baseline when comparing application releases or infrastructure changes.
The goal of a baseline test is to generate a test result that can be used as a reference when comparing performance between releases, configuration changes, or changes in HW or SW versions.
3. Stress Test
A stress test is a performance test where the level of load is increased over the expected load in production.
The focus of the test can be to see how a specific function handles high load, or to generate production-like transactions at a higher than expected throughput.
The goal is to identify which part of the architecture limits performance, and whether the application recovers when the load is decreased.
4. Stability Test
A stability test is done to test the stability of an application over time. The load is often set to 80% of max throughput and the test time is set to 8-10 hours.
The goal is to find out if there are resources that are not handled correctly by the application, for example memory leaks or connection pools that run out of connections.
5. Scalability Test
A scalability test is done to determine how well an application handles an increased load and how well it utilizes the HW platform.
Typically, a scalability test is done without any think-time and with a slow ramp-up of the load.
A perfect (theoretical) application does not show any increase in response time when the load is increased until a specific point where the response time increases rapidly.
A more realistic application has a small increase in response time during a ramp-up of the load and then at a specific point a steep increase.
An application that does not scale shows a significant increase in response time that is linear with the increased load.
The goal with a scalability test is to determine if the application scales in terms of response time vs increased load and changes of the system configuration.
Several scalability tests are normally done when a configuration of the system is changed in terms of the number of nodes (scaling horizontally or out/in) or adding more CPU cores or memory (scaling vertically or up/down).
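One common way to summarize such a series of tests is scaling efficiency, where the throughput with n nodes is compared to n times the single-node throughput; 1.0 means perfect linear scaling. The helper name and sample numbers below are illustrative:

```python
def scaling_efficiency(throughput_by_nodes):
    """Efficiency of horizontal scaling relative to one node:
    efficiency(n) = throughput(n) / (n * throughput(1)).
    Keys are node counts, values are measured throughput (TPS)."""
    base = throughput_by_nodes[1]
    return {n: tps / (n * base) for n, tps in throughput_by_nodes.items()}

# Hypothetical measurements from three scalability test runs:
efficiency = scaling_efficiency({1: 100.0, 2: 180.0, 4: 300.0})
```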
6. Concurrency Test
A concurrency test is done to determine the performance of a system under the mix of concurrent transactions that is expected in production.
The goal is to determine if there are transactions that affect each other due to shared resources or synchronization.