O'Reilly FYI

News from within O'Reilly

Choosing an Appropriate Performance Testing Tool

By Sara Peyton
January 28, 2009

The Art of Application Performance Testing arrived on my desk today. Indeed, application performance testing is a unique discipline that's crying out for its own set of industry standards. In this illuminating new title, author Ian Molyneaux offers practical, commonsense solutions, showing readers how to plan performance tests, execute them, and interpret the results. Here, in this excerpt, Ian details how to choose an appropriate testing tool.

Choosing an Appropriate Performance Testing Tool
Automated tools have been around in some form for the best part of 15 years. During that period, application technology has gone through many changes, moving from a norm of "fat client" to web enablement. Accordingly, the capabilities that automated tools must now provide are very much biased toward web development, and there is much less requirement to support legacy technologies that rely on a two-tier application model. This "web focus" is good news for the performance tester, because there are now many more automated tool vendors in the marketplace to choose from, with offerings to suit even modest budgets. There are also a number of free tools available as shareware or open source (http://www.opensource.org); see Appendix C.

Another sea change may well be about to occur with the emergence of Service Oriented Architecture (SOA), where the performance focus extends beyond the end-user transaction to include the business process. This concept is generally too abstract for the current crop of performance tools, although it is possible to test business process components that make use of technologies like web services. I will discuss SOA and other technology challenges in Chapter 5.

All of this is well and good, but here's a note of caution. When your performance testing needs move outside of the Web, the choice of tool vendors diminishes rapidly, and technology challenges that have plagued automated tools for many years are still very much in evidence. These problems center not on execution and analysis but rather on being able to successfully record application activity and then modify the resulting scripts for use in a performance test. Technologies such as encryption and compression are not good news for performance test tools; unless they can be disabled, it is unlikely that you will be able to create scripts.

Even technologies that are web-based can present problems for performance test tools. For example, if you need to deal with streaming media or client certificates, not all vendors will be able to offer a solution. You should carefully match your needs to performance tool capabilities before making your final choice, and I recommend that you insist on a Proof of Concept (POC) before making any commitment to buy.

Despite these challenges, automated tools are needed to carry out serious performance testing. In Chapter 1 I mentioned this as a reason why many applications are not properly performance tested prior to deployment. There is simply no practical way to provide reliable, repeatable performance tests without using some form of automation.

Accordingly, the aim of any automated performance testing tool is to simplify the testing process. This is normally achieved by providing the ability to record end-user activity and to render this data as transactions or scripts. The scripts are then used to create load testing sessions or scenarios that represent a mix of typical end-user activity. These are the actual performance tests and, once created, they can easily be rerun on demand, which is a big advantage over any sort of manual testing.
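To make the record-and-script flow concrete, here is a minimal sketch (in Python, my own illustration rather than anything from the book) of what a captured end-user transaction might look like once a tool renders it as a script and replays it in a simple scenario. The URL, transaction name, and think time are hypothetical placeholders.

    import time
    import urllib.request

    def login_transaction(base_url):
        """One 'recorded' transaction: fetch the login page and time it."""
        start = time.perf_counter()
        with urllib.request.urlopen(base_url + "/login") as resp:
            resp.read()                        # consume the body, as a browser would
        return time.perf_counter() - start     # the transaction's response time

    if __name__ == "__main__":
        # A trivial 'scenario': replay the transaction five times with think time.
        for _ in range(5):
            elapsed = login_transaction("http://example.com")  # placeholder URL
            print(f"login took {elapsed:.3f}s")
            time.sleep(1.0)                    # simulated user think time

Once a script like this exists, rerunning the test on demand is just a matter of executing it again, which is the repeatability advantage described above.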

Another huge advantage over manual testing is the ability to quickly correlate performance data from various sources (such as the servers, network, and application response time) and then present this in a single view. This information is normally stored for each test run, allowing easy comparison of the results of multiple test executions.
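The sketch below (again my own illustration, with made-up numbers) shows the basic idea: two independently collected data series, transaction response times and server CPU utilization, are merged on their sample timestamps to give a single correlated view.

    # Hypothetical samples: (seconds into the test, measured value)
    response_times = [(0, 0.42), (10, 0.51), (20, 1.90)]   # response time in seconds
    cpu_samples    = [(0, 35),   (10, 40),   (20, 92)]     # server CPU percent

    combined = {t: {"resp_s": r} for t, r in response_times}
    for t, cpu in cpu_samples:
        combined.setdefault(t, {})["cpu_pct"] = cpu

    for t in sorted(combined):
        print(t, combined[t])   # the 20s sample pairs a slow response with high CPU

Stored per test run, views like this make it easy to compare the results of multiple executions side by side.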

Testing Tool Architecture
Automated performance test tools typically have the following components.


Scripting module

Enables recording of end-user activity and may support many different middleware protocols. Allows modification of the recorded scripts to associate internal/external data and configure granularity of response-time measurement.


Test management module

Allows the creation and execution of load test sessions or scenarios that represent different mixes of end-user activity. These sessions make use of nominated scripts and one or more load injectors.


Load injector(s)

Generates the load, normally from multiple workstations or servers, depending on the amount of load required (a minimal sketch of this idea follows the list).


Analysis module

Provides the ability to analyze the data collected from each test execution. The output is typically a mixture of autogenerated reports and graphical or tabular displays. There may also be an "expert" capability that provides automated analysis of results and highlights areas of concern.

Complementing these components may be additional modules that monitor server and network performance while a load test is running.
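As a rough illustration of the load injector component (my own sketch, not the book's design), the following Python fragment spawns worker threads as virtual users, replays a transaction concurrently, and feeds the measured response times to a crude analysis step. The target URL, user count, and think time are hypothetical, and a real tool would distribute injection across multiple machines.

    import threading
    import time
    import urllib.request

    results = []
    lock = threading.Lock()

    def virtual_user(url, iterations):
        for _ in range(iterations):
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(url) as resp:
                    resp.read()
                with lock:
                    results.append(time.perf_counter() - start)
            except OSError:
                pass                           # a real tool would log the failure
            time.sleep(0.5)                    # think time between iterations

    if __name__ == "__main__":
        users = [threading.Thread(target=virtual_user,
                                  args=("http://example.com", 3))  # placeholder URL
                 for _ in range(10)]           # ten concurrent virtual users
        for u in users:
            u.start()
        for u in users:
            u.join()
        if results:                            # a crude 'analysis module' summary
            print(f"samples={len(results)} "
                  f"avg={sum(results) / len(results):.3f}s max={max(results):.3f}s")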


If you enjoyed this excerpt, buy a copy of The Art of Application Performance Testing.

