Scripting Your Test

This section reviews and helps establish proper scripting processes for running effective performance tests.

We'll cover critical elements such as how to approach scripting, how to manage dynamic content, best practices, and more.

The Right Approach To Scripting

Scripting is a critical part of performance testing: it is the activity that produces and validates the results of a test. Although different testing software uses different scripting languages and processes, the core concepts of scripting are the same.

Below are the most important parts of the scripting process to ensure that your performance tests are working:

Record & Playback

Most performance testing tools, like JMeter, provide the ability to record a script and then play back the same test (after making any required changes to the script). The business flow is recorded manually, and changes to the script are handled efficiently.

Modular scripting

Adopt modular scripting if the performance testing team prefers to build a regression suite or a performance testing framework. In such scenarios, individual workflows are recorded separately, and each workflow becomes part of an action, as sketched below.
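Below is a minimal, tool-agnostic sketch of what a modular script can look like. The class and workflow names (ShoppingScenario, login, searchProduct, checkout) are illustrative assumptions, not any specific tool's API.

```java
// A minimal sketch of modular scripting: each workflow is its own module.
public class ShoppingScenario {

    // Each business workflow is recorded and scripted separately.
    void login()         { /* recorded login steps */ }
    void searchProduct() { /* recorded search steps */ }
    void checkout()      { /* recorded checkout steps */ }

    // The scenario composes the modules, so a change in one workflow
    // only has to be fixed in one place.
    public void run() {
        login();
        searchProduct();
        checkout();
    }

    public static void main(String[] args) {
        new ShoppingScenario().run();
    }
}
```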

Transaction naming

You can carry out transaction naming while recording the script or while updating it. Add both transaction start and end points to the script, as these are essential for measuring the response time of a page.
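The following is a minimal sketch of the idea, assuming a hypothetical measureTransaction helper; real tools provide equivalents (for example, LoadRunner's lr_start_transaction and lr_end_transaction functions).

```java
// A minimal sketch of transaction start/end points around a page request.
public class TransactionTiming {

    // Hypothetical helper: wraps a page request in a named transaction.
    static void measureTransaction(String name, Runnable pageRequest) {
        long start = System.nanoTime();                            // transaction start point
        pageRequest.run();                                         // the recorded page request
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;  // transaction end point
        System.out.println(name + ": " + elapsedMs + " ms");      // response time per named page
    }

    public static void main(String[] args) {
        measureTransaction("01_Open_Home_Page", () -> {
            /* recorded request steps for the home page */
        });
    }
}
```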

ThinkTime & Pacing Time

ThinkTime is the time that a virtual user waits on a page (and thinks) before moving to the next page. Pacing time is the time lag between two consecutive sessions for a virtual user.
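Here is a minimal sketch contrasting the two, assuming illustrative values of 5 seconds of think time and a 60-second pacing interval (pacing is measured here from iteration start to iteration start; some tools measure the gap between iterations instead).

```java
// A minimal sketch contrasting think time and pacing for one virtual user.
public class ThinkAndPace {

    public static void main(String[] args) throws InterruptedException {
        long pacingMs = 60_000;                       // gap between iteration starts
        for (int iteration = 1; iteration <= 3; iteration++) {
            long iterationStart = System.currentTimeMillis();

            visitPage("home");
            Thread.sleep(5_000);                      // think time: user "reads" the page
            visitPage("search");

            // Pacing: wait until the full pacing interval has elapsed
            // before the next session begins.
            long elapsed = System.currentTimeMillis() - iterationStart;
            if (elapsed < pacingMs) Thread.sleep(pacingMs - elapsed);
        }
    }

    static void visitPage(String name) { System.out.println("visiting " + name); }
}
```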

Text / Image check point

Add text checkpoints for each page, selecting text that is unique to that page. Automatic recording of text checkpoints is also possible; adjust them in the tool's settings should they need updating.

Data management

To bring the test scenario closer to production conditions, add data files to scripts wherever data needs to be entered by the end user. Data files, as well as date and time functions, are essential for this purpose.
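A minimal sketch of data-file parameterization follows, assuming a hypothetical users.csv file with username,password rows; each virtual user iteration then submits different values, as real users would.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// A minimal sketch of feeding user input from a data file.
public class DataDriven {

    public static void main(String[] args) throws IOException {
        List<String> rows = Files.readAllLines(Path.of("users.csv"));
        for (String row : rows) {
            String[] fields = row.split(",");
            String username = fields[0];
            String password = fields[1];
            // Each iteration logs in with a different account,
            // just as different users would in production.
            System.out.println("logging in as " + username);
        }
    }
}
```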

Dynamic content management

While recording the script, the application generates items like session IDs and timestamp values. Replaying these recorded values may cause the script to fail, because the server may demand a different session ID for each new session. In such cases, correlation functions are useful. In the case of JMeter, regular expressions can help manage the dynamic data points.

Replay for one user

Replay the script after making the above updates and validate that it functions as it should. If so, move on to multiple-user testing.

Replay with multiple users

Once the script runs well for one user, execute the scenarios again with a small number of users (e.g., 5 users).

Test

If the test succeeds with 5 users, run the full-length test as planned, monitoring the servers closely throughout.

Dynamic Content Management

When performing tests, values like session IDs and timestamps may be dynamically generated and affect subsequent test playback, as your application treats the virtual user as a repeat visitor.

The testing software may reuse these dynamically set values from the recording and erroneously replay the previous session's activities (instead of new test scenarios).

As a result, you should be aware of the dynamic values that are specific to your application and ensure that each new test script generates the appropriate results.

One common approach to solving this problem is correlation and dynamic data management while scripting.

How do you perform correlation?

Different testing tools take different approaches to handling dynamic content. For instance, JMeter uses regular expressions to handle dynamic content on web pages; see the JMeter documentation for more information on using them. The sketch below illustrates the idea.
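As a hedged illustration of the correlation idea (not JMeter's actual configuration format), this sketch extracts a server-generated session ID from a response with a regular expression, the way a JMeter Regular Expression Extractor would, and reuses it in the next request. The response body and parameter name are assumptions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A minimal sketch of correlation: capture a dynamic value at runtime
// instead of replaying the value captured at recording time.
public class Correlation {

    public static void main(String[] args) {
        String loginResponse = "<input type=\"hidden\" name=\"sessionId\" value=\"a1b2c3\">";

        // Extract the server-generated session ID from the response.
        Matcher m = Pattern.compile("name=\"sessionId\" value=\"([^\"]+)\"")
                           .matcher(loginResponse);
        if (m.find()) {
            String sessionId = m.group(1);
            // Reuse the extracted value in the next request.
            String nextRequest = "/cart?sessionId=" + sessionId;
            System.out.println("next request: " + nextRequest);
        }
    }
}
```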

Text / Image Check Validation

During performance tests, it is important to validate that each page is properly generated for each virtual user.

Validating this manually or through other visual mechanisms is close to impossible, so an automated method is the best way to achieve this objective.

Text / image checkpoints are the solution to this problem. A text check function is added to the script for each page; during test execution, these functions validate the presence of the expected text content in the application's responses. This type of validation runs for each virtual user.

How do you perform a text check?

  • JMeter: In the case of JMeter, assertions validate the occurrence of content / images in the response to a page request (see the sketch after this list). Several types of assertions are available in JMeter:
    • Response assertion
    • Size assertion
    • XML assertion
    • etc.
  • LoadRunner: In the case of LoadRunner, the web_reg_find() function performs the text check. It registers the text content to validate and is placed just before the web request for the page on which the text must appear.
  • SilkPerformer: Like LoadRunner, SilkPerformer also has text check functions.
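The sketch below shows the essence of a text checkpoint, assuming a hypothetical assertTextPresent helper; in JMeter this role is played by a Response Assertion, and in LoadRunner by web_reg_find(). The response body and expected text are illustrative.

```java
// A minimal sketch of a text checkpoint: assert that text unique to the
// page appears in the response, for every virtual user.
public class TextCheck {

    // Hypothetical helper: fails the check if the expected text is missing.
    static void assertTextPresent(String pageName, String response, String expected) {
        if (!response.contains(expected)) {
            throw new AssertionError(pageName + ": expected text not found: " + expected);
        }
        System.out.println(pageName + ": text check passed");
    }

    public static void main(String[] args) {
        String homePageResponse = "<html><h1>Welcome to MyStore</h1></html>";
        assertTextPresent("01_Open_Home_Page", homePageResponse, "Welcome to MyStore");
    }
}
```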

Scripting Best Practices

Additional critical scripting best practices include the following:

  1. If you are using a tool that records and plays back your script, add transaction names and comments while recording the script;
  2. Parameterize your script in order to create a test scenario as close as possible to the production scenario. For example, if using dates, use date functions instead of a static date (see the sketch after this list);
  3. For every web page, it's important to have a text (and/or) image checkpoint. This validates that the page actually opened, and that the expected text appeared on the page, for every virtual user while your test runs;
  4. Implementing ThinkTime is critical for simulating production-like live user sessions (concurrent users); use think times that match how long real users wait in production;
  5. Before any performance test runs, it's critical to ensure that you've cleared the application logs, web logs and database logs. You should also restart the servers after every test run;
  6. Monitor your servers during the test and note the time (in seconds, if possible) of any abnormal observations. After the test, check the server logs to find what caused the abnormal behavior;
  7. If your application has a lot of AJAX requests and activity happening on the client side, it is good to test with a single user and observe the response time in GUI mode, either manually or using an automation test tool. This gives an idea of the Graphical User Interface (GUI) response time for a user under load;
  8. Always keep an eye on the capacity of your load generators and don't overstress them during the test. For a higher number of virtual users, use multiple load generators so that no single one is stressed;
  9. It is common practice to have a separate test environment for performance tests. Testing the system on the production or QA environment may impact the activities of production users or internal QA testers;
  10. Use the 80-20 rule while designing scenarios: focus on the functionalities and transactions that end users execute 80% of the time, which typically make up only 20% of the system.
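As a sketch of best practice 2, the example below contrasts a hard-coded date captured at recording time with a parameterized date computed by a date function; the check-in parameter and the specific recorded value are assumptions for illustration.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// A minimal sketch of date parameterization: compute dates at runtime
// instead of replaying the static date captured during recording.
public class DateParameterization {

    public static void main(String[] args) {
        // Hard-coded date from the recording: only valid on the day it was recorded.
        String recordedDate = "2017-03-01";

        // Parameterized date: always valid, e.g. "book a room for tomorrow".
        String checkIn = LocalDate.now().plusDays(1)
                                  .format(DateTimeFormatter.ISO_LOCAL_DATE);

        System.out.println("recorded (brittle): checkin=" + recordedDate);
        System.out.println("parameterized:      checkin=" + checkIn);
    }
}
```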