Prepare your Test Environment Using Test Data Orchestration

When you have a recurring need to satisfy data dependencies in your test environment, use BlazeMeter's test data orchestration. The orchestration can create, read, update, and delete test data in your test environment before and after each test run. BlazeMeter can generate test data that drives the test according to your requirements, but some tests additionally depend on consistent data in the test environment.

Examples of data dependencies include:

  • To test object reading, object deletion, or object updating, these objects must first be created in the test environment.
  • To test unique object creation, the test environment must be a clean slate; objects from previous tests must be deleted.
  • To amend your data model with unique generated IDs, you need the ability to read values from the test environment.
  • The test must use the same test data rows that were used to prepare the test environment.

In these test situations, you want to prepare the environment before each test run and clean it up afterward. The orchestration relies on the APIs of your application under test.


Usage example: You use orchestration before the test run to seed the test environment with users. BlazeMeter does not have access to your application's business logic and therefore cannot guess or generate your proprietary user keys synthetically. In your Data Model, leave the Data Parameter for the proprietary userKey empty and have the orchestration initialize it later, before the test run. The orchestration’s ability to read and store values (such as userKeys) from API responses ensures consistency between the seeded data and the tests.
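Conceptually, the read-and-store step works like the following plain-JavaScript sketch. This is an illustration of the mechanism only; the seeding endpoint, the response shape, and the userKey value are hypothetical examples, not a real BlazeMeter API.

```javascript
// Minimal JSON-Pointer-style lookup, e.g. "/userKey" or "/result/0".
// Hypothetical sketch of the extraction mechanism, not BlazeMeter code.
function extractFromResponse(responseBody, pointer) {
  const parts = pointer.split('/').filter((p) => p !== '');
  let value = JSON.parse(responseBody);
  for (const part of parts) {
    value = Array.isArray(value) ? value[Number(part)] : value[part];
  }
  return value;
}

// Simulated response of a hypothetical "create user" seeding request:
const seedResponse = '{"userKey": "u-10293", "name": "testuser1"}';

// The extracted key can then fill the empty userKey Data Parameter:
const userKey = extractFromResponse(seedResponse, '/userKey');
```

The same lookup also reaches nested values, for example /result/0 for the first element of an array in the response.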

Using Test Data Orchestration has the following benefits:

  • Orchestration is available to GUI Functional tests and Performance tests as part of the Test Data integration.
  • Orchestration maintains data consistency by using existing data models in related tests.
  • Orchestration can be automated to run together with test execution.
  • Different kinds of test data can be part of the same data model.
    Examples: Test data can be generated synthetically or loaded from CSV files; other values can be defined by reading existing values from your test environment.

BlazeMeter makes it easy for you to use the same data consistently and helps you manage the state of your test environments in context.



BlazeMeter relies on the application programming interface (API) of your web application. Familiarize yourself with your application’s data model, endpoints, authentication, and usage.

If your application is only reachable within your premises, you must create a Private Location agent with Data Orchestration functionality enabled. Select this Private Location as your Publish Execution Location in the Data Target Settings.

Test Data Orchestration is well integrated with test data from Test Data Entities. This article assumes that you understand test data concepts and know how to create test data. For more information, see How to Use Test Data.

Some advanced orchestration features, such as bulk publishing, require scripting knowledge. Testers without scripting experience can use the base functionality in the BlazeMeter web interface.


  • API Requests create, read, update, or delete data in a test environment before the test runs.
  • Clean Up Requests create, read, update, or delete data in a test environment after the test runs.
  • Data Targets are containers for API Requests, Clean Up Requests, and their settings.
    • A test can have zero, one, or more Data Targets associated with it.
    • A Data Target can be associated with several tests.

How to Prepare your Test Environment

This screenshot shows the Data Targets tab of the Data Settings window.


How to create a Data Target

This procedure assumes that you have already created test data for your test scenario.

  1. Open a GUI Functional test or Performance test.
  2. Open the Test Configuration.
  3. In the Test Data pane, click Data Settings/Iterations.
  4. Go to the Data Targets tab.
  5. Click Add New Target.
  6. Choose between adding a New Data Target or an Existing Data Target.

    • To create a new Data Target, give the Data Target a name that describes its purpose.

      Examples: test users, daily offers, logistics Europe.

    • To clone an existing Data Target, select a Data Entity, and enable the checkboxes for the Data Targets that you want to clone. Click Add to add them to the test.

BlazeMeter will use these data targets to set up the environment before test execution.

For each Data Target, you first define settings, then define API Requests or Clean-Up Requests, or both. After publishing a data target, you can review log files.

Define API Requests

API Requests and Clean Up Requests can create, read, update, or delete test data in your environment. You define API Requests and Clean Up Requests in Data Targets. Each test can have multiple Data Targets associated with it.

To define a Data Target:

  1. Open the Test Configuration.
  2. In the Test Data pane, click Data Settings/Iterations.
  3. Go to the Data Targets tab.
    The tab lists the available Data Targets for this test.
  4. Expand a Data Target to edit it.

A Data Target can contain a single request or multiple requests. Multiple requests in one target are ordered and execute in sequence.

For each API Request, follow these steps:

  1. Select the operation such as GET or POST.
  2. Define an endpoint URL.
    • You can reference settings properties. For example, type ${_env.baseUrl} instead of hard-coding the hostname.
    • You can reference Data Parameters. Copy them from the Test Data pane and paste them into the URL field as ${_data.myvar}.
  3. (If there are several Data Entities) Select the Data Entity in which to store this Data Target.
  4. Define Run Options for the request:
    1. Run for each Data Row — This is the default.
    2. Run Once — Run this request only once and not for every row. Used, for example, for one-time authentication requests.
  5. Go to the Headers tab:
    • Add Header names and values, such as content-type.
    • You can copy Data Parameters from the Data Parameters pane and paste them into the Value field.
  6. Go to the Body tab:
    • Define attributes and values in the request body.
    • Parameterize the body by mapping Data Parameters to API call parameters. Copy Data Parameters from the Data Parameters pane and paste them into Body values.
  7. (Optional) Go to the Extract from Response tab to extract response values.
  8. (Optional) Go to the Response Actions tab to handle exceptions.
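Putting these steps together, a complete API Request definition might look like the following sketch. The /users endpoint, the username parameter, and the userKey extraction are hypothetical placeholders, not a real API:

  Operation:   POST
  Endpoint:    ${_env.baseUrl}/users
  Run Options: Run for each Data Row
  Headers:     content-type: application/json
  Body:        { "username": "${_data.username}", "role": "tester" }
  Extract:     /userKey from the response body into the Data Parameter userKey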

Define Clean Up API Requests

Use this tab to set up requests that clean up the environment and reset it for the next test run.

Clean Up API Requests are defined in the same way as API Requests. The only difference is that API Requests run before test execution, and Clean Up API Requests run after test execution.

Define Settings

First define important settings for the Data Target (such as your baseUrl) and Run Options.

  1. Open a GUI Functional test or Performance test and go to the Test Data pane.
  2. Click Data Settings and then go to the Data Targets tab.
    You see the list of data targets. If the list is empty, create a data target first.
  3. Open a Data Target and go to the Settings tab.
    • Define Run Options.
    • Define Publish Execution Location.
    • Define Configuration Properties.
    • Define Private Properties.

Settings: Define Run Options

  • Run for each Data Row
    Enable this option if you are using multiple rows of test data. BlazeMeter will post the sequence of API Requests once for each row, each time with different test data values. This is the default.
  • Run Once
    Enable this option to shorten the whole publish cycle down to one execution, no matter how many iterations are defined. Running only once saves time, for example, while debugging.

Settings: Define Publish Execution Location – Cloud or On Premise

The default location is the BlazeMeter cloud. You select a different Publish Execution Location in the Data Target Settings. If your application under test can be reached through the internet by BlazeMeter, keep the default location. In this case, you do not need to create a Private Location.

If your application is only reachable on-premises within your internal network, you must create a Private Location with Data Orchestration to use this feature.

To create a Private Location:

  1. Click the Settings gear in the top right.
  2. Go to Settings > Workspace > Private Locations.
  3. Click the Plus to add a new Private Location and configure it as needed.
  4. Under Functionalities, enable the Data Orchestration toggle.
  5. Click Apply.

Now you can select the Private Location as the Publish Execution Location in the Data Target Settings and use orchestration on your premises.

Settings: Define Configuration Properties

Define variables, such as your API's base URL, your proxy settings, and required credentials. You can reference these properties in your Data Targets later. Storing environment values, such as your hostname, in variables makes maintenance easier: if your baseUrl ever changes, you only have to update it in one place.

  • baseUrl -- Enter your hostname here.
  • withCredentials -- Enter a boolean value that indicates whether cross-site Access-Control requests should be made using credentials such as cookies or authorization headers.
  • auth.username
  • auth.password
  • xsrfCookieName
  • xsrfHeaderName
  • proxy.protocol
  • proxy.port

For example, for a search request, you reference the baseUrl variable in the Endpoint URL field of the API Request as follows:
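A hypothetical endpoint reference could look like this (the /search path and query parameter are placeholders):

  ${_env.baseUrl}/search?name=${_data.accountName}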


Settings: Define Private Properties

Define variables for your authentication tokens or required credentials. These values are never shown in the UI or in log files; instead, they appear obfuscated as ##PRIVATE_VARIABLEname##.

  • proxy.auth.username

  • proxy.auth.password

For example, for an auth request, you reference the password as follows:
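A hypothetical reference to the private proxy password property could look like this (the value itself is obfuscated in logs):

  ${_env.proxy.auth.password}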


Run Orchestration Adhoc (Debugging)

After defining the API Requests and Clean Up Requests, run an adhoc publish to verify that the environment is prepared as expected. Review the Publish Log inside the Data Targets tab when debugging adhoc runs.

  1. Open the test and go to the Test Data pane.
  2. Click Data Settings and then go to the Data Targets tab.
  3. Identify the Data Target that you want to verify and click Publish.
  4. Expand the Data Target entry.
  5. Go to the Publish Log tab and review the outcome.

For each API Request step, the log contains a numbered step section. Steps are executed one after the other. By default, each step is executed multiple times, once for each row of test data. For example, if you have three API Requests (three steps) and ten rows of test data, the log will contain 30 published entries in total. Review the body that you sent, the responses that were received, the values that were extracted, and response error codes if any.

Run Orchestration Automatically

For everyday use, you will set up automatic Test Data Orchestration as part of a Test Definition. This way, the test environment is prepared automatically every time the associated test runs.


After the test run, you’ll find the Orchestration logs as part of the test execution log.

How to Edit Data Targets

  1. Open the GUI Functional test or Performance test and go to the Test Data pane.
  2. Click Data Settings and then go to the Data Targets tab.
  3. In this window, you can perform the following actions:
    • Expand a Data Target to edit it inline.
    • Click the Delete icon to remove a Data Target.
    • Click Add New Data Target to create a new one.
  4. Click Save.

Review the Log Files

When the data target is associated with a test, you find the logs as part of the test execution log. The files are called blazedata-api-publish.log and blazedata-api-unpublish.log.

After running the orchestration adhoc, you find a Publish Log in the Data Target definition window. In the Test Data pane, click Data Settings, go to the Data Targets tab, expand the data target, and go to the Publish Log tab.

Usage Scenarios

How to Reference Test Data in Orchestration

One of the main features of the orchestration is ensuring consistency with your existing test data. To achieve that, you replace hard-coded values in your orchestration with Data Parameters. This procedure assumes that you have already created test data for your test scenario and that you have loaded these data entities in the Test Data pane of your test configuration. You can always add more Data Parameters and Data Entities as needed.

To use test data in the orchestration, follow these steps:

  1. Go to the Data Targets tab and expand a Data Target to edit it.
  2. Click Test Data to open a read-only Test Data pane.
    The pane lists Data Parameters that are available to the test.
  3. Click the button next to a Data Parameter to Copy the parameter name to the clipboard.
  4. Return to the Data target and paste the Data Parameter to replace a hard-coded value.
    Example: Replace a hard-coded call, such as one ending in a literal ID like /accounts/1234, with a request that uses the Data Parameter id instead, for example ${_env.baseUrl}/accounts/${_data.id}.

How to Extract Response Values

An API Request in a Data Target can optionally read a value from the response and assign it to a local variable or Data Parameter. You can reference the variable after this Data Target step has executed.

  1. Go to the Data Targets tab and expand a Data Target to edit it.
  2. Select a Request that returns a value, for example:
  3. Go to the Extract from Response tab.
  4. Click the blue Plus button and select an existing Data Parameter from the list or create a local variable.
  5. Select the Source of the value, either the response body or header.
  6. Under Content Selection, choose one of the following selection methods:
    • (for Headers and Body) Regular Expression
    • (for Body) JSON Pointer (JSONPointer syntax)
    • (for Body) XPath
  7. Enter the response attribute that you want to assign to the variable.
    For example, to extract the first element of the JSON array named "result", use /result/0. To extract an identifier named id, use /id, and so on.
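The selection methods can be illustrated with a plain-JavaScript sketch; the sample response body and header values below are made up for illustration only.

```javascript
// A made-up sample response body:
const body = '{"result": ["acc-001", "acc-002"], "id": 42}';

// JSON Pointer "/result/0" selects the first element of the "result" array:
const firstResult = JSON.parse(body).result[0];

// JSON Pointer "/id" selects a top-level attribute:
const id = JSON.parse(body).id;

// A Regular Expression on a response header, e.g. a session cookie value:
const header = 'session=abc123; Path=/';
const token = header.match(/session=([^;]+)/)[1];
```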

Best Practices:

  • If the extracted value is temporary and used only inside the orchestration (such as an authentication token, cookie, or session ID), click into the Variable Name field and Add a Local Variable for it.
    Example: In the orchestration, you reference a local variable named token as ${_var.token}.
  • If you want to reuse the extracted value in the test later, click into the Variable Name field and select an existing Data Parameter to store it.
    Example: You have left the userKey parameter empty in the Data Entity. As part of the orchestration, you publish data and extract a valid key value from the response. In the orchestration, you reference the data parameter named userKey as ${_data.userKey}. In a BlazeMeter test, you reference it as ${userKey}.

How to Handle Exceptions Using Response Actions

The API requests sent by the orchestration return server responses, such as 200 OK or 201 Created, as well as network errors, permission errors, client errors, or server errors. Certain responses may be expected or can be ignored, while others might render your test results invalid. Therefore, you can optionally react to server responses and choose whether you want to fail the whole test or just skip one row of publishing.

All Response Actions skip the current iteration that triggered them.

The following Response Actions are available:

  • Stop Iteration: Continue with the orchestration; continue with the test, including incomplete rows.
  • Stop Publishing: Stop the orchestration; continue with the test, including incomplete rows.
  • Stop Publishing & Test: Stop the orchestration; fail the test.
  • Exclude Data Row for Test: Continue with the orchestration; continue with the test, excluding incomplete rows.
  • Wait and Repeat Until: Repeat the orchestration request, waiting a configurable amount of time, until the condition is met, then run the next orchestration request. If the condition is not met, stop after 5 minutes. Continue with the test.

To define your response handling for each Data Target, use the Response Actions tab.

  1. Go to the Data Targets tab and expand a Data Target to edit it.
  2. Go to the Response Actions
  3. Under Fallback Assertion, select the Action to trigger if publishing does not return a response. The default Action is Stop Iteration. Note that “Network Error” is the only Fallback Assertion.
  4. Click the Plus button to add as many Response Actions as needed.
  5. Select whether you want to react to either the server response code or body:
    • HTTP Response Code
      1. Select a response code or a response category.
      2. Select an Action to trigger.
    • Response Body
      1. Select a comparison: Equals, Contains, XPath Field match, JSON Pointer match, Regular Expression match.
      2. Enter a value to match.
        Example: You match a field named category as /category, using JSON Pointer syntax.
      3. Select an Action to trigger.
  6. Click Save.

How to Publish Data in Bulk (Advanced Request Body Handling)

The approach shown above assumes that you want to publish or delete only a small number of objects: you manually create a handful of Data Targets, use Data Parameters to handle test data, and publish the orchestration with the test.

However, to initialize multiple objects in the environment, it would be too tedious to manually create POST API Requests for, say, hundreds of test users. How can BlazeMeter make this effort easier for you?

  • Some APIs accept an array of values in a single request. If your API supports such bulk operations, you can use templating. Templating lets you provide an array of data in a single request body.
  • Do some initial values depend on other values? To automate the orchestration even further, BlazeMeter supports conditional logic and filters to impose constraints on the array.
  • For example, the following template initializes the value of balanceType with either the string “HI” or “LO”, depending on the accountBalance value being greater or less than the given threshold:
    "balanceType" : "${_fn.condition(_data.accountBalance < 200, "LO", "HI")}'

Template Syntax: Wrap templates in single quotes and format them as one line; alternatively, escape newline characters with backslashes. Reference variables with the appropriate namespace prefix.

You can use these templates in the API Request URL, Headers, and Body.
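As an illustration, the condition function from the example above behaves like a plain JavaScript conditional. The following sketch evaluates it for one hypothetical data row; it is a stand-in for the template engine, not BlazeMeter code.

```javascript
// Plain-JavaScript stand-in for ${_fn.condition(condition, trueValue, falseValue)}:
function condition(cond, trueValue, falseValue) {
  return cond ? trueValue : falseValue;
}

// One made-up sample data row:
const row = { accountBalance: 100 };

// ${_fn.condition(_data.accountBalance < 200, "LO", "HI")} for this row:
const balanceType = condition(row.accountBalance < 200, 'LO', 'HI');
```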

Basic Template Variables

You can reference any custom or default property in the body or URL of a Request:

  • ${_data.parameterName}
    References the value of a data parameter named “parameterName”.
  • ${_data.entityName.parameterName}
    References a unique data parameter in a Data Entity. If two Data Entities contain a Data Parameter of the same name, use this dotted notation to specify which one you mean.
    Usage Example: If the "Users" and the "Accounts" Data Entities both contain an id Data Parameter, resolve the conflict by referencing one as ${_data.Users.id} and the other as ${_data.Accounts.id}.
  • ${_data.entityName.ALL}
    Returns an array of all generated test data for the Data Entity entityName. From here you can use an array index to access the rows of data, and then the parameter name to access the data.
    • Usage example 1: To convert the "Accounts" Data Entity to JSON, use .ALL together with _fn.json():
      ${_fn.json(_data.Accounts.ALL)}
    • Usage example 2: To return a comma-separated list of all data parameter names inside the "Accounts" data entity, use:
      ${Object.keys(_data.Accounts.ALL[0]).join(", ")}
    • Usage example 3: To get the value of the parameter named accountNo from the first row of the "Accounts" data entity, use:
      ${_data.Accounts.ALL[0].accountNo}
  • ${_env}
    Returns environment variables defined on the Settings tab under Configuration Properties. You can identify environment variables by the _env. prefix.
    The following variables are available:
    • ${_env.baseUrl}
    • ${_env.withCredentials}
    • ${_env.auth.username}
    • ${_env.auth.password}
    • ${_env.xsrfCookieName}
    • ${_env.xsrfHeaderName}
    • ${_env.proxy.protocol}
    • ${_env.proxy.port}
    • ${_env.proxy.auth.username}
    • ${_env.proxy.auth.password}
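To illustrate the .ALL notation, the following sketch shows what such an array of rows looks like in plain JavaScript and how the usage examples above resolve. The two sample rows are made up; the real array is generated from your Data Entity.

```javascript
// Made-up stand-in for what _data.Accounts.ALL resolves to: an array of row objects.
const ALL = [
  { id: 1, accountNo: '0752284437/9449', accountBalance: 100 },
  { id: 2, accountNo: '2227473859/9389', accountBalance: 100 },
];

// ${Object.keys(_data.Accounts.ALL[0]).join(", ")} lists the parameter names:
const parameterNames = Object.keys(ALL[0]).join(', ');

// ${_data.Accounts.ALL[0].accountNo} reads one value from the first row:
const firstAccountNo = ALL[0].accountNo;
```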

Basic Functions in Templates

Templates support the following functions in ${_fn.functionName()} format. Arguments are mandatory unless marked with a question mark. Mutually exclusive alternatives are separated by a pipe symbol “|”.

  • ${_fn.condition(condition: boolean, trueValue: any, falseValue?: any)}
    If the condition evaluates to true, this function returns trueValue, otherwise it returns falseValue, if defined. Providing the falseValue is optional.
  • ${_fn.json(obj: any)}
    This function converts any object to JSON format.
  • ${_fn.sizeOf(data: entityName | array)}
    This function returns either the number of rows of a Data Entity, or the length of an array.
  • ${_fn.each(data: entityName | array, callbackFunction?, template: string, separator: string = "")}
    Use this function to filter data, sort it, or return a subset.
    • The ${_fn.each()} function loops over all rows of generated data of entity entityName, or over all items in the given array, respectively.
    • Write a template that contains the property name that you want to return as an array.

For each item, each() optionally calls a callback function. You can optionally filter, sort, or modify the rows by using supported functions. You do not need the _data. prefix when referencing Data Parameters inside callback functions.
The following callback functions are supported inside each():

    • ${_fn.filter(propertyName: string, operator: string, value: string)}
      Returns only items where a property has a certain value. Operator can be “>” or “<” or “=”.
    • ${_fn.sort(propertyName: string)}
      Returns the items sorted alphabetically by the value of the given property.
    • ${_fn.left(n: number)}
      Returns the first n items.
    • ${_fn.right(n: number)}
      Returns the last n items.
    • ${_fn.slice(start: number, end: number)}
      Returns the subset of items from start to end index.
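The semantics of these callback functions can be sketched with plain-JavaScript equivalents. This is an illustration only, with made-up rows; the real template engine applies the callbacks inside ${_fn.each(...)}.

```javascript
// Made-up sample rows, as _fn.each() would receive them:
const rows = [
  { id: 3, accountNo: 'C' },
  { id: 1, accountNo: 'A' },
  { id: 2, accountNo: 'B' },
];

// Stand-ins for _fn.filter, _fn.sort, and _fn.left:
const filter = (prop, op, value) => (arr) =>
  arr.filter((r) => op === '>' ? r[prop] > value
                  : op === '<' ? r[prop] < value
                  : r[prop] === value);
const sort = (prop) => (arr) =>
  [...arr].sort((a, b) => String(a[prop]).localeCompare(String(b[prop])));
const left = (n) => (arr) => arr.slice(0, n);

// Roughly: ${_fn.each(rows, _fn.filter("id", ">", 1), _fn.sort("accountNo"), '"${accountNo}"', ",")}
const result = left(2)(sort('accountNo')(filter('id', '>', 1)(rows)))
  .map((r) => `"${r.accountNo}"`)
  .join(',');
```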

Lodash Functions in Templates

Templates support scripting functions from the Lodash library in ${_fn._.functionName()} format. Examples include ${_fn._.reverse()}, ${_fn._.fill()}, ${_fn._.findIndex()}, ${_fn._.join()}, ${_fn._.uniq()}, and many more.

For a full list of functions and their argument syntax, see the Lodash documentation.

Template Examples

The examples below assume that the following five rows of test data have been generated for the Data Entity Accounts:

  id | accountNo       | accountBalance | accountName
  1  | 0752284437/9449 | 100            | OGrpaqvvKZ
  2  | 2227473859/9389 | 100            | OUaOdbbzXD
  3  | 7623787625/7673 | 1000           | fGnAe5oqj7
  4  | 8405199392/0901 | 1000           | yhjdZs0LLw
  5  | 2394682082/4402 | 1000           | KIK5n805eM
Basic template - Input (Run for Each Data Row):

The following is an example of a request body definition that assigns four values:

  1. It uses a function to count how many rows are in the data set, and assigns that number to the variable accountsCount.
  2. It maps the accountNo and accountBalance values to the respective Data Parameters.
  3. It uses a conditional function that assigns either the string "HI" or "LO" to the balanceType parameter, depending on whether accountBalance is higher or lower than the threshold value 200.
"accountsCount": ${_fn.sizeOf(_data.Accounts)},
"accountNo": "${_data.accountNo}",
"accountBalance": "${_data.accountBalance}",
"balanceType": "${_fn.condition(_data.accountBalance < 200, "LO", "HI")}"

Basic template – iteration results:

In five iterations you send five distinct request bodies, one by one, with the following sample contents:

"accountsCount": 5,
"accountNo": "0752284437/9449",
"accountBalance": "100",
"balanceType": "LO"
"accountsCount": 5,
"accountNo": "2227473859/9389",
"accountBalance": "100",
"balanceType": "LO"
"accountsCount": 5,
"accountNo": "7623787625/7673",
"accountBalance": "1000",
"balanceType": "HI"
"accountsCount": 5,
"accountNo": "8405199392/0901",
"accountBalance": "1000",
"balanceType": "HI"
"accountsCount": 5,
"accountNo": "2394682082/4402",
"accountBalance": "1000",
"balanceType": "HI"

Lodash library template 1 - Input (Run for Each Data Row):

The following is an example of a request body definition that assigns the accountName Data Parameter to accName. It also uses an upper-case function to assign the account name, converted to all upper case, to accNameUpper.

"accName": "${_data.accountName}",
"accNameUpper": "${_fn._.toUpper(_data.accountName)}"

Lodash library template 1 – iteration results:

In five iterations you send five distinct request bodies with the following sample contents:

"accName": "OGrpaqvvKZ",
"accNameUpper": "OGRPAQVVKZ"
"accName": "OUaOdbbzXD",
"accNameUpper": "OUAODBBZXD"
"accName": "fGnAe5oqj7",
"accNameUpper": "FGNAE5OQJ7"
"accName": "yhjdZs0LLw",
"accNameUpper": "YHJDZS0LLW"
"accName": "KIK5n805eM",
"accNameUpper": "KIK5N805EM"

Advanced template 1 – Input (Run Once):

This example sends a single request body with an array of values taken from all rows in the test data, separated by commas. The accountNo reference within the each() loop is inside the _data.Accounts entity and does not need the _data. prefix.

{ "allAccountsNo": [ ${_fn.each( _data.Accounts, '"${accountNo}"',  ","  )} ] }

Advanced template 1 – result:

In a single iteration, you send one request body with the following sample content:

"allAccountsNo": [

Advanced template 2 – Input (Run Once):

This request sends a single request body with an array of values taken from all rows in test data, sorted according to the values of the property accountNo. The accountNo reference within the each() loop is inside the _data.Accounts entity and does not need the _data. prefix.

{ "allAccountsNo": 
[ ${_fn.each(
_data.Accounts, _fn.sort("accountNo"), '"${accountNo}"', ","
) } ]

Advanced template 2 – result:

In a single iteration, you send one request body with the following sorted content:

"allAccountsNo": [

Advanced template 3 – Input (Run Once):

This request returns certain rows using a filter. It loops over the five rows in Accounts and checks whether a value, here the id, is larger than three. If the id is larger than three, it adds the accountNo value to the array, separated by comma. The id and accountNo references within the each() loop are inside the _data.Accounts entity and do not need the _data. prefix.

{ "allAccountsNo": 
[ ${_fn.each(
_data.Accounts, _fn.filter("id", ">", 3), _fn.sort("accountNo"), '"${accountNo}"', ", "
) } ]

Advanced template 3 – result:

In a single iteration, you send one request body with the following sorted and filtered content:

"allAccountsNo": [

Advanced Template 4 - Input:

This example shows a longer template inside the each() loop that returns an array with multiple elements. The template has to be wrapped in single quotes and must be on a single line. If you want to format the template with line breaks, escape the new lines with backslashes inside the template. The following two variants are valid and equivalent:

The template on one line:

"number_of_users": ${_fn.sizeOf(_data.users)},
  "users": [ 
    '{"first_name": "${firstName}", "last_name": "${lastName}"}', 

The same template reformatted with backslashes before newlines:

"number_of_users": ${_fn.sizeOf(users)},
  "users": [ 
"first_name": "${firstName}",\
"last_name": "${lastName}"\
"," )} ] }

Advanced Template 4 - Result:

  "number_of_users": 3,
  "users": [
      "first_name": "Nicolette",
      "last_name": "Pandey"
      "first_name": "Angie",
      "last_name": "Morgan"
      "first_name": "Katherine",
      "last_name": "Yoshioka"

Variables and Functions (Prefixes) Reference

Remember the following prefixes to distinguish the namespaces used in orchestration:

  • ${_data.parameterName} references a Data Parameter from a Data Entity.
  • ${_env.propertyName} references a Configuration Property or Private Property defined in the Data Target Settings.
  • ${_var.variableName} references a local variable extracted from a response.
  • ${_fn.functionName()} calls a built-in template function; ${_fn._.functionName()} calls a Lodash function.

For prefix usage examples, see Template Examples and Extract From Response.