API Monitoring Test Steps

API tests consist of a series of steps, most often HTTP requests. In addition to requests, you can add other types of steps to your tests, such as pauses and conditions.

The Editor is where you'll define the steps (HTTP requests, pauses, etc.) and execution order that make up a test. For each request in a test, you can specify the HTTP request data, assertions, variables and scripts by clicking on a request.


HTTP Request Step

Click Request to add an empty request template to the end of your test. Replace the placeholder data with the method, URL, headers and parameters for the API your test needs to call.

Request steps can be imported from other tools like Swagger, AWS API Gateway and Postman.

Request Lifecycle

When a request step is executed, each of the associated assertions, variables and scripts is processed. The execution order is as follows:

  1. Pre-request Scripts are executed. The variable context from initial variables and from the scripts and variables of previous steps is available via variables.get().
  2. The HTTP request is executed and a response is returned.
  3. Variables defined in the editor are processed on the response.
  4. Post-response Scripts are processed. Initial and request-specific variable values extracted from previous steps are available for use.
  5. Assertions defined in the test editor are processed on the response. If the response object was modified by a Post-response Script, the data is not available to be evaluated by an Assertion.
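As a sketch of how scripts fit into this lifecycle, a Post-response Script can read values written by earlier steps with variables.get(). The snippet below stubs a minimal variables object so it is self-contained; in a real test the platform provides this object for you, and variables.set() is assumed here as the companion call for writing values.

```javascript
// Minimal stand-in for the `variables` object the platform injects into
// Pre-request and Post-response Scripts. In a real test you do NOT define
// this yourself; it is provided for you.
const store = {};
const variables = {
  get: (name) => store[name],
  set: (name, value) => { store[name] = value; },
};

// Lifecycle step 3: suppose a variable was extracted from a previous response.
variables.set("auth_token", "abc123");

// Lifecycle step 4: a Post-response Script can read earlier values and
// derive new ones for later steps to use.
const header = "Bearer " + variables.get("auth_token");
variables.set("auth_header", header);

console.log(variables.get("auth_header")); // → "Bearer abc123"
```

The variable name auth_token is hypothetical; any variable set by an earlier step or defined as an initial variable would be visible the same way.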

Pause Step

Pauses are a type of test step that allows you to introduce short delays between steps in the test plan. You can add as many pauses as you need, but tests require at least one request to execute. If your test runs on a schedule, take care that the total execution time is less than your test's schedule interval.

Click Pause to add a step to your test that pauses test run execution for a brief period of time. Pause duration can be from 1 to 180 seconds. Pauses are not guaranteed to be exact, but the duration is guaranteed to be at least the amount of time specified.

Incoming Request Step

Incoming Requests allow you to pause execution of a test run until a request is received at a unique URL. This is useful for testing webhooks or other asynchronous HTTP callback methods.

Important Considerations for Incoming Steps

  • The response to the incoming request will be a 200 OK with an empty response body.
  • The max wait time for an incoming request is 10 minutes.
  • If you make an incoming request (i.e., use a request's unique URL) before reaching the incoming request step in a test run, we'll store the data for up to 30 seconds.
  • Tests with incoming steps should only be executed from a single location, on a schedule that is longer than the expected response time window. Simultaneous test runs may cause unexpected results.

Curl Step (Experimental)

Click From curl to create and add requests using a curl command. Supported options include -d, --data, --data-urlencode, -F, --form, -H, --request, -X, -b, --cookie, --user, -u. Unsupported options are ignored. File uploads are not supported.
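For example, pasting a command like the one below (the URL, credentials, and payload are hypothetical) would create a POST request step with a JSON body, a custom header, and basic auth, since --request, -H, -u, and --data are all supported options:

```shell
# Hypothetical endpoint and credentials, for illustration only.
curl --request POST https://api.example.com/v1/orders \
  -H "Content-Type: application/json" \
  -u "apiuser:secret" \
  --data '{"sku": "ABC-123", "quantity": 2}'
```

Any options outside the supported list would simply be ignored when the command is parsed.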

Note: This feature is experimental. If you encounter unexpected behavior, contact support.

Condition Step

Condition steps let you conditionally run selected test steps based on criteria you define. If the condition assertion evaluates to true, the test steps nested within the condition are executed; otherwise, they are skipped.

To create a new condition, click Condition, then drag any test steps you want conditionally executed into it. A condition assertion is an expression made up of a left operand, a comparison operator, and a right operand. When you define your condition assertion, you can hardcode the values or use any variables you've previously defined in your test or shared environment settings. To include the value of a variable in a condition assertion, enter the name of the variable surrounded by double braces, e.g. {{variable_name}}.
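For example, assuming a variable named status_code was defined in an earlier request step (a hypothetical name), a condition assertion might look like:

```text
{{status_code}}  equals  200
```

The nested steps would then run only when that earlier response returned a 200.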

Assertion comparisons in conditions behave the same way they do in request steps, and you have access to all the same comparison operators you would in request steps. See the assertions comparison chart for more details about each type of comparison operator.

Subtest Step

Subtest steps can run other BlazeMeter API Monitoring tests as part of a test run. This is useful for reusing tests that perform common functionality, such as generating a new access token, handling setup/teardown, or grouping tests into suites.


A subtest step can use environment settings from the parent test, the subtest, or any shared environment in the parent test's bucket.

Locations and Agents

Subtest steps always use the location of the parent test's chosen environment and ignore location settings in the selected environment.

Assertions, Variables & Scripts

When a subtest completes, a JSON object representing the result is made available to run Assertions, Variables and Scripts against. You can validate that the subtest run was correct and extract data from the variables object (the end result of the subtest's variable state) just like any other JSON payload. The JSON object you get back is the same as a webhook notification payload.

Passing Parameters to Subtests

By default, all of the selected environment's initial variables are passed to the subtest. To pass additional data, add 'Parameters' in the subtest step editor. These values will be passed to the subtest's initial variables, overriding any initial variable values with the same name.

Getting Variables from Subtests

You can use the "Variables" tab in a subtest step to extract variables that were created inside that subtest. Under "Property", you can access them by using variables.variable_name_in_subtest. For example:

A subtest step expanded to show the Variables tab with the property value set to variables.starWarsCharacters

Ghost Inspector Step

When your team is connected with a Ghost Inspector account, a new step type is available to add to your tests: UI test. This is useful if your API requires a sign in step before requests can be issued. Learn more in the Ghost Inspector integration guide.

Conditional Loop Step

Restriction: This pre-GA feature is accessible only to a select set of customers. General availability (GA) is coming soon.

Conditional loops (not to be confused with Condition steps) are useful when you want to repeatedly make an API call until you get a favorable response. For example, you can run calls until a certain status code is returned, or repeat a set of calls for multiple iterations.

Using Conditional Loops

To use conditional loops:

  1. Under Add Step, click Conditional Loop.

  2. For each conditional loop (the While box), define the API Monitoring variable you want to test as the left operand, and a different variable or a hardcoded value as the right operand. Enter the name of each variable surrounded by double braces, e.g. {{variable_name}}.

    Comparison operators include:

    • equals/does not equal

    • is empty/is not empty

    • contains/does not contain

    • is a number

    • equals (number)

    • less than/greater than

    • less than/greater than or equal to

    • has key

    • has value

    • is null

  3. Optional: Add substeps such as Pause, Incoming Request or Subtests into the conditional loop by clicking on the step and dragging them into the conditional loop box. You can also change the execution order by dragging and reordering the substeps.
     Restriction: You can add any step into a conditional loop except for the Condition step.

After running the test, you can find details under each conditional loop, showing the test result and reason for each assertion's pass or failure. When using conditional loops, both test_result_uuid and template_uuid will remain the same for all loop iterations.


Let's say you want to set a conditional loop to run the test until you get a status code that equals 500. The first thing to do is to set up a variable that extracts a value from the response, in this case, its status code. Go to Request Step > Variables to define a status_code variable.

In the While statement, enter the status_code variable as the left operand and 500 as the right operand. Set the comparison operator to does not equal, so the conditional loop keeps running until the status code finally equals 500.
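Put together, the While condition for this example reads:

```text
While: {{status_code}}  does not equal  500
```

Each iteration re-runs the nested steps, re-extracts status_code, and re-evaluates this expression before looping again.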

Skipping Loops

You can skip conditional loops by using the dot button on the top-right of a conditional loop box.

To unskip and put the loop back in the test, click the dot again.

Changing Execution Order

To change the execution order of requests within a test, drag the Reorder icon for a given request and drop it where you want it to be executed. The new order will be used on subsequent test runs. If you use variables in your request data, make sure that the request that defines a given variable comes before the request(s) that use it.

Duplicating Requests

To create a copy of an existing request in a test, click the Duplicate icon for any existing request in the test. The new request will be added to the end of the test.