API Test Creator

This feature is in pre-GA (pre-General Availability) and is currently accessible only to a select set of customers. The feature will soon become generally available.

Covering a standard set of positive and negative test cases is an important step on the way to 100% test coverage, but manually creating and maintaining API tests for all possible combinations of parameters is time intensive and error prone.

BlazeMeter can derive the API model from your API definition (provided through Swagger OpenAPI files or URLs) and automatically build hundreds of standard tests that produce insightful test results. Thanks to API Test Creator, you do not need to write standard tests manually.

You must authenticate with API Monitoring before API Test Creator can build tests for your API Model. To monitor internal APIs that reside behind a firewall or in a private staging environment, install a Radar Agent.

This article covers the following concepts:

  1. API Test Creation Concepts
  2. How to Create an API Model
  3. How to Define an API Model
    1. Define Request Parameters
    2. Define Responses
    3. Define Data Dependencies
    4. Define Test Rules
    5. Define Assertions
    6. Define Custom Types
  4. How to Generate Tests
  5. How to View Test Details and Description
  6. How to Run Tests
  7. How to View Test Reports and Coverage

API Test Creation Concepts

Requirements

You must create a Runscope API key and authenticate with Runscope before you can build API tests. API Test Creator relies on the default API Monitoring account of the user who creates the API Model. It uses API Monitoring tests to evaluate the uptime, performance, and correctness of an API. Each API model in BlazeMeter is represented by one or more buckets in API Monitoring. For more information, see API Monitoring Test Steps.

What are API Models?

BlazeMeter helps you maintain your API model so you can keep your API tests consistent. If, for example, multiple requests use the same request id and you later change that parameter, you update the id parameter once in the API Model, and BlazeMeter rebuilds all tests accordingly.

BlazeMeter defines models in the same way as Swagger does, as a list of operations; an operation consists of a method (GET, PUT, POST, DELETE, and so on), an endpoint, and its parameters. You either import API definitions from a Swagger file, or create smaller models manually; in both cases, you can edit and create endpoints in the model by hand. The operations describe the API behavior, and your API definitions specify the structure of parameters and endpoints in the model.
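
For example, a minimal Swagger OpenAPI 3.0 definition that declares a single operation could look like this (an illustrative sketch; the bookstore path and names are placeholders):

    openapi: 3.0.0
    info:
      title: Bookstore API
      version: "1.0"
    paths:
      /books/{id}:
        get:                         # the operation "GET /books/{id}"
          parameters:
            - name: id               # parameter structure: type, location, required
              in: path
              required: true
              schema:
                type: integer
          responses:
            "200":
              description: The requested book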

The API Model Testing Workflow

  1. Load your Swagger file or URL, then define the model:
    1. Define Request Parameters
    2. Define Responses
    3. Define Data Dependencies
    4. Define Test Rules
    5. Define Assertions
    6. Define Custom Types
  2. Let BlazeMeter build a standard set of tests for the model.
  3. Click Run to execute the tests.
  4. Review the test reports.


How to Test Stateless and Stateful Calls

BlazeMeter supports stateless and stateful API calls.

  • Stateless calls do not depend on any existing data. An example of a stateless call is adding a new book to a bookstore database. You can always execute tests for stateless calls, even if the database in the test environment is empty.
  • Stateful calls have data dependencies that must be fulfilled before the API Test can run. An example of a stateful call is updating book ID 123 in the bookstore database, which succeeds only if an entry with the ID 123 exists. For these calls to be testable, you must provide valid sample data together with your API definition.
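
For example, here are the two kinds of calls expressed as curl commands against a hypothetical bookstore API (the base URL is a placeholder):

    # Stateless: creates a new book; needs no pre-existing data
    curl -X POST "https://api.example.com/books" \
      -H "Content-Type: application/json" -d '{"title": "Moby Dick"}'

    # Stateful: updates book ID 123; fails unless that entry already exists
    curl -X PUT "https://api.example.com/books/123" \
      -H "Content-Type: application/json" -d '{"title": "Moby Dick, 2nd Edition"}'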

What is an Environment?

BlazeMeter uses API Monitoring Shared Environments to store configuration settings for API Test Creator. The default is "No Environment" and, typically, you don't need to change your environment settings to use API Test Creator.

Advanced API Monitoring users can click the Environment indicator under the API Model name to select a different default environment, or go to the API Modelling tab to manage API Monitoring Config Buckets and modify Shared Environments in their API Monitoring account. For more information on environments, see Managing Configuration with Environments.

To monitor internal APIs that reside behind a firewall or in a private staging environment, install a Radar Agent. For more information, see Radar Agent Overview.

How to Create an API Model

Go to the Functional tab and click API Test Creator. The API Model screen opens.

If you have any API Models, the Model screen lists them and their creation dates. To change a model's name or base URL, hover the mouse over a model and then click the Edit icon. To delete a model, hover the mouse over it and then click the Delete icon.

Here you can create API Models either from an existing Swagger OpenAPI 2.0 or 3.0 file, or from a URL. Both JSON and YAML formats are supported. Or you can define the model manually.

To start creating an API Model from an existing API, you upload a definition and then amend it with your custom rules.

  • Either click the Plus button to upload a Swagger OpenAPI 2.0 or 3.0 file.
  • Or enter the URL of your Swagger location and click Go to load it.
    • Importing multiple files with local file references is supported.

The API Definitions Screen opens and shows the imported API Model. The model name and base URL are also imported; you can still modify them later. Continue with How to Define an API Model in this article.

Alternatively, you can also define the whole model manually:

  1. Click Create Model.
  2. Provide a model name and base URL. Then click Save.
    The API Definitions Screen opens.
  3. Select an API Monitoring Config Bucket from the list of Environments.
    By default, a new Default Environment is created.
  4. Enter an operation (such as "GET /user" or "PUT /order") and click Add.
    When adding operations, you can optionally assign custom Group Names to tag operations in the UI according to your criteria.
  5. For each operation, define parameters, test rules, and assertions. Continue with How to Define an API Model in this article.

How to Define an API Model

The API Definitions screen is where you perform the main workflows of API Model creation and management, before you run tests. For BlazeMeter, the API definition is what it knows about your model, and API Test Creator uses this information to build suitable tests for you.

On this screen, you will:

  1. Manage methods and endpoints of operations in the model
  2. Define parameters and their properties
  3. Define test rules
  4. Define assertions
  5. Review and run tests

To add new operations (such as "GET /user" or "PUT /order") to an existing API Model manually, scroll to the bottom of the API Definitions screen, enter the Operation and optional Group Name into the provided fields, and click Add.

Define Request Parameters

On the API Definitions screen, expand one of the operations. On the Request Parameters tab, you add model parameters and describe their properties: for example, that parameter X is of type integer, is required in the query, and accepts only values within certain boundaries. BlazeMeter uses this information to automatically build tests that cover all parameters and their properties.

The properties of parameters are:

  • their data type,
  • whether the parameter occurs in the header or body,
  • the parameter's default values and boundaries, and
  • whether the parameter is required.

Input values have boundaries. If your model is based on an OpenAPI file that contains boundary values, BlazeMeter includes them in the model properties. In all other cases, define the boundaries here manually. Typically, you define minimum and maximum values or lengths for parameters such as age or size, and valid values for properties that are custom categories, such as "payment options".

From these boundaries, BlazeMeter automatically builds tests for all combinations of test values. The tests cover the given boundary values and include additional obvious cases, such as 0 and empty values, which are always tested. After you run the tests, you will be able to filter the results by "passed" or "failed" to see at one glance how well your implementation handles out-of-boundary values.
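
If your OpenAPI file contains boundary values, they typically appear as schema keywords like the following (a minimal sketch; the parameter names and the list of valid payment options are illustrative):

    parameters:
      - name: age
        in: query
        schema:
          type: integer
          minimum: 0                             # boundary: minimum value
          maximum: 100                           # boundary: maximum value
      - name: payment_option
        in: query
        schema:
          type: string
          enum: [credit_card, paypal, invoice]   # valid values for a custom category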

BlazeMeter can often derive automatically whether the same parameter (property) is used elsewhere. Where the relationship is not obvious, for example when different parameters are used as identifiers, you also have to define Data Dependencies.

Parameters:

For each operation, click Add Parameter and define one or more parameters. Each parameter has:

  • Source
    Specifies whether this parameter occurs in the body, query, path, or header.
  • Default Value
    Defines a valid default value.
  • Required
    Specifies whether this parameter is mandatory (true) or optional (false).
  • Type
    Defines the data type. Select a standard type such as string, integer, boolean, (floating point) number, or create a custom type. You edit custom types on the Model Assets screen.
  • Generated Tests
    How many tests were built to cover this parameter. Click an entry to jump straight to the Tests tab.

Properties:

For each parameter, click Add and define properties and boundary values, as applicable for the type (see the sketch after this list):

  • Format — Limit the format to one of the following types: int32 (integer), int64 (integer), float (number), double (number), string, byte (string), binary (string), Boolean, date (string), date-time (string), password (string).
  • Max Length — Specify the maximum length of this string.
  • Min Length — Specify the minimum length of this string.
  • Max Value — Specify the maximum value of this integer or number.
  • Min Value — Specify the minimum value of this integer or number.
  • Exclusive Max Value — Specify the exclusive maximum value of this integer.
  • Exclusive Min Value — Specify the exclusive minimum value of this integer.
  • Pattern — Describe valid values for this string using a regular expression according to the Ecma-262 Edition 5.1 regular expression dialect.
  • Starts With — Limit this value to strings that start with certain substrings.
  • Ends With — Limit this value to strings that end with certain substrings.
  • Contains — Limit this value to strings that contain certain substrings.
  • Multiple of — Limit integers and numbers to multiples of a given value.
  • Invalid Values — Disallow this comma-separated list of values.
  • Possible Values — Allow this comma-separated list of values.
  • Nullable — Specify true if this value can be null, and false if this value can never be null.
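
Many of these properties map directly to OpenAPI schema keywords. A string parameter that carries several of them might be defined as follows (an illustrative sketch, not a complete mapping; the parameter name is hypothetical):

    parameters:
      - name: username
        in: query
        required: true                # Required
        schema:
          type: string
          format: password            # Format
          minLength: 3                # Min Length
          maxLength: 20               # Max Length
          pattern: "^[a-z0-9_]+$"     # Pattern (a regular expression)
          nullable: false             # Nullable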

To edit or delete a parameter, hover the mouse pointer over the parameter and click the Edit or Delete icons on the right-hand side. To edit or delete a parameter's properties, click the arrow to expand the parameter, then click the +Add button to open the properties editor; here you can edit properties or click the Delete button to remove them.

Define Responses

For every API call, you want to assert that the operation only returns status codes and formats that you expect. On the API Definitions screen, expand one of the operations. On the Responses tab, click Add Response to declare one or more allowed responses for this operation:

  • Status Code
  • Content type (such as "application/json")
  • Type (a standard type such as string, or a custom type)
  • Description (a reminder to your future self why testing this response is important.)

If the status code and data do not match the allowed responses, this is considered an invalid response, and BlazeMeter will flag the test as failed.
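
In an OpenAPI file, the allowed responses for an operation are declared like this (a minimal sketch; the descriptions and types are placeholders):

    responses:
      "200":
        description: Order accepted            # why testing this response matters
        content:
          application/json:
            schema:
              type: string
      "400":
        description: Invalid order payload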

Define Data Dependencies

Stateful API calls need existing data in the database before they can be tested, and BlazeMeter uses this test data to prepare the environment against which the API Model Test runs. For example, values such as auth tokens or tester credentials do not need to be generated or boundary tested; they are fixed values from your environment. BlazeMeter will not attempt to build boundary test values for an "email" string if a data dependency maps it to the existing User.email value from an assigned API Monitoring sub-test.

Your test data depends on an API Monitoring sub-test with pre-steps in an API Monitoring bucket that you need to create. API Test Creator uses this test to prepare the environment: it pre-populates the database, initializes values, or makes any API calls required before this endpoint can be tested.

  1. On the API Definitions screen, expand one of the operations.

  2. Open the Data Dependencies tab.

  3. Click Add Data Dependency. The Create Step Setup dialog opens.

  4. Give the data dependency a descriptive name.

  5. Choose whether to run this setup step pre-test or post-test.

  6. Select an API Monitoring bucket.

  7. Select the Test Name in that bucket that initializes the environment.
    For the selected API Monitoring test, the dialog now shows a list of variables and values.

  8. Define the dependencies by mapping values to variables in API Test Creator or in the API Monitoring test, respectively.

    • Assign Mapped Values to Test Input Parameters
      Mapped API model values are the request parameters that you have defined in API Test Creator. You can assign the API Model's values to test input variables in the selected API Monitoring test.
    • Assign Test Output Parameters to Mapped API Model Parameters
      Test output parameters are variables in the selected API Monitoring test. You can assign API Monitoring variable values to Mapped API Model Parameters that you have defined in API Test Creator.
    • If an API Monitoring variable is not listed in the drop-down, you can also enter free-form text. The value can be a value parameter or a nested Custom Type.
  9. Click Add to add more mappings if needed.
  10. Click Save.

To define global data dependencies, go to the Model Assets tab and use the same procedure. When adding global data dependencies on the model level, you assign API Monitoring test variable values to request header parameters in API Test Creator. Global data dependencies are used, for example, to set authorization headers.


Define Test Rules

On the API Definitions screen, expand one of the operations and open the Test Rules tab. When you do functional testing of custom requirements, and you want to know how your application behaves for valid and invalid input, you must define your business logic as a list of conditions. BlazeMeter then builds tests that validate your API's business logic.

In BlazeMeter, you phrase your requirements as logical conditions using Boolean operators for comparisons after the pattern "If this condition is true, then I expect this to happen, and otherwise that".

Why do you need to define Test Rules? An application's business logic often relies on arbitrary human concepts or external dependencies that test tools cannot know about: For example, "A customer's age is a number between 18 and 110. If they are older than 65, the senior discount becomes active. Otherwise, the senior discount is inactive." BlazeMeter cannot logically derive such arbitrary rules, that's why you need to define them explicitly.

To define Test Rules:

  1. Ensure that you have defined the request parameters that you need.
  2. Go to the Test Rules tab.
  3. Open an operation and click Add Test Rule.
  4. Give the Test Rule a name.
  5. Define the rules: The following example defines a rule that asserts that a user with a certain age and account type is shown the "Golden Oldies" discount, and others are not.
    • If the following condition is true...
      Enter parameters and their conditions as Boolean expressions. The editor supports auto-suggest.
      Example:
      age >= 65 and (accountType == "Gold" or accountType == "Platinum")
    • Then the following assertions should be true...
      Enter the first assertion.
      Example:
      Text Body contains Golden Oldies
    • Otherwise we should expect...
      Enter the alternative assertion.
      Example:
      Text Body does not contain Golden Oldies
  6. Verify that there are no red error warnings in the editor. Resolve any error before continuing.

Test Rules Format:

  • Define conditions in the format parameter - operator - value. Accepted operators are ==, !=, >, >=, <, <=. You can group conditions using parentheses, and chain grouped conditions with "and" and "or" to form more complex expressions.
    Example: age >= 65 and (accountType == "Gold" or accountType == "Platinum")
  • Define assertions in the same format as on the Model Assets > Assertions screen. You can compare values of Headers, Body, Status Codes, and more.
  • You can use fully qualified names of parameters, such as body.name, header.auth, path.XY_ID, or pet.category.weight. The references name and body.name reference the same parameter for path generation if they are in the same expression.
  • Parameter names are case sensitive.
  • BlazeMeter does not support dots '.' in parameter names (such as address.entry), because it already uses dots as separators to mark nested levels, such as user.address.street.number. If a parameter name contains dots, surround that segment in single quotes, for example, user.'address.entry'.street.number.
  • BlazeMeter does not support parameters of a composite type; that is, you cannot have a rule like Pet == { name: my-dog, color: white }. Neither does it support arrays; that is, you cannot have a rule like Pet.tags == [dog, white].
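
Putting these format rules together, a complete Test Rule over nested and fully qualified parameters might read as follows (all names and values are illustrative):

    If the following condition is true...
      pet.category.weight >= 10 and (body.name != "" or header.auth == "admin")
    Then the following assertions should be true...
      Status Code equals 200
    Otherwise we should expect...
      Status Code equals 400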

Define Assertions

The Model Assets screen stores shared assets for the whole API, such as Assertions.

Define assertions to tell BlazeMeter how to validate responses as negative or positive test cases. Assertions are applied across all operations which fit their scope. A scope of /* means the assertion is applied to every operation.

After loading the Swagger file, BlazeMeter builds a set of basic assertions, for example:

  • If Status Code equals 200 then type positive
  • If Status Code greater than or equal 400 then type negative

Update these assertions as needed, and add custom assertions.

How to Use Negative and Positive Assertions

Why do you need two types of assertions? You want to validate not only that the "happy path" succeeds, but also that the many expected "unhappy paths" trigger helpful error messages. Positive test cases cover successful usages of API calls. Negative test cases cover expected failures that you need to handle gracefully. A common problem is that testers cannot do enough manual testing to cover all possible negative cases.

For example, you model the behavior of a banking application's API. You have to define a negative assertion to tell BlazeMeter that, in a bank, successfully "transferring -100 Dollars" to a bank account is considered stealing, even though it would be mathematically correct.

In practice, most of your assertions will be negative. That's because there are few ways to provide valid parameters, and countless ways of getting them wrong. Testers want to make sure that the app recovers gracefully from all the various wrong states, that's why you will spend more time working with negative assertions.

Example

For a valid API call, you assert status code=200 and create a positive type assertion.

  • Success reported: If a positive test succeeds, all is well and no action is required.
  • Failure reported: If a positive test starts failing, you need to deal with a regression. BlazeMeter warns you that the happy path cannot be completed.

For an invalid API call, you assert status code=400 and create a negative type assertion.

  • Success reported: If a negative test fails as expected, you have succeeded in predicting the failure (and have hopefully provided a helpful error message). BlazeMeter reports that as a test success; no action is required.
  • Failure reported: If a negative test starts succeeding, you need to deal with a regression. BlazeMeter warns you that invalid API calls are being executed unchecked.

How to Manage Assertions

On the Assertions tab, you edit, duplicate, run, and delete assertions. To run an assertion means BlazeMeter builds and runs tests based on that assertion. To manage your list of assertions, filter them by method, endpoint paths, and type, and combinations thereof.

Each assertion has a scope. For example, an assertion may apply to all GET * calls, or to all * /user/ calls, or to only one parameter, or to all calls. Thanks to the scope, you don't need to define the same assertions repeatedly.

To apply an assertion to all operations or to all paths, use an asterisk. Consequently, a scope of * * means the assertion applies to all methods and all paths.

For each assertion, first add an operation (including path) as scope:

Operation:

  • * (all operations)
  • GET /path
  • POST /path
  • DELETE /path
  • PUT /path
  • PATCH /path
  • HEAD /path
  • TRACE /path
  • OPTIONS /path

For the check, each assertion compares a source value (or its parameter, if applicable) with a target value.

  • Source:
    • Headers
    • JSON Body
    • Response Size (in bytes)
    • Response Time (in milliseconds)
    • Status Code
    • Text Body
    • XML Body
  • Parameter
    Specify the source parameter to check (for Headers, JSON Body, and XML Body only); otherwise, leave it empty.
  • Comparison
    • equals
    • does not equal
    • is empty
    • is not empty
    • contains
    • does not contain
    • is a number
    • equals (number)
    • less than
    • less than or equal
    • greater than
    • greater than or equal
    • has key
    • has value
    • is null
  • Target value
    Enter a target value for the comparison.

  • Type
    Define the assertion type to control which outcome BlazeMeter considers a test success or failure.
    • Positive tests confirm that the check succeeds as expected.
    • Negative tests confirm that the check fails as expected.
  • Apply
    Specifies whether this assertion is currently enabled or disabled. Use this toggle to switch individual assertions on or off temporarily, without having to delete and recreate them.
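
For example, an assertion that validates a field in a JSON response could be composed as follows (the scope, parameter, and values are illustrative):

  • Operation: GET /user
  • Source: JSON Body, with Parameter user.age
  • Comparison: greater than or equal, with Target value 18
  • Type: positive

Read together: if JSON Body user.age is greater than or equal to 18, the test counts as a positive-case success.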

Define Custom Types

The Model Assets tab stores shared assets for the whole API Model, such as Custom Types.

Custom types are data types that you have designed specifically for your API Model. For example, you may have a street_address parameter that is made up of three sub-parameters (see the sketch after this list):

  • a string named street_name,
  • an integer named house_number, and
  • a custom type named coordinates (which itself is an array of numbers).
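
In OpenAPI terms, this street_address custom type corresponds to a schema definition like the following (a sketch based on the example above; which fields are required is an assumption):

    street_address:
      type: object
      required: [street_name, house_number]   # assumption: these two fields are mandatory
      properties:
        street_name:
          type: string
        house_number:
          type: integer
        coordinates:                          # itself a custom type: an array of numbers
          type: array
          items:
            type: number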

On the Model Assets > Custom Types tab, click Add Custom Type. Define the Name of the custom type and click Add to specify the following fields of its sub-parameters:

  • Name — Define the name of the sub-parameter.
  • Default Value — Define a valid default value.
  • Required — Specify true if this parameter is required, and false if it is optional.
  • Type — Specify the parameter's data type.
    • A standard type, such as string, (floating point or double) number, integer, or Boolean
    • Or a custom type, a set of sub-parameters that can be standard types and custom types, or a combination of both. Circular references among custom types are not allowed.
    • Or an array of one of these types.
  • Add and define field properties in the same way as you did for the parameters on the API Definition > Operation > Request Parameters tab.

How to Generate Tests

On the API Definitions screen, expand one of the operations; the Generated Tests tab lists your automatically built tests. The initial set of tests is built from the original model to check basic parameters. After you've enriched the model by adding test rules, BlazeMeter builds more tests to help you verify that your API implementation conforms to your rules. The Test Creator chooses different test conditions depending on whether you have designated an assertion as a positive or negative case.

When you upload a new file, or change parameters, rules, or assertions, the tests are rebuilt. This means that listed tests can change, new tests are added, and outdated tests are removed automatically.

BlazeMeter builds tests that verify each known fact separately. Separation is important because if one test can break two constraints at once, the results are less informative.

Example:

For a parameter age, you have defined a minimum of 0 and a maximum of 100, and you have defined "comma" as an invalid character. BlazeMeter builds multiple "valid characters" tests:

  • BlazeMeter builds one test with a comma and one without. At the same time, it will use only valid numbers within the min/max values, because we are testing one property and not the other.
  • BlazeMeter then builds tests within, outside, and on the border of the minimum and maximum values. At the same time, it will use only valid characters, because we are testing one property, and not the other.
  • When such a test fails, you know exactly whether the root cause was an invalid character or an out-of-bounds numeric value.

After running the tests, the Last Passed, Last Executed, and Failed Reason columns show you the test results immediately.

The results are the same information that you can see aggregated on the Reports screen.

How to View Test Details and Description

Before running a generated test, you may want to confirm what the test will do, and after running a test, you may want to review specific results.

On the API Definitions tab, open an operation, and go to its Generated Tests tab. Click the name of a test to review all test details:

  • Name, method, type
  • Request (header, body, parameters).
    Click to quick-copy the request in curl format to the clipboard and paste it into your terminal for manual testing (see the example after this list).
  • Response (header, body), and status code (after the run)
  • Setup and Cleanup
  • Assertions
  • An automatically generated human-readable description.
    Example: This test is for the operation "get - /v1/promotions/". It is a Positive test verifying the properties: maximum of query.years. In this test, we use a value of '100' for query.years, which is valid. The request is expected to be Accepted which we test by applying the assertions: response.status-code equal 200 ...
  • Status and run duration in milliseconds (after the run)
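
A copied request might look like this (illustrative values that follow the description example above; the base URL and token are placeholders):

    curl -X GET "https://api.example.com/v1/promotions/?years=100" \
      -H "Authorization: Bearer <your-token>"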

How to Run Tests

On the API Definitions tab, open an operation, and go to its Generated Tests tab to see the tests that BlazeMeter has built to test these properties.

You can choose which subset of generated tests to run, and you can even group tests for different operations and endpoints that you want to run together as a custom Run Configuration.

  • Click the Run button next to an endpoint to run all generated tests for this endpoint.
  • Click the Run button next to an endpoint's method to run all generated tests for this operation.
  • Click the green Run button to run all generated tests for this API.
  • Next to the Run button, click the Run menu to repeat a Recent Run, or to run a saved Run Configuration.
    Click the Edit button on an entry in this menu to edit a configuration.

The tests will run in the API Monitoring production environment.

To create a Run Configuration:

  1. Run a test.
  2. Click the Run Menu and click the Edit icon for this run.
  3. In the Configuration Name field, enter a name that sums up its purpose.
  4. In the Description field, optionally write what this configuration is to be used for.
  5. Select one or more Operations that you want to test.
  6. Click Save.

To edit existing Run Configurations:

  1. Click the Ellipsis next to the green Run button and select Show All / Manage Configurations.
    The Saved Configurations Window opens and shows all Run Configurations, their Last Run date and time, and their Descriptions.
  2. Hover the mouse over a configuration to see available actions:
    • Click Run Configuration to run this set of tests.
    • Click Edit Configuration to edit the name or description, or to add or remove operations from this test set.
    • Click Clone Configuration to use this configuration as a base for another.
    • Click Delete Configuration to remove a configuration entry from the Run Menu.

How to View Test Reports and Coverage

When reading the test reports, you are typically looking for answers to the following questions:

  • Which tests failed, for which property?
  • Why did they fail?
  • Which assertions passed/failed?
  • Which test rules passed/failed?
  • When did a test start failing after a series of successes? When did it start succeeding after a series of failures?
  • How many tests were built in total, and how many passed/failed?
  • Which endpoints are covered by tests, and which parameters and properties are covered?

The Coverage tab helps you analyze which parameters and which endpoints are covered by tests, which boundary values these tests will use, and whether they are positive or negative tests. BlazeMeter automatically describes each test in concise natural language.

The API Definitions > Operation > Tests tab gives you quick insight into the results for a specific operation. If the Status column indicates "Not Run", click the green Run button to run the tests first.


Visit the Reports screen to evaluate all results in context over time. For each recent test run, the Reports screen gives you a color-coded overview of the Results, the overall Status, which Run Configuration was used, and which User ran the tests.

  • Red and green bar segments indicate the passed/failed status for each run, so you can tell at a glance which build flipped the test results from passed to failed, or from failed to passed.
  • Orange indicates an external blocker that prevented the test from running.
  • Gray indicates the tests were not run yet.


Under Run Date, click a timestamp to drill down into which tests passed and failed in a particular historical run. Expand each tested endpoint to inspect details, such as Type and Status. The Last Passed and Last Executed columns help you identify which test switched from passed to failed, and the Failed Reason column shows you why.


To get details on the headers, body, query strings and so on, click an individual test to open the Details window: here you can review its method, type (positive or negative), the parameter and its property being tested, and details about the Request/Response and the Assertions.

  • On the Request tab, you can copy the request in curl format for manual testing. You can view the Request URL, Query String Parameters, Request Headers, and Request Body.
  • On the Response tab, you can review Response Headers and Body.
  • On the Assertions tab, you can review the condition of the assertion being tested.

Click Show Run Details to split the window and compare these details in context.

Click Close to close the Details window and return to the Generated Tests screen.