BlazeMeter MCP Server usage examples

 

The BlazeMeter MCP Server enables AI assistants to interact with BlazeMeter through natural language. Below are real-world examples showing what you can accomplish, from querying workspace data to running full test lifecycles, all driven by conversational prompts.

The examples listed here complement the primary use of the MCP server, which is configuring and executing BlazeMeter tests.

Prerequisites: BlazeMeter AI consent must be enabled for your account. All examples below were executed using GPT-5 as the backing model. Results may vary depending on the LLM client and model used.

Use case 1: top 10 concurrency analysis

Instantly identify the tests consuming the most virtual users in your workspace. The MCP server scans every test configuration and ranks them by peak concurrency, giving you immediate visibility into resource-heavy tests.

Category: Data and insights

Tools called: 8

Prompt

List the top 10 tests with the highest concurrency usage in the default workspace configured in the account. For each test, please provide the test name, test ID, and the maximum concurrency used.

  1. The MCP reads your user profile to find the default workspace.

  2. It fetches the workspace and enumerates all projects.

  3. It scans every test's configured load across all projects.

  4. It ranks tests by maximum configured concurrency and returns a table.
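The ranking step above can be sketched in a few lines. This is a minimal illustration, assuming the test configurations have already been fetched; the record layout and location names are hypothetical, not the actual BlazeMeter API schema.

```python
# Rank tests by peak configured concurrency.
# Record shapes below are illustrative, not the BlazeMeter API schema.
tests = [
    {"name": "demoblaze-test-simple", "id": 15143426,
     "locations": {"southamerica-east1-a": 50, "sa-east-1": 50}},
    {"name": "DEMO", "id": 15146247, "locations": {"us-east-1": 20}},
]

def max_concurrency(test):
    # Peak load is the sum of virtual users across all configured locations.
    return sum(test["locations"].values())

top = sorted(tests, key=max_concurrency, reverse=True)[:10]
for t in top:
    print(t["name"], t["id"], max_concurrency(t))
```

Summing per-location virtual users is what makes the 100-VU test above outrank the 20-VU tests even though no single location exceeds 50 VUs.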

Response

The MCP pulled the default workspace and scanned all tests' configured load to compute the highest concurrency per test.

| Test name | Test ID | Max concurrency |
| --- | --- | --- |
| demoblaze-test-simple | 15143426 | 100 |
| DEMO | 15146247 | 20 |
| DemoBlaze_PERFECTO | 15136625 | 20 |
| realistics | 15134463 | 20 |
| OfficeHours | 15130429 | 20 |
| Test1 | 15130220 | 20 |
| realistics | 15130370 | 20 |
| realistics | 15130179 | 20 |
| func_abs | 14847449 | 20 |
| func_lr_xml_find | 14801735 | 20 |
Most tests in this workspace have a configured concurrency of 20 or less. The test "demoblaze-test-simple" (ID: 15143426) has the highest configured concurrency at 100, distributed across two locations (southamerica-east1-a: 50, sa-east-1: 50).

MCP tools used

blazemeter_user, blazemeter_project, blazemeter_workspaces, blazemeter_tests


Use case 2: most frequently run tests

Category: Data and insights

Tools called: 45

Prompt

Retrieve the top 10 most frequently run tests from the default configured workspace. For each test, display its name, ID, and the total number of times it has been executed.

 

  1. The MCP reads your user profile and resolves the default workspace.

  2. It lists all projects and all tests within the workspace.

  3. For each test, it queries the execution history to count total runs.

  4. It issues up to 45 tool calls, parallelized for speed, and returns a ranked table.
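The counting step reduces to a frequency tally over the execution history. A minimal sketch, assuming the executions have already been retrieved; the record shape is illustrative, not the actual BlazeMeter API schema.

```python
from collections import Counter

# Count executions per test from a flat execution history.
# Record shapes are illustrative, not the BlazeMeter API schema.
executions = [
    {"testId": 14837827}, {"testId": 14837827}, {"testId": 15136625},
    {"testId": 14837827},
]

runs = Counter(e["testId"] for e in executions)

# most_common(10) yields the top-10 ranking shown in the response table.
for test_id, total in runs.most_common(10):
    print(test_id, total)
```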

Response

| Test name | Test ID | Total runs |
| --- | --- | --- |
| Untitled test June_12_12:56 PM | 14837827 | 8 |
| DemoBlaze_PERFECTO | 15136625 | 1 |
| DemoBlaze | 15130137 | 1 |
| OfficeHours | 15130429 | 1 |
| Test1 | 15130220 | 1 |
| UnitTests | 14534132 | 1 |
| UnitTests | 14534100 | 1 |
| UnitTests | 14534061 | 1 |
| UnitTests | 14534039 | 1 |
| UnitTests | 14534025 | 1 |

On request, the results can also be exported as CSV or annotated with direct execution URLs.

MCP tools used

blazemeter_user, blazemeter_project, blazemeter_tests, blazemeter_execution


Use case 3: interactive workspace dashboard

Build a comprehensive, visually appealing dashboard through a series of six natural-language prompts. The MCP identifies your account, analyzes costs for scenarios like Black Friday and smoke tests, maps resource distribution across workspaces, performs error analysis on weekly executions, and generates optimization recommendations — all assembled into a single interactive HTML dashboard.

Category: Dashboard and visualization

Tools called: 30

Prompts

This use case chains six consecutive prompts into one unified dashboard:

1. Who am I at BlazeMeter and what's my default workspace?

2. What will it cost to run a large-scale performance test? (Scenarios: Black Friday, smoke test, daily baseline)

3. How are testing resources distributed across our organization?

4. Show me all the test executions from this week with an error analysis.

5. How can we optimize our testing costs and efficiency?

6. Can you create, based on the whole conversation, an HTML dashboard to look up all of the information together?

  1. Identity & workspace: The MCP identifies the account owner (e.g., Joaquin Araujo) and retrieves the default workspace structure with key findings.

  2. Cost analysis: It calculates cost breakdowns for multiple scenarios — Black Friday large-scale load, smoke tests, and daily baselines — with detailed insights.

  3. Resource distribution: It maps resource consumption across all workspaces in the organization, revealing which workspaces consume the most.

  4. Weekly executions & error analysis: It pulls all test executions from the current week and performs error analysis on each. Successful tests show clean results. Tests with critical issues — such as assertion failures related to specific devices (e.g., Samsung Galaxy S6) — get detailed diagnostics and improvement recommendations.

  5. Optimization recommendations: The BlazeMeter MCP generates actionable recommendations to optimize resources, reduce costs, and improve testing efficiency across the account.

  6. Dashboard generation: Finally, it assembles all of the above into a single interactive HTML dashboard — a visually appealing interface that consolidates every answer from the conversation into one shareable view.

Key highlights

  • Account overview: Shows who you are, your role, and the default workspace structure at a glance.

  • Multi-scenario cost modeling: Side-by-side cost breakdowns for Black Friday, smoke test, and daily baseline scenarios with insights.

  • Resource distribution: Visualizes how testing resources are spread across workspaces — immediately spots imbalances and heavy consumers.

  • Error analysis with device context: Weekly executions are analyzed individually; critical issues like assertion failures on specific devices get recommendations for improvement.

  • AI-driven optimization: The MCP proactively suggests ways to optimize your testing costs and resource usage.

  • One-click dashboard: Everything is packaged into an interactive HTML page — no BI tools, no manual aggregation.

MCP tools used

blazemeter_user, blazemeter_workspaces, blazemeter_project, blazemeter_tests, blazemeter_execution, blazemeter_billing, blazemeter_skills


Use case 4: end-to-end test lifecycle

From account discovery to a fully configured test in BlazeMeter — all in one conversation. Create a Taurus YAML script, upload it, find the nearest load location to Uruguay (Brazil), configure a baseline, tap into the built-in help system for advanced settings, and generate an error summary from a previous run to share with colleagues.

Category: Test management, Cost and planning, Reporting

Tools called: 25

Prompt

This use case chains multiple prompts into a full test lifecycle:

1. Who am I at BlazeMeter and what's my default workspace?

2. Create a single-request test to the demo website using Taurus and upload it into BlazeMeter.

3. I'm testing the BlazeMeter website from LATAM and I know one of the new locations is in Uruguay. How can I get closer with my load?

4. I only want to set up a baseline. What's the ideal concurrency configuration? Please apply the recommended configuration into the test.

5. How do I configure JMeter properties at BlazeMeter? (This triggers the built-in Skills & Help system)

6. I ran a different test last week that got some errors. Give me a detailed summary of the errors to share with my colleagues. Markdown format is fine.

  1. Identity & workspace: The MCP identifies the account owner and retrieves the default workspace structure with available information.

  2. Script creation & upload: It generates a Taurus YAML with a single request to the demo website and creates the test in BlazeMeter, uploading the script automatically.

  3. Location detection: You mention testing from LATAM and ask about getting closer to Uruguay. The MCP queries available locations and identifies that Brazil (São Paulo) is the nearest BlazeMeter region, configuring it on the test.

  4. Baseline configuration: You ask for a baseline setup. The MCP recommends an ideal concurrency configuration for that scenario and applies it directly to the test.

  5. Skills & Help system: A question about configuring JMeter properties triggers the built-in BlazeMeter Skills and Help system, which returns the solution directly in the conversation.

  6. Error summary from previous run: You reference a different test that ran last week with errors. The MCP retrieves that execution, performs a detailed error analysis (identifying assertion failures, problems, and next steps), and generates a Markdown-formatted summary ready to share with colleagues.
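The script-creation step (step 2) amounts to assembling a minimal single-request Taurus configuration. The sketch below builds one programmatically; the target URL, load values, and the use of Taurus's JSON config format (which Taurus accepts as an alternative to YAML) are illustrative choices, not necessarily what the MCP produces.

```python
import json

# Build a minimal single-request Taurus configuration.
# URL and load values are illustrative placeholders.
config = {
    "execution": [{
        "concurrency": 5,
        "hold-for": "1m",
        "scenario": "single-request",
    }],
    "scenarios": {
        "single-request": {
            "requests": ["https://www.demoblaze.com/"],
        },
    },
}

# Serialize for upload; Taurus also reads JSON configs directly.
text = json.dumps(config, indent=2)
print(text)
```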

Verifiable in the BlazeMeter UI

After the conversation, the test is fully visible in BlazeMeter:

Configuration tab: Shows the Taurus YAML script created during the conversation.

Load configuration: The baseline concurrency settings that were recommended and applied.

Location: Brazil (São Paulo) — the location identified as closest to Uruguay.

Key highlights

  • Full lifecycle in one conversation: From "who am I" to a fully configured, verifiable test in BlazeMeter.

  • Automatic geo-detection: Say "close to Uruguay" and the MCP finds Brazil as the nearest location.

  • Taurus script generation: The MCP writes the YAML and uploads it — no local file management needed.

  • Baseline recommendations: Ask for a baseline and the MCP suggests and applies the ideal concurrency.

  • Built-in knowledge access: The Skills and Help system answers advanced configuration questions (e.g., JMeter properties) without leaving the conversation.

  • Cross-execution error analysis: Reference a previous run and get a detailed, shareable error summary in Markdown.

MCP tools used

blazemeter_user, blazemeter_workspaces, blazemeter_project, blazemeter_tests, blazemeter_execution, blazemeter_skills, blazemeter_help


Use case 5: cost estimation and credit planning

Before running a test, understand exactly what it will cost. The MCP server retrieves your workspace entitlements, analyzes the test configuration, and calculates estimated credit consumption — so you can plan confidently without billing surprises.

Category: Cost and planning

Tools called: 3

Prompt

I need to plan a new load test and want to understand the cost.

Can you first give me a summary of my default BlazeMeter workspace, including my current credit entitlements and key capabilities like concurrency limits?

Once you have that, please calculate the estimated credit consumption for running the test named demoblaze-test-simple for 30 minutes.

  1. The MCP fetches your workspace details (entitlements, limits, locations).

  2. It retrieves the test configuration (concurrency, executor, location distribution).

  3. It calculates credit consumption based on the VU-hour billing model.

Response

Workspace summary:

| Property | Value |
| --- | --- |
| Workspace | Abstracta (ID: 1752425) |
| Credits available | 39,979 credits |
| Locations available | 58 public, 11 private |
| Max concurrency/location | 40,000 |
| Max engines/location | 170 |
| Max concurrency/engine | 1,500 |

Test configuration:

| Property | Value |
| --- | --- |
| Test | demoblaze-test-simple (ID: 15143426) |
| Executor | JMeter (Taurus on Cloud) |
| Configured concurrency | 100 |
| Locations | 50% southamerica-east1-a (GCP São Paulo), 50% sa-east-1 (AWS São Paulo) |

Estimated credit consumption for 30 minutes:

100 VUs × 30 min = 3,000 VU-minutes = 50 VU-hours = 50 credits
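The arithmetic above follows directly from the VU-hour model, assuming 1 credit per VU-hour as in this workspace's plan. A minimal sketch:

```python
# Estimate credits under the VU-hour billing model,
# assuming 1 credit = 1 VU-hour (as in this workspace's plan).
def estimated_credits(virtual_users: int, duration_minutes: float) -> float:
    vu_minutes = virtual_users * duration_minutes
    vu_hours = vu_minutes / 60
    return vu_hours  # 1 credit per VU-hour

print(estimated_credits(100, 30))  # → 50.0
```

With 39,979 credits available, a 50-credit run is well within the workspace's entitlement.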

MCP tools used

blazemeter_user, blazemeter_workspaces, blazemeter_tests, blazemeter_billing

Use case 6: root cause error analysis

Hand the MCP server a failed execution URL and get a full post-mortem: KPI review, error categorization, root-cause hypothesis, and a prioritized to-do list with specific Taurus YAML and JMeter fixes — all from a single prompt.

Category: Troubleshooting

Tools called: 8

Prompt

Act as an expert Performance Test Engineer specializing in troubleshooting failed tests run with Taurus on BlazeMeter.

Your task is to conduct a root cause analysis of the test execution found at the following URL: https://a.blazemeter.com/app/#/accounts/.../masters/80038475/summary

Please follow this analysis plan:

  1. Initial Summary & KPI Review

  2. Detailed Error Analysis

  3. Root Cause Hypothesis (Taurus Focus)

  4. Actionable To-Do List


  1. The MCP parses the execution URL and extracts account, workspace, project, and execution IDs.

  2. It fetches summary, error, and request statistics reports in parallel.

  3. It retrieves the test configuration to review Taurus/JMeter settings.

  4. It analyzes the data and produces a structured root cause report.
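The URL-parsing step can be sketched with a small regular expression. The path segments (`accounts`, `workspaces`, `projects`, `masters`) are assumptions based on the URL format shown in the prompt; the IDs below are placeholders.

```python
import re

# Pull numeric IDs out of a BlazeMeter execution URL.
# Path segment names are assumptions based on the URL shown in the prompt.
URL_IDS = re.compile(r"/(accounts|workspaces|projects|masters)/(\d+)")

def extract_ids(url: str) -> dict:
    return {name: int(value) for name, value in URL_IDS.findall(url)}

url = ("https://a.blazemeter.com/app/#/accounts/123456/workspaces/1752425/"
       "projects/789/masters/80038475/summary")
print(extract_ids(url))
```

The `masters` ID is the one that identifies the execution report to fetch.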

Response

KPI summary:

| Metric | Value |
| --- | --- |
| Max VUs | 1 |
| Throughput | 25.78 req/s |
| Avg response time | 34.68 ms |
| Error rate | 87.44% (1,330 / 1,521) |
| Pass/Fail criteria | None configured |

Error breakdown:

| Error type | Count | Percentage |
| --- | --- | --- |
| 415 Unsupported Media Type | 570 | 42.86% |
| Non HTTP: Host may not be blank | 380 | 28.57% |
| 412 Precondition Failed | 190 | 14.29% |
| 405 Method Not Allowed | 190 | 14.29% |

Root cause: Failures are dominated by client-side scripting mistakes — malformed URLs (http:/+), wrong HTTP methods, and missing Content-Type headers — not performance saturation.

Actionable to-do list:

  • Immediate: Fix URL construction, correct HTTP methods to POST, set Content-Type: application/json.

  • Configuration review: Add pass/fail thresholds (errors > 1%, avg-rt > 500ms), centralize base URL.

  • Further investigation: Confirm API contract via cURL, enable request/response body logging, run a smoke test before scaling.

The MCP also provides a corrected Taurus YAML snippet:

```yaml
scenarios:
  demoblaze-signup:
    default-address: https://api.demoblaze.com
    headers:
      Content-Type: application/json
    requests:
      - url: /signup
        method: POST
        body: '{"username": "${username}", "password": "${password}"}'
        assert:
          - subject: http-code
            contains: [200, 201]
```

MCP tools used

blazemeter_execution, blazemeter_tests, blazemeter_skills, blazemeter_help

Use case 7: test comparison and executive reporting

Compare two executions side-by-side and produce a stakeholder-ready report. The MCP server extracts KPIs from both runs, computes deltas, highlights regressions and improvements, and delivers a clear recommendation — ready to paste into an email or Slack.

Category: Reporting

Tools called: 4

Prompt

Act as a Performance Analyst reporting directly to the Head of Product.

Compare the following two BlazeMeter test executions:

Test A (Baseline): https://a.blazemeter.com/.../masters/80037741/summary

Test B (New Release): https://a.blazemeter.com/.../masters/80038931/summary

Structure your report as:

  1. Executive Summary (The Bottom Line)

  2. Side-by-Side KPI Comparison (markdown table)

  3. Key Insights & What Stands Out

  4. Recommendation

 

  1. The MCP fetches full reports for both executions in parallel.

  2. It computes KPI deltas and detects regressions or improvements.

  3. It frames findings in business-impact language.

  4. It outputs a formatted executive report.
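The delta computation in step 2 is straightforward percentage change over each KPI pair. A minimal sketch using the KPI values from this comparison:

```python
# Percentage change from a baseline run to a new run.
def pct_change(baseline: float, new: float) -> float:
    return (new - baseline) / baseline * 100

kpis = {  # (Test A baseline, Test B new release)
    "avg_response_ms": (198, 571),
    "throughput_hits_s": (0.875, 20),
}

for name, (a, b) in kpis.items():
    print(f"{name}: {pct_change(a, b):+.1f}%")
```

A sign-aware delta like this is what lets the report flag the +188% response-time change as a regression while treating the throughput increase as an improvement.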

Response

Executive summary: Test B sustained ~23x higher throughput at 20x concurrency but with ~2.9x slower average responses and 0% errors. Outcome: Fail for release decision — 1-second burst test with a different scenario cannot validate steady-state user experience.

Side-by-side comparison:

| KPI | Test A (Baseline) | Test B (New Release) |
| --- | --- | --- |
| Max Concurrent Users | 1 | 20 |
| Average Response Time | 198 ms | 571 ms |
| Error Rate | 0% | 0% |
| Total Throughput | 0.875 hits/s | 20 hits/s |
| Test Status | Pass | Fail (inconclusive) |

Key insights

  • At 20x higher concurrency, the new release adds ~373 ms to average response time, which could reduce perceived snappiness and lower conversion.

  • Throughput scaled ~23x while latency rose ~2.9x; reasonable scaling but unverifiable in a 1-second run.

  • Scenario mismatch: Test A exercised multiple flows; Test B hit only one endpoint. Not apples-to-apples.

  • Test B ran for ~1 second with only 20 samples, far too short to evaluate stability.

Recommendation: Do not proceed with release based on Test B. Re-run with a like-for-like scenario, sustained duration (10–15 min steady state), and defined pass/fail thresholds. If release is time-sensitive, ship behind a feature flag with canary monitoring.

MCP tools used

blazemeter_execution, blazemeter_tests, blazemeter_skills