AI Log Analysis Report
The AI Log Analysis tab summarizes lengthy log files in a helpful, human-readable format.
Use case
After running a Performance test, BlazeMeter may alert you to errors. To investigate the cause of those errors and warnings, you download multiple log files from multiple engines, extract them, search through them, correlate them, and read third-party documentation before you find a solution. Manual log analysis is quite time intensive.
- To help you pinpoint the root causes, BlazeMeter AI Log Analysis presents you with a summary of pertinent error messages and warnings that helps you understand common issues happening during test executions.
- The analysis covers all logs across engines so you don't have to search through several logs.
- If applicable, it gives you recommendations for remediation.
- The summary includes links to pertinent log files to help with your investigation.
- The analysis distinguishes whether issues stem from the system under test or from the test script.
AI Consent
BlazeMeter's AI-assisted features require explicit consent from the account owner before team members can use them. By default, AI features are disabled. For more information, see AI Consent.
Use AI to analyze the log
You can generate AI insights only for completed test runs of Performance tests and Browser-Based tests. Multi-tests are not supported.
Follow these steps:
- On the Performance tab, select Reports. The most recent reports are shown on top.
- Click Show all reports and select a report to view its details.
- Start the AI Log Analysis:
  - Either click Run AI Log Analysis on the Summary tab.
  - Or click Run AI Log Analysis on the AI Log Analysis tab.
- Wait while BlazeMeter analyzes the logs and shows you a list of detected errors grouped by solution. Each group is assigned a Priority value based on how the errors impact your test.
- Expand the list of errors and view the suggested solutions (a sketch of this structure follows the steps):
  - Title
  - Original Message: Indicates the original error message or warning found in the log.
  - Explanation: Informs you what this log message means and in which situations it typically occurs, so you can evaluate the impact and relevance for your system or test.
  - Recommendation: Contains typical solutions to solve this issue or recommends best practices to avoid a warning, if applicable.
  - Found in Logs: Click Check Logs to jump to the Logs tab, where you can download the relevant log files. If you are using multiple engines, this action displays only the engines relevant to the error.
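To make the structure of each result group concrete, here is a minimal Python sketch. The class and field names are illustrative assumptions that mirror the UI labels above; this is not a BlazeMeter API.

```python
from dataclasses import dataclass, field

# Illustrative model only -- not a BlazeMeter API.
# Field names mirror the UI labels described above.
@dataclass
class ErrorGroup:
    title: str                 # Title of the error group
    original_message: str      # Original Message found in the log
    explanation: str           # Explanation of what the message means
    recommendation: str        # Recommendation, if applicable
    origin: str                # assumed: "system_under_test" or "test_script"
    found_in_logs: list[str] = field(default_factory=list)  # relevant engines/log files
    priority: int = 0          # Priority value, 0-100 (see below)
```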
Priority value
To help you know which errors are the most critical, AI assigns each group of errors a priority value. Priority values range from 0 to 100, with 100 being the most critical (a mapping sketch follows this list):
- 90-100: Critical infrastructure issues, authentication failures, system crashes
- 70-89: High-frequency errors, significant functionality failures
- 50-69: Moderate impact errors, performance issues, configuration problems
- 30-49: Low-frequency issues, minor functionality problems
- 0-29: Cosmetic issues, warnings, non-blocking problems
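As an illustration, a priority value can be mapped to the bands above as follows. Only the band boundaries come from this documentation; the helper function itself is hypothetical.

```python
def priority_band(priority: int) -> str:
    """Map a 0-100 priority value to its documented severity band."""
    if not 0 <= priority <= 100:
        raise ValueError("priority must be between 0 and 100")
    if priority >= 90:
        return "Critical infrastructure issues, authentication failures, system crashes"
    if priority >= 70:
        return "High-frequency errors, significant functionality failures"
    if priority >= 50:
        return "Moderate impact errors, performance issues, configuration problems"
    if priority >= 30:
        return "Low-frequency issues, minor functionality problems"
    return "Cosmetic issues, warnings, non-blocking problems"
```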
AI assigns priority values based on the following:
| What is being assessed | Weight | Types of errors |
|---|---|---|
| Impact on Test Success | 40% | Errors that prevent tests from running or cause complete failures get higher priority |
| Error Frequency | 25% | More frequent errors (higher errorCount) get higher priority |
| Error Origin Category | 20% | Whether the error stems from the system under test or from the test script |
| Scope of Impact | 15% | Errors that appear in more tests and files get higher priority |
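BlazeMeter does not document the exact scoring formula. Purely as a rough sketch, assuming each factor is first normalized to a 0-100 sub-score, a weighted sum with the documented weights could look like this (the normalization and all names are assumptions):

```python
# Rough sketch only: the real scoring formula is not documented.
# Assumes each factor has already been normalized to a 0-100 sub-score.
WEIGHTS = {
    "test_success_impact": 0.40,  # Impact on Test Success
    "error_frequency": 0.25,      # Error Frequency (from errorCount)
    "origin_category": 0.20,      # Error Origin Category
    "scope_of_impact": 0.15,      # Scope of Impact (tests and files affected)
}

def priority_value(scores: dict[str, float]) -> int:
    """Combine normalized sub-scores (0-100 each) into one 0-100 priority."""
    return round(sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS))

# A frequent error that breaks the test outright lands in a high band:
# 0.40*95 + 0.25*80 + 0.20*70 + 0.15*40 = 78
```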
Filter results
If necessary, select the category by which you want to filter the results (a filtering sketch follows this list):
- System Under Test
- Test Script
- Show all (the default)
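Continuing the hypothetical ErrorGroup sketch from above, the category filter corresponds to selecting groups by their assumed origin field, with None standing for the Show all default:

```python
def filter_by_origin(groups, origin=None):
    """Return the ErrorGroups matching an origin category; None means Show all."""
    if origin is None:
        return groups  # default: Show all
    return [g for g in groups if g.origin == origin]

# e.g. filter_by_origin(groups, "system_under_test") keeps only
# issues stemming from the system under test.
```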