Reporting functionality in Taurus is delegated to a special category of modules. An internal facility reads results from executors, aggregates them, and feeds them to the configured reporters. Reporters are specified as a list under the top-level config key reporting; by default it is configured with two reporters:
```yaml
---
reporting:
- final_stats
- console
```
The example above uses the shorthand form for specifying reporters. The full form uses dictionaries and allows specifying additional settings for each reporter:
```yaml
---
reporting:
- module: final_stats
- module: console
```
Possible reporting modules are listed below.
This is the simplest reporter; it just prints a few basic KPIs to the console log after test execution, for example:
```
18:04:24 INFO: Samples count: 367, 8.17% failures
18:04:24 INFO: Average times: total 0.385, latency 0.115, connect 0.000
18:04:24 INFO: Percentile 0.0%: 0.125
18:04:24 INFO: Percentile 50.0%: 0.130
18:04:24 INFO: Percentile 90.0%: 1.168
18:04:24 INFO: Percentile 95.0%: 1.946
18:04:24 INFO: Percentile 99.0%: 2.131
18:04:24 INFO: Percentile 99.9%: 3.641
18:04:24 INFO: Percentile 100.0%: 3.641
```
This reporter is enabled by default. To enable it explicitly, use the following config; some additional options are available:
```yaml
---
reporting:
- module: final_stats
  summary: true  # overall samples count and percent of failures
  percentiles: true  # display average times and percentiles
  failed-labels: false  # provides list of sample labels with failures
  test-duration: true  # provides test duration
  dump-xml: filename to export data in XML format
  dump-csv: filename to export data in CSV format
```
Dump Summary for Jenkins Plot Plugin
The two options dump-csv and dump-xml allow exporting final cumulative stats into files that can be used by the Jenkins Plot Plugin to plot historical data inside Jenkins. Prefer CSV, as it is much easier to use with the Plot Plugin. The XML format can also be used with other tools to automate results processing.
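For automated processing outside Jenkins, the dumped CSV can be read with any standard CSV library. A minimal Python sketch follows; the inline sample data and the exact column names (succ, fail, etc.) are assumptions for illustration, so check the header row of your own dump file:

```python
import csv
import io

# Hypothetical contents of a dump-csv file; the column names used
# below are assumptions for illustration, not a documented schema.
sample = """\
avg_ct,avg_lt,avg_rt,perc_50.0,perc_90.0,succ,fail
0.000,0.115,0.385,0.130,1.168,337,30
"""

reader = csv.DictReader(io.StringIO(sample))
row = next(reader)  # cumulative stats are a single summary row

failures = int(row["fail"])
total = failures + int(row["succ"])
print(f"failure rate: {100.0 * failures / total:.2f}%")  # -> failure rate: 8.17%
```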
This reporter shows a fullscreen dashboard with some KPIs and even ASCII-art graphs.
This reporter is enabled by default. To enable it explicitly, use the following config:
```yaml
---
reporting:
- console
```
There is a module settings section for the Console Screen, containing the option disable. It allows easily disabling the fullscreen display by using the command-line switch -o:
```
bzt config.yml -o modules.console.disable=true
```
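Since -o overrides map dotted paths onto nested config keys, the switch above is equivalent to this config file fragment:

```yaml
---
modules:
  console:
    disable: true
```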
On Windows, the Console Screen is shown in a separate window, and users may change the font size by holding the Ctrl key and using the mouse wheel. Two additional options, dummy-cols and dummy-rows, affect the size of the dummy screen that is used for non-tty output.
As often happens with tools focused on executing tests, they are unable to provide sufficient reporting functionality. As a professional user, you need centralized storage to access test results in a convenient and interactive way, compare different executions, see trends over time, and collaborate with your colleagues. BlazeMeter.com offers such a service, with both commercial and free-of-charge versions.
The simplest way to get a taste of BlazeMeter reporting is to use the -report command-line switch. This enables feeding results to the service with no other settings required. You will receive a link to your report in the console text, and the link will be opened automatically in your default browser; see the browser-open option for more tuning.
The official policy for BlazeMeter reports uploaded from Taurus is that anonymous reports are kept for 7 days; if you're using your own account, reports are kept according to your account's retention policy. For details, see the BlazeMeter service website.
If you want the results to be stored in your existing BlazeMeter account, you'll need to specify the reporting settings in your configuration file. Get the API token from BlazeMeter.com (find it under Settings => API Key) and put it into the token option:
```yaml
---
modules:
  blazemeter:
    token: TDknBxu0hmVnJ7NqtG2F
```
It is highly recommended to place the token setting in your personal per-user config ~/.bzt-rc to prevent it from being logged and collected in artifacts.
Now you can use the -report command-line switch, or you can set BlazeMeter reporting as part of your config. The test option specifies the test name to use; project names a group of tests:
```yaml
---
reporting:
- module: blazemeter
  report-name: Jenkins Build 1
  test: Taurus Demo
  project: Taurus Tests Group
```
Advanced settings:
```yaml
---
modules:
  blazemeter:
    address: https://a.blazemeter.com  # reporting service address
    data-address: https://data.blazemeter.com  # data service address
    browser-open: start  # auto-open the report in browser,
                         # can be "start", "end", "both", "none"
    send-interval: 30s  # send data each n-th second
    timeout: 5s  # connect and request timeout for BlazeMeter API
    artifact-upload-size-limit: 5  # limit max size of file (in megabytes)
                                   # that goes into zip for artifact upload,
                                   # 10 by default

    # the following settings take effect when there are no per-reporter settings
    report-name: My Next Test  # if you use the value 'ask', it will be asked from command line
    test: Taurus Test
    project: My Local Tests
```
Note how easy it is to set report settings from the command line, e.g. from inside a Jenkins build step:
```
bzt mytest.yml -o modules.blazemeter.report-name="Jenkins Build ${BUILD_NUMBER}"
```
This reporter provides test results in JUnit XML format, parsable by the Jenkins JUnit Plugin. The reporter has two options:

- filename (the full path of the report file to write)
- data-source (which data to use as the source: sample-labels or pass-fail)
If sample-labels is used as the data source, the report will contain URLs with test errors. If pass-fail is used, the report will contain pass/fail criteria information.
Sample configuration:
```yaml
---
reporting:
- module: junit-xml
  filename: /path_to_file/file.xml
  data-source: pass-fail
```
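Once written, the file can be post-processed with any JUnit-aware tool. A minimal Python sketch follows; the inline document is a stand-in for the reporter's output and assumes the common JUnit XML convention (testsuite > testcase, with failure/error child elements), so verify it against a real file from your run:

```python
import xml.etree.ElementTree as ET

# Stand-in for a junit-xml report; element names follow the common
# JUnit XML convention, which is an assumption here, not Taurus's
# documented output schema.
doc = """\
<testsuite name="taurus" tests="2">
  <testcase name="http://example.com/ok"/>
  <testcase name="http://example.com/bad">
    <failure message="assertion failed"/>
  </testcase>
</testsuite>
"""

root = ET.fromstring(doc)
cases = root.findall("testcase")
failed = [c.get("name") for c in cases
          if c.find("failure") is not None or c.find("error") is not None]

print(f"{len(failed)} of {len(cases)} cases failed: {failed}")
```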
The aggregating facility module is set through the general settings; by default it is:
```yaml
---
settings:
  aggregator: consolidator
```
The consolidator has several settings:
```yaml
---
modules:
  consolidator:
    generalize-labels: false  # replace digits and UUID sequences
                              # with N and U to decrease label count
    ignore-labels:  # sample labels from this list
                    # will be ignored by results reader
    - ignore
    buffer-seconds: 2  # this buffer is used to wait
                       # for complete data within a second
    percentiles:  # percentile levels to track,
                  # 0 also means min, 100 also means max
    - 0.0
    - 50.0
    - 90.0
    - 95.0
    - 99.0
    - 99.9
    - 100.0
```
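To illustrate what generalize-labels does, here is a rough Python sketch of the substitution; the exact patterns Taurus applies internally may differ, this only shows the idea of collapsing variable parts of labels:

```python
import re

# Rough illustration: UUID-like sequences become U and digit runs
# become N, so per-user or per-session URLs collapse into one label.
# These regexes are an assumption, not Taurus's actual implementation.
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")
DIGITS_RE = re.compile(r"\d+")

def generalize(label: str) -> str:
    label = UUID_RE.sub("U", label)  # UUIDs first, so their digits become U, not N
    return DIGITS_RE.sub("N", label)

print(generalize("/user/1234/profile"))
# -> /user/N/profile
print(generalize("/session/123e4567-e89b-12d3-a456-426614174000"))
# -> /session/U
```

With this option enabled, hundreds of per-entity labels aggregate into a handful of generalized ones, which keeps the stats readable and the label count bounded.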
Note that increasing buffer-seconds can make results aggregation more robust, at the price of delaying the analysis.