XLT 4.11.0

Test Framework

Improvements for Data-Driven Tests

Run test with a single data set during development – When running a test case with a data sets file attached, the test case is normally executed multiple times, once for each data set in the file. During test case development or maintenance, when you often run the test just to see whether it passes now, this automatic multiplication can become annoying. That’s why you can now annotate the test case with the index of the data set to use during development:

@DataSetIndex(2)    // use the 3rd data set
public class MyTest 
{
...
}

Don’t forget to remove or comment out this annotation once you are done with maintaining the test case.

Data set support in load tests – When a test case with an attached data sets file is part of a load test, XLT does not run it multiple times, but only once. Previously, XLT even ignored the test data in the data sets file altogether. Hence, you had to provide default test data in your test case, or your test case would break. This is no longer necessary.

From now on, XLT will inject a single data set from your data sets file into your test case, and you control which one. By default, the first data set is taken. If the property com.xceptance.xlt.data.dataSets.loadtest.pickRandomDataSet is set to true, a data set is chosen randomly each time the test is executed. If the test case is annotated with @DataSetIndex, the data set with the specified index is chosen. Note that the annotation takes precedence over the property.
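For example, to let XLT pick a random data set for each execution during a load test, set the property mentioned above in your test suite configuration (which properties file it belongs in may depend on your project setup):

```properties
# pick a random data set from the data sets file on each execution
com.xceptance.xlt.data.dataSets.loadtest.pickRandomDataSet = true
```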

Comment lines in CSV data set files – XLT now supports comment lines in CSV data set files. This is useful not only to add comments, but also to temporarily disable certain data sets without deleting them. The # character is the default line comment marker, but you can configure another character if needed:

com.xceptance.xlt.data.dataSetProviders.csv.lineCommentMarker = %
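As an illustration, a data sets file using the default # marker might look like this (the column names and values are made up for the example):

```csv
# login credentials for the test environment
email,password
jane.doe@example.com,secret1
# the following data set is temporarily disabled
#john.doe@example.com,secret2
jim.doe@example.com,secret3
```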

Improvements for Java-based Script Test Cases

Mouse coordinates as separate values – Some script command methods require passing a mouse pointer position, i.e. contextMenuAt, mouseDownAt, mouseMoveAt, and mouseUpAt. Previously, you had to specify the mouse position as a single string parameter in the format “x,y”. These methods now have an overloaded version that lets you pass the mouse position as separate coordinates as well:

contextMenuAt("css=#foo", "10,20")
contextMenuAt("css=#foo", 10, 20)

Helpful stack trace for failed …AndWait commands – If an ...AndWait command fails in a Java-based script test case, the stack trace shown is no longer that of one of XLT’s internal threads, but that of the test case thread. This makes it easier to locate the failed command in the test case code.

Other Improvements

Selenium updated – Selenium has been updated to the latest available version, 3.12.0. Make sure you also update the driver binaries for all the browsers you want to use in your test cases. See below for a list of links to download the driver binary for your browser:

Change the list of items stored in a data provider – The class DataProvider reads test data items from a file, stores them in memory, and returns a randomly chosen item when requested by a test case. Previously, the list of test data items could not be changed at runtime. Now, it is possible to add items to the data provider, or remove items from it, while the (load) test is still running. For example, if a certain coupon code is no longer applicable, the test case might remove it from the data provider so that it won’t be used any longer.
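To illustrate the idea, here is a minimal, self-contained sketch of a data provider whose item list can be modified while other threads request random items. Note that this is a conceptual example only, not the actual XLT DataProvider API; the class and method names are made up:

```java
import java.util.List;
import java.util.Random;
import java.util.concurrent.CopyOnWriteArrayList;

// Conceptual sketch (not the XLT API): a provider whose item list can be
// safely modified while test threads concurrently request random items.
public class MutableDataProvider
{
    // copy-on-write list tolerates concurrent reads during add/remove
    private final List<String> items = new CopyOnWriteArrayList<>();
    private final Random random = new Random();

    public void add(final String item)
    {
        items.add(item);
    }

    public void remove(final String item)
    {
        items.remove(item);
    }

    public String getRandomItem()
    {
        return items.get(random.nextInt(items.size()));
    }

    public int size()
    {
        return items.size();
    }
}
```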

Transaction runtimes also in dev mode – When running a test case, XLT stores all measurements and other information to the corresponding timers.csv file. However, transaction data was previously available only when the test case was run as part of a load test, not when running it in development mode, i.e. from within your IDE or via a JUnit runner. From now on, this data is available in either case.

Each execution of a test method results in a separate transaction entry in the appropriate timers.csv file. In the case of a data-driven test, you will get one entry per data set, as expected. But note that this feature comes with some subtle behavioral changes:

Any method annotated with @BeforeClass or @AfterClass in your test case won’t be part of the transaction runtime measurement any longer. Likewise, exceptions thrown in any of these methods will no longer appear in timers.csv.

Please check whether your existing load test cases make use of @BeforeClass / @AfterClass methods. If so, please turn them into @Before / @After methods. This shouldn’t be a breaking change, as load test cases can have only one test method anyway, and in that case it makes no difference whether you annotate your before/after methods with @Before / @After or @BeforeClass / @AfterClass.

Load Testing

Success Criteria Validation Tool

If you run your load tests not manually but in an automated fashion, you might also want to qualify the results automatically. For example, if a load test violates some basic success criteria, there is probably no point in running further load tests afterwards, and you might want the automated process to fail.

To make this possible, you need two things: a set of formal success criteria derived from your requirements, and a command-line tool to check them against the load test results. While we cannot really help you with the former, we have provided the latter: XLT now ships with the criteria validation tool check_criteria.sh. The tool reads success criteria definitions from a JSON file and applies them to one or more XML files of your choice. Its return code indicates whether or not all success criteria were met.

<xlt>/bin/check_criteria.sh -c success-criteria.json -o validation-results.json <xlt>/reports/20180516-183822/testreport.xml

You can use the tool with any kind of XLT reports as they all host an XML file with the bare result data in their root directory. These files are named testreport.xml for regular load test reports, diffreport.xml for comparison reports, and trendreport.xml for trend reports.

For more details on how to define success criteria and how to run the tool, see Criteria Validation Tool in the How-To section of the user manual.

XLT Jenkins Plugin

XLT pipeline step returns a result object – Since XLT 4.10, our Jenkins plugin has been able to take part in a pipeline natively. In this release, we have extended the plugin to return all useful result information bundled as a result object so that this data can be evaluated in the pipeline if needed.

def r = xlt stepId: 'any-step-id', xltTemplateDir: '/path/to/xlt'
echo "Run failed: ${r.runFailed} | Report URL: ${r.reportUrl}"
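Building on the example above, you could then, for instance, fail the pipeline explicitly when the load test run failed. This sketch uses only the runFailed and reportUrl fields shown above together with Jenkins’ built-in error step; any further fields of the result object are documented in the Jenkins How-To:

```groovy
def r = xlt stepId: 'any-step-id', xltTemplateDir: '/path/to/xlt'
if (r.runFailed) {
    error "Load test failed - see report at ${r.reportUrl}"
}
```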

Please see the Jenkins How-To for all the details on this result object.

Create a comparison report – Previously, the plugin offered the option to create a summary report or a trend report based on the results of the last N builds. Now, the plugin can also generate a comparison report that compares the results of the current build with those of a given baseline build.


Additionally, you may also specify a success criteria definition file to let the comparison report be evaluated automatically. See Success Criteria Validation Tool for more information.