The Mark59 User Guide
Document Convention for the Mark59 File Download
2. Getting Started: Install and Quick Start Demo
3. Mark59 Selenium Scripting Quick Start
Dependencies and the Sample Projects
Running a Script from Eclipse: main()
Script Parameters: additionalTestParameters
Script Properties: mark59.properties
User Error Handling: userActionsOnScriptFailure()
The methods of SeleniumIteratorAbstractJavaSamplerClient
Using Devtools Chrome Protocol (CDP)
Running the DevTools sample script.
DevTools CDP transactions in Mark59 Trend Analysis and Reports
The Maven Build for Selenium Scripts
Deploying a Mark59 Selenium Script to JMeter
Mark59 (Selenium and non-Selenium) Script Results Output
Mark59 Selenium Scripting: Tips and Tricks
What to time (using the startTransaction and endTransaction methods)
How to verify and time an event (such as a Page Load)
Sleep times and logging (in the Page Load script excerpt above) ..
Using Page Elements for Clarity
Additional Parameters - Proxies
Additional Parameters - Switching on DevTools Automatically
Use Chromium instead of Chrome
Keeping tests independent of the default machine browser
Choosing a particular (or new) version of CDP/Selenium
5. JMeter Execution: Tips & Tricks
Implementing SeleniumIteratorAbstractJavaSamplerClient
JSR223 Sampler Selenium Scripting
6. The Mark59 Web Apps: Technical Overview
A Worked Example of Adding a Profile, a Command and its Parser
Download Server Profiles Using Excel
Security - API Basic Authentication
Adding a Basic Authentication Header
The --ignoredError (-e) Parameter
The 'Copy All' and 'Delete All' options
Metrics Event Mapping Administration
The matching procedure and using a generic 'catch all' entry.
14. Continuous Integration Quick Start Demo
15. Continuous Integration Design
Continuous Integration Job Flow General Design
Start JMeter Distributed Server(s) [Not part of the Jenkins Demo]
Setup for Distributed Testing in Jenkins
Distributed Test Sample Commands
Port and Security Considerations (Distributed Tests in AWS)
Distributed Testing References
Appendix A. Chrome Browser Setup
Appendix B. Download ChromeDriver
Appendix C. MySQL Installation
Appendix D. JMeter/Selenium Test Injector Capacity
Mark59 is an open-source, Java-based set of tools that aims to make performance testing of applications regular and repeatable, with a focus on the automated detection of SLA breaches within a test, and on highlighting trends and issues with the application under test over time. It is designed so that this is achievable without necessarily needing purely specialist performance test skills.
We’ve avoided calling Mark59 a ‘framework’ in this guide, as that word is often used in test automation to describe ‘clever’ software that hides or overlays the core technologies, denying the people using it a fair chance to learn the skills they actually need in the industry. Rather, our integration of two popular test automation products, JMeter and Selenium, along with the work we have put into our examples and documentation, hopefully gives Performance Testers and Automation Testers an insight into scripting skills that can be easily learnt and that benefit each other’s skill sets.
Mark59 is designed to run on Windows or Linux-based operating systems, and is compatible with Macs.
The Mark59 components have primarily been targeted to work with vanilla JMeter (i.e., no special hacks or plugins needed). It can also produce trend analysis reporting for Loadrunner and Gatling tests (some limitations apply).
Mark59 consists of three relatively easy-to-use Java Spring Boot web applications, a few mainly JMeter related integration artefacts, and some sample projects to get you started. The web applications are independent of each other, so you can choose which one(s) you want to use. They are:
The integration artefacts:
Throughout this guide “mark59” will be used to refer to the directory where you have placed the root of the Mark59 release zip file (downloaded from the Mark59 website for Linux, Windows or Mac, and unzipped), rather than repeatedly having to describe it as such.
Also, sample links to the Mark59 web applications are used in this guide. The links assume the applications are loaded with the supplied data and started as per the ‘Quick Start Demo’.
The idea of the Quick Start Demo is to install and demonstrate execution of the components of Mark59 in a way (you won’t be surprised to hear) that is designed to be the quickest, easiest way possible. Note the demo will create and write data into directory C:\Mark59_Runs (Windows) or ~/Mark59_Runs (Linux). We are assuming local ports 8081, 8083 and 8085 are free. You can check port allocation with netstat -an (Win) or sudo netstat -lntup (Linux).
Mac Users.
Although Mark59 is designed to execute on Windows or Linux machines, it will run on a Mac, and this Quick Start Demo can be followed. You may need to perform extra actions at times.
For example, when you download the special Mac chromedriver and try to open it, you’ll get a message like “chromedriver cannot be opened because it is from an unidentified developer”. You will initially need to ‘open with Terminal’ -> Open (a screen should open which includes version information). Similarly for the Mac Commands. You may also need to set file permissions (eg chmod 777 Mac_Start_Metrics_H2_DemoMode.command).
On Windows and Linux machines, the shell commands that get executed using the Metrics application open a new terminal window, which displays output as the command(s) are running. On Macs however, these commands only show output at the end of execution directly on a Metrics application page, rather than opening a separate terminal like Win and Linux do.
To run the JMeter jmx samples directly from the JMeter GUI, and to get the working directory to align with the values in mark59.properties, start JMeter from a terminal with commands like:
cd ~/apache-jmeter/bin
~/apache-jmeter/bin/jmeter
As long as you have Java and the Chrome browser on your machine, there is no real install as such. This demo uses the H2 database option for Mark59 which creates all its table definitions and data on the fly. Other database options are available for a more permanent setup. They are discussed later in the guide and need to be installed. What you do need to do is download a few things and put them in the right places, which we cover here.
Open a command window and type: java -version. You should see something like:
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment (build 17.0.2+8-86)
OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)
As long as you see a version of 17 or higher you are good to go. If not, for the purposes of this demo just install OpenJDK 17, or any Java version you want that’s higher than that.
Download the JMeter binaries (latest version). Extract and copy so that the root is C:/apache-jmeter (Win) or ~/apache-jmeter (Linux/Mac home). That is, don’t include the version number in the directory name.
Download the current release zip file from the Mark59 website and unzip it somewhere. For these instructions we assume you unzipped to C:/mark59 or ~/mark59 (shortened from now on to just mark59, as per the Document Convention).
Go to the mark59/bin directory, and run StartAllMark59WebApplications_H2_DemoMode (.bat for Win, .sh for Linux). Three command windows will start running, one for each of the applications.
For Mac, go to mark59/bin/mac_commands and run
Mac_Start_Datahunter_H2_DemoMode.command,
Mac_Start_Metrics_H2_DemoMode.command and
Mac_Start_Trends_H2_DemoMode.command
Unless something drastic has gone wrong, you should see something like
Started ApplicationEntry in ... seconds towards the end of each of the three consoles’ output.
The Mark59 Metrics application is designed to run Windows or Linux commands, either locally or remotely, to obtain metrics from a server, as well as to run Groovy scripts. However, you can also write a command that does things other than capture metrics: in Windows (admittedly a bit awkwardly) you can execute a DOS command, and in Linux/Mac any shell script. We take advantage of this here, where we run commands from the Metrics application that execute the major components of Mark59. You can look at the commands (a shortcut is the ‘Selected Commands’ link on the Server Profile page) to understand what’s being executed.
mpstat, the utility used to obtain CPU stats, is not installed on Linux by default. If it is not installed on your machine (confirm by trying the command mpstat 1 1), install it:
sudo apt-get install sysstat (Debian, Ubuntu based systems),
yum install sysstat (CentOS, RedHat or Fedora)
The Mark59 web applications need to be running for the demo, so start them (see the last step of the install instructions) if they aren’t already running. Leave the three command windows that are started during the demo open (when you close them, the web applications stop).
In a browser open http://localhost:8085/mark59-metrics/serverProfileList
User/Password is demo/demo. (Do not use these in a live environment!)
You may need to download a version of the ChromeDriver that is compatible with your machine and the version of Chrome you are using (note that versions of Linux and Windows ChromeDrivers are supplied, but may be out of date by the time you run this demo).
If you do not have Chrome at all on your machine, just download the latest version.
See Appendix A. Chrome Browser Setup.
To download a chromedriver (and find your current Chrome version), see Appendix B. Download ChromeDriver.
Once you have downloaded the ChromeDriver, use it to replace the existing ChromeDriver file at
mark59/dataHunterPerformanceTestSamples/chromedriver.exe (Win) or
mark59/dataHunterPerformanceTestSamples/chromedriver (Linux/Mac)
This step is going to copy required artefacts to JMeter, then start a JMeter test that runs the DataHunter application using Selenium scripts, and also collects local server metrics.
Depending on your o/s, in the Metrics application click on the Server Profile Name
DemoWIN-DataHunterSeleniumDeployAndExecute,
DemoLINUX-DataHunterSeleniumDeployAndExecute or
DemoMAC-DataHunterSeleniumDeployAndExecute
Click the ‘Run Profile’ button… but just before you do:
You should also open and view/tail the JMeter log at apache-jmeter/bin/jmeter.log
during execution. This is especially useful for Mac users, as the summary results (example below) are only output in the ‘Command Log’ section of the Metrics application at the end of the test run (which takes a few minutes).
For Windows and Linux, a command window opens during execution and you should finish with something like this:
Starting JMeter DataHunter test ...
Creating summariser <summary>
Created the tree successfully using C:\mark59\mark59-metrics\..\mark59-datahunter-samples\test-plans\DataHunterSeleniumTestPlan.jmx
Starting standalone test …………
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
OS property has been set to : WINDOWS
summary + 1 in 00:00:06 = 0.2/s Avg: 5368 Min: 5368 Max: 5368 Err: 0 (0.00%) Active: 3 Started: 3 Finished: 0
summary + 4 in 00:00:30 = 0.1/s Avg: 16763 Min: 4511 Max: 32106 Err: 0 (0.00%) Active: 5 Started: 5 Finished: 0
summary = 5 in 00:00:36 = 0.1/s Avg: 14484 Min: 4511 Max: 32106 Err: 0 (0.00%)
summary + 5 in 00:00:31 = 0.2/s Avg: 14621 Min: 4886 Max: 32080 Err: 0 (0.00%) Active: 5 Started: 5 Finished: 0
summary = 10 in 00:01:07 = 0.1/s Avg: 14552 Min: 4511 Max: 32106 Err: 0 (0.00%)
summary + 5 in 00:00:37 = 0.1/s Avg: 25374 Min: 14845 Max: 32085 Err: 0 (0.00%) Active: 0 Started: 5 Finished: 5
summary = 15 in 00:01:44 = 0.1/s Avg: 18159 Min: 4511 Max: 32106 Err: 0 (0.00%)
Tidying up ……….
... end of run
Press any key to continue . . .
Executing the DataHunter Selenium Test in non-GUI mode (partial output, Windows)
All operating systems do artefact copies to JMeter and run the JMeter test in non-GUI mode.
You should also see Selenium opening and running the Datahunter application in Chrome browsers.
Once the run has finished, hopefully with Err: counts of 0, you can view the test results output file at Mark59_Runs/Jmeter_Results/DataHunter/DataHunterTestResults.csv
Chrome and ChromeDriver Mismatch Problem.
If you start seeing nasty exception errors being printed which include a message like
Response code 500. Message: session not created: This version of ChromeDriver only supports Chrome version 106
It means the version of Chrome you have on your machine and the ChromeDriver you are using don’t line up. Just go back to the ‘Align the ChromeDriver to be used to the version of Chrome on your machine’ step and make sure you get the right ChromeDriver.
This step uses the Enhanced JMeter Reporting utility in Mark59 to generate JMeter reports split by data type.
Depending on your o/s, in the Metrics application click on the Server Profile Name:
DemoWIN-DataHunterSeleniumGenJmeterReport,
DemoLINUX-DataHunterSeleniumGenJmeterReport or
DemoMAC-DataHunterSeleniumGenJmeterReport
Click the ‘Run Profile’ button.
The commands work slightly differently for each operating system, but all call a script from the mark59/mark59-results-splitter directory to generate four reports. Again, note Windows and Linux open a command window, for Mac the output appears at the end of execution in the application Command log window.
Results Splitter starting.. Version: 5.3
cwd = C:\mark59\mark59-results-splitter
JmterResultsConverter executing using the following arguments
--------------------------------------------------------------
inputdirectory : C:\Mark59_Runs\Jmeter_Results\DataHunter
outputdirectoy : C:\Mark59_Runs\Jmeter_Results\DataHunter\MERGED
outputFilename : DataHunterTestResults_converted.csv
errortransactions : No
eXcludeResultsWithSub : True
metricsfile : SplitByDataType
cdpfilter : ShowCDP
--------------------------------------------------------------
initializeCsvWriter C:\Mark59_Runs\Jmeter_Results\DataHunter\MERGED\DataHunterTestResults_converted.csv
initializeCsvWriter C:\Mark59_Runs\Jmeter_Results\DataHunter\MERGED\DataHunterTestResults_converted_CPU_UTIL.csv
initializeCsvWriter C:\Mark59_Runs\Jmeter_Results\DataHunter\MERGED\DataHunterTestResults_converted_DATAPOINT.csv
initializeCsvWriter C:\Mark59_Runs\Jmeter_Results\DataHunter\MERGED\DataHunterTestResults_converted_MEMORY.csv
Processing CSV formatted Jmeter Results File DataHunterTestResults.csv at Thu Mar 02 15:12:10 AEDT 2023
^
DataHunterTestResults.csv processing completed ………. :
285 file lines processed
270 transaction samples loaded
took 0 secs
MERGED bypassed
____________________________________
270 Total samples written
…... report generation output follows.
Splitting the JMeter results file just before the JMeter report generation steps run (partial output, Windows)
A quick way to look through the generated reports is to open a browser window to C:/Mark59_Runs/Jmeter_Reports/DataHunter/ (Win) or ~/Mark59_Runs/Jmeter_Reports/DataHunter/ (Linux/Mac),
then drill up and down to open each of the four index.html files.
This step adds results for the DataHunter selenium test that has just been run into the Trends database. The database comes loaded with some historical sample tests, so first open the Trends application to the DataHunter 90th percentile response time graph:
http://localhost:8083/mark59-trends/trending?reqApp=DataHunter.
Observe that a run tagged as a ‘BASELINE’ appears as the front row of the graph.
In the Metrics application, depending on your o/s click on the Server Profile Name
DemoWIN-DataHunterSeleniumTrendsLoad,
DemoLINUX-DataHunterSeleniumTrendsLoad or
DemoMAC-DataHunterSeleniumTrendsLoad
Click the ‘Run Profile’ button.
As with the previous commands, each works slightly differently for each operating system, with Windows and Linux opening a command window showing output, and for Mac output appearing at the end of execution in the application Command log window. All load the Trends database by running jar file mark59/mark59-trends-load/target/mark59-trends-load.jar.
JdbcSQLNonTransientConnectionException
Unfortunately, sometimes on Windows machines when you try to run the Trends database load using H2, you may get an error like org.h2.jdbc.JdbcSQLNonTransientConnectionException: Connection is broken: "java.net.SocketException: Permission denied: connect: … "
The workaround is to close the Trends web application (closing the bat window titled ‘StartTrendsFromTarget.bat’), and then retry.
Once the Trends load has completed, you can restart the Trends app using mark59\mark59-trends\StartTrendsFromTarget.bat
Below is an example of the expected output - in this run a single SLA has failed, highlighted in red (SLA Failed Warning):
Load DataHunter Test Results into Mark59 Trends Analysis h2 database.
Starting Trends Load.. Version: 5.3
TrendsLoad executing using the following arguments
--------------------------------------------------
application : DataHunter
input : C:\Mark59_Runs\Jmeter_Results\DataHunter
database : h2
reference : No argument passed. (a reference will be generated)
tool : Jmeter
dbserver : localhost (all db settings hard-coded for h2
dbPort :
dbSchema : trends
dbxtraurlparms :
dbUsername : sa
eXcludestart : 0 (mins)
captureperiod : All (mins)
ignoredErrors :
simulationlog : simulation.log
simlogcustoM :
keeprawresults : false
timeZone : Australia/Sydney
------------------------------------------------
2022-09-27 13:00:45.041 INFO 24520 --- [ main] com.mark59.trends.load.ApplicationEntry : Starting …
2022-09-27 13:00:45.041 INFO 24520 --- [ main] com.mark59.trends.load.ApplicationEntry : The following 1 profile is active: "h2"
2022-09-27 13:00:46.005 INFO 24520 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2022-09-27 13:00:51.144 INFO 24520 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2022-09-27 13:00:51.555 INFO 24520 --- [ main] com.mark59.trends.load.ApplicationEntry : Started ApplicationEntry in 7.071 seconds (JVM running for 7.751)
Spring configuration complete, h2 database in use.
***************************************************************************************
* The use of a H2 database is intended to assist in tutorials and 'quick start'. *
* Please use a MySQL or PostgreSQL database for more formal work. *
* *
***************************************************************************************
Processing CSV formatted Jmeter Results File DataHunterTestResults.csv at Tue Sep 27 13:00:51 AEST 2022
^
DataHunterTestResults.csv file uploaded at Tue Sep 27 13:00:51 AEST 2022 :
273 file lines processed
258 transaction samples created
took 0 secs
MERGED bypassed (only files in the input folder with a suffix of .xml, .csv or .jtl are processed)
____________________________________
258 Total samples written
Run start time set as 202303021502 [ Thu Mar 02 15:02:02 AEDT 2023, Timestamp 1677729722983 ] with a duration of 1 minutes.
Run period of 20230302_150202 - 20230302_150341
Epoch Range (msec) [ 1677729722983 : 1677729821210 ]
Run reference has been set as 20230302_150202
Collation of test transactional data starts at Thu Mar 02 15:15:37 AEDT 2023
Collation of test transactional data completed Thu Mar 02 15:15:37 AEDT 2023. Took 0 secs
TrendsLoad: SLA Failed Warning : DH_lifecycle_0200_addPolicy has failed it's 90th Percentile Response Time SLA as recorded on the SLA database !
response was 0.421 secs, SLA of 0.400
TrendsLoad: No metric SLA has failed (as recorded on the SLA Metrics Reference Database)
Trends Load completed.
2022-09-27 13:00:51.803 INFO 24520 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-09-27 13:00:51.813 INFO 24520 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
Press any key to continue . . .
A typical execution of TrendsLoad - loading test results data into Trends (Windows), highlighting an SLA failure
Refresh/Open the Trends application to the DataHunter 90th percentile response time graph:
http://localhost:8083/mark59-trends/trending?reqApp=DataHunter.
Observe that a run has been added to the graph and the ‘BASELINE’ run now appears as the second row.
As you go through the other various ‘graph’ selections, you should see the last results also added to the front of the graphs. The one exception is the Linux/Mac graph for MEMORY. It will just have the last run displayed, as the historical data was executed on Windows which uses differently named metrics. Also, the Mac CPU command gives figures very different from the historical (Windows) ones.
Done :)
Machine sizing for Selenium scripting.
Unless you are running the demo from a powerful laptop or desktop computer, you are quite likely to get a CPU_UTIL SLA failure as in the Trends Load output above. The demo runs the whole of Mark59, including the system under test (DataHunter), on one computer, so not surprisingly it will be stressed. We know from experience that properly sized servers acting as load generators are more than capable of executing multiple selenium scripts concurrently. Obviously, however, there are limits to how much you can realistically expect to run using browsers driven by selenium.
From time to time we will update our estimated load generator capacities, please see
Appendix D. JMeter/Selenium Test Injector Capacity
Percentiles Reporting Discrepancies in JMeter (Low Volume Runs)
At low sample counts, such as in the demo, the 90th percentiles etc. reported in the JMeter report ‘Dashboard’ can differ from those reported on the Mark59 Trend Analysis graph. At very low volumes the Dashboard values are especially inconsistent; it appears to be using an estimate - see this Stack Overflow question. Strangely, the JMeter chart for ‘Response Time Percentiles’ is usually the same as, or at least closer to, the Mark59 Trend Analysis 90th percentile value. A difference can arise from the inclusion (JMeter) or exclusion (Mark59) of the first datapoint at the 90% ordinal ranking. In any case, the JMeter report discrepancies become insignificant with higher sample counts (but see the next box below).
Tip: The transaction averages in the JMeter report ‘Dashboard’ are accurate at any volume, so if you want to validate your transaction timing in the ‘Quick Start’ demo, you can compare the Trend Analysis TXN_AVERAGE graph to the reported JMeter averages.
Percentiles Reporting Discrepancies in JMeter (High Volume Runs)
By default, the percentile calculations in JMeter are only calculated on the last 20,000 transaction samples. Yep, strange but true. So if you’re running a high volume test, it’s possible to see differences between the Mark59 Trend Analysis graph and the JMeter reporting of percentiles, particularly if the last 20,000 transactions were significantly different from the transaction set as a whole.
To remedy this oddity in JMeter you can set a JMeter property: jmeter.reportgenerator.statistic_window = -1
See https://jmeter.apache.org/usermanual/generating-dashboard.html and this article
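For example, with a standard JMeter install the property can be added to apache-jmeter/bin/user.properties:

# consider all samples (not just the last 20,000) in percentile calculations
jmeter.reportgenerator.statistic_window=-1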
If you intend to do any Selenium Scripts using the Mark59 integration into JMeter, go through The Selenium Quick Start Demo in the next chapter.
If you have the general idea of what’s going on you can execute or repeat the demo by running scripts directly from the mark59/bin directory.
Windows:
StartAllMark59WebApplications_H2_DemoMode.bat
TestRunWIN-DataHunter-Selenium-DeployAndExecute.bat
TestRunWIN-DataHunter-Selenium-GenJmeterReport.bat
TestRunWIN-DataHunter-Selenium-TrendsLoad.bat **
Linux:
StartAllMark59WebApplications_H2_DemoMode.sh
TestRunLINUX-DataHunter-Selenium-DeployAndExecute.sh
TestRunLINUX-DataHunter-Selenium-GenJmeterReport.sh
TestRunLINUX-DataHunter-Selenium-TrendsLoad.sh
Mac : (in the mark59/bin/mac_commands directory)
Mac_Start_Datahunter_H2_DemoMode.command
Mac_Start_Metrics_H2_DemoMode.command
Mac_Start_Trends_H2_DemoMode.command
TestRunMAC-DataHunter-Selenium-DeployAndExecute.command
TestRunMAC-DataHunter-Selenium-GenJmeterReport.command
TestRunMAC-DataHunter-Selenium-TrendsLoad.command
These scripts, and the scripts they point to, can be useful examples for the basis of your Continuous Integration (Jenkins) automation too.
Application Urls :
http://localhost:8081/mark59-datahunter
http://localhost:8083/mark59-trends
http://localhost:8085/mark59-metrics (demo/demo)
Reports:
C:/Mark59_Runs/Jmeter_Reports/DataHunter/ (win)
~/Mark59_Runs/Jmeter_Reports/DataHunter/ (linux/mac)
** If you get a JdbcSQLNonTransientConnectionException when running TrendsLoad, refer to the note a few pages earlier about the problem.
In this chapter we assume a working knowledge of Java, an understanding of Maven basics, and that you have set up and know your way around your favourite Java IDE for Maven-based projects. There’s no need to read this chapter if you do not intend to use Mark59’s selenium integration in your JMeter test plans.
Writing a Mark59 selenium script is pretty similar to writing any other selenium script. In order to allow integration with JMeter, certain methods are required, but the general idea is to allow an existing selenium script to be converted to a Mark59 selenium script without too much effort. A higher quality of script is needed than may sometimes be found in a poorly written automation suite, particularly where ‘thread.sleeps’ have just been used to make a script work. The mark59-dsl-samples project should help you if you are starting from scratch.
This demonstration steps through the scripts executed during ‘The Quick Start Demo’, but also shows how to run the selenium scripts using your IDE.
A prerequisite is that you have a Java IDE installed. If you are using Eclipse, we suggest you install the Spring Tools for Eclipse plugin. It’s not necessary, but a ‘nice to have’ when dealing with Spring. Especially useful is the ‘Boot Dashboard’ view for easily running Spring Boot web apps, which is what Mark59 uses. STS4, which comes with Spring Tools included and is an easy install on Linux and Windows, may be a good IDE option if you don’t have a particular favourite. These instructions assume an Eclipse-based IDE, such as STS4 or Eclipse itself.
Java Versioning: Note that Selenium scripting only requires Java 8+ (all Mark59 Maven Central artefacts are currently Java 8). However, Java 17+ is required for the Mark59 web applications, Trends Load and the Metrics API. That means you need to run JMeter using Java 17+ when using the Metrics API in JMeter, as in ‘The Quick Start Demo’. Also, as the Selenium team is planning to move from Java 8 (to 11), we suggest generally just using Java 17+ for everything if you can.
If you haven’t already done so, follow the Mark59 ‘Install’ steps in the last chapter. Leave the Mark59 applications running (the final step).
Tip: You don’t need to, but we suggest you go through ‘The Quick Start Demo’ first. JMeter script deployment and execution is already covered there, and also how you create and access JMeter Reports and the Trend Analysis graph (also covered in the last step of this demo).
In Eclipse: File > Import > Existing Maven Projects > Next > browse to the mark59 install directory
The sample projects:
mark59-datahunter-samples and
mark59-dsl-samples
should appear in the list.
> Finish
Exactly the same process as in The Quick Start Demo (ChromeDrivers have been included at the root of the mark59-datahunter-samples project, but with Chromium being updated on a monthly release cycle, the drivers go out of date pretty quickly).
From project mark59-datahunter-samples, open class
com.mark59.datahunter.samples.scripts.DataHunterBasicSampleScript
right click > Run As > Java Application
Hopefully you will see a few DataHunter screens pop up as selenium drives through them, and a list of transaction names with a response message of PASS at the bottom of the console output.
Repeat for the other classes in the same package:
DataHunterBasicRegressionScript,
DataHunterLifecycleIteratorPvtScript,
DataHunterLifecyclePvtScript,
DataHunterLifecyclePvtScriptUsingRestApiClient
The second and third classes on the list are the ones executed during the Quick Start Demo.
Tip for IntelliJ users: pom.xml entries for Mark59 projects that have a JMeter dependency tag it with a scope of ‘provided’:
<scope>provided</scope>
When you set up your run/debug configuration, you need to tick “Include dependencies with provided scope”. Also, always ensure the Working Directory is the project root (the default used by Eclipse).
In DataHunterLifecyclePvtScript, just after the line:
jm.endTransaction("DH_lifecycle_0001_loadInitialPage");
insert lines:
jm.startTransaction("DH_lifecycle_0002_hasMyChangeWorked");
SafeSleep.sleep(200);
jm.endTransaction("DH_lifecycle_0002_hasMyChangeWorked");
Run DataHunterLifecyclePvtScript, and confirm the new transaction appears in the console output, with a response time value of around 200 (milliseconds)
This will create the jar file that is needed in order to execute the DataHunter scripts in JMeter.
Run (or click the Run toolbar icon) > Run Configurations > right click on Maven Build > New Configuration
Fill in the details for the mark59-dsl-samples project (Eclipse style for base dir shown):
Name : mark59-dsl-samples
Base directory: ${workspace_loc:/mark59-dsl-samples}
Goals : clean package install
Apply and Run
Then for the mark59-datahunter-samples project
Name : mark59-datahunter-samples
Base directory: ${workspace_loc:/mark59-datahunter-samples}
Goals : clean package
Apply and Run
If all went OK, you should have created a new mark59-datahunter-samples.jar file in the mark59/mark59-datahunter-samples/target directory, and a subdirectory with that jar’s dependencies called mark59-datahunter-samples-dependencies. These (along with a few other artefacts) are what gets deployed to JMeter.
Perform the steps as outlined in ‘The Quick Start Demo’, or alternatively just use the scripts as listed in ‘The Very Quick Start Demo’ from the mark59/bin directory:
Windows:
TestRunWIN-DataHunter-Selenium-DeployAndExecute.bat
TestRunWIN-DataHunter-Selenium-GenJmeterReport.bat
TestRunWIN-DataHunter-Selenium-metricsRunCheck.bat *
Linux:
TestRunLINUX-DataHunter-Selenium-DeployAndExecute.sh
TestRunLINUX-DataHunter-Selenium-GenJmeterReport.sh
TestRunLINUX-DataHunter-Selenium-metricsRunCheck.sh
Mac : (in the mark59/bin/mac_commands directory)
TestRunMAC-DataHunter-Selenium-DeployAndExecute.command
TestRunMAC-DataHunter-Selenium-GenJmeterReport.command
TestRunMAC-DataHunter-Selenium-TrendsLoad.command
You should observe your new DH_lifecycle_0002_hasMyChangeWorked transaction listed in the JMeter Transactions Report,
C:/Mark59_Runs/Jmeter_Reports/DataHunter/ (Win) or ~/Mark59_Runs/Jmeter_Reports/DataHunter/ (Linux/Mac),
and added to the latest DataHunter run displayed in Trends:
http://localhost:8083/mark59-trends/trending?reqApp=DataHunter
Done.
* If you get a JdbcSQLNonTransientConnectionException when running metricsRunCheck, refer to the note in the previous chapter about the problem.
In this chapter we assume you have completed The Selenium Quick Start Demo and have the Java/Selenium skills as outlined in that chapter.
This chapter is a deeper look at how Mark59 Selenium scripting works. Considerable effort has been made to provide good Javadoc for the Mark59 classes accessed during scripting, and that should be the primary source of documentation for script development. This chapter focuses on structure and concepts used.
There is no extra Mark59 install required in order to build and execute Mark59 selenium Java scripts. Once a script is written and tested, typically in an IDE, the only build actually needed is a Maven build on the project containing the script(s), which creates a jar file and a library of dependencies that need to be copied into the 'lib/ext' directory of the JMeter instance running the test. See The Maven Build for Selenium Scripts, and also the comments in the plugins segment of the pom.xml file of the mark59-datahunter-samples project.
The necessary Mark59 dependency is accessed via Maven from the Central Repository. The Maven entry needed in your project pom.xml for a Mark59 selenium script is:
<dependency>
<groupId>com.mark59</groupId>
<artifactId>mark59-selenium-implementation</artifactId>
<version>5.3</version>
</dependency>
Mark59 has the capability of taking advantage of the Chrome DevTools Protocol (introduced on the release of Selenium 4), covered later in this chapter.
The mark59-datahunter-samples project contains sample Mark59 Selenium scripts, which give good coverage of the functionality available in Mark59. They expect a DataHunter instance to be up and running in order to work.
The Selenium DSL (Domain-Specific Language - a term coined by Martin Fowler while working at ThoughtWorks) defined in the mark59-dsl-samples project is referenced in the mark59-datahunter-samples project. It gives a good starting point for your Selenium scripting, but may need to be extended depending on the Html or JavaScript structures used by the application you are testing. To start with, you could just copy the DSL packages ( com.mark59.dsl.samples.seleniumDSL.., com.mark59.dsl.samples.devtoolsDSL..) from mark59-dsl-samples into your own project, and go from there.
A brief description of the sample scripts:
DataHunterBasicRegressionScript is just a generalised simple regression test for DataHunter. Note: to actually use DataHunter in a test to store/retrieve data, you should use the DataHunter API (DataHunterLifecyclePvtScriptUsingRestApiClient).
DataHunterBasicSampleScript is a simple script (it doesn’t use the DSL project), so it’s a good way to see what Mark59 is providing, particularly if you are not familiar with Selenium.
DataHunterLifecyclePvtScript has been written to give an example of DSL use and the general functionality available. It also demonstrates Chrome DevTools Protocol usage.
The 'Iterator' version DataHunterLifecycleIteratorPvtScript is more specialised, showing how to write a script that may need to repeat a workflow several times.
DataHunterLifecyclePvtScriptUsingRestApiClient demonstrates the use of the DataHunter API, replicating the functionality of DataHunterLifecyclePvtScript.
The ability to run and test a script directly from Eclipse or your favourite IDE is an important part of Mark59 functionality, so different options have been provided to give some flexibility in testing the script:
A Log4jConfigurationHelper class assists in setting the log4j level, rather than having to use a log4j2.xml file. Further details are available in the JavaDocs.
Example of Mark59 selenium script main(), set so any visible browsers close at the end unless there is a failure.
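A minimal sketch of the pattern used in the samples (class, method and enum names are as per the mark59-datahunter-samples project - check the sample code and Javadoc for the definitive form):

public static void main(String[] args) {
    // set the log4j level without needing a log4j2.xml file
    Log4jConfigurationHelper.init(Level.INFO);
    DataHunterLifecyclePvtScript thisTest = new DataHunterLifecyclePvtScript();
    // KeepBrowserOpen.ONFAILURE: a visible browser stays open at the end only if the script fails
    thisTest.runSeleniumTest(KeepBrowserOpen.ONFAILURE);
}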
As mentioned in the Quick Start guides, when you attempt to run the script from the IDE it is possible you will get a failure due to a mismatch between the Selenium ChromeDriver and the version of Chrome on your machine. If so, you can replace chromedriver(.exe) in the project root with a compatible version. See Chrome Browser Setup and Download ChromeDriver.
This is where the main scripting logic goes. The method is passed a Selenium WebDriver (setup options are discussed next, in the Script Parameters section), and a JmeterFunctionsForSeleniumScripts object (jm), which contains logging and the startTransaction / endTransaction methods used to capture transaction timings. For example:
jm.startTransaction("DH_lifecycle_0100_deleteMultiplePolicies");
driver.findElement(By.id("submit")).submit();
checkSqlOk(driver.findElement(By.id("sqlResult")));
jm.endTransaction("DH_lifecycle_0100_deleteMultiplePolicies");
endTransaction() options exist to set the transaction explicitly as a PASS or a FAIL, and there are setTransaction() methods to explicitly set labels and timings. You can also set datapoints. Like a normal transaction, the same datapoint name can be used many times, to see how some value has changed over time during a test.
jm.userDataPoint(application + "_PolicyRowsAffected", rowsAffected);
There are several controls over the script log output, either via defaults based on the log4j level, or they can be explicitly set off, or to write, or to buffer (at the start and/or end of transactions). The logs available at script level are:
Tip: Occasionally the WebDriver can fail to write an image, in which case a stack trace is written to the 'jpg' screenshot file, which can be opened using a text editor.
When logs are 'buffered' (held in-memory), they can be written by calling jm.writeBufferedArtifacts(), otherwise they will simply be cleared at the end of script execution.
On a script Exception or AssertionError, the logs at the point of failure, plus a text file with any available stack trace, can be output, controlled using script parameters. The script Results Summary output (a list of transactions and timings) is also controlled by script parameters. See the next section for details.
Fine-grain log control is available as indicated in the DataHunterLifecyclePvtScript sample.
// jm.logScreenshotsAtStartOfTransactions(Mark59LogLevels.WRITE);
// jm.logScreenshotsAtEndOfTransactions(Mark59LogLevels.WRITE);
// jm.logPageSourceAtStartOfTransactions(Mark59LogLevels.WRITE);
// jm.logPageSourceAtEndOfTransactions(Mark59LogLevels.WRITE );
// jm.logPerformanceLogAtEndOfTransactions(Mark59LogLevels.WRITE);
// jm.logAllLogsAtEndOfTransactions(Mark59LogLevels.BUFFER);
Log-level control is detailed in full in the JmeterFunctionsForSeleniumScripts and Mark59LogLevels Javadoc.
Parameter values can be set in the script using the additionalTestParameters() method. Some parameters are pre-defined with default values that can be overridden in this method.
See the sample script DataHunterLifecyclePvtScript for usage - several parameters are set to their default values, so normally you would not need to enter them.
Further descriptions of the built-in parameter values are available from the 'See Also' links in the Javadoc for SeleniumAbstractJavaSamplerClient.
Pre-defined parameter | default value | Other options, notes |
SeleniumDriverFactory.DRIVER | CHROME | CHROME, or FIREFOX is the alternative. The Chromium browser also uses the CHROME driver |
SeleniumDriverFactory.HEADLESS_MODE | true | false |
SeleniumDriverFactory .BROWSER_DIMENSIONS | 1920,1080 | Browser width by height (a trick for awkward scrolling pages is to use a large height in Headless Mode) |
SeleniumDriverFactory .PAGE_LOAD_STRATEGY | NORMAL | NONE or EAGER are alternatives. NONE is useful or even necessary for proper timings in complex pages, but you have to control page flow yourself. We have not tested with EAGER. |
SeleniumDriverFactory.PROXY | (no proxy) | two general formats are available, specifying a PAC script or a direct httpProxy / sslProxy. See Javadoc at DriverFunctionsSeleniumBuilder .setProxy() |
SeleniumDriverFactory .ADDITIONAL_OPTIONS | (no options) | See https://peter.sh/experiments/chromium-command-line-switches/ for a plethora of available Chrome options |
SeleniumDriverFactory .WRITE_FFOX_BROWSER_LOGFILE | false | stops the rather verbose output written to log when using Firefox |
SeleniumDriverFactory .UNHANDLED_PROMPT_BEHAVIOUR | ignore | Action to take for an unexpected alert. Values should be one of the text values (case ignored) of org.openqa.selenium.UnexpectedAlertBehaviour |
SeleniumDriverFactory .BROWSER_EXECUTABLE | (default locations) | used to set a path to an alternate browser executable. Will override the mark59 property mark59.browser.executable (if set). If neither are set, the default installation of the expected browser is assumed. |
log settings: JmeterFunctionsForSeleniumScripts.LOG_SCREENSHOTS_AT_START_OF_TRANSACTIONS, JmeterFunctionsForSeleniumScripts.LOG_SCREENSHOTS_AT_END_OF_TRANSACTIONS, JmeterFunctionsForSeleniumScripts.LOG_PAGE_SOURCE_AT_START_OF_TRANSACTIONS, JmeterFunctionsForSeleniumScripts.LOG_PAGE_SOURCE_AT_END_OF_TRANSACTIONS, JmeterFunctionsForSeleniumScripts.LOG_PERF_LOG_AT_END_OF_TRANSACTIONS | default | By default, logging behaviour depends on the log4j setting. These settings can also be changed during the script flow (see sample script DataHunterLifecyclePvtScript). Also see the JavaDocs in com.mark59.selenium.corejmeterimpl.JmeterFunctionsForSeleniumScripts |
Exception logging: ON_EXCEPTION_WRITE_BUFFERED_LOGS, ON_EXCEPTION_WRITE_SCREENSHOT, ON_EXCEPTION_WRITE_PAGE_SOURCE, ON_EXCEPTION_WRITE_PERF_LOG, ON_EXCEPTION_WRITE_STACK_TRACE | true | Log output on script execution failure. By default all available mark59 logs are output for the point of failure, plus any previously buffered logs. Any of these logs can be suppressed by setting its associated parameter to false. |
Script Results Summary output: LOG_RESULTS_SUMMARY, PRINT_RESULTS_SUMMARY | false | Log (via Log4j) or Print (to console) the script Results Summary. We suggest setting LOG_RESULTS_SUMMARY as true when running a script in an IDE. |
IpUtilities .RESTRICT_TO_ONLY_RUN_ON_IPS_LIST | (empty) | Intended for Distributed testing, where a script thread is only to run on the listed IP address(es). Javadoc at IpUtilities. localIPisNotOnListOfIPaddresses() |
SeleniumDriverFactory .EMULATE_NETWORK_CONDITIONS | (no throttling) | (Chrome only) allows for network throttling. Speeds are in kilobits per second (kb/s), and latency in milliseconds (ms). Three comma-delimited values are needed in the order : download speed, upload speed, and latency. Eg: "12288,1024,10" represents a connection with 12Mbps download, 1Mbps upload, and 10ms latency |
SeleniumIteratorAbstractJavaSamplerClient only: | | Refer to Javadoc for SeleniumIteratorAbstractJavaSamplerClient |
ITERATE_FOR_PERIOD_IN_SECS | 0 | |
ITERATE_FOR_NUMBER_OF_TIMES | 1 | |
ITERATION_PACING_IN_SECS | 0 | |
STOP_THREAD_AFTER_TEST_START_IN_SECS | 0 | |
STOP_THREAD_ON_FAILURE | "false" | |
Parameter values are set in additionalTestParameters(), or are available by default (as listed above). They can be accessed during script execution via the JavaSamplerContext object:
protected Map<String, String> additionalTestParameters() {
    ...
    jmeterAdditionalParameters.put("FORCE_TXN_FAIL_PERCENT", "20");
    ...
}

protected void runSeleniumTest(JavaSamplerContext context,
        JmeterFunctionsForSeleniumScripts jm, WebDriver driver) {
    ...
    int forceTxnFailPercent = Integer.valueOf(
            context.getParameter("FORCE_TXN_FAIL_PERCENT").trim());
    ...
}
Once deployed to JMeter, these parameters are available and can be overwritten in the Java Request panel for the script. Refer to DataHunterSeleniumTestPlan.jmx in the mark59-datahunter-samples project, which includes some examples of how to use the parameters in a more dynamic fashion.
Script parameters in JMeter
A Mark59 selenium script must be given two locations in order to run: the location of the WebDriver executable it is going to use (which may be for Firefox or Chrome), and where to write log files. It looks for a mark59.properties file to get this information. It can also pick up the properties from system properties, which override anything in mark59.properties if both exist. Assuming mark59.properties is placed in the root directory of your Maven project, along with the WebDriver executable(s), a Windows version of the file would typically contain values similar to the table below. A sample mark59.properties is available in the root of the mark59-datahunter-samples project.
property | sample value / options |
mark59.selenium.driver.path.chrome | ./chromedriver * |
mark59.selenium.driver.path.firefox | ./geckodriver * |
mark59.log.directory | C:/JmeterTestErrorLogs/DataHunter/ |
mark59.log.directory.suffix | date | datetime |
mark59.logname.format | pre-ordered, case-insensitive, comma delimited list of options: ThreadName, ThreadGroup, Sampler, Label. Default is ThreadName |
mark59.server.profiles.excel.file.path | ./mark59serverprofiles.xlsx * |
mark59.browser.executable | C:/chromium_ungoogled/chrome.exe |
mark59.print.startup.console.messages | true | false (the default) |
mark59.screenshot.directory (deprecated - use mark59.log.directory) | C:/JmeterTestErrorLogs/DataHunter/ |
* “./” translates as the project root in Eclipse, and apache-jmeter/bin directory for JMeter deployment (see the ..DataHunter-Selenium-DeployAndExecute.. scripts in the mark59/bin directory)
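Putting the above together, a minimal Windows mark59.properties might look like this (values are illustrative, drawn from the table above):

mark59.selenium.driver.path.chrome=./chromedriver.exe
mark59.log.directory=C:/JmeterTestErrorLogs/DataHunter/
mark59.log.directory.suffix=date
mark59.print.startup.console.messages=true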
A script will actually execute without the mark59.log.directory property being set, but that’s not a good idea in a normal test: a warning message will be output saying the property is not set, and a failure will occur when the script attempts to write a log.
mark59.log.directory.suffix will add the start date-time or the date of the test at the end of the log directory name. If date is used, and the test is run again on the same day, only the logs of the last run for the day are preserved.
mark59.logname.format provides a way of controlling the naming of the log directory structure in line with the artefact in the test which produced the log. The default of ThreadName should be sufficient in most cases.
For deployment purposes to a server where you may have multiple instances of JMeter, you could consider placing the WebDriver in a common location: mark59.selenium.driver.path.chrome=C:/_chrome/chromedriver.exe
The mark59.browser.executable property is optional (the default location of the browser is assumed if it’s missing). If set, this property can be overridden in a script using the BROWSER_EXECUTABLE script argument.
The property mark59.server.profiles.excel.file.path defines the location of a Metrics excel spreadsheet. It is discussed further in Download Server Profiles Using Excel.
mark59.print.startup.console.messages just outputs to the console a summary of basic information for an executing script - the mark59 version and log directory settings.
An example of using system properties instead of having a mark59.properties file (or overriding values in an existing one) is given in DataHunterSeleniumTestPlan.jmx in the mark59-datahunter-samples project:
Programmatically setting system properties for Mark59 in JMeter
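As a sketch of the idea (paths illustrative - the test plan itself is the definitive example), a JSR223 element early in the test plan could set the properties before the scripts run:

// system properties override any values in mark59.properties
System.setProperty("mark59.selenium.driver.path.chrome", "C:/_chrome/chromedriver.exe");
System.setProperty("mark59.log.directory", "C:/JmeterTestErrorLogs/DataHunter/");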
By default, whenever a script fails (an Exception or AssertionError is thrown), logs are written, including any available stack trace information, and the script ends. When a user action on script failure is needed, you can create a userActionsOnScriptFailure() method. A typical use case would be where an application user logout is needed to allow the next script attempt to run. DataHunterLifecycleIteratorPvtScript contains a sample usage.
As the idea of this class is to allow iteration over a workflow, the methods to override differ slightly from SeleniumAbstractJavaSamplerClient:
Iteration is controlled by additional parameters, as shown in the "SeleniumIteratorAbstractJavaSamplerClient only" part of the Script Parameters table above, with details and example settings available in the class Javadoc.
For an example of usage see sample script DataHunterLifecycleIteratorPvtScript.
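As a rough skeleton (the override names here are as we understand them from the class Javadoc - verify against the sample script):

public class MyIteratingScript extends SeleniumIteratorAbstractJavaSamplerClient {

    @Override
    protected void initiateSeleniumTest(JavaSamplerContext context,
            JmeterFunctionsForSeleniumScripts jm, WebDriver driver) {
        // one-off setup, eg application login
    }

    @Override
    protected void iterateSeleniumTest(JavaSamplerContext context,
            JmeterFunctionsForSeleniumScripts jm, WebDriver driver) {
        // the repeated workflow - timed transactions go here
    }

    @Override
    protected void finalizeSeleniumTest(JavaSamplerContext context,
            JmeterFunctionsForSeleniumScripts jm, WebDriver driver) {
        // one-off teardown, eg application logout
    }
}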
Devtools Chrome Protocol (CDP) gives access to the many functions available from Chrome DevTools. An important use case for performance and volume testing is the ability to capture information for individual requests and their responses using the DevTools Network listeners. With most modern applications there are often dozens of http requests per load of an application page, so you will usually need to filter which requests and responses are of interest to you, and capture just the timings (or whatever you need) for those.
An example of this type of usage is provided in the sample script DataHunterLifecyclePvtScript, which calls methods in class com.mark59.devtoolsDSL.DevToolsDSL (in the mark59-dsl-samples project).
To invoke DevTools in the DataHunterLifecyclePvtScript, change the “START_CDP_LISTENERS” parameter to true.
Run the script. You should see a lot of transactions appear marked as (CDP).
DevTools sample script code overview.
When DataHunterLifecyclePvtScript starts it calls method startCdpListeners.
The startCdpListeners method calls three methods in the Mark59 class DevToolsDSL. Broadly, these methods invoke selenium’s DevTools APIs to create a DevTools session against the WebDriver, and to add Network request and response listeners.
The DevToolsDSL class uses an internal map to match and calculate the time difference between a request and its response. The important thing to note for Mark59 code is the method used to create the timed transactions (in the addListenerResponseReceived method), which in its general form is:
jm.setCdpTransaction( transactionLabel, transactionTime )
These “...CdpTransaction” transaction methods (there’s a set of them) distinguish the transactions as “CDP” transactions. These can be filtered in the Trend Analysis graphic, and optionally printed in the JMeter Transactions report.
Lambda Expressions
The interactions between the sample script and the DevToolsDSL class make use of lambda expressions. This type of code is widely used in Java and will be second nature to experienced Java programmers, but may look a bit confronting to more casual Java users. There’s an excellent summary of lambda expressions in the Oracle tutorials.
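For instance, registering a Selenium 4 network listener uses a lambda as the event handler. This is a generic sketch, not the DevToolsDSL code itself (note the CDP version package, eg v106, changes with each Chromium release):

import java.util.Optional;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.HasDevTools;
import org.openqa.selenium.devtools.v106.network.Network;

DevTools devTools = ((HasDevTools) driver).getDevTools();
devTools.createSession();
devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));
// the second argument is a lambda, invoked for each matching CDP event
devTools.addListener(Network.responseReceived(),
        response -> System.out.println("url: " + response.getResponse().getUrl()));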
If you run The Very Quick Start Demo with CDP transactions ‘switched on’, you will see how CDP transactions appear in the JMeter transactions report and Trend Analysis. In the bat/sh/command file TestRun{WIN|LINUX|MAC}-DataHunter-Selenium-DeployAndExecute
change
..StartCdpListeners=false… to …StartCdpListeners=true...
Then run the three bat/shell files for your operating system, as per the demo. You will see the “CDP” transactions in the JMeter report, and in Trend Analysis the CDP dropdown selection will appear; select SHOW_CDP to see the CDP transactions. They can also optionally be excluded from the JMeter report.
The DataHunter CDP transactions are not that exciting due to the nature of the application, which has minimal interaction between server and browser, and they should run in a fraction of a second. This just demonstrates the concept.
The ‘transactions’ JMeter report showing CDP transactions
Trend Analysis showing CDP transactions
For a Java Request Sampler, which is what a Mark59 selenium script is to JMeter, a jar containing the classes of the script project, and a library of its dependencies, need to be placed in the lib/ext directory of JMeter. In the Mark59 sample project mark59-datahunter-samples, a Maven build with goals clean package builds the jar file and the library of dependencies.
Specifically, the maven build uses the maven-jar-plugin to create manifest entries in the 'main' jar that lists all the jars in the dependency library, which is then constructed using the maven-dependency-plugin.
<artifactId>maven-jar-plugin</artifactId>....
<artifactId>maven-dependency-plugin</artifactId>....
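A cut-down sketch of that plugins segment (the dependency-directory name is illustrative - the full configuration is in the mark59-datahunter-samples pom.xml):

<plugin>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <!-- list the dependency-library jars on the main jar's manifest classpath -->
        <addClasspath>true</addClasspath>
        <classpathPrefix>mark59-datahunter-samples-dependencies/</classpathPrefix>
      </manifest>
    </archive>
  </configuration>
</plugin>
<plugin>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>copy-dependencies</goal></goals>
      <configuration>
        <!-- construct the library of dependencies next to the main jar -->
        <outputDirectory>${project.build.directory}/mark59-datahunter-samples-dependencies</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>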
Tip: In Eclipse when doing a maven build that uses the maven-dependency-plugin and you want to pick up local dependencies (usually other Eclipse projects) that you have installed into your local maven repository, you may get an error “Artifact has not been packaged yet…” . Ensure you have the “Resolve Workspace artifacts” option switched off in your maven configuration for the build.
In releases of Mark59 prior to the introduction of DevTools (CDP) with Selenium 4, a single project jar containing all necessary dependencies could usually be built just using the maven-assembly-plugin. With Selenium 4, however, the dependencies can no longer all be “jar”ed up together, at least using the usual Maven plugins.
The deployment of required artefacts to JMeter has been touched on in previous parts of the guide, so this is just a summary of the various options.
As background, review the mark59/bin{/mac_commands} scripts
TestRunLINUX-DataHunter-Selenium-DeployAndExecute.sh,
TestRunWIN-DataHunter-Selenium-DeployAndExecute.bat, or
TestRunMAC-DataHunter-Selenium-DeployAndExecute.command
All artefacts required to run the DataHunter Selenium scripts are copied into JMeter in the initial commands.
Depending on the build and how you set up the properties required by a Mark59 selenium script, there may be different combinations and locations of the files required.
For instance, the maximum number of artefacts to be copied into a JMeter instance occurs when you use a Maven build producing the script jar with its separate library of dependencies, place both WebDriver executables within the JMeter file structure, use a Server Metrics Excel spreadsheet to obtain monitored server details, and choose to use a mark59.properties file. So the list is: the script jar, its dependencies directory, the chromedriver and geckodriver executables, the server profiles spreadsheet (mark59serverprofiles.xlsx), and mark59.properties.
The minimum number of artefacts occurs when you use a Maven build producing the script jar and its library of dependencies, and use system properties to define the location of the WebDriver executable(s) somewhere outside the JMeter instance: just the script jar and its dependencies directory.
For a single injector JMeter test no changes are required to the existing JMeter properties or any other files, however for distributed testing, you may need to set up SSL for RMI (the communication method the slave servers use to talk to the master server). Refer to the JMeter User's Manual for details (this has nothing to do with Mark59, it's JMeter requirements for distributed testing). There are more details about distributed testing in the Continuous Integration Design chapter.
To get an idea of why the Mark59 Reporting and Trend Analysis components work the way they do, you need to know the basics of the results file structure of a JMeter test. Nothing in these components requires a JMeter test to use or have any awareness of Mark59 in order to produce a JMeter report, or even to load results into the Trends web application (which can also load results from LoadRunner or Gatling tests). However, the "Reporting” and "Trends" components are designed to produce improved output when using Mark59 enabled scripts and Mark59 metrics capture.
A big difference between the output from a Mark59 enabled script and, for example, a JMeter test just using the Http Sampler, is that sub-results have been implemented. Typically, the output file from a JMeter test has a 'flat' structure:
transactiona, time1, elapsedtime1
transactionb, time2, elapsedtime2
transactiona, time3, elapsedtime3
With a Mark59 enabled script, many transactions can be recorded for each run of the script. A summary transaction for a script run is also output, so the results have a structure like:
DataHunterScript_A, time1, totalelapsedtimeofrunofscriptA
DataHunterScript_A_transactiona, time2, elapsedtimeforScriptAtransactiona
DataHunterScript_A_transactionb, time3, elapsedtimeforScriptAtransactionb
DataHunterScript_B, time4, totalelapsedtimeofrunofscriptB
DataHunterScript_B_transactiona, time5, elapsedtimeforScriptBtransactiona
DataHunterScript_B_transactionb, time6, elapsedtimeforScriptBtransactionb
DataHunterScript_B_transactionc, time7, elapsedtimeforScriptBtransactionb
DataHunterScript_A, time8, totalelapsedtimeofrunofscriptA
DataHunterScript_A_transactiona, time9, elapsedtimeforScriptAtransactiona
DataHunterScript_A_transactionb, time10,elapsedtimeforScriptAtransactionb
The JMeter output file can be formatted as XML or CSV. The actual way this structure is implemented differs in each, but they both conceptually contain this two-level structure.
In order to produce a JMeter report that understands this structure, the mark59-results-splitter utility can be used. If you don't use it, you can still get a report; it's just that it will blurt out all these transactions as if it were dealing with a flat structure, which is usually not what you want.
By a “Mark59 enabled script”, we mean a script (class) using the mark59-selenium-implementation dependency and usually extending the Mark59 class SeleniumAbstractJavaSamplerClient. However, you can also have a “Mark59 enabled script” with no selenium at all - in which case you would be better off directly extending the JMeter class AbstractJavaSamplerClient. You just need to include and make use of the Mark59 class JmeterFunctionsImpl. This allows you to time any sort of event: an MQ get/put, a database query, an API call, and so on. What creates the event timings are invocations of the jm.startTransaction() and jm.endTransaction() methods, which create JMeter sub-results in the output.
See DataHunterLifecyclePvtScriptUsingRestApiClient in the mark59-datahunter-samples project for an example of a Mark59 non-Selenium script.
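A rough sketch of the shape of such a script (the JmeterFunctionsImpl constructor and method signatures shown are assumptions - check its Javadoc and the sample script for the definitive API):

public class TimeAnApiCall extends AbstractJavaSamplerClient {

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        // Mark59 transaction timing and logging, no selenium involved
        JmeterFunctionsImpl jm = new JmeterFunctionsImpl(Thread.currentThread().getName());
        try {
            jm.startTransaction("myapp_api_call");
            // ... perform the event being timed (API call, MQ get/put, DB query) ...
            jm.endTransaction("myapp_api_call");
        } catch (Exception | AssertionError e) {
            jm.failTest();
        } finally {
            jm.tearDown();
        }
        return jm.getMainResult();
    }
}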
Time any action that may be relevant to the metrics needed to capture the performance of the application being tested. Generally this would be navigation from one page to the next, but it could also include such things as a request triggered on field entry (validation of a car registration plate could be an example). You also need to be aware of considerations like a lookup being mocked, or a called 3rd party environment not being of production grade. For individual http request/response timings, CDP transactions can be used.
By default Selenium attempts to determine when a web page is fully loaded (eg, after clicking a ‘Next’ button on a page). This is fine for most functional automation scripts, but when you want to capture the time taken to transition between pages, especially for more complex pages, Selenium’s default PAGE_LOAD_STRATEGY of NORMAL may not be accurate. A PAGE_LOAD_STRATEGY of NONE switches this off, meaning you can (and need to) control the timing of page loads yourself.
You should include a verification that the server returns the expected result/response. Two types of verification are generally used:
1. Wait for the presence of a text in the response, and/or
2. Wait for an element to be clickable on the resultant page (often preferable, as you can use the first element you want to interact with on the next page, so you know it's ready).
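Putting this together, a timed page transition might look something like the excerpt below (a sketch only - the locators are hypothetical, and jm/driver are the usual Mark59 script variables):

jm.startTransaction("DemoApp_SubmitOrder");
driver.findElement(By.id("submit")).click();                           // hypothetical locator
new WebDriverWait(driver, Duration.ofSeconds(30)).until(
        ExpectedConditions.elementToBeClickable(By.id("confirmBtn"))); // first element needed on the next page
jm.endTransaction("DemoApp_SubmitOrder");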
Sleep times simulate real user behaviour by introducing pauses between user actions / requests. Do not include sleep times between the jm.startTransaction and jm.endTransaction calls for a transaction, as the sleep will be added to the transaction response time.
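For example, keep the think time outside the timed span (an excerpt; SafeSleep is assumed here to be the Mark59 utility class com.mark59.core.utils.SafeSleep - a plain Thread.sleep would also do):

jm.endTransaction("DemoApp_SubmitOrder");   // stop the clock first
SafeSleep.sleep(2000);                      // 2 second 'think time', not added to any transaction
jm.startTransaction("DemoApp_NextAction");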
Be careful with LOG statements - remember you are running a performance test! Short LOG.debug statements throughout the code to help you debug should be fine. They can be switched on when running the script using the main() in an IDE with: Log4jConfigurationHelper.init(Level.DEBUG);
Assign a page element to its relevant type (CheckBox/DropdownList/OptionButton/InputTextElement etc). Often this is not strictly necessary, but it does make the script more readable.
Proxy settings are something you may find you need a lot when working at a corporate site. Incidentally, you may want to check if you have the option of using a non-production proxy at your site - especially relevant if you are running or combining your selenium scripts with high-load API tests.
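A sketch of how a proxy might be set in a script's additionalTestParameters() (the "PROXY" key and its value format are assumptions based on the SeleniumDriverFactory javadocs - verify the exact name and format for your Mark59 release):

@Override
protected Map<String, String> additionalTestParameters() {
    Map<String, String> jmeterAdditionalParameters = super.additionalTestParameters();
    // assumed key/format: a comma-delimited definition used to build a Selenium Proxy object
    jmeterAdditionalParameters.put("PROXY", "httpProxy=proxy.mycorp.com:8080,sslProxy=proxy.mycorp.com:8080");
    return jmeterAdditionalParameters;
}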
--auto-open-devtools-for-tabs is a nice little trick when you are debugging or stepping through a script and want to look at the network traffic from the start of the script.
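For example, the flag can be passed as an additional Chrome argument (the "ADDITIONAL OPTIONS" key name is an assumption based on SeleniumDriverFactory - verify for your release):

jmeterAdditionalParameters.put("ADDITIONAL OPTIONS", "--auto-open-devtools-for-tabs");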
Recent ChromeDriver releases spend significant additional time in the driver itself when loading the first web page in a script. You can get around this by loading a dummy page at the start of the script. In this example the Chrome browser’s version information page is used.
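An excerpt showing the idea (the application URL is hypothetical):

driver.get("chrome://version/");                   // dummy page: absorbs the driver's first-page overhead
jm.startTransaction("DemoApp_0100_loadHomePage");  // the first real timing is now unaffected
driver.get("https://myapp.mycorp.com/home");       // hypothetical application URL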
Recent trials on our heaviest selenium tests gave a 10 to 20% CPU reduction on our load generators when running Chromium, compared to using Chrome. The ungoogled version seemed to perform best. Downloads at https://chromium.woolyss.com/
If you download the ‘Archive’ zip, you simply need to unzip it in your desired location.
The selenium chromedriver is aware of the default location of the Chrome browser, which is why you don’t normally have to specify its location. When you swap to Chromium (or use Chrome Beta, or a non-default install location), you need to specify the location of the browser executable. In a script:
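A sketch (the BROWSER_EXECUTABLE key is an assumption based on SeleniumDriverFactory - verify the constant name for your Mark59 release):

jmeterAdditionalParameters.put(SeleniumDriverFactory.BROWSER_EXECUTABLE, "C:/chromium/chrome-win/chrome.exe");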
Or in mark59.properties:
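(property name assumed from the mark59.properties reference - verify for your release)

mark59.browser.executable=C:/chromium/chrome-win/chrome.exe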
This is particularly handy when you are working at a corporate site where you may not be able to control the Chrome install. Combining this with the tip above (using Chromium), you could place your Chromedriver and Chromium browser in set places on all machines, then reference them using mark59.properties. Eg:
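(a sketch - property names assumed from the mark59.properties reference; adjust the paths to your standard locations)

mark59.selenium.driver.path=C:/standard-tools/chromedriver/chromedriver.exe
mark59.browser.executable=C:/standard-tools/chromium/chrome-win/chrome.exe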
We plan to release Mark59 on a fairly regular cycle, but certainly not for every new release of Selenium and the CDP classes (which are aligned to the current monthly release cycle of Chromium). If you wish to override the release of selenium used by Mark59, we have found you can usually get away with overriding the script project classpath with the version of selenium you want, by adding its dependency above the mark59 dependency in the pom of the script project. For example:
<!-- the version of Selenium (CDP) you want to use -->
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>4.8.1</version>
</dependency>
<!-- the current version of Mark59 you are using
(mark59 5.2 used selenium 4.5.0) -->
<dependency>
<groupId>com.mark59</groupId>
<artifactId>mark59-selenium-implementation</artifactId>
<version>5.2</version>
</dependency>
The reason this trick often works is that the Mark59 selenium implementation classes are mainly focused on just the creation of the Selenium Driver and logging, and so only use a subset of all the Selenium classes, reducing the chances of a compatibility clash. Also, the Selenium team is very good at maintaining backwards compatibility.
Hints and Tips for running JMeter Test Plans using Mark59.
There's nothing particularly special about running a JMeter test plan that includes one or more Mark59 Selenium scripts - they are just an implementation of a JMeter Java Request Sampler as far as JMeter is concerned. Also, Mark59 does not require changes to existing default property settings such as those in jmeter.properties. This chapter's primary purpose is to give suggestions and discuss some techniques we have found useful.
For a sample JMeter test plan, review the DataHunterSeleniumTestPlan.jmx at
mark59/mark59-datahunter-samples/test-plans.
One difficulty we found with JMeter is that the provided Timers did not produce consistent pacing for Mark59 selenium scripts, particularly at the end of the test, where the threads tended to fire off too close together. This may be because the JMeter timers have been designed with millisecond responses in mind, rather than the several minutes a Selenium script often runs for (that’s just a guess). Our solution has been to create a BeanShell Timer, the Iteration Pacing Timer. We found the BeanShell Timer gave more consistent results than the JSR223 timer (despite what the JMeter documentation says). See the DataHunterSeleniumTestPlan for an example. The timer is passed a parameter that is the number of seconds between the starts of script runs per thread, and an optional second parameter that allows for randomisation of start times. If the script takes too long and overruns the next start time, the next run just starts immediately.
Parameterised pacing set in the demo DataHunterSeleniumTestPlan
Segment of the Mark59 Iteration Pacing Timer from the demo DataHunterSeleniumTestPlan
Tip: If you look at the code for the 'Iteration Pacing Timer' you will see that it contains several commented out print statements like:
//System.out.println(" pausing for calculated delay of " + delay + " ms");
If you run the Thread Group containing a single user, preferably over several iterations, with the println statement un-commented, you get a good idea of how the timing calculation works, and also how long your script typically runs for.
These scripts should have all pacing and control achieved using the provided 'iteration settings' parameters (details and example settings are available in the class Javadoc).
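For instance, iteration control might be set in additionalTestParameters() along these lines (the parameter names are assumptions based on the IteratorConstants javadocs - verify names and defaults for your Mark59 release):

@Override
protected Map<String, String> additionalTestParameters() {
    Map<String, String> jmeterAdditionalParameters = super.additionalTestParameters();
    jmeterAdditionalParameters.put("ITERATE_OVER_PERIOD_IN_SECS", "600"); // keep iterating for 10 minutes
    jmeterAdditionalParameters.put("ITERATION_PACING_IN_SECS", "30");     // start an iteration every 30 seconds
    jmeterAdditionalParameters.put("STOP_THREAD_ON_FAILURE", "false");    // keep going if an iteration fails
    return jmeterAdditionalParameters;
}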
SeleniumIteratorAbstractJavaSamplerClient should only be used when necessary. Keeping a single browser / thread open and iterating for an entire test means no intermediate results can be output, and they need to be held in memory. It can also make debugging and script logic more challenging, particularly when re-start conditions after failure may have to be considered.
The classic use case for this sort of iterative processing goes something like "the users stay logged on for four hours at a time, and so each user in the test can only log on and off at the start and end". Well, maybe, but our own experience is that for the vast majority of applications it simply makes little or no difference. It may be justified in applications where (in-memory) session data keeps growing significantly for each business iteration a user performs, which really should be called out by the application team and explicitly tested. In practice this should be rare - it's hard to think of a typical business scenario where this would be part of a sound design.
If the issue is the behaviour of a corporate Identity Management system, then perhaps that should be explicitly tested separately rather than as part of the test for the target application. A compromise may be to run a reasonable number of iterations, say 10 to 20 or so per cycle, rather than potentially 100's if you just let the iterateSeleniumTest() method loop for an entire test.
It is possible to run a Mark59 enabled selenium script in a JSR223 Sampler - so the code is directly embedded in the JMeter test plan. This isn’t intended to replace the normal approach of developing a Mark59 script in an IDE, then building and deploying it into JMeter’s lib/ext directory, but it may have a place for very simple scripts, or when a script needs to be maintained by a team not familiar with Java development.
mark59/mark59-datahunter-samples/test-plans/DataHunterSeleniumTestPlan.jmx contains a disabled thread group, DemoMark59ScriptAsJSR223, with two examples. The more relevant example for most cases would be DataHunterBasicSampleScriptJSR223Format (the other example demonstrates how more complex structures, such as a basic DSL, could be implemented).
This script has a very particular structure and is fully Java compliant, even though Groovy is the selected JSR223 language. The advantage of keeping to pure Java is that it makes it easy to (mostly) copy-paste the code between the JSR223 Sampler and your Java IDE, where you can run and test it as a normal Mark59 selenium script. The marked lines of code just need to be uncommented as indicated (the class name in this example is DataHunterBasicSampleScriptJSR223Format). The Mark59 Java scripts used to create these JSR223 Sampler scripts have been included in the mark59-datahunter-samples project in package com.mark59.datahunter.performanceTest.scripts.jsr223format.
As a minimum you need to copy the mark59-selenium-implementation jar and any other dependencies you need into the JMeter lib/ext directory. If you have deployed any Mark59 Selenium script for your JMeter test, you will already have the required classes (except for any extra dependencies that are only needed by your JSR223 code). Also, as for a standard Mark59 script, a mark59.properties file and WebDriver executable need to be in place as described previously. See mark59-dsl-samples/pom.xml for an example of how to generate a jar file for your JSR223 code in one of your own projects.
Tip: The easiest way to set up lib/ext, at least to start with, is to just copy mark59-datahunter-samples.jar and its dependencies library mark59-datahunter-samples-dependencies from mark59/mark59-datahunter-samples/target. That way you also get the sample DSL project classes, as they are a dependency of the mark59-datahunter-samples project.
When creating the JMeter test plan we suggest always including a Summary Report Listener. For the JMeter summary report, you should generally just use the default configuration (which will produce a .csv formatted file). The one addition that may be of benefit is to tick the "Save Hostname" option - this helps when reporting on distributed tests. The filename must end in .csv, .xml or .jtl.
Other configurations can still work. Xml file output ("Save As XML" ticked) will work, although there are a few limitations with xml format, such as the ability to ‘ignore’ selected errors in Trends Load. Generally though, if at least the fields "timeStamp", "elapsed", "label", "dataType" and "success" are present, sub-results are saved ("Save Sub Results" is ticked), and for csv a valid header is present, the file should be capable of being processed by Mark59 - but unless there is good reason, just stick to the default settings.
Upon completion of test execution, the JMeter output file is used by the Mark59 Reporting and Trends components.
The recommended Summary Report Listener settings
The test output file name can also be specified from the command line (“-l” option when running in non-GUI mode), and the Summary Report Listener disabled (which is what we do in our Continuous Integration setup).
There are three stand-alone web applications shipped with Mark59:
mark59-datahunter.war (H2/pg/mysql database mark59datahunterdb) - a simple app to retain key-value pairs of transactional data
mark59-metrics.war (H2/pg/mysql database mark59metricsdb) - enables the collection of server (and other) metric data
mark59-trends.war (H2/pg/mysql database mark59trendsdb) - a visualisation tool for test result trends over time
They are all Java Spring Boot web applications built using Java 17 (Spring Boot requires Java 17+). The apps can be started directly from commands - samples are provided in the mark59/bin/StartAllMark59WebApplications.. files (separate commands in /mac_commands for Mac) - or they can be started as services, for example through the use of nssm in Windows. They have all been written so that they can be deployed to a Tomcat application server, with no special requirements for Tomcat deployment. The main difference when using Tomcat is that the URL domain and port are always those of the Tomcat instance.
Three database options are provided for each application: H2 (file-based), MySQL (minimum version 8.x) and Postgres (minimum version 4). Technically there is a fourth option of H2 in-memory; it’s really only included for internal testing purposes, although it may be useful for DataHunter when you don’t need to retain data between tests. See mark59/bin/StartAllMark59WebApplications_H2_InMemory...
The Quick Start demos use a H2 file-based database, which is why no special database setup is required.
To use a MySQL or Postgres database with the applications, you will need to install the database server and run the database creation scripts. Appendix C gives some guidance on how to install a MySQL database. Take particular note of the need to set the GLOBAL variable group_concat_max_len. Postgres installation is well documented on the web for Windows (for Linux it depends on your particular distro/version). The databases are independent of each other, so each of the three applications could run from a different database, although, with the possible exception of using H2 in-memory for DataHunter, it's hard to see much point in doing that. Database creation files are available at mark59/databaseScripts/PostgresSQL and mark59/databaseScripts/MySQL.
Although the Mark59 web applications are independent of each other, there are some runtime dependencies with other Mark59 artefacts:
The DataHunter application is intended to assist with managing application-related test data during and between performance tests. If you are familiar with LoadRunner it is akin to the Web version of VTS.
DataHunter is also used as the sample application in Mark59, which is why the scripts in the mark59-datahunter-samples project require DataHunter to be running. The supplied sample data used in the install of the Trends application is based on historical (slightly hacked) DataHunter runs. Trends itself does not need DataHunter to be installed.
The functionality is pretty straightforward. The test script DataHunterLifecyclePvtScript in the mark59-datahunter-samples project covers the most common use cases. You can also look at DataHunterBasicRegressionScript.
To call DataHunter directly from a http request in a JMeter test, see test plan DataHunterLifecyclePvtScriptUsingApiViaHttpRequestsTestPlan.jmx in the mark59-datahunter-samples project. More detail in the DataHunter API chapter.
DataHunter does cover one use case more complex than basic CRUD style functionality and simple file loading: when events occur asynchronously, and the timing you want to capture is between the first and last event for an identifiable data key.
Full life-cycle usage is covered in the com.mark59.datahunter.api.rest.samples.DataHunterRestApiClientSampleUsage class, available in the mark59-datahunter-api project (see DataHunter API). Basically, you create timed points for an Identifier within an Application. This is done by creating Lifecycle entries within the Application/Identifier at the start and end of the event you are timing, to which you would usually (but not necessarily) assign a Useability of UNPAIRED. Lifecycle entries must be unique within a given Application/Identifier. You can actually create multiple timing events for an Application/Identifier, but only the earliest and latest events on the database will be used to resolve the timing of the Application/Identifier.
Then, usually periodically during a test, you run the “Asynchronous Message Analyzer” process. This will match all unpaired Application/Identifier entries with at least two events (Lifecycle entries) in the database. The matching process calculates the timings for the matched Application/Identifier rows, and can then update their Useability (eg from UNPAIRED to USED).
Here’s an example of handling asynchronous events via the DataHunter UI (in practice you should use the DataHunter API):
Let’s say you add a set of 6 items in DataHunter, with the first entry looking like this:
You continue so that you get a list that looks like this (the first entry is at the bottom):
Then, to collect async results from this table of events, you run the Asynchronous Message Analyzer against the SAMPLE_DEMO_ASYNC application, with a Useability of UNPAIRED. As long as at least two UNPAIRED entry timings (first and last) exist for a given Identifier, it will appear in the list. The difference column provides the timing result for each Identifier listed.
As already mentioned, in a test, to avoid the same events being picked up again, you would change the Useability of all the rows in the list by setting the ‘Update to Useability’ field (e.g. to USED).
Also, you still need to capture these timings against transaction names. In particular, note that the results table layout does not determine what you should use as your transaction names. In this example, are EventType1 (with two txns) and EventType2 (with a single txn) the transaction names you want, or do you want every Identifier to be a named transaction in the result (i.e. three transaction names in the result)? You will need to consider this when designing your test.
The DataHunter database can be accessed in multiple ways - for example, using Selenium, or via http requests based on the DataHunter UI in a JMeter test. These are not the preferred methods, as invoking the UI is relatively inefficient. Of course, you could also create your own Java classes to access the DataHunter database directly, but that defeats the control that using predefined application code gives you.
The preferred method is to invoke the DataHunter Rest service, either directly from http requests, or via the Java DataHunter API Client in the mark59-datahunter-api Maven artefact, class com.mark59.datahunter.api.rest.DataHunterRestApiClient. You should find the API calls pretty straightforward to learn and use, as where possible there is a one-to-one relationship between the DataHunter UI and the available calls to the DataHunter API.
Basic use of the DataHunter API Client, enough to cover the majority of use cases, can be found in class DataHunterLifecyclePvtScriptUsingRestApiClient in the mark59-datahunter-samples project. Full life-cycle usage is covered in the com.mark59.datahunter.api.rest.samples.DataHunterRestApiClientSampleUsage class (in the DataHunter API artefact itself, or on the Mark59 github repository, in the mark59-datahunter-api project).
To see how to invoke the API directly via http, including how to extract data from the JSON responses, see test plan DataHunterLifecyclePvtScriptUsingApiViaHttpRequestsTestPlan.jmx in the mark59-datahunter-samples project.
The DataHunter API Client artefact mark59-datahunter-api.jar is available from Maven Central.
<dependency>
<groupId>com.mark59</groupId>
<artifactId>mark59-datahunter-api</artifactId>
<version>5.3</version>
</dependency>
You would normally include this entry in your pom.xml when creating a Java project that includes scripts that will use the DataHunter API.
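As a quick sketch of the client in use (method names are as per the table below; the Policies bean package, its setters, and the response accessors are assumptions based on the mark59-datahunter-api javadocs - verify for your release):

import com.mark59.datahunter.api.data.beans.Policies;
import com.mark59.datahunter.api.rest.DataHunterRestApiClient;
import com.mark59.datahunter.api.rest.DataHunterRestApiResponsePojo;

public class DataHunterApiQuickLook {
    public static void main(String[] args) {
        DataHunterRestApiClient client =
                new DataHunterRestApiClient("http://localhost:8081/mark59-datahunter");

        // add an item (bean setter names assumed - check the javadocs for your release)
        Policies item = new Policies();
        item.setApplication("testrest");
        item.setIdentifier("id1");
        item.setLifecycle("somelc");
        item.setUseability("UNUSED");
        item.setOtherdata("other");
        DataHunterRestApiResponsePojo response = client.addPolicy(item);
        System.out.println("addPolicy success : " + response.getSuccess());

        // fetch the oldest UNUSED item for the application (sets its Useability to USED)
        response = client.useNextPolicy("testrest", null, "UNUSED", "SELECT_OLDEST_ENTRY");
        System.out.println("useNextPolicy returned : " + response.getPolicies());
    }
}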
The table below is a summary of the available DataHunter API Client methods, the corresponding Rest API Http calls, and related UI function.
JavaDocs for the Rest service are available in the mark59-datahunter project, class com.mark59.datahunter.controller.DataHunterRestController.
JavaDocs for the API Client are available in the mark59-datahunter-api project, class com.mark59.datahunter.api.rest.DataHunterRestApiClient.
DataHunter UI ( ${dataHunterUrl}/… ) | Java Client API (com.mark59.datahunter.api.rest.DataHunterRestApiClient) and Http Rest call ( ${dataHunterUrl}/api/… ) |
/add_policy | public DataHunterRestApiResponsePojo addPolicy(Policies policies) …/api/addPolicy?application=${application}&identifier=${identifier}&lifecycle=${lifecycle}&useability=${useability}&otherdata=${otherdata}&epochtime=${epochtime} Optional parameters: otherdata, epochtime Examples: http://localhost:8081/mark59-datahunter/api/addPolicy?application=testrest&identifier=id1&lifecycle=somelc&useability=UNUSED&otherdata=other&epochtime=12345 http://localhost:8081/mark59-datahunter/api/addPolicy?application=testrest&identifier=id2&lifecycle=somelc&useability=UNUSED&otherdata= |
/count_policies | public DataHunterRestApiResponsePojo countPolicies(String application, String lifecycle, String useability) …/api/countPolicies?application=${application}&lifecycle=${lifecycle}&useability=${useability} Optional parameters: lifecycle,useability Example: count all entries for an application: http://localhost:8081/mark59-datahunter/api/countPolicies?application=testrest |
/count_policies_breakdown | public DataHunterRestApiResponsePojo countPoliciesBreakdown(String applicationStartsWithOrEquals, String application, String lifecycle, String useability) …/api/countPoliciesBreakdown?applicationStartsWithOrEquals=${applicationStartsWithOrEquals}&lifecycle=${lifecycle}&useability=${useability} Optional parameters: lifecycle,useability Allowed values for applicationStartsWithOrEquals: EQUALS | STARTS_WITH Example: breakdown count by key of all entries on the database: |
/print_policy | printPolicy(String application, String identifier, String lifecycle) and printPolicy(String application, String identifier) (an omitted lifecycle means a blank lifecycle, NOT any lifecycle) …/api/printPolicy?application=${application}&identifier=${identifier}&lifecycle=${lifecycle} Optional parameters: lifecycle (omitted means a blank lifecycle) Example: http://localhost:8081/mark59-datahunter/api/printPolicy?application=testrest&identifier=id2&lifecycle=somelc |
/print_selected_policies | public DataHunterRestApiResponsePojo printSelectedPolicies(String application, String lifecycle, String useability) …/api/printSelectedPolicies?application=${application}&lifecycle=${lifecycle}&useability=${useability} Optional parameters: lifecycle,useability Example: print all UNUSED entries for an application: http://localhost:8081/mark59-datahunter/api/printSelectedPolicies?application=testrest&useability=UNUSED |
/delete_policy | public DataHunterRestApiResponsePojo deletePolicy(String application, String identifier, String lifecycle) …/api/deletePolicy?application=${application}&identifier=${identifier}&lifecycle=${lifecycle} Optional parameters: none Example: deletes an item with a blank lifecycle http://localhost:8081/mark59-datahunter/api/deletePolicy?application=testrest&identifier=id4&lifecycle= |
/delete_multiple_policies | public DataHunterRestApiResponsePojo deleteMultiplePolicies(String application, String lifecycle, String useability) …/api/deleteMultiplePolicies?application=${application}&lifecycle=${lifecycle}&useability=${useability} Optional parameters: lifecycle, useability Example: delete all entries for an application: http://localhost:8081/mark59-datahunter/api/deleteMultiplePolicies?application=testrest Example: delete all USED entries for an application: http://localhost:8081/mark59-datahunter/api/deleteMultiplePolicies?application=testrest&useability=USED |
/next_policy?application=testrest&pUseOrLookup=use | public DataHunterRestApiResponsePojo useNextPolicy(String application, String lifecycle,String useability, String selectOrder) …/api/useNextPolicy?application=${application}&lifecycle=${lifecycle}&useability=${useability}&selectOrder=${selectOrder} Optional parameters: lifecycle Example: return oldest UNUSED entry for an application for any lifecycle (its Useability will be set to USED by the call) http://localhost:8081/mark59-datahunter/api/useNextPolicy?application=testrest&useability=UNUSED&selectOrder=SELECT_OLDEST_ENTRY |
/next_policy?application=testrest&pUseOrLookup=lookup | public DataHunterRestApiResponsePojo lookupNextPolicy(String application, String lifecycle,String useability, String selectOrder) …/api/lookupNextPolicy?application=${application}&lifecycle=${lifecycle}&useability=${useability}&selectOrder=${selectOrder} Optional parameters: lifecycle Note: Exactly the same as useNextPolicy, except the Useability of the selected item is not updated |
/update_policies_use_state | public DataHunterRestApiResponsePojo updatePoliciesUseState(String application, String identifier, String useability, String toUseability, String toEpochTime) …/api/updatePoliciesUseState?application=${application}&identifier=${identifier}&useability=${useability}&toUseability=${toUseability}&toEpochTime=${toEpochTime} Optional parameters: identifier, useability, toEpochTime Example: set all USED entries for an application to UNUSED http://localhost:8081/mark59-datahunter/api/updatePoliciesUseState?application=testrest&useability=USED&toUseability=UNUSED |
/async_message_analyzer | public DataHunterRestApiResponsePojo asyncMessageAnalyzer(String applicationStartsWithOrEquals, String application, String identifier, String useability, String toUseability) …/api/asyncMessageAnalyzer?applicationStartsWithOrEquals=${applicationStartsWithOrEquals}&application=${application}&identifier=${identifier}&useability=${useability}&toUseability=${toUseability} Optional parameters: identifier, useability, toUseability Values for applicationStartsWithOrEquals: EQUALS | STARTS_WITH Example: perform the same match as in the Timing Asynchronous Processes example, but also update the matched rows to USED |
/upload | No implementation |
Values for enumerated parameters
Parameter name | Values |
{To|From}useability | REUSABLE | UNPAIRED | UNUSED | USED |
applicationStartsWithOrEquals | EQUALS | STARTS_WITH |
selectOrder | SELECT_MOST_RECENTLY_ADDED | SELECT_OLDEST_ENTRY |SELECT_RANDOM_ENTRY |
The Mark59 Metrics web application can be configured to run commands on your Linux, Unix or Windows servers, or Groovy scripts, to obtain server metrics during JMeter test execution. Communication from JMeter to the Metrics web application is handled by the Mark59 Metrics Api artefact mark59-metrics-api.jar (see Metrics API). Communication from the Metrics web application to a target server happens in one of two ways: direct connections using Linux/Unix SSH or Windows WMIC commands, or Groovy scripts run on the server hosting Metrics.
The Quick Start Demo uses Metrics to run all the shell, (somewhat more awkwardly) Windows .bat, and Mac command files necessary to demonstrate the core functionality of Mark59.
Windows WMIC commands have limitations - they are designed for server status information and are not meant as a general “run any remote command you want” solution, unlike a Linux/Unix SSH connection. WMI was chosen because it's built into Windows and is always available.
However, WMIC is being deprecated by Microsoft. We currently plan to introduce a POWERSHELL option as an alternative in the next release.
The Metrics web application contains three main data structures covered in the following sections: Server Profiles, Commands, and Response Parsers.
Server Profiles are identifiers used in the JMeter Mark59 Metrics Api Java Request to link back to the Metrics web application. A Server Profile relates to a server you have under test, and is connected to a list of one or more Commands you want to run on that server (or, for a Groovy Server Profile, a Groovy script).
Pre-loaded Server Profiles (with Commands on the RHS of the page) shipped with Mark59
A Command can be a Groovy, Linux/Unix SSH, or WMIC command. The application ships with basic CPU and Memory commands for Linux, Unix, Windows and Mac (for test purposes - they may not suit your needs), a simple Groovy script that can be executed immediately, and an example of how to invoke the New Relic API (for which, at a minimum, you will need to set some parameter values, such as the New Relic API Key, in order to execute it).
The worked example below shows how to create a new command. Note that you do not use a ‘Response Parser’ for a Groovy command, since you can programmatically parse what you need to output within the Groovy itself. With a Groovy command, you can add parameter values in a Server Profile that runs the command. These parameters are not available externally (ie, to the Mark59 Metrics API), which means every Server Profile can be run directly from the Web Application (unless you decide to use an indirect parameterisation technique in a Groovy command, like a file lookup).
Tip: The Unix Memory scripts were provided by our Unix VM management team. They recommended the following SLAs: pgsp_aggregate_util (pagespace percent) should not increase to more than 50; pinned_percent should not increase to more than 30; numperm_percent is considered informational only (memory being used for file caching).
A single command may return more than one metric, for example the Linux Memory command returns three values. Each metric is captured from the command’s response using a parser.
A Linux “free” (memory) command using three response parsers, in order to capture three metrics.
Parsers use Groovy to parse the response from a server-based command and return a numeric result. If the parsed result is not numeric it is considered invalid. A Groovy command does not use a response parser, as you should be able to extract the metric(s) you need using the Groovy command itself.
Stringing together commands and parsers, and getting them working for a server profile, can be challenging - particularly if you are attempting some bespoke, complex work. Some assistance to test as you go is built into the application. This section goes through the workflow, using the example of adding a metric capturing disk usage on Windows servers. We assume you are using a local Windows machine.
Mark59 is shipped with WMIC commands to capture Windows Memory and CPU Utilisation metrics, but nothing for Disk Usage. To see CPU Utilisation, click on the “localhost_WINDOWS” Server Profile link, then click on the ”WinCpuCmd” link in the Selected Commands. You will see the Command:
cpu get loadpercentage
With WMIC_WINDOWS commands the “WMIC” at the start is built in (as are connection credentials when you connect remotely). To run this locally open a cmd prompt and type:
WMIC cpu get loadpercentage
You will probably see something like this (you may get a list of numbers instead of just one if there are multiple CPUs on your machine):
Now, to create a new WMI command for disk usage (unless you are a Windows systems admin) you will need to find a decent reference site, and work out that the WMIC command you want is:
LogicalDisk Where "DeviceID='c:'" get FreeSpace
We’ll assume our servers only use ‘c:’ for disk usage. Confirm it locally in a cmd window:
The full command when running against a target server, including the directory containing WMIC (which, by the way, isn’t guaranteed to be on the Windows path), is:
C:\WINDOWS\System32\wbem\WMIC /user:user /password:pass /node:server LogicalDisk Where "DeviceID='c:'" get FreeSpace
The next thing to note is the format of the command’s response - in this case it’s something like ‘FreeSpace 48571215872’, where 48571215872 is a byte count. We want to return this metric in GB (the general formula is to divide by 1073741824). This is done using a parser, written in Groovy. To help you check your parser while you are writing it, you can enter a Sample Response, and test your Groovy against it until you get it right. The response from the command you are parsing is held in a variable named commandResponse in the parser script. The parser also sets the suffix of the metric transaction id. Here is a Response Parser you could use, with a Script entry of:
Math.round(Double.parseDouble(commandResponse.replaceAll("[^\\d.]",""))/1073741824)
Here the metric has been assigned as a DATAPOINT. There are no hard rules about the choice of metric type - it’s whatever makes the most sense. For example, you might want to assign a type of MEMORY to this metric instead. That could make sense if you have no other DATAPOINT values for this application, and may be preferable to getting a single, lonely point on the Trends Datapoint graphs.
Once you save it, use the Test Parser button to test your code until it’s okay.
You will see nastier messages if you make a mistake in the Groovy. For example:
Next step is to create the actual command and attach the new parser to it:
To run the command, use a Server Profile and pick a server to run it against. As we only have ‘localhost’ available for this demonstration, we’ll have to use that. You can either edit an existing ‘localhost’ Server Profile to add the extra command:
Or you can create a new Server Profile that will just run the new command when called:
Tip: When any localhost server profile is executed by the Metrics web application, it will return metrics for the server the web application is installed on. When using the Excel Spreadsheet option (Download Server Profiles Using Excel), the localhost server profile executes on the server holding the Excel Spreadsheet. You can set the ‘Alternative Server Id’ field to HOSTID during Server Profile entry, so the actual server name is reported rather than ‘localhost’.
Save the new Server Profile for the purposes of this demo. Once saved, you are presented with the View and Test Server Details page.
Click the Run Profile button to test everything out:
The full metric transaction id is displayed, and in the Command Log section:
If you want to see the actual request that is sent to the Metrics application, and its response, click the API Link.
Details on how to run a Server Profile from JMeter are covered in the Metrics API chapter.
Finally, with this particular command, it’s pretty easy to verify the result by looking at the C drive properties:
As discussed, you can write a command using Groovy code. You don’t add a parser to the command, but you can add parameters and set their values in the Server Profiles that run the command. You cannot set the parameter values externally - ie, as a parameter on a JMeter Server Metrics Java Request. Parameter names are defined as a list in the ‘Parameter Names’ section when creating the command.
A predefined variable scriptResponse can be used to return the required metrics. It consists of two structures: a string called CommandLog, which is really just meant to help with debugging while creating or changing a script, and ParsedMetrics, the set of metrics being reported.
A simple, working example Command is provided to put this all together: SimpleScriptSampleCmd. It can be executed using Server Profile SimpleScriptSampleRunner.
For a more realistic example of where a Groovy command may be needed, see the skeleton New Relic example provided:
NewRelicSampleCmd (Server Profile NewRelicSampleProfile)
You may decide that you do not want to connect to the Metrics Web application during execution - for instance, if you don’t have it running all the time. In that case you can download a copy of all the Server Profiles and related data into an Excel spreadsheet, and run from that instead. Use the Download Server Profile link on the Server Profiles page to create the spreadsheet.
Downloading the Excel Spreadsheet
Server Profile Data as it appears in the Excel Spreadsheet
Tip: The Server Metrics spreadsheet displays particularly cleanly using Google Sheets.
The downloaded file name defaults to mark59serverprofiles.xlsx. In order for Mark59 to know where you have put the file, you need to define a system property, mark59.server.profiles.excel.file.path (usually in mark59.properties). For instance, if you put the Excel spreadsheet in the JMeter/bin directory:
mark59.server.profiles.excel.file.path=./mark59serverprofiles.xlsx
See the Metrics API chapter on how to use the excel file from a JMeter test.
Tip: It may be tempting to do quick hacks in the spreadsheet directly, and as long as you don’t make a mistake, that will work. However, we suggest always making changes in the Metrics application and downloading a fresh copy of the spreadsheet, so you keep a definitive ‘source of truth’ to work from.
As the Metrics web application holds server credentials, it is password protected. Examples of setting the application’s user/passwords can be found in the bat and sh files:
mark59/mark59-server-metrics-web/StartMark59ServerMetricsWebFromTarget
The three parameters involved are
--mark59metricsid
--mark59metricspasswrd
--mark59metricshide
Defaults (as used in the Quick Start Demo) are demo/demo/false. Obviously, the user and password need to be changed when using Metrics with real Server Profiles.
The last parameter, mark59metricshide, will prevent the credentials being printed on the console if set to true or yes. Instead of defining the credentials in the startup script, you can use environment variables. For example, in Windows:
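(assuming the environment variables carry the same names as the startup parameters)

set mark59metricsid=demo
set mark59metricspasswrd=demo
set mark59metricshide=true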
The ‘Password Cipher’ field used when setting server credentials is meant to prevent clear-text passwords being visible on screen. It’s an obfuscation rather than encryption, and is not designed to be used for highly secure production passwords. See com.mark59.core.utils.SimpleAES for usage.
For Linux/Unix, simply installing Kerberos on a client server can cause connection failure. This has been catered for by removing Kerberos as the preferred default protocol ("gssapi-with-mic") in the Jsch Api being used. If KERBEROS is entered as the password for a Linux/Unix server, "gssapi-with-mic" authentication will be used instead of the current settings. No other considerations, including the use of Kerberos on Windows servers, have been made.
The Metrics web application can require a call from clients (such as a com.mark59.metrics.api.ServerMetricsCaptureViaWeb Java Request in a JMeter test) to provide a Basic Authentication Header. The default is that the header is not required. Examples of setting up the requirement for the Header can be found (commented out) in bat and sh files:
mark59/mark59-server-metrics-web/StartMark59ServerMetricsWebFromTarget
The parameters directly involved are
--mark59metricsapiauth
--mark59metricsapiuser
--mark59metricsapipass
Defaults are false/sampleuser/samplepass. The Basic Authentication Header requirement can be switched on by setting mark59metricsapiauth to true (the user and password being irrelevant until that happens). As with the Security - Application Logon parameters, they can be set as environment variables. Also, the mark59metricshide parameter will prevent the Basic Authorization credentials being printed on the console if set to true or yes.
See Adding a Basic Authentication Header on how to set up the Basic Authentication Header token in the API call.
Metrics functionality is triggered in JMeter using an API call from JMeter Request com.mark59.metrics.api.ServerMetricsCaptureViaWeb, or via an excel spreadsheet downloaded from the Metrics web application using JMeter Request com.mark59.metrics.api.ServerMetricsCaptureViaExcel. The jar containing these requests is mark59-metrics-api.jar (at mark59/mark59-metrics-api/target).
This needs to be copied to the lib/ext directory of the JMeter instance running the test.
In the Quick Start Demo, the ServerMetrics_localhost thread in sample test plan DataHunterSeleniumTestPlan (at mark59/mark59-datahunter-samples/test-plans) demonstrates usage of ServerMetricsCaptureViaWeb.
It looks a bit more complex than the normal case, where you would just enter a plain Server Profile Name, because it has to resolve the operating system it's running on.
The connection between JMeter and Mark59 Metrics web application in the demo test plan
See the ServerMetrics_localhost_via_Excel thread group in test plan DataHunterSeleniumTestPlanUsingExcel.jmx for sample usage of ServerMetricsCaptureViaExcel.
Available parameters for ServerMetricsCaptureViaWeb:
Parameter | default value | Other options, notes |
MARK59_SERVER_METRICS_WEB_URL | http://localhost:8085/mark59-server-metrics-web | Url of the Server Metrics Web application |
SERVER_PROFILE_NAME | required | The Server Profile to run |
API_AUTH | (empty) | See below: |
LOG_ERROR_MESSAGES, PRINT_ERROR_MESSAGES | short | Options: short, full, no. Determines if (or at what length) error messages should be logged via log4j (e.g., jmeter.log) or printed to the console when an attempt to retrieve a metric throws an error. |
LOG_RESULTS_SUMMARY, PRINT_RESULTS_SUMMARY | false | Metrics Results Summary output: Log (via Log4j) or Print (to console) the script Results Summary (as per a Selenium script). |
Restrict_To_Only_Run_On_IPs_List | (empty) | As per a Selenium Script: Intended for Distributed testing, where a script thread is only to run on the listed IP address(es). |
Available parameters for ServerMetricsCaptureViaExcel:
Parameter | default value | Other options, notes |
SERVER_PROFILE_NAME | required | The Server Profile to run |
LOG_ERROR_MESSAGES, PRINT_ERROR_MESSAGES | short | Options: short, full, no. Determines if (or at what length) error messages should be logged via log4j (eg jmeter.log) or printed to the console when an attempt to retrieve a metric throws an error. |
LOG_RESULTS_SUMMARY, PRINT_RESULTS_SUMMARY | false | Metrics Results Summary output: Log (via Log4j) or Print (to console) the script Results Summary (as per a Selenium script). |
OVERRIDE_PROPERTY_MARK59_SERVER_PROFILES_EXCEL_FILE_PATH | (empty) | The location of the excel spreadsheet to use should normally be set using Mark59 property mark59.server.profiles.excel.file.path However, you can override that location if you wish using this parameter. |
Restrict_To_Only_Run_On_IPs_List | (empty) | As per a Selenium Script: Intended for Distributed testing, where a script thread is only to run on the listed IP address(es). |
For the ServerMetricsCaptureViaWeb Java Request, a Basic Authentication header will be required when the property mark59metricsapiauth has been set to 'true' in the mark59-metrics application (see Security - API Basic Authentication).
The credentials token should be placed in the API_AUTH JMeter argument, and will need to match the values set in mark59-metrics. For an example, see the ‘main’ in ServerMetricsCaptureViaWeb. For an example of how to create the token from the credentials, look at com.mark59.metrics.utils.MetricsUtils.createBasicAuthToken in the mark59-metrics project (in the Mark59 GitHub repo), and test case MetricsUtilsTest.testCreateBasicAuthToken().
This chapter assumes you have performed and reviewed the sample Trends database load described in The Quick Start Demo. In order to store data permanently, you need to create a MySQL or Postgres mark59trendsdb database rather than use H2. Database creation files are provided at mark59/databaseScripts/MySQL/MYSQLmark59trendsDataBaseCreation.sql or
mark59/databaseScripts/PostgresSQL/POSTGRESmark59trendsDataBaseCreation.sql
The Trends database load utility mark59-trends-load.jar (at mark59/mark59-trends-load/target) takes the results from a JMeter, Gatling or LoadRunner test, and creates a summary of the test run which can be viewed in the Trends web application. As the test results are processed, they are compared to values on the SLA (Service Level Agreement) tables, set using the Trends application. Warning or Error messages are output if the expected SLA ranges are not met, which can be parsed and colour-coded when running from a Jenkins server. The Jenkins parser used would typically set the severity to Error (Red) for SLA failures that mean the test cannot be considered valid (out-of-range Txn PASS Counts or Txn Fail %), and Warning (Yellow) when an SLA has failed but has not invalidated the test (eg 90th percentile response times, metric SLA failures).
To view the full list of parameters available for TrendsLoad, just start it with a missing required parameter, such as -a (application). The program will list the options before it errors:
Starting Trends Load.. Version: 5.3
Parsing failed. Reason: Missing required options: a, i
Exception in thread "main" usage: TrendsLoad
-a,--application <arg>      Application Id, as it will appear in the Trending Graph Application dropdown selections
-c,--captureperiod <arg>    Only capture test results for the given number of minutes, from the excluded start period (default is that all results except those skipped by the excludestart parm are included)
-d,--databasetype <arg>     Load data to a 'h2', 'pg' or 'mysql' database (defaults to 'mysql')
-e,--ignoredErrors <arg>    Gatling, JMeter(csv) only. A list of pipe (|) delimited strings. When an error msg starts with any of the strings in the list, it will be treated as a Passed transaction rather than an Error.
-h,--dbserver <arg>         Server hosting the database where results will be held (defaults to localhost). NOTE: all db options are applicable to MySQL or Postgres ONLY
-i,--input <arg>            The directory or file containing the performance test results. Multiple xml/csv/jtl results files are allowed for JMeter within a directory; a single .mdb file is required for Loadrunner
-k,--keeprawresults <arg>   Keep Raw Test Results. If 'true', will keep each transaction for each run in the database (System metrics data is not captured for Loadrunner). This can use a large amount of storage and is not recommended (defaults to false).
-l,--simulationLog <arg>    Gatling only. Simulation log file name - must be in the Input directory (defaults to simulation.log)
-m,--simlogcustoM <arg>     Gatling only. Simulation log comma-separated customized 'REQUEST' field column positions in order: txn name, epoch start, epoch end, txn OK, error msg. The text 'REQUEST' is assumed in position 1. EG: for a 3.6.1 layout: '2,3,4,5,6,' (This parameter may assist with un-catered-for Gatling versions)
-p,--dbPort <arg>           Port number for the database where results will be held (defaults to 3306 for MySQL, 5432 for Postgres, 9902 for H2 tcp)
-q,--dbxtraurlparms <arg>   Any special parameters to append to the end of the database URL (include the ?). Eg "?allowPublicKeyRetrieval=true&useSSL=false" (the quotes are needed to escape the ampersand)
-r,--reference <arg>        A reference. Usual purpose would be to identify this run (possibly by a link). Eg <a href='http://ciServer/job/myJob/001/HTML_Report'>run 001</a>
-s,--dbSchema <arg>         Database schema (MySQL terminology) / database name (Postgres terminology). Defaults to mark59trendsdb
-t,--tool <arg>             Performance Tool used to generate the results to be processed { JMETER (default) | GATLING | LOADRUNNER }
-u,--dbUsername <arg>       Username for the database (defaults to admin)
-w,--dbpassWord <arg>       Password for the database
-x,--eXcludestart <arg>     Exclude results at the start of the test for the given number of minutes (defaults to 0)
-y,--dbpassencrYpted <arg>  Encrypted Password for the database (value as per the encryption used by the mark59-metrics application 'Edit Server Profile' page)
-z,--timeZone <arg>         Loadrunner only. Required when running an extract from a zone other than where the Analysis Report was generated. Also, internal raw stored time may not take daylight saving into account. Two format options: 1) offset against GMT, eg 'GMT+02:00', or 2) IANA Time Zone Database (TZDB) codes (refer to https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), eg 'Australia/Sydney'

Sample usages
-------------
1. JMeter example: Process JMeter xml formatted results in directory C:/jmeter-results/BIGAPP (file/s end in .xml). The graph application name will be MY_COMPANY_BIG_APP, with a reference for this run of 'run ref 645'. The mark59trendsdb database is hosted locally on a MySQL instance assigned to port 3309 (default user/password of admin/admin):
java -jar mark59-trends-load.jar -a MY_COMPANY_BIG_APP -i C:/jmeter-results/BIGAPP -r "run ref 645" -p 3309
2. Gatling example: Process the Gatling simulation.log in directory C:/GatlingProjects/myBigApp. The graph application name will be MY_COMPANY_BIG_APP, with a reference for this run of 'GatlingIsCool'. The mark59trendsdb database is hosted locally on a Postgres instance using all defaults (but with sslmode disabled):
java -jar mark59-trends-load.jar -a MY_COMPANY_BIG_APP -i C:/GatlingProjects/myBigApp -d pg -q "?sslmode=disable" -t GATLING -r "GatlingIsCool"
3. Loadrunner example: Process the Loadrunner analysis result at C:/templr/BIGAPP/AnalysisSession (containing file AnalysisSession.mdb). The graph application name will be MY_COMPANY_BIG_APP, with a reference for this run of 'run ref 644'. The mark59trendsdb database is hosted locally on a MySQL instance assigned to port 3309 (default user/password of admin/admin):
java -jar mark59-trends-load.jar -a MY_COMPANY_BIG_APP -i C:/templr/BIGAPP/AnalysisSession/AnalysisSession.mdb -r "run ref 644" -p 3309 -t LOADRUNNER
The application id (-a) and input file or directory (-i) are the only mandatory parameters.
When you want to load the results for the first run of a new application, you do not need to do anything special for everything to work (it will appear as a new application with a single-run graphic in Trend Analysis). However, you will need to set transactional SLAs, and any new metric SLAs, if you want SLA checking for the application.
The general format will be similar to this DataHunter application load example:
cd C:\mark59\mark59-trends-load\target
java -jar C:\mark59\mark59-trends-load\target\mark59-trends-load.jar -a DataHunter -i C:\Jmeter\Jmeter_Results\DataHunter -p 3309 -r "<a href='http://mycorpserver.corp/job/DataHunter_Result/177/HTML_Report'>run 177</a>"
Refer to mark59/mark59-trends-load/LoadDataHunterResultsIntoTrends for more detailed examples.
For JMeter, the input is a directory. Multiple files in the directory can be processed, and are considered part of a single run. The -d, -h and -p parameters refer to the location of the target MySQL/Postgres database. The -r parameter is just a tag to help identify the run. In this example the data load was run from a Jenkins server, and the parameter was built in a way that allows a link back to Jenkins when displaying the run in the Trends application.
For LoadRunner, the input is all taken from the Access ".mdb" file created during Analysis. Note that this file contains data for the entire test (you can use the -x and -c parameters to capture a target test period). As an example of the format, assuming a MySQL database is local on port 3306 with the default user/password, and the access db file is at
C:/templr/BIGAPP/AnalysisSession/AnalysisSession.mdb:
java -jar mark59-trends-load.jar -a MY_COMPANY_BIG_APP -i C:/templr/BIGAPP/AnalysisSession/AnalysisSession.mdb -d mysql -r "run ref 666" -t LOADRUNNER
A few notes about the data extract process:
Filtering of which metrics and datapoints are to be captured is done in the Trends web application, using the Metric Event Mapping Reference pages. To assist, the full Event-map table is output during the load, indicating which events have been mapped.
The LoadRunner functionality is considered 'legacy'; however, we will retain the functionality in the code base in future releases.
Note that the raw .mdb file does not seem to allow for Daylight Savings, or for when the Analysis and the run occur in different time zones. You can use the -z parameter to compensate.
The general format will be similar to the sample given in the Sample Usages printout:
java -jar mark59-trends-load.jar -a MY_COMPANY_BIG_APP -i C:/GatlingProjects/myBigApp -d pg -q "?sslmode=disable" -t GATLING -r "ILuvGatling"
For Gatling the input directory is the one containing simulation.log (you can override the default name of the log file with the -l option).
The mark59trendsdb database connection is as per JMeter and LoadRunner. You can only process one simulation log in a given run (unlike with JMeter).
The main problem with processing Gatling results is that the developers have not formalised an output format, so it can (and does) change from release to release. For that reason Gatling support is not actively maintained, with JMeter being the primary tool we support. At the time of writing, versions 3.3 to 3.6 had been tested and explicitly catered for. A -m,--simlogcustoM parameter has been included, which may assist if a future, uncatered-for release simply changes the field ordering or adds extra tabs on REQUEST lines in the simulation log (3.6 is the default ordering).
Just a quick note on what may seem like a strange parameter, available when processing Gatling or JMeter (csv only) results:
-e,--ignoredErrors <arg> Gatling, JMeter(csv) only. A list of pipe
(|) delimited strings. When an error msg
starts with any of the strings in the list,
it will be treated as a Passed transaction
rather than an Error.
Both JMeter and Gatling mark transactions that fail assertions (eg a JMeter response assertion) as failed, but you may actually want to treat the transaction as a PASS when loading it into Trend Analysis. For example, if in a Gatling test some transaction assertions fail and you get a pile of ‘responseTimeInMillis.find.lessThan(...’ messages, you may not want to mark those transactions as FAILs. The command line for this would contain something like:
... "-e","responseTimeInMillis|anothererrortoingoreStartsWith", "-t","GATLING" ...
The reasoning is that if transactions actually completed but were just slow, you definitely want to report them as PASSed transactions, or your results are going to get skewed (FAILed transactions are not taken into account for timings).
For further usage have a look at the TrendsLoadGatlingTest JUnit test case in the mark59-trends-load project on the Mark59 GitHub.
The Trends Application is at the heart of Mark59. It consists of a number of web pages which perform the functions covered in the following sections.
You need to connect to the same MySQL / Postgres / H2 database instance that was used to load the Trends data using mark59-trends-load.jar (last chapter).
See mark59/mark59-trends/StartTrendsFromTarget (.bat or .sh) for details.
The dashboard shows the current status of applications under test at a glance. The SLA state refers to the last test for each application. Note that the status is computed dynamically when you enter the dashboard, so it will reflect any changes you have made to the SLA tables since the last run. 'Active' or 'All' applications can be displayed. An entire application can be deleted from here - it must be set to inactive first.
By default only Active applications are listed in the Trend Analysis drop down
The main selectors should be sufficient for the majority of cases (listed across the top of the page):
The Trend Analysis main selectors
Advanced Filters are selectors expected to be used less often, intended for when you are drilling down into the data for a specific reason - maybe to isolate a particular result you want to highlight to the user community. There are subtleties in the way the selectors work and interact with each other; the best way to learn is just to play around. The DataHunter samples should be good for this.
They work in the same way as described for the Transaction Display Filters, except (as you might guess) allowing for run selection. There is some leniency in that the date formatting is ignored (the dots and colons are optional). For example, entering 2022.__.10T11:__ and 2022__1011:__ in the 'run date-time (SQL like)' field selects the same runs (those occurring on the 10th day of any month in 2022, started in the hour from 11 a.m. on that day).
The Trend Analysis Advanced Filters
The graphs should mostly make sense, although there are several things worth commenting on. The range bars (and the Range Bar Legend in the bottom right of the graph canvas) depend on the graph being displayed. If you want to know exactly what the Range Bar is for a graph, you can see the SQL used on the Graph Mapping Admin page.
A transaction breaching an SLA will be displayed on the graphic with a red exclamation mark (!) beside it. In the table data, the transaction name will be in red. However, the metric quantity is only shown in red if it is the quantity that caused the failure. In the following example, the Pass Count SLA has actually failed for transaction DH-lifecycle-0100-deleteMultiplePolicies, but as this is the 90th percentile graph and the 90th percentile was okay, the metric (0.167) is not shown in red.
A SLA Pass Count failure on a transaction as displayed on the TXN_90TH graph
SLA Pass Count failure on a transaction as displayed on the TXN_PASS graph
Where a transaction name exists on the SLA tables and has an SLA set for it, but the transaction doesn't appear in the results for the last displayed run, it will appear in a table called 'Missing Transactions List'.
A similar table, the "Missing Metrics List", will display on the particular graph relevant to a metric that has had a metric SLA set, but the metric was not captured during the run.
For transactional graphs, you can remove transactions that you don't want to see on the graph. You set the flag via the Transaction SLA Reference Maintenance screen. The “Ignored Transactions List” displays below the main transaction table as a reminder.
Both metric and transactional SLAs can be made inactive by setting the ‘Active’ flag to ‘N’ in their respective SLA Maintenance pages. Inactive transactional SLAs will be shown in a table on the transactional graphs.
Deactivated Metric SLAs will also display in a table, but only on the graph for the particular metric and value derivation that has been disabled. For example, disabling an SLA for a Metric Type of DATAPOINT and Value Derivation of Last means the table will be shown on the DATAPOINT_LAST graph, but not on any other metric graph.
Two graphs that ship with the application, and whose purpose might not seem obvious, are TXN_90TH_EX_DELAY and TXN_DELAY. They relate to the presentation of mocked transaction delays; see the description of the Txn delay field in Transaction SLA Maintenance.
A straightforward page providing functionality to delete an application run, or to edit it. The most important aspect of the edit function is the ability to mark a run as 'Baseline'. This allows selection of the run as a baseline on the Trend Analysis Graph. The latest baseline can also be used to automatically create transactional SLAs with defaulted values, as described in the next section.
Maintenance of transaction-based SLAs for an application, plus bulk creation of SLA data for an entire application, and the ability to copy/delete an entire application's SLA data. We suggest you play around with the SLA data as uploaded from the supplied sample for DataHunter to get a feel for the screens. Any changed SLAs are applied to the Trend Analysis page graphic as soon as you redraw the graph, so you can see what constitutes passed/failed/ignored transactions as you make changes.
Summary of fields:
Key
SLA related values
Non-SLA related values, stored here as a convenience as they relate to application-by-transaction data
This option lets you create a set of default-valued transactional SLAs when you add or significantly change an application. The list of transaction names to be added is based on the most recent baseline for the application. Only those transactions that do not already have an SLA entry will be added. Default values can be set, or, for percentiles, can be taken from the last baseline, in which case those are the values placed in the new SLAs. The transaction Pass Count SLA is based on the count for the transaction in the baseline. You also have the option of re-writing the ‘Reference’ field for all SLAs, or just for new ones.
This can be handy when you are setting up a new application id where the application is a close copy of an existing test, or when you simply want to save a copy of the current SLAs while editing.
Maintenance of metric based SLAs for an application - DATAPOINT, CPU_UTIL and MEMORY statistics. As for transactional SLAs, any changes are applied to the Trend Analysis page graphic as soon as you redraw the graph, so you can see what constitutes passed or failed SLAs as you make changes.
Summary of field values:
Not all Value Derivations for all Metric Types are graphed, or even captured, as they may be of marginal or nonsensical value. A value of -1 is used when a metric value derivation is not recorded. For example, the "Stop" value derivation is only relevant to LoadRunner and so is never captured for JMeter tests. The Minimum recorded value for CPU_UTIL is actually captured, but is considered of little value so is not graphed. But if for some reason you wanted to set an SLA against CPU_UTIL Minimum you could.
Setting an unusual metric SLA : CPU_UTIL Minimum
Such an SLA will be checked and reported; it just will not appear on any graph. However, if you consider this SLA metric relevant to the test results you want to report, you can create a graph for it, as covered in the Graph Mapping Administration section.
A simple facility to help with the situation where you want to rename a transaction in a script, but keep the history of the transaction available in the Trend Analysis graphic.
The main page includes the names of all transactions for the application. To help with context it also lists the last time the transaction appeared in a test, and how many tests it appears in.
Two forms of renames are allowed: when you want to rename a transaction to something not used before, or when you want to ‘merge’ one transaction with another. You get a warning when you do a ‘merge’, because you can’t just undo it.
Confirmation of a ‘merge’ of transaction names
You cannot rename a transaction to the name of a transaction that appears in the same test, nor (with one exception, below) can you change the transaction type. If you attempt either, you will get a link to the Trend Analysis graphic, which will show data for all runs with both transaction names.
a ‘clash’ of transaction names
There is one case where changing the transaction type is allowed (since this could be a pretty common thing to want to do): you can flip between a transaction being “CDP” and non-“CDP”.
This page provides a mechanism to match groups of similarly named metric transactions of a metric type to a set of attributes relevant to those transactions. This functionality tends to be particularly relevant to LoadRunner, but can be useful in JMeter too.
In LoadRunner, you can choose which SiteScope entries you want (Metric Source Loadrunner_SiteScope) by setting SQL 'like' checks in the 'Match When Source Name Like' field, and setting which Mark59 metric data type those entries should map to (CPU_UTIL, MEMORY or DATAPOINT). It's a similar process for Loadrunner_DataPoint.
SiteScope entry names tend to be very long, so if you can find a common Left Boundary and/or Right Boundary around the actual metric name you want to display on the graph and set SLAs against, you can set boundaries (potentially useful for JMeter testing as well).
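For example, given a metric source named along the lines of 'Memory MB free on auxserver01 (SiteScope)' (a made-up name, purely to illustrate), a Left Boundary of 'Memory MB free on ' and a Right Boundary of ' (SiteScope)' would leave just 'auxserver01' as the name displayed on the graph.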
It's possible that the Unix machines you are monitoring capture the idle percentage (via LPARSTAT idle). In that case, it can be 'inverted' to a CPU_UTIL by setting the 'Is Inverted %' flag to 'Y' (so, for instance, an idle reading of 85% is treated as a CPU_UTIL of 15%).
Basic samples of the above situations are provided in the sample data.
In JMeter, a potentially useful feature is the ability to remap an input 'metric source' to a different metric type. The most important re-mapping is probably from an input 'metric source' of Jmeter_TRANSACTION to the metric data types of CPU_UTIL or MEMORY. This will be relevant when you are capturing metrics via some 3rd party tool that cannot set the datatype of the transaction in the way Mark59 can.
Pretending a DataHunter transaction is actually a CPU_UTIL
.. will mean the transaction behaves as if it was a CPU_UTIL in Trend Analysis
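As a rough sketch, the entry shown performs a mapping along these lines (the field labels are abbreviated and the 'like' value is purely illustrative - the image above shows the actual page):

Metric Source: Jmeter_TRANSACTION
Match When Source Name Like: %DataHunter_Txn_Example%
Maps to metric data type: CPU_UTIL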
Note that this re-mapping does not apply to the JMeter reports produced using Enhanced JMeter reporting, as only the raw csv JMeter output is used to create those reports.
As alluded to above, during the Trend Analysis database load, transaction/metric entries are matched against the 'Match When Source Name Like' field using a SQL 'like' query. In the situation where multiple matches are possible, the topmost entry on the page is the one that will be matched. For further details of the ordering algorithm, refer to the source code at the Mark59 GitHub for class com.mark59.trends.data.eventMapping.dao.EventMappingDAOjdbcTemplateImpl.java in the mark59-trends project.
One particular entry type is worth noting: a Metric Source with a 'Match When Source Name Like' field of just '%'. If such an entry exists, it will always be the last entry matched against, which means that Metric Source will always find a match - the ‘%’ entry acts as a ‘catch all’. For example, with entries of '%CPU%' and '%', a source name of 'server1 CPU util' matches '%CPU%', while anything else falls through to '%'. In the situation where you only want DATAPOINTs that are specifically matched by the other entries in the mapping table, you would want to remove this 'catch-all' entry, so that DATAPOINTs you don't want are simply bypassed.
If it exists, the "%"-only catch-all entry is always selected last
This is an administrative function that we do not expect most users to need often, if at all. The main graphs that have been found useful over time are supplied in the sample data. The best way to get a feel for the options available is to examine the graph setups provided by the samples, and to look at the details provided on the 'Add new Graph' and 'Edit Graph Mapping Details' pages.
For an example of adding a new graph, we will add a graph for Minimum CPU_UTIL, which would display the unusual SLA discussed in the Metric SLA Maintenance section above. The following entry would create a graph type called CPU_UTIL_LOWS:
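While in practice the easiest approach is to copy the details from an existing CPU_UTIL graph entry, the key values would be along these lines (a sketch only - the field labels are indicative, and the actual entry is shown in the image below):

Graph: CPU_UTIL_LOWS
Metric data type: CPU_UTIL
Value derivation: Minimum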
The metric SLA entry that would create the bar range:
And here is the graph you would see:
Mark59 allows for JMeter report splitting by datatype.
This chapter assumes you have performed and reviewed the JMeter Report Generation step described in The Quick Start Demo.
The mark59-results-splitter.jar (at mark59-results-splitter/target) is a simple utility that can be used to split the output csv or xml formatted results file(s) of a JMeter test into multiple reports, according to data type. You can just have one combined report, you can split into individual data types (Transactions, CPU_UTIL, DATAPOINT, MEMORY), or you can have a Transactions Report and Metrics Report (combining CPU_UTIL, DATAPOINT, MEMORY).
There are a few other miscellaneous options (see the full description below). There are a couple of options controlling how errored transactions are listed. You can also print out the 'main' transaction of a result containing sub-transactions if you choose (for instance, this would give you the total length of time an entire Mark59 Selenium script executed for). For the transactions report, you can choose not to print CDP transactions.
To view the parameters available for ResultFilesConverter, just start the utility without parameters, and the options will be listed.
Results Splitter starting.. Version: 5.3
ERROR: ERROR : Parsing failed. Reason: Missing required option: f
Exception in thread "main" usage: ResultsConverter
-c,--cdpfilter <arg> 'ShowCDP' (the default) will include
CDP transaction in the transactions.
'HideCDP' will remove any CDP
transactions from the report.
'OnlyCDP' will create a transactions
report that ONLY contains
transactions marked as CDP
transactions. Any separate Metrics
report(s) will not be affected, as
CDP filtering is only to
transactional data
-e,--errortransactionnaming <arg> How to handle txns marked as failed.
'Rename' suffixes the failed txn
names with '_ERRORED'. 'Duplicate'
keeps the original txn plus adds a
'_ERRORED' txn. Default is 'No' -
just keep the original txn name.
-f,--outputFilename <arg> Base output CSV file name. File
extension will be .csv (will be
suffixed .csv even if not included in
the argument). If metrics are split
out, an additional file ending will
be added for metric datafile(s) - see
'Metricsreportsplit' options for
details.
-i,--inputdirectory <arg> The directory containing the
performance test result file(s).
Multiple xml/csv/jtl results files
allowed. Default is current
directory
-m,--Metricsreportsplit <arg> Option to create separate file(s) for
metric data. 'CreateMetricsReport' -
create separate file with all non-txn
data, suffixed _METRICS ,
'SplitByDataType' create a file per
datatype, suffixed with _{datatype}.
Default is 'No' - just put everything
in the one output file
-o,--outputdirectoy <arg> Directory in which to write the
output CSV file. Must already exist.
Default is a folder named 'MERGED'
under the input directory
-x,--eXcludeResultsWithSub <arg> TRUE (the default) will exclude the
XML file main sample transaction for
entries which has sub-results, or for
CSV files lines marked as 'PARENT'
('FALSE' to include)
Sample usage
------------
1. Concatenate a set of Jmeter result files in D:/Jmeter_Results/MyTestApp, into a single .csv result file, output file MyTestAppJmeterResult.csv to directory D:/Jmeter_Results/MyTestApp/MERGED :
java -jar mark59-results-splitter.jar -iD:\Jmeter_Results\MyTestApp\ -fMyTestAppJmeterResult
2. As above (but with the current directory set as D:/Jmeter_Results/MyTestApp before running), but this time split the metric data types out into separate csv files, and suffix errored txns named with _ERRORED
java -jar mark59-results-splitter.jar -fMyTestAppJmeterResult -eRename -mSplitByDataType
The only mandatory parameter is the base output file name (-f). The parameters you are most likely to need to set are the input directory (-i), which defaults to the current directory, and possibly the output directory (-o), which holds the split CSV file(s); the default is to place them in a folder named 'MERGED' under the input directory.
Only files with suffixes .jtl, .xml and .csv in the input directory are picked up, other files and sub-directories are ignored. The format of JMeter results output files should be as discussed in Mark59 (Selenium and non-Selenium) Script Results Output.
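As a further illustration, a transactions report with CDP transactions hidden, plus a single combined metrics file, could be produced with a command along these lines (options as per the help text above):

java -jar mark59-results-splitter.jar -fMyTestAppJmeterResult -cHideCDP -mCreateMetricsReport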
In this chapter we will step through one way to set up a basic workflow in Jenkins to run and report on a JMeter test utilising Mark59. As per the earlier ‘Quick Starts’, the application under test is DataHunter, tested using Selenium scripts. This example goes through a Windows setup - the commands used are Windows .bat files. It is assumed that the mark59 zip file was unzipped to C:\mark59. Basic knowledge of Jenkins is also assumed (there’s plenty of help online for Jenkins if you need it).
The concepts for this setup are described in the following Continuous Integration Design chapter.
Create a directory called C:/jenkins on your local machine, and copy the contents of directory mark59-jenkins-demo in the Mark59 xtras GitHub repository into it.
Download the latest jenkins.war file from the Jenkins download page (“Generic Java Package .war”), and place it in C:/jenkins
Start Jenkins using the .bat file StartJenkinsFromWar.bat.
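As a rough indication of what the .bat does, it essentially runs a command along these lines (illustrative only - check the shipped file for the actual options used; the CSP system property, discussed in the job descriptions below, allows the HTML reports to render):

java -Dhudson.model.DirectoryBrowserSupport.CSP="" -jar jenkins.war --httpPort=8082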
Go through the usual setup steps: ‘secret’ password and setting an ‘admin’ user.
There’s no need to add the ‘suggested plugins’ - but some of the plugins needed may be on the list presented - see the next step.
As part of the initial setup, you will be asked to either:
OR
The following is the list of additional plugins that are required, or that we have found useful. Please install them via the Jenkins Plugin Manager.
Additional Plugins Installed
Green balls
OWASP Markup Formatter
Html publisher
Parameterized Trigger
Conditional BuildStep
Dashboard View
JavaMail API
Email Extension
Node and Label parameter
Log Parser Plugin
PowerShell
Sidebar Link
Slave Setup
slave-status
SSH Agent
Other useful plugins
Job Configuration History
Extra Columns
Nested View
Mask Passwords
Slack Notification
ThinBackup
In Manage jenkins > Configure System
Confirm the Home directory is set as C:\jenkins\jenkins_home, then:
In Console Output Parsing add these Parsing Rules:
Description: Log_Parse_Rules_Jmeter_Verification
Parsing Rules File: C:\jenkins\_parserules\LogParseRulesJmeterVerification.txt
Description: Log_Parse_Rules_Jmeter_Generic
Parsing Rules File: C:\jenkins\_parserules\LogParseRulesJmeterGeneric.txt
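These files are in the Jenkins Log Parser plugin rule format: one regular expression per line, tagged with the status to assign to matching console lines. A minimal sketch of the format (illustrative only - the shipped files contain the rules actually used):

error /ERROR/
warning /WARN/
ok /.*/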
In Manage jenkins > Configure Global Security
Check Markup Formatter is set to Safe HTML
The three chained jobs DataHunter_Check, DataHunter_Execute and DataHunter_Result are set up in reverse order, so the next job in the chain already exists when we create the link.
Create job DataHunter_Result as a freestyle project. Then setup the configuration using values as below. Please review Job Configuration images in the mark59-jenkins-demo\_sampleJobConfigImages folder to guide you as well.
Description:
- Uses Trends Load to update the Trends database with the results<br>
- Creates JMeter reports for the run.<br><br>
Trends at <a href="http://localhost:8083/mark59-trends/trending?reqApp=DataHunter">DataHunter Trends</a>
<br><br>You will need to update the 'SetPath' parameter for your environment.
<br>See mark59\mark59-trends-load\LoadDataHunterResultsIntoTrends.bat for other database connection examples.
<br><br><a href="https://stackoverflow.com/questions/35783964/jenkins-html-publisher-plugin-no-css-is-displayed-when-report-is-viewed-in-j">See this discussion</a>
if your html reports do not render properly.
<br>(summary: system property hudson.model.DirectoryBrowserSupport.CSP="" must be set on startup)
Discard old builds > Max # of builds to keep : 50
This project is parameterised (tick)
Boolean Parameter: parmExecuteScriptVerificationPassed
Set by Default: unticked
Description: flag passed through from the Check job
String Parameter: parmExecuteResult
Default Value: Execution_OK
Description: Value indicating status of previous job (Execute)
String Parameter: Application
Default Value: DataHunter
Description: Application name as it will appear in the Mark59 Trends application
String Parameter: Granularity
Default Value: 15000
Description: Defines the granularity for jmeter graph reporting. Value is in milliseconds
String Parameter: JmeterHomeReportGeneration
Default Value: C:\apache-jmeter
Description: Alternate JMeter instance to use for report generation.
- allows for customization just for reporting purposes (eg removing the APDEX section)
Choice Parameter - MetricsReportSplit
Choices:
SplitByDataType
No
CreateMetricsReport
Description:
Option to create separate report(s) for metric data:<br>
- 'CreateMetricsReport' - create separate file with all non-txn data, suffixed _METRICS, <br>
- 'SplitByDataType' creates a file per datatype, suffixed with _{datatype}. <br>
- 'No' - just put everything in the one output file (this is the program default if the parameter is not entered)<br>
Choice Parameter: ErrorTransactionNaming
Choices:
Rename
No
Duplicate
Description
How to handle txns marked as failed.<br>
- 'Rename' suffixes the failed txn names with '_ERRORED'.<br>
- 'Duplicate' keeps the original txn plus adds a '_ERRORED' txn.<br>
- 'No' - just keep the original txn name (this is the program default if the parameter is not entered).<br>
Choice Parameter: eXcludeResultsWithSub
Choices:
True
False
Description
Whether to exclude the 'main' sample Result data for sample Results that have SubResults.<br>
- 'True' exclude main Results (this is the program default if the parameter is not entered).<br>
- 'False' include main Results.<br>
Choice Parameter: cdpfilter
Choices:
ShowCDP
HideCDP
OnlyCDP
Description
'ShowCDP' (the default) will include CDP transactions in the report.
'HideCDP' will remove any CDP transactions from the report.
'OnlyCDP' will create a report that ONLY contains transactions marked as CDP transactions.
Note: Metrics report(s) will not be affected, as CDP filtering only applies to transactional data
String Parameter : eXcludestart
Default Value: (leave blank)
Description: (optional) EXclude results at the start of the test for the given number of minutes. Filter eg: <b><code>-x 10</code></b> <br>Leave blank for no exclusion.
String Parameter: captureperiod
Default Value: (leave blank)
Description: (optional) Only capture results for the given number of minutes of the test. Filter eg: <b><code>-c 60</code></b> <br> Leave blank to capture to the end of the test.
String Parameter : SetPath
Default Value :"C:\Java\jdk-11\bin";C:\Windows\System32;C:\windows\system32\wbem
Description: set PATH env variable ** you will need to set the Java path
Terminate a build if it's stuck
Time-out strategy : Absolute
Timeout minutes: 15
Build steps…
Execute Windows batch command: CALL C:\jenkins\commands\JMeterResult.bat
Post Build Actions…
Mark build Unstable on Warning: tick
Mark build Failed on Error: tick
Use global rule (select) Log_Parse_Rules_Jmeter_Generic
Publish HTML Reports…
HTML directory to archive: C:\Mark59_Runs\JenkinsJmeterReports\${Application}
Index page[s]: index.html
Report title: Transactions_Report
Options : tick all options except ‘Allow missing report’
HTML directory to archive: C:\Mark59_Runs\JenkinsJmeterReports\${Application}_CPU_UTIL
Index page[s]: index.html
Report title: CPU_Utilizations_Report
Options : tick all
HTML directory to archive: C:\Mark59_Runs\JenkinsJmeterReports\${Application}_DATAPOINT
Index page[s]: index.html
Report title: Data_Points_Report
Options : tick all
HTML directory to archive: C:\Mark59_Runs\JenkinsJmeterReports\${Application}_MEMORY
Index page[s]: index.html
Report title: Memory_Metrics_Report
Options : tick all
HTML directory to archive: C:\Mark59_Runs\JenkinsJmeterReports\${Application}_METRICS
Index page[s]: index.html
Report title: All_Metrics_Report
Options : tick all
Create job DataHunter_Execute as a freestyle project. Then setup the configuration using values as below. Please review Job Configuration images in the mark59-jenkins-demo\_sampleJobConfigImages folder to guide you as well.
Description:
Executes the DataHunter Demo Selenium Test Plan
<br><br>You will need to update the 'SetPath' parameter for your environment
Discard old builds > Max # of builds to keep : 30
This project is parameterised (tick)
Boolean Parameter: ScriptVerificationPassed
Set by Default: ticked
Description: expected to be 'true' if verification job passed, 'false' if it failed
Boolean Parameter: RunCI
Set by Default: ticked
Description: Untick will perform this Script verification Job ONLY and will not trigger the full performance test.
String Parameter: Application
Default Value: DataHunter
Description: Application name as it will appear in the Mark59 Trends application
String Parameter: Duration
Default Value: (leave blank)
Description: Optional Duration parameter (seconds)<br>
Enter using the full format for the <b>Duration</b> jmeter parameter. For example <b><code>-JDuration=3600</code></b>
String Parameter: JmeterHome
Default Value: C:\apache-jmeter
Description:
String Parameter: JmeterTestPlan
Default Value: C:\mark59\mark59-datahunter-samples\test-plans\DataHunterSeleniumTestPlan.jmx
Description: location of the test plan to execute
String Parameter: TestResultsFileName
Default Value: DataHunterTestResults.csv
Description: Jmeter results file name
String Parameter: ExtraParam1
Default Value: -JForceTxnFailPercent=0
Description:
ExtraParam1 - ExtraParam10 Available<br>
Additional optional arguments that can be passed to Jmeter. For example, if your test plan has a parameter 'ForceTxnFailPercent'
(sample usage : <b><code>${__P(ForceTxnFailPercent,25)}</code></b> ), then a value can be passed from here such as: <br>
<b><code>-JForceTxnFailPercent=50</code></b>
String Parameter: ExtraParam2
Default Value: -JStartCdpListeners=true
Description: Whether or not to switch the CDP listeners on in the scripts
- You can add further 'ExtraParam' arguments in here if you wish, and leave their Default Values blank if they are not required
String Parameter : SetPath
Default Value :"C:\Java\jdk-11\bin";C:\Windows\System32;C:\windows\system32\wbem
Description: set PATH env variable ** you will need to set the Java path
Build Environment…
Terminate a build if it's stuck
Time-out strategy: Absolute
Timeout minutes: 15 ** increase this for a ‘real’ test
Build Steps…
Execute Windows batch command: CALL C:\jenkins\commands\JMeterExecute.bat
Post Build Actions…
Mark build Unstable on Warning: tick
Mark build Failed on Error: tick
Use global rule (select) Log_Parse_Rules_Jmeter_Generic
Trigger parameterized build on other projects…
Build Triggers
Projects to Build: DataHunter_Result
Trigger when build is: (select) Failed
Predefined parameters
parmExecuteScriptVerificationPassed=$ScriptVerificationPassed
parmExecuteResult=Execution_failure
Projects to Build: DataHunter_Result
Trigger when build is: (select) Stable
Predefined parameters
parmExecuteScriptVerificationPassed=$ScriptVerificationPassed
parmExecuteResult=Execution_OK
Create job DataHunter_Check as a freestyle project. Then setup the configuration using values as below. Please review Job Configuration images in the mark59-jenkins-demo\_sampleJobConfigImages folder to guide you as well.
Description:
Executes the DataHunter Demo Selenium Test Plan over a short period for verification
<br><br>You will need to update the 'SetPath' parameter for your environment
Discard old builds > Max # of builds to keep : 30
This project is parameterised (tick)
Boolean Parameter: RunCI
Set by Default: ticked
Description: Untick will perform this Script verification Job ONLY and will not trigger the full performance test.
String Parameter: Application
Default Value: DataHunter
Description: Application name as it will appear in the Mark59 Trends application
String Parameter: Duration
Default Value: 30 ** may need to be increased for a ‘real’ test
Description: Test run Duration in seconds - should be only as long as necessary to validate the test plan
String Parameter: Users
Default Value: 1
Description: Number of users to use for shakeout. Will be applied across all (non-metric) thread groups that have been parameterized.
String Parameter: MonitoringUsers
Default Value: 1
Description: Number of users to use for shakeout of the 'metric capture' thread groups that have been parameterized (Usually either 1 or 0).
String Parameter: JmeterHome
Default Value: C:\apache-jmeter
Description:
String Parameter: JmeterTestPlan
Default Value: C:\mark59\mark59-datahunter-samples\test-plans\DataHunterSeleniumTestPlan.jmx
Description: location of the test plan to execute
String Parameter: OverrideTestResultsFile
Default Value: C:\Temp\DataHunterTestResults.csv
Description: Jmeter results file location - generally set to some temp location for the verify, so as not to be confused with an actual test
Boolean Parameter: KillDrivers
Set by Default: unticked
Description: Removes drivers and browsers that may be left over from previous jobs
(you may want to create separate jobs to Kill or not Kill drivers and browsers)
String Parameter: ExtraParam1
Default Value: -JForceTxnFailPercent=0
Description:
ExtraParam1 - ExtraParam10 Available<br>
Additional optional arguments that can be passed to Jmeter. For example, if your test plan has a parameter 'ForceTxnFailPercent'
(sample usage : <b><code>${__P(ForceTxnFailPercent,25)}</code></b> ), then a value can be passed from here such as: <br>
<b><code>-JForceTxnFailPercent=50</code></b>
String Parameter: ExtraParam2
Default Value: (leave blank)
Description: You can add further 'ExtraParam' arguments in here if you wish, and leave their Default Values blank if they are not required
String Parameter : SetPath
Default Value :"C:\Java\jdk-11\bin";C:\Windows\System32;C:\windows\system32\wbem
Description: set PATH env variable ** you will need to set the Java path
Build Environment…
Terminate a build if it's stuck
Time-out strategy: Absolute
Timeout minutes: 15
Build Steps…
Execute Windows batch command: CALL C:\jenkins\commands\JMeterCheck.bat
Post Build Actions…
Mark build Unstable on Warning: tick
Mark build Failed on Error: tick
Use global rule (select) Log_Parse_Rules_Jmeter_Verification
Trigger parameterized build on other projects…
Build Triggers
Projects to Build: DataHunter_Execute
Trigger when build is: (select) Stable
Boolean parameters
ScriptVerificationPassed: (select) True
Predefined parameters
RunCI=$RunCI
Projects to Build: DataHunter_Execute
Trigger when build is: (select) Failed
Boolean parameters
ScriptVerificationPassed: (select) False
Start all the mark59 web apps : mark59/bin/StartAllMark59WebApplications_H2_DemoMode.bat
Dashboard > DataHunter_Check > Build with Parameters > (scroll down) “Build”
All going okay, the Check - Execute - Results job sequence should run.
Note: as with the Quick Start, occasionally you may get a port clash with Trends (it will be in the Results job - just stop it as per the Quick Start)
Trends should have the new run at the front of the graph.
The Result page should have links to the generated JMeter reports. General appearance:
Create a new View called DataHunter, and add the DataHunter jobs to it.
Move the email-templates directory (in repo folder mark59-jenkins-demo\_direcoryToMoveIntoJenkinsHomeDirs) into the
C:\jenkins\jenkins_home directory.
Create job _Mailer_DataHunter as a freestyle project, and make sure it's added to the DataHunter View. Then setup the configuration using values as below. Please review Job Configuration images in the mark59-jenkins-demo\_sampleJobConfigImages folder to guide you as well.
Description:
Mails Test Results to listed recipients<br><br>
Use 'PV_CI_Results_With_History_By_View.template' as the template name for Email Template Testing<br>
Note you must run this and the DataHunter_Results jobs at least once before using Email Template Testing<br><br>
Tip1: remember to set the 'Trigger' to 'Always', sending to 'Recipient List' (in the Advanced section)<br><br>
Tip2: when doing the setup, emails may go into your Junk folder...<br><br>
Discard old builds > Max # of builds to keep : 14
Post Build Actions…
Editable Email Notification ..
Disable Extended Email Publisher: unticked
Project From: (leave blank)
Project Recipient List: add your email ** a comma delimited list in a real project
Project Reply-To List: $DEFAULT_REPLYTO
Content Type: (select) HTML (text/html)
Default Subject: DataHunter Continuous Performance and Volume Test Results
Default Content: ${SCRIPT, template="PV_CI_Results_With_History_By_View"}
Attachments: green.png, red.png, yellow.png,green.gif,red.gif,yellow.gif,noRun.jpg,aborted.png,scriptVerifyFailed.jpg
Save Job
Move the gifs in repo folder mark59-jenkins-demo\_filesToMoveInto_Mailier_DataHunter_Workspace into the
C:\jenkins\jenkins_home\workspace\_Mailer_DataHunter directory.
Run the job. All going well not much will happen - it will be green, with console output looking like this:
From the Jenkins _Mailer_DataHunter project page, select Email Template Testing
Enter PV_CI_Results_With_History_By_View.template as the Jelly/Groovy Template File Name
The Build to Test should be your latest build (eg #1 if you have only done the one build)
Hit the “Go!” button
You should see output like this (Gifs are never displayed here)
You need to know your SMTP server details.
Dashboard > Manage Jenkins > Configure System
Go to the Extended E-mail Notification panel, and enter your SMTP server and port details.
Re-run the _Mailer_DataHunter job. All going ok you should receive an email at the address you entered (check to see if your email went to your junk folder)
The output should be properly formatted with gifs.
This chapter discusses the design and implementation of the Jenkins CI server demonstrated in the previous chapter.
The Continuous Integration (CI) test is designed to include the following integrated jobs.
CI tests start with a verification job: a short-duration shakeout to verify the test scripts and environment. Generally, the verification job will run the same test plan as the main test execution, but for a shorter time. The number of users per thread group is usually also reduced, by using job parameters (the ‘Users’ parameter in the DataHunter Check job).
A successful verification job is marked by a green build and is followed by the Execute job, which starts the actual JMeter test (there’s an extra step for a distributed test). A red build for a verification job indicates a failure. Flags are set to indicate job status, which in the case of a failed verify job are passed to the downstream Execute and Results jobs and used to mark them as Failed with a red build.
Only required for Distributed tests. See the following Distributed Testing section.
The Execute job could be a peak load test, a stress test or a long-duration soak JMeter test. The configuration for a distributed test is similar to a single node test, except that a distributed test needs an additional parameter, “DistributedIPsList”, in the Execute job, which allows you to include the list of (comma separated) IP Addresses to run the test plan from. It’s important to remember that the Master instance should be listed as the first entry of the distributed IP list parameter.
Also, for a single node test (or a parameter that only needs to be known to the master), the -J option is used to define the parameter on the JMeter command line, but for a distributed test parameter the -G option has to be used (e.g. -GForceTxnFailPercent=0).
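To illustrate the difference, compare the command lines below (the test plan name and IPs are placeholders, and all other flags are omitted):

jmeter -n -t TestPlan.jmx -JForceTxnFailPercent=0
jmeter -n -t TestPlan.jmx -R 10.123.666.111,10.123.666.222 -GForceTxnFailPercent=0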
Upon successful completion of the test execution, the JMeter summary report (csv or xml output file) is used to load results into the Trend Analysis web application, where they are compared against the defined SLAs. Once the result is processed, JMeter HTML report(s) are generated, viewable via the "Transactions Report" link. In addition, there may be separate reports generated for CPU Utilisation, Memory and Datapoints, or a combined All Metrics Report, which can be accessed from the Results job.
Flags passed through the job stream to the Results job are used to set a failure if the Verify failed. These flags are also used by the "Mailer" job to produce the correct icons for the stage of the job stream that failed.
The mailer job isn't part of the DataHunter testing job flows; it is standalone. We tend to run the mailer jobs each morning to give everyone visibility of how overnight test runs went.
The email template PV_CI_Results_With_History_By_View.template (used in the Jenkins Demo) looks for jobs ending with _result (case insensitive) in the Jenkins View whose name corresponds with the end portion of the Mailer job name. The sample mailer job is named _Mailer_DataHunter, so it will look for jobs ending with _result in the DataHunter view. It uses flags (as described in the Results Job section above) to work out the status of the last run. It also provides a history of up to the previous 14 runs for the test.
A job is needed to start JMeter server instances on the remote servers required for distributed testing, before the main JMeter execution job runs (it will use different commands depending on whether you are running Linux or Windows). The Master and the remote servers need to be defined in this job, with parameters such as:
A typical distributed test would have a sequence like:
The Result job is the same as for a non-distributed test, and so is the Check job, except that it needs to trigger the remote slave servers startup job.
If you have never done it before, getting JMeter distributed testing to work can take some effort. There is some good information in the JMeter user guide, but it can't cover all the security issues etc. you may run into. For example, when we tried to set up this demo using our work desktops instead of servers, we ran into issues with the corporate anti-virus software blocking ports. When we tried using a 'master' machine connected to the corporate network via cable and a slave using a wireless connection, a common way to connect in our offices (remember when we all used to go to an actual office? thanks Covid19), the slave would not connect because it was on a different subnet (breaking an RMI rule).
Tip: if the "Execute" job fails with a "Connection timeout" error, verify:
(1) If you are using an rmi_keystore, that it has been placed in the JMeter bin folder on both the Master and Slave instances. Note that we have bypassed the rmi_keystore in our distributed tests by using -Dserver.rmi.ssl.disable=true in non-gui mode in the DataHunterDistributed_DistributedJMeterServices Jenkins job.
(2) The Java version is the same across the Master and Slaves (injectors).
(3) All required ports / firewalls are open.
The Mark59 xtras github repository folder mark59-jenkins-demo\commands has two .bat files which may be of some guidance for setting up a distributed test.
JMeterApplicationDistributedServices.bat serves to start the Distributed JMeter Services. Its configuration follows the same pattern as for the jobs in the Jenkins Demo, with similar parameters: RunCI, ScriptVerificationPassed, JmeterHome (must be the same on each server), SetPath.
The parameter DistributedServersList needs to contain a list of machine names with IP addresses for each JMeter slave (by the way, a JMeter slave does NOT have to be a Jenkins slave). Example showing the format:
SLAVEMC1:10.123.666.222,SLAVEMC2:10.123.666.333,...
The parameter MasterIP contains the IP of the ‘master’ JMeter server (where you will do reporting from). For example: 10.123.666.111
JMeterApplicationExecuteDistributedTest.bat is similar to the JMeterExecute.bat used in the Jenkins Demo to run a JMeter test. The configuration for a distributed test is similar to the single node test except that it has an additional parameter:
DistributedIPsList, which allows you to include the list of (comma separated) IP Addresses to run the test plan from, prefixed by -R. It’s important to remember that the Master Instance should be listed as the first entry of a distributed IP List parameter. Example format:
-R 10.123.666.111,10.123.666.222,10.123.666.333
Also, for a single node test (or a parameter that only needs to be known to the master), the -J option is used to define the parameter on the JMeter command line, but for a distributed test parameter the -G option has to be used (eg -GForceTxnFailPercent=0). Here is a list of the other sample parameter values for a setup to run the same test as the Jenkins Demo, as a distributed test:
ScriptVerificationPassed: ticked
RunCI: ticked
Application: DataHunterDistributed
Duration: (leave blank)
JmeterHome: C:\apache-jmeter
JmeterTestPlan:C:\mark59\mark59-datahunter-samples\test-plans\DataHunterSeleniumTestPlan.jmx
TestResultsFileName: DataHunterDistributedTestResults.csv
ExtraParam1: -GForceTxnFailPercent=0
ExtraParam2: -GStartCdpListeners=true
SetPath:"C:\Java\jdk-11\bin";C:\Windows\System32;C:\windows\system32\wbem(see Demo)
CopyResuts: unticked
CopyResutsDirectory: (leave blank)
Based on our experience setting up Mark59 JMeter/Selenium tests in AWS, the following ports may have to be opened for distributed testing:
1024-1034,49152-65535,135,445 (WMI),
1099 (Jmeter),
8081 (Datahunter),
8082 (Jenkins),
8099 (Jenkins Slave)
In addition, for Windows, you may have to enable the following in the Firewall:
Inbound Rules / File and Printer Sharing (Echo Request - ICMPv4-In)
Windows Defender Firewall\Allowed apps / Windows Management Instrumentation (WMI)
The main JMeter user guide references:
If you do not have the Chrome Browser installed, install the latest version https://www.google.com/chrome/
If you have Chrome installed, check its version ( in Chrome <click> the vertical dots on top rhs -> Help -> About Google Chrome). You should see something like:
Updates are disabled by your administrator.
Version 91.0.4472.77 (Official Build) (64-bit)
(auto updates will commonly be disabled at a corporate site)
If updates are not disabled, simply update to the latest version (<click> the vertical dots on the top right hand side -> Update Google Chrome -> Relaunch)
If you are not on the latest version (check at https://chromereleases.googleblog.com/ 'Stable Channel Update for Desktop') and updates have been disabled, you may be able to force update via https://www.google.com/chrome/ ('Download Chrome', run the ChromeSetup.exe)
Even if you cannot update your Chrome version, all is not lost as long as you are on at least a reasonably recent version of Chrome. You need to use a compatible ChromeDriver for the version of Chrome you are on, as detailed next.
Note that for performance testing at scale we have found Ungoogled Chrome useful - see the Use Chromium instead of Chrome tip.
Goto http://chromedriver.chromium.org/downloads
Download the latest Chromedriver Release relevant to your version of Chrome. Releases are currently named as chromedriver_win32.zip / chromedriver_linux64.zip / chromedriver_mac64{_type of mac}.zip.
You can check what version you have of Chrome by clicking on the ellipsis on top, right hand side of the Chrome browser window, and going to Help > About Google Chrome (the direct link is chrome://settings/help).
Unzip the downloaded file (any convenient place on your pc). It will contain a single file called chromedriver.exe / chromedriver.
Hints and Tips for MySQL installation
Upon installing MySQL you will need to do the following before building the Mark59 databases.
Sample connection below:
Upon completing the steps above, your new connection will appear on the MySQL Workbench home page as per the screenshot below. Select the "localhost_3306" connection; you will then be able to build the three Mark59 databases.
Finally, there is an issue in MySQL that surfaces when you try to upload large runs into the Mark59 mark59trendsdb database (such as a soak test). MySQL doesn’t have a ‘percentile’ function, so the standard way to calculate a percentile, as done in Mark59, makes use of a MySQL system variable called group_concat_max_len. The default value (1024) is totally inadequate, and needs to be increased to a large value. For example (using the v8.0 Win 64-bit max value):
SET PERSIST group_concat_max_len = 18446744073709551615; -- Win
SET group_concat_max_len = 18446744073709551615; -- Linux
SHOW VARIABLES LIKE 'group_concat_max_len';
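For context, the GROUP_CONCAT-based percentile idiom looks something like the sketch below (illustrative only - the table and column names are made up, and this is not the exact query Mark59 runs). The full ordered list of response times is built as one comma-separated string, which is why group_concat_max_len must be large enough to hold every result for a transaction:

SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(
         GROUP_CONCAT(txn_ms ORDER BY txn_ms SEPARATOR ','),   -- every result, ordered, in one string
         ',', CEILING(0.90 * COUNT(*))), ',', -1) AS txn_90th  -- keep the first 90% of entries, then take the last of those
FROM sample_txn_times;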
A large value for 'group_concat_max_len' is set in the MySQL mark59trendsdb creation sql (mark59/databaseScripts/MySQL/MYSQLmark59trendsDataBaseCreation.sql), but you may need to check the value in use is set correctly after you run it (on Windows, a value set too high appears simply to ‘truncate’ to the highest allowed value).
An alternative approach mentioned in a few StackOverflow posts is to edit the underlying MySQL my.cnf (Linux) / my.ini (Win) files:
[mysqld]
group_concat_max_len=18446744073709551615
See your version’s MySQL user guide for further details. Eg, for version 8.0 it is:
MySQL 8.0 Reference Manual :: 5.1.8 Server System Variables
This estimate of resource usage is included to give a very broad idea of the scale of testing that can be executed on a single Windows injector running JMeter with Selenium scripts. For reference, the maximum load we have executed using purely Selenium was a test where seven load injectors were used in a distributed test, running scripts of similar length, complexity and structure to the test described here. We see no reason why many more injectors (perhaps running on cloud) could not be strung together for testing at truly large scale.
We have noticed that CPU usage differs hugely depending on the nature of the application under test. This particular test runs against a Java application residing on in-house servers running Docker instances, with multiple requests and resources loaded on each page. Client-side (browser) JavaScript is heavily used.
For many of the tests we run, large capacity machines are required. The specs of the machine this test was run on are:
Operating System : Windows 2016 Server VM
CPU : Intel(R) Xeon(R) E7-2870 @ 2.40GHz, 16 CPU Cores (Virtual)
RAM : 24 GB
Disk : C drive 100GB, D drive 100 GB
The test described here has three scripts of different lengths running at different rates:
Script | # txns in script (approx = # pages) | # iterations in peak hr (business flows) | # threads (“users”) | script runtime | approx concurrent ave active browsers * | total # http requests per iteration (all types) | Mb transferred per iteration
1 | 25 | 100 | 6 | 1m 15s | 2 | 1090 | 0.6
2 | 17 | 400 | 20 | 1m 15s | 8 | 797 | 0.7
3 | 30 | 1150 | 74 | 2m 20s | 45 | 1177 | 1.2
* estimated as total script runtime in minutes ( = # iterations x script runtime in min ) / 60 (the peak hour in minutes)
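For example, for script 3: 1150 iterations x 2m 20s (approx. 2.33 min) / 60 ≈ 45 average concurrent browsers, as per the table.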
Total number of transactions / pages processed in peak hour = 25*100 + 17*400 + 30*1150 = 43,800
Total number of http requests processed in peak hour = 1,777,000
Total number of business flows (scripts) executed in peak hour = 1,650
Browser : ungoogled Chromium v93
Average CPU utilisation during test : 38%
Transaction times for tests seem to start degrading at around 50 - 60% CPU, so 38% gives plenty of leeway. Memory has never been relevant as a limiting factor. By the way, this is a test we have just been able to move from being a distributed test to a single injector test; recent Chrome/Chromium releases appear to be using CPU more efficiently.