To get reliable measurements when performing load testing, the following must be taken into consideration:
- The test environment should be dedicated to load tests only.
- Routers and firewalls must not constitute bottlenecks or interpret load tests as Denial of Service attacks.
Because of the above, the load client should be physically positioned as close as possible to the Qlik Sense® Enterprise deployment under test.
To benchmark the capacity of a Qlik Sense Enterprise deployment in terms of number of clicks per second or response times, load test scenarios should resemble real usage as closely as possible. When creating scenarios, it is therefore important to select realistic inter-departure times (that is, think times between clicks) for the requests from virtual users. Think times are often set too short and do not reflect the behavior of real users, who might, for example, perform their analysis while in a meeting or on a phone call, or for some other reason need more time between actions than initially expected. This is especially important to consider when simulating large numbers of users, as the actions and think times greatly impact the load and thereby the number of users that the system can accommodate without becoming saturated.
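In a Gopherciser scenario, think times are modeled as actions placed between the clicks. As a sketch (the exact action settings are documented in Setting up load scenarios; the numbers here are illustrative assumptions, not recommendations), a randomized think time between two clicks might look like:

```json
{
  "action": "thinktime",
  "settings": {
    "type": "uniform",
    "mean": 35,
    "dev": 25
  }
}
```

A distribution around a realistic mean avoids the overly short, fixed delays that make a simulated user far more aggressive than a real one.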
Another challenging task is to create an average scenario that replicates the load generated by many users in total when scaled up. The load can be tuned by changing:
- The number of actions in a load test scenario
- The inter-departure times between adjacent actions
- The number of concurrent virtual users
The first two bullets are covered by the design of the load test scenario, whereas the number of concurrent virtual users can be tuned during the load testing session.
The performance testing is based on load scenarios, which are sequences of actions carried out by virtual Qlik Sense Enterprise users.
A load scenario is defined in a JSON file and can be executed sequentially or in parallel with other load scenarios to simulate a realistic user scenario that can be used to investigate the performance of a Qlik Sense Enterprise deployment.
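As an illustrative sketch (not a complete script — the available sections and their defaults are described in Setting up load scenarios, and the server and app names here are placeholders), the skeleton of such a JSON file looks roughly like:

```json
{
  "settings": {
    "logs": { "filename": "logs/scenario.log" }
  },
  "connectionSettings": {
    "mode": "ws",
    "server": "myserver.example.com",
    "port": 19076
  },
  "scheduler": {
    "type": "simple",
    "settings": {
      "executiontime": -1,
      "iterations": 1,
      "rampupdelay": 1.0,
      "concurrentusers": 1
    }
  },
  "scenario": [
    { "action": "openapp", "settings": { "appmode": "name", "app": "myapp" } }
  ]
}
```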
```
gopherciser [command]
```

Commands:

- `execute` (or `x`): Run a load scenario towards a Qlik Sense Enterprise deployment.
- `help`: Show the help.
- `objdef` (or `od`): Export and validate object definitions files.
- `script` (or `s`): Execute script command.
- `version` (or `ver`): Show the version information.
- `completion`: Generate command line completion script.

Flags:

- `-h`, `--help`: Show the help for a command (`gopherciser [command] --help`).
```
gopherciser completion [bash|zsh|fish|powershell]
```

Run `gopherciser completion --help` and follow the instructions to install command line completion for your shell.
```
gopherciser execute [flags]
gopherciser x [flags]
```

Flags:

- `-c`, `--config string`: Load the specified scenario setup file.
- `--debug`: Log debug information.
- `-d`, `--definitions`: Custom object definitions and overrides.
- `-h`, `--help`: Show the help for the `execute` command.
- `--logformat string`: Set the specified log format. The log format specified in the scenario setup file is used by default. If no log format is specified, `tsvfile` is used.
  - `0` or `tsvfile`: TSV file
  - `1` or `tsvconsole`: TSV console
  - `2` or `jsonfile`: JSON file
  - `3` or `jsonconsole`: JSON console
  - `4` or `console`: Console
  - `5` or `combined`: Combined (TSV file + JSON console)
  - `6` or `no`: Default logs and status output turned off.
  - `7` or `onlystatus`: Default logs turned off, but status output turned on.
- `--metricslevel int`: Set level of Prometheus metrics to export/expose when Gopherciser is running. 0 - default off, 1 - Pull, 2 - Push without api, 3 - Push with api.
- `--metricstarget string`: (Prometheus only) Depends on metricslevel > 0. For pull, needs to be an int for port; for push, the full target URL.
- `--metricslabel string`: (Prometheus PUSH only) A label (Prometheus job) to be used when pushing metrics to remote Prometheus.
- `--metricsgroupingkey key=value`, `-g key=value`: (Prometheus PUSH only) This flag, which can be supplied multiple times, sets Prometheus grouping keys (in key=value format).
- `--profile string`: Start the specified profiler.
  - `1` or `cpu`: CPU
  - `2` or `block`: Block
  - `3` or `goroutine`: Goroutine
  - `4` or `threadcreate`: Threadcreate
  - `5` or `heap`: Heap
  - `6` or `mutex`: Mutex
  - `7` or `trace`: Trace
  - `8` or `mem`: Mem
- `--regression`: Log data needed to run regression analysis. Note: Do not log regression data when testing performance.
- `-s`, `--set`: Override a value in script with key.path=value. See Using script overrides for further explanation.
- `--summary string`: Set the type of summary to display after the test run. Defaults to `simple` for minimal performance impact.
  - `0` or `undefined`: Simple, single-row summary
  - `1` or `none`: No summary
  - `2` or `simple`: Simple, single-row summary
  - `3` or `extended`: Extended summary that includes statistics on each unique combination of `action`, `label` and `app GUID`
  - `4` or `full`: Same as `extended`, but with statistics on each unique combination of `method` and `endpoint` added
  - `5` or `file`: Saves basic counters to a file `summary.json`.
- `-t`, `--traffic`: Log traffic information. Note: This should only be used for debugging purposes as traffic logging is resource-consuming.
- `-m`, `--trafficmetrics`: Log metrics information.
Exit codes:

- `0`: Execution OK
- `1`-`127`: Number of errors during the execution (`127` means 127 errors or more)
- `128`: Error during the execution (ExitCodeExecutionError)
- `129`: Error when parsing the JSON config (ExitCodeJSONParseError)
- `130`: Error when validating the JSON config (ExitCodeJSONValidateError)
- `131`: Error when resolving the log format (ExitCodeLogFormatError)
- `132`: Error when reading the object definitions (ExitCodeObjectDefError)
- `133`: Error when starting the profiling (ExitCodeProfilingError)
- `134`: Error when starting Prometheus (ExitCodeMetricError)
- `135`: Error when interacting with host OS (ExitCodeOsError)
- `136`: Error when using incorrect summary type (ExitCodeSummaryTypeError)
- `137`: Error during test connection (ExitCodeConnectionError)
- `138`: Error during get app structure (ExitCodeConnectionError)
- `139`: Error when missing parameter (ExitCodeMissingParameter)
- `140`: Process was force quit (ExitCodeForceQuit)
- `141`: Error count exceeded `maxerrors` setting
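These exit codes make it easy to gate a CI pipeline on the outcome of a run. The sketch below shows a hypothetical wrapper function (only the code ranges come from the table above; the function itself is not part of Gopherciser):

```shell
# Hypothetical helper that maps a Gopherciser exit code to a short
# human-readable outcome, based on the exit code table above.
interpret_exit_code() {
    code="$1"
    if [ "$code" -eq 0 ]; then
        echo "execution OK"
    elif [ "$code" -ge 1 ] && [ "$code" -le 127 ]; then
        echo "$code error(s) during execution"
    elif [ "$code" -eq 129 ]; then
        echo "JSON config parse error"
    elif [ "$code" -eq 140 ]; then
        echo "process was force quit"
    else
        echo "execution failed with code $code"
    fi
}

# Example usage (after a real run you would pass "$?" from
# "./gopherciser x -c scenario.json" instead of a literal):
interpret_exit_code 0    # → execution OK
interpret_exit_code 3    # → 3 error(s) during execution
```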
```
gopherciser objdef [sub-commands]
gopherciser od [sub-commands]
```

Sub-commands:

- `generate`: Generate an object definitions file from the default values.
- `validate`: Validate the object definitions in a definitions file.

`generate` command flags:

- `-d`, `--definitions`: (mandatory) Name of the definitions file to create.
- `-f`, `--force`: Overwrite an existing definitions file.
- `-h`, `--help`: Show the help for the `generate` command.
- `-o`, `--object strings`: (optional) List of objects to include in the definitions file. Defaults to all.

`validate` command flags:

- `-d`, `--definitions`: (mandatory) Name of the definitions file to validate.
- `-h`, `--help`: Show the help for the `validate` command.
- `-v`, `--verbose`: Display a summary of the validation.

For more information on how to use the `objdef` command, see Supporting extensions and overriding defaults.
```
gopherciser script [sub-commands] [flags]
gopherciser s [sub-commands] [flags]
```

Sub-commands:

- `connect` (or `c`): Test the connection using the settings provided in the config file.
- `structure` (or `s`): Get the app structure using the settings provided in the config file.
- `validate` (or `v`): Validate a scenario script.
- `template` (or `tmpl` or `t`): Generate a template scenario script.
`connect` command flags:

- `-c`, `--config string`: Connect using the specified scenario config file.
- `-h`, `--help`: Show the help for the `connect` command.
- `-s`, `--set`: Override a value in script with key.path=value. See Using script overrides for further explanation.
`structure` command flags:

- `-c`, `--config string`: Connect using the specified scenario config file.
- `--debug`: Log debug information.
- `-h`, `--help`: Show the help for the `structure` command.
- `--logformat string`: Set the specified log format. The log format specified in the scenario setup file is used by default. If no log format is specified, `tsvfile` is used.
  - `0` or `tsvfile`: TSV file.
  - `1` or `tsvconsole`: TSV console.
  - `2` or `jsonfile`: JSON file.
  - `3` or `jsonconsole`: JSON console.
  - `4` or `console`: Console.
  - `5` or `combined`: Combined (TSV file + JSON console).
  - `6` or `no`: Default logs and status output turned off.
  - `7` or `onlystatus`: Default logs turned off, but status output turned on.
- `-o` or `--output string`: Script output folder. Defaults to working folder.
- `-r` or `--raw`: Include raw properties in the structure.
- `--summary string`: Set the type of summary to display after the test run. Defaults to `simple`.
  - `0` or `undefined`: Simple summary, includes the number of objects and warnings and lists all warnings.
  - `1` or `none`: No summary.
  - `2` or `simple`: Simple summary, includes the number of objects and warnings and lists all warnings.
  - `3` or `extended`: Extended summary, includes a list of all objects in the structure.
  - `4` or `full`: Currently the same as the `extended` summary, includes a list of all objects in the structure.
  - `5` or `file`: Saves basic counters to a file `summary.json`.
- `-t`, `--traffic`: Log traffic information.
- `-m`, `--trafficmetrics`: Log metrics information.
- `-s`, `--set`: Override a value in script with key.path=value. See Using script overrides for further explanation.
- `--setfromfile`: Override values from file where each row is path/to/key=value.
`validate` command flags:

- `-c`, `--config string`: Load the specified scenario setup file.
- `-h`, `--help`: Show the help for the `validate` command.
- `-s`, `--set`: Override a value in script with key.path=value. See Using script overrides for further explanation.
- `--setfromfile`: Override values from file where each row is path/to/key=value.
`template` command flags:

- `-c`, `--config string`: (optional) Create the specified scenario setup file. Defaults to `template.json`.
- `-f`, `--force`: Overwrite existing scenario setup file.
- `-h`, `--help`: Show the help for the `template` command.
The config file and overrides file can be piped from stdin. If no config file is set, stdin is assumed to be the config file; if a config file is set, stdin is assumed to be the overrides file.
This would execute the sheetchanger example from stdin:
```
cat ./docs/examples/sheetChangerQlikCore.json | ./gopherciser x
```
This would execute overrides from stdin:
```
cat overrides.txt | ./gopherciser x -c ./docs/examples/sheetChangerQlikCore.json
```
Advanced example: use `jq` to disable all `sheetchanger` actions, then run the sheet changer example script; this would now only do the `openapp` action:

```
jq '(.scenario[] | select(.action=="sheetchanger") | .settings.disabled) = true' ./docs/examples/sheetChangerQlikCore.json | ./gopherciser x
```
A script override replaces the value pointed to by a path to its key. If the key doesn't exist in the script, there will be an error, even if the value is valid according to the config.
The syntax is path/to/key=value. A common thing to override is the settings of the simple scheduler.
```
"scheduler": {
    "type": "simple",
    "settings": {
        "executiontime": -1,
        "iterations": 1,
        "rampupdelay": 1.0,
        "concurrentusers": 1
    }
}
```
`scheduler` is in the root of the JSON, so the path to the key of `concurrentusers` would be `scheduler/settings/concurrentusers`. To override concurrent users from the command line:
```
./gopherciser x -c ./docs/examples/sheetChangerQlikCore.json -s 'scheduler/settings/concurrentusers=2'
```
Overriding a string, such as the server name, requires the value to be wrapped in double quotes. For example, to override the server:
```
./gopherciser x -c ./docs/examples/sheetChangerQlikCore.json -s 'connectionSettings/server="127.0.0.1"'
```
Do note that the path is case sensitive. It needs to be `connectionSettings/server`, as `connectionsettings/server` would try, and fail, to add a new key called `connectionsettings`.
Overrides can also be used with more advanced paths. If the position of `openapp` in `scenario` is known, we could replace, for example, the app opened. Assuming `openapp` is the first action in `scenario`:
```
./gopherciser x -c ./docs/examples/sheetChangerQlikCore.json -s 'scenario/[0]/settings/app="mynewapp"'
```
It can even replace an entire JSON object, such as the `connectionSettings`, with one replace call:
```
./gopherciser x -c ./docs/examples/sheetChangerQlikCore.json -s 'connectionSettings={"mode":"ws","server":"127.0.0.1","port":19076}'
```
Overrides can also be defined in a file. Each row in the file should be in the same format as when using overrides from the command line, but should not be wrapped in single quotes, as the quoting is only needed for command line interpretation. Using the same overrides as above, the file could look like the following:
```
connectionSettings/server="1.2.3.4"
scenario/[0]/settings/app="mynewapp"
connectionSettings={"mode":"ws","server":"127.0.0.1","port":19076}
```
Overrides are executed from top to bottom; as such, the third line will override the `server` set by the first line, and the script will execute towards `127.0.0.1:19076`.
Any command line overrides will be executed after the overrides defined in file.
A log file is recorded during each test execution. The `logs.filename` setting in the `settings` section of the load test script specifies the name of and the path to the log file (see Setting up load scenarios). If a file with the specified filename already exists, a number is appended to the filename (for example, if the file `xxxxx.yyy` already exists, the log file is stored as `xxxxx-001.yyy`).
The contents of the log file differ depending on the type of logging selected. Examples of rows that typically can be found in the log file include:

- `result`: The result of an action (complete with timestamp, response time and information whether or not the action was successful).
- `info`: Information related to the test execution (for example, the total number of errors, actions and requests during the test execution).
- `error`: Information related to errors during the test execution.
The test results (that is, log files) can be analyzed using the Scalability Results Analyzer.
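For a quick sanity check before loading the results into an analysis tool, a TSV log can also be inspected directly from the shell. The snippet below is a sketch using a fabricated, simplified log excerpt — the real log has more columns, so the field position of the row type must be adjusted to the actual layout:

```shell
# Create a fabricated, simplified TSV log excerpt (illustration only;
# real Gopherciser logs contain more columns than this).
printf '%s\t%s\t%s\n' \
    '2024-01-01T10:00:00Z' 'result' 'openapp succeeded' \
    '2024-01-01T10:00:05Z' 'info' 'total requests: 42' \
    '2024-01-01T10:00:07Z' 'error' 'request timed out' > sample-log.tsv

# Count error rows (field 2 holds the row type in this simplified layout).
awk -F'\t' '$2 == "error" { n++ } END { print n+0 }' sample-log.tsv   # → 1
```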
Gopherciser is able to produce regression logs consumed by the regression analyzer in Qlik Sense Enterprise Scalability Tools (QSEST). Enable regression logging with the `--regression` flag or in the script `settings` (see `settings.logs.regression` in settingup.md).
Regression logs are written to a separate file with a `.regression` filename extension, in the same directory and with the same base name as the test results.
The regression log contains a snapshot of the subscribed Qlik Sense objects after each action in the scenario. The regression analyzer in QSEST can then compare these snapshots to find any differences. Typically, you run the same script with regression logging enabled towards two versions of the same app. Then you use the regression analyzer in QSEST to gain insight into how the app has changed.
Note: Do not enable regression logging when running performance tests. The regression logging introduces a delay after each action in the executed scenario.
Note: With regression logging enabled, the scheduler is implicitly set to execute the scenario as one user for one iteration.
To capture the full end user experience, manual measurements are needed. A web browser in combination with an optional measurement method can be used to get a snapshot of the full response times including rendering of visualizations etc.
Perform the actions defined in the load test scenario and measure the time for each action to complete (using, for example, Fiddler). To measure the user-perceived response times under specific load, perform the measurements while the load test scenario is executed.
These are the current limitations in Gopherciser:
- Not supported:
- Variance waterfall chart
- P&L pivot chart object
- Trellis container extension
- Chart suggestions (that is, auto-charts) are supported, but only if the objects were created with Qlik Sense Enterprise June 2020 or later. Auto-chart objects created with earlier versions have to be manually updated in your app.
- Pivot table:
  - The only supported selection type is `randomfromall` (values are randomly selected from all values in the table)
- Map:
- Selections can only be made in the first layer (that is, layer 0)
- Visualization bundle:
- Selections are not fully supported in the Heatmap chart.
- Selections not supported in Grid chart.
- Dashboard bundle:
- Changing variables using variable input not supported.
- Selections done by animator not supported.
- Selections done using date picker not supported.