WSO2 Identity Server performance artifacts are used to continuously test the performance of the Identity Server.
These performance test scripts use Apache JMeter to run the tests with different concurrent user counts against different Identity Server versions.
The deployment is automated using AWS CloudFormation.
Artifacts in the master branch can be used with WSO2 Identity Server version 5.10.0. For previous versions, please use the branches listed below.
At the moment, we support two deployment patterns:
- Single node deployment.
- Two node cluster deployment.
WSO2 Identity Server is set up on an AWS EC2 instance. An AWS RDS instance is used to host the MySQL user store and identity databases.
JMeter version 3.3 is installed on a separate node, which is used to run the test scripts and gather results from the setup.
You can run IS Performance Tests from the source using the following instructions.
- Maven 3.5.0 or later
- AWS CLI - Please make sure to configure the AWS CLI and set the output format to `json` (a minimal setup sketch follows this list).
- Apache JMeter 3.3 setup tarball.
- WSO2 IS server zip file.
- Python 3.5
- Jinja2 2.11.1
- numpy
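The following is a minimal sketch of preparing the client machine for the prerequisites above; the exact commands and versions are assumptions, so adjust them to your environment.

```bash
# Configure the AWS CLI; when prompted, set the default output format to "json".
aws configure

# Install the Python dependencies listed above for the result-processing scripts.
pip3 install Jinja2==2.11.1 numpy
```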
- Clone this repository.
git clone https://github.com/wso2/performance-is
- Check out the master branch for the latest Identity Server version, or the relevant version tag for previous releases.
cd performance-is
git checkout v5.8.0
- Build the artifacts using Maven.
mvn clean install
- Based on your preferred deployment, navigate to the `single-node` directory or the `two-node-cluster` directory.
- Run the `start-performance.sh` script. It will take around 15 hours to complete the test round with default settings. Therefore, you might want to use `nohup` (a `nohup` variant is sketched after the basic command below). Following is the basic command.
./start-performance.sh -k is-perf-test.pem -a ******* -s ******* -c is-perf-cert -n wso2IS.zip -j apache-jmeter-3.3.tgz -- -d 10 -w 2
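If you run the script under `nohup`, the roughly 15-hour test round survives an SSH disconnect. This is just the basic command above wrapped in `nohup`; the log file name is an assumption.

```bash
# Same invocation as above, detached from the terminal; output goes to an
# arbitrarily named log file so the run can be monitored with 'tail -f'.
nohup ./start-performance.sh -k is-perf-test.pem -a ******* -s ******* -c is-perf-cert \
    -n wso2IS.zip -j apache-jmeter-3.3.tgz -- -d 10 -w 2 > start-performance.out 2>&1 &
```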
See usage:
./start-performance.sh -k <key_file>
-c <certificate_name> -j <jmeter_setup_path>
[-n <IS_zip_file_path>]
[-u <db_username>] [-p <db_password>] [-e <db_instance_type>] [-s <db_snapshot_id>] [-r <concurrency>]
[-i <wso2_is_instance_type>] [-b <bastion_instance_type>] [-t <keystore_type>] [-m <db_type>]
[-l <is_case_insensitive_username_and_attributes>]
[-w <minimum_stack_creation_wait_time>] [-h]
-k: The Amazon EC2 key file to be used to access the instances.
-c: The name of the IAM certificate.
-y: The token issuer type.
-q: The user tag of the user who triggered the Jenkins build.
-r: The concurrency range (50-500, 500-3000, or 50-3000).
-j: The path to the JMeter setup.
-n: The IS server zip file.
-u: The database username. Default: wso2carbon.
-p: The database password. Default: wso2carbon.
-s: The database snapshot ID. Default: -.
-e: The database instance type. Default: db.m6i.2xlarge.
-i: The instance type used for IS nodes. Default: c6i.xlarge.
-b: The instance type used for the bastion node. Default: c6i.xlarge.
-w: The minimum time to wait in minutes before polling for the CloudFormation stack's CREATE_COMPLETE status.
Default: 10 minutes.
-g: Number of IS nodes.
-t: Keystore type. Default: PKCS12.
-m: Database type. Default: mysql.
-l: Case insensitivity of the username and attributes. Default: false.
-h: Display this help and exit.
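The options above reflect the version documented here; the authoritative list for your checked-out version can always be printed with the script's own help flag.

```bash
# Print the usage and option descriptions for the current script version.
./start-performance.sh -h
```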
- Validate the CloudFormation template with the given parameters, using the AWS CLI (the sketch after this list illustrates the kind of calls involved).
- Run the CloudFormation template to create the deployment and wait until the stack creation completes.
- Extract the following using the AWS CLI.
- Bastion node public IP. (Used as the JMeter client)
- Private IP of the WSO2 IS instance.
- RDS instance hostname.
- Set up the WSO2 IS server on the instance and create the databases.
- Copy the required files, such as the key file, the performance artifacts, and the JMeter setup, to the bastion node.
- SSH into the bastion node and execute the setup-bastion.sh script, which will set up the additional components in the deployment.
- SSH into the bastion node and execute the run-performance-test.sh script, which will run the tests and collect the results.
- Download the test results from the bastion node.
- Create the summary CSV and Markdown files.
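The steps above are automated by `start-performance.sh` itself; the sketch below only illustrates the kind of AWS CLI and SSH calls involved. The stack name, template file, output keys, user name, and remote paths are assumptions and will differ from what the actual scripts use.

```bash
# Validate the CloudFormation template (the template file name is an assumption).
aws cloudformation validate-template --template-body file://single-node.yaml

# Wait until the stack reaches CREATE_COMPLETE (the stack name is an assumption).
aws cloudformation wait stack-create-complete --stack-name is-perf-test

# Extract outputs such as the bastion public IP, the WSO2 IS private IP, and the
# RDS hostname from the stack outputs.
aws cloudformation describe-stacks --stack-name is-perf-test \
    --query "Stacks[0].Outputs" --output json

# Copy the key file, performance artifacts, and JMeter setup to the bastion node,
# then run the setup and test scripts there (paths and user name are assumptions).
scp -i is-perf-test.pem is-perf-test.pem apache-jmeter-3.3.tgz ubuntu@<bastion_public_ip>:
ssh -i is-perf-test.pem ubuntu@<bastion_public_ip> "./setup-bastion.sh && ./run-performance-test.sh"
```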
We have added a performance analysis feature to the project, which allows you to generate performance plots based on CSV data files. This feature provides insights into response times for different deployment types and scenarios. By analyzing these performance plots, you can identify performance bottlenecks and make informed optimizations.
To use this feature, we have added the `performance_plots.py` script to the project. This script reads CSV data files, filters the data based on concurrency ranges, and generates performance plots using the matplotlib library. The generated plots are saved in the `output` folder.
Additionally, we have updated the README.md file in the `performance_analysis` directory to provide detailed instructions on how to use the script, customize the settings, and understand the input CSV data format.
To get started with the performance analysis feature, please refer to the `performance_analysis/README.md` file for instructions and examples.
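As a rough sketch only: the exact arguments and expected CSV layout are documented in `performance_analysis/README.md`, and the invocation below assumes the script is run from that directory and picks up the CSV files on its own.

```bash
# Assumed invocation: install the plotting dependency and run the script; the
# generated plots are written to the 'output' folder as described above.
cd performance_analysis
pip3 install matplotlib
python3 performance_plots.py
```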
If you need to run the performance tests in legacy mode, please use the legacy-mode branch. Legacy mode includes the single-node and two-node deployments with the previous test flows.
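For example, to switch to legacy mode from a clone of this repository:

```bash
git checkout legacy-mode
```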