Python scripts for a generic performance testing infrastructure using Locust.

Overview

TODOs

  • Reference to published paper or online version of it
  • loadtest_plotter.py: Cleanup and reading data from files
  • ARS_simulation.py: Cleanup, documentation and control workloads and parameters of the simulation model through CLI
  • locust-parameter-variation.py: Cleanup and Documentation
  • Move the files into subfolders (Executors, Load Testers, Evaluators, Systems under Test)

Locust Performance Testing Infrastructure

In [1] we introduced a generic performance testing infrastructure and used it in an industrial case study. Our idea is to have decoupled components, Python scripts in our case, that together allow us to:

  1. reproducibly execute a load testing tool with a set of parameters for a particular experiment,
  2. evaluate the performance measurements assisted by visualizations or automatic evaluators.

Generally, we have four types of components in our infrastructure:

  • Executors: execute a particular Load Tester as long as the Load Tester provides a CLI or an API;
  • Load Testers: execute the load test, parametrized with values given by an Executor. They have to output a logfile containing the response times;
  • Evaluators: postprocess the logfile and, for example, plot the response times;
  • Systems under Test (SUTs): the target systems we want to test. Usually, these will be external systems, e.g., web servers. In our case, we built software that simulates the behavior of a real system, in order to provide the means for others to roughly reproduce our experiments.

More details about our generic performance testing infrastructure can be found in our paper [1].

This repository contains the aforementioned Python scripts:

  • Executors:
    • executor.py: executes Locust with a set of parameters;
    • locust-parameter-variation.py: executes Locust and keeps increasing the load. This is similar to Locust's Step Load Mode; however, our approach keeps increasing the number of clients as long as the ARS complies with its real-time requirements, in order to find the saturation point of the ARS (a minimal sketch of this idea follows this list).
  • Load Testers:
    • locust_tester.py: contains the Locust-specific code that performs the actual performance test. For demonstration purposes, this script tests ARS_simulation.py. It outputs a locust_log.log;
    • locust_multiple_requests: an enhanced version of locust_tester that sends additional requests to generate more load;
    • locust_teastore.py: performs load testing against TeaStore, or our simulated TeaStore.
  • Evaluators:
    • loadtest_plotter.py: reads the locust_log.log and plots the response times, together with additional metrics, to visualize whether the real-time requirements of EN 50136 are met.
  • SUTs:
    • Alarm Receiving Software Simulation (ARS_simulation.py): simulates an industrial ARS based on data measured in the production environment of the GS company group.
    • TeaStore (teastore_simulation.py): simulates TeaStore based on a predictive model generated in a lab environment.
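
The core idea behind locust-parameter-variation.py can be illustrated with a small sketch: repeatedly run Locust in headless mode with an increasing number of clients and stop once the measured response times violate a real-time threshold. This is only a sketch, not the actual script: the Locust CLI flags used (--headless, -u, -r, -t) are standard, but the threshold, step size, run time, and log parsing below are illustrative assumptions.

    # Sketch of the saturation search idea; not the actual locust-parameter-variation.py.
    # Threshold, step size, and run time are illustrative values only.
    import subprocess

    THRESHOLD_MS = 10_000   # hypothetical real-time requirement
    STEP = 10               # clients added per load step
    RUN_TIME = "5m"         # duration of each load step
    LOGFILE = "locust_log.log"


    def run_locust(users: int) -> None:
        """Run one headless Locust load step with the given number of clients."""
        subprocess.run(
            ["locust", "-f", "locust_tester.py", "--headless",
             "-u", str(users), "-r", str(users), "-t", RUN_TIME]
        )


    def max_response_time_ms() -> float:
        """Return the largest response time found in the logfile written by the load tester."""
        times = [0.0]
        with open(LOGFILE) as log:
            for line in log:
                if "Response time" in line:
                    # Lines look like "... Response time 123.4 ms ..."
                    times.append(float(line.split("Response time")[1].split("ms")[0]))
        return max(times)


    if __name__ == "__main__":
        users = STEP
        while True:
            open(LOGFILE, "w").close()  # start each load step with an empty logfile
            run_locust(users)
            if max_response_time_ms() > THRESHOLD_MS:
                print(f"Saturation point reached at approximately {users} clients")
                break
            users += STEP

The actual script evaluates compliance with the real-time requirements of the EN 50136 in more detail; the sketch only captures the increase-the-load-until-saturation loop.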

Instructions to reproduce results in our paper

Quick start

  • Clone the repository;
  • run pip3 install -r requirements.txt;
  • In the file ARS_simulation.py make sure that the constant MASCOTS2020 is set to True.
  • open two terminal shells:
    1. run python3 ARS_simulation.py in one of them;
    2. run python3 executor.py in the other.
  • to stop the test, terminate the executor.py script;
  • run python3 loadtest_plotter.py, pass the locust_log.log, and see the results. :) A minimal sketch of what the plotter does is shown right after this list.
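
For orientation, here is a minimal sketch of what an Evaluator such as loadtest_plotter.py does: it parses the "Response time ... ms" lines from locust_log.log and plots them. It assumes matplotlib is available; the figure layout is illustrative and not identical to the plots produced by the real script.

    # Minimal Evaluator sketch: parse response times from locust_log.log and plot them.
    # Assumes matplotlib is available; the plot layout is illustrative only.
    import matplotlib.pyplot as plt

    response_times = []
    with open("locust_log.log") as log:
        for line in log:
            if "Response time" in line:
                # Lines look like "... Response time 123.4 ms ..."
                value = line.split("Response time")[1].split("ms")[0]
                response_times.append(float(value))

    plt.plot(response_times)
    plt.xlabel("Request number")
    plt.ylabel("Response time [ms]")
    plt.title("Response times from locust_log.log")
    plt.show()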

Details

Using the performance testing infrastructure available in this repository, we conducted performance tests on a real-world alarm system provided by the GS company. To make our results reproducible without access to that particular alarm system, we built software that simulates the Alarm Receiving Software. The simulation model uses variables we identified as relevant; we also performed measurements in the production environment to initialize these variables correctly.

To reproduce our results, follow the steps in the Section "Quick start". The scripts are already preconfigured to simulate a realistic workload, inject faults, and automatically recover from them. The recovery is performed after the time the real fault management mechanism requires.

If you follow the steps and, for example, let the test run for about an hour, you will get results similar to the ones in the folder "Tests under Fault".

Results after running our scripts for about an hour:

[Figure: Results]


Keep in mind that we use a simulated ARS here; in our paper we present measurements performed with a real system, so the results reproduced with the code here differ slightly.

Nonetheless, the overall observations we made in our paper are in fact reproducible.


Instructions on how to adapt our performance testing infrastructure to other uses

After cloning the repository, take a look at locust_tester.py. It is basically an ordinary Locust script that sends requests to the target system and measures the response time when the response arrives. Our locust_tester.py is special because:

  • we implemented a custom client instead of using the default;
  • we additionally log the response times to a logfile instead of using the .csv files Locust provides.

So, write a performance test using Locust, following the instructions of the Locust developers on how to write a Locust script. The only thing to keep in mind is that your Locust script has to write the measured response times to a logfile in the same way our script does. Use logger.info("Response time %s ms", total_time) to log the response times.
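
As a starting point, the following is a minimal sketch of a Locust script that logs response times in this format. It uses a plain HttpUser and Locust's request event rather than our custom client; the host, endpoint, wait times, and log layout are placeholder assumptions for your own system under test, and Locust 2.x is assumed.

    # Minimal sketch of a Locust script that writes "Response time <x> ms" lines
    # to locust_log.log. Assumes Locust 2.x; host, endpoint, and wait times are
    # placeholders. Our own locust_tester.py uses a custom client instead.
    import logging

    from locust import HttpUser, task, between, events

    # Use a dedicated file logger, since Locust configures the root logger itself.
    logger = logging.getLogger("loadtest")
    handler = logging.FileHandler("locust_log.log")
    handler.setFormatter(logging.Formatter("[%(asctime)s] %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)


    @events.request.add_listener
    def log_response_time(request_type, name, response_time, response_length, **kwargs):
        # Locust reports response_time in milliseconds.
        logger.info("Response time %s ms", response_time)


    class DemoUser(HttpUser):
        wait_time = between(1, 2)
        host = "http://localhost:8080"  # placeholder: your system under test

        @task
        def query_sut(self):
            self.client.get("/")

A script like this can then be run through the Executor exactly as described below.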

When you have your Locust script ready, execute it with python3 executor.py, passing the path to your script as an argument. When you want to finish the load test, terminate it with Ctrl + C.

Use python3 executor.py --help to get additional information.

Example call:

% python3 executor.py locust_scripts/locust_tester.py

After that, plot your results:

% python3 loadtest_plotter.py
Path to the logfile: locust_log.log