What is Load Testing?

Load testing is testing that checks how systems function under a heavy volume of concurrent virtual users performing transactions over a certain period of time. In other words, load testing checks how systems handle heavy load volumes. There are several open-source load testing tools, with JMeter being the most popular one.

Here I will go through the most important best practices for load testing your site, both for everyday traffic and for peak traffic events.

  Test Early and Test Often
  • Plan ahead to fix failures before peak load is expected

It’s never too early to start load testing, but to ensure you have time to test all scenarios, it’s best to allow at least 90 days. This way you’re not just checking a box to say “I did it” — you’re actually allowing time to fix bugs and bottlenecks and even rerun the heavy load test.

  • Make sure you are testing the right things


Building a checklist of the KPIs you want to test will ensure you only test what you need and will allow you to create a realistic load scenario. There’s no point checking for a million users if your site will never get hit by more than 10,000. Similarly, make sure your site performs well enough that it is easy to use for your audience.

  • Protect your reputation and revenue


Poor performance not only can cost over $300K per hour while your site is down, but can also harm your reputation. Customers want easy accessibility, and will choose to go elsewhere if your site does not perform. By checking your system under different loads, you can avoid system crashes, ensure there are no memory leaks, and keep response times low. This saves your company money and protects your company name.

  When and How to Load Test

When load testing your site, it is recommended to run both small tests after each build and larger tests for specific events when your site will be put under extra stress, such as Black Friday.

You should run small load tests after every build to ensure code changes don’t affect the everyday user. It’s not enough to test at the end of the process; by testing continuously, you can find and fix bottlenecks before it’s too late. Other things to check when running small load tests include the average load seen on the application and the average time a user spends on it.

It’s best practice to run large tests for maximum load before peak events. This ensures your infrastructure is prepared, allows you to plan ahead for known problems, and lets you monitor the end-user experience concurrently.

We recommend running large load tests during traffic downtime, like Sunday at 2:00 AM, so that real users aren’t affected. Release a maintenance notice beforehand. If you can’t test in production, create a replica environment that is as similar as possible.

While your application is under a large load, you also want to verify what the end user is seeing. Having a single front-end user capturing metrics like page load time gives you a clearer picture of both server and end-user performance concurrently. You can have the fastest response times in the world, but if the user never sees certain page objects, it may not matter.

What Type of Load Tests Should You Run?

We’ve just covered some best practices for when to run your load tests, but there are many different types of tests and scenarios to consider when running performance tests:

  Long Soak Tests


A peak event such as Black Friday isn’t over in an hour; it is a long day where you expect customers coming to your site for the sale. In this case, you want to identify any memory leaks (where an application grabs memory to use but doesn’t give it up when it’s done, forcing a reboot if not caught) or queues that unload slowly. You can do this by running a Soak Test, which strains your system over time to ensure it can properly recycle resources such as CPU and memory, maintain long-term stability, and clean up inactive threads and lost connections. It’s essentially an endurance test that verifies the system stays healthy over a period of time.

Spike Tests


If you want to verify that your system reacts properly to a sudden ramp-up of virtual users, you need to run a Spike Test. A Spike Test monitors the response to the sudden jump rather than the failure point, and also lets you monitor how your system recovers from that sudden peak of users.

  Failure Point Test


So your expected tests have passed, but you shouldn’t stop there. Take it one step further and determine where your system fails by pushing it to its maximum limit. Even if you don’t expect that many users, it’s important to find your breaking point and verify that the system can recover well.

  How to Identify Your Peak Load Times

We’ve gone over the different test scenarios and a general approach for when to run large or small load tests, but how do you identify your peak load times? The best place to start is by involving your entire team, from marketing to product to R&D, to discuss when peak events will occur and plan ahead for which parts of the application will be stressed. You should also consider that if a competitor’s site goes down, it might cause a flood of traffic to yours.

Because these factors are hard to predict precisely, you need to determine when, why and how your system will fail. Industry standards suggest that systems are considered under load if 80% of resources are being utilized, and you should test at least 20% over your expected peak. To help plan, you can turn to metrics from previous events.

Even if you think you can anticipate what type of traffic your site will experience on a peak day, other factors can be unpredictable. By checking server logs, you can view the history of web requests, including client IP addresses, dates and times, and the pages requested.

For example, in a hits-per-second graph you might see that the system under test reaches capacity at about 90 virtual users, where hits per second stop increasing. By discussing these metrics with your entire technical team, you can plan ahead and put together a mitigation procedure in case this number is reached. Then, as a team, you can strictly define your current state and where to focus for your future goals. Just remember, depending on the environment you choose to run your tests in, it is also pertinent to notify your R&D team and end users that maintenance will be happening during the test.

Creating a Great Load Scenario

You’ve determined the best time to test and what type of load test you want to run. Now we need to look at the type of journey your users will take, in order to build a realistic load scenario.

What kind of journeys do your users take?

Instead of testing a “happy path,” test the real journeys users take, by monitoring the behavior of your customers during those peak events.

For example, once you have identified the saturation point, you should cross-reference the information with APM data to check if anything is missing or contradictory and enrich your understanding of how users behave on your website.

It’s also important to look for bottlenecks and high-stress points where user traffic spiked. Then, chart different trends and check where your system came close to reaching its limit. For example, if you put a popular product on sale, you would expect high volume on that one page, and it might crash the entire system. Therefore, it is best practice to divide your system into logical sections based on observed trends, and to test those sections individually to identify the weakest link.

Where are your users coming from?

A common mistake load testers make is testing their infrastructure only from inside the organization and not from outside. If you only test from inside, the delivery chain is not exercised the way it will be on a peak user day. Testing external infrastructures and hosting servers is crucial to preventing failure. It is also important to test the types of networks your users are coming from. You can learn about running tests from up to 50 geo-locations in this guide.

Where do users drop off?

Another common mistake is building tests that assume a user fully completes their actions in the system. However, users often drop off, and it’s important to test scenarios where they do. For example, test what happens when someone adds something to their cart but does not check out. You need to know how your system reacts to these potential drop-offs.

Determining ramp up and ramp down speed

Ramp-up is the amount of time it takes to add all test users (threads) to a test execution. It should be configured slowly enough that you have time to determine at what stage problems start occurring.

We suggest starting at 10% of your peak load and slowly ramping up from there, monitoring indicators at each stage.
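As an illustration, here is a hedged sketch of such a staged ramp-up using the open-source Taurus tool (covered in more depth later in this post); the concurrency, durations, URL and scenario name are assumptions for the example, not prescriptions:

execution:
- concurrency: 100       # assumed peak of 100 virtual users
  ramp-up: 30m           # climb slowly so you can see when problems start
  steps: 10              # increase the load in ten equal increments
  hold-for: 1h           # then hold at the target load
  scenario: checkout

scenarios:
  checkout:
    requests:
    - http://example.com/checkout   # illustrative URL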

Then, test at 80% of capacity: run load tests and monitor your KPIs and how your system reacts. Ensure everything is completely stable, memory usage is moderate, CPU is low, and recovery from spikes is quick. Everything should work perfectly at 80%. If something seems jittery at this point, you can be fairly sure that you won’t be able to count on 100%.

If the test succeeds, slowly climb to 100%. If not, identify bottlenecks and errors and fix what needs fixing before testing your system again.

Finally, test at full capacity. This is the number of users you expect on your website, according to previous user patterns, trend analysis, product requirements and expected events. Check for memory leaks, high CPU usage, unusual server behavior and any errors.

Define KPIs in advance

You want to define Failure Criteria as part of your test to ensure you’re not just going through the motions, but actually verifying, from both an SLA and a business standpoint, that response times were quick enough, error percentages stayed below a certain threshold, and more.

BlazeMeter offers a pass/fail flagging system based on a number of different configurable criteria.

For example, without a threshold for response time, your application could be performing very slowly and the performance test would still pass. With a threshold — say, the test is marked as a failure if the average response time exceeds 5 seconds — we can verify that our users are having a fast experience, in addition to monitoring errors and other KPIs.
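As a concrete illustration, here is a hedged sketch of how such failure criteria can be declared up front with the open-source Taurus tool (covered later in this post); the thresholds and durations are assumptions for the example:

reporting:
- module: passfail
  criteria:
  - avg-rt>5s for 30s, stop as failed    # fail if average response time exceeds 5 seconds
  - failures>5% for 1m, stop as failed   # fail if the error percentage crosses the threshold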


Running Your Load Test

It’s now time to calibrate and run your load test!

1. Make sure the test you just created works

The first step is making sure the test you created with your configuration works.

In BlazeMeter, we have a debug test option that allows you to validate your configuration with a low-scale test. You can learn more about debugging and calibrating your tests here.

2. Validate load resources are not over or under utilized

This is done through a calibration procedure that validates that performance test resources are not causing a bottleneck.

There is a recorded step-by-step process for this procedure in BlazeMeter’s help tab.

3. Slowly begin to ramp up to peak load

To determine whether your load resources are over- or under-utilized, watch both CPU and memory usage on the load generation machines as you slowly ramp up. To prevent load generator failure from becoming a variable, CPU values should stay below 80% and memory levels below 70%. BlazeMeter’s report section includes a tab to help you monitor these load engines.

Identify, Fix & Eliminate Bottlenecks

A bottleneck typically looks like this: hits per second drop while response times increase abruptly.

If you do have a failure, you need to rerun your load test after the fix has been implemented. This way you’re not just checking a box that a load test has been done; you can be confident that your system is prepared for high usage.

It’s ideal to fix any sources of errors ahead of time. This can be done by having database replication, or a database or application failover cluster, with a switch-over procedure prepared beforehand. This lets you keep services up and running while fixes happen in the background.

It is also helpful to have an organized platform for sharing information during a load test, so that any critical assignments that need to be distributed can be handled quickly and in an organized way.

Rerun Tests

High-performing applications will continue to require faster and more flexible systems as time goes on. Integrating automation to seamlessly update tests, automating test runs, and choosing to test early and often in your development life cycle can help you meet those standards.

As you prepare for your own load test scenarios, make sure to prepare a backup plan with backup servers and locations ready to grow.

In conclusion, a solid load testing strategy involves running smaller tests within your development cycle, as well as running stress tests in preparation for large traffic events.

To get started with load testing, you can run load tests within BlazeMeter’s Continuous Testing platform.

     
What is a Listener in JMeter?

Listeners enable developers and performance testers to monitor JMeter requests and analyze test results. Listeners aggregate data, process and manipulate the information in it, and even enable customization.

There are a lot of useful listeners for JMeter, which can be found by right-clicking the test plan and selecting Add -> Listener. For a list of the different types of JMeter listeners, you can refer to earlier posts on this blog.

With listeners, you can visualize almost any data gathered by your samplers. JMeter also provides unlimited possibilities to tune and tweak any component you want, so performance testers can customize the output of their tests. So, let’s see how we can write our own listener to visualize the collected data in any way we want. Let’s get started.

Basics of writing plugins for JMeter

Before we start, we should look at the basics of writing your own plug-ins for JMeter. First of all, keep in mind the structure of the JMeter sources. If you feel lost, you can always refer to the implementation of the existing components in the source tree:

  • components – contains non-protocol-specific components like visualizers, assertions, etc.;
  • core – the core code of JMeter including all core interfaces and abstract classes;
  • examples – example sampler demonstrating how to use the new bean framework;
  • functions – standard functions used by all components;
  • jorphan – utility classes providing common utility functions;
  • protocol – contains various protocols JMeter supports.

Also, it’s very important to follow the established hierarchy of JMeter classes. If you want to write your own component, you should inherit it from standard JMeter abstract classes. Abstract classes already implement basic JMeter integration, so if your components extend one of them, you only need to write the code for differences between your implementation and the standard one. Here are some abstract classes for component GUI:

AbstractSamplerGui - an abstract GUI class for a Sampler;
AbstractAssertionGui - an abstract GUI class for an Assertion;
AbstractConfigGui - an abstract GUI class for a Configuration Element;
AbstractControllerGui - an abstract GUI class for a Logic Controller;
AbstractPostProcessorGui - an abstract GUI class for a PostProcessor;
AbstractPreProcessorGui - an abstract GUI class for a PreProcessor;
AbstractVisualizer - an abstract GUI class for a Visualizer or Listener;
AbstractTimerGui - an abstract GUI class for a Timer.

When you make your own GUI class, it’s important to keep in mind the basic methods you need to implement, to make a unique component:

createTestElement() - in this method you should instantiate the class that implements the actual logic of your component. Example:
@Override
public TestElement createTestElement() {
   TestElement te = new MySampler();
   modifyTestElement(te);
   return te;
}

modifyTestElement(TestElement te) - this method is used to pass the data from the GUI class to the logic class. Example:

@Override
public void modifyTestElement(TestElement te) {
   configureTestElement(te); // copies the standard fields (name, comments) into the element
   if (te instanceof MySampler) {
       ((MySampler) te).setFilename(getFile()); // cast before calling MySampler-specific setters
   }
}

configure(TestElement te) - here you configure your GUI according to the saved data from your test element. Example:

@Override
public void configure(TestElement te) {
   super.configure(te); // restores the standard fields (name, comments)
   nameField.setText(te.getPropertyAsString(MySampler.NAME));
}

And here are abstract classes for component logic:

AbstractSampler - abstract class for a Sampler logic;
Assertion - interface class for an Assertion logic.
ConfigElement - interface class for a Configuration Element logic.
GenericController - basic class for a Logic Controller logic.
PostProcessor - interface class for a PostProcessor logic.
PreProcessor - interface class for a PreProcessor logic.
AbstractListenerElement - abstract class for a Listener logic.
Timer - interface class for a Timer logic.

Creating the GUI for your JMeter listener

A GUI for JMeter components is built with Java Swing, which lets you construct your GUI with ease. All of the basic GUI classes inherit from the Java Container class. This allows you to add new GUI elements simply by invoking the add method, which is also inherited from the Container class. For example, if you want to add a text field to your GUI component, you can do the following in your GUI class:

JTextField myTextField = new JTextField();
add(myTextField, BorderLayout.NORTH);

Remember, the basic GUI elements for the class have already been added by the base implementation you inherit from. So, for example, there is no need to add the standard Name and Comment fields to your class — you only need to add your specific elements.
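To tie these pieces together, here is a minimal hedged sketch of a custom listener GUI class built on AbstractVisualizer; the class name, label, and text field are illustrative, and the data handling is stubbed out:

import java.awt.BorderLayout;

import javax.swing.JTextField;

import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.visualizers.gui.AbstractVisualizer;

public class MyListenerGui extends AbstractVisualizer {

    private final JTextField myTextField = new JTextField();

    public MyListenerGui() {
        setLayout(new BorderLayout());
        setBorder(makeBorder());
        add(makeTitlePanel(), BorderLayout.NORTH); // adds the standard Name and Comment fields for us
        add(myTextField, BorderLayout.CENTER);     // our own specific element
    }

    @Override
    public String getLabelResource() {
        return "my_listener"; // key for JMeter's resource bundle
    }

    @Override
    public String getStaticLabel() {
        return "My Listener"; // name shown in the Add -> Listener menu
    }

    @Override
    public void add(SampleResult res) {
        // called for every sample result collected during the test
    }

    @Override
    public void clearData() {
        // called when the user clears the results
    }
}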

Collecting data

The data to display is collected in the form of JMeter SampleResult objects. To add your own result-processing logic, you can override the add method. For example:

@Override
public void add(final SampleResult res) {
   newSamples.add(res); // newSamples is a collection field defined in your listener class
   updateGui(res);      // refresh the visualization with the new sample
}

In the method above you can modify the collected data in any way you want. Here you also call the updateGui method to draw a table or a graph for your visualizer.

Visualizing your data

There are a number of ways you can visualize your data — as a table, a tree or a graph. Each of these representations uses an updateGui method that is usually called when your listener receives a sample result.

For example, if you want to use a JMeter graph to display the results of your test, you should define variables for a Graph object and a model object in your listener class. The model object manages the data that will be visualized:

private final Graph graph;
private final CachingStatCalculator model;

public GraphVisualizer() {
   model = new CachingStatCalculator("Graph");
   graph = new Graph(model);
   init();
}

public void updateGui(final SampleResult res){
   graph.updateGui(res);
}

Here, the model is an object of the CachingStatCalculator class, which processes the collected data into the output we want. If you want to change the output format, you should extend the SamplingStatCalculator class and implement your own logic.

That is basically how to create and implement your own JMeter listener.

You can run your own script in BlazeMeter for testing in the cloud.

     
How to Use Selenium with JMeter's Webdriver Sampler

 

To use Selenium WebDriver with JMeter, simply install the "WebDriver Set" plugins. The WebDriver Sampler is useful for testing the performance of AJAX and GWT-based web applications, and for simulating user actions.

 

Install the WebDriver Set using the JMeter Plugins Manager.

 

Write your WebDriver script as usual, then add "Thread Group" to your "Test Plan."

 

 

 

Add Config Element -> HTTP Cookie Manager, Config Element -> jp@gc - Firefox Driver Config, Sampler -> jp@gc - Web Driver Sampler, Listener -> View Results Tree.

 


 

 

You do not need to change the configuration of these two config elements — the defaults are fine. Open the "Web Driver Sampler" and add this code:

 

var pkg = JavaImporter(org.openqa.selenium); //WebDriver classes
var support_ui = JavaImporter(org.openqa.selenium.support.ui.WebDriverWait); //WebDriverWait class
var wait = new support_ui.WebDriverWait(WDS.browser, 5); //wait timeout of 5 seconds

WDS.sampleResult.sampleStart(); //captures the sampler's start time
WDS.log.info("Sample started");

WDS.browser.get('http://duckduckgo.com'); //opens the website http://duckduckgo.com
WDS.log.info("Navigated to duckduckgo.com");

var searchField = WDS.browser.findElement(pkg.By.id('search_form_input_homepage')); //saves the search field into searchField
searchField.click(); //clicks the search field
searchField.sendKeys(['blazemeter']); //types the word "blazemeter" into the field
WDS.log.info("Searched for BlazeMeter");

var button = WDS.browser.findElement(pkg.By.id('search_button_homepage')); //finds the Search button
button.click(); //clicks the Search button
WDS.log.info("Clicked on the search button");

var link = WDS.browser.findElement(pkg.By.cssSelector('#r1-0 > div > h2 > a.result__a > b')); //also saves an element as a variable, this time using a CSS selector
link.click(); //clicks the search result's link

WDS.sampleResult.sampleEnd(); //captures the sampler's end time

 

(Don't worry if the code doesn’t seem entirely clear yet. We'll revisit it below.)

 

Now, try to start your test. Whatever you do, DO NOT change the "Thread Group" values. They must all be set to 1.

 

 

You should see a new Firefox window open the website and search for “BlazeMeter.” After the test has finished, open View Results Tree to confirm there are no errors. If the Response Code is “200” and the Response Message is “OK,” the test ran successfully. If not, check the WebDriver script for errors.

 

Code Review

 

Our code starts by importing the Java packages “org.openqa.selenium” and “org.openqa.selenium.support.ui.WebDriverWait”, which allow you to use the WebDriver classes.

 

Here is a handy list of WebDriver’s packages.

 

If you want to use any of the packages, import them with JavaImporter:

 

var action = JavaImporter(org.openqa.selenium.PACKAGENAME.CLASSNAME)

 

WDS.sampleResult.sampleStart() and WDS.sampleResult.sampleEnd() capture and track the sampler’s duration. You can remove them and the script will still work, but you won’t get the load time:

 

 

  • WDS.browser.get('http://duckduckgo.com') - Opens the website http://duckduckgo.com

 

  • var searchField = WDS.browser.findElement(pkg.By.id('search_form_input_homepage')) - Saves the search field into searchField variable.

 

  • searchField.click() - Clicks the search field.

 

  • searchField.sendKeys(['blazemeter']) - Types “blazemeter” into the field.

 

  • var link = WDS.browser.findElement(pkg.By.cssSelector('#r1-0 > div > h2 > a.result__a > b')) - Saves the element into the link variable, this time using a CSS selector.

 

  • WDS.log.info(WDS.name + ' has logged an entry') - Logs a message.
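One more note: the script declares a WebDriverWait object (wait) but never uses it. Here is a hedged sketch of how you could use it to wait for an element before interacting with it; the element id is illustrative:

var conditions = JavaImporter(org.openqa.selenium.support.ui.ExpectedConditions); //import the ExpectedConditions class
wait.until(conditions.ExpectedConditions.presenceOfElementLocated(pkg.By.id('links'))); //block until the element appears, or fail when the timeout expires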

 

How to Use Selectors

 

To simplify working with selectors, install the Selenium IDE add-on. Selenium IDE is a Firefox add-on that records actions in the browser. To get similar selectors, download and install the add-on. (Be sure to download the .xpi file.)

 

Open DuckDuckGo and Selenium IDE. Set Selenium IDE’s base URL to https://duckduckgo.com/. Type “blazemeter” and click Search. If you open Selenium IDE, you will see the captured actions and selectors.

 

 

All captured data can be manually converted to the WebDriver format (see below).

 

 

Start the Test Run on BlazeMeter

 

To launch the WebDriver test in the cloud with more concurrent users on BlazeMeter, use Firefox, which is currently the only supported browser for use with WebDriver. Create a new test and upload your JMX file to run it.

 

Before uploading your JMeter script, it’s best to remove or disable View Results Tree, which can slow test performance. After a few minutes, reports will be generated. We launched the test with 40 concurrent users, which is reflected in the response times (see the Monitoring tab).

 

 

Although we launched the test with only 40 users, the CPU is fully utilized from the outset. That’s because each sampler starts its own browser. Be sure to take this into account when writing tests.

 

BlazeMeter Recommends

 

When using the WebDriver plugin, combine your Selenium tests with JMeter tests for better load testing. The number of WebDriver samplers should be fewer than the number of JMeter samplers. If you need values that websites fetch through AJAX, you can use WebDriver with the Once Only Controller to avoid continually launching duplicate browsers.

 

You can also use Taurus with its native Selenium executions, as a way to leverage existing functional tests written in Selenium.

 

Read more about Continuous Testing in our White Paper "Continuous Testing in Practice"


We live in a world where nothing sits still. In today’s rapidly changing IT world, companies strive towards scalable and fault-tolerant distributed systems, and not only in regard to software development. A lot of digital transformation is going on in the Quality Assurance area as well. Companies are ‘shifting left’, focusing more on how they can automate their test processes to make them quicker, easier to configure and more efficient to deploy.

 

Selenium Grid: Advantages and Disadvantages

 

When it comes to web-application automation, Selenium Grid has become the first fundamental part of the revolutionary tool set, allowing testers to run Selenium tests on different machines and with multiple browsers at the same time. In the good old days you would just plug a couple of computers together, run a Selenium agent script on each of those machines, and that was that. But nothing stays relevant forever in its initial form, and the wildly popular ‘Selenium Grid’ has been given a breath of fresh air. Before jumping into the most interesting part, let’s consider what was not perfect about Selenium Grid’s infrastructure in its original form:

 

  • The Grid has an ‘Achilles heel’ - Selenium Hub: The hub-and-node concept has a serious weak spot. Selenium Hub is a single browser access point, which makes all nodes unavailable when the hub goes down.
  • Selenium Grid doesn’t scale well: It’s a widely known fact that the Selenium hub can only handle a limited number of connected nodes.
  • Stale infrastructure for ongoing test runs: Selenium Grid nodes usually end up in a poor state after being left up and running for a long time. A node restart is exactly what the doctor ordered, but it can only be done manually.
  • Frustrating version synchronization: Keeping the project up to date with the latest Selenium release, and keeping it synchronized with the browser and webdriver versions, is difficult and takes ages. Grid maintenance and updates are a time-consuming, node-by-node process.

 

Introducing Zalenium

 

This is where Zalenium comes in! According to its official page, Zalenium is a flexible and scalable container-based Selenium Grid, with video recording, live preview, basic auth and a dashboard. Moreover, it has out-of-the-box Docker and Kubernetes integration, which makes Zalenium an attractive choice for getting Selenium-based infrastructure up and running.

 

When you dig into Zalenium, you will realize all the benefits it brings to web automation.

 

Selenium/Zalenium - Kubernetes Integration

 

Out-of-the-box integration with Kubernetes significantly reduces test execution time by spinning up Docker containers of the necessary web browsers and running them in parallel. This actually means horizontal scaling of your tests. Moreover, it lets you avoid the headache of Grid set up and maintenance. Therefore, you don’t have to deal with containers/pods directly. Instead, Kubernetes automatically manages the desired state of the system.

 

Running Zalenium in a Kubernetes cluster becomes a powerful solution to get Selenium based infrastructure up in a more efficient and simpler way. This is true especially if you’ve already orchestrated, shipped and run the company’s software with Kubernetes.

 

Let’s dive into Zalenium and take advantage of its integration with Kubernetes together. The desired configuration is represented in the diagram below.

 


Kubernetes Cluster configuration on the local machine

 

Creating our Selenium based Zalenium-Kubernetes Test

 

The main prerequisite for this configuration is to have a running Kubernetes cluster. To experiment with Kubernetes locally, you can use Minikube. Minikube is a tool that allows spinning up a single-node virtual cluster on your machine.

 

The easiest way to set up Minikube on a Mac is to execute the following brew command (installation steps for other platforms can be found in the Minikube documentation):

 

brew cask install minikube

Alright — minikube was successfully installed!

 

In order to interact with the cluster, we can call the Kubernetes API via Kubectl. This tool allows running containerized applications on top of the Kubernetes cluster. If you set up Minikube with the command above, Kubectl is downloaded as a dependency during the installation process.

 

==> Installing Formula dependencies: kubernetes-cli
==> Installing kubernetes-cli
==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.11.2.high_
######################################################################## 100.0%
==> Pouring kubernetes-cli--1.11.2.high_sierra.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
  /usr/local/Cellar/kubernetes-cli/1.11.2: 196 files, 53.7MB
==> Downloading https://storage.googleapis.com/minikube/releases/v0.28.2/minikub

 

You just need to make sure that a sufficient version has been installed:

 

kubectl version



For example, the Kubectl version I use is

 

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

 

In case of separate Kubectl installation, use the instructions provided in the official documentation.

 

After we have all the necessary tools installed, we can instantiate our own Kubernetes cluster.

 

Just run the following:


minikube start

 

It will start a local Kubernetes cluster:


Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

 

Check the status and IP of the cluster:


➜  ~ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
 

Running Zalenium on Top of the Kubernetes Cluster

 

In order to spin up Zalenium in Kubernetes, we need to pull Zalenium and Selenium Docker images inside the cluster. For this to be possible, we should switch Docker to Kubernetes context by executing the docker-env command:


➜  ~ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/anastasiabushneva/.minikube/certs"
export DOCKER_API_VERSION="1.35"
# Run this command to configure your shell:
# eval $(minikube docker-env)

 

This prints the commands for using Docker from your local machine’s command line against the Kubernetes VM. Configure your shell by running:

 

eval $(minikube docker-env)

Let’s download the Selenium and Zalenium Docker images for spinning them up in the cluster.

docker pull elgalu/selenium
docker pull dosel/zalenium

 

As mentioned before, the Zalando team has already prepared all the necessary scripts to run Zalenium with Kubernetes using a one-line command.

 

All the ‘magic’ scripts and configurations are located in the Zalenium GitHub repo:

git clone https://github.com/zalando/zalenium
cd zalenium

 

So, create a resource from the cloned files:


kubectl create -f kubernetes/

 

It will start Kubernetes pods under the namespace ‘zalenium’.

persistentvolumeclaim/zalenium-data created
persistentvolumeclaim/zalenium-mounted created
persistentvolume/zalenium-data created
persistentvolume/zalenium-mounted created
namespace/zalenium created
service/zalenium created
serviceaccount/zalenium created
clusterrole.rbac.authorization.k8s.io/zalenium-role created
clusterrolebinding.rbac.authorization.k8s.io/zalenium created
deployment.extensions/zalenium created

 

Check that the service is up by listing all the container images in the cluster.

 

➜  ~ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running   0          1d
kube-system   kube-addon-manager-minikube             1/1       Running   1          2d
kube-system   kube-apiserver-minikube                 1/1       Running   1          1d
kube-system   kube-controller-manager-minikube        1/1       Running   1          1d
kube-system   kube-dns-86f4d74b45-lkkwb               3/3       Running   3          4d
kube-system   kube-proxy-7hbqs                        1/1       Running   0          1d
kube-system   kube-scheduler-minikube                 1/1       Running   1          1d
kube-system   kubernetes-dashboard-5498ccf677-l69q2   1/1       Running   3          4d
kube-system   storage-provisioner                     1/1       Running   3          4d
zalenium      zalenium-40000-2vk79                    1/1       Running   0          1d
zalenium      zalenium-40000-m8jkd                    1/1       Running   0          1d
zalenium      zalenium-547f788dd4-ds2kc               1/1       Running   0          1d

 

Moreover, you can open the Kubernetes dashboard to verify all the running services under the ‘zalenium’ namespace:

 

minikube dashboard

 

An Advanced (alternative) Way to Start Zalenium in Kubernetes using Helm

 

In order to simplify deploying applications to Kubernetes, we can use its package manager, Helm, which helps with the installation and maintenance of any software on the cluster.

 

After Minikube is set up, install the Kubernetes package manager.


brew install kubernetes-helm

 

Initialize Helm on both client and server sides


helm init

 

Helm packages, aka charts, might be installed in several ways. Zalenium developers have already provided a chart that bootstraps a Zalenium deployment on a Kubernetes cluster.

 

Navigate to the helm directory in the cloned zalenium repo:


cd zalenium/docs/k8s/helm

 

The chart’s installation command may include arguments for Zalenium customization. For example, in the command below the desired number of pods is increased to 3.


helm install --name my-release \
  --set hub.desiredContainers=3 \
    local/zalenium

NAME:   my-release
LAST DEPLOYED: Wed Sep  5 14:25:47 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME                 TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
my-release-zalenium  NodePort  10.102.157.47  <none>       4444:31215/TCP  0s
==> v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
my-release-zalenium  1        1        1           0          0s
==> v1/Pod(related)
NAME                                  READY  STATUS    RESTARTS  AGE
my-release-zalenium-647d8bff8c-ds2t2  0/1    Init:0/1  0         0s
NOTES:
Hi.

 

Great! All Zalenium pods are running on top of the Kubernetes cluster!

 

Web-based Test Execution with Zalenium

 

After you set up Zalenium, open the service zalenium/zalenium in the default browser via the command:


minikube service zalenium --namespace zalenium

 

You can get a URL of the service deployed on Kubernetes by the following command:


➜  ~ minikube service zalenium --namespace zalenium --url
http://192.168.99.100:32708

 

Now all you need to do is point your Selenium WebDriver configuration at this URL in order to run your tests with Zalenium.
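For example, here is a hedged Java sketch of such a configuration (it assumes the Selenium 3-era RemoteWebDriver API and the grid URL printed by the command above; your IP and port will differ):

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ZaleniumSmokeTest {
    public static void main(String[] args) throws Exception {
        // Point the driver at the Zalenium grid instead of a local browser
        DesiredCapabilities caps = DesiredCapabilities.chrome();
        WebDriver driver = new RemoteWebDriver(
                new URL("http://192.168.99.100:32708/wd/hub"), caps);
        driver.get("http://blazedemo.com");    // any page reachable from the cluster
        System.out.println(driver.getTitle()); // prove the remote session works
        driver.quit();                         // release the grid slot
    }
}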

 

If you want to practice Zalenium with an existing test framework, let’s use the project from Yuri Bushnev’s blog about Top 15 UI Test Automation Best Practices.

 

You just need to modify the base.properties file in order to integrate the framework with Zalenium.


browser=CHROME_ON_GRID
selenium.grid.url=http://192.168.99.100:32708/wd/hub


Now you are ready to run Selenium tests with the new configuration:


mvn clean verify

 

Zalenium also offers a live preview of your running tests. Moreover, you can interact with the dockerized container via VNC to watch a live stream of the test during execution. It can be accessed at this URL:

 

http://192.168.99.100:32708/grid/admin/live

And it will look like this:

 

 

Moreover, Zalenium’s ‘video/logs recording’ feature enables you to monitor the test from a browser. Just check the dashboard at this URL:

http://192.168.99.100:32708/dashboard/

 

 

Congratulations! You’ve just practiced running Selenium tests on a local Kubernetes cluster using Zalenium! But please note that Minikube is a limited version of Kubernetes. This essentially means that you cannot really scale your Selenium Grid by running Minikube (it’s restricted by the power of your local machine). But the installation process of Zalenium on Kubernetes is the same as on Minikube, so you can get a good understanding of this workflow.

 

The best way to run your tests would be to deploy Kubernetes on a cloud provider like AWS and enjoy a real cloud-native, scalable Selenium Grid. If you are ready to try out a cloud service, the Amazon EKS User Guide has plenty of resources to get you started.



The combination of Zalenium and Kubernetes has become a powerful solution for testing web-based applications more efficiently. In the QA space, test automation is a rapidly growing area that is shifting more and more towards a cloud-native approach.

 

Use BlazeMeter’s End User Experience Monitoring to easily run Selenium tests as part of a load test and get insights into your front-end performance under load. You can either add your URL after uploading your load testing script and we will run it in Selenium for you, or you can upload a Selenium script. The test results let you monitor your front-end performance under load: get combined KPIs for HTTP/JMeter response times and for GUI page load times, and analyze requests in a waterfall chart with webpage screenshots for a complete client-side breakdown.

 

Start testing now
 


The Docker approach to virtualization can provide advantages when testing applications with open source performance testing tools. By making the test environment replicable, Docker enables sharing tests between users and reproducing the environment exactly. This is helpful for sharing knowledge between users of different levels, in addition to saving time by not running unnecessary tests.


This blog post will cover an overview of Docker, the specifics of using Docker containers for generating load, and how to involve open source Taurus for universal automation of performance tests. We’ll also cover how dockerization solves specific issues of performance tests inside CI, and how to use the cloud for scaling performance tests for massive loads.


Let’s get started.

What is Docker, and What Are Docker Containers?

Before we get into how to run performance tests with Docker containers, let’s get an overview on what Docker is.


Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.


Key features of Docker
  • An all-inclusive application packaging approach


Everything you need to run your application sits within a simple container image. You can copy it, share it with your team, develop it locally, and then put it into production.

  • Resource-efficient

With Docker, you can run many different applications at the same time. For example, you can run a full set of microservices without them interfering with each other. In the past, applications had dependency conflicts, and it was challenging to run different applications on the same operating system. We used to use virtual machines, but they required a lot of hardware. With Docker you can run isolated processes without virtual machines, so it is resource-efficient.

  • Portable

You can build a Docker container in one environment and transfer it to another. You can also multiply containers, running the same image on many different machines. There’s even Windows-based Docker, so it’s not just for Linux users anymore.

  • Disposable


Docker containers can be shut down and destroyed with minimal effort, sparing you a huge clean-up process.

  • Open Source

Docker is a free open source solution, which means that you don’t need to pay for licensing, and it is frequently updated with new releases.


Benefits of Using Docker

There are many benefits to using Docker. Firstly, versioned images simplify deployments: you can evolve your application simply by deploying a new version as a new image.
Docker also provides a huge repository of free images for popular apps. We’ll use one of them when we delve deeper with a Taurus image.

Finally, Docker’s success has led to additional solutions such as Kubernetes and the “cloud-native” movement, which have greatly expanded cloud capabilities for developers and testers.
Now that we’ve covered what Docker is, we’re going to dig deeper into how to use open source Taurus to automate performance tests in Docker containers.

What is Taurus?


Taurus is an open source command-line test automation tool. It’s a wrapper on top of JMeter, Gatling and many other tools.
A typical Taurus-based test starts with creating a YAML file describing the test configuration. The original post showed one as an image; a minimal hedged sketch (the scenario name, load numbers, and URL here are illustrative) looks like the one below:
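execution:
- concurrency: 20        # number of virtual users (illustrative)
  ramp-up: 1m            # time to reach full concurrency
  hold-for: 5m           # time to hold the load
  scenario: simple

scenarios:
  simple:
    requests:
    - http://blazedemo.com/   # illustrative target URL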


Then, the YAML file is launched with the help of Taurus command-line tool, like this:

bzt test.yml

As a result, you see a live dashboard with the KPIs of your test. You can also configure Taurus to send these results to a web report.

From here, all the nuances of how your test behaves are based on your YAML file instructions.

The “bzt” command is used to run the tests and you will see the results in real-time on your dashboard.

Before we can run a performance test, we need our Taurus Docker image.

Using a Pre-Built Taurus Image

What are the advantages of using a pre-built image?


There are several advantages to using a pre-built Docker image, including:

  • not needing to install Taurus
  • no Python requirement
  • no JMeter installation needed
  • no Java requirement
  • pre-installed Selenium
  • as well as Chrome & Firefox

The command to run the pre-built Taurus image is very simple:

sudo docker run -it blazemeter/taurus http://blazedemo.com

Docker will download the image (if it is not present in the cache) and run Taurus inside the container. Then, Taurus will show real-time results in its dashboard.

Making Your Own Docker Image

If you want your own, perhaps customized, Docker image with Taurus, it’s not too hard to build one from scratch:

Create a file named “Dockerfile” with the following contents:


FROM python
RUN pip install bzt
RUN apt-get update && apt-get -y install default-jre-headless
RUN bzt -install-tools -o modules.install-checker.include=jmeter
ENTRYPOINT ["bzt"]

Create the Docker image from these instructions by issuing the following command:

sudo docker build . -t bzt1

This will create an image based on the official Python base image, with Python, Java and Taurus inside. JMeter will also be pre-installed. The image will be named “bzt1”, and we’ll use that name to point to this image when we run the container.

Now that you’ve created your Docker image, let’s run a test.


Testing Our Docker Image

Testing our own Docker image works just the same as with the pre-built one; only the name of the image changes.

First, run the following command:

sudo docker run -it bzt1 http://blazedemo.com


The test runs, and you will see the results in the dashboard.


Now that you’ve run a performance test, how can we use Docker within our continuous integration process?


Advantages of Using Docker in Your Continuous Integration Process

Conflicting dependencies are no longer a problem when using Docker. For example, if you want to build one project with Java 8 and another that requires Java 7, Docker can keep these processes separate. The resulting images can also be pushed to a registry for future use, and you can use dedicated images for intermediary build steps.


Another advantage is that you don’t have to build the image as part of your Jenkins job; you can have several images, e.g. one for unit testing, one for integration testing, etc.
The Docker layer cache speeds things up: you don’t need to start from scratch, which saves development time. Just be aware of HDD space running out! When you rely heavily on building images as part of your CI integration, you need to keep on top of maintenance and delete old images.


Now we know the advantages, let’s look at how to transfer files in and from your Docker container.


Test-related files need to get into your Docker container, and the resulting files need to get out of it. To do this, you can use the following command to mount a local directory into the container directory: docker run -v /local-dir:/container-dir…


For example: sudo docker run -it -v `pwd`:/bzt-configs bzt1 requests.jmx

We see our JMeter script running, but how do we get to the JMeter logs and the other artifacts?

For this we have to add the mount option to the image launch, as follows:

sudo docker run -it -v `pwd`:/tmp bzt1 /tmp/requests.jmx

I can now check the JMeter logs and the Taurus logs to further investigate any problems.

As you can see here, all files created by the container have root permissions:

Taurus is a CI-friendly tool, so fixing these root permissions shouldn’t add too much work to your flow.
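One hedged way to reclaim ownership of those artifacts on a Linux host (run from the mounted working directory after the container exits) is:

sudo chown -R "$(id -u):$(id -g)" .

This recursively reassigns the files created by the container from root to your own user and group.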

Limitations


While Docker has many advantages, it has its limitations.

  • When you run several programs on the same machine, they can consume major CPU, and parallel runs can affect each other and distort your results. Docker also does not solve hardware scaling problems: if your computer can only run 1,000 virtual users, you can’t run more than that by using Docker containers.
  • The host OS and container still need their settings tuned — you still need to be able to configure your Docker image and your host machine.
  • Docker’s disposable nature makes it easy to lose results, so you will need to store your results elsewhere.

Load Testing Reporting When Using Docker

We usually want proper reporting features, and those aren’t available through Docker itself.

The first thing to try is an online web report generated from your Taurus test.

In Taurus, there is an additional command-line parameter called “-report”. This parameter makes Taurus send all report KPIs to cloud storage. Those KPIs are also presented live on a web report page, which you can share with your colleagues. By using this feature, you avoid Docker’s trap of losing results after the test has completed.

If you have a BlazeMeter API key, you can configure Taurus to store results in your personal workspace by providing the key via the “-o modules.blazemeter.token=...” option.

Scaling your performance tests in the Cloud with BlazeMeter


With Docker you can only scale with the limitations of your machine. By uploading your tests to BlazeMeter you can scale tests up to 2 million virtual users, and share reporting easily.


BlazeMeter provides full support for JMeter and Taurus, as well as up to 20 other open source tools.


You can share these reports with your team, compare reports from previous tests etc.

You can start running your tests in BlazeMeter today.
 

     

In today’s agile age, Continuous Delivery is not only a great concept but an increasingly critical one. This discipline, which facilitates the release of software to production at any time, supports agile practices and can cut the time-to-release of websites and apps from several weeks to just a few hours.

 

 

However, it could be argued that the industry hasn’t closed the circle yet when it comes to realizing a full Continuous Delivery Process.

 

Continuous Integration (CI) gave us the first part. CI methodologies are already very popular, with many IT organizations incorporating them into their daily working practices. CI involves integrating code into a shared repository and verifying each check-in with an automated build, enabling us to detect, locate and solve problems much quicker - and at a fraction of the cost.

 

 

Hot on the heels of Continuous Integration came Continuous Deployment - the deployment or release of code to production as soon as it’s ready, ensuring your releases go to market as quickly as possible. And so the second piece of the Continuous Delivery puzzle fell into place.

 

But, to date, there’s been a vital piece missing. And that’s why it’s now time for the next paradigm changing methodology to make its mark on our online forums and the lives of DevOps teams everywhere.

 

Continuous Testing: Continuous Delivery’s Missing Link

 

What is Continuous Testing?

 

Up to now, testing has been slow to catch up with other agile methodologies.

 

If you run your testing late in the software development process, you risk discovering problems at a very late stage. At best, this is a huge and complicated headache. At worst, it’s a total game changer, forcing you to go back to the drawing board at the last minute.

 

Even for developers who run testing early with manual test executions, it can be incredibly time consuming and problematic. They need to run tests after each phase of the cycle - after writing the test, after producing the code and after refactoring the code.

 

On top of that, as software ages, you need to test much more often to ensure its quality. But most companies are working with limited resources and without much time to execute these tests.  This gives you an unappealing choice: either risk compromising on quality or compromise on time. Neither of these options fit nicely within a smooth Continuous Delivery process.

 

Continuous Testing means you don’t need to compromise. You can automate your testing and integrate it into the build process as early as possible. A continuous testing platform runs in the background, automatically executing the tests and ensuring that issues are identified almost immediately. This reduces the time-to-release and closes the circle, ensuring a successful Continuous Delivery process.

 

Key benefits of this technique include:

 

  1. Finding Issues Earlier in the SDLC. Issues can be found and fixed quicker because developers can QA their own code.

  2. Faster Release Cycles. Automated testing saves a huge amount of time. QA is now part of the build - not something done manually after it’s pushed to a QA environment.

  3. Improved Code Quality. You can test EVERYTHING on every build.

  4. Minimal Time Wasted

  5. Less Risk. You KNOW your code is good with every build.

 

Continuous Testing is only going to become more essential as time goes on and technology continues to evolve. Just take a look at modern applications like mobile apps or wearables. They all have a backend and several front ends. This means that, at any point in time, you need to be able to test APIs, regressions, the performance and UIs on various operating systems and devices. To effectively manage this, fully automated testing is a must.

 

But, as I hope I’ve shown in this article,  Continuous Testing is more than just automation. It’s the final step in the Continuous Delivery Process, augmenting software quality processes and ensuring speed, agility and risk management.

 

A true agile process must be 100% automated. And I believe it will be inherent to the Continuous Delivery Process in 2015 and over the coming years.

 

 

For a detailed look at continuous testing, which includes a checklist of requirements, check out the continuous testing whitepaper.

 

 

Why is Continuous Delivery So Important Anyway?

 

If you don’t have Continuous Delivery, you don’t have agility in your processes.

 

Without agility, you’re at a huge competitive disadvantage. To compete in today’s environment, you must act and react fast! Otherwise your competition will beat you to it. And let’s face it, nobody wants to be dubbed ‘old news’.

 

What do you think? Do you agree that Continuous Testing will be big in 2015? Or do you see other trends emerging? Share your thoughts and comments here.

 

Again, here’s the link to the whitepaper on continuous testing.

 

This post was written in May 2015, and updated for accuracy in March 2019. 

 
 

In today’s development world, the importance of APIs is known to almost all.

APIs make it possible for any two separate applications to transfer and share data, and they make it easier for application users to execute actions without having to use the application’s GUI. From the developer’s point of view, they are an easy way to execute certain functionalities of an app and to test it as well.

Using APIs on a daily basis can become cumbersome, as one might have dozens or even hundreds of APIs to use or test. That makes it difficult to keep track of each request’s exact address, headers, authorization credentials, etc., which in turn makes it harder to test the API for functionality, security and exception handling.

What is Postman?

Postman is a popular API client that makes it easy for developers to create, share, test and document APIs. This is done by allowing users to create and save simple and complex HTTP/s requests, as well as read their responses. The result - more efficient and less tedious work.

In this blog post we will go over how to use Postman to execute APIs for your daily work, an ability that is available in their free version. We will also show you how to use Postman when using BlazeMeter.

In case you don’t have Postman installed, you’ll need to download it and install it.

How to use Postman to execute APIs

Postman is very convenient when it comes to executing APIs, since once you’ve entered and saved them, you can simply use them over and over again without having to remember the exact endpoint, headers, API key, etc.

After installing Postman, launch it by clicking on the Postman logo. The welcome panel shows the different building blocks of the Postman application. For now, skip this panel by clicking on the “x” in the top right corner.

 

Here is a detailed example explaining how to enter a new API request using CA BlazeMeter’s ‘test create’ API, but you can do this for the product you are developing:

1. Enter the API endpoint where it says ‘Enter request URL’ and select the method (the action type) on the left of that field. The default method is GET but we will use POST in the example below.

2. Add authorization tokens/credentials according to the server side requirements. 

 
The different methods/protocols Postman supports are No Authentication, Basic Authentication (provide username and password only), Digest Authentication, OAuth 1.0, OAuth 2.0, Hawk Authentication, AWS Signature and NTLM Authentication [Beta] (see the Basic Authentication example after this list of steps).
 

3. Enter headers in case they are required.

4. Enter a post body in case it is required. In this example we are creating a BlazeMeter test that requires a JSON payload with relevant details.

5. If you wish to execute this API now, hit the ‘Send’ button, which is located to the right of the API request field. You can also click on the ‘Save’ button beside it to save that API request to your library.

That’s it! Now you know how to enter your API request to Postman and save it to your library.
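
To illustrate what step 2 does under the hood: with Basic Authentication, Postman simply adds an Authorization header to the outgoing request, carrying the Base64-encoded credentials. The header below encodes the literal string “username:password” - an illustrative value, not a real credential:

Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=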

How Postman helps you share BlazeMeter’s API

One of Postman’s fantastic features is ‘Collections’. ‘Collections’ allow you to group together several APIs that might be related or perhaps should be executed in a certain sequence.

For example, in the screenshot below you can see a collection that includes 4 APIs that are all required to create and run a BlazeMeter test. The first two APIs create the test object - the first of the two applies the necessary configuration, and the following API uploads the script file needed to run it. The last two APIs start and stop the test we created previously. Obviously they should be executed in that sequence, hence the collection will be sorted accordingly.

Running a Postman Collection

In order to run a Postman Collection you will need to use a feature called ‘Collection Runner’.

1. In Postman GUI, in the top left corner of the screen, click the ‘Runner’ button.

2. Select the relevant Collection. In our case it will be the one called ‘BlazeMeter API’.

3. There are additional configuration parameters that you may define, though they are not mandatory. For example, you can specify the number of iterations you wish to run the collection for, as well as add delays between each API request. There is also an option to choose your ‘Environment’. Environments allow you to customize requests using variables that might hold environment-specific values. This is extremely helpful if you are testing against several environments, such as a development environment and a production environment. To set up a new environment, click on the gear icon on the top right side of the Postman GUI, select ‘Manage environments’, and you will be able to add a new one along with its respective variables.
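
As a hypothetical illustration of environments: you could define a baseUrl variable with a different value per environment, and reference it in any request using Postman’s double-curly-brace syntax:

In the “development” environment: baseUrl = https://dev.example.com
In the “production” environment: baseUrl = https://api.example.com
Request URL: GET {{baseUrl}}/tests

Switching the selected environment then points the same collection at a different server, without editing any request.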

How Postman helps you use APIs within your own app or script

Postman also has a feature called ‘Snippets’. By using it you can generate code snippets in a variety of languages and frameworks, such as Java, Python, C, cURL and many others.
This is a huge time saver, since a developer can easily integrate APIs with their own code without too much hassle. To use it, click on the ‘Code’ link below the ‘Save’ button on the top right section of Postman’s GUI.

Below is an example of our Create Test API as a Python snippet.
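
As a minimal sketch, assuming a hypothetical endpoint, API-key header and payload (the exact URL, authentication scheme and body depend on the BlazeMeter API version you are targeting), such a generated snippet could look like this:

import requests

# Hypothetical endpoint and credentials -- substitute your own values
url = "https://a.blazemeter.com/api/v4/tests"
headers = {
    "Content-Type": "application/json",
    "x-api-key": "YOUR_API_KEY",  # placeholder, not a real key
}
payload = {"name": "My BlazeMeter Test", "projectId": 12345}  # placeholder payload

response = requests.post(url, json=payload, headers=headers)
print(response.status_code)
print(response.json())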

You can now run API testing through BlazeMeter! Simply enter your URL and an assertion to start testing.

     

This blog post was originally published in December 2016, and was updated in March 2019 for accuracy. 

Jacob Sharir is Support Team Leader at BlazeMeter.



Under the average website load scenario, numerous users use their browsers to surf certain websites. A single web page presented in a browser can generate tens, sometimes hundreds, of unique HTTP requests. The browser, after receiving all of the responses, renders the page for the user to view.

Every browser (IE, Firefox, Chrome, etc.) has its own way of generating the HTTP requests according to the viewed web page. In a load scenario, the overall number of HTTP requests is directly related to the number of users that are surfing concurrently. All requests hit the server during the test, generating a response for each request.

To better understand overall system performance, consider the following three metrics:

  • Perceived system performance - system performance as perceived by the load testing servers. The load testing servers measure numerous metrics related to all generated requests and responses received.

  • Perceived user experience - page load time as perceived by a real browser. This metric represents the user experience, in particular the time it takes the browser to load a certain page during the test.

  • System performance - the system’s traditional KPIs, such as CPU, memory and bandwidth, as measured during the test.
Each metric is a combination of numerous measures, such as:

  • Response Time - the time it takes a request to fully load, from the time the request is initiated until it is complete. This generally indicates the performance level of the entire system under test (web server + DB). This measure represents the average response time at a certain minute of the test.
  • Latency - time until first response, i.e. the time it takes for the first byte of the response to be received. This generally indicates the performance level of the web server. This measurement represents the average latency at a certain minute of the test.
  • Users - the number of active users at a certain minute of the test.
  • Hits - the number of hits per minute at a certain minute of the test.
  • Errors - errors generated by the server during the test, as well as errors due to connection timeouts, refusals or broken connections.
  • Bandwidth - the amount of bandwidth used by a request or set of requests.
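
To make the distinction between latency and response time concrete, here is a minimal sketch that measures both for a single request using Python’s requests library (the target URL is a placeholder):

import time
import requests

URL = "https://example.com/"  # placeholder target

start = time.perf_counter()
response = requests.get(URL, stream=True)     # returns once response headers arrive
latency = time.perf_counter() - start         # time until first response
_ = response.content                          # read the full body
response_time = time.perf_counter() - start   # time until the request fully loads

print(f"Latency:       {latency * 1000:.0f} ms")
print(f"Response time: {response_time * 1000:.0f} ms")
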
Perceived System Performance

Perceived system performance is the most important metric in performance testing, as it combines all of the measures mentioned above. This metric provides answers to the following questions:

  • Q. Does the system behave differently under different load scenarios? For example: Is the level of performance the same when 10 users are visiting the site and when 1,000 users are visiting the site?
  • Q. Is there a degradation in performance level for each unique request under different load scenarios? For example: POST requests, DB transactions etc.
  • Q. When does performance degradation begin?
  • Q. Where are the bottlenecks in the system under test?

Perceived system performance applies three different measurements to each unique request that is part of the simulated traffic and three different measurements for the aggregated results of the simulated traffic. These measures allow a load-testing professional to evaluate the performance of each request under a certain load.


Perceived system performance provides aggregated reports as well, taking into account all requests and responses. The aggregated reports can provide insight while identifying bottlenecks. Some examples of conclusions that can be reached via perceived system performance:

  • Performance results can state that the average response time of a website is 700 milliseconds. However, the response time of some POST requests can grow as the number of users grows. One can only identify this by looking at the specific reports for each request.
  • Identify that a DB request is taking too long to execute under a load scenario.
  • Discover which CSS pages break under load.
  • Connection timeouts and broken connections.
  • Error responses generated under load.
Perceived User Experience

Perceived user experience provides an answer to one of the most important questions:

  • Q: What would be the user experience under a certain load scenario?

As each brand of browser generates HTTP requests in a different way, the above-mentioned metric (perceived system performance) cannot tell us what the perceived user experience would be.

Consider the example where a page generates 10 HTTP requests. Assume each HTTP request takes 1 second to load. What would be the load time of the full page?

It’s hard to say. Some browsers will execute all requests in parallel, while some will execute them one after the other, so it can be anywhere from 1 to 10 seconds. For instance, a browser that opens six parallel connections would need two “rounds” for ten one-second requests, about 2 seconds, while a strictly sequential client would need 10 seconds. The only way to measure the user experience is by launching a real browser and measuring the load time of the web page.

Using the same technique under a load scenario can assist in an evaluation of the user experience during the load.

With the perceived user experience, a website owner can know what the user experience would be under various load scenarios.

For example:

  • A web page load time can be 2 seconds at 100 users and 4 seconds at 500 users.
  • The website will not load at all at 1000 users.
System Performance

System performance completes the performance picture by describing the system performance using traditional KPIs such as:

  • CPU
  • Load
  • Memory
  • Bandwidth

Correlating perceived system performance and system performance results can assist with the identification of bottlenecks and problems that are responsible for a poor performance level.

For example: if under a certain load the CPU level of the server under test goes over 70%, we know that the server is not capable of dealing with such a load. That said, the response time gathered from perceived system performance can tell us much the same thing.

Conclusions

Load results, user experience and system performance - all three of these metrics are required to get the full system performance picture. The most important metric is the perceived system performance, as it simulates the numerous users that will visit the website during a load scenario and describes in detail all of the measures accumulated during that time.

Apache JMeter™ enables you to tailor your tests with different types of Thread Groups, and not only the regular Thread Group. In this blog post, we will learn how to utilize the setUp Thread Group and incorporate it in your JMX script while running it in JMeter and in BlazeMeter.

The setUp Thread group is used when you need to run initial actions to prepare the testing environment, prior to starting your main test. These actions should be configured within the SetUp Thread Group and not within the regular Thread Group that you will use for running your performance test.

  When Should You Prepare the Load Testing Environment?


Preparing the target environment for a load test is required in many testing situations. For example, when you need to:

  • Create a list of registered users in the database
  • Retrieve required data from a database and use it during the load test
  • Populate the database with test data before the load test starts
  • Run pre-load test calculations that should be executed separately from the load test. For example, calculating some parameters once and then using them during the load test

The setUp Thread Group

JMeter enables its users to run pre-load test actions via a special thread group – setUp Thread Group.

As mentioned above, the setUp Thread Group is a special type of Thread Group that can be handy if you need to perform pre-test actions. With the setUp Thread Group, you can set up your testing environment before running the load test thread group. The “inverse” of this action is the tearDown Thread Group, which is used for post-test actions after all the other thread groups are done (deleting users from the database, for example).

As we can see above, the setUp Thread Group and the regular Thread Group are almost identical in terms of configuration options. The only difference is that the setUp Thread Group will execute before the execution of the other thread groups.
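
Schematically, a test plan that uses all three thread group types executes in this order:

Test Plan
 ├── setUp Thread Group      (runs first - e.g. create test users)
 ├── Thread Group            (the actual load test)
 └── tearDown Thread Group   (runs last - e.g. delete test users)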

Now, let’s see it in action.

Creating Your JMeter Script

Let’s define the following scenario: we want to test 10 users that will log in to our website and purchase some clothes. We will need to create the users before executing the test. This can be done using the setUp Thread Group in the following way:

  Configuring the database

I’ve created a database collection called “app-sign-ups” (I used a free trial of restdb.io). It is configured with 3 fields: “first-name”, “last-name” and “email”. Currently, the collection has no entries.

Configuring our script:

First, create a test-plan that contains the setUp Thread Group.

The setUp Thread Group consists of a CSV Data Set Config with a reference to the CSV file containing the list of users to register (first name, last name and email address).

In our example, we send the registration request with 3 variables: “firstName”, “lastName” and “emailAddress”. JMeter will parse the CSV file and populate those variables with the corresponding values from the CSV file.


Here is a screenshot of our CSV file that contains the details of 10 users:
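
As an illustration (these are hypothetical values), the file could look like this, with one user per row and a header line naming the variables (or without the header, if the variable names are defined directly in the CSV Data Set Config):

firstName,lastName,emailAddress
John,Doe,john.doe@example.com
Jane,Smith,jane.smith@example.com
Alice,Brown,alice.brown@example.com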


Now let’s configure the registration request with a POST method that will initiate an API call to add a new record to our collection (registering a user). We will use the following API call:

POST https://sampledb-d52b.restdb.io/rest/app-sign-ups

And the data that will be sent in the body of the request:

{
  "first-name": "${firstName}",
  "last-name": "${lastName}",
  "email": "${emailAddress}"
}


We need to make sure that we are using the same variable names we defined in our CSV and the same fields we defined in our database.


I’ve also added an HTTP Header Manager which specifies the content-type as “application/json” and the API key with the value of our database API key.

Next, add the regular Thread Group which contains the actual test logic and samplers:

  1. Login
  2. Purchase
  3. Checkout
  4. Logout

Once we run this script, the first thread group to run will be the setUp Thread Group. It will create 10 users, since we configured 10 threads with 1 iteration. Once the setUp Thread Group is finished, the other thread group will start executing, which is the start of our load test.

In the following screenshot, we can see the results of our run: the requests sent to our database were successful and we received a 200 response code. Notice that the setUp Thread Group is executed first, regardless of the order in which it appears in the GUI.

The JSON response indicates that the request was successful:


Let’s go back to the database and verify that it has been modified as expected with the creation of 10 new users.
We can see that the entries are now populated with the values from our CSV file.

That’s it! Now that our testing environment is ready, we can move forward and check the behavior of our load test.


Using the setUp Thread Group When Running Your Script With More Than One Engine

When integrating our test scenario with BlazeMeter, you can increase the number of engines running the test.

Under the Load Distribution configuration, you have the ability to choose how many engines to allocate to run your test and how the concurrency is spread over these engines.

The simple default configuration is to set BlazeMeter to run with 1 engine.


But what will happen if we want to configure more than 1 engine?

Each engine will take part in generating the actual load and simulate the number of threads/virtual users specified in the script you provide.

As you can see from the screenshot below, each engine will perform the script, so we end up setting up the environment multiple times.

To overcome this problem of each engine running the setUp Thread Group logic, we can make use of the following variable:

${__env(TAURUS_SESSIONS_INDEX)}

TAURUS_SESSIONS_INDEX is evaluated according to the number of engines you chose to run with. For example, if you choose to run with 2 engines, then TAURUS_SESSIONS_INDEX will evaluate to 1 for the first engine and 2 for the second engine.

We need to configure BlazeMeter to run the logic under the setUp Thread Group only once.

To achieve this, we will modify our JMX script as follows:

Add an “If Controller” as a parent of the registration sampler, and set its condition to the following:
${__groovy(${__env(TAURUS_SESSIONS_INDEX)} == 1)}

Here is the configuration in JMeter:

When running a performance script with a large number of threads, it is generally recommended to check “Interpret Condition as Variable Expression” and use a “groovy” expression rather than the default JavaScript interpretation, as JavaScript consumes more resources.


What will happen is that engine #1 will set up the environment, while engine #2 will skip the “if” statement inside the setUp Thread Group and continue straight to the regular Thread Group. Below are the results of the test run, as shown in the Request Statistics report:

One thing to note is that engine #2 will start executing the regular Thread Group before engine #1 has finished the set-up process.

You should configure all the engines to wait until the set-up process is complete.
One way is to add a timer inside the setUp Thread Group with the amount of time we know it should take to set up the environment. We have decided to add a Constant Timer (see screenshot above).

In this blog post, I’ve covered how we can utilize the setUp Thread Group and run it with multiple engines through BlazeMeter.

To start testing with BlazeMeter yourself, just enter your URL and your test will start in minutes.


 

     
 

Representational State Transfer (REST) APIs use HTTP request methods, and the most popular are GET, POST, PUT and DELETE. Responses to these requests return status codes indicating success or failure, as well as any applicable headers, and JSON representing the affected fields (or nothing) in the message body. The following sections describe how you can easily write a JMeter script with one of these methods.

 
GET REQUEST METHOD
 

1. Add an HTTP Request to your Thread Group.


 

2. Fill in the Protocol, Server Name or IP, and Path, and choose the GET method.

   

For example, we use:

  • https for Protocol
  • jsonplaceholder.typicode.com for Server Name
  • /todos/1 for Path
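
Together, these fields make the sampler issue the equivalent of the following request:

GET https://jsonplaceholder.typicode.com/todos/1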

 
 

3. Add a View Results Tree listener, and run your script. The following shows the Sampler result, Request, and Response data.

 
 
POST REQUEST METHOD 
 

In POST requests, you can fill in both the body and the headers. You can also specify query parameters in the path. The HTTP headers, which contain metadata, are tightly defined by the HTTP spec; they can only contain plain text and must be formatted in a certain manner. To specify headers, you’ll need the HTTP Header Manager. The most common headers are Content-Type and Accept.

 

- The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient.

- Accept can be used to specify certain media types that are acceptable for the response. You can use a user agent to simulate different browsers' behaviors.

 

Post Body can be useful for the following requests: GWT RPC HTTP, JSON REST HTTP, XML REST HTTP, and SOAP HTTP Request.

 

For instance, we use the server name:

www.dneonline.com

 

Path: /calculator.asmx

Method: POST

 

Body:

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Add xmlns="http://tempuri.org/">
      <intA>20</intA>
      <intB>5</intB>
    </Add>
  </soap:Body>
</soap:Envelope>

 
 

And the header:

Content-Type: text/xml; charset=utf-8

 
 

You can configure other requests (similar to GET and POST) using the required methods, paths, parameters, or bodies and headers. 

 
 
Next, go into the “View Results Tree” listener to verify the sample response.

Select the sample in the left panel and choose the XML visualization (as shown in the picture above). In the response details, we can see the response structure and the data returned.
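
For the Add operation above (20 + 5), the response body should look something like the following sketch; the exact envelope returned by the service may differ:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <AddResponse xmlns="http://tempuri.org/">
      <AddResult>25</AddResult>
    </AddResponse>
  </soap:Body>
</soap:Envelope>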
 
 
With BlazeMeter, you can also run all of your JMeter scripts in the cloud and get enhanced features and reporting. Just enter your URL and start testing in minutes.
 
 
 
     