
How Developers Can Understand the Relationship Between Users and Hits Per Second

October 8, 2015

Featured article by Michael Sage, DevOps Evangelist, BlazeMeter

Load testing is important for the overall functionality and speed of a website. Throughput, measured as Hits or Requests per Second, and its relationship to Concurrent Users play a big role in meeting website functionality and performance goals.

However, it’s often unclear what numbers teams should be shooting for in terms of performance goals, as the relationship between Concurrent Users and Requests per Second is not always understood. This post examines how to look at the relationship between these two values to strive for optimal website performance.

Throughput vs. Concurrent Users

Throughput is a measure of how many units of work are being processed. In the case of website load testing, this is usually hits per second, also known as requests per second.

Concurrent users are the number of users engaged with the app or site at a given time. They’re all in the middle of some kind of session, but they are all doing different things.


Perhaps the easiest way to think about it is as a room full of people with laptops and devices, and you are the moderator for a group exercise. You ask each person to start browsing the site. You may even assign subgroups, some to buy an item, others to simply surf around. You then say “Ready, Set, Go!” and they’re off and running. Walking around, you can see that each of them is on a different page. Some are reading content, others are clicking around, and still others have stopped to take a call on their phone. One guy actually fell asleep!

These are concurrent users.

But if you want a measure of activity, you need to look at how many requests are being sent to the servers over time. That’s throughput.

Like a Conveyor Belt

Understanding throughput is fairly straightforward. Tests send a certain number of HTTP requests to the servers, which process them, and the total is calculated in one-second intervals. To keep the math simple, if a server receives 60 requests over the course of one minute, the throughput is one request per second. If it receives 120 requests in one minute, its throughput is two requests per second. And so on.

Makes sense, right? It’s like a conveyor belt in a factory. One widget at a time is passed through a machine that stamps it with a label. If the machine stamps sixty widgets in one minute, its throughput is one widget per second.
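The arithmetic above can be sketched in a few lines of Python (the function name here is ours, purely for illustration):

```python
# Throughput = total requests processed / elapsed time in seconds.
def throughput(total_requests, elapsed_seconds):
    return total_requests / elapsed_seconds

print(throughput(60, 60))    # 60 requests in one minute -> 1.0 req/sec
print(throughput(120, 60))   # 120 requests in one minute -> 2.0 req/sec
```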


Concurrent users are a different way of looking at application traffic. In planning load tests, the goal is often to measure whether or not our application (not just the web server, but the entire stack) can handle an expected amount of traffic. Among the easier variables to work with is how many users we expect to have visiting the site and interacting with various pages, or using an app from mobile devices.

These numbers may come from tracking tools like Google Analytics, MixPanel, or KISSmetrics, or they could come straight from machine logs. Whatever the source, when planning capacity and designing load tests it is generally more useful to think in terms of the number of people engaging with the site, not machine requests. It’s an easier conversation for all stakeholders to discuss the coming rush of Christmas shoppers, or the expected number of new students registering, than to discuss how many GET or POST requests might be coming in every second.

That usually leads to test cases being developed based on the idea of a user or visitor. A script in a tool like Apache JMeter can be created that mirrors what users are expected to do, such as navigating through a retail site, searching for an item, choosing one and buying it, registering for an account, editing profile preferences, and so on. Scripting like this is a good practice since it exercises the same components throughout the stack that will be hit with actual production traffic. The scripts then act as “virtual copies” of a real user, doing the same things with the system under test that those real users do.


If 500 users are expected to be on the site at any given point, the site’s behavior when 500 users are doing things at more or less the same time will need to be tested. But humans visiting the site don’t act in a synchronized way, of course. At any given point, some people will be searching, others will be reading an item’s description, still others will be updating their profile pic, and maybe some will stop for a minute to check their email and return to the site after that pause. That means there are 500 concurrent users, but not 500 concurrent requests. The hits per second that those concurrent users generate come only from their actual interactions with the app, when they click a button or a link or submit a form.
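A standard way to put numbers on this gap (not from the article, but the well-known queueing result called Little’s Law) is to divide the user count by the average cycle time per request, i.e. response time plus think time. All figures below are illustrative assumptions:

```python
# Steady state: each user completes one request per (response + think) cycle,
# so hits/sec = concurrent_users / cycle time.
def hits_per_second(concurrent_users, avg_response_s, avg_think_s):
    cycle_s = avg_response_s + avg_think_s  # seconds per request, per user
    return concurrent_users / cycle_s

# 500 users, 0.5 s responses, 9.5 s of reading/thinking between clicks:
print(hits_per_second(500, 0.5, 9.5))  # 50.0 hits/sec, not 500
```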

The Impact on Measuring Performance

What does this all mean for our load tests? How can performance be measured most accurately?

A good suggestion is to start with a set of scripts that represent the natural paths through the application that you already know users take. Then run those scripts at stepped intervals of load. For example, run a script with 10 users, then 100, then 1000 or more. Examine the Hits per Second for each run to get a sense of the level of activity each set generates; you can then extrapolate from there. Keep in mind it won’t be a linear scaling, but you’ll have some solid numbers to start from.
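To see why the scaling is not linear, here is a toy model (entirely our invention, not a BlazeMeter formula) in which response time degrades as the server nears a capacity ceiling. Hits per second grow far more slowly than the user count:

```python
# Toy stepped-load model: response time inflates as load approaches a
# hypothetical capacity of 800 concurrent users, so hits/sec scale sub-linearly.
def simulate_step(users, base_response_s=0.2, think_s=5.0, capacity=800):
    load_factor = min(users / capacity, 0.99)      # cap to avoid divide-by-zero
    response_s = base_response_s / (1 - load_factor)  # degraded response time
    return users / (response_s + think_s)          # hits/sec at this step

for users in (10, 100, 1000):
    print(users, round(simulate_step(users), 1))
# e.g. 10 users -> ~1.9 hits/sec, but 1000 users -> only 40.0 hits/sec
```

Real numbers come from running the actual scripts at each step; the point of the sketch is only that each step must be measured, not extrapolated linearly.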

We at BlazeMeter are often asked to help customers translate one value to the other, to calculate that X number of users = Y number of hits/sec. We can definitely help with that, but only if we have some good data sets as baselines, since every app is different, and every script has multiple variables that change the equation.

 
