Thursday, May 19, 2011

Applying PDQ in R to Load Testing

PDQ is a library of functions that helps you to express and solve performance questions about computer systems using the abstraction of queues. The queueing paradigm is a natural choice because, whether big (a web site) or small (a laptop), all computer systems can be represented as a network or circuit of buffers and a buffer is a type of queue.

As a performance analyst, there are several things I really like about using PDQ in R, as opposed to the other supported programming languages (C, Perl, Python, etc.). It enables you to:
  1. easily import (large) data sets in a variety of formats
  2. perform sophisticated statistical analysis
  3. extract input parameters for a PDQ model
  4. construct and execute the PDQ model within R
  5. plot the PDQ output and compare it with the original data
  6. test your ideas in the R console and save the best into a script
In applying this approach, you could find yourself using a number of R library packages. To improve clarity in your modeling script, you might like to identify clearly which routines belong to PDQ, especially if you're new to PDQ and not yet familiar with all its functions.

The R syntax for qualifying a function with its package name is similar to Perl's. The :: operator is used for explicitly exported names; it also avoids conflicts between packages that export different functions with the same name. The ::: operator provides access to functions that are not exported in a package's namespace.
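For example, base R's stats package is normally attached anyway, but the qualified form works regardless and documents where the function comes from:

```r
# Call an exported function via its package namespace
stats::sd(c(2, 4, 6))   # -> 2

# The ::: form would reach an unexported (internal) function,
# e.g. somepkg:::helper -- illustrative only, not a real package
```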

Let's look at the above steps in the context of an example based on load testing data. A key point to observe here is how the performance data and the performance model play together to provide validation of the measurements.

Performance data

We begin by importing the load test data from measurements of an application intended for a three-tier architecture.


# Read in the performance measurements
gdat <- read.csv("/Users/njg/.../gcap.dat",header=TRUE)
The ineq package, used below for the coefficient-of-variation calculations, is not part of base R, so it must be installed from CRAN. I refer to its functions with an explicit ineq:: prefix, which will also provide a contrast with the explicitly named functions from the PDQ package.

> gdat
  Vusr Xgps   Rms Uweb Uapp Udbm
1    1   24  26.0 0.21 0.08 0.04
2    2   48  26.0 0.41 0.13 0.05
3    4   85  29.3 0.74 0.20 0.05
4    7  100  44.7 0.95 0.23 0.05
5   10  115  66.0 0.96 0.22 0.06
6   20  112 140.0 0.97 0.22 0.06
The columns are, respectively, the client load (virtual users), the measured throughput (GETs per second), the response time (in milliseconds), and the measured utilization of each of the three tiers.
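If you don't have the original gcap.dat file, the same data frame can be entered inline to reproduce the steps below:

```r
# Recreate the load-test measurements without the external file
gdat <- read.table(text = "
Vusr Xgps   Rms Uweb Uapp Udbm
   1   24  26.0 0.21 0.08 0.04
   2   48  26.0 0.41 0.13 0.05
   4   85  29.3 0.74 0.20 0.05
   7  100  44.7 0.95 0.23 0.05
  10  115  66.0 0.96 0.22 0.06
  20  112 140.0 0.97 0.22 0.06
", header = TRUE)
```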

Statistical analysis

We can now perform various kinds of statistical analysis on these data.

> summary(gdat)
      Vusr             Xgps             Rms              Uweb             Uapp             Udbm        
 Min.   : 1.000   Min.   : 24.00   Min.   : 26.00   Min.   :0.2100   Min.   :0.0800   Min.   :0.04000  
 1st Qu.: 2.500   1st Qu.: 57.25   1st Qu.: 26.82   1st Qu.:0.4925   1st Qu.:0.1475   1st Qu.:0.05000  
 Median : 5.500   Median : 92.50   Median : 37.00   Median :0.8450   Median :0.2100   Median :0.05000  
 Mean   : 7.333   Mean   : 80.67   Mean   : 55.33   Mean   :0.7067   Mean   :0.1800   Mean   :0.05167  
 3rd Qu.: 9.250   3rd Qu.:109.00   3rd Qu.: 60.67   3rd Qu.:0.9575   3rd Qu.:0.2200   3rd Qu.:0.05750  
 Max.   :20.000   Max.   :115.00   Max.   :140.00   Max.   :0.9700   Max.   :0.2300   Max.   :0.06000  
More significantly, we can use R statistical functions to derive appropriate parameters for a PDQ model.

# Apply Little's law to get mean service times + CoVs
Sweb <- mean(gdat$Uweb/gdat$Xgps)
Sapp <- mean(gdat$Uapp/gdat$Xgps)
Sdbm <- mean(gdat$Udbm/gdat$Xgps)

Csw <- ineq::var.coeff(gdat$Uweb/gdat$Xgps)
Csa <- ineq::var.coeff(gdat$Uapp/gdat$Xgps)
Csd <- ineq::var.coeff(gdat$Udbm/gdat$Xgps)

s1 <- sprintf("System: %6s %6s %6s\n", "Web","App","DBMS")
s2 <- sprintf("Mean S: %6.4f %6.4f %6.4f\n", Sweb, Sapp, Sdbm)
s3 <- sprintf("CoV  S: %6.4f %6.4f %6.4f\n", Csw, Csa, Csd)
cat(s1, s2, s3)  # print the table shown below
In particular, we calculate the average service time on each tier (the Mean S row) by applying the utilization form of Little's law, U = XS, so that S = U/X.

 System:    Web    App   DBMS
 Mean S: 0.0088 0.0024 0.0008
 CoV  S: 0.0411 0.1989 0.5271
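Incidentally, if ineq isn't installed, both statistics are easy to reproduce in base R. Matching the CoV values above requires the population (divide by n) standard deviation, which is what ineq::var.coeff computes; the data frame below just restates the relevant measurement columns:

```r
gdat <- data.frame(
  Xgps = c(24, 48, 85, 100, 115, 112),
  Uweb = c(0.21, 0.41, 0.74, 0.95, 0.96, 0.97)
)

# Per-sample service times from the utilization law S = U/X
sweb <- gdat$Uweb / gdat$Xgps
Sweb <- mean(sweb)   # mean web service time, ~0.0088 s

# Population coefficient of variation (sd/mean with divide-by-n variance)
n   <- length(sweb)
Csw <- sqrt(var(sweb) * (n - 1) / n) / Sweb   # ~0.0411
```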

PDQ model

As shown in Figure 1, the service times for each of the three tiers in the load-test platform can be represented as queueing resources in PDQ.

There is a finite number of requests allowed in the system, corresponding to the load clients or virtual users that range between N = 1 and N = 20 Vusers, represented by the octagonal box in Figure 1. Using the diagram, we set up the following PDQ model. Note the use of explicitly named functions from the PDQ library.

# Plotting variables
xc <- 0  # Vuser loads
yc <- 0  # PDQ throughputs
rc <- 0  # PDQ response times

# Define and solve the PDQ model
library(pdq)  # makes the TERM, CEN, FCFS and EXACT constants visible
for(n in 1:max(gdat$Vusr)) {
 pdq::Init("Three-Tier Model")
 pdq::CreateClosed("httpGETs", TERM, as.numeric(n), 0.028)
 pdq::CreateNode("WebServer", CEN, FCFS)
 pdq::CreateNode("AppServer", CEN, FCFS)
 pdq::CreateNode("DBMServer", CEN, FCFS)
 pdq::SetDemand("WebServer", "httpGETs", Sweb)
 pdq::SetDemand("AppServer", "httpGETs", Sapp)
 pdq::SetDemand("DBMServer", "httpGETs", Sdbm)

 pdq::Solve(EXACT)

 xc[n] <- n
 yc[n] <- pdq::GetThruput(TERM, "httpGETs")
 rc[n] <- pdq::GetResponse(TERM, "httpGETs") * 10^3
}
In the above PDQ model, we've selected the predicted throughput and the predicted response times to compare with the original load-test data.

Plot PDQ results

# Plot throughput and response time models as a two-panel array
par(mfrow = c(2, 1))
plot(xc, yc, type="l", lwd=1, col="blue", ylim=c(0,120),
  main="PDQ Throughput Model", xlab="Vusers (N)", ylab="Gets/s X(N)")
points(gdat$Vusr, gdat$Xgps)
plot(xc, rc, type="l", lwd=1, col="blue", ylim=c(0,220),
  main="PDQ Response Time Model", xlab="Vusers (N)", ylab="ms R(N)")
points(gdat$Vusr, gdat$Rms)
The above R code produces the following plot array:

We see that the data and the PDQ model are in good agreement: the throughput saturates above N = 5 Vusers, and the corresponding response time climbs the proverbial "hockey stick" handle.
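That saturation point can be checked by hand with the classic bounds for a closed queueing circuit: the maximum throughput is 1/Dmax, where Dmax is the bottleneck (web) demand, and the knee of the hockey stick sits near N* = (Dtotal + Z)/Dmax, with Z = 0.028 s being the think time used in the PDQ model:

```r
gdat <- data.frame(
  Xgps = c(24, 48, 85, 100, 115, 112),
  Uweb = c(0.21, 0.41, 0.74, 0.95, 0.96, 0.97),
  Uapp = c(0.08, 0.13, 0.20, 0.23, 0.22, 0.22),
  Udbm = c(0.04, 0.05, 0.05, 0.05, 0.06, 0.06)
)
Z    <- 0.028                        # think time (s), as in CreateClosed()
Sweb <- mean(gdat$Uweb / gdat$Xgps)  # bottleneck demand Dmax
Sapp <- mean(gdat$Uapp / gdat$Xgps)
Sdbm <- mean(gdat$Udbm / gdat$Xgps)
Dtot <- Sweb + Sapp + Sdbm

Xmax  <- 1 / Sweb            # ~114.3 GETs/s at saturation
Nstar <- (Dtot + Z) / Sweb   # ~4.6 Vusers at the knee
```

These agree with the Max throughput and Optimal clients lines in the PDQ report that follows.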

PDQ report

Optionally, we can produce a formal PDQ report to examine the performance of each of the three tiers, even if we don't have any corresponding performance measurements from the load-test platform. This is one way by which bottlenecks can be predicted and checked before deploying into production.

> pdq::Report()
                ****** Pretty Damn Quick REPORT *******
                ***  of : Sun May 15 18:26:21 2011  ***
                ***  for: Three-Tier Model          ***
                ***  Ver: PDQ Analyzer v5.0 030211  ***

                ******    PDQ Model INPUTS      *******

Node Sched Resource   Workload   Class     Demand
---- ----- --------   --------   -----     ------
CEN  FCFS  WebServer  httpGETs   TERML     0.0088
CEN  FCFS  AppServer  httpGETs   TERML     0.0024
CEN  FCFS  DBMServer  httpGETs   TERML     0.0008

Queueing Circuit Totals:
        Streams:      1
        Nodes:        3

WORKLOAD Parameters:

Client       Number        Demand   Thinktime
------       ------        ------   ---------
httpGETs      20.00        0.0120     0.03

                ******   PDQ Model OUTPUTS      *******

Solution Method: EXACT

                ******   SYSTEM Performance     *******

Metric                     Value    Unit
------                     -----    ----
Workload: "httpGETs"
Mean concurrency         16.8004    Users
Mean throughput         114.2725    Users/Sec
Response time             0.1470    Sec
Round trip time           0.1750    Sec
Stretch factor           12.2633

Bounds Analysis:
Max throughput          114.2725    Users/Sec
Min response              0.0120    Sec
Max Demand                0.0088    Sec
Tot demand                0.0120    Sec
Think time                0.0280    Sec
Optimal clients           4.5696    Clients

                ******   RESOURCE Performance   *******

Metric          Resource     Work              Value   Unit
------          --------     ----              -----   ----
Throughput      WebServer    httpGETs       114.2725   Users/Sec
Utilization     WebServer    httpGETs       100.0000   Percent
Queue length    WebServer    httpGETs        16.3144   Users
Waiting line    WebServer    httpGETs        15.3144   Users
Waiting time    WebServer    httpGETs         0.1340   Sec
Residence time  WebServer    httpGETs         0.1428   Sec

Throughput      AppServer    httpGETs       114.2725   Users/Sec
Utilization     AppServer    httpGETs        27.7529   Percent
Queue length    AppServer    httpGETs         0.3841   Users
Waiting line    AppServer    httpGETs         0.1066   Users
Waiting time    AppServer    httpGETs         0.0009   Sec
Residence time  AppServer    httpGETs         0.0034   Sec

Throughput      DBMServer    httpGETs       114.2725   Users/Sec
Utilization     DBMServer    httpGETs         9.2447   Percent
Queue length    DBMServer    httpGETs         0.1019   Users
Waiting line    DBMServer    httpGETs         0.0094   Users
Waiting time    DBMServer    httpGETs         0.0001   Sec
Residence time  DBMServer    httpGETs         0.0009   Sec
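As a cross-check on the report, several of its numbers follow directly from the measured demands. At N = 20 the web tier is saturated, so the throughput is essentially 1/Sweb; each utilization is then U = X * D, and the stretch factor is the response time divided by the total demand (here using the report's rounded R = 0.1470 s):

```r
gdat <- data.frame(
  Xgps = c(24, 48, 85, 100, 115, 112),
  Uweb = c(0.21, 0.41, 0.74, 0.95, 0.96, 0.97),
  Uapp = c(0.08, 0.13, 0.20, 0.23, 0.22, 0.22),
  Udbm = c(0.04, 0.05, 0.05, 0.05, 0.06, 0.06)
)
Sweb <- mean(gdat$Uweb / gdat$Xgps)
Sapp <- mean(gdat$Uapp / gdat$Xgps)
Sdbm <- mean(gdat$Udbm / gdat$Xgps)

X       <- 1 / Sweb           # ~114.27 GETs/s: web tier at 100%
UappPct <- 100 * X * Sapp     # ~27.75 percent, as reported
UdbmPct <- 100 * X * Sdbm     # ~9.24 percent, as reported
SF      <- 0.1470 / (Sweb + Sapp + Sdbm)  # stretch factor ~12.26
```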


icrushservers said...

I am assuming that in the PDQ-R example in the blog above that all the virtual users perform the same actions?

In a lot of the load testing that I have performed in the past we have individual types of vusers that are made into a scenario.

Simple e-Commerce example:

10 vusers that search for product.
5 vusers that browse for product.
3 vusers that purchase product.

Neil Gunther said...

That's right. You can think of it as an aggregated workload with all the transactions (PDQ streams) lumped together. This is often a good starting point for building a performance model, anyway.

Although the example does come from real measurements, it is intended to convey more about the convenience and flexibility of using PDQ within the context of R's statistical functions and programming language, than reflecting the details of any particular load test.

It would be a straightforward matter to generalize this PDQ model to include three or more workload classes of the type mentioned in your e-commerce example.
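For instance, here is a sketch of a three-class version of the model. The class names, Vuser counts and per-class service demands are purely illustrative (invented for this sketch, not measured), and the PDQ calls assume the same API as the single-class model above:

```r
# Hypothetical workload classes for the e-commerce scenario;
# every demand value below is made up for illustration only.
classes <- data.frame(
  name   = c("Search", "Browse", "Purchase"),
  vusers = c(10, 5, 3),
  Sweb   = c(0.008, 0.006, 0.010),
  Sapp   = c(0.002, 0.001, 0.004),
  Sdbm   = c(0.001, 0.0005, 0.003)
)
Z <- 0.028   # common think time (s)

if (require(pdq, quietly = TRUE)) {  # skip gracefully if PDQ isn't installed
  Init("Multiclass Three-Tier Model")
  CreateNode("WebServer", CEN, FCFS)
  CreateNode("AppServer", CEN, FCFS)
  CreateNode("DBMServer", CEN, FCFS)
  for (i in seq_len(nrow(classes))) {
    CreateClosed(classes$name[i], TERM, classes$vusers[i], Z)
    SetDemand("WebServer", classes$name[i], classes$Sweb[i])
    SetDemand("AppServer", classes$name[i], classes$Sapp[i])
    SetDemand("DBMServer", classes$name[i], classes$Sdbm[i])
  }
  Solve(EXACT)
  Report()
}
```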

The different performance perspectives revealed by composite and component models are compared in Chapter 3, Section 3.7 of my Perl::PDQ book.

Neil Gunther said...

re: 3 workload scenario

Check out GMantra: 2.11 " No service, no queues" (a play on "No shoes, no service")

Although the multi-class workload may be considered to better reflect reality, the show-stopper question is: do you have the requisite service times for all these additional PDQ streams?

A workload class is partly defined by its individual service time at each resource (web, app, dbms). If, for whatever reason, you can't define those, then you are back to the aggregate model originally described.