Monday, August 24, 2015

PDQ Version 6.2.0 Released

PDQ (Pretty Damn Quick) is a FOSS performance analysis tool based on the paradigm of queueing models that can be programmed natively in a variety of languages, including R.

This minor release is now available for download.

If you're new to PDQ, here's a simple queueing model written in R that you can paste directly into an RStudio console or script window:

# A simple M/M/1 queueing model in R-PDQ.
library(pdq)

# Input parameters
arrivalRate <- 0.75
serviceRate <- 1.0

# Build and solve the PDQ model
Init("Single queue model")                 # Initialize PDQ
CreateOpen("Work", arrivalRate)            # Create open workload
CreateNode("Server", CEN, FCFS)            # Define single FCFS server
SetDemand("Server", "Work", 1/serviceRate) # Define service demand
Solve(CANON)                               # Solve the model
Report()                                   # Formatted report output

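For these inputs you can check PDQ's report against the closed-form M/M/1 results by hand:

```r
# Closed-form M/M/1 sanity check for the model above.
lambda <- 0.75                 # arrival rate
S      <- 1/1.0                # service time = 1/serviceRate
rho    <- lambda * S           # server utilization: 0.75
Rt     <- S / (1 - rho)        # residence time S/(1 - rho): 4 sec
Q      <- rho / (1 - rho)      # mean number in system: 3
c(utilization = rho, residence = Rt, queue = Q)
```

These are the numbers the Report() output should reproduce.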
Also, check out the relevant books and training classes.

Wednesday, July 29, 2015

Hockey Elbow and Other Response Time Injuries

You've heard of tennis elbow. Well, there's a non-sports, performance injury that I like to call hockey elbow. An example of such an "injury" is shown in Figure 1, which appeared in a recent computer performance analysis presentation. It's a reminder of how easy it is to become complacent when doing performance analysis and possibly end up reaching the wrong conclusion.

Figure 1. Injured response time performance

Figure 1 is seriously flawed for two reasons:

  1. It incorrectly shows the response time curve with a vertical asymptote.
  2. It compounds the first error by employing a logarithmic x-axis.

The relationship between performance metrics is generally nonlinear. That's what makes performance analysis both interesting and hard. Your brain has evolved to think linearly, whereas computer systems behave nonlinearly. As a consequence, your brain needs help to comprehend those nonlinear effects. That's one reason plotting performance data can be so helpful—as long as it's done correctly.

Nonlinearity is a class, not a thing.

Response-time data belongs to a class of nonlinearity that is best characterized by a convex function. That's a mathematical term that simply means the plotted data tends to curve upward and away from the x-axis.
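The correct shape is easy to reproduce. For a closed system, the response time climbs toward a *linear* asymptote, R(N) = N·Dmax − Z (the hockey stick), not a vertical one. Here's a minimal R sketch using the standard exact MVA recursion; the bottleneck demand and think time values are purely illustrative:

```r
# Exact MVA for a closed network of FCFS queues plus think time Z.
# D: vector of service demands (sec); Z: think time (sec).
mva <- function(N, D, Z) {
  Q <- rep(0, length(D))        # queue lengths at population 0
  for (n in 1:N) {
    Rk <- D * (1 + Q)           # residence time at each queue
    X  <- n / (Z + sum(Rk))     # system throughput at population n
    Q  <- X * Rk                # updated queue lengths
  }
  sum(Rk)                       # system response time R(N)
}

D <- c(1.0)                     # single bottleneck, Dmax = 1 sec (assumed)
Z <- 10.0                       # think time (assumed)
R <- sapply(1:50, mva, D = D, Z = Z)
# R starts near Dmax and approaches the linear asymptote N*Dmax - Z;
# on linear axes it's a hockey stick, with no vertical asymptote.
```

Plotting R against N on linear axes shows the convex hockey-stick profile of Figure 2b.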

Figure 2b. Hockey-stick response time profile

Sunday, July 26, 2015

Next GCaP Class: September 21, 2015

The next Guerrilla Capacity Planning class will be held during the week of September 21, 2015 at our new Sheraton Four Points location in Pleasanton, California. The early bird rate ends August 21st.

During the class, I will bust some entrenched CaP management myths (in no particular order):

  • All performance measurements are wrong by definition.
  • There is no response-time knee.
  • Throughput is not the same as execution rate.
  • Throughput and latency metrics are related — nonlinearly.
  • There is no parallel computing.

No particular knowledge about capacity and performance management is assumed.

Attendees should bring their laptops, as course materials are provided on CD or flash drive. The Sheraton provides free wi-fi internet access.

We look forward to seeing you there!

Monday, March 23, 2015

Hadoop Scalability Challenges

Hadoop is hot, not because it necessarily represents cutting-edge technology, but because it's being rapidly adopted by more and more companies as a solution for engaging with the big data trend. It may be coming to your company sooner than you think.

The Hadoop framework is designed to facilitate the parallel processing of massive amounts of unstructured data. Originally intended to be the basis of Yahoo's search engine, it is now an open source project at Apache. Since Hadoop now has a broad range of corporate users, a number of companies offer commercial implementations of it.

However, certain aspects of Hadoop performance, especially scalability, are not well understood. These include:

  1. So-called flat development scalability
  2. Super scaling performance
  3. New TPC big data benchmark

Therefore, I've added a new module on Hadoop performance and capacity management to the Guerrilla Capacity Planning course material, which also includes such topics as:
  • There are only 3 performance metrics you need to know
  • How performance metrics are related to one another
  • How to quantify scalability with the Universal Scalability Law
  • IT Infrastructure Library (ITIL) for Guerrillas
  • The Virtualization Spectrum from hyperthreads to hyperservices
  • Hadoop performance and capacity management

The course outline has more details.
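In case it's unfamiliar, the Universal Scalability Law mentioned above models relative capacity as C(N) = N / (1 + α(N − 1) + βN(N − 1)). A minimal R sketch, with illustrative (not fitted) coefficient values:

```r
# USL relative capacity at N processors or users.
# alpha = contention (serialization), beta = coherency (crosstalk).
# The coefficient values below are purely illustrative.
usl <- function(N, alpha = 0.05, beta = 0.001) {
  N / (1 + alpha * (N - 1) + beta * N * (N - 1))
}

usl(1)    # = 1 by normalization
usl(32)   # sublinear speedup: well below 32
```

In practice, alpha and beta are estimated by regression against measured throughput data, which is one of the exercises covered in the class.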

Early bird registration ends in 5 days.

I'm also interested in hearing from anyone who plans to adopt Hadoop or has experience using it from a performance and capacity perspective.

Friday, March 20, 2015

Performance Analysis vs. Capacity Planning

This question came up in a (members-only) LinkedIn discussion group:
Often found a misconception about these terms. I'm sure this must be written in a book, but for informal discussions is always preferable to cite sources from standardization institutes or IT industry referents.

Thanks in advance

Gian Piero

Here's how I answered it.

Monday, March 9, 2015

Guerrilla Training: New Location

Finally! We have a new location for our Guerrilla training classes in Pleasanton, California: Sheraton Four Points.

We had some complaints last year about night-time noise from the car parks of surrounding restaurants at the previous location. Four Points is much more secluded. It also has its own restaurant, which some of you will recognize if you've attended previous Guerrilla classes (more than likely, we did lunch and/or dinner there).

The current 2015 schedule and registration page is now posted. The classroom is intimate and only holds about 10-12 people, so book early, book often.

Monday, October 6, 2014

Tactical Capacity Management for Sysadmins at LISA14

On November 9th I'll be presenting a full-day tutorial on performance analysis and capacity planning at the USENIX Large Scale System Administration (LISA) conference in Seattle, WA.

The registration code is S4 in the System Engineering section.

Hope to see you there.