Wednesday, June 8, 2016

2016 Guerrilla Training Schedule

After a six-month hiatus working on a major consulting gig, Guerrilla training classes are back in business with the three classic courses: Guerrilla Bootcamp (GBOOT), Guerrilla Capacity Planning (GCAP) and Guerrilla Data Analysis Techniques (GDAT).

See what graduates are saying about these courses.

Some course highlights:

  • There are only 3 performance metrics you need to know
  • How to quantify scalability with the Universal Scalability Law
  • Hadoop performance and capacity management
  • Virtualization Spectrum from hyper-threads to cloud services
  • How to detect bad data
  • Statistical forecasting techniques
  • Machine learning algorithms applied to performance data
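
The second highlight, the Universal Scalability Law, has a compact closed form: C(N) = N / (1 + alpha(N-1) + beta N(N-1)). Here is a minimal Python sketch; the alpha and beta values below are purely illustrative, not taken from any course material:

```python
def usl_capacity(n, alpha, beta):
    """Universal Scalability Law: relative capacity C(N) for N processors
    or users, with a contention term (alpha) and a coherency-delay
    term (beta)."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# Illustrative values only: 5% contention, 0.2% coherency delay
for n in (1, 8, 32, 128):
    print(n, round(usl_capacity(n, 0.05, 0.002), 2))
```

Note that with a nonzero beta the curve eventually goes retrograde: C(128) is lower than C(32), which is the kind of scalability behavior the USL is designed to quantify.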

Register online. Early-bird discounts run through the end of July.

As usual, Sheraton Four Points has guest rooms available at the Performance Dynamics discounted rate.

Tell a friend and see you in September!

Saturday, May 14, 2016

PDQ 7.0 Dev is Underway

The primary goal for this release is to make PDQ acceptable for uploading to CRAN. This is a non-trivial exercise because there is some legacy C code in the PDQ library that needs to be reorganized while, at the same time, keeping it consistent for programmatically porting to other languages besides R—chiefly Perl (for the book) and Python.

To get there, the following steps have been identified:

  1. High Priority

    1. Migrate from SourceForge to GitHub.
    2. Change the return type for these functions from int to void:
      • PDQ_CreateOpen()
      • PDQ_CreateClosed()
      • PDQ_CreateNode()
      • PDQ_CreateMultiNode()
      Using the returned int as a counter was deprecated in version 6.1.1.
    3. Convert PDQ-R to Rcpp interface.
    4. Clean out the Examples directory and other contributed-code directories, leaving only examples that actually use the PDQ C library.
    5. Add unit tests for PDQ C library, as well as the Perl, Python, and R languages.
    6. Get the interface accepted on CRAN.
    7. Add the ability to solve multi-server queueing nodes servicing an arbitrary number of workloads.

  2. Low Priority

    1. Get the interface accepted on CPAN and PyPI.
    2. Convert the build system from makefiles to CMake.
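
The multi-server item in the high-priority list can be illustrated with the textbook M/M/m result based on Erlang's C function. This is a standalone sketch of the underlying queueing math, not PDQ's actual API:

```python
import math

def erlang_c(m, rho):
    """Probability that an arrival must wait in an M/M/m queue,
    where rho = lam * s / m is the per-server utilization."""
    a = m * rho  # offered load in Erlangs
    sum_terms = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1 - rho))
    return last / (sum_terms + last)

def mmm_response_time(m, lam, s):
    """Mean residence time at an M/M/m node: service time s plus the
    expected wait, for arrival rate lam and per-request service time s."""
    rho = lam * s / m
    assert rho < 1, "node is saturated"
    return s + erlang_c(m, rho) * s / (m * (1 - rho))
```

As a sanity check, with m = 1 this reduces to the familiar M/M/1 result R = s / (1 - rho).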

Stay tuned!

—njg and pjp

Friday, May 13, 2016

How to Emulate Web Traffic Using Standard Load Testing Tools

The following abstract has been submitted to CMG 2016:

How to Emulate Web Traffic Using Standard Load Testing Tools

James Brady (State of Nevada) and Neil Gunther (Performance Dynamics)

Conventional load-testing tools are based on a fifty-year-old time-share computing paradigm in which a finite number of users submit requests and respond in a synchronized fashion. In contrast, modern web traffic is essentially asynchronous and driven by an unknown number of users. This difference presents a conundrum for testing the performance of modern web applications. Even when the difference is recognized, performance engineers often introduce virtual-user script modifications based on hearsay, much of which leads to wrong results. We present a coherent methodology for emulating web traffic that can be applied to existing test tools.
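
The asynchronous-versus-synchronized distinction in the abstract corresponds to the difference between open and closed workload models. A toy Python sketch of the two arrival processes follows; the function names and all parameter values are illustrative, not taken from the paper:

```python
import random

def open_arrivals(rate, horizon, seed=1):
    """Open (web-like) traffic: requests arrive in a Poisson stream at a
    fixed rate, regardless of how many requests are already in flight."""
    random.seed(seed)
    t, times = 0.0, []
    while t < horizon:
        t += random.expovariate(rate)
        times.append(t)
    return times

def closed_arrivals(n_users, think, service, horizon):
    """Closed (load-tester) traffic: each of N virtual users submits a
    request, waits for the response, thinks, then repeats. A crude
    deterministic approximation: the offered rate is capped at
    N / (R + Z) by Little's law, no matter how fast the system is."""
    rate_cap = n_users / (service + think)
    return [i / rate_cap for i in range(int(horizon * rate_cap))]
```

With 10 virtual users, a 9-second think time, and a 1-second service time, the closed script can never offer more than 1 request per second; the open stream offers whatever rate the outside world dictates. That gap is the conundrum the paper addresses.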

Keywords: load testing, workload simulation, web applications, software performance engineering, performance modeling

Related blog posts:

  1. Emulating Web Traffic in Load Tests
  2. Mapping Virtual Users to Real Users
  3. How to Extend Load Tests with PDQ

Tuesday, September 29, 2015

Remember the Alamo at CMG 2015

The Alamo is a reference to an episode in Texan history about defeat and revenge. But, there's nothing defeatist or mythical about the sessions I'll be giving at CMG in San Antonio this year.

Workshop: How to Do Performance Analytics with R, Mon Nov 2, 8am-12pm

You've collected cubic light-years of performance monitoring data, now whaddya gonna do? Raw performance data is not the same thing as information, and the typical time-series representation is almost the worst way to glean information. Neither your brain nor that of your audience is built for that (blame it on Darwin). To extract pertinent information, you need to transform your data and that's what the R statistical computing environment can help you do, including automatically.

Topics covered will include:

  • Introduction to R using RStudio
  • Descriptive statistics
  • Performance visualization
  • Data reduction techniques
  • Multivariate analysis
  • Machine learning techniques
  • Forecasting with R
  • Scalability analysis

Invited talk: Hadoop Super Scaling, Wed Nov 4, 5-6pm

The Hadoop framework is designed to facilitate parallel processing of massive amounts of unstructured data. Originally intended to be the basis of Yahoo's search engine, it is now open-sourced at Apache. Since Hadoop has a broad range of corporate users, a number of companies offer commercial implementations of, or support for, Hadoop.

However, certain aspects of Hadoop performance, especially scalability, are not well understood. One such anomaly is the claimed flat scalability benefit for developing Hadoop applications. Another is that it appears possible to achieve faster-than-parallel processing. In this talk I will explain the source of these anomalies by presenting a consistent method for analyzing Hadoop application scalability.

CMG-T: Capacity and Performance for Newbs and Nerds, Thur Nov 5, 9-11am

In this tutorial I will bust some entrenched myths and develop basic capacity and performance concepts from the ground up. In fact, any performance metric can be boiled down to one of just three metrics. Even if you already know metrics like throughput and utilization, that's not the most important thing: it's the relationship *between* those metrics that's vital! For example, there are at least three different definitions of utilization. Can you state them? This level of understanding can make a big difference when it comes to solving performance problems or presenting capacity planning results.
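
As one example of such a relationship, the utilization law and Little's law tie the fundamental metrics together. A minimal sketch with made-up numbers (not from the tutorial):

```python
def utilization(throughput, service_time):
    """Utilization law: U = X * S, the fraction of time the server is busy."""
    return throughput * service_time

def queue_length(throughput, residence_time):
    """Little's law: N = X * R, the mean number of requests in the system."""
    return throughput * residence_time

# Illustrative numbers: 40 req/s, 20 ms service demand, 50 ms residence time
U = utilization(40, 0.020)   # fraction of time busy
N = queue_length(40, 0.050)  # mean requests in the system
```

Given those inputs, the server is 80% busy and holds 2 requests on average: neither number is measured directly, yet both follow from the relationship between throughput and time.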

Other myths that will get busted along the way include:

  • There is no response-time knee.
  • Throughput is not the same as execution rate.
  • Throughput and latency are not independent metrics.
  • There is no parallel computing.
  • All performance measurements are wrong by definition.

No particular knowledge about capacity and performance management is assumed.

See you in San Antonio!

Monday, August 24, 2015

PDQ Version 6.2.0 Released

PDQ (Pretty Damn Quick) is a FOSS performance analysis tool based on the paradigm of queueing models that can be programmed natively in C, Perl, Python, and R.

This minor release is now available for download.

Wednesday, July 29, 2015

Hockey Elbow and Other Response Time Injuries

You've heard of tennis elbow. Well, there's a non-sports, performance injury that I like to call hockey elbow. An example of such an "injury" is shown in Figure 1, which appeared in a recent computer performance analysis presentation. It's a reminder of how easy it is to become complacent when doing performance analysis and possibly end up reaching the wrong conclusion.


Figure 1. Injured response-time performance

Figure 1 is seriously flawed for two reasons:

  1. It incorrectly shows the response time curve with a vertical asymptote.
  2. It compounds the first error by employing a logarithmic x-axis.
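
For a closed system, exact mean-value analysis shows why the vertical asymptote is wrong: the response-time curve bends toward a *linear* asymptote of slope S (the real "hockey stick"), never a vertical one. Here is a minimal single-node sketch in Python; the service and think times are assumed values, not taken from the original figure:

```python
def mva_response_time(n_users, service, think):
    """Exact mean-value analysis for a closed system with one queueing
    node (service time `service`) and think time `think`. Returns the
    mean response time R(N). The curve remains finite for every N,
    approaching the linear asymptote R(N) ~ N * service for large N."""
    q = 0.0  # mean queue length seen at the node
    for n in range(1, n_users + 1):
        r = service * (1 + q)   # response time with n users (arrival theorem)
        x = n / (r + think)     # system throughput by Little's law
        q = x * r               # queue length by Little's law
    return r
```

With service = 1 s and think = 10 s, R(1) is exactly 1 s, and by N = 200 the curve is climbing at almost exactly 1 s per added user: a straight-line ramp, not a wall.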

Sunday, July 26, 2015

Next GCaP Class: September 21, 2015

The next Guerrilla Capacity Planning class will be held during the week of September 21, 2015 at our new Sheraton Four Points location in Pleasanton, California. The early-bird rate ends August 21st.

During the class, I will bust some entrenched CaP management myths (in no particular order):

  • All performance measurements are wrong by definition.
  • There is no response-time knee.
  • Throughput is not the same as execution rate.
  • Throughput and latency metrics are related — nonlinearly.
  • There is no parallel computing.

No particular knowledge about capacity and performance management is assumed.

Attendees should bring their laptops, as course materials are provided on CD or flash drive. The Sheraton provides free Wi-Fi internet access.

We look forward to seeing you there!