Tuesday, April 20, 2021

PDQ Online Workshop, May 17-21, 2021

PDQ (Pretty Damn Quick) is a free, open-source performance analyzer available from the Performance Dynamics web site.

All modern computer systems, no matter how complex, can be thought of as a directed graph of individual buffers that hold requests until they can be serviced at a shared computational resource, e.g., a CPU or disk. Since a buffer is just a queue, any computer infrastructure, from your laptop up to Facebook.com, can be represented as a directed graph of queues.

The directed arcs or arrows in such a graph correspond to workflows between different queues. In the parlance of queueing theory, a directed graph of queues is called a queueing network model. PDQ is a tool for predicting performance metrics such as waiting time, throughput, and optimal user load.
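
To make this concrete, here is a minimal PDQ-R sketch of such a network: two queues (a CPU and a disk) fed by a single open workflow. The node names, arrival rate, and service demands are purely illustrative.

library(pdq)
Init("CPU plus disk")               # initialize PDQ
CreateOpen("Work", 0.5)             # requests arrive at 0.5 per unit time
CreateNode("CPU", CEN, FCFS)        # first queue: the CPU
CreateNode("Disk", CEN, FCFS)       # second queue: the disk
SetDemand("CPU", "Work", 0.6)       # CPU service demand per request
SetDemand("Disk", "Work", 1.2)      # disk service demand per request
Solve(CANON)                        # solve the open queueing network
Report()                            # throughput, utilizations, waiting times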

Two major benefits of using PDQ are:

  1. confirming that monitored performance metrics have their expected values
  2. predicting performance for circumstances that lie beyond current measurements

Find out more about the workshop and register today.

Thursday, November 26, 2020

PDQ 7.0 is Not a Turkey

Giving Thanks for the release of PDQ 7.0, after a 5-year drought, and just in time for the PDQW workshop next week.

New Features

  1. The introduction of the STREAMING solution method for OPEN queueing networks (cf. CANON, which can still be used).
  2. The CreateMultiNode() function is now defined for CLOSED queueing networks and distinguished via the MSC device type (cf. MSO for OPEN networks). Both features are sketched in the example below this list.
  3. The format of Report() has been modified to make the various types of queueing network parameters clearer.
  4. See the R Help pages in RStudio for details.
  5. Run the demo(package="pdq") command in the R console to review a variety of PDQ 7 models.
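
To give a feel for the first two features, here is a hedged PDQ-R sketch inferred from the notes above, not gospel: the argument order of CreateMultiNode() and the exact spelling of the STREAMING, MSC, TERM, and EXACT constants should be verified against the R Help pages (item 4).

library(pdq)

# Open network solved with the new STREAMING method (Solve(CANON) still works)
Init("Open network via STREAMING")
CreateOpen("Work", 0.75)                 # arrival rate
CreateNode("Server", CEN, FCFS)          # single queueing node
SetDemand("Server", "Work", 1.0)         # service time per request
Solve(STREAMING)                         # new solution method in PDQ 7
Report()

# Closed network with a multiserver node of the MSC device type
Init("Closed network with MSC node")
CreateClosed("Work", TERM, 16, 10.0)     # 16 users with 10 time-unit think time
CreateMultiNode(4, "Server", MSC, FCFS)  # 4-server node; MSC is for CLOSED networks
SetDemand("Server", "Work", 1.0)
Solve(EXACT)
Report()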

Maintenance Changes

The migration of Python from 2 to 3 has introduced maintenance complications for PDQ. Python 3 may eventually be accommodated in future PDQ releases. Perl maintenance has ended with PDQ release 6.2, which remains compatible with the Perl::PDQ book (2011).

Monday, June 25, 2018

Guerrilla 2018 Classes Now Open

All Guerrilla training classes are now open for registration.
  1. GCAP: Guerrilla Capacity and Performance — From Counters to Containers and Clouds
  2. GDAT: Guerrilla Data Analytics — Everything from Linear Regression to Machine Learning
  3. PDQW: Pretty Damn Quick Workshop — Personal tuition for performance and capacity management

The following highlights indicate the kind of thing you'll learn. Most especially, how to make better use of all that monitoring and load-testing data you keep collecting.

See what Guerrilla grads are saying about these classes. And how many instructors do you know who are available for you from 9am to 9pm (or later) each day of your class?

Who should attend?

  • IT architects
  • Application developers
  • Performance engineers
  • Sysadmins (Linux, Unix, Windows)
  • System engineers
  • Test engineers
  • Mainframe sysops (IBM, Hitachi, Fujitsu, Unisys)
  • Database admins
  • Devops practitioners
  • SRE engineers
  • Anyone interested in getting beyond performance monitoring

As usual, Sheraton Four Points has bedrooms available at the Performance Dynamics discounted rate. The room-booking link is on the registration page.

Tell a colleague and see you in September!

Wednesday, June 20, 2018

Chargeback in the Cloud - The Movie

If you were unable to attend the live presentation on cost-effective defenses against chargeback in the cloud, or simply can't get enough performance and capacity analysis for the AWS cloud (which is completely understandable), here's a direct link to the video recording on CMG's YouTube channel.

The details concerning how you can do this kind of cost-benefit analysis for your cloud applications will be discussed in the upcoming GCAP class and the PDQW workshop. Check the corresponding class registration pages for dates and pricing.

Saturday, April 21, 2018

Virtual cloudXchange 2018 Conference

Our abstract has been accepted for presentation at the FREE cloudXchange online event to be held by CMG on June 19th at 10am Pacific (5pm UTC). [Extended slides]

Exposing the Cost of Performance Hidden in the Cloud

Neil Gunther
Performance Dynamics, Castro Valley, California

Mohit Chawla
Independent Systems Engineer, Hamburg, Germany

10am Pacific Time on June 19, 2018

Whilst offering lift-and-shift migration and versatile elastic capacity, the cloud also reintroduces an old mainframe concept—chargeback—which rejuvenates the need for performance analysis and capacity planning. Combining production JMX data with an appropriate performance model, we show how to assess fee-based EC2 configurations for a mobile-user application running on a Linux-hosted Tomcat cluster. The performance model also facilitates ongoing cost-benefit analysis of various EC2 Auto Scaling policies.

Friday, August 4, 2017

Guerrilla Training September 2017

This year offers a new 3-day class on applying PDQ in your workplace. The classic Guerrilla classes, GCAP and GDAT, are also being presented.

Who should attend?

  • architects
  • application developers
  • performance engineers
  • sys admins
  • test engineers
  • mainframe sysops
  • database admins
  • webops
  • anyone interested in getting beyond ad hoc performance analysis and capacity planning

Some course highlights:

  • There are only 3 performance metrics you need to know
  • How to quantify scalability with the Universal Scalability Law (see the sketch after this list)
  • Hadoop performance and capacity management
  • Virtualization Spectrum from hyper-threads to cloud services
  • How to detect bad data
  • Statistical forecasting techniques
  • Machine learning algorithms applied to performance data
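
Regarding the Universal Scalability Law item above, the USL models relative capacity C(N) at load N using two parameters: contention (alpha) and coherency delay (beta). A minimal R sketch with made-up parameter values:

# Universal Scalability Law: relative capacity at load N
usl <- function(N, alpha, beta) {
  N / (1 + alpha * (N - 1) + beta * N * (N - 1))
}

N <- 1:64
C <- usl(N, alpha = 0.05, beta = 0.001)   # illustrative parameters
plot(N, C, type = "b", xlab = "Load, N", ylab = "Relative capacity, C(N)")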

Register online. Early-bird discounts run through the end of August.

As usual, Sheraton Four Points has bedrooms available at the Performance Dynamics discounted rate. Link is on the registration page.

Also see what graduates are saying about these classes.

Tell a colleague and see you in September!

Wednesday, August 3, 2016

PDQ as a Performance Periscope

This is a guest post by performance modeling enthusiast, Mohit Chawla, who brought the following very interesting example to my attention. In contrast to many of the examples in my Perl::PDQ book, these performance data come from a production web server, not a test rig. —NJG

Performance analysts usually build performance models based on their understanding of the software application's behavior. However, this post describes how a PDQ model acted like a periscope and also turned out to be pedagogical by uncovering otherwise hidden details about the application's inner workings and performance characteristics.

Some Background

Saturday, May 14, 2016

PDQ 7.0 Dev is Underway

The primary goal for this release is to make PDQ acceptable for uploading to CRAN. This is a non-trivial exercise because there is some legacy C code in the PDQ library that needs to be reorganized while, at the same time, keeping it consistent for programmatically porting to other languages besides R—chiefly Perl (for the book) and Python.

To get there, the following steps have been identified:

  1. High Priority

    1. Migrate from SourceForge to GitHub.
    2. Change the return type for these functions from int to void:
      • PDQ_CreateOpen()
      • PDQ_CreateClosed()
      • PDQ_CreateNode()
      • PDQ_CreateMultiNode()
      Using the returned int as a counter was deprecated in version 6.1.1.
    3. Convert PDQ-R to Rcpp interface.
    4. Clean out the Examples directory and other contributed code directories leaving only Examples that actually use the PDQ C library.
    5. Add unit tests for PDQ C library, as well as the Perl, Python, and R languages.
    6. Get the interface accepted on CRAN.
    7. Add the ability to solve multi-server queueing nodes servicing an arbitrary number of workloads.

  2. Low Priority

    1. Get interface accepted on CPAN and PyPI.
    2. Convert the build system from makefiles to CMake.

Stay tuned!

—njg and pjp

Monday, August 24, 2015

PDQ Version 6.2.0 Released

PDQ (Pretty Damn Quick) is a FOSS performance analysis tool, based on the paradigm of queueing models, that can be programmed natively in C, Perl, Python, and R.

This minor release is now available for download.

Sunday, July 20, 2014

Continuous Integration Gets Performance Testing Radar

As companies embrace continuous integration (CI) and fast release cycles, a serious problem has emerged in the development pipeline: Conventional performance testing is the new bottleneck. Every load testing environment is actually a highly complex simulation assumed to be a model of the intended production environment. System performance testing is so complex that the cost of modifying test scripts and hardware has become a liability for meeting CI schedules. One reaction to this situation is to treat performance testing as a checkbox item, but that exposes the new application to unknown performance idiosyncrasies in production.

In this webinar, Neil Gunther (Performance Dynamics Company) and Sai Subramanian (Cognizant Technology Solutions) will present a new type of model that is not a simulation, but instead acts like continuous radar that warns developers of potential performance and scalability issues during the CI process. This radar model corresponds to a virtual testing framework that precludes the need for developing performance test scripts or setting up a separate load testing environment. Far from being a mere idea, radar methodology is based on a strong analytic foundation that will be demonstrated by examining a successful case study.

Broadcast Date and Time: Tuesday, July 22, 2014, at 11 am Pacific

Wednesday, December 25, 2013

Response Time Percentiles for Multi-server Applications

In a previous post, I applied my rules-of-thumb for response time (RT) percentiles (or more accurately, residence time in queueing theory parlance), viz., the 80th percentile $R_{80}$, the 90th percentile $R_{90}$, and the 95th percentile $R_{95}$, to a cellphone application and found that the performance measurements were not completely consistent. Since the relevant data appeared only in a journal blog, I didn't have enough information to resolve the discrepancy; which is OK. The first job of the performance analyst is to flag performance anomalies and, most probably, leave it to others to resolve them—after all, I didn't build the system or collect the measurements.

More importantly, that analysis was for a single server application (viz., time-to-first-fix latency). At the end of my post, I hinted at adding percentiles to PDQ for multi-server applications. Here, I present the corresponding rules-of-thumb for the more ubiquitous multi-server or multi-core case.

Single-server Percentiles

First, let's summarize the Guerrilla rules-of-thumb for single-server percentiles (M/M/1 in queueing parlance): \begin{align} R_{1,80} &\simeq \dfrac{5}{3} \, R_{1} \label{eqn:mm1r80}\\ R_{1,90} &\simeq \dfrac{7}{3} \, R_{1}\\ R_{1,95} &\simeq \dfrac{9}{3} \, R_{1} \label{eqn:mm1r95} \end{align} where $R_{1}$ is the statistical mean of the measured or calculated RT and $\simeq$ denotes approximately equal. A useful mnemonic device is to notice the numerical pattern for the fractions. All denominators are 3 and the numerators are successive odd numbers starting with 5.
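
These fractions are not pulled out of thin air. For M/M/1, the residence time is exponentially distributed with mean $R_{1}$, so the $p$-th percentile follows from inverting that distribution: \begin{align*} R_{1,p} = -R_{1} \, \ln(1-p) \end{align*} which yields $R_{1,80} = R_{1} \ln 5 \approx 1.61 \, R_{1}$, $R_{1,90} = R_{1} \ln 10 \approx 2.30 \, R_{1}$ and $R_{1,95} = R_{1} \ln 20 \approx 3.00 \, R_{1}$. The fractions $5/3$, $7/3$ and $9/3$ are simply convenient rational approximations to these logarithms.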

Sunday, April 28, 2013

Visual Proof of Little's Law Reworked

Back in early March, when I was at the Hotsos Symposium on Oracle performance, I happened to end up sitting next to Alain C. at lunch. He always attends my presentations, especially on USL scalability analysis. During our lunchtime conversation, he took out his copy of Analyzing Computer System Performance with Perl::PDQ and opened it at the section on the visual proof for Little's law. Alain queried (query ... Oracle ... Get it?) whether the numbers really added up the way they are shown in the diagrams. It did look like there could be a discrepancy but it was too difficult to reanalyze the whole thing over lunch.

Friday, April 26, 2013

Book Review: Botched Erlang B and C Functions

As a consequence of looking into a question on the GCaP google group about the telecom performance metric known as the busy hour, I came across this book.

A new copy comes with a $267.44 price tag!!! Here is my review; slightly enhanced from the version I wrote on Amazon.

Monday, April 22, 2013

Adding Percentiles to PDQ

Pretty Damn Quick (PDQ) performs a mean value analysis of queueing network models: mean values in; mean values out. By mean, I mean statistical mean or average. Mean input values include such queueing metrics as service times and arrival rates. These could be sample means. Mean output values include such queueing metrics as waiting time and queue length. These are computed means based on a known distribution. I'll say more about exactly which distribution shortly. Sometimes you might also want to report measures of dispersion about those mean values, e.g., the 90th or 95th percentiles.

Percentile Rules of Thumb

In The Practical Performance Analyst (1998, 2000) and Analyzing Computer System Performance with Perl::PDQ (2011), I offer the following Guerrilla rules of thumb for percentiles, based on a mean residence time R:
  • 80th percentile: p80 ≃ 5R/3
  • 90th percentile: p90 ≃ 7R/3
  • 95th percentile: p95 ≃ 9R/3

I could also add the 50th percentile or median: p50 ≃ 2R/3, which I hadn't thought of until I was putting this blog post together.
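
Here is a small R sketch of these rules of thumb, checked against the exact quantiles of an exponentially distributed residence time (the distribution underlying the single-server rules). The mean R below is illustrative; in practice it would come from a measurement or a PDQ Report():

R <- 2.5                                   # mean residence time (illustrative)

# Guerrilla rules of thumb
rule <- c(p50 = 2*R/3, p80 = 5*R/3, p90 = 7*R/3, p95 = 9*R/3)

# exact quantiles of an exponential distribution with mean R
exact <- qexp(c(0.50, 0.80, 0.90, 0.95), rate = 1/R)

round(rbind(rule, exact), 3)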

Tuesday, December 18, 2012

PDQ 6.0.1 is Released

As described previously, the main purpose of Release 6.0.1 Build 121512 is improved compatibility and stability between PDQ and the R statistical environment. For example, many of the PDQ models, previously found in the ../examples/ directory, can now also be accessed via the demo() command in the R-console. Testing was carried out using R version 2.15.2 (2012-10-26).

Operationally, PDQ, in any of the supported languages, should appear cosmetically the same as Release 5.0; no additional programming required. Since the PDQ-R source can be compiled separately, this release will be of special interest to Microsoft Windows users.

If you're new to PDQ, here's a simple PDQ-R model you can paste directly into the R-console:


library(pdq)
# input parameters
arrivalRate <- 0.75
serviceRate <- 1.0
## Build and solve the PDQ model
Init("Single queue model")         # initialize PDQ
CreateOpen("Work", arrivalRate)    # open workflow
CreateNode("Server", CEN, FCFS)    # single server
SetDemand("Server", "Work", 1/serviceRate) # service time
Solve(CANON)                       # solve the model
Report()                           # tabulated output

Please see the online release notes for the download links and more detailed information, as well as the top-level README file in the distribution. Beyond that, check out the relevant books and training classes.

Merry Xmas from the PDQ Dev Team!

Thursday, November 29, 2012

PDQ 6.0 from a Developer Standpoint

This is a guest post by Paul Puglia, who contributed significant development effort to the PDQ 6.0 release; especially as it relates to interfacing with R. Here, Paul provides more details about the motivation described in my earlier announcement.


PDQ was designed and implemented around a couple of basic assumptions. First, the library would be a C-language API running on some variant of the Unix operating system, where we could reasonably assume that we'd be able to link it against a standard C library. Second, programs built using this API would be "stand-alone" executables in the sense that they'd run in their own dedicated memory address spaces, route their I/O through the standard streams (stdout or stderr), and have complete control over how error conditions would be handled.

Not surprisingly, the above assumptions drove a set of implementation decisions for the library, namely:

  • All I/O would be pushed through standard stream library functions like printf and fprintf.
  • Memory for internal data structures would be allocated and released through calls to the standard library functions calloc and free.
  • Error conditions would cause PDQ to stop model execution with an explicit call to exit().

These aren't unusual decisions for a C API.

With the arrival of PDQ 2.0, we introduced foreign interfaces for other programming environments (Perl, Python, and R) that allowed PDQ to be called from them. All these foreign interfaces were built and released using the SWIG interface-building tool, which allows us to build them with absolutely no modification to the underlying PDQ code—a major benefit when you've got a mature, debugged API that you really want to remain that way. For the most part, this arrangement worked pretty well—at least for those environments where it was natural to write and execute PDQ models like standalone C programs (you can read this as Perl and Python).

When it came to R, however, our early implementation decisions weren't such a great fit for how R is commonly used, which is as an interactive environment, similar to programs like Mathematica, Maple, and Matlab. Like these other environments, R users do most of their interaction with a REPL (Read-Execute-Print Loop), usually wrapped in either a full-fledged GUI or a terminal-like interface called the console.

It turns out that most of PDQ's implementation decisions could (and do) interfere with using R interactively. In particular:

  • Calling the exit() function results in the entire R environment exiting – not a good feature for an interactive environment.
  • Writing directly to stdout and stderr using fprintf bypasses R's own internal I/O mechanisms and prevents internal I/O functions (like the sink() command) from working properly.
  • Using the calloc() and free() functions interferes with R's own internal memory management mechanisms and would prove to be a major impediment for any Windows version of the interface.

Not only do these issues severely degrade the interactive experience for R users, their use also gets flagged by R's extension-building mechanism when it does a consistency check. And not passing that check would be a major impediment to getting PDQ's R interface accepted on CRAN (the Comprehensive R Archive Network).

Luckily, none of the fixes for these issues is particularly hard to implement. Most are either fairly simple substitutions of R API calls for C library routines, or localized changes to the PDQ library. And, while all of this does create a risk of introducing bugs into the PDQ library, the reward for taking that risk is a stable R interface that can eventually be submitted to CRAN. A version of the PDQ library can also be easily built under Windows™ using the Rtools utilities.
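
One practical payoff of routing PDQ's output through R's own I/O layer is that standard R mechanisms then apply to a PDQ report. A minimal sketch, assuming the 6.0 changes let sink() capture Report() output as intended (the model is the single-queue example from the 6.0.1 post above):

library(pdq)
Init("Single queue model")
CreateOpen("Work", 0.75)
CreateNode("Server", CEN, FCFS)
SetDemand("Server", "Work", 1.0)
Solve(CANON)

sink("pdq-report.txt")   # divert console output to a file
Report()                 # report now flows through R's I/O, so sink() captures it
sink()                   # restore console output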

Monday, November 12, 2012

PDQ 6.0 is On Its Way

PDQ (Pretty Damn Quick) version 6.0.β is in the QA pipeline. Although this is a major release, cosmetically, things won't look any different when it comes to writing PDQ models. All the big changes have taken place under the hood in order to make PDQ more consistent with the R statistical environment.

R version 2.15.2 (2012-10-26) -- "Trick or Treat"
Copyright (C) 2012 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: i386-apple-darwin9.8.0/i386 (32-bit)

> library(pdq)
> source("/Users/njg/PDQ/Test Suites/R-Test/mm1.r")
                ***************************************
                ****** Pretty Damn Quick REPORT *******
                ***************************************
                ***  of : Thu Nov  8 17:42:48 2012  ***
                ***  for: M/M/1 Test                ***
                ***  Ver: PDQ Analyzer 6.0b 041112  ***
                ***************************************
                ***************************************
...

The main trick is that the Perl and Python versions of PDQ will appear entirely unchanged, while at the same time invisibly incorporating the significant internal changes made to accommodate R.

Sunday, October 7, 2012

Guerrilla Class of October 2012

This is what you missed last week: another fine Guerrilla al fresco lunch at the BlueAgave restaurant. Clearly, capacity planning is a tough business—when you do it right.

Here's some of what went down:

Wednesday, July 4, 2012

Characterizing Performance Bottlenecks

If you do a Google search using keywords like: performance, bottleneck, analysis, you get quite a bewildering list of responses, and none of them seems to clearly define what they mean by the term bottleneck.

The word bottleneck refers to a choke point or narrowing, literally like the neck of a bottle, that causes the flow to take longer than it would otherwise. The effect on performance is commonly seen on the freeway in an area undergoing roadwork. Multiple lanes of traffic are forced to converge into a single lane and proceed past the roadwork in single file. Going from parallel traffic flow to serial flow means the same number of cars will take longer to get through that same section of road. As we all know, the delay at a freeway bottleneck can be very significant.

The same is true on a single-lane country road. If you come to a section where roadwork slows down every car, it takes longer to traverse that section of the road. Bottlenecks are synonymous with slowdowns and delays, but they really determine a lot more than delay.

Sunday, January 1, 2012

My Year in Review 2011

Some days I wonder if I ever actually accomplish anything anymore. Maybe it's time to just pack it in and become a greeter at Walmart. I know a bit about how queues work, so that should put me a few notches ahead of the competition. And I would expect the competition to be fierce because it's a pretty cushy job; but not every day, apparently.

Before taking the big leap, I decided it might be a good idea to note down some of the technical projects I've worked on this year (over and above the daily grind):