
Saturday, October 1, 2016

A Clue for Remembering Little's Laws

During the Guerrilla Data Analysis class last week, alumnus Jeff P. came up with this novel mnemonic device for remembering all three forms of Little's law that relate various queueing metrics to the mean arrival rate $\lambda$.
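Written out in standard queueing notation, with $\lambda$ the mean arrival rate, the triplet stacks like this:

$$
\begin{aligned}
Q &= \lambda R \\
L &= \lambda W \\
U &= \lambda S
\end{aligned}
$$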

Reading the right-hand sides vertically gives the order $R$, $W$, $S$, which matches the expression $R = W + S$ for the mean residence time, viz., the sum of the mean waiting time ($W$) and the mean service time ($S$).

Similarly, the left-hand sides, $Q$, $L$, $U$ (reading vertically), correspond respectively to the queueing metrics queue length, waiting-line length, and utilization, and read phonetically they give the word "clue".

Incidentally, the middle formula is the version that appears in the title of John Little's original paper:

J. D. C. Little, "A Proof for the Queuing Formula: L = λW," Operations Research 9(3): 383–387 (1961).

Wednesday, July 30, 2014

A Little Triplet

Little's law appears in various guises in performance analysis. Agner Erlang (the father of queueing theory) knew it to be intuitively correct as early as 1909, but it was not proven mathematically until John Little did so in 1961. Even though you experience it all the time, queueing is not as trivial a phenomenon as it may seem. In the subsequent discussion, I'll show you that there is actually a triplet of such laws, where each version refers to a slightly different aspect of queueing. Although they have a common general form, the less-than-obvious interpretation of each version is handy to know for solving almost any problem in performance analysis.

To see the Little's law triplet, consider the line of customers at the grocery store checkout lane shown in Figure 1. Following the usual queueing theory convention, the queue includes not only the customers waiting but also the customer currently in service.

Figure 1. Checkout lane decomposed into its space and time components
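One consequence of that decomposition is worth making explicit: multiplying the time components of $R = W + S$ by the arrival rate $\lambda$ decomposes the space (count) metrics the same way:

$$\lambda R = \lambda W + \lambda S \quad\Longrightarrow\quad Q = L + U$$

In words, the mean number of customers in the system is the mean number waiting in line plus the mean number in service, which, for a single checkout lane, is just the utilization of the cashier.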

As an aside, it is useful to keep in mind that there are only three types of performance metric:

  1. Time $T$ (the fundamental performance metric), e.g., minutes
  2. Count or a number $N$ (no formal dimensions), e.g., transactions
  3. Rate $N/T$ (inverse time dimension), e.g., transactions per minute
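Little's law itself respects this taxonomy: each version of the triplet multiplies a rate (type 3) by a time (type 1) to produce a pure number (type 2). Dimensionally:

$$Q = \lambda R \;\sim\; \frac{N}{T} \times T = N$$

and likewise for $L = \lambda W$ and $U = \lambda S$; utilization is simply a count that happens to lie between 0 and 1 for a single server.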

Sunday, April 28, 2013

Visual Proof of Little's Law Reworked

Back in early March, when I was at the Hotsos Symposium on Oracle performance, I happened to end up sitting next to Alain C. at lunch. He always attends my presentations, especially on USL scalability analysis. During our lunchtime conversation, he took out his copy of Analyzing Computer System Performance with Perl::PDQ and opened it to the section on the visual proof of Little's law. Alain queried (query ... Oracle ... Get it?) whether the numbers really added up the way they were shown in the diagrams. It did look like there could be a discrepancy, but it was too difficult to reanalyze the whole thing over lunch.

Wednesday, August 1, 2012

Little's Law and IO Performance

Next Tuesday, August 7th, I'll be presenting at the Northern California CMG meeting*. My talk will be about Little's law and its implications for storage IO performance.

As a performance analyst or capacity planner, you already know all about Little's law—it's elementary. Right? Therefore, you completely understand:

  1. How Little's law relates inventory and manufacturing cycle time
  2. John Little (now 84) is not a performance analyst
  3. John Little did not invent Little's law
  4. Little's law was known to A. K. Erlang more than 100 years ago
  5. That there are actually not two but three versions of Little's law
  6. Little's law is not based on queueing theory
  7. Little's law expresses the fact that response time decreases with increasing throughput
  8. However, on the SPEC website you'll see that response time increases with increasing throughput. WTF!!!?

If you're feeling slightly bewildered about all this, you really should come along to my talk (assuming you're in the area). Otherwise, you can read the slide deck embedded below.


3-dimensional view of Little's law

I'll show you how I discovered the resolution to the apparent contradiction between items 7 and 8 (above) by representing Little's law in 3-dimensions. It's very cool! Even John Little doesn't know about this.
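Without giving the whole game away, the resolution turns on which quantity is held fixed in Little's law (a sketch only; the slides tell the full story):

$$N = X R \quad\Longrightarrow\quad R = \frac{N}{X}$$

Slice the 3-dimensional surface $N = XR$ at constant $N$ and you get item 7's hyperbola, with $R$ falling as $X$ rises. But a real benchmark trajectory, like the SPEC curves of item 8, climbs the surface with $N$ growing as the load increases, so the measured $R$ rises with $X$. No contradiction, just different cuts through the same surface.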

Oh yeah, and I'll also explain how Little's law reveals why it's possible to make your application IOs go 10x to 100x faster. IOPS bandwidth has become irrelevant.

Some of these conclusions are based on recent work I've been doing for Fusion-io. You might've heard of their billion-IOPS benchmark and, more recently, their association with SSDAlloc software from Princeton University.


* If you're not a ncCMG member, it's a one-time $25 entry fee, which then makes you a life member. See the bottom of their web page for payment and contact details.

Sunday, January 22, 2012

Throughput-Delay Curves

A colleague of mine at Yahoo.com asked me if I'd ever seen curves like this:

Not only is the answer yes (it's a throughput-delay plot, or XR plot in my notation), but that particular plot comes from my GCaP course notes. There, I use it to analyze the comparative performance of a functional multiprocessor (NS6000) and a symmetric multiprocessor (SC2000). Note how the two curves cross at around 1500 OPS. You can ask yourself why, and if you can't come up with an explanation, you should be registering for a Guerrilla class. :)

The above XR plot also serves as a useful reminder that the throughput and response-time metrics are not only dependent on one another, but generally dependent in a nonlinear way, despite what some experts may claim.
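To see where the nonlinearity comes from, here's a minimal sketch assuming nothing more than a single M/M/1 queue (the service time S below is an arbitrary illustrative value, not data from the NS6000 or SC2000):

    import numpy as np
    import matplotlib.pyplot as plt

    S = 0.001                     # mean service time per operation (seconds); assumed value
    X = np.linspace(1, 990, 200)  # throughput (ops/sec), kept below saturation at 1/S = 1000

    rho = X * S                   # utilization: U = lambda * S (one leg of the Little's law triplet)
    R = S / (1 - rho)             # M/M/1 mean response time: R = S / (1 - rho)

    plt.plot(X, R * 1000)
    plt.xlabel("Throughput X (ops/sec)")
    plt.ylabel("Response time R (ms)")
    plt.show()

At low load, R barely moves off the service time S; near saturation it climbs without bound. Nothing about that relationship is linear.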

Saturday, July 2, 2011

Little's Lore

Guerrilla alumnus Paul P. has a penchant for sending me interesting things, and recently he sent me a piece on Little's law. Remarkably, it wasn't just another proof of L = λW, but a brief retrospective written by none other than John Little himself. I quote it here because it not only provides some unusual insight into how these things get done, but is also written in a charming and self-effacing style.

Saturday, August 22, 2009

Bandwidth and Latency are Related Like Overclocked Chocolates

Prior to the appearance of special relativity theory (SRT) in 1905, physicists were under the impression that space and time are completely independent aspects of reality described by Newton's equations of motion. Einstein's great insight, which led to SRT, was that space and time are intimately related through the properties of light.

Space and time are related

Instead of objects simply being located at some arbitrary position x at some arbitrary time t, everything moves on a world-line given by the space-time pair (x, ct), where c is the universal speed of light. Notice that x has the engineering dimensions of length and so does the new variable ct: a speed multiplied by time. In Einstein's picture, everything is a length; there is no separate time metric. Time is now part of what has become known as space-time—because nobody came up with a better word.

Tuesday, August 11, 2009

Towards a Cloud Capacity-Cost Formula

One of the (unscheduled) plenary sessions at Velocity 2009 was entitled “Why elasticity, performance, and analytics will change how Webops is judged” (PDF), given by Alistair Croll. An earlier version of Alistair's ideas can be read on his blog. As I understand it, he's attempting to tie together the capacity-on-demand concept of cloud computing with the way a user is charged for resource consumption and how the provider counts revenue; a kind of dynamic capacity planning and chargeback association. Currently, for example, Amazon EC2, Google App Engine and Salesforce all do this differently. This looks like a very important point, which I would like to understand more thoroughly. By slide 3 of his presentation, he refers to a simple capacity formula, and that's what I want to discuss here, because it's what suddenly seized my attention.

Sunday, April 5, 2009

Forged in the USA

Little's law, Jackson's theorem, and JIT are concepts that we associate with computer performance analysis and capacity planning for IT systems. But the truth is, these ideas were forged while solving business and manufacturing problems. It's also true to say that both IT and manufacturing in the USA have suffered dramatically from the effects of rapid offshoring in the new global economy. In the past decade, 5 million manufacturing jobs have been eliminated in the USA.

Walmart is one of the few USA retailers that have managed to do reasonably well in this economic recession, because they (patriotically?) cut USA manufacturers loose in favor of setting up factories offshore, mostly in China. Indeed, if you look at the tags on the items in a Walmart store, you won't see "Made in Anywhere, USA." Instead, it's anywhere but the USA. Or so it would seem.

This piece, "Despite Job Loss, U.S. Manufacturing Still Leads," on NPR this morning, caught my attention when it was pointed out that the USA still ranks as the number one manufacturer, producing some 20% of all goods in the world, ahead of Japan and well ahead of China. The reason this appears to contradict simple observation (like looking at Walmart tags) is that manufacturing in the USA is two levels of indirection away. The USA manufactures things that manufacture things. In other words, USA companies make heavy equipment and machines that are used in factories and on assembly lines that make the goods we buy, and that's not stated on those Walmart tags. But that's where Little, Jackson and JIT came in.