Some queueing theory notations, however, have become so ubiquitous in the literature that there is no point in being different just for the sake of being different. Therefore, I do conform to the conventional queueing symbols:
- λ for arrival rate
- μ for service rate
- ρ for server utilization
Sadly, I always have to confess to students that I have never read a reason why these symbols are used. I still haven't, but I did have a flash of inspiration that might explain them. I haven't done any deep historical research, so caveat lector.
It seems reasonable to suspect that the above conventions come from Agner Erlang, the Dane who developed and solved the first queueing model (M/M/m) in 1917. (BTW, I also use the standard Kendall notation.) Erlang was doing capacity planning for the Internet of his day: the telephone network. Back then, to make calls outside Copenhagen, you needed a "trunk line", which had to be reserved ahead of time. The Copenhagen telephone company wanted to know how much trunk-line capacity they needed (measured today in Erlangs) to handle the expected number of calls. Some things never change.
I have a copy of Erlang's 1917 paper, so I looked it up. However, he does NOT use the modern Greek symbols. Instead, he uses x and y in the following way:
- x: number of trunk lines
- y: average number of calls per unit time, or "traffic intensity" (Erlang's term)
- a: traffic intensity per line, such that a = y/x (equivalently, ax = y)
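To put made-up numbers on it: if the offered traffic is y = 6 (say, 120 calls per hour, each holding a line for an average of 3 minutes) and there are x = 10 trunks, then a = y/x = 0.6, i.e., each line is busy 60% of the time on average. (These figures are purely illustrative; they are not from Erlang's paper.)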
In modern Greek notation, this becomes:
- m: number of trunk lines
- λ S: total traffic intensity (where the mean service period S = 1/μ)
- ρ: traffic intensity per line, such that m ρ = λ S (Little's law)
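To check that the old and new bookkeeping agree, here's a minimal sketch in Python using the same made-up numbers as above (the variable names and values are mine, purely for illustration):

# Sketch of the bookkeeping above; all numbers are hypothetical.
lam = 120.0   # arrival rate, lambda: calls per hour
S   = 0.05    # mean service (holding) period in hours (3 minutes), so S = 1/mu
m   = 10      # number of trunk lines (servers)

traffic = lam * S       # Erlang's y: total offered traffic in Erlangs (= 6.0)
rho     = traffic / m   # Erlang's a = y/x: per-line utilization (= 0.6)

mu = 1.0 / S            # service rate per line
assert abs(rho - lam / (m * mu)) < 1e-12   # same quantity written with mu

print("traffic =", traffic, "Erlangs; per-line utilization rho =", rho)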
So, it looks to me like the modern notation we use today comes from a later time, and perhaps from the probability theorists (such as Markov and Kolmogorov) who got hold of this stuff during the 1930's or thereabouts. The origins probably derive from something like the following:
- λ: Greek 'L', used for the mean rate of the exponential distribution of upward Markov transitions, i.e., interarrival periods
- μ: the next letter, 'M', in the Greek alphabet, used for the rate of exponential downward Markov transitions, i.e., service periods
- ρ: Greek 'R' for "ratio": ρ = λ / μ (Little's law again; with S = 1/μ, this is just the m = 1 case of m ρ = λ S above)
Just a guess on my part, but it seems plausible. If you happen to know the accurate historical background, please submit a comment.
4 comments:
It at least goes back as far as Kleinrock's two-volume set. Interestingly enough, volume II of Kleinrock is the earliest reference I've found to what we now call asymptotic bounds analysis and the choice of the intersection of the two bounding lines as the "optimum" point of operation.
Did some re-reading of Kleinrock (1976) volume II today. The relevant section on the asymptotic bounds analysis is section 4.12, "Multiple Resource Models", and he in turn picked it up from a 1974 paper by Muntz and Wong.
The sense in which the intersection of the two bounding lines is "optimal" is this: As you noted in Analyzing Computer Performance With Perl::PDQ, to the left of the intersection point, the system is under-utilized. But why is it "over-utilized" to the right of that point?
Kleinrock explains: "Nevertheless, there does exist an appropriate definition for saturation, which we denote by M* and which is given by ...
"Each user beyond M* would cause all other users to be delayed by a time equal to his entire processing time (1/mu seconds) -- such users are certainly not absorbed 'gracefully.'"
More evidence: I have Allan Scherr's An Analysis of Time-Shared Computer Systems (1967) and he uses Roman letters for the rates and utilizations, not Greek. So it must have happened some time after Scherr and before Kleinrock.
The classic BCMP paper (published 1975 but received 1972!) uses Greek. So does Buzen's paper that introduced the convolution algorithm, also received in 1972. I don't have either the classic Jackson or Gordon-Newell papers, so I don't know if they use Roman or Greek.
A big part of this is the availability of typesetting capabilities and the programming languages in use. For example, Scherr's thesis looks like it was typeset on something that predates the IBM Selectric, as do a number of the older papers in the ACM library. And Scherr's work was done in FORTRAN and some simulation languages resembling FORTRAN. In those circumstances, before you could flip a Symbol typeball in and out, you'd be limited to fixed-width fonts and whatever was on your keyboard, and you'd write Erlang's formula as
M = LAMBDA*S/RHO
Oops ... I've forgotten my FORTRAN II :). In FORTRAN II, you'd write Erlang's formula as
FM = FLAMBDA*S/RHO
Variables with names starting with I-N were integers in FORTRAN II -- it wasn't till FORTRAN IV that you could say
REAL M, LAMBDA
M = LAMBDA*S/RHO