
Thursday, June 26, 2014

Load Average in FreeBSD

In my Perl::PDQ book there is a chapter, entitled "Linux Load Average", where I dissect how the load average metric (or metrics, since three numbers are reported) is computed and reported by shell commands like uptime, viz.,
[njg]~/Desktop% uptime
16:18  up 9 days, 15 mins, 4 users, load averages: 2.11 1.99 1.99

For the book, I used Linux 2.6 source because it was accessible on the web with convenient hyperlinks to navigate the code. Somewhere in the kernel scheduler, the following C code appeared:


#define FSHIFT    11          /* nr of bits of precision */ 
#define FIXED_1   (1<<FSHIFT) /* 1.0 as fixed-point */ 
#define LOAD_FREQ (5*HZ)      /* 5 sec intervals */ 
#define EXP_1     1884        /* 1/exp(5sec/1min) fixed-pt */ 
#define EXP_5     2014        /* 1/exp(5sec/5min) */ 
#define EXP_15    2037        /* 1/exp(5sec/15min) */ 

#define CALC_LOAD(load,exp,n) \
 load *= exp; \
 load += n*(FIXED_1-exp); \
 load >>= FSHIFT;

where the C macro CALC_LOAD computes the following equation:

load(t) = load(t − 5) exp(−5/60m) + n(t) (1 − exp(−5/60m))

Here m = 1, 5 or 15 is the averaging window in minutes, n(t) is the number of active processes at the 5-second sample time t, and the exponential damping factors are stored as the fixed-point constants EXP_1, EXP_5 and EXP_15 (e.g., exp(−5/60) × 2048 ≈ 1884).
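To watch the fixed-point recurrence converge, here is a minimal standalone sketch (my own demo harness, not kernel code) that feeds CALC_LOAD a constant run queue of two active tasks every 5 seconds. As in the kernel, n is passed pre-scaled by FIXED_1:

#include <stdio.h>

#define FSHIFT  11              /* nr of bits of precision   */
#define FIXED_1 (1 << FSHIFT)   /* 1.0 as fixed-point        */
#define EXP_1   1884            /* 1/exp(5sec/1min) fixed-pt */

#define CALC_LOAD(load,exp,n) \
 load *= exp; \
 load += n*(FIXED_1-exp); \
 load >>= FSHIFT;

int main(void) {
    unsigned long load = 0;           /* fixed-point accumulator      */
    unsigned long n = 2 * FIXED_1;    /* 2 runnable tasks, pre-scaled */

    for (int tick = 1; tick <= 24; tick++) {  /* 24 x 5 sec = 2 min */
        CALC_LOAD(load, EXP_1, n);
        printf("t = %3d sec  load1 = %.2f\n",
               tick * 5, (double)load / FIXED_1);
    }
    return 0;
}

After two simulated minutes the printed value sits near 1.7 and is still climbing toward 2.0, which is exactly the damped response the 1-minute constant EXP_1 is designed to produce.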

Friday, January 18, 2013

Linux Per-Entity Load Tracking: Plus ça change

Canadian capacity planner David Collier-Brown pointed me at this post about some more proposed changes to how load is measured in the Linux kernel. He's not sure they're on the right track. David has written about such things as cgroups in Linux, and I'm sure he understands these things better than I do, so he might be right. I never understood the so-called CFS, the Completely Fair Scheduler. Is it a fair-share scheduler or something else? Not only was there a certain amount of political fallout over CFS, but do we even care about such things anymore? That was back in 2007. These days we are just as likely to run Linux in a VM under VMware or XenServer, or in the cloud. Others have proposed that the Linux load average metric be made "more accurate" by including IO load. Would that be local IO, remote IO, or both? Disk IO, network IO, etc.?

Tuesday, June 28, 2011

The Backstory on Time-Share Computing

In chapter 4 of my Perl::PDQ book, "Linux Load Average—Take a Load Off" and Appendix B "A Short History of Buffers," I briefly refer to the history of Unix and ultimately Linux via Multics, starting with the original MIT project called CTSS (Compatible Time-sharing System). My purpose there was to point out that the load average metric is the earliest example of O/S performance instrumentation. Naturally then, the following 5-part series in the NYT on the development of time-share computers caught my attention:
These accounts are noteworthy because they are written by the brother of one of the developers of early email (depending on how you define email). The author is a journalist, so he interviewed some of the personalities involved (who are now getting on a bit).

There are also lots of fascinating photos.

Tuesday, May 5, 2009

Queues, Schedulers and the Multicore Wall

The other day, I came across a blog post entitled "Server utilization: Joel on queuing", so naturally I had to stop and take a squiz. The blogger, John D. Cook, is an applied mathematician by training and a cancer researcher by trade, so he's quite capable of understanding a little queueing theory. What he had not heard of before was a rule-of-thumb (ROT) that was quoted in a podcast (skip to 00:26:35) by NYC software developer and raconteur, Joel Spolsky. Although rather garbled, as I think any Guerrilla graduate would agree, what Spolsky says is this:
If you have a line of people (e.g., in a bank or a coffee shop) and the utilization of the people serving them gets above 80%, things start to go wrong because the lines get very long and the average amount of time people spend waiting tends to get really, really bad.
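It is easy to put numbers on that observation. Here is a minimal sketch of my own (assuming the simplest single-server M/M/1 queue, where the mean residence time is R = S/(1 − ρ) for service time S and utilization ρ), showing how time in the system blows up as the server saturates:

#include <stdio.h>

int main(void) {
    const double S = 1.0;   /* mean service time, arbitrary units */
    const double rho[] = { 0.50, 0.75, 0.80, 0.90, 0.95, 0.99 };
    const int n = sizeof(rho) / sizeof(rho[0]);

    for (int i = 0; i < n; i++) {
        /* M/M/1 mean residence time: R = S / (1 - rho) */
        double R = S / (1.0 - rho[i]);
        printf("utilization = %.2f  R = %6.2f x S\n", rho[i], R);
    }
    return 0;
}

Going from 50% to 80% busy only takes R from 2 S to 5 S, but going from 90% to 99% takes it from 10 S to 100 S. That hyperbolic knee is what the ROT is gesturing at.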

No news to us performance weenies, and the way I've sometimes heard it expressed at CMG is:
No resource should be more than 75% busy.
Personally, I don't like this kind of statement because it is very misleading. Let me explain why.

Monday, October 22, 2007

Streeeeeeetch!

The October 2007 Linux Magazine (no. 10, issue 83, p. 62) is carrying the English version of my original German article about converting load averages to stretch factors. Unfortunately, there is no direct URL (Sun Oct 28, 2007: as Metapost commented below, it is now available for viewing), but the cute visual hook has a picture of a stretch limo ... stretched across two pages.

I wish I'd thought of that.

Wednesday, July 11, 2007

Leistungsdiagnostik - Load Averages and Stretch Factors

My latest article for the German Linux-Magazin has just appeared in the August edition under the title "Leistungsdiagnostik". The abstract reads:

Shell commands like uptime always emit three numbers as the load average. However, few people know how these numbers come about and what exactly they mean. This article clears that up and, with the stretch factor, also introduces an extension.

The main theme is how to extend absolute load averages into relative stretch factor values.
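For the flavor of that extension, here is a minimal sketch (my own reconstruction from Little's law, not the article's listing): if the load average is read as the time-averaged queue length Q, then with m cores at utilization ρ the throughput is X = mρ/S, and the stretch factor, i.e., the mean residence time R as a multiple of the service time S, is F = R/S = Q/(mρ):

#include <stdio.h>

/* Stretch factor F = R/S = Q/(m * rho), via Little's law Q = X * R
 * and throughput X = m * rho / S. F = 1.0 means no queueing at all. */
double stretch_factor(double loadavg, int cores, double rho) {
    return loadavg / (cores * rho);
}

int main(void) {
    /* Hypothetical reading: load average 8.0 on 4 cores at 90% busy */
    printf("stretch factor = %.2f\n", stretch_factor(8.0, 4, 0.90));
    return 0;
}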

Wednesday, April 18, 2007

How Long Should My Queue Be?

A simple question; there should be a simple answer, right? Guerrilla alumnus Sudarsan Kannan asked me if a rule-of-thumb could be constructed for quantitatively assessing the load average on both dual-core and multicore platforms. He had seen various remarks, from time to time, alluding to optimal load averages.
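For reference while reading his question, the most common first cut (a general folk ROT, not necessarily the conclusion this post reaches) is to normalize the load average by the core count, since m cores can run m tasks concurrently without anything queueing:

#include <stdio.h>

int main(void) {
    double load1 = 6.5;   /* hypothetical 1-minute load average */
    int    cores = 4;     /* hypothetical core count            */

    double per_core = load1 / cores;
    printf("load per core = %.2f (%s)\n", per_core,
           per_core <= 1.0 ? "cores keeping up" : "tasks are queueing");
    return 0;
}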