Tuesday, June 28, 2011

In chapter 4 of my Perl::PDQ book, "Linux Load Average—Take a Load Off," and Appendix B, "A Short History of Buffers," I briefly refer to the history of Unix, and ultimately Linux, via Multics, starting with the original MIT project called CTSS (Compatible Time-Sharing System). My purpose there was to point out that the load average metric is the earliest example of O/S performance instrumentation. Naturally then, the following 5-part series in the NYT on the development of time-sharing computers caught my attention:
These accounts are noteworthy because the author is a journalist and the brother of one of the developers of early email (depending on how you define email), so he was able to interview some of the personalities involved, who are now getting on a bit.
There are also lots of fascinating photos.
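For the record, the metric itself is still exposed on modern Linux in much the same spirit. A minimal sketch (illustrative only; the field layout is documented in proc(5)) that reads the classic load averages:

```python
# Minimal illustration: read the classic load average metric from /proc/loadavg.
# The first three fields are the 1-, 5- and 15-minute damped averages of the
# run-queue length (on Linux this includes tasks in uninterruptible sleep);
# the remaining fields are runnable/total tasks and the last PID used.
def read_loadavg(path="/proc/loadavg"):
    with open(path) as f:
        fields = f.read().split()
    return tuple(float(x) for x in fields[:3])

if __name__ == "__main__":
    one, five, fifteen = read_loadavg()
    print(f"load average: {one:.2f} (1 min), {five:.2f} (5 min), {fifteen:.2f} (15 min)")
```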
Monday, October 19, 2009
Is SCO Waiting for Godot?
Bankruptcy Judge Kevin Gross remarked in his recent ruling that ongoing SCO Group litigation attempts were like a bad version of Samuel Beckett's play, Waiting for Godot. The almost decade-long legal saga gained publicity in the FOSS community for targeting Linux as illegally containing licensed AT&T UNIX System V source code.
Labels: FOSS, instrumentation, Linux, Solaris
Saturday, May 24, 2008
Instrumentierung – die Chance für Linux? (Instrumentation: An Opportunity for Linux?)
My latest article for the German publication Linux Technical Review appears in Volume 8 on "Performance und Tuning" and discusses a possible roadmap for future Linux instrumentation and performance management capabilities. The Abstract reads:
- German: Linux könnte seine Position im Servermarkt ausbauen, wenn es dem Vorbild der Mainframes folgte und deren raffiniertes Performance-Management übernähme.
- English: Linux could be in a position to expand its presence in the server market by looking to mainframe computer performance management as a role model and adapting its instrumentation accordingly.
Topics discussed include a comparison of time-share scheduling (TSS) with fair-share scheduling (FSS) and the Linux Completely Fair Scheduler (CFS), how to achieve a more uniform interface to performance and capacity planning measurements, and the kind of advanced system management capabilities available on IBM System z for both their mainframes and clusters.
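For readers who have not met fair-share scheduling before, the following toy simulation (a sketch of the general idea only, not the article's code and not the kernel's actual CFS implementation) shows what weight-based fairness means: each task accumulates virtual runtime in inverse proportion to its weight (shares), the scheduler always runs the task with the smallest virtual runtime, and the CPU time received ends up proportional to the assigned shares.

```python
# Toy model of weight-based fair scheduling (CFS-like), not the kernel algorithm.
import heapq

def fair_share_sim(weights, timeslice_ms=10, total_ms=10_000):
    """Simulate tasks competing for one CPU; returns CPU ms received per task."""
    received = {name: 0.0 for name in weights}
    # Priority queue keyed on virtual runtime: the smallest vruntime runs next.
    queue = [(0.0, name) for name in weights]
    heapq.heapify(queue)
    elapsed = 0.0
    while elapsed < total_ms:
        vruntime, name = heapq.heappop(queue)
        received[name] += timeslice_ms
        elapsed += timeslice_ms
        # Heavier (higher-weight) tasks accrue vruntime more slowly,
        # so they get scheduled more often.
        heapq.heappush(queue, (vruntime + timeslice_ms / weights[name], name))
    return received

if __name__ == "__main__":
    # Shares of 3:2:1 should yield roughly 50%, 33%, 17% of the CPU.
    print(fair_share_sim({"db": 3, "app": 2, "batch": 1}))
```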
Saturday, May 3, 2008
Object, Time Thyself
For quite a while (6 years, to be exact), I've thought that the only sane way to address the problem of response-time decomposition across multi-tier distributed applications is for the software components to be self-timing. In fact, I just found the following old email, which shows that I first proposed this concept (publicly) at a CMG vendor session conducted by Rational Software (now part of IBM) in 2002:
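A hypothetical sketch of the idea (not the code discussed at that session): each component wraps its own work in a timer and reports the measurement itself, so the end-to-end response time decomposes tier by tier without an external profiler.

```python
# Hypothetical sketch of self-timing software components: each tier records
# its own residence time and reports it, so response-time decomposition
# falls out of the measurements rather than being reverse-engineered later.
import time
from contextlib import contextmanager

TIMINGS = []  # in a real system this would go to a log or a metrics collector

@contextmanager
def self_timed(component, operation):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        TIMINGS.append((component, operation, elapsed_ms))

def handle_request():
    with self_timed("web", "render"):
        with self_timed("app", "business_logic"):
            with self_timed("db", "query"):
                time.sleep(0.02)   # stand-in for the database call
            time.sleep(0.01)       # stand-in for application-tier work
        time.sleep(0.005)          # stand-in for page rendering

if __name__ == "__main__":
    handle_request()
    for component, operation, ms in TIMINGS:
        print(f"{component}:{operation} took {ms:.1f} ms")
```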
Friday, June 15, 2007
Linux Instrumentation: Is Linus Part of the Problem?
I was interested to read, in a LinuxWorld article entitled "File System, Power and Instrumentation: Can Linux Close Its Technical Gaps?", that Linus Torvalds believes the current kernel instrumentation sufficiently addresses real-world performance problems.
This statement would be laughable if he weren't serious. Consider the following:
- How can current Linux instrumentation be sufficient when older UNIX performance instrumentation is still not sufficient?
- UNIX instrumentation was not introduced to solve real-world performance problems. It was a hack by and for kernel developers to monitor the performance impact of code changes in a light-weight O/S. We're still living with that legacy. It might've been necessary, but that doesn't make it sufficient.
- The level of instrumentation in Linux (and UNIX-es) is not greatly different from what it was 30 years ago. As I discuss in Chap. 4 of my Perl::PDQ book, the idea of instrumenting an O/S goes back (at least) to c.1965 at MIT.
- Last time I checked, this was the 21st century. By now, I would have expected (foolishly, it seems) to have at my fingertips a common set of useful performance metrics, together with a common means for accessing them across all variants of UNIX and Linux (see the sketch after this list).
- Several attempts have been made to standardize UNIX performance instrumentation. One was called the Universal Measurement Architecture (UMA), and another was presented at CMG in 1999.
- The UMA spec arrived DOA because the UNIX vendors, although they helped to design it, didn't see any ROI where there was no apparent demand from users/analysts. Analysts, on the other hand, didn't demand what they had not conceived was missing. Such a Mexican standoff was cheaper for the UNIX vendors. (How conveeeeenient!) This remains a potential opportunity for Linux, in my view.
- Rich Pettit wrote a very thoughtful paper entitled "Formalizing Performance Metrics in Linux", which was resoundingly ignored by Linux developers, as far as I know.
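To make the gap concrete, here is a minimal sketch (illustrative only) of what metric collection still looks like in practice: on Linux the aggregate CPU counters are scraped out of /proc/stat, whereas Solaris needs the kstat facility and other UNIX variants have their own mechanisms, each with its own field names and units.

```python
# Minimal sketch of Linux-specific metric scraping; there is no common
# cross-UNIX interface, so the Solaris equivalent would go through kstat.
def read_linux_cpu_ticks(path="/proc/stat"):
    """Return the aggregate CPU tick counters from /proc/stat as a dict.

    The number of fields varies with kernel version; values are in USER_HZ units.
    """
    with open(path) as f:
        for line in f:
            if line.startswith("cpu "):
                fields = ["user", "nice", "system", "idle", "iowait",
                          "irq", "softirq", "steal", "guest", "guest_nice"]
                values = [int(v) for v in line.split()[1:]]
                return dict(zip(fields, values))
    raise RuntimeError("unexpected /proc/stat format")

if __name__ == "__main__":
    ticks = read_linux_cpu_ticks()
    # Count everything except idle as busy (iowait included, for simplicity).
    busy = sum(v for k, v in ticks.items() if k != "idle")
    print(f"busy ticks: {busy}, idle ticks: {ticks['idle']}")
```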