Saturday, May 3, 2008

Object, Time Thyself

For quite a while (6 years, to be exact), I've thought that the only sane way to address the problem of response-time decomposition across multi-tier distributed applications is for the software components to be self-timing. In fact, I just found the following old email which shows that I first proposed this concept (publicly) at a CMG vendor session conducted by Rational Software (now part of IBM) in 2002:

From: "Jeff R." (Rational Performance Test Product Manager, Rational Software)
Date: Fri, 17 Jan 2003 16:11:56 -0500
To: "Neil Gunther"
Subject: RE: CMG Vendor Presn Follow-up

Hello Neil,

I apologize for my delayed response. Thank you very much for attending the Rational Vendor Presentation at CMG 2002 in Reno.

Your idea of component self-timing is very interesting. I am definitely interested in hearing more about this idea. Were you thinking of implementing your concepts via instrumentation of the component source code, or via instrumentation performed automatically during compilation? Or via a different method altogether?

Rational Quantify provides some component timing via object code instrumentation; you are welcome to try out an evaluation copy if you'd like. I'm sure that you have ideas about component timing not covered by Quantify; if you have any original abstracts, papers or prototypes around these ideas, I would definitely be happy to give them an audience here in Rational.



Unfortunately, this contact never went anywhere, in part because I never found the time to write any kind of formal specification, and also because it was not clear how one would get all commercial IDE vendors to support such a thing without the clout of a standards body. At the time, I was hoping that CMG might fulfill that role, but that never happened either. I did continue to promote the concept as an item on my list of Millennium Performance Problems, but eventually I gave up after it never seemed to catch on.

Fast forward to 2008. Last week, during the first Guerrilla Boot Camp class, alumnus Michael Ducy informed me that Orbitz has implemented something close to my proposal, called the "Extremely Reusable Monitoring API," and that their lead architect, Matthew O'Keefe, will be presenting it under the title "Complex Event Processing at Orbitz" at JavaOne next week. I hope to learn more about it as I help Michael write his abstract and paper for CMG 2008.


Tom said...

I'm also looking forward to learning more about what Orbitz shows us. We designed Esper with sensor monitoring as one of its premier use cases.

Thomas Bernhardt

Neil Gunther said...

Another reason this self-timing idea may have been slow to catch on is that it is at variance with object-oriented (OO) philosophy, particularly with regard to the reuse concept. The idea that you invoke a method or procedure or service via an API without needing to know anything about its implementation details is only reasonable under the assumption that the implementation is good. What is 'good'? More importantly (from a performance perspective), what is bad? Bad could mean that the implementation of a given service happens to be the current *bottleneck* (longest service time) in my application. But how can I know that?

Today, I would have to determine that fact with external and often intrusive instrumentation and tracing. Self-timing is a better approach, but it also suggests that one actually *does* need to know something about the OO implementation, or at least have the option of knowing. First, I might want to measure how long a method takes to execute under load in the system (not in a unit test). Second, if that time is too long, I will want to know why by looking at its source code. Very un-OO.
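To make the self-timing idea concrete, here is a minimal Java sketch (all class and method names are my own illustrations, not Orbitz's API): each business method wraps its own implementation with a timer and accumulates its service times, so an external monitor can query the component directly for its mean service time and spot the bottleneck without any external tracing.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a self-timing component. The component itself
// records the service time of each public method it executes.
public class SelfTimingService {
    private final Map<String, Long> totalNanos = new HashMap<>();
    private final Map<String, Long> callCounts = new HashMap<>();

    // A business method that times itself around its own implementation.
    public int lookupFare(int route) {
        long start = System.nanoTime();
        try {
            // ... the real implementation would go here ...
            return 100 + route;  // stand-in for actual work
        } finally {
            record("lookupFare", System.nanoTime() - start);
        }
    }

    private void record(String method, long elapsedNanos) {
        totalNanos.merge(method, elapsedNanos, Long::sum);
        callCounts.merge(method, 1L, Long::sum);
    }

    // Exposes the mean service time (in milliseconds) so a monitor can
    // rank components by service time and identify the bottleneck.
    public double meanMillis(String method) {
        long n = callCounts.getOrDefault(method, 0L);
        return (n == 0) ? 0.0 : totalNanos.get(method) / (n * 1e6);
    }

    public static void main(String[] args) {
        SelfTimingService svc = new SelfTimingService();
        for (int route = 0; route < 5; route++) {
            svc.lookupFare(route);
        }
        System.out.println("mean service time (ms): "
                + svc.meanMillis("lookupFare"));
    }
}
```

The point of the sketch is that the timing lives *inside* the component, behind the same API boundary that OO reuse relies on, so the caller never needs the source code just to find out where the response time is going.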