The correct way to average rates (i.e., inverse-time metrics) is to apply the harmonic mean, not the arithmetic mean. At least that's what the classic computer performance texts, like Arnold Allen (1990) and Raj Jain (1991), tell you.
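As a quick refresher, the textbook case is averaging speeds over equal distances; this sketch (my illustrative numbers, not from either text) shows why the arithmetic mean overstates the average rate:

```python
# Drive 60 km at 30 km/h, then 60 km at 60 km/h.
# The true average speed is total distance / total time,
# which the harmonic mean recovers and the arithmetic mean does not.
speeds = [30.0, 60.0]
n = len(speeds)

am = sum(speeds) / n                   # 45.0 -- overstates the average rate
hm = n / sum(1.0 / v for v in speeds)  # ~40.0 -- matches 120 km / 3 h

true_avg = (60.0 + 60.0) / (60.0 / 30.0 + 60.0 / 60.0)  # distance / time
assert abs(hm - true_avg) < 1e-9
```

The catch, as the slides explain, is that this simple (unweighted) form assumes the rates apply over equal "denominators" (here, equal distances), which is exactly what breaks down for monitored time series.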
My comment wasn't too emphatic, though, because the examples in those books do not refer to averaging monitored data, i.e., time series data. So, I wasn't completely sure about what I was saying. Good thing, too, because the usual form of the harmonic mean doesn't work for time series! That, it turns out, is a very subtle subject, and it's why I decided to use the slide format above to reveal my progressive technical understanding of what does work.
I'll probably have more to say about all this at the upcoming Guerrilla Data Analysis class. In the meantime, comments and questions by all are welcomed.
11 comments:
For sampled count data, wouldn't AM make more sense, just as you point out that it's more like a frequency histogram where the bars are all the same width? The AM times the total duration would be equal to the sum of the individual counts. The open question is whether HM is good for sampled rate data to determine the average rate for a given larger interval, that is, aggregating up; not over the total observation period, but over a multiple of the sampling period. Hope this is making sense to you.
I'm not sure that I do understand the comment.
I thought I did say in Prop 4 that the AM is applicable to sampled data, meaning measurement (or reporting) intervals of fixed width. The total time window can be arbitrary. That will just produce a different AM value.
For HM to apply, the data will not be sampled data because the intervals will not have fixed width. See Prop 2. This will be the case for event data that are only reported when some threshold is reached (e.g., 5000 page views). Reaching that threshold triggers an event which reports that count. The total time window can be arbitrary and will just produce a different HM value.
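To make the distinction concrete, here is a sketch (my hypothetical numbers) of the event-data case: each record is emitted when a fixed count is reached, so the counts are equal but the interval widths vary, and the harmonic mean of the per-interval rates recovers the true average rate:

```python
# Hypothetical event data: a record fires every 5000 page views,
# so counts are fixed and interval widths vary.
count = 5000.0
intervals = [10.0, 25.0, 5.0]            # seconds to reach each threshold
rates = [count / t for t in intervals]   # per-interval rates: 500, 200, 1000

# True average rate over the whole window = total views / total time.
true_rate = count * len(intervals) / sum(intervals)   # 15000 / 40 = 375.0

# The unweighted HM of the rates reproduces it; the AM does not.
n = len(rates)
hm = n / sum(1.0 / r for r in rates)
am = sum(rates) / n                      # ~566.7, biased upward
assert abs(hm - true_rate) < 1e-9
```

With fixed-width (sampled) intervals the situation flips: the counts vary while the widths are equal, and the plain AM of the rates gives total count over total time.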
Maybe you can give me a better example.
I think I found 2 bugs in SlideShare:
1) If you update the original file with more slides than the original had, you don't see the added slides. It truncates at the original number of slides.
2) Internal hyperlinks don't work. External hyperlinks do work.
This occurs with slides in PDF format generated by LaTeX.
Neil---
Good start on a discussion of the means. Fights break out on the subject of which mean should be used for specific tasks. Since you are tackling this controversial subject, I think that you should also discuss the geometric mean and the weighted averages of all three classical means. When working with data in graphite, I discovered that the preaggregated data was derived using a simple arithmetic average. One component comprised 90% of the workload and most of the other seventeen components comprised much less than 2%. A simple arithmetic mean reduced the 90% component to a 5.56% contribution. Components with a contribution less than 0.01% were increased to 5.556%. Needless to say, the performance picture was completely misrepresented.
Bob
Updated with new Section 5: Application to Actual Time Series.
Updated with new Section 6 on applying the weighted harmonic mean to time series.
Just uploaded with new Section 7 on Accommodating Zero Rates.
The harmonic mean made it into the Huffington Post (which claims to offer "fresh takes and real-time analysis from HuffPost's signature lineup of contributors") as a blog topic under the slightly tortured title Mean Questions With Harmonious Answers.
Besides straining at popularization with statements like "The harmonic mean of two numbers A and B is the flip of the average of the flips of A and B" and using rather esoteric names like Aodh and Bea, it never gets beyond my Ad Nauseam Example in Section 2 of the above slides.
Added a Conclusions section as well as various tweaks throughout. In particular, added a section that addresses the question, "When should I apply the harmonic mean to aggregating monitored data?" Answer: When all of the following criteria are met:
R - Rate data is the monitored metric
A - Async (unequal) time intervals
T - Too low data values are of interest
E - Event data, not sampled data
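When the per-interval counts are not all equal, the unweighted HM no longer applies directly; a weighted harmonic mean, with the counts as weights, still recovers the true average rate. This sketch is my illustration of the RATE criteria with hypothetical numbers, not the exact formulation in the slides:

```python
# Hypothetical event data meeting the RATE criteria: rate metric,
# unequal (async) intervals, event-triggered records with varying counts.
counts    = [5000.0, 5000.0, 2500.0]   # events per reporting interval
durations = [10.0, 25.0, 10.0]         # seconds (unequal widths)
rates = [c / t for c, t in zip(counts, durations)]

# Weighted HM with the counts as weights: since count / rate = duration,
# this is just total events / total time.
whm = sum(counts) / sum(c / r for c, r in zip(counts, rates))
assert abs(whm - sum(counts) / sum(durations)) < 1e-9
```

Note that a zero rate breaks the division by `r`, which is why handling zero rates needs the separate treatment given in Section 7.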