This statement would be laughable if he weren't serious. Consider the following:
- How can current Linux instrumentation be sufficient when older UNIX performance instrumentation is still not sufficient?
- UNIX instrumentation was not introduced to solve real-world performance problems. It was a hack by and for kernel developers to monitor the performance impact of code changes in a lightweight O/S. We're still living with that legacy. It might've been necessary, but that doesn't make it sufficient.
- The level of instrumentation in Linux (and the UNIXes) is not greatly different from what it was 30 years ago. As I discuss in Chap. 4 of my Perl::PDQ book, the idea of instrumenting an O/S goes back (at least) to c.1965 at MIT.
- Last time I checked, this was the 21st century. By now, I would have expected (foolishly, it seems) to have at my fingertips a common set of useful performance metrics, together with a common means of accessing them across all variants of UNIX and Linux.
- Several attempts have been made to standardize UNIX performance instrumentation. One was called the Universal Measurement Architecture (UMA), and another was presented at CMG in 1999.
- The UMA spec arrived DOA because the UNIX vendors, although they helped to design it, didn't see any ROI where there was no apparent demand from users/analysts. Analysts, on the other hand, didn't demand what they had not conceived was missing. Such a Mexican standoff was cheaper for the UNIX vendors. (How conveeeeenient!) This remains a potential opportunity for Linux, in my view.
- Rich Pettit wrote a very thoughtful paper entitled "Formalizing Performance Metrics in Linux", which was resoundingly ignored by Linux developers, as far as I know.
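To see why formalization matters, consider how "standard" CPU metrics are actually obtained on Linux today: by parsing an ad hoc, whitespace-delimited text line out of /proc/stat, whose field count has grown over kernel versions and which has no counterpart on other UNIX variants (Solaris has kstat, AIX has perfstat, and so on). The following minimal Python sketch parses one such line; the field names follow the proc(5) documentation, and the sample counter values are invented for illustration.

```python
# Hypothetical sample of the aggregate "cpu" line from /proc/stat
# (values are in USER_HZ ticks; these particular numbers are made up).
sample = "cpu  74608 2520 24433 1117073 6176 4054 0 0 0 0"

# Field names per the proc(5) man page; older kernels emit fewer fields,
# which is exactly the kind of version skew a formal spec would eliminate.
FIELDS = ["user", "nice", "system", "idle", "iowait",
          "irq", "softirq", "steal", "guest", "guest_nice"]

def parse_cpu_line(line):
    """Return a dict of CPU tick counters from one /proc/stat 'cpu' line."""
    parts = line.split()
    # parts[0] is the label ("cpu" or "cpuN"); zip tolerates missing fields.
    return dict(zip(FIELDS, (int(v) for v in parts[1:])))

ticks = parse_cpu_line(sample)
# A common derived metric: busy ticks = total minus idle and iowait.
busy = sum(ticks.values()) - ticks["idle"] - ticks["iowait"]
print(ticks["user"], busy)
```

Every monitoring tool reinvents some variant of this parsing, each with its own field assumptions; that duplication is precisely what a formal metric specification, of the kind Pettit proposed, would make unnecessary.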