Showing posts with label IBM. Show all posts

Wednesday, March 14, 2018

WTF is Modeling, Anyway?

A conversation with performance and capacity management veteran Boris Zibitsker, on his BEZnext channel, about how to save multiple millions of dollars with a one-line performance model that has less than 5% error (at 21:50 in the video). I wish my PDQ models were that good. :/

The strength of the model turns out to be its explanatory power, rather than prediction per se. However, with the correct explanation of the performance problem in hand (which also proved that all the other guesses were wrong), this model correctly predicted a threefold reduction in application response time for essentially no cost. Modeling doesn't get much better than this.

Footnotes

  1. According to Computerworld in 1999, a 32-node IBM SP2 cost $2 million to lease over 3 years. This SP2 cluster was about 6 times bigger.
  2. Because of my vain attempt to suppress details (in the interests of video length), Boris gets confused about the kind of files that are causing the performance problem (near 26:30 minutes). They're not regular data files and they're not executable files. The executable is already running but sometimes waits—for a long time. The question is, waits for what? They are, in fact, special font files that are requested by the X-windows application (the client, in X parlance). These remote files may also get cached, so it's complicated. In my GCAP class, I have more time to go into this level of detail. Despite all these potential complications, my 'log model' accurately predicts the mean application launch time.
  3. Log_2 assumes a binary tree organization of font files, whereas Log_10 assumes a denary tree.
  4. Question for the astute viewer. Since these geophysics applications were all developed in-house, how come the developers never saw the performance problems before they ever got into production? Here's a hint.
  5. Some ppl have asked why there's no video of me. This was the first time Boris had recorded video of a Skype session and he pushed the wrong button (or something). It's prolly better this way. :P
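The 'log model' mentioned in footnotes 2 and 3 can be sketched numerically. As a rough illustration (the file counts and per-level costs here are made up for the sketch, not taken from the video), assume the mean launch time grows with the depth of a b-ary tree of font files:

```python
import math

def mean_launch_time(n_files, branching, t_base, t_per_level):
    """Mean launch time if font lookups walk a b-ary tree:
    depth grows as log_b(N), so each tree level adds a fixed wait."""
    depth = math.log(n_files, branching)
    return t_base + t_per_level * depth

# Hypothetical numbers: 4096 font files, 1 s base cost, 0.5 s per level.
binary = mean_launch_time(4096, 2, 1.0, 0.5)   # log2: 12 levels deep
denary = mean_launch_time(4096, 10, 1.0, 0.5)  # log10: ~3.6 levels deep
```

The distinction in footnote 3 is visible here: the same number of files costs far fewer lookup levels in a denary tree than in a binary one.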

Tuesday, September 6, 2011

How Much Wayback for CaP?

How much data do you need to retain for meaningful capacity planning and performance analysis purposes? Sounds like one of those "how long is a piece of string?" questions and I've never really thought about it in any formal way, but it occurred to me that 5 years is not an unreasonable archival period.

Mister Peabody and Sherman in front of the WABAC machine

My reasoning goes like this:

Wednesday, August 17, 2011

IBM Introduces the Cognitive Chip

Last week, in the GDAT class, we were discussing performance visualization tools as requiring a good impedance match between the digital computer under analysis and the cognitive computer of the analyst—AKA the brain.

Sunday, August 30, 2009

Seeing Molecules: Kekulé's Dream Writ Large

As a chemist in a former life, I can't help but comment on this watershed moment in science, even though it's probably been blogged to death. Nanotechnologists at IBM Zürich have imaged the naturally occurring organic molecule pentacene (essentially, 5 benzene ring-molecules bolted together in a row). Why is this a big deal?

Thursday, April 2, 2009

Modern Microprocessor MIPS

The question of how modern microprocessors compare with mainframe processors of yore arises from time to time. The vernacular rate metric that has persisted for a long time (long in the history of computers, that is) is MIPS. Whether you approve of MIPS as a valid performance metric or not is a different (philosophical) question. Since the mainframe has not gone away---it's just another server on the network today---even mainframers still talk about MIPS ratings. Nonetheless, it is true that the meaning of "instructions" varies significantly across architectures, so one does have to exercise caution when making inter-architectural comparisons and not endow any conclusions with more credibility than they deserve.
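To make that caveat concrete, here is a minimal sketch (with hypothetical instruction counts, not measured data) of why a raw MIPS figure can mislead across architectures:

```python
def mips(instructions, seconds):
    """Millions of instructions retired per second."""
    return instructions / (seconds * 1e6)

# The same workload built for two hypothetical ISAs: the RISC-style build
# retires more (simpler) instructions in the same wall-clock time, so its
# MIPS rating is 2.5x higher even though the delivered work is identical.
risc_rating = mips(6.0e9, 2.0)  # 3000 MIPS
cisc_rating = mips(2.4e9, 2.0)  # 1200 MIPS
```

Same work done, very different MIPS: the metric rates instruction traffic, not useful throughput.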

Thursday, March 19, 2009

IBM Might Swallow the Sun


"Shares of Sun Microsystems, which makes the Java software that runs many Internet applications, were up 78.9 percent after reports that it was in talks to be acquired by I.B.M. Shares of Sun ended at $8.89. I.B.M. was down 1 percent, to $91.95."
I heard this rumor at the Portland CMG meeting yesterday. Apparently, Sun has been quietly "looking for a date" for some time. Presumably, IBM's main interest is in Java IP. Will Solaris replace AIX (under the covers)?

I had a long-standing theory that Sun Microsystems would be bought by Fujitsu Corp to simply milk Solaris service contracts for the next 10 years. It's not interesting innovation, but it is a business. Sun has always managed to have enough cash in the bank to be able to forestall such a move, but now, they're out of gas.

Update: Why an IBM purchase of Sun would make sense (cnet)

Monday, May 5, 2008

Microsoft Discovers the Dumpster Datacenter

OK, not exactly a dumpster but something slightly bigger; a shipping container. Hello!? Google has been developing this concept for years with Sun and IBM not far behind in adopting it. The new wrinkle is that Google has now been awarded a patent on it.


Supply Chain Factoid: There are so many more (full) shipping containers coming from Asia to the USA and Europe than going the other way, that it is less cost-effective to store the empties than to simply scrap them and make new containers as needed.

Friday, February 1, 2008

Penryn Performance. The Envelope Please!

Having kept pretty much to its promised schedule for the Penryn microprocessor ...

Intel is now significantly ahead of the industry with the production of 45 nm parts using the new high-K dielectric materials. The claims for the new Hafnium oxide-metal gate technology included:

  • Approximately 2x improvement in transistor density, for either
    smaller chip size or increased transistor count

  • Approximately 30% reduction in transistor switching power

  • Better than 20% improvement in transistor switching speed or 5x
    reduction in source-drain leakage power

  • Better than 10x reduction in gate oxide leakage power


Wednesday, August 29, 2007

Solaris to Shine on the Mainframe (Say what!?)

Quite apart from the surprise over what passes for physics these days, PhysOrg.com recently reported on a surprise deal that will enable Sun's Solaris operating system to run on IBM servers.

Initially, the agreement will involve only IBM's (AIX) mid-range servers, which can also run the Windows and Linux operating systems, but eventually, so the report says, IBM hopes to bring Solaris to the mainframe. I assume this means it will run in a z/OS LPAR, like they do with Linux. If I take the view (and I do) that the mainframe is not a "dinosaur" but just another (excellent data processing) server on the network, one wonders where this leaves future Sun hardware platforms.

Add to this the growing emphasis by Sun to deploy Intel and AMD microprocessors for cost reasons and, as Jonathan Schwartz says, it "represents a tectonic shift in the market landscape." No kidding! I just wonder whether Schwartz will be riding the plate that stays on top or the plate that goes under.

Wednesday, June 13, 2007

Linux CFS: Completely Fair or Completely Fogged?

A while ago, I saw what looked like an interesting blog entry announcing a new task scheduler for Linux called CFS: the Completely Fair Scheduler. I wanted to compare CFS with Fair Share scheduling (FSS); something that took off in the 1990s for UNIX operating systems and something I've looked into from a performance perspective, especially because FSS provides the fundamental resource allocation mechanism for most VMM hypervisors.
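The family resemblance between FSS and CFS can be sketched in a few lines. This is my own toy illustration, not code from the Linux kernel: both schemes boil down to running the task that has consumed the least CPU time per allocated share (CFS tracks this quantity as a per-task "vruntime"):

```python
def pick_next(tasks):
    """Fair-share selection: choose the runnable task with the
    smallest normalized usage (CPU seconds consumed per share)."""
    return min(tasks, key=lambda t: t["cpu"] / t["shares"])

tasks = [
    {"name": "A", "shares": 1, "cpu": 30.0},  # 30 per share
    {"name": "B", "shares": 2, "cpu": 40.0},  # 20 per share: most underserved
    {"name": "C", "shares": 1, "cpu": 25.0},  # 25 per share
]
winner = pick_next(tasks)  # task B runs next
```

Task B has used the most absolute CPU time, yet it runs next because its entitlement (shares) is double the others' — exactly the behavior that distinguishes fair-share from simple round-robin.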

Friday, May 25, 2007

My Message to Virtualization Vendors

Virtualization is about creating illusions (see Chapter 7 in Guerrilla Capacity Planning). However, vendors need to recognize that virtualization is a double-edged sword.

Constructing illusions by hiding physical information from users is one thing, propagating that illusion to the performance analyst or capacity planner is quite another, and considered harmful.

Presumably, it's also potentially bad for business, in the long run. This unfortunate situation has arisen for one of the following reasons:

  1. The performance data that is available is incorrect.

    Example: Enabling hyperthreading on a Xeon processor misleads the operating system, and thereby performance tools, into treating the single core as 2 virtual processors. This means that many performance management tools will conclude and report that your system has 200% processor capacity available. But this is an illusion, so you will never see 200% processor capacity.

  2. The correct performance data is not made available.

    Example: With hyperthreading enabled, there should be a separate register or port that allows performance management tools to sample the actual utilization of the single physical core (AKA the execution unit). The IBM Power-5 has something called the PURR register that performs this role.
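The gap between points 1 and 2 can be put in numbers with a back-of-envelope sketch (the 1.25 speedup factor below is my assumption; hyperthreading gains are typically quoted in the 20-30% range, not 100%):

```python
def reported_capacity(logical_cpus):
    """What a naive tool shows: 100% per logical processor."""
    return 100.0 * logical_cpus

def effective_capacity(physical_cores, ht_speedup=1.25):
    """Rough physical throughput: the second hardware thread adds
    perhaps 25% more work per core, not another whole core."""
    return 100.0 * physical_cores * ht_speedup

shown = reported_capacity(2)  # the tool reports 200% capacity available
real = effective_capacity(1)  # the core actually delivers ~125%
```

The 75-point gap between what is reported and what the execution unit can deliver is precisely the capacity-planning trap described above.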


There are many examples of this kind of mangled performance shenanigans in the virtualized world, especially in what I call the meso-VM (PDF) level such as VMware and XenSource. The good news there is, since it's software, it's easier to modify and therefore more likely that actual performance data will become exposed to the analyst.

In other words:

Fewer bells, more whistles

should be the watchword for virtualization vendors.

Wednesday, May 23, 2007

More on Moore

In an opinion piece in this month's CMG MeasureIT e-zine, I examine what is possibly going on behind the joint IBM-Intel announcement and the imminent release of 45 nm 'Penryn' parts in CMOS.

Some related blog entries are: