This article was originally posted in 2007. When I updated the image today (in 2014), it reappeared with the more recent date and I don't know how to override that (wrong) timestamp. This seems to be a bug in Blogger.
In the aftermath of a discussion about software management, I looked up the Mythical Man-Month concept on Wikipedia. The main thesis of Fred Brooks, often referred to as "Brooks's law [sic]," can simply be stated as:
Adding manpower to a late software project makes it later.

In other words, some number of cooks is necessary to prepare a dinner, but adding too many cooks to the kitchen can inflate the delivery schedule.
The wiki page goes on to explain that this inflation effect is due to "the time required for the new programmers to learn about the project, as well as the increased communication overhead" and cites an example: 50 developers implies 50 × (50 − 1)/2 = 1225 possible conversations between them, each with some fixed cost in time. I don't know whether the example comes from Brooks' book; I borrowed it from the Xerox PARC library 20 years ago, so I don't have my own copy, but I don't recall there being many equations in it. Nonetheless, I recognized the combinatorial term as the same one that appears in my quadratic scalability model in Chap. 14 of The Practical Performance Analyst, and that led me to wonder whether Brooks' law could be expressed mathematically. Here, in a nutshell, is what I came up with.
- Ideal Schedule Contraction
Suppose an individual programmer can complete a project in time T. If the work can be equi-partitioned, then p = 2 programmers should complete the project in half the time, p = 3 people in one third of the time, and so on. The project schedule is contracted according to the hyperbolic function shown in Fig. 1. If we think of the programmers as processors, this effect is logically equivalent to ideal parallelism.
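For readers who prefer to see this as code, the ideal contraction is just T/p. Here's a minimal Python sketch (the value T = 100 days is purely illustrative, not taken from the figures):

```python
# Ideal schedule contraction: with perfectly divisible work,
# p programmers finish in T/p (hyperbolic decay, as in Fig. 1).
# t1 = 100.0 is an arbitrary illustrative single-programmer time.

def ideal_schedule(p: int, t1: float = 100.0) -> float:
    """Completion time with p programmers under perfect work division."""
    return t1 / p

for p in (1, 2, 3, 4):
    print(p, ideal_schedule(p))
```

With p doubled the schedule halves, exactly the ideal-parallelism behavior described above.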
- Round-Table Meetings
I'm not sure if Brooks considers this case but, suppose that round-table or group meetings are called to exchange status and other information. Assume each person's report takes the same amount of time, on average.
Such group meetings introduce a fixed cost in time below which the schedule cannot be contracted further. See Fig. 2. This is the result of everyone needing to down tools and spend time either reporting to the rest of the group or listening to the current speaker as they go around the table.
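As a hedged sketch of this effect: suppose a fraction sigma of the single-programmer time T is serialized by group meetings. Then the schedule becomes T(p) = (1 − sigma)T/p + sigma·T, which can never drop below sigma·T. The parameter values below are illustrative, not measurements:

```python
# Fixed-cost meeting model (my parameterization for illustration).
# A fraction sigma of the work is serialized by group meetings, so the
# schedule floors at sigma * t1 no matter how many programmers are added.

def schedule_with_meetings(p: int, t1: float = 100.0, sigma: float = 0.25) -> float:
    """Completion time with p programmers and a serialized meeting fraction."""
    return (1 - sigma) * t1 / p + sigma * t1

print(schedule_with_meetings(1))    # 100.0
print(schedule_with_meetings(10))   # 32.5
print(schedule_with_meetings(100))  # approaching the floor at sigma * t1 = 25.0
```

The curve decays toward the asymptote sigma·T but never reaches it: the meeting cost sets a hard lower bound on the schedule, as in Fig. 2.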
- One-on-One Conversations
This is the case specifically attributed to Brooks.
Adding the cost of pairwise communications (e.g., email, IM, phone, cube discussions) introduces a minimum into the schedule-contraction curve that sits higher than the fixed-cost asymptote in Fig. 2. Because there is a minimum, the completion time actually increases as more people are added to the project beyond that point.
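One way to model this (my parameterization for illustration, not necessarily the one in Brooks' book) is to charge each programmer a pairwise-conversation cost proportional to the other (p − 1) team members, giving T(p) = (1 − sigma)T/p + sigma·T + kappa·T·(p − 1). With illustrative sigma and kappa values, the minimum appears at a small team size and the schedule lengthens thereafter:

```python
# Pairwise-communication model: each programmer converses with the other
# (p - 1) programmers, adding kappa * t1 * (p - 1) to the elapsed schedule.
# sigma and kappa are illustrative values, not fitted to any data.

def schedule_with_conversations(p: int, t1: float = 100.0,
                                sigma: float = 0.25, kappa: float = 0.12) -> float:
    """Completion time with meetings plus pairwise communication costs."""
    return (1 - sigma) * t1 / p + sigma * t1 + kappa * t1 * (p - 1)

times = {p: schedule_with_conversations(p) for p in range(1, 11)}
p_min = min(times, key=times.get)
print(p_min)  # the schedule bottoms out at a small team, then grows again
```

Past the minimum, every additional programmer makes the late project later, which is Brooks' law in miniature.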
- Combined Productivity
Here's the kicker. If we combine all these effects and consider productivity, rather than scheduling delay, we get the following curve. Hopefully, many of you will recognize this as the Universal Law of Computational Scalability discussed in Chapters 4-6 of my Guerrilla Capacity Planning book. In that sense, it contains Brooks' law, though I wasn't aware of the connection until now.
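The USL expresses relative capacity (productivity) as C(p) = p / (1 + sigma·(p − 1) + kappa·p·(p − 1)), where sigma is the contention (serialization) term and kappa the coherency (pairwise exchange) term. A short sketch with illustrative parameter values:

```python
# Universal Scalability Law: relative capacity for p programmers.
# sigma = contention (serialized meetings), kappa = coherency (pairwise
# conversations). Parameter values are illustrative, not fitted to data.

def usl_capacity(p: int, sigma: float = 0.25, kappa: float = 0.12) -> float:
    """Relative productivity of p programmers under the USL."""
    return p / (1 + sigma * (p - 1) + kappa * p * (p - 1))

caps = {p: usl_capacity(p) for p in range(1, 11)}
p_best = max(caps, key=caps.get)
print(p_best)  # productivity peaks at a small team, then declines
```

Setting kappa = 0 recovers the Amdahl-like curve with an asymptote; the quadratic kappa term is what turns the asymptote into a genuine peak followed by decline.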
It's interesting to note that for the parameters chosen in Figs. 3 and 4 (roughly, email or other conversations totaling 15 minutes out of each hour), the optimal number of programmers is between 2 and 3, which concurs with the claims of the extreme programming (XP) approach.
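A standard USL result is that the productivity peak occurs near p* = sqrt((1 − sigma)/kappa). Plugging in illustrative values of sigma = 0.25 and kappa = 0.12 (roughly consistent with a quarter of each hour spent communicating) puts p* between 2 and 3:

```python
import math

# Optimal team size under the USL: p* = sqrt((1 - sigma) / kappa).
# sigma = 0.25 and kappa = 0.12 are illustrative values only.

def optimal_team_size(sigma: float = 0.25, kappa: float = 0.12) -> float:
    """Continuous team size at which USL productivity peaks."""
    return math.sqrt((1 - sigma) / kappa)

print(optimal_team_size())  # 2.5 programmers
```

Since team sizes are integers, the practical optimum sits at 2 or 3 programmers, matching the XP-style claim above.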
Finally, for those who missed it, Fig. 2 is logically equivalent to Amdahl's law.
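In speedup form, Amdahl's law with serial fraction sigma reads S(p) = 1 / (sigma + (1 − sigma)/p), which is just the fixed-cost meeting model restated as a speedup rather than a schedule. A one-liner sketch with an illustrative sigma:

```python
# Amdahl's law: speedup with p workers and serial fraction sigma.
# sigma = 0.25 is an illustrative value; speedup is bounded by 1/sigma = 4.

def amdahl_speedup(p: int, sigma: float = 0.25) -> float:
    """Speedup of p workers when a fraction sigma of the work is serial."""
    return 1.0 / (sigma + (1 - sigma) / p)

print(amdahl_speedup(4))  # 2.2857..., well short of the ideal speedup of 4
```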
4 comments:
There are actually better mathematical models of programmer productivity, developed by Lawrence Putnam not too long after Fred Brooks wrote "The Mythical Man Month".
See, for example, http://en.wikipedia.org/wiki/Putnam_model
By the way ... if Putnam can have his model on Wikipedia, why isn't the USM there??
Quite nicely put; however, there is a complication. If we define development as delivering faultless function, there is a minimum number of programmers needed to catch the errors before production. A single programmer may be very productive in producing code, but catching errors should be part of the effort.
Despite the title, the real purpose of this post was to demonstrate that the USL throughput model can also be understood in terms of latencies, which includes something akin to Brooks' law.
agreed