The effect that Peter observed can be replicated by choosing N = 60 while keeping M = 3. A plot of the corresponding Pr(j|n) shows that the mean queue length (the peak location) moves to 48.42 users.

However, at certain points in the calculation of Pr(j|n) by:
$pq[0][$n] -= $pq[$j][$n];
this choice of parameters forces the subtraction of two nearly equal, and very small, numbers: the classic setup for catastrophic cancellation.

As a consequence, the limitations of floating-point representation begin to show up as tiny negative values for the probabilities, which are disallowed by definition. NOTE: This effect is not caused by PDQ. Rather, it is a classic case of the type discussed in "Numerical Recipes". The problem stems from the finite internal representation of numbers in general, and of non-exact numbers in floating-point format in particular.
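The hazard is easy to reproduce in any language that uses IEEE 754 doubles, which includes both Perl and Python. Here is a minimal Python sketch of the cancellation effect (an illustration only, not Peter's PDQ code):

```python
# Illustration only: subtracting two nearly equal doubles can turn a
# mathematically non-negative quantity into a tiny negative one.

# Mathematically, 0.3 - (0.1 + 0.2) is exactly zero, but none of these
# decimal fractions is exactly representable in binary floating point.
residual = 0.3 - (0.1 + 0.2)

print(residual)        # a tiny NEGATIVE number, around -5.55e-17
print(residual < 0.0)  # True: a "probability" computed this way goes negative
```

The same mechanism is at work in the Perl subtraction above: each `$pq[$j][$n]` is individually fine, but their difference is smaller than the rounding error carried by the operands.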
The table at left shows explicitly how the degree of error can be traded off against computation time. The left column shows the original effect: negative probability values near zero queue length when calculated with fast machine-precision FP. The middle column shows what happens when a numerical cut-off condition, like the one discussed in "Mastering Algorithms with Perl", is introduced. Peter implemented the cut-off for fesc.pl as:
$eps = 1.0e-10;
...
$pq[0][$n] -= $pq[$j][$n];
if ( abs($pq[0][$n]) < $eps ) {
    $pq[0][$n] = 0.0;
}
and this is probably the most expeditious method for PDQ models (in any language).
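The cut-off is easy to mirror outside Perl. A Python sketch of the same clamp (the threshold 1.0e-10 is Peter's choice; the right value depends on the scale of your model's probabilities):

```python
EPS = 1.0e-10  # cut-off threshold, matching Peter's choice in fesc.pl

def clamp(x, eps=EPS):
    """Snap values within eps of zero to exactly zero.

    This removes the tiny negative probabilities produced by
    floating-point cancellation, at the cost of a small bias.
    """
    return 0.0 if abs(x) < eps else x

# The cancellation residue from a subtraction like the one above...
p = 0.3 - (0.1 + 0.2)   # tiny negative value, around -5.55e-17
print(clamp(p))          # 0.0: the illegal negative probability is gone
print(clamp(0.5))        # 0.5: ordinary values pass through unchanged
```

Note that the clamp is applied to the result of the subtraction, not to the operands; clamping the operands would not help, since each of them is well above the threshold.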
In the above table, I reproduced the same approach using the Mathematica parameter:
At the level of machine precision (the fastest), FP numbers can go wobbly on you. The black line shows the effect of introducing a cut-off: it jumps sharply up from zero (hence the name). The blue line shows the more graceful, and more computationally intensive, effect of calculating with arbitrary-precision arithmetic. Note carefully the magnitudes on the y-axis.
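Mathematica's arbitrary-precision arithmetic has a rough stand-in in most languages. For instance, Python's standard decimal module sidesteps this particular cancellation entirely (a sketch only; it is not equivalent to Mathematica's significance arithmetic, and it is much slower than machine FP):

```python
from decimal import Decimal

# Constructing Decimals from strings keeps the decimal fractions exact,
# so the subtraction that went negative in binary FP is now exactly zero.
residual = Decimal("0.3") - (Decimal("0.1") + Decimal("0.2"))

print(residual)  # 0.0: no spurious negative probability
```

This is the trade-off the blue line illustrates: exactness (or extra digits of precision) in exchange for considerably more computation time.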
There is no easy way around this effect, so one has to remain vigilant with all computations, no matter how they are done. Even standards like IEEE 754 FP cannot prevent these problems. At least in this case, the negative probabilities provided a tip-off.