There are two sides to the performance modeling coin:
- Your data. To apply the USL model, this merely requires gathering a set of throughput measurements. Let's call them: X(1), X(2), X(3), ..., X(N), where N is the number of virtual users, for example.
- Your model. You are free to choose any model you like (but choose carefully). In my Guerrilla Capacity Planning book I claim the USL is the best choice for quantitatively assessing application scalability.
C(N) = N / [1 + α (N − 1) + β N(N − 1)]
The left-hand side indicates how your raw data are first normalized, C(N) = X(N)/X(1), to produce the actual values of C(N). The right-hand side indicates how the USL model provides the expected values of C(N).
Your data are matched to the USL model by adjusting the value of the α and β parameters to obtain the best fit. Technically, that's accomplished through the magic of statistical regression. This video shows how it's done for the case of a simple linear model rather than the nonlinear USL model.
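As a minimal sketch of that regression step, here is the fit done in Python with `scipy.optimize.curve_fit` (nonlinear least squares). The throughput numbers are hypothetical, purely for illustration; only the USL functional form comes from the text above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical throughput measurements X(N) taken at N virtual users.
N = np.array([1, 2, 4, 8, 16, 32], dtype=float)
X = np.array([100.0, 190.0, 350.0, 590.0, 860.0, 1050.0])

# Normalize the raw data: C(N) = X(N) / X(1), so C(1) == 1 by construction.
C = X / X[0]

def usl(n, alpha, beta):
    """USL model: C(N) = N / [1 + alpha(N - 1) + beta N(N - 1)]."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# Adjust alpha (contention) and beta (coherency) for the best fit.
(alpha, beta), _ = curve_fit(usl, N, C, p0=[0.01, 0.001], bounds=(0, 1))
print(alpha, beta)
```

The `p0` starting guesses and `bounds=(0, 1)` simply keep both parameters in their physically meaningful range.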
Graphically, the procedure looks something like this.
Notice how the roll off, or maximum, in the relative throughput C(N) only manifests at user loads (N) much higher than those actually measured. Magic! That kind of look-ahead insight is what makes USL modeling important.