Saturday, May 5, 2007

ORACLE Scalability Oracles

For those of you concerned with ORACLE 10g performance, there are a couple of books by ORACLE oracles that you might find useful, especially with regard to ORACLE scalability.

  • Cary Millsap, "Optimizing Oracle Performance" (O'Reilly, 2003).
    On p. 361, there is a definition of scalability expressed in terms of the response time (R). This stands in contrast to the way I developed my Universal Scalability Law, which is based on a kind of normalized throughput. It also reminded me that I did discuss query scalability starting on p. 195 of my book The Practical Performance Analyst. The same example is also presented in my more recent Perl::PDQ book. For those of you who haven't seen it, here's how it works.

    Consider a data mining application that runs on some large-scale parallel system (e.g., an NCR Teradata machine) with P processors and gobs of disks. The capacity planning question is: Which database configuration produces the shortest average query times? Suppose a single query takes time T to complete on a single processor. You'd like it to take half that time (T/2) if there were 2 processors to handle the query in parallel, a quarter of the time (T/4) if there were 4 parallel processors, and so on; all of this assumes that the database tables can be re-striped across the spindles belonging to the parallel query platform. In general, therefore, you expect the original query time to be reduced by the number of parallel processors you can throw at it:

    T(p) = T/p


    This reduction in time is shown in the plot, with T(1) = 8 minutes for each query. Clearly, you get seriously diminished returns on the query-time reduction as the number of parallel processors increases, i.e., as the curve flattens out toward the x-axis. At some point, it becomes a waste of money to add more processing power. Can we determine that configuration? The answer is yes, and here is how you can do it.
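
    To make those diminishing returns concrete, here is a minimal sketch in plain Python (my choice of language for illustration; neither book uses it here) that tabulates T(p) = T/p for the T(1) = 8 minute query above:

      # Ideal parallel query scaling: T(p) = T / p
      T1 = 8 * 60.0  # single-processor query time in seconds (8 minutes)

      prev = None
      for p in (1, 2, 4, 8, 16, 32, 64):
          Tp = T1 / p  # ideal query time on p parallel processors
          gain = (prev - Tp) if prev is not None else 0.0
          print(f"p = {p:3d}   T(p) = {Tp:7.2f} s   saved by doubling = {gain:6.2f} s")
          prev = Tp

    Each doubling of the processor count saves only half as much time as the previous doubling did, which is why the curve hugs the x-axis so quickly.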

    Assume that you have a front-end processor (FEP) that does some pre-processing before the actual query is executed. This pre-processing may involve a query planner/optimizer and so on. The database tables themselves are striped across the disks associated with P back-end processors (BEPs). The three steps needed to determine the optimal BEP configuration are:

    1. Determine the pre-processing time (TFEP) on its own, e.g., 6.7 milliseconds.
    2. Determine the maximum throughput (XBEP) that a single BEP is capable of, e.g., 100 queries per minute.
    3. Convert all the performance metrics to the same timebase, viz. seconds. The optimal configuration is the number of BEPs whose combined capacity just matches the maximum FEP throughput (1/TFEP); beyond that point the FEP becomes the bottleneck. That is:

      Popt = 1 / (TFEP * XBEP) = 1 / [ 0.0067 * (100/60) ] = 89.55 BEPs


    Rounding to an integer number of processors, we have Popt = 90 BEPs as the optimal configuration. Using PDQ, it can be shown that this configuration has an aggregate throughput of almost 3,700 queries per minute, with each query now taking only T(90) = 3 seconds.
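
    As a quick sanity check on that arithmetic, here is the Popt calculation as a minimal Python sketch (plain arithmetic only; it does not reproduce the full PDQ queueing model from the books):

      # Optimal BEP count: Popt = 1 / (TFEP * XBEP), i.e., the point
      # where the aggregate BEP capacity matches the maximum FEP
      # throughput of 1/TFEP queries per second.
      T_FEP = 6.7e-3      # FEP pre-processing time per query (seconds)
      X_BEP = 100 / 60.0  # max throughput of one BEP (queries per second)

      P_opt = 1.0 / (T_FEP * X_BEP)
      print(f"Popt = {P_opt:.2f} BEPs")  # prints: Popt = 89.55 BEPs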


  • Craig Shallahamer, "Forecasting Oracle Performance" (Apress, 2007), refers to my super-serial scalability model (acknowledged on p. 242), and the use of that terminology tells me his content draws largely on Chapter 6 of The Practical Performance Analyst. That earlier treatment of mine is now called the Universal Scalability Law and is discussed in Chapters 4-6 of my new Guerrilla Capacity Planning book.
