Manual optimizations.

    Like almost every piece of software intended for general use, InterBase supports manual optimizations. While the aim of the original creator and implementor was to make these hand-picked solutions obsolete, the engine up to version 6.0 still needs the database programmer to roll up his/her sleeves when performance has to be tuned for specific requirements.

While it's true that IB is able to run out of the box with acceptable performance, this doesn't preclude people from taking further steps to tailor the engine for specific needs. As is the case with almost any relational engine, there are different types of customizations:

A) The external high level optimizations, which may result in having more than one IB server running and dispatching the service requests. This means using middleware to keep a connection pool and apply load balancing strategies along with replication. Examples of such middleware are MIDAS, ASTA and dbOvernet. Some basic replication facilities have been written in the past for IB. Now, with IB6, a third-party, commercial-grade replication facility for InterBase is offered. It was intended to be part of the commercial IB6, but since it was developed by a partner company and IB6 will be free of royalties and license fees, it's offered as a shareware solution.

B) The external low level optimizations, as described in the Operations Guide and in a classic paper by Bill Karwin. Since IB by default is tuned for a very small footprint, it uses little RAM for each database in use and allocates additional disk space in small chunks.
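As a rough sketch of this kind of tuning (the database name and the numbers below are only examples, not recommendations), the page size is fixed when the database is created and the cache can be enlarged afterwards with gfix:

    /* Create the database with a larger page size; IB accepts 1024, 2048,
       4096 or 8192 bytes per page. */
    CREATE DATABASE 'bigdata.gdb' PAGE_SIZE 8192;

    /* Enlarge the default cache for that database from the command line:
         gfix -buffers 2048 bigdata.gdb
       Each buffer holds one page, so 2048 buffers at 8 KB each give 16 MB
       of cache instead of the tiny default. */

Bigger pages mean fewer fetches for wide rows and deep indexes, while more buffers keep frequently visited pages in RAM at the cost of a larger memory footprint.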

C) The internal high level optimizations, which mean taking explicit control of the query plan used to access data and adding some indexes. Currently, only the SELECT statement allows explicit plans, so for deletions and updates another approach has to be used. Indexes should be handled with care: more of them doesn't imply better performance. Indexes should be added to speed up data access with WHERE and HAVING conditions and not for ordering, because ordering can be done by a sort algorithm rather than rattling the disk with random accesses. Also, many indexes on a table usually confuse the query optimizer, which then cannot pick the best index; this pushes people to take explicit control of the automatically generated query plans, and we end up defeating the purpose of a declarative language such as SQL. Remember that while indexes can be a godsend for data access with complicated filtering conditions, each index must be maintained, so the engine incurs extra work when rows are updated, deleted or inserted. Simply put, the "index farm" is not the way to go in IB as a general rule. Several people have observed CPU spikes as a result of several indexes on a table, especially if some of them include fields used by other indexes. Ironically, after these high CPU usages, the selected plan usually is the worst.
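As a hedged illustration of both points, assume a hypothetical ORDERS table that is usually filtered by CUST_ID. An index aimed at that WHERE condition is added first, and only if the optimizer still picks a poor plan is the index forced through the PLAN clause of the SELECT:

    /* Index created for the filtering condition, not for ordering. */
    CREATE INDEX idx_orders_custid ON orders (cust_id);
    COMMIT;

    /* If the automatic plan is poor, name the index explicitly; the table
       alias (o) must be used inside the PLAN clause. */
    SELECT o.order_id, o.order_date, o.total
    FROM orders o
    WHERE o.cust_id = 1234
    PLAN (o INDEX (idx_orders_custid));

Before resorting to a hard-coded plan, it is often enough to refresh the index selectivity with SET STATISTICS INDEX idx_orders_custid; so the optimizer can make a better choice on its own.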

D) The internal low level optimizations, which mean writing some statements as executable stored procedures and selectable stored procedures and, in some cases, using positional updates. See the documentation section on positional updates through rdb$db_key, which is the raw position of a record in a database.
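A minimal sketch of the technique, assuming a hypothetical ITEMS table, is an executable stored procedure that reads each row's rdb$db_key once and then updates the row by position instead of repeating the search condition:

    SET TERM !! ;
    CREATE PROCEDURE raise_prices (pct INTEGER)
    AS
      DECLARE VARIABLE dbk CHAR(8) CHARACTER SET OCTETS;
    BEGIN
      /* Walk the rows that match the filter and keep their raw positions. */
      FOR SELECT rdb$db_key FROM items
          WHERE discontinued = 0
          INTO :dbk
      DO
        /* Positional update: no second search for the same row. */
        UPDATE items SET price = price + (price * :pct / 100)
        WHERE rdb$db_key = :dbk;
    END !!
    SET TERM ; !!

Keep in mind that by default a db_key is only guaranteed to remain valid for the duration of the transaction that read it, so this pattern belongs inside a single procedure call rather than being carried across transactions.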

 
