How large databases? - 06-13-2004, 12:15 PM
What do you know about benchmarks
for MySQL, MS SQL and Oracle?
Specifically, response time (for SELECT).
Re: How large databases? - 06-13-2004, 01:24 PM
"Pawel" <pawelec111 (AT) op (DOT) pl> wrote
That is a pretty useless statistic. It will vary by OS, CPU, number of
disks, speed of disks, what the query is, what indexes, the cache strategy,
what cardinality of the data, how it is arranged on disk, how it is stored
on disk, etc. Pretty meaningless. For example, in Oracle I can define a
hash cluster on the primary key and if you look up the data by the primary
key I can get it down to 1 disk IO to retrieve the data. The table can be
very, very large. But there are disadvantages to this also. I have to have
an idea of the upper limit of the table size or I would get hash key
collisions. You could look at the TPC benchmarks, but you really need to
define what your business needs are and choose a system that best fits those needs.
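For what it's worth, a bare-bones sketch of the hash cluster idea above might
look like this in Oracle DDL (the cluster, table, columns and sizing numbers
are all invented for illustration; HASHKEYS would come from your expected
upper bound on rows):

    CREATE CLUSTER customer_hc (customer_id NUMBER)
      SIZE 256            -- bytes reserved per cluster key
      SINGLE TABLE
      HASHKEYS 1000000;   -- upper limit on distinct keys, hence the size guess

    CREATE TABLE customers (
      customer_id  NUMBER PRIMARY KEY,
      name         VARCHAR2(100),
      balance      NUMBER
    )
    CLUSTER customer_hc (customer_id);

A lookup such as SELECT ... WHERE customer_id = :id then hashes straight to
the block holding the row, which is where the single-disk-IO behaviour comes
from.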
Re: How large databases? - 06-13-2004, 02:38 PM
Jim Kennedy wrote:
(See the TPC site for the benchmarks.) These are run under controlled conditions,
*but* the database and the hardware have been fine-tuned
by experts in the appropriate DBMS and disk technology.
e.g., the "meta-data" (the stuff which defines what the rest
of the tables look like) is probably clustered in the
center of the least-used disk, I/O-wise; indices and data for
the same table are on separate physical disk drives (or even
separate disk arrays); log-file disks (arrays) are tuned for
sequential access while data disks (arrays) are tuned for
random access, etc., etc., etc.
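As a concrete sketch of just the data/index split (Oracle DDL; the datafile
paths, names and sizes are invented, the only point being that the two
tablespaces sit on different physical devices):

    CREATE TABLESPACE app_data
      DATAFILE '/disk1/oradata/app_data01.dbf' SIZE 2000M;

    CREATE TABLESPACE app_indx
      DATAFILE '/disk2/oradata/app_indx01.dbf' SIZE 1000M;

    CREATE TABLE orders (
      order_id     NUMBER,
      customer_id  NUMBER,
      order_date   DATE
    ) TABLESPACE app_data;

    CREATE INDEX orders_id_ix ON orders (order_id) TABLESPACE app_indx;

The benchmark configurations do this sort of thing everywhere, not just for
one table.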
It is highly unlikely that someone in your own shop could
get the same kind of performance as the benchmarks for any
of these products.
No DBMS gives optimal performance "out of the box" for
every application. As Jim said, what are your business
needs? What kinds of accesses does your application intend
to do (transactions vs. reports, for example)?
(If you're planning to do 7-way outer joins against
a whole lot of tables, then don't expect *any* DBMS
to meet your expectations.)
In addition, AFAICT, the number of tables in the database
is probably *not* a good measure of DBMS performance.
This is because most databases have a pre-defined schema
(metadata as noted above) with a fixed set of tables.
This metadata is likely to be cached by the DBMS at
startup (even though you could do tricks as mentioned
above to improve the disk accesses to it).
If you're planning to create and drop bunches of tables on
the fly, then, quite honestly, you're abusing the DBMS.
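If you suspect that's happening in an Oracle shop, the data dictionary will
tell you quickly (this assumes you can read DBA_TABLES; USER_TABLES works per
schema otherwise):

    SELECT owner, COUNT(*) AS table_count
    FROM   dba_tables
    GROUP  BY owner
    ORDER  BY table_count DESC;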
[Obligatory War Story:
I, and a bunch of other folks, were called in to
see what could be done about performance on one
particular system. We found out that, as part
of a transaction, tables would be created on the
fly for various reasons and kept there as permanent
tables (don't ask.. you don't want to know).
There were over 250,000 tables in the database.
The DBMS vendor's representative, upon seeing this,
said: "You mean this spit actually works?"]
"It is impossible to make anything foolproof
because fools are so ingenious"
- A. Bloch
Re: How large databases? - 06-13-2004, 06:36 PM
Did you have a particular question?