log file size - 08-05-2004, 10:10 AM
I use a transactional database. I've got one hash database with a
couple of secondary indexes. I put records into it and update each
record about two times, but never delete a single one. Records are of
a fixed size.
My test case has this database populated to a size of 11MB, and I can
see about 12 log files in my environment, each sized about 10MB.
Is it possible to make some optimization that affects the total size
of the log file(s)? Some internal granularity setting or the like?
Many thanks in advance.
Re: log file size - 08-05-2004, 12:36 PM
pavel (AT) gingerall (DOT) cz (Pavel Hlavnicka) writes:
Each update writes both the old record and the new record to the
transaction log. So with one creation and two updates on average, you
have each item five times in the log. Since log records also carry
some overhead, that size seems about right to me.
The record layout, however, is wholly under your control. If only a
small part of each value is ever updated, you could consider putting
that changing part into its own database, using transactions to
guarantee consistency.
I am doing exactly that, with a main (customer) database holding
records of about 700 bytes each, plus regular usage-counter updates to
4- or 8-byte counters: I keep the counters in a separate database. I
found that the log grows by about 100 bytes for each counter update,
which is certainly better than the 1400 bytes for an update to the
whole record.