Using Cache
A read or write operation to memory is much faster than a read or write operation to
disk. The objective of disk cache is to keep frequently accessed information in memory,
saving the time otherwise spent performing physical disk I/O.
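The benefit can be quantified as a weighted average of memory and disk latencies. The short Python sketch below illustrates the idea; the latency figures are assumptions for illustration, not NonStop hardware measurements.

# Effective access time as a function of cache hit rate.
# Latencies are assumed values for illustration only.
MEMORY_ACCESS_MS = 0.001   # cache hit: served from memory
DISK_IO_MS = 20.0          # cache miss: physical disk I/O

def effective_access_ms(hit_rate):
    """Weighted average of hit and miss costs."""
    return hit_rate * MEMORY_ACCESS_MS + (1.0 - hit_rate) * DISK_IO_MS

for hit_rate in (0.50, 0.90, 0.99):
    print(f"hit rate {hit_rate:.0%}: {effective_access_ms(hit_rate):6.3f} ms")

With these assumed latencies, raising the hit rate from 50 percent to 99 percent cuts the effective access time by a factor of about 50, which is why cache tuning repays the effort.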
Read and buffered write operations can use cache. Unbuffered write operations always
cause a disk I/O. Because buffered writes can be collected in cache and written out as
necessary, they save disk I/Os and, therefore, CPU time. For TMF-audited files,
buffered write operations are the default. For nonaudited files, buffered write
operations are not usually recommended because a failed disk or multiple failed CPUs
can cause data loss. However, buffered writes can still be worthwhile for a
nonaudited file whose data can be re-created by rerunning the job. Measure and
Spooler data files are usually good candidates for buffered writes.
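The following sketch models, in simplified form, why collected (buffered)
writes save physical I/Os: repeated writes to the same block are absorbed in
cache and written to disk once. This illustrates the principle only; it is not
the NonStop disk process's actual algorithm.

# Compare physical I/O counts for unbuffered vs. buffered writes.
# Simplified model for illustration; not the disk process algorithm.

def unbuffered_io_count(writes):
    """Unbuffered: every write causes a physical disk I/O."""
    return len(writes)

def buffered_io_count(writes):
    """Buffered: dirty blocks collect in cache and are each written once."""
    dirty_blocks = set()
    for block_number, _data in writes:
        dirty_blocks.add(block_number)
    return len(dirty_blocks)

# 1,000 writes concentrated on 50 blocks, as in a busy spooler file.
writes = [(i % 50, b"record") for i in range(1000)]
print("unbuffered:", unbuffered_io_count(writes), "physical I/Os")  # 1000
print("buffered:  ", buffered_io_count(writes), "physical I/Os")    # 50

The savings grow with the rewrite rate: the more often the same blocks are
updated before a flush, the fewer physical writes per logical write.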
The recommended cache size for a disk depends on several factors. The two most
important are the size of physical memory in the CPUs that contain the primary
and backup disk processes, and the file activity on the disk:
• Physical memory is shared by the memory manager (which allocates process code
and data pages) and by the disk cache of each disk process (primary or backup)
on the CPU. The ideal division of memory gives the memory manager all the
memory it needs, and no more, and gives the rest to disk cache.
If disk cache is too small, disk processes might perform unnecessary physical
disk I/Os. If disk cache is too large, it can induce unnecessary swap
operations and cache faults. (Swaps are also disk I/Os and therefore time
consuming.) The first sketch following this list models this trade-off.
• File activity determines cache use:
° Random access on a key-sequenced file. Each random read or write must access
one or more index blocks plus the data block. By configuring enough cache to
hold as many of the file's index blocks as possible, you avoid the physical
disk I/Os associated with the index levels and so improve performance (the
second sketch following this list estimates the index-block requirement). By
adding extra cache for the data blocks, you improve the chance of a cache hit
on a data block and reduce the risk of a data block being forced out of cache
before a later update.
° Random access on an entry-sequenced or relative file. By providing enough
cache to hold a substantial percentage of the file, you increase the chance of
a cache hit and thus improve performance. On a cache miss, however, an
entry-sequenced or relative file requires only one I/O, where a key-sequenced
file might require more than one.
° Sequential access on any file. Because you are accessing the information only
once, the cache requirements for the file are minimal. Even for a
key-sequenced file, a minimal cache configuration should keep the required
index blocks in cache until they are replaced by the index blocks required for
the next set of data blocks.
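The first sketch below models the memory-division trade-off described in the
first bullet: disk I/Os fall as cache grows toward the volume's working set,
but once cache exceeds the memory left over after the memory manager's needs,
swap I/Os begin to climb. All parameters are assumed values chosen for
illustration; they are not NonStop configuration figures.

# Toy model of the cache-sizing trade-off. Assumed figures only.
PHYSICAL_MEMORY_MB = 512       # memory in the CPU (assumed)
MEMORY_MANAGER_NEED_MB = 384   # code and data pages needed (assumed)
WORKING_SET_MB = 200           # hot data on the volume (assumed)
CACHE_BUDGET_MB = PHYSICAL_MEMORY_MB - MEMORY_MANAGER_NEED_MB

def disk_ios_per_1000_requests(cache_mb):
    """Cache misses plus swap I/Os induced by an oversized cache."""
    hit_rate = min(cache_mb / WORKING_SET_MB, 1.0)
    misses = 1000 * (1.0 - hit_rate)
    overcommit_mb = max(cache_mb - CACHE_BUDGET_MB, 0)
    swaps = overcommit_mb * 10   # assumed swap penalty per overcommitted MB
    return misses + swaps

for cache_mb in (32, 64, 128, 192, 256):
    print(f"{cache_mb:3d} MB cache: {disk_ios_per_1000_requests(cache_mb):5.0f} disk I/Os")

In this model the total bottoms out near the 128 MB budget: a smaller cache
wastes I/Os on misses, and a larger one wastes them on swaps.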
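The second sketch estimates how many index blocks a key-sequenced file needs
in cache, using a simplified B-tree-style model. The fan-out figures are
assumptions; the exact counts depend on block size, key length, and the
Enscribe on-disk layout.

# Estimate index blocks per level for a key-sequenced file.
# Simplified B-tree-style model; fan-outs are assumed values.
import math

RECORDS = 1_000_000           # records in the file (assumed)
RECORDS_PER_DATA_BLOCK = 40   # data-block fan-out (assumed)
KEYS_PER_INDEX_BLOCK = 100    # index-block fan-out (assumed)

data_blocks = math.ceil(RECORDS / RECORDS_PER_DATA_BLOCK)
level_counts = []
blocks = data_blocks
while blocks > 1:
    blocks = math.ceil(blocks / KEYS_PER_INDEX_BLOCK)
    level_counts.append(blocks)

print("data blocks: ", data_blocks)
for level, count in enumerate(level_counts, start=1):
    print(f"index level {level}: {count} blocks")
print("index blocks to cache:", sum(level_counts))

With these figures, 25,000 data blocks are covered by only 254 index blocks
(250 + 3 + 1), so keeping the entire index in cache costs roughly 1 percent of
what caching the data would; any cache beyond that goes to data blocks.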