Glossary, Enhanced statistics, Generic – HP XC System 3.x Software User Manual
Glossary
active fraction
The fraction of time an event was active in the PMU.
See also duty group.
duty group
A group of HPCPI events, used to multiplex the events monitored. If hpcpid is monitoring
more events than the number of event counters available for the processor PMU, hpcpid places
the events in duty groups and multiplexes (cycles through) the duty groups so that only the
events in one duty group are monitored at any time.
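The multiplexing described above can be sketched in a few lines. This is an illustrative model, not HPCPI source code: the function names, the event names, and the equal-time-share assumption are all hypothetical. It shows how events are partitioned into duty groups when they outnumber the PMU counters, and how a count observed only part of the time is scaled up by its active fraction.

```python
# Hypothetical sketch of duty-group multiplexing (not actual hpcpid code).
from itertools import islice

def make_duty_groups(events, num_counters):
    """Partition events into duty groups of at most num_counters each."""
    it = iter(events)
    groups = []
    while True:
        group = list(islice(it, num_counters))
        if not group:
            return groups
        groups.append(group)

def estimate_full_count(observed_count, active_fraction):
    """Scale a count measured only part of the time up to a full-run estimate."""
    return observed_count / active_fraction

# Six events on a PMU with only four counters yield two duty groups.
# If the groups are cycled with equal time shares, each event's
# active fraction is 0.5, so observed counts are doubled.
events = ["CPU_CYCLES", "INST_RETIRED", "L2_MISSES",
          "BR_MISPRED", "TLB_MISSES", "FP_OPS"]
groups = make_duty_groups(events, 4)
```

With two duty groups cycled evenly, an event that was observed to count 1000 would be estimated at `estimate_full_count(1000, 0.5)`, i.e. 2000 for the full run.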
enhanced statistics
Statistics that are processor-dependent. By default, the xcxclus and xcxperf utilities display
enhanced statistics.
epoch
A time-based division of HPCPI data. By default, the HPCPI daemon starts a new epoch each
time it runs. The HPCPI database contains a different subdirectory for each epoch.
generic statistics
Statistics that are processor-independent. By default, the xcxclus and xcxperf utilities display
generic statistics.
golden client
The node from which a standard file system image is created. The golden image is distributed
by the image server. In a standard HP XC installation, the head node acts as the image server
and golden client.
golden image
A collection of files, created from the golden client file system, that is distributed to one or
more client systems. Specific files on the golden client may be excluded from the golden image
if they are not appropriate for replication.
golden master
The collection of directories and files that represents all of the software and configuration data
of an HP XC system. The software for any and all nodes of an HP XC system can be produced
solely by the use of this collection of directories and files.
head node
The single node that is the basis for software installation, system configuration, and
administrative functions in an HP XC system. Another node may provide a failover function
for the head node, but an HP XC system has only one head node at any one time.
image server
A node specifically designated to hold images that will be distributed to one or more client
systems. In a standard HP XC installation, the head node acts as the image server and golden
client.
job allocation
Nodes allocated to the user by the SLURM, LSF-HPC, or RMS subsystem. Also referred to as
node allocation.
label
An identifier for HPCPI data, created using the hpcpictl label command.
LSF execution host
The node on which LSF runs. A user's job is submitted to the LSF execution host. Jobs are
launched from the LSF execution host and are executed on one or more compute nodes.
LSF-HPC with SLURM
Load Sharing Facility for High Performance Computing integrated with SLURM; the batch
system resource manager on an HP XC system. LSF-HPC with SLURM places a job in a queue
and allows it to run when the necessary resources become available. LSF-HPC with SLURM
manages just one resource: the total number of processors designated for batch processing.
LSF-HPC with SLURM can also run interactive batch jobs and interactive jobs. An interactive
batch job allows you to interact with the application while still taking advantage of LSF-HPC
with SLURM scheduling policies and features. An interactive job runs without the LSF-HPC
with SLURM batch processing features but is dispatched immediately by LSF-HPC with SLURM
on the LSF execution host.
See also LSF execution host.
MPI
Message Passing Interface. A library specification for message passing, proposed as a standard
by a broadly based committee of vendors, implementors, and users.
node allocation
Nodes allocated to the user by the SLURM, LSF-HPC, or RMS subsystem. Also referred to as
job allocation.
RMS
Resource Management System. A set of commands for running parallel programs and monitoring
their execution. The set includes utilities that determine what resources are available and
commands that request allocation of resources.