Glossary
Administrative Network
Connects all nodes in the cluster. In an HP XC compute cluster, this consists of two branches:
the Administrative Network and the Console Network. This private local Ethernet network
runs TCP/IP. The Administrative Network is Gigabit Ethernet (GigE); the Console Network is
10/100 BaseT. Because the visualization nodes do not support console functions, they are not connected to the console branch.
bounded configuration
An SVA configuration that contains only visualization nodes and is limited in size to four to
seventeen workstations plus a head node. The bounded configuration serves as a standalone
visualization cluster. It can be connected to a larger HP XC cluster via external GigE connections.
This configuration is based on racked component building blocks, namely the Utility
Visualization Block (UVB) and the Visualization Building Block (VBB).
Chromium
Chromium is an open source system for interactive rendering on clusters of graphics
workstations. Various parallel rendering techniques such as sort-first and sort-last may be
implemented with Chromium. Furthermore, Chromium allows filtering and manipulation of
OpenGL command streams for non-invasive rendering algorithms. Chromium is a flexible
framework for scalable real-time rendering on clusters of workstations, derived from the
Stanford WireGL project code base.
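Chromium jobs are driven by a Python configuration script that the Chromium mothership interprets. The sketch below is a minimal, illustrative sort-first (tilesort) configuration for two render servers driving a two-tile mural; the host names, tile geometry, and demo application are assumptions, the mothership API details can vary by Chromium version, and this is not an SVA-supplied configuration.
    # Illustrative Chromium mothership configuration (sort-first / tilesort).
    # Host names, tile geometry, and the demo application are assumptions.
    from mothership import *        # mothership module ships with Chromium

    cr = CR()

    # The application node intercepts the OpenGL stream and sorts it to servers.
    tilesortspu = SPU('tilesort')
    appnode = CRApplicationNode()
    appnode.AddSPU(tilesortspu)
    appnode.SetApplication('atlantis')            # hypothetical demo application

    # Two server nodes, each rendering one 1280x1024 tile of the mural.
    for i, host in enumerate(['display1', 'display2']):   # hypothetical hosts
        renderspu = SPU('render')
        servernode = CRNetworkNode(host)
        servernode.AddSPU(renderspu)
        servernode.AddTile(i * 1280, 0, 1280, 1024)        # x, y, width, height
        cr.AddNode(servernode)
        tilesortspu.AddServer(servernode, protocol='tcpip', port=7000 + i)

    cr.AddNode(appnode)
    cr.Go()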
compute node
A standard node in an HP XC cluster that applications use in parallel.
Configuration Data Files
Configuration Data Files provide specific information about the system configuration of an
SVA. File details are mainly of interest to the system administrator who manages and configures
the cluster. All visualization sessions that you initiate to run your application depend on input
from the Configuration Data Files. There are three such files: Site Configuration File, User
Configuration File, and Job Settings File.
display block
The tile output from a single display node, including the relative orientation of the tiles in the
case of multi-tile output generated by two ports on a single graphics card or two cards.
display node
Display nodes are standard Linux workstations containing graphics cards. They transfer image
output to the display devices and can synchronize multi-tile displays. The final output of a
visualization application is a complete image assembled from the parallel rendering that takes
place during an application job. To make this possible, a display node must contain a graphics
card connected to a display device. The display can show images integrated with the application
user interface, or full-screen images. The output can be a complete display or
one tile of an aggregate display.
Display Surface
A Display Surface is a named assemblage of one or more display nodes and their associated
display devices, including the physical orientation of the display devices relative to one another. A
Display Surface is made up of the output of display nodes, that is, display blocks.
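As a purely illustrative aid (not the SVA configuration file format), the following sketch models a 2x2 Display Surface built from four single-tile display blocks; the node names and tile sizes are assumptions.
    # Purely illustrative model of display blocks composing a Display Surface.
    from dataclasses import dataclass

    @dataclass
    class DisplayBlock:
        node: str        # display node that produces this tile
        col: int         # horizontal position relative to the other blocks
        row: int         # vertical position relative to the other blocks
        width: int       # tile width in pixels
        height: int      # tile height in pixels

    # A 2x2 tiled surface assembled from four display nodes (hypothetical names).
    surface = [
        DisplayBlock('dn1', 0, 0, 1280, 1024), DisplayBlock('dn2', 1, 0, 1280, 1024),
        DisplayBlock('dn3', 0, 1, 1280, 1024), DisplayBlock('dn4', 1, 1, 1280, 1024),
    ]

    # The aggregate surface in this example is 2560 x 2048 pixels.
    total_width = max(b.col + 1 for b in surface) * 1280
    total_height = max(b.row + 1 for b in surface) * 1024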
Display Surface Configuration Tool
Defines the arrangement of display blocks that make up a Display Surface, including their
relative spatial arrangement. Invoked using the svadisplaysurface command.
Requires root privileges.
DMX
Distributed Multi-Head X is a proxy X Server that provides multi-head support for multiple
displays attached to different machines (each of which is running a typical X Server).
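As an illustration of the concept only, the following sketch starts a DMX proxy server (here :1) that spans two back-end X servers running on separate display nodes; the host names and display numbers are assumptions, and the Xdmx options actually used on an SVA are set by the session scripts.
    # Illustrative only: start a DMX proxy X server spanning two back-end
    # X servers. Host names and display numbers are assumptions.
    import subprocess

    backends = ['display1:0', 'display2:0']   # hypothetical back-end displays
    cmd = ['Xdmx', ':1', '+xinerama']         # :1 is the proxy display
    for display in backends:
        cmd += ['-display', display]

    # X clients that set DISPLAY=:1 see one large logical screen spanning both.
    subprocess.Popen(cmd)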
interactive session
A visualization session typically launched using a VSS script provided by HP. Such a script
allocates the cluster resources and starts the X Servers. It also starts a desktop environment
(for example, KDE or Gnome) from which you can launch your applications repeatedly while
retaining the same job resources. To launch your application, open a terminal window and then
run your application as usual.
Job Settings File
A Configuration Data File that determines the way in which a visualization job runs. The
visualization job data is defined at job allocation time from options specified to the job launch
scripts, from data access calls embedded in the script, and from the other Configuration Data
Files. The Job Settings File resides under /hptc_cluster/sva/job/ and has a life span equal to
that of the job.
LSF
Platform Load Sharing Facility for High Performance Computing. Layered on top of SLURM
to provide high-level scheduling services for the HP XC system software user. LSF can be used
in parallel with SVA job launching techniques that rely on SLURM.