Network load balancing, Investment protection – HP GbE2 User Manual


Applications that utilize a single uplink port include testing and evaluation systems, server blade enclosures with only a few installed servers, and applications that require minimal bandwidth. On a heavily utilized system, using a single uplink port for all 32 network adapters can cause a traffic bottleneck. For example, using only one uplink from interconnect switch A forces the traffic from all network adapters routed to switch B to travel over the two crosslinks (a 2000-Mb/s path), shown previously in Figure 2. The crosslinks are intended primarily as a failover route and generally are not used as a primary path. For better performance, at least one uplink port on each interconnect switch should be used. However, system administrators may use any combination from one to all twelve external Ethernet ports to increase bandwidth, to separate network and management data onto physically isolated ports, or to add redundant connections to the Ethernet network backbone.
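The arithmetic behind this bottleneck can be sketched quickly. The short script below is illustrative only: the 1000-Mb/s per-port figure and the 32-adapter count are assumptions drawn from the surrounding discussion, and the formula is a simple worst-case ratio, not a measurement of actual switch behavior.

```python
# Illustrative back-of-the-envelope calculation (assumptions, not measured data):
# worst-case oversubscription when 32 Gigabit adapters share a given number
# of 1000 Mb/s uplink ports.

ADAPTERS = 32      # network adapters in a fully populated enclosure (assumed)
LINK_MBPS = 1000   # assumed speed of each adapter and each uplink port

def oversubscription(uplinks: int) -> float:
    """Ratio of potential server-side bandwidth to total uplink bandwidth."""
    return (ADAPTERS * LINK_MBPS) / (uplinks * LINK_MBPS)

for uplinks in (1, 2, 6, 12):
    print(f"{uplinks:2d} uplink(s): {oversubscription(uplinks):.1f}:1")
    # e.g. " 1 uplink(s): 32.0:1"
```

With a single uplink the ratio is 32:1, while spreading traffic across all twelve external ports brings it under 3:1, which is why using at least one uplink per interconnect switch is recommended above.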
Another means to achieve network cable reduction is to link the GbE2 Interconnect Switches with other ProLiant BL e-Class and p-Class interconnect switches from different server blade enclosures. This is ideal for customers with multiple blade enclosures within a rack. It also allows the network administrator to define the desired level of network blocking or oversubscription.

Network load balancing

ProLiant BL systems configured with the interconnect switch support three network load balancing solutions. Options exist for providing this functionality either integrated within or exterior to the server blade enclosure.

For load balancing within the server blade enclosure, the p-Class GbE2 Interconnect Kit future layer 3-7 upgrade (discussed in the section titled "10-Gb fabric") or the currently available F5 Networks BIG-IP Blade Controller (BIG-IP) may be used. BIG-IP is a software option for ProLiant BL e-Class and p-Class systems that provides a very economical solution for load balancing and traffic management among multiple server blades residing in one or more server blade enclosures.
BIG-IP is available from F5 Networks as a software option installed on ProLiant server blades. One license is installed per server blade, making that blade a "dedicated" load balancer; no additional software can be installed on it. For a redundant solution, two copies of BIG-IP are installed on two server blades. The server blade(s) with BIG-IP installed may perform load balancing for blade servers that reside in the same or different server blade enclosures, both e-Class and p-Class. The blade servers may be located anywhere on the network as long as load-balanced traffic to and from the servers passes through the BIG-IP Blade Controller. For more information on the BIG-IP Blade Controller for ProLiant BL systems, see http://h71028.www7.hp.com/enterprise/html/4557-0-0-0-121.html.

For load balancing exterior to the server blade enclosure, a third-party layer 7 Ethernet switch or network load balancing appliance may be used. This traditional approach uses a multi-tiered architecture in which the interconnect switches are connected to one or more layer 7 switches or network load balancing appliances. Layer 7 switches and network load balancing appliances are available from several network vendors, including Cisco, F5, Nortel, and others. This solution is supported on both e-Class and p-Class systems configured with any interconnect kit.

Investment protection

The GbE2 Interconnect Kits are fully supported in the p-Class server blade enclosure with any combination of ProLiant BL20p, BL20p G2, and BL40p server blades.

Users can upgrade to the GbE2 Interconnect Kit from existing RJ-45 Patch Panel, RJ-45 Patch Panel 2, and GbE Interconnect Kits without powering down the server blade enclosure or server blades. When upgrading from the GbE Interconnect Kit, one GbE Interconnect Switch may be replaced at a time while the other switch remains operational, allowing critical network resources to be transferred from one switch to the other.