
Dell Emulex Family of Adapters User Manual


Emulex Drivers for Windows User Manual

P010077-01A Rev. A

3. Configuration

NIC Driver Configuration


Receive Drops No Fragments (CPU
Limited)

The number of packets dropped as a result of insufficient buffers
posted by the driver. This is generally caused by the CPU core
servicing a receive queue reaching 100% utilization: the system
then lacks the CPU cycles to post receive buffers at the required
rate. A high rate of small packets can trigger this on almost any
CPU, because the per-packet processing cost in the networking
stack is high. Using a teaming driver may also contribute, since
it increases the CPU load during receive.
Increasing the number of “Receive Buffers” in the advanced
property page may alleviate some of these drops, in particular if
the drops are the result of bursts of small receive packets on the
network. However, if the CPU is the limit, increasing the buffer
resources does not help because the driver cannot post them fast
enough.
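The interplay of ring size, burst size, and repost rate can be sketched with a toy model (illustrative only; the buffer counts, rates, and tick granularity below are invented for the example and are not Emulex driver behavior):

```python
def drops_during_burst(ring_buffers, burst_packets,
                       repost_per_tick, arrivals_per_tick, ticks):
    """Toy model: packets are dropped whenever a burst arrives faster
    than the (CPU-limited) driver can repost free receive buffers."""
    free = ring_buffers      # buffers currently posted to the adapter
    dropped = 0
    remaining = burst_packets
    for _ in range(ticks):
        arrive = min(arrivals_per_tick, remaining)
        remaining -= arrive
        consumed = min(arrive, free)   # packets that found a buffer
        dropped += arrive - consumed   # the rest are dropped
        free -= consumed
        # driver reposts buffers at a fixed, CPU-limited rate
        free = min(ring_buffers, free + repost_per_tick)
    return dropped

# A 1000-packet burst at 200 packets/tick, repost rate 100/tick:
# a 512-buffer ring drops part of the burst, a 1024-buffer ring absorbs it.
print(drops_during_burst(512, 1000, 100, 200, 5))    # 88 drops
print(drops_during_burst(1024, 1000, 100, 200, 5))   # 0 drops
# Under sustained overload, no ring size prevents drops eventually:
print(drops_during_burst(4096, 100000, 100, 200, 500) > 0)  # True
```

This mirrors the guidance above: a larger "Receive Buffers" setting absorbs bursts, but if arrivals outpace the repost rate indefinitely, extra buffers only delay the drops.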
Enabling RSS is another strategy to reduce drops, since it allows
the NIC driver to spread receive processing across additional CPU
cores. Increasing the number of RSS queues also increases the
total number of posted buffers available to the adapter.
Enabling RSC can also reduce CPU consumption in the networking
stack by combining multiple TCP packets into one larger packet.
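How RSS spreads load can be illustrated with a short sketch (real NICs hash the TCP/IP 4-tuple with a Toeplitz hash and consult an indirection table; plain hash-and-modulo is used here only to convey the idea, and the flows are hypothetical):

```python
from collections import Counter

def rss_queue(flow, num_queues):
    # Simplified stand-in for the NIC's Toeplitz hash + indirection table:
    # each flow maps deterministically to one receive queue.
    return hash(flow) % num_queues

# 32 hypothetical flows from different clients to one server port
flows = [(f"10.0.0.{i}", 40000 + i, "10.0.0.1", 80) for i in range(2, 34)]
per_queue = Counter(rss_queue(f, 4) for f in flows)
# Each flow lands on exactly one of 4 queues; with RSS enabled, each
# queue's interrupts and buffer reposting run on its own CPU core.
print(per_queue)
```

Because each flow always hashes to the same queue, packet ordering within a TCP connection is preserved while unrelated connections are serviced in parallel.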
For best performance, set the system BIOS to “Maximum
Performance” or disable C-states manually. Transitions into
low-power C-states may cause a steady trickle of drops because
they increase the latency between packet reception and the
invocation of the driver's interrupt processing code.
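The C-state cost can be estimated with simple arithmetic (the 50 µs exit latency below is an assumed illustrative figure; actual exit latencies vary by CPU model and C-state depth):

```python
def packets_arriving_during_wakeup(link_gbps, frame_bytes, exit_latency_us):
    """Packets that land while the core exits a low-power C-state,
    before the driver's interrupt handler runs; the posted receive
    buffers must absorb all of them or drops occur."""
    packets_per_sec = (link_gbps * 1e9) / (frame_bytes * 8)
    return packets_per_sec * exit_latency_us / 1e6

# 10 Gb/s, 1500-byte frames, assumed 50 us exit latency: ~42 packets
# must be buffered per wakeup.
print(round(packets_arriving_during_wakeup(10, 1500, 50), 1))
# The same wakeup with 64-byte frames: ~977 packets, which is why
# small-packet traffic makes C-state-induced drops far more likely.
print(round(packets_arriving_during_wakeup(10, 64, 50)))
```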

Receive Drops No Memory (DMA
Limited)

The number of receive packets dropped because of a DMA
bottleneck between the network adapter and host memory. This
may be caused by bottlenecks in either the PCIe bus or main
memory.
In the Status tab of the Custom property page, the Emulex NIC
reports the negotiated PCIe link parameters and the maximum
supported parameters. For example, installing an x8 device in an
x4 PCIe slot cuts the available PCIe bandwidth in half. The PCIe
MTU and Read Request Size are also reported; these may be
configurable in the system BIOS for the computer.
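The bandwidth halving follows directly from the per-lane link arithmetic (a sketch using the standard per-generation signaling rates and encodings: Gen1/2 use 8b/10b, Gen3 uses 128b/130b; protocol overhead such as TLP headers is ignored here):

```python
def pcie_usable_gbps(lanes, gen):
    """Approximate usable PCIe bandwidth in Gb/s, before TLP/flow-control
    protocol overhead."""
    rate_gt = {1: 2.5, 2: 5.0, 3: 8.0}[gen]          # GT/s per lane
    efficiency = {1: 0.8, 2: 0.8, 3: 128 / 130}[gen]  # encoding efficiency
    return lanes * rate_gt * efficiency

# An x8 Gen2 adapter: 8 * 5.0 GT/s * 80% = 32 Gb/s.
# The same card in an x4 slot negotiates x4 and gets 16 Gb/s: exactly
# half the bandwidth, as the example above notes.
print(pcie_usable_gbps(8, 2), pcie_usable_gbps(4, 2))
```

A 10 Gb/s NIC in a halved x4 Gen2 link still has headroom on paper, but bidirectional traffic, descriptor fetches, and protocol overhead can turn the narrower link into the DMA bottleneck this statistic counts.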
Main memory performance is the other major factor in networking
throughput. Ideally, high-speed memory is used with every memory
channel populated on each CPU, typically 3 or 4 DIMMs per CPU
socket. For best performance, use the same DIMM size in each
memory channel to allow perfect memory-channel interleaving.
Features such as memory sparing or memory mirroring dramatically
decrease the memory bandwidth of the system and can cause drops.
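The effect of channel population and mirroring on peak bandwidth can be sketched as follows (DDR3-1333 is an assumed example; the 8-byte channel width is standard DDR, and treating mirroring as a clean halving is a simplification of the real penalty):

```python
def peak_memory_bw_gbs(channels, transfers_mt_s, mirrored=False):
    """Peak DRAM bandwidth in GB/s: each DDR channel moves 8 bytes per
    transfer, so bandwidth scales linearly with populated channels."""
    peak = channels * transfers_mt_s * 8 / 1000.0  # MT/s * 8 B -> GB/s
    # Mirroring writes every line to two DIMMs, roughly halving the
    # usable bandwidth (simplified model).
    return peak / 2 if mirrored else peak

# 3 channels of DDR3-1333: ~32 GB/s peak. Leaving one channel empty
# drops this to ~21 GB/s; enabling mirroring cuts it to ~16 GB/s.
print(round(peak_memory_bw_gbs(3, 1333), 1))
print(round(peak_memory_bw_gbs(3, 1333, mirrored=True), 1))
```

Since every received packet is DMA-written to memory and then read at least once by the stack, losing a channel or enabling mirroring directly shrinks the headroom before "no memory" drops appear.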
TCP connection offload may lead to increased drops as a result of
“no memory”. If TCP connection offload is used, enabling flow
control may reduce the drops. Alternatively, disabling TCP
connection offload may improve performance.

Table 3-4 NIC Driver Properties Statistics (Continued)