Dell Intel® PRO Family of Adapters User Manual

Data Center Bridging (DCB) for Intel® Network Connections:
Intel® Ethernet iSCSI Boot User Guide
Overview
Data Center Bridging is a collection of standards-based extensions to classical Ethernet. It provides a lossless data center
transport layer that enables the convergence of LANs and SANs onto a single Unified Fabric. It enhances the operation of
business-critical traffic.
Data Center Bridging is a flexible framework that defines the capabilities required for switches and end points to be part of a
data center fabric. It includes the following capabilities:
Priority-based flow control (PFC; IEEE 802.1Qbb)
Enhanced transmission selection (ETS; IEEE 802.1Qaz)
Congestion notification (CN; IEEE 802.1Qau)
Extensions to the Link Layer Discovery Protocol standard (IEEE 802.1AB) that enable Data Center Bridging Capability
Exchange Protocol (DCBX)
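The bandwidth-sharing idea behind Enhanced Transmission Selection can be sketched with a small model. The following Python snippet is illustrative only: the class names and the simple proportional redistribution rule are assumptions for the example, not the exact IEEE 802.1Qaz scheduler.

```python
# Illustrative sketch of ETS-style bandwidth sharing. Assumption: idle
# classes yield their bandwidth, which active classes absorb in proportion
# to their configured percentages (not the exact 802.1Qaz algorithm).

def ets_share(link_mbps, tc_pct, active):
    """Split link bandwidth among traffic classes.

    link_mbps: total link speed in Mbit/s
    tc_pct:    {traffic_class: configured bandwidth percent} (sums to 100)
    active:    set of traffic classes that currently have traffic queued
    """
    active_pct = sum(p for tc, p in tc_pct.items() if tc in active)
    shares = {}
    for tc, pct in tc_pct.items():
        if tc not in active:
            shares[tc] = 0.0  # idle class yields its configured bandwidth
        else:
            shares[tc] = link_mbps * pct / active_pct
    return shares

# Example: 10 GbE link; LAN 50%, SAN 40%, IPC 10%; IPC is currently idle,
# so LAN and SAN split the full 10 Gb/s in a 50:40 ratio.
print(ets_share(10_000, {"LAN": 50, "SAN": 40, "IPC": 10}, {"LAN", "SAN"}))
```

The point of the example is that ETS guarantees each class its configured minimum while letting otherwise-idle bandwidth be reused, which is what makes converging LAN and SAN traffic on one link practical.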
There are two supported versions of DCBX:
Version 1: This version of DCBX is referenced in Annex F of the FC-BB-5 standard (FCoE) as the version of DCBX used
with pre-FIP FCoE implementations.
Version 2: The specification can be found as a link within the following document:
For more information on DCB, including the DCB Capability Exchange Protocol Specification, go to .
For system requirements, go to .
DCB for Linux
Background
Queuing disciplines (qdiscs) were introduced in the 2.4.x kernel. The rationale behind this effort was to provide QoS in software, as hardware did not provide the necessary interfaces to support it. In 2.6.23, Intel pushed the notion of multiqueue support into the qdisc layer. This provides a mechanism to map the software queues in the qdisc structure onto multiple hardware queues in underlying devices. In the case of Intel adapters, this mechanism is leveraged to map qdisc queues onto the queues within our hardware controllers.
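The mapping described above can be sketched as follows. This is a hypothetical model (the names PRIO_TC_MAP, TC_QUEUES, and select_hw_queue are invented for illustration); in the kernel the equivalent logic lives in the multiqueue qdisc layer and the driver's transmit-queue selection, where a packet's priority selects a traffic class and the class selects a range of hardware queues.

```python
# Hypothetical model of the priority -> traffic class -> hardware queue
# mapping performed by a multiqueue qdisc over a multiqueue NIC.

# Packet priority (0-7) -> traffic class, as a DCB/ETS configuration
# might set it up (assumed values for the example).
PRIO_TC_MAP = [0, 0, 1, 1, 2, 2, 3, 3]

# Each traffic class owns a contiguous range of hardware queues,
# expressed as (offset, count) pairs.
TC_QUEUES = [(0, 2), (2, 2), (4, 2), (6, 2)]

def select_hw_queue(skb_priority, flow_hash):
    """Pick a hardware TX queue for a packet.

    The traffic class comes from the packet priority; the specific queue
    within that class's range is chosen by a flow hash, so packets of one
    flow always land on the same hardware queue (no reordering).
    """
    tc = PRIO_TC_MAP[skb_priority & 7]
    offset, count = TC_QUEUES[tc]
    return offset + (flow_hash % count)

# A priority-5 packet maps to traffic class 2, which owns queues 4-5.
print(select_hw_queue(5, flow_hash=0x1234))  # prints 4
```

Keeping each traffic class on its own set of hardware queues is what lets PFC pause one class (for example, storage traffic) without stalling the queues carrying the other classes.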