Resolving the Confusion around DCB (I Hope)

Simon Gordon

Apr 15, 2013

Storage traffic running over Ethernet-based networks has been around for as long as we have had Ethernet-based networks. Of course, sometimes it is not technically accurate to think of these protocols as fundamentally Ethernet protocols – whilst FCoE, by definition, only runs on Ethernet, iSCSI, SMB, and NFS are, in reality, IP-based storage protocols and, whilst most commonly run over Ethernet, could run over any network that supports IP. That notwithstanding, it is increasingly important to understand the real nature of Ethernet and, in particular, the nature of the new enhancements that we put under the umbrella of Data Center Bridging (DCB).

Although there is a great deal of information around DCB, there is also a lot of confusion, and even the best articles miss describing a number of its elements. As such, with 10GbE now ramping, this is a good time to clarify what DCB does and does not do.

Perhaps the first and most important point is that DCB is, in reality, a task group in the IEEE responsible for developing enhancements to the 802.1 bridge specifications that apply specifically to Ethernet switching (or, as the IEEE says, bridging) in the datacenter. As such, DCB is not in itself a standard, nor is the DCB group solely involved in the standards that apply to I/O and network convergence. The most recent work of this task group falls into two distinct areas, both of which apply to the datacenter: one is the now completed set of standards around network and I/O convergence (802.1Qau, Qaz, Qbb); the other is the set of standards that address the impact of server virtualization technology (802.1Qbg, 802.1BR, and the now withdrawn 802.1Qbh).

[Figure: Protocol Tree]

Also critical to understand, so that we do not overstate either the limitations of traditional Ethernet or the advantages of the new standards around I/O and network convergence, is that these new standards build on top of many well-understood, well-used, mature capabilities that already exist within the IEEE Ethernet standards set. Indeed, IMHO, the most important element of this is that the DCB convergence standards build on top of the 802.1p capability to specify eight different classes of service through a 3-bit PCP field in the 802.1Q header – the VLAN header. Or, to say that in English: Ethernet has for some considerable time had the ability to separate traffic into eight categories and to ensure that those different categories get different treatment – or, more bluntly, the fundamentals of I/O and network convergence are nothing new to Ethernet. Not only that, but the VLAN identification itself can be used to apply QoS to different sets of traffic, as can the Ethertype or IP socket, which usually let us identify different traffic types.
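To make the PCP mechanics concrete, here is a minimal sketch of how the 3-bit priority sits inside the 16-bit Tag Control Information field of the 802.1Q tag (the priority and VLAN values are arbitrary examples):

```python
# Minimal sketch: packing/unpacking the 802.1Q Tag Control Information (TCI).
# TCI layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
def build_tci(pcp: int, dei: int, vlan_id: int) -> int:
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (pcp << 13) | (dei << 12) | vlan_id

def parse_tci(tci: int):
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

# Example: priority 3 on VLAN 100.
tci = build_tci(pcp=3, dei=0, vlan_id=100)
print(hex(tci), parse_tci(tci))  # 0x6064 (3, 0, 100)
```

Those eight possible PCP values are exactly the eight traffic categories referred to above.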

So what is all the fuss about? Although Ethernet already had some good convergence capabilities, it was recognized that these could be further enhanced.

802.1Qbb, or Priority-based Flow Control (PFC), far from adding to Ethernet a non-existent capability for lossless operation, simply takes the existing capability for lossless – 802.3x – and enhances it. 802.3x, when deployed with both RX and TX pause, can give lossless Ethernet, as recognized both by many in the iSCSI community and by the FCoE specifications. However, the pause mechanism applies at the port level, which means that giving one traffic class lossless behaviour causes blocking of other traffic classes. All 802.1Qbb does, along with 802.3bd, is allow the pause mechanism to be applied individually to specific priorities or traffic classes – i.e., pause FCoE or iSCSI or RoCE whilst allowing other traffic to flow.

[Figure: PFC]
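For a concrete, if simplified, picture of what a PFC frame carries – a control opcode, a vector saying which priorities are being paused, and a pause timer per priority – here is an illustrative sketch; the MAC addressing, EtherType, padding, and CRC are omitted, and the priority and timer values are arbitrary examples:

```python
import struct

# Sketch of a PFC (802.1Qbb / 802.3bd) MAC Control frame payload:
# opcode, priority-enable vector, then eight per-priority pause timers
# expressed in "quanta" (units of 512 bit times).
PFC_OPCODE = 0x0101

def build_pfc_payload(pause_quanta_per_priority):
    assert len(pause_quanta_per_priority) == 8
    enable_vector = 0
    for prio, quanta in enumerate(pause_quanta_per_priority):
        if quanta > 0:
            enable_vector |= 1 << prio
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *pause_quanta_per_priority)

# Example: pause only priority 3 (say, the FCoE or iSCSI class) for the
# maximum time, leaving the other seven priorities free to flow.
payload = build_pfc_payload([0, 0, 0, 0xFFFF, 0, 0, 0, 0])
print(payload.hex())
```

Contrast that with classic 802.3x PAUSE, which has a single timer and therefore stops everything on the port.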

802.1Qaz, or ETS (let's ignore for now that DCBX is also part of this document and is discussed in another SNIA-ESF blog), is not bandwidth allocation to your individual priorities; rather, it is the ability to create a group of priorities and apply bandwidth rules to that group. In English, it adds a new tier to your QoS schedulers, so you can now apply bandwidth rules to the port, to a priority or class group, to an individual priority, and to a VLAN. The standard suggests a practice of at least three groups – one for best-effort traffic classes, one for PFC-lossless classes, and one for strict priority – though it does allow more groups.

[Figure: ETS logical view]
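As an illustration of that grouping idea, an ETS-style configuration logically looks something like the sketch below; the priority-to-group mapping and the 50/50 split are example values, not a recommendation:

```python
# Illustrative sketch of an ETS-style configuration: priorities are mapped
# into priority groups (traffic classes), and bandwidth shares are applied to
# the groups rather than to the individual priorities.
priority_to_group = {
    0: "best-effort", 1: "best-effort", 2: "best-effort", 5: "best-effort",
    3: "pfc-lossless",   # e.g. an FCoE or iSCSI lossless priority
    4: "pfc-lossless",
    6: "strict",         # e.g. network control
    7: "strict",
}

# ETS bandwidth shares apply to the groups that are not strict-priority.
group_bandwidth_pct = {"best-effort": 50, "pfc-lossless": 50}

def validate(p2g, bw):
    assert set(p2g) == set(range(8)), "all eight priorities must be mapped"
    assert sum(bw.values()) == 100, "ETS shares should sum to 100%"
    return True

validate(priority_to_group, group_bandwidth_pct)
```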

Last but not least, 802.1Qau, or QCN, is not a mechanism to provide lossless capabilities. Where pause and PFC are point-to-point flow control mechanisms, QCN allows flow control to be applied via a message sent from the congestion point all the way back to the source. Being an Ethernet-level mechanism, it works across multiple hops within a layer 2 domain and so cannot cross either IP routing or FCF-based FCoE forwarding. If QCN is applied to a non-PFC priority, it would most likely reduce drops by telling the source device to slow down, rather than having frames dropped and relying on the TCP congestion window to trigger slowing down at the TCP level. If QCN is applied to a PFC priority, it could reduce back-propagation of PFC pause and so reduce congestion propagation within that priority.

[Figure: QCN]
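To give a feel for how a congestion point decides to send that message, here is a much-simplified sketch of the QCN feedback calculation; the weight and equilibrium queue depth are example values, and the real standard also quantizes the feedback into the congestion notification message (CNM):

```python
# Much-simplified sketch of the feedback a QCN congestion point (a switch
# queue) computes when it samples an arriving frame.
W = 2.0      # example weight on the queue-growth term
Q_EQ = 26    # example equilibrium (desired) queue occupancy, in frames

def qcn_feedback(q_len: int, q_old: int) -> float:
    q_off = q_len - Q_EQ          # how far above/below the setpoint we are
    q_delta = q_len - q_old       # how fast the queue is growing
    return -(q_off + W * q_delta) # negative value indicates congestion

fb = qcn_feedback(q_len=40, q_old=30)
if fb < 0:
    print(f"congested: send a CNM telling the source to slow down (Fb={fb})")
```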

Although not part of the DCB convergence standards themselves, though mentioned in them, devices that implement DCB typically have some form of buffer carving or partitioning, such that the different traffic classes are not just on different priorities or classes as they flow through the network but are also queued in, and utilize, separate buffer queues. This is important, as the separate queuing and buffer allocation is another aspect of how fate sharing between the different traffic classes is limited or avoided. It also makes conversations around microbursts, burst absorption, and latency bubbles far more complex than they were when there was less (or no) buffer separation.
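A toy sketch of that buffer carving idea, with invented pool sizes and thresholds: each priority class gets its own pool, lossless pools signal pause as they fill, and lossy pools simply tail-drop.

```python
# Toy sketch of carving a shared packet buffer into per-priority pools, so a
# burst in one class cannot starve another.  Lossless (PFC) classes signal
# pause when they cross a threshold; lossy classes drop when their pool fills.
class PriorityPool:
    def __init__(self, size_cells: int, lossless: bool, pause_threshold: float = 0.8):
        self.size = size_cells
        self.used = 0
        self.lossless = lossless
        self.pause_threshold = pause_threshold

    def enqueue(self, cells: int) -> str:
        if self.used + cells > self.size:
            # A lossless pool should not get here if pause was honoured in
            # time; a lossy pool simply tail-drops.
            return "pause" if self.lossless else "drop"
        self.used += cells
        if self.lossless and self.used > self.pause_threshold * self.size:
            return "pause"  # ask the link partner to pause this priority
        return "accepted"

pools = {3: PriorityPool(1000, lossless=True),    # e.g. FCoE/iSCSI class
         0: PriorityPool(1000, lossless=False)}   # best-effort class
print(pools[3].enqueue(900), pools[0].enqueue(1200))  # -> pause drop
```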

It is important to remember that what we are describing here are the layer 2 Ethernet mechanisms around I/O and network convergence, QoS, and flow control. These are not the only tools available (or in operation), and any datacenter design needs to fully consider what is happening at every level of the network and server stack, including, but not limited to, the TCP/IP layer, the SCSI layer, and indeed the application layer. The interactions between the layers are often very interesting – but that is perhaps the subject for another blog.

In summary, with the set of enhanced convergence protocols now fully standardized and fairly commonly available on many platforms, along with the many capabilities that exist within Ethernet, and the increasing deployment of networks with 10GbE or above, more organizations are benefiting from convergence – but to do so they quickly find that they need to learn about aspects of Ethernet that in the past were perhaps of less interest in a non-converged world.

 

Reaching Nirvana? Maybe Not, But You Can Help In Better Understanding SSD and HDD Performance via SNIA’s Workload I/O Capture Program

Marty Foltyn

Mar 25, 2013


SNIA's Solid State Storage Initiative (SSSI) recently rolled out its new Workload I/O Capture Program, or WIOCP, a simple tool that captures software applications' I/O activity by gathering statistics on workloads at the user level (IOPS, MB/s, response times, queue depths, etc.). The WIOCP helps users to identify "hot spots" where storage performance is creating bottlenecks. SNIA SSSI hopes that users will help the association collect real-use workload statistics by uploading their results to the SNIA website.

How it works
The WIOCP software is a safe and thoroughly tested tool that runs unobtrusively in the background to continuously capture a large set of SSD and HDD I/O metrics that are useful both to the computer user and to SNIA. Users simply enter the drive letters for the drives whose I/O operation metrics are to be collected. The program does not record anything that might be sensitive, including details of your actual workload (for example, files you've accessed). Results are presented in clear and accessible report formats.
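As a rough illustration of what gathering workload statistics at the user level involves (this is not the WIOCP itself, and the record format is invented for the example), per-I/O observations roll up into the headline metrics like this:

```python
# Illustrative only: summarising per-I/O records into the kinds of metrics the
# WIOCP reports (IOPS, MB/s, average response time).
from dataclasses import dataclass

@dataclass
class IoRecord:
    timestamp_s: float      # when the I/O completed
    bytes: int              # transfer size
    response_time_ms: float

def summarise(records):
    if not records:
        return {}
    duration = max(r.timestamp_s for r in records) - min(r.timestamp_s for r in records) or 1.0
    total_bytes = sum(r.bytes for r in records)
    return {
        "IOPS": len(records) / duration,
        "MB/s": total_bytes / duration / 1e6,
        "avg response time (ms)": sum(r.response_time_ms for r in records) / len(records),
    }

sample = [IoRecord(0.0, 4096, 0.8), IoRecord(0.5, 8192, 1.1), IoRecord(1.0, 4096, 0.9)]
print(summarise(sample))
```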

How would the WIOCP help me as a user of computer systems?
Our upcoming white paper gives many reasons why you would want to download and run the WIOCP.  One reason is that empirical file and disk I/O operation performance metrics can be invaluable with regard to theories and claims about disk I/O performance. This is especially so when these metrics reflect the actual file and disk I/O operation activity performed by individual applications/workloads during normal usage. Moreover, such empirical I/O metrics can be instrumental in uncovering/understanding performance “bottlenecks”, determining more precise I/O performance requirements, better matching disk storage purchases to the particular workload usage/needs, and designing/optimizing various disk storage solutions.

How can I help this project?
By downloading and running the WIOCP you help us collect I/O metrics, which can reveal insights into the particular ways that applications actually perform and experience I/O operation activity in "real-life" use. Using this information, SNIA member companies will be able to improve the performance of their solid state storage solutions, including SSDs and flash storage arrays. Help SNIA get started on this project by visiting http://www.hyperIO.com/hIOmon/hIOmonSSSIworkloadIOcaptureProgram.htm and entering the "Download Key Code" SSSI52kd9A8Z. The WIOCP tool will be delivered to your system with a unique digital signature. The tool takes only a few minutes to download and initialize, after which you can return to the task at hand!

If you have any questions or comments, please contact: SSSI_TechDev-Chair@SNIA.org


Red Hat adds commercial support for pNFS

Doug O'Flaherty

Mar 21, 2013


Red Hat Enterprise Linux shipped its first commercially supported parallel NFS client on February 21st. The Red Hat ecosystem can now deploy pNFS with the confidence of engineering, test, and long-term support behind the industry-standard protocol.

 

Red Hat Engineering has been working with the upstream community and several SNIA ESF member companies to backport code and test interoperability with RHEL6. This release supports all IO functions in pNFS, including Direct IO. Direct IO support is required for KVM virtualization, as well as to support the leading databases. Shared workloads and Big Data have performance and capacity requirements that scale unpredictably with business needs.
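As a quick sanity check that Direct IO behaves as expected on a pNFS mount, something along the lines of the sketch below can be used; the mount point and file name are hypothetical, and O_DIRECT is Linux-specific and needs an aligned buffer, which the anonymous mmap provides:

```python
# Minimal sketch (hypothetical path): open a file on a pNFS-backed mount with
# O_DIRECT and read one block, bypassing the client page cache.  O_DIRECT
# wants an aligned buffer and length; an anonymous mmap of one page on Linux
# satisfies both.
import mmap
import os

path = "/mnt/pnfs/testfile"    # hypothetical file on the pNFS mount
buf = mmap.mmap(-1, 4096)      # page-aligned, page-sized buffer

fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
try:
    nread = os.readv(fd, [buf])  # direct read of the first 4 KiB
    print(f"read {nread} bytes via O_DIRECT")
finally:
    os.close(fd)
```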

 

Parallel NFS (pNFS) enables scaling out NFS to improve performance, manage capacity, and reduce complexity. An IETF standard storage protocol, pNFS can deliver parallelized IO from a scale-out NFS array and uses out-of-band metadata services to deliver high-throughput solutions that are truly industry standard. SNIA ESF has published several papers and a webinar specifically focused on pNFS architecture and benefits; they can be found in the SNIA ESF Knowledge Center.


Take Our 10GBASE-T Quick Poll

David Fair

Mar 13, 2013


I've gotten some interesting feedback on my recent 10GBASE-T blog, "How is 10GBASE-T Being Adopted and Deployed?" It's prompted us at the ESF to learn more about your 10GBASE-T plans. Please let us know by taking our three-question poll. I'll share the results in a future blog post.


How DCB Makes iSCSI Better

Allen Ordoubadian

Mar 5, 2013


A challenge with traditional iSCSI deployments is the non-deterministic nature of Ethernet networks. When Ethernet networks only carried non-storage traffic, lost packets were not a big issue, as they would simply be retransmitted. However, as we layered storage traffic over Ethernet, lost packets became a "no-no": storage traffic is not as forgiving as non-storage traffic, and the resulting retransmissions introduce I/O delays that are unacceptable to storage traffic. In addition, traditional Ethernet had no mechanism to assign priorities to classes of I/O.

Therefore a new solution was needed. Short of creating a separate Ethernet network to handle iSCSI storage traffic, Data Center Bridging (DCB) was that solution.

The DCB standard is a key enabler of effectively deploying iSCSI over Ethernet infrastructure. The standard provides the framework for high-performance iSCSI deployments with key capabilities that include:
- Priority Flow Control (PFC): enables "lossless Ethernet," a consistent stream of data between servers and storage arrays. It essentially prevents dropped frames and maximizes network efficiency. PFC also helps to optimize SCSI communication and minimizes reliance on TCP recovery, so the iSCSI flow behaves more reliably.
- Quality of Service (QoS) and Enhanced Transmission Selection (ETS): support protocol priorities and the allocation of bandwidth for iSCSI and IP traffic.
- Data Center Bridging Capabilities eXchange (DCBX): enables automatic, network-based configuration of key network and iSCSI parameters.

With DCB, iSCSI traffic is better balanced over high-bandwidth 10GbE links. From an investment-protection perspective, the ability to support iSCSI and LAN IP traffic over a common network makes it possible to consolidate iSCSI storage area networks with traditional IP LAN traffic networks. There is also another key component needed for iSCSI over DCB. This component is part of the Data Center Bridging Capabilities eXchange (DCBX) standard, and it is called the TCP Application Type-Length-Value, or simply "TLV." The TLV allows the DCB infrastructure to apply unique ETS and PFC settings to specific sub-segments of the TCP/IP traffic. This is done through switches, which can identify the sub-segments based on the TCP socket or port identifier included in the TCP/IP frame. In short, the TLV directs servers to place iSCSI traffic on available PFC queues, which separates storage traffic from other IP traffic. PFC also eliminates data retransmission and supports a consistent data flow with low latency. IT administrators can leverage QoS and ETS to assign bandwidth and priority to iSCSI storage traffic, which is crucial for supporting critical applications.
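To illustrate, in logical terms, what that application TLV conveys (a simplified sketch, not the wire format; the priority value 4 is just an example), the advertised table maps the iSCSI TCP port to a priority, and the host uses it to steer iSCSI connections onto the PFC-protected class:

```python
# Illustrative sketch: the logical content of a DCBX application priority
# table entry and how a host could use it to steer iSCSI traffic onto the
# PFC-protected priority.  TCP port 3260 is the iSCSI target port.
APP_PRIORITY_TABLE = [
    {"selector": "tcp-port", "protocol": 3260, "priority": 4},  # iSCSI
]

DEFAULT_PRIORITY = 0  # everything else stays on the best-effort class

def priority_for_tcp_port(dst_port: int) -> int:
    for entry in APP_PRIORITY_TABLE:
        if entry["selector"] == "tcp-port" and entry["protocol"] == dst_port:
            return entry["priority"]
    return DEFAULT_PRIORITY

# An iSCSI connection to port 3260 gets PCP 4 and lands in the lossless
# (PFC-enabled) queue; web traffic stays at priority 0.
print(priority_for_tcp_port(3260), priority_for_tcp_port(443))  # 4 0
```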

Therefore, depending on your overall datacenter environment, running iSCSI over DCB can improve:
- Performance, by ensuring a consistent stream of data, resulting in "deterministic performance" and the elimination of the packet loss that can cause high latency
- Quality of service, through allocation of bandwidth per protocol for better control of service levels within a converged network
- Network convergence

For more information on this topic or technologies discussed in this blog, please visit some of our other blog articles:
- the "What Up with DCBX" blog and "iSCSI over DCB: Reliability and predictable performance," or check out the IEEE website on DCB
