
Registration Now Open for Storage Developer Conference India - May 25-26 in Bangalore

khauser

May 15, 2017

For the third consecutive year, SNIA will present its highly successful Storage Developer Conference (SDC) in Bangalore, India, on May 25-26, 2017 at the My Fortune Hotel. The 2017 agenda, developed under the supervision of the SNIA India agenda committee, leads off with a keynote by Indian Institute of Science Professor P. Vijay Kumar on Codes for Big Data: Error-Correction for Distributed Storage, followed by Amar Tunballi, Engineering Manager at Red Hat, speaking on Software Defined Storage and Why It Will Continue To Be Relevant. Thursday keynotes will feature Anand Ghatnekar, Country Manager, Cloud Systems, Seagate, on The Power of Edge, and Chirag Shah, IT Transformation Consultant, Dell EMC, on New Age Data Centre. Breakout tracks will feature sessions on storage ecosystem management with SNIA Swordfish™, hybrid clouds, object storage, open storage, NVMe over Fabrics, and performance optimization. Also featured at the conference is a Cloud Interoperability Plugfest and the opportunity to network with partners Red Hat, Dell EMC, IBM, Mindteck, NetApp, and Tata Consultancy Services. Check out the full schedule at https://www.snia.org/events/sdc-india/2017-sdc-india-agenda. According to Paul Talbut, SNIA India Executive Director, SDC India is "ideal for storage software and hardware developers, storage product and solution architects, product line CTOs, storage product customer support engineers, and in-house IT development staff." Talbut emphasized that SDC India "is the place to learn from the experts, and get insights on the tools, technologies, and tactics needed in data storage management and the cloud." Learn more: conference registration and full details on SDC India can be found at https://www.snia.org/sdcindia.


Too Proud to Ask Webcast Series Continues – Getting from Here to There Pod

Fred Zhang

May 4, 2017

As part of the SNIA Ethernet Storage Forum's successful "Everything You Wanted To Know About Storage But Were Too Proud To Ask" series, we've discussed numerous topics about storage devices, protocols, and networks. As we examine some of these topics further, we begin to tease out some subtle nuances; subtle, yet important nevertheless. On May 9th we'll take on the terms and concepts that affect Storage Architectures as a whole in "Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Sepia – Getting from Here to There." In particular, we'll be looking at those aspects that can help or hinder storage systems inside the network:
  • Encapsulation vs. Tunneling
  • IOPS vs. Latency vs. Jitter
  • Quality of Service (QoS)
Each of these topics has a profound impact on storage designs and performance, but they are often misunderstood. We're going to help you become clear on all of these very important storage concepts so that you can grok storage just a little bit more. We hope you will join us on May 9th at 10:00 am PT and that you won't be "too proud" to ask our experts your questions! Register today. Think there may be other storage topics you feel you should understand better? Check out the rest of the webcasts in this series here. Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.



SMB3 – These Questions Rock!

John Kim

Apr 24, 2017

Earlier this month, the SNIA Ethernet Storage Forum hosted a live webcast on Server Message Block (SMB), "Rockin' and Rollin' with SMB3." Presenting was Ned Pyle, Microsoft SMB Program Manager. If you missed the live event, I encourage you to watch it on-demand. We had a lot of questions from the big audience this event drew, so as promised, here are answers to them all.

Q. Other than that audit setup, is there a way to determine, via the OS, which SMB version is in use?
A. No. Network captures alone will tell you, but Windows doesn't track this explicitly, other than the SMB1 auditing we added specifically for the task of identifying removal options.

Q. Old Linux + NetApp 7-Mode + 2003 Server = stuck with SMB1.0?
A. You have to ask NetApp. :)

Q. SMB 3.1.1 over Ethernet... can you discuss/compare with SMB 3.1.1 over Infiniband?
A. If the question is "what's better, Infiniband or Ethernet," my answer is always: it depends. I really don't want to get into a competitive conversation under the guise of SNIA. I simply recommend looking at the vendor stories and making an informed decision. Overall, Ethernet/TCP/IP versions like RoCE and iWARP configurations are generally less expensive than Infiniband ones. They all have tremendous performance. They all have their various ups and downs.

Q. Do you have statistics regarding SMB Direct adoption?
A. It's tricky, as our telemetry for Server usage is quite inaccurate due to firewall rules preventing servers from reaching the Internet. I can say indirectly that we know of thousands of customer deployments.

Q. What's the name of the IO application?
A. DiskSpd.

Q. I don't believe your I/O data tests; wouldn't you need to trunk 17 10 Gigabit network cards to achieve 168 gigabit I/O capability?
A. This was a misunderstanding: you thought I said 10Gb, but it was 100Gb. We used 100Gb RDMA NICs in this demo with RoCEv2. The bottleneck was the storage at that point; the network had plenty of bandwidth left over.

Q. These are great, but how many of these new features will end up locking out FOSS/GPL implementations of SMB such as Samba?
A. Absolutely not! We work with the Samba team and Linux to ensure that SMB can be broadly deployed with all of its capabilities inside open source software.

Q. NetApp supports CA shares (which use transparent failover) in two use cases: SQL over SMB and Hyper-V over SMB3.
A. This sounds like someone from NetApp stating a fact, so I will simply say "good!" :)

Q. Can you please post links to the tools mentioned in this presentation, and I/O tests? Is there a comparison using Iometer?
A. Here you go:
  • https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
  • https://github.com/Microsoft/diskspd
  • https://github.com/Microsoft/diskspd/tree/master/Frameworks/VMFleet
Q. You are forced to use SMB1 because of the Windows 2003 issue?
A. Windows Server 2003 and XP (and older, like Windows 2000) all use SMB1. If they are still around, you will need to leave SMB1 enabled on any machines talking to them.

Q. When will Microsoft officially drop support for SMB1?
A. Overall for the protocol, there is no timeline. It is deprecated, however, so no further work will be done on SMB1 other than critical security patches. SMB1 will start being removed *by default* in a coming release of Windows Server and the Windows 10 client. This doesn't mean totally removed forever, but instead "missing by default," where you must directly opt in to adding it back. It will be done on a per-SKU basis, so that enterprises are likely to see it first, since they are better equipped to understand it and less likely to need SMB1.

Q. Is there a way to change block size in SMB3?
A. In the SMB2_READ processing section 3.3.5.12 (https://msdn.microsoft.com/en-us/library/cc246729.aspx): The server SHOULD<296> fail the request with STATUS_INVALID_PARAMETER if the Length field is greater than Connection.MaxReadSize. If Connection.SupportsMultiCredit is TRUE, the server MUST validate CreditCharge based on Length, as specified in section 3.3.5.2.5. If the validation fails, it MUST fail the read request with STATUS_INVALID_PARAMETER. There is similar text for SMB2_WRITE in section 3.3.5.13 (https://msdn.microsoft.com/en-us/library/cc246730.aspx). Then, off to SMB2_NEGOTIATE in section 3.3.5.4 (https://msdn.microsoft.com/en-us/library/cc246768.aspx) to discover:
  • MaxReadSize is set to the maximum size, in bytes, of the Length in an SMB2 READ Request (section 2.2.19) that the server will accept on the transport that established this connection. This value SHOULD<231> be greater than or equal to 65536. Connection.MaxReadSize MUST be set to MaxReadSize.
  • MaxWriteSize is set to the maximum size, in bytes, of the Length in an SMB2 WRITE Request (section 2.2.21) that the server will accept on the transport that established this connection. This value SHOULD<232> be greater than or equal to 65536. Connection.MaxWriteSize MUST be set to MaxWriteSize.
<231> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxReadSize to 65536. Otherwise, MaxReadSize is set based on the following table.

Windows version\Connection.Dialect | 2.0.2 | All other SMB2 dialects
Windows Vista SP1\Windows Server 2008 | 65536 | N/A
Windows 7\Windows Server 2008 R2 | 65536 | 1048576
Windows 8 without [MSKB-2934016]\Windows Server 2012 without [MSKB-2934016] | 65536 | 1048576
All other SMB2 servers | 65536 | 8388608

<232> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxWriteSize to 65536. Otherwise, MaxWriteSize is set based on the following table.

Windows version\Connection.Dialect | 2.0.2 | All other SMB2 dialects
Windows Vista SP1\Windows Server 2008 | 65536 | N/A
Windows 7\Windows Server 2008 R2 | 65536 | 1048576
Windows 8 without [MSKB-2934016]\Windows Server 2012 without [MSKB-2934016] | 65536 | 1048576
All other SMB2 servers | 65536 | 8388608
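
The practical consequence is that a client cannot pick an arbitrary "block size"; it has to clamp each request to the negotiated limits and pay the matching credit charge. As a rough illustration (not from the webcast), here is a minimal Python sketch of the arithmetic quoted above from MS-SMB2 sections 3.1.5.2 and 3.3.5.12; the function and variable names are ours, not the spec's:

```python
# Sketch of the MS-SMB2 size/credit arithmetic. Illustrative helpers only,
# not part of any real SMB client library.

def credit_charge(payload_size: int) -> int:
    """CreditCharge per MS-SMB2 3.1.5.2: one credit per started 64 KiB."""
    return (max(payload_size, 1) - 1) // 65536 + 1

def validate_read(length: int, max_read_size: int,
                  supports_multi_credit: bool, requested_charge: int) -> str:
    """Mimics the server-side checks quoted above from section 3.3.5.12."""
    if length > max_read_size:
        return "STATUS_INVALID_PARAMETER"  # Length exceeds Connection.MaxReadSize
    if supports_multi_credit and requested_charge < credit_charge(length):
        return "STATUS_INVALID_PARAMETER"  # CreditCharge too small for Length
    return "OK"

# A 1 MiB read against a server that negotiated MaxReadSize = 1048576:
print(credit_charge(1048576))                      # 16 credits
print(validate_read(1048576, 1048576, True, 16))   # OK
print(validate_read(8388608, 1048576, True, 128))  # STATUS_INVALID_PARAMETER
```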
Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.



Buffers, Queues and Caches Explained

John Kim

Apr 19, 2017

Finely tuning buffers, queues and caches can make your storage system hum, and that's exactly what we discussed in our recent SNIA Ethernet Storage Forum webcast, "Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod." If you missed it, it's now available on-demand. In this blog, you'll find detailed answers from our panel of experts to all the great questions we received during the live event. I also encourage you to check out the other on-demand webcasts in this "Too Proud To Ask" series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

Q. Question on cache: what would be the right size of cache at each point (clients / front-end connect / storage controller / back-end connect / physical storage)?
A. Great question! The main consideration for cache sizing at any point is the workload. If the workload is conducive to cache benefits, then the more cache the merrier! However, when the workload is not conducive to cache, adding more cache capacity won't be beneficial. For example, if the workload is 100% sequential reads of small 4K IOs, having the data pre-loaded into cache is going to be extremely helpful, and increasing the size of such cache at the end-point will be good. If the workload is random, and the IO size is changing, pre-fetching data into cache may not be a good idea. Similarly, with write cache, the benefit is realized two-fold: first, when the write is stored in cache and ack'ed back to the host (such a write is typically called "dirty," because it hasn't been flushed back to the disk), and second, when the dirty write is overwritten by the host before it is flushed. Any other combination of workloads and IO will only get partial benefit from the cache. Sizing cache is a very difficult exercise and there are no universal answers. Every implementation has its own pluses and minuses.

Q. Isn't a higher queue depth increasing latency as well, so applications would run slower as they are waiting longer for IO to complete?
A. The answer to this is very dependent on the environment. In general, having more outstanding operations would increase the load on the interconnects and storage media, which would result in the per-IO latency increasing. The alternative is having a small queue depth, which may produce consistently lower per-IO latency at the expense of less throughput and IOPS. There are numerous techniques for dealing with mixed storage traffic, low-latency and high-throughput, such as multi-queues, out-of-order completions, immediate and delayed data transfers in-line, ready-to-transfer, and policies. The NVM media latency roadmap is also helping with these types of latency vs. throughput decisions by enabling devices that achieve full throughput at very low queue depths.

Q. Does the SCSI protocol have a max queue depth of 32?
A. No, the SCSI Architecture Model allows for up to 64 bits for the command identifier field, and each of the SCSI transports (iSCSI, SAS, ...) defines a maximum within that range. There may be implementation-dependent SCSI endpoints that define smaller ranges.

Q. How would a distributed software-defined storage technology deal with queue depth, and how can this be advantageous or not advantageous?
A. Interesting question. Distributed software-defined storage is by definition made up of multiple autonomous layers of software components orchestrated to provide stable storage. These types of systems will have many outstanding operations (queue depth) at multiple stages and layers. It's also not uncommon to see SDS file systems front-ended with block-based protocols, such as iSCSI, which enable the initiators to build up large queue depths of operations.

Q. Are queue depth and buffer the same?
A. No. Queue refers to command and response queues; buffers refer to in-flight data buffers. Command and response queues often contain pointers to these buffers embedded in the read or write commands.

Q. Are caches and buffers made of the same silicon that makes up SSD disks? Which one is faster?
A. As a general idea, yes: SSDs, RAM, caches, and buffers are all made from silicon. If we dig a little deeper, device caches and buffers are typically made of high-speed static random access memory (SRAM), which is faster than the slower and cheaper dynamic RAM (DRAM) used for main memory. Modern SSDs utilize an even slower memory, commonly known as Flash memory, and we differentiate that type of storage by its structure: Single-Level Cell (SLC), Multi-Level Cell (MLC), etc. There are some SSDs that are made out of DRAM, too. And then there are some newer technologies, like NVDIMM, 3D XPoint, etc. So, while the underlying physical material is still the same silicon, it's the architecture that makes all the difference.

Q. In PFC, if there are pending items in P1, can P2 or P3 etc. go ahead?
A. Yes. Priority Flow Control (PFC, also called Per Priority Pause, though rarely) is designed specifically to pause traffic on only one priority, allowing the remaining priority Classes of Service to work according to their configurations. So, for example, if PFC were to pause Priority Queue 1, and Priority Queue 3 also had a "no-drop" configuration but was not having any issues, PFC on Queue 1 would be triggered but PFC on Queue 3 would not. In reality, having more than one no-drop lane on a link is very, very rare, but it does illustrate that PFC operates on a per-priority basis, not on the whole link.

Q. Do all Ethernet-based NVMe-oF (NVMe over Fabrics) implementations require some form of Data Center Bridging (DCB)? Or are there versions of Ethernet-based NVMe-oF (RoCE & iWARP) that run over standard Ethernet without needing DCB?
A. Both iWARP and RoCE can be run without DCB. To maintain peak performance, either DCB or other flow control mechanisms like ECN are recommended.

Q. Do server devices automatically honor the pause frame, or does it require configuration?
A. I am assuming "server devices" refers to Ethernet ports on a server. It depends on the default settings of the NIC or LOM, or those loaded by the driver during initialization. Generally speaking, NIC devices that support PFC also support DCBX (Data Center Bridging Exchange). DCBX is a protocol that allows an end device, like a NIC, to get its proper configuration settings from the switch. That means that in an environment where PFC needs to be assigned to a specific Class of Service (CoS), the switch will send the NIC the proper settings during the setup configuration.

Q. Is it mandatory for all devices in the network, host and storage to have the same speed ports?
A. No.

Q. What are the theoretical devices for modeling and analyzing cache, buffer or queue behaviors?
A. Computers with software :)

Q. What if I have really large writes and they fill up the cache quickly? Is there a way to bypass the large writes?
A. The time of the presentation limited the amount of material we were able to share. One of the subjects we didn't talk about was the cache software algorithm. Most storage vendors manage the cache by not letting extremely large IOs be cached. Back in the spinning-storage era, an IO of 2MB would typically be considered too large to be cached, and would be sent directly to disk.

Q. What will be the use of cache in all-flash storage, please? As flash is the highest performance disk.
A. See the answer to the question above, "Are caches and buffers made of the same silicon that makes up SSD disks? Which one is faster?" Hardware caches and buffers are typically made out of the fastest memory, then comes RAM, and last are the SSDs, aka flash disks. Therefore, storing data on a faster layer is still beneficial to performance.

Q. Does the LUN Queue Depth include the Queue Depth discussed here?
A. Yes, SCSI LUN queue depth enables the initiator(s) to have multiple outstanding I/O operations in flight.

Q. Will you use a queuing algorithm to manage the IO queue? If your answer is yes, which algorithm will you use?
A. There are several storage protocols that define mechanisms for a target to dynamically adjust the queue depth available to the initiator through various forms of credit exchanges. Having these types of mechanisms enables the target to implement multi-initiator load balancing across targets.

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.
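
The queue depth questions above all circle around one steady-state relationship, Little's Law: IOPS = outstanding IOs / average latency. Below is a small sketch, not from the webcast, showing why raising queue depth stops buying throughput once a device saturates; the device numbers are invented round figures for illustration:

```python
# Little's Law sketch: IOPS = outstanding_ios / latency at steady state.
# The device model (100k IOPS ceiling, 100 us base latency) is an assumed
# example, not a measurement from the webcast.

BASE_LATENCY_S = 0.000100   # 100 microseconds per IO when uncontended
MAX_IOPS = 100_000          # device saturation point (assumed)

def model(queue_depth: int) -> tuple[float, float]:
    """Return (iops, avg_latency_seconds) for a given queue depth."""
    uncontended = queue_depth / BASE_LATENCY_S   # Little's Law, no queuing
    iops = min(uncontended, MAX_IOPS)            # cap at device saturation
    latency = queue_depth / iops                 # Little's Law rearranged
    return iops, latency

for qd in (1, 4, 16, 64, 256):
    iops, lat = model(qd)
    print(f"QD={qd:4d}  IOPS={iops:9.0f}  avg latency={lat*1e6:7.1f} us")
# Past roughly QD=10 in this model, IOPS stays flat at 100k while latency
# grows linearly: extra queue depth buys throughput only until saturation.
```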



Storage Expert Takes on Hyperconverged Questions

John Kim

Apr 17, 2017

Last month, we were fortunate enough to have Greg Schulz, analyst and founder of Server Storage IO, as a guest speaker at our SNIA Ethernet Storage Forum webcast, "What Does Hyperconverged Mean to Storage." If you missed it, it's now available on-demand. Greg fielded many great questions during the live event, but we didn't have time to get to them all. So here they are:

Q. What is the difference between Converged Infrastructure (CI) and Hyperconverged Infrastructure (HCI)?
A. HCI is aggregated: you scale compute and storage in lock step. Converged is disaggregated: you can scale the compute independently of the storage. There are some software solutions that can support both hyper-converged (aggregated) and converged (disaggregated) deployments.

Q. What is your definition of "Little Data"?
A. Little Data is anything that's not Big Data. It encompasses traditional databases, traditional structured, semi-structured and even some unstructured data.

Q. With convergence, what is the impact on the IT organization?
A. There is an opportunity for organizations to converge how they manage data infrastructure resources and services delivery. In other words, the technology can be leveraged to help an organization itself converge. Another impact is how converged solutions are protected and backed up, and how BC/BR/DR and related management are done. Traditionally there are separate IT teams for compute, storage, and networking, especially in a large organization. New technology solutions may allow an organization to converge those teams.

Q. Is there a hybrid strategy, where a complete information system is composed of HCI/CI building blocks? If yes, what management tools would span these components?
A. Sure, why not? Certainly you can converge your environment into a particular CI/HCI solution or approach; likewise, different CI/HCI solutions can co-exist along with other solutions in a given environment in hybrid ways. Have a hybrid strategy that looks at how technologies and solutions adapt to your needs and environment. Focus on how it's going to work for you, vs. you having to work for them.

Q. What does FUZE stand for?
A. FUZE is not an acronym. It is the actual fuzing, as in melding and bringing together things – literally fuzing things together.

Q. Do HCI vendors re-balance (compute, I/O, storage) automatically as more nodes are added?
A. Solutions vary in how they rebalance workloads. Some are dynamic, while others rebalance on intervals; it varies how, when and what they rebalance. So, as you add capacity and make changes, you need to make sure resources are properly allocated to address performance.

Q. Can't you offload those CPU cycles caused by I/O to another CPU?
A. That's an interesting question. Yes, move the application to another CPU. There is software that will leverage the resources on another CPU. Most HCI and CI solutions are running on a stack that requires hardware somewhere.

Q. This discussion has touched on compute and storage scaling. What about the network between compute in the CI/HCI infrastructure and external to other compute, databases, or end-users?
A. Both CI and HCI need to connect to other resources, but in most cases the highest levels of network traffic are inside the CI or HCI stack, because the compute and storage resources are contained within. Their connections to outside clients or servers for data exchange, application integration, or client access are important but usually not very demanding on network bandwidth. (External connections for storage remote replication or backup could be bandwidth-intensive.)

Q. How can current Enterprise Storage products blend with either CI or HCI? Enterprise Storage is basically a centralized storage architecture; however, HCI is built mostly on a distributed storage architecture. So how can current Enterprise Storage vendors show use cases to the customer to sell their Enterprise Storage either as part of the HCI solution or alongside HCI?
A. Generally enterprise storage products can be included in CI but are not blended with HCI. For example, Dell EMC, Cisco (with NetApp and other storage vendors), IBM and Oracle offer CI solutions that include enterprise storage arrays in the rack. Most HCI platforms do not interoperate with enterprise storage arrays because the HCI platforms include their own storage. They can co-exist with enterprise storage arrays, and that's how most customers deploy them: some workloads run on the HCI infrastructure while others continue to use enterprise storage arrays.

Q. One of the HCI selling points is simplicity and cost reduction vs. a la carte. It seems from what is being presented that may not be the case. Can you elaborate on where HCI may become more complex or costly?
A. It comes down to value. You can buy all the components yourself and glue them all together, and you may come up with a lower total cost, but what is the value of your time? What is the cost of staff time to evaluate, test, deploy and maintain? The total value must be considered. It's possible that HCI will be more costly than a disaggregated deployment that separates compute and storage, but this depends heavily on the workload and the specific vendor product implementation.

Q. Current HCI "full stack" solutions claim compute and storage convergence, but what about the network? Given the east/west traffic introduced by HCI solutions, what networking solutions should customers be looking at?
A. Most of the common HCI solutions are packaged with server, storage and compute, and most have networking included as well, typically the network adapters and sometimes also the switches. Some even have a backend software-defined networking (SDN) capability as part of their stack.

Q. Related to the HCI answer, what about vendors who allow for storage growth and/or server (compute) and storage additions? This allows for aggregated and disaggregated...yes?
A. Most HCI vendors require compute and storage to be added simultaneously, though many support different nodes with different ratios of compute and storage. This allows customers to change the ratio of compute and storage by adding different node types. And yes, some HCI vendors also support both a hyper-converged and a disaggregated model, with the disaggregated model allowing compute and storage to be added separately.

Q. What are the tools available to make HCI work in a hybrid load environment with different workload requirements, e.g. VDI and databases?
A. There are tools for moving and migrating applications, workloads, systems and VMs into CI/HCI environments, likewise for tuning, optimizing, gaining insight, analytics and reporting. Most CI/HCI solutions have tools built into them for optimizing PACE (Performance, Availability, Capacity, Economics) attributes along with server compute, memory, storage, and I/O resources. Some CI/HCI solutions are optimized for VDI/workspaces, while others are able to support general workloads including databases, and some even support HPC/SC or other specialized workloads.

Q. Does network performance affect HCI or CI performance?
A. Sometimes. Most hybrid HCI nodes are happy with the bandwidth of 10GbE, but if the nodes are all-flash or have many disks, then a faster speed may be required to avoid a network bottleneck. Network latency could affect HCI or CI performance in some cases, especially with all-flash storage. Of course, a reliable network helps ensure reliable CI/HCI operations.

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.
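
That last answer is easy to sanity-check with back-of-envelope arithmetic: compare the aggregate throughput a node's drives can source against what its NICs can carry. A small sketch follows; the drive and NIC figures are assumed round numbers for illustration, not measurements from the webcast:

```python
# Back-of-envelope check: can a node's NICs carry what its drives can push?
# All device numbers below are assumed round figures for illustration.

GBIT = 1e9 / 8  # bytes per second in one gigabit per second

def node_bottleneck(n_drives: int, drive_mbps: float, nic_gbps: float) -> str:
    drive_bw = n_drives * drive_mbps * 1e6  # aggregate drive bandwidth, B/s
    nic_bw = nic_gbps * GBIT                # NIC line rate, B/s
    side = "network-bound" if drive_bw > nic_bw else "storage-bound"
    return f"drives {drive_bw/1e9:5.1f} GB/s vs NIC {nic_bw/1e9:5.2f} GB/s -> {side}"

# Hybrid node: 8 HDDs at ~150 MB/s each behind one 10GbE port
print(node_bottleneck(8, 150, 10))    # ~1.2 vs ~1.25 GB/s: roughly balanced
# All-flash node: 8 NVMe SSDs at ~2000 MB/s each behind the same 10GbE port
print(node_bottleneck(8, 2000, 10))   # 16 vs ~1.25 GB/s: badly network-bound
```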



Managing Your Computing Ecosystem

Kristen Hauser

Apr 12, 2017


By George Ericson, Distinguished Engineer, Dell EMC; Member, SNIA Scalable Storage Management Technical Working Group, @GEricson

Introduction

This blog is part one of a three-part series recently published on “The Data Cortex”, which represents the thoughts and opinions from members of the CTO Team of Dell EMC’s Data Protection Division. The author, George Ericson, has been actively participating on the SNIA Scalable Storage Management Technical Working Group which has been developing the SNIA Swordfish storage management specification.

SNIA Swordfish is an extension to the Distributed Management Task Force's (DMTF's) open industry Redfish® standard, and the combination offers a unified approach to managing storage and servers in environments like hyperscale and cloud infrastructures. This makes a single portal convenient for obtaining feedback on either specification. SNIA's Storage Management Initiative (SMI) has set up swordfishforum.com as an easy link that goes to the Redfish Forum site. Please visit often and share your thoughts.

Overview

There is a very real opportunity to take a giant step towards universal and interoperable management interfaces that are defined in terms of what your clients want to achieve. In the process, the industry can evolve away from the current complex, proprietary and product-specific interfaces.

You’ve heard this promise before, but it’s never come to pass. What’s different this time? Major players are converging storage and servers. Functionality is commoditizing. Customers are demanding it more than ever.

Three industry-led open standards efforts have come together to collectively provide an easy to use and comprehensive API for managing all of the elements in your computing ecosystem, ranging from simple laptops to geographically distributed data centers.

This API is specified by:

  • the Open Data Protocol (OData) from OASIS
  • the Redfish Scalable Platforms Management API from the DMTF
  • the Swordfish Scalable Storage Management API from the SNIA

One can build a management service conformant to the Redfish or Swordfish specifications that provides a comprehensive interface for the discovery of the managed physical infrastructure, as well as for the provisioning, monitoring, and management of the environmental, compute, networking, and storage resources provided by that infrastructure. That management service is an OData-conformant data service.
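
To make that concrete, here is a minimal sketch of what talking to such a service looks like. The `/redfish/v1/` service root and the `Systems` collection are defined by the Redfish specification; the host name and credentials are placeholders:

```python
# Minimal sketch: walk a Redfish/Swordfish service from its service root.
# bmc.example.com and the credentials are placeholders; /redfish/v1/ is the
# service root defined by the Redfish specification.
import requests

BASE = "https://bmc.example.com"
auth = ("admin", "password")  # placeholder credentials

# The service root describes which resource collections this service offers.
root = requests.get(f"{BASE}/redfish/v1/", auth=auth, verify=False).json()
print(root.get("Name"), root.get("RedfishVersion"))

# Follow an @odata.id link to enumerate systems, if the service exposes any.
systems_url = root.get("Systems", {}).get("@odata.id")
if systems_url:
    systems = requests.get(f"{BASE}{systems_url}", auth=auth, verify=False).json()
    for member in systems.get("Members", []):
        print(member["@odata.id"])  # e.g. /redfish/v1/Systems/1
```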

These specifications are evolving and certainly are not complete in all aspects. Nevertheless, they are already sufficient to provide comprehensive management of most features of products in the computing ecosystem.

This post and the following two will provide a short overview of each.

OData

The first effort is the definition of the Open Data Protocol (OData). OData v4 specifications are OASIS standards that have also begun the international standardization process with ISO.

Simply asserting that a data service has a Restful API does nothing to assure that it is interoperable with any other data service. More importantly, Rest by itself makes no guarantees that a client of one Restful data service will be able to discover or know how to even navigate around the Restful API presented by some other data service.

OData enables interoperable utilization of Restful data services. Such services allow resources, identified using Uniform Resource Locators (URLs) and defined in an Entity Data Model (EDM), to be published and edited by Web clients using simple HTTP messages. In addition to Redfish and Swordfish described below, a growing number of applications support OData data services, e.g. Microsoft Azure, SAP NetWeaver, IBM WebSphere, and Salesforce.

The OData Common Schema Definition Language (CSDL) specifies a standard metamodel used to define an Entity Data Model over which an OData service acts. The metamodel defined by CSDL is consistent with common elements of the UML v2.5 metamodel. This fact enables reliable translation to the programming language of your choice.

OData standardizes the construction of Restful APIs. OData provides standards for navigation between resources, for request and response payloads, and for operation syntax. It specifies the discovery of the entity data model for the accessed data service. It also specifies how resources defined by the entity data model can be discovered. While it does not standardize the APIs themselves, OData does standardize how payloads are constructed, a set of query options, and many other items that often differ across current Restful data services. OData specifications utilize standard HTTP, AtomPub, and JSON. Also, standard URIs are used to address and access resources.
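
For example, OData's standardized system query options mean the same URL conventions work against any conformant service. The sketch below builds such a request; the service URL and entity set are hypothetical, while $filter, $select, $orderby, and $top are defined by the OData URL conventions:

```python
# Sketch of OData v4 system query options. The service and entity set are
# hypothetical; the $-prefixed options come from the OData URL conventions.
from urllib.parse import quote

base = "https://example.com/odata/Volumes"        # hypothetical entity set
options = {
    "$filter": "CapacityBytes gt 1099511627776",  # volumes larger than 1 TiB
    "$select": "Name,CapacityBytes",              # project only two properties
    "$orderby": "CapacityBytes desc",             # largest first
    "$top": "10",                                 # first ten results
}
url = base + "?" + "&".join(f"{k}={quote(v)}" for k, v in options.items())
print(url)
# Any OData v4 service must interpret these options the same way, which is
# what makes clients portable across services.
```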

The use of the OData protocol enables a client to access information from a variety of sources including relational databases, servers, storage systems, file systems, content management systems, traditional Web sites, and more.

Ubiquitous use will break down information silos and will enable interoperability between producers and consumers. This will significantly increase the ability to provide new and richer functionality on top of the OData services.

The OData specifications define:

  • the OData protocol itself, which specifies request and response semantics over HTTP
  • URL conventions, including the system query options such as $filter, $select, and $top
  • the Common Schema Definition Language (CSDL) used to define entity data models
  • the JSON and Atom payload formats

Conclusion

While Rest is a useful architectural style, it is not a "standard," and the variances in Restful APIs to express similar functions mean that there is no standard way to interact with different systems. OData is laying the groundwork for interoperable management by standardizing the construction of Restful APIs. Next up – Redfish.

