
An FAQ to Make Your Storage System Hum

Fred Zhang

May 23, 2017

In our most recent “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series – Part Sepia – Getting from Here to There, we discussed terms and concepts that have a profound impact on storage design and performance. If you missed the live event, I encourage you to check it out on-demand. We had many great questions on encapsulation, tunneling, IOPS, latency, jitter, and quality of service (QoS). As promised, our experts have gotten together to answer them all.

Q. Is there a way to measure jitter?
A. Jitter can be measured directly as a statistical function of the latency, typically as the variance or standard deviation of the latency. For example, a storage device might show an average latency of 5ms with a standard deviation of 1.5ms. This means roughly 95% of the transactions have a latency between 2ms and 8ms (average latency plus or minus two standard deviations). However, many storage customers measure jitter indirectly by reporting the 99.9%, 99.99%, or 99.999% latency. For example, if my storage system has a 99.99% latency of 8ms, it means 99.99% of transactions have latency <=8ms and 1 in 10,000 transactions have latency >8ms. Percentile latency is an indirect measure of jitter, but it is often easier to calculate and understand than the actual jitter.

Q. Can jitter be easily characterized for storage, media, and networks? How, and what tools are available for doing this?
A. Jitter is usually easy to measure on a network using standard network monitoring and reporting tools. It may or may not be easy to measure on storage systems or storage media, depending on the tools available (either built into the storage OS or in an external management or monitoring tool). If you can record the latency of each transaction or packet, then it's easy to calculate and show the jitter using standard statistical measures such as the variance or standard deviation of the latency. What most customers do is just measure the 99.9%, 99.99%, or 99.999% latency. This is an indirect measure of jitter, but it is often much easier to report and understand than the actual jitter.
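Both measures are easy to compute once you log per-transaction latencies. Here is a minimal Python sketch of the direct measure (standard deviation) and the indirect one (percentile latency); the sample values are invented for illustration, and a real system would report 99.99% or 99.999% figures from far larger samples:

    import statistics

    # Per-transaction latencies in milliseconds, e.g. pulled from an I/O
    # trace (hypothetical sample data; real traces have many more points).
    latencies_ms = [4.2, 5.1, 4.8, 5.9, 3.7, 5.3, 6.4, 4.9, 5.0, 12.8]

    mean_ms = statistics.mean(latencies_ms)
    jitter_ms = statistics.stdev(latencies_ms)  # direct: standard deviation

    # Indirect: percentile latency (99th here, given the tiny sample).
    p99_ms = statistics.quantiles(latencies_ms, n=100)[98]

    print(f"mean latency     : {mean_ms:.2f} ms")
    print(f"jitter (stdev)   : {jitter_ms:.2f} ms")
    print(f"99th pct latency : {p99_ms:.2f} ms")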
Q. Generally, IOPS numbers are published for a particular block size, like an 8K read/write size, but in reality the I/O requests per second could be of mixed sizes. What is your perspective on this?
A. Most IOPS benchmarks test only one I/O size at a time. Most individual real workloads (for example, databases) also use only one I/O size. It is true that a storage controller or HDD/SSD might need to support multiple workloads simultaneously, each with a different I/O size. While it is possible to run benchmarks with a mix of different I/O sizes, it's rarely done because there are too many workload combinations to test and publish. Some storage devices do not perform well if they must handle both small random and large sequential workloads simultaneously, so a smart storage controller might assign different workload types to different disk groups.

Q. One often misconfigured parameter is queue depth. Can you talk about how this relates to IOPS, latency, and jitter?
A. Queue depth indicates how many tasks or I/Os can be lined up for a particular controller, interface, or CPU. A higher queue depth ensures the CPU (or controller or interface) always has a new task to do as soon as it finishes its current task(s). This can result in higher IOPS because the CPU is less likely to have idle time between transactions. But it could also increase latency because the CPU is more likely to be multi-tasking and context switching between different tasks or workloads.

Q. Can you please repeat all your examples of tunneling? GRE, MPLS, what others? How can it be IPv4 via IPv6?
A. VXLAN, LISP, GRE, MPLS, and IPsec are all examples. Any time you encapsulate one protocol inside another, send it across the network, and decapsulate it at the other end to recover the original frame, that process is tunneling. In the IPv6-over-IPv4 case we showed, you take an original IPv6 frame, with its IPv6 header and its IPv6 source and destination addresses intact, and wrap it in an IPv4 header so it can cross an IPv4-enabled network; that is “tunneling” IPv6 over the IPv4 network.
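To make the IPv6-over-IPv4 case concrete, here is a minimal sketch using the Python scapy library (our choice for illustration, not a tool from the webcast); the addresses are RFC documentation examples. The outer IPv4 header carries IP protocol number 41, which marks the payload as an encapsulated IPv6 packet (6in4 tunneling):

    from scapy.all import IP, IPv6, ICMPv6EchoRequest

    # Inner packet: an ordinary IPv6 ping, addressed end to end in IPv6.
    inner = IPv6(src="2001:db8::1", dst="2001:db8::2") / ICMPv6EchoRequest()

    # Outer header: IPv4 between the two tunnel endpoints. Protocol 41
    # tells the far end that an intact IPv6 packet rides inside.
    outer = IP(src="192.0.2.1", dst="198.51.100.1", proto=41) / inner

    outer.show()  # prints the IPv4 header, then the unmodified IPv6 frame

The receiving tunnel endpoint strips the IPv4 header and forwards the original IPv6 frame onward, which is the decapsulation half of the process.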
Q. I think it'd be possible to configure QoS to a point that exceeds the system capacity. Are there any safeguards for avoiding this scenario?
A. Some types of QoS allow over-provisioning and others do not. For example, a QoS that imposes only maximum limits (and no minimum guarantees) on workloads might not prevent many workloads together from exceeding system capacity. If the QoS allows over-provisioning, then you should use system monitoring and alerts to warn you when system capacity has been exceeded, or when any workloads are not getting their minimum guaranteed performance.

Q. Is there any research being done on using storage analytics along with artificial intelligence (AI) to assist with QoS?
A. There are a number of storage analytics products, both third-party and storage-vendor-specific, that help with QoS. Whether any of these tools may be described as using AI is debatable, since we're in the early days of using AI to do much in the storage arena. There are many QoS research projects, and no doubt they will eventually make their way into commercially available products if they prove useful.

Q. Are there any methods (measurements) to calculate IOPS/MBps in tier-capable storage? Would it be the wrong metric if we estimate based on a middle tier, for example tier 2 (between 1 and 3)?
A. This question needs refinement, since tiering is sometimes a cache model rather than a data movement model. And knowing the answer may not actually help! Vendors do have tools (normally internal, since they are quite complex) that can help with the planning of tiered storage.

By now, we hope you're not “too proud” to ask some of these storage networking questions. We've produced four other webcasts in this “Everything You Wanted To Know About Storage” series to date. They are all available on-demand. And you can register here for our next one on July 6th, where we'll bring in experts to discuss:
  • Storage APIs and POSIX
  • Block, File, and Object storage
  • Byte Addressable and Logical Block Addressing
  • Log Structures and Journaling Systems
The Ethernet Storage Forum team and I hope to see you there!    


Security and Privacy in the Cloud

Alex McDonald

May 22, 2017

When it comes to the cloud, security is always a topic for discussion. Standards organizations like SNIA are in the vanguard of describing cloud concepts and usage, and (as you might expect) are leading on how and where security fits in this new world of dispersed and publicly stored and managed data. On July 20th, the SNIA Cloud Storage Initiative is hosting a live webcast “The State of Cloud Security.” In this webcast, I will be joined by SNIA experts Eric Hibbard and Mark Carlson who will take us through a discussion of existing cloud and emerging technologies, such as the Internet of Things (IoT), Analytics & Big Data, and more, and explain how we’re describing and solving the significant security concerns these technologies are creating. They will discuss emerging ISO/IEC standards, SLA frameworks and security and privacy certifications. This webcast will be of interest to managers and acquirers of cloud storage (whether internal or external), and developers of private and public cloud solutions who want to know more about security and privacy in the cloud. Topics covered will include:
  • Summary of the standards developing organization (SDO) activities:
    • Work on cloud concepts, Cloud Data Management Interface (CDMI), an SLA framework, and cloud security and privacy
  • Securing the Cloud Supply Chain:
    • Outsourcing and cloud security, Cloud Certifications (FedRAMP, CSA STAR)
  • Emerging & Related Technologies:
    • Virtualization/Containers, Federation, Big Data/Analytics in the Cloud, IoT and the Cloud
Register today. We hope to see you on July 20th where Eric, Mark and I will be ready to answer your cloud security questions.


What if Programming and Networking Had a Storage Baby? Say What?

J Metz

May 18, 2017

The colorful and popular “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series marches on! In this 6th installment, Part Vermillion – What if Programming and Networking Had a Storage Baby, we look into some of the nitty-gritty storage details that are often taken for granted. When looking at data through the lens of an application, host, or operating system, it's easy to forget that there are several layers of abstraction underneath each one before the actual placement of data occurs. In this webcast we are going to scratch beyond that first layer to understand some of the basic taxonomies of these layers. We will show you more about the following:
  • Storage APIs and POSIX
  • Block, File, and Object storage
  • Byte Addressable and Logical Block Addressing
  • Log Structures and Journaling Systems
It’s an ambitious project, but these terms and concepts are at the heart of where compute, networking, and storage intersect. Having a good grasp of these concepts ties in with which type of storage networking to use, and how data is actually stored behind the scenes. Register today to join us on July 6th for this session. You can ask all the questions that, until now, you’ve been too proud to ask, and we promise not to show you any baby pictures!


Registration Now Open for Storage Developer Conference India - May 25-26 in Bangalore

khauser

May 15, 2017

For the third consecutive year, SNIA will present its highly successful Storage Developer Conference (SDC) in Bangalore, India, on May 25-26, 2017 at the My Fortune Hotel. The 2017 agenda, developed under the supervision of the SNIA India agenda committee, leads off with a keynote by Indian Institute of Science Professor P. Vijay Kumar on Codes for Big Data: Error-Correction for Distributed Storage, followed by Amar Tunballi, Engineering Manager at Red Hat, speaking on Software Defined Storage and Why It Will Continue To Be Relevant. Thursday keynotes will feature Anand Ghatnekar, Country Manager, Cloud Systems, Seagate, on The Power of Edge, and Chirag Shah, IT Transformation Consultant, Dell EMC, on New Age Data Centre. Breakout tracks will feature sessions on storage ecosystem management with SNIA Swordfish™, hybrid clouds, object storage, open storage, NVMe over Fabrics, and performance optimization. Also featured at the conference are a Cloud Interoperability Plugfest and the opportunity to network with partners Red Hat, Dell EMC, IBM, Mindteck, NetApp, and Tata Consultancy Services. Check out the full schedule at https://www.snia.org/events/sdc-india/2017-sdc-india-agenda. According to Paul Talbut, SNIA India Executive Director, SDC India is “ideal for storage software and hardware developers, storage product and solution architects, product line CTOs, storage product customer support engineers, and in-house IT development staff.” Talbut emphasized that SDC India “is the place to learn from the experts, and get insights on the tools, technologies, and tactics needed in data storage management and the cloud.” Learn more: conference registration and full details on SDC India can be found at https://www.snia.org/sdcindia.


Too Proud to Ask Webcast Series Continues – Getting from Here to There Pod

Fred Zhang

May 4, 2017

As part of the SNIA Ethernet Storage Forum's successful "Everything You Wanted To Know About Storage But Were Too Proud To Ask" series, we've discussed numerous topics about storage devices, protocols, and networks. As we examine some of these topics further, we begin to tease out some subtle nuances; subtle, yet important nevertheless. On May 9th we'll take on the terms and concepts that affect Storage Architectures as a whole in "Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Sepia – Getting from Here to There." In particular, we'll be looking at those aspects that can help or hinder storage systems inside the network:
  • Encapsulation vs. Tunneling
  • IOPS vs. Latency vs. Jitter
  • Quality of Service (QoS)
Each of these topics has a profound impact on storage designs and performance, but they are often misunderstood. We're going to help you become clear on all of these very important storage concepts so that you can grok storage just a little bit more. We hope you will join us on May 9th at 10:00 am PT and that you won't be "too proud" to ask our experts your questions! Register today. Think there may be other storage topics you feel you should understand better? Check out the rest of the webcasts in this series here. Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.


SMB3 – These Questions Rock!

John Kim

Apr 24, 2017

Earlier this month, the SNIA Ethernet Storage Forum hosted a live webcast on Server Message Block (SMB), "Rockin' and Rollin' with SMB3." Presenting was Ned Pyle, Microsoft SMB Program Manager. If you missed the live event, I encourage you to watch it on-demand. We had a lot of questions from the big audience this event drew, so as promised, here are answers to them all.

Q. Other than that audit setup, is there a way to determine, via the OS, which SMB version is in use?
A. No. Network captures alone will tell you, but Windows doesn't track this explicitly, other than the SMB1 auditing we added specifically for the task of identifying whether SMB1 can be removed.

Q. Old Linux + NetApp 7-Mode + 2003 Server = Stuck with SMB1.0?
A. You have to ask NetApp. :)

Q. SMB 3.1.1 over Ethernet... can you discuss/compare with SMB 3.1.1 over InfiniBand?
A. If the question is "what's better, InfiniBand or Ethernet," my answer is always: it depends. I really don't want to get into a competitive conversation under the guise of SNIA. I simply recommend looking at the vendor stories and making an informed decision. Overall, Ethernet/TCP/IP versions like RoCE and iWARP configurations are generally less expensive than InfiniBand ones. They all have tremendous performance. They all have their various ups and downs.

Q. Do you have statistics regarding SMB Direct adoption?
A. It's tricky, as our telemetry for server usage is quite inaccurate due to firewall rules preventing servers from reaching the Internet. I can say indirectly that we know of thousands of customer deployments.

Q. What's the name of the I/O application?
A. DiskSpd.

Q. I don't believe your I/O data tests; wouldn't you need to trunk 17 10-Gigabit network cards to achieve 168-gigabit I/O capability?
A. This was a misunderstanding; you thought I said 10Gb, but it was 100Gb. We used 100Gb RDMA NICs in this demo with RoCEv2. The bottleneck was the storage at that point; the network had plenty of bandwidth left over.

Q. These are great, but how many of these new features will end up locking out FOSS/GPL implementations of SMB such as Samba?
A. Absolutely not! We work with the Samba team and Linux to ensure that SMB can be broadly deployed with all of its capabilities inside open source software.

Q. NetApp supports CA shares (which use transparent failover) in two use cases: SQL over SMB and Hyper-V over SMB3.
A. This sounds like someone from NetApp stating a fact, so I will simply say "good!" :)

Q. Can you please post links to the tools mentioned in this presentation, and I/O tests? Is there a comparison using Iometer?
A. Here you go:
  • https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
  • https://github.com/Microsoft/diskspd
  • https://github.com/Microsoft/diskspd/tree/master/Frameworks/VMFleet
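If you want to script the kind of queue-depth or block-size sweeps these tools are built for, here is a small sketch that drives DiskSpd from Python. The flags used (-b block size, -d duration, -o outstanding I/Os, -t threads, -r random access, -w write percentage, -L latency statistics, -c test-file size) come from DiskSpd's documented options, but verify them against your installed version; the path and sizes are placeholders:

    import subprocess

    # Sweep queue depth (-o) for a 4K random-read test against a test file.
    # Assumes diskspd.exe is on PATH and C:\temp exists; adjust as needed.
    for qd in (1, 4, 16, 64):
        cmd = [
            "diskspd.exe",
            "-b4K",           # 4 KiB I/O size
            "-d30",           # run each test for 30 seconds
            f"-o{qd}",        # outstanding I/Os (queue depth) per thread
            "-t1",            # one worker thread
            "-r",             # random offsets
            "-w0",            # 100% reads
            "-L",             # collect latency statistics
            "-c1G",           # create a 1 GiB test file if absent
            r"C:\temp\testfile.dat",
        ]
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)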
Q. You are forced to use SMB1 because of the Windows 2003 issue?
A. Windows Server 2003 and XP (and older, like Windows 2000) all use SMB1. If they are still around, you will need to leave SMB1 enabled on any machines talking to them.

Q. When will Microsoft officially drop support for SMB1?
A. Overall, for the protocol, there is no timeline. It is deprecated, however, so no further work will be done on SMB1 other than critical security patches. SMB1 will start being removed *by default* in a coming release of Windows Server and Windows 10 client. This doesn't mean totally removed forever, but instead "missing by default," where you must directly opt in to adding it back. It will be done on a per-SKU basis, so enterprises are likely to see it first, since they are better equipped to understand it and less likely to need SMB1.

Q. Is there a way to change block size in SMB3?
A. In the SMB2_READ processing section 3.3.5.12 (https://msdn.microsoft.com/en-us/library/cc246729.aspx): The server SHOULD<296> fail the request with STATUS_INVALID_PARAMETER if the Length field is greater than Connection.MaxReadSize. If Connection.SupportsMultiCredit is TRUE, the server MUST validate CreditCharge based on Length, as specified in section 3.3.5.2.5. If the validation fails, it MUST fail the read request with STATUS_INVALID_PARAMETER. There is similar text for SMB2_WRITE in section 3.3.5.13 (https://msdn.microsoft.com/en-us/library/cc246730.aspx). Then, off to SMB2_NEGOTIATE in section 3.3.5.4 (https://msdn.microsoft.com/en-us/library/cc246768.aspx) to discover:
  • MaxReadSize is set to the maximum size, in bytes, of the Length in an SMB2 READ Request (section 2.2.19) that the server will accept on the transport that established this connection. This value SHOULD<231> be greater than or equal to 65536. Connection.MaxReadSize MUST be set to MaxReadSize.
  • MaxWriteSize is set to the maximum size, in bytes, of the Length in an SMB2 WRITE Request (section 2.2.21) that the server will accept on the transport that established this connection. This value SHOULD<232> be greater than or equal to 65536. Connection.MaxWriteSize MUST be set to MaxWriteSize.
<231> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxReadSize to 65536. Otherwise, MaxReadSize is set based on the following table.
Windows version \ Connection.Dialect                     | 2.0.2 | All other SMB2 dialects
Windows Vista SP1 \ Windows Server 2008                  | 65536 | N/A
Windows 7 \ Windows Server 2008 R2                       | 65536 | 1048576
Windows 8 \ Windows Server 2012 (without [MSKB-2934016]) | 65536 | 1048576
All other SMB2 servers                                   | 65536 | 8388608
<232> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxWriteSize to 65536. Otherwise, MaxWriteSize is set based on the following table.
Windows version \ Connection.Dialect                     | 2.0.2 | All other SMB2 dialects
Windows Vista SP1 \ Windows Server 2008                  | 65536 | N/A
Windows 7 \ Windows Server 2008 R2                       | 65536 | 1048576
Windows 8 \ Windows Server 2012 (without [MSKB-2934016]) | 65536 | 1048576
All other SMB2 servers                                   | 65536 | 8388608
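If you want to check what limits a particular server actually offered, the MaxReadSize and MaxWriteSize fields sit at fixed offsets in the SMB2 NEGOTIATE Response ([MS-SMB2] section 2.2.4). Here is a minimal parsing sketch in Python, assuming you have the raw response body from a packet capture; this is an illustration, not a full SMB client:

    import struct

    def negotiate_limits(body: bytes):
        # Fixed-size portion of the SMB2 NEGOTIATE Response ([MS-SMB2] 2.2.4),
        # all fields little-endian:
        #   0 StructureSize(2)   2 SecurityMode(2)    4 DialectRevision(2)
        #   6 NegotiateContextCount(2)                8 ServerGuid(16)
        #  24 Capabilities(4)   28 MaxTransactSize(4)
        #  32 MaxReadSize(4)    36 MaxWriteSize(4)    ...
        (dialect,) = struct.unpack_from("<H", body, 4)
        max_transact, max_read, max_write = struct.unpack_from("<III", body, 28)
        return dialect, max_transact, max_read, max_write

    # Usage with a captured response body (a bytes object):
    # dialect, max_transact, max_read, max_write = negotiate_limits(captured)
    # print(f"dialect 0x{dialect:04x}: read<={max_read}, write<={max_write}")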
Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.


Buffers, Queues and Caches Explained

John Kim

Apr 19, 2017

Finely tuning buffers, queues, and caches can make your storage system hum. And that's exactly what we discussed in our recent SNIA Ethernet Storage Forum webcast, "Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod." If you missed it, it's now available on-demand. In this blog, you'll find detailed answers from our panel of experts to all the great questions we received during the live event. I also encourage you to check out the other on-demand webcasts in this "Too Proud To Ask" series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

Q. Question on cache: what would be the right size of cache at each point (clients / front-end connect / storage controller / back-end connect / physical storage)?
A. Great question! The main consideration for cache sizing at any point is the workload. If the workload is conducive to cache benefits, then the more cache the merrier! However, when the workload is not conducive to cache, adding more cache capacity won't be beneficial. For example, if the workload is 100% sequential reads of small 4K IOs, having the data pre-loaded into cache is going to be extremely helpful, and increasing the size of such a cache at the end-point will be good. If the workload is random, and the IO size is changing, pre-fetching data into cache may be not a good idea. Similarly, with write cache, the benefit is realized two-fold: first, when the write is stored in cache and ack'ed back to the host (such a write is typically called "dirty," because it hasn't been flushed back to the disk), and second, when the dirty write is overwritten by the host before it is flushed. Any other combination of workloads and IO will only get partial benefit from the cache. Sizing cache is a very difficult exercise and there are no universal answers. Every implementation has its own pluses and minuses.

Q. Isn't a higher queue depth increasing latency as well, so applications would run slower as they are waiting longer for IO to complete?
A. The answer to this is very dependent on the environment. In general, having more outstanding operations will increase the load on the interconnects and storage media, which will result in the per-IO latency increasing. The alternative is having a small queue depth, which may produce consistently lower per-IO latency at the expense of less throughput and IOPS. There are numerous techniques for dealing with mixed storage traffic, low latency and high throughput, such as multi-queues, out-of-order completions, immediate and delayed data transfers in-line, ready to transfer, and policies. The NVM media latency roadmap is also helping with these types of latency vs. throughput decisions by enabling devices that achieve full throughput at very low queue depths.
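The throughput side of that trade-off follows Little's Law: sustained IOPS equals outstanding I/Os (queue depth) divided by average latency. A back-of-the-envelope Python sketch; the linear latency penalty is an invented toy model, and real devices respond less predictably:

    # Little's Law: IOPS = queue_depth / latency (latency in seconds).
    # Toy device model: a fixed service time plus a queuing delay that
    # grows with queue depth. Numbers are illustrative only.
    BASE_LATENCY_S = 0.0001    # 100 microseconds at queue depth 1
    PER_QD_PENALTY = 0.00002   # extra delay per additional outstanding I/O

    for qd in (1, 2, 4, 8, 16, 32, 64):
        latency_s = BASE_LATENCY_S + PER_QD_PENALTY * (qd - 1)
        iops = qd / latency_s
        print(f"QD={qd:3d}  latency={latency_s * 1e6:7.1f} us  IOPS={iops:10.0f}")

Running it shows IOPS climbing as queue depth grows while per-I/O latency climbs with it, which is exactly the tension described in the answer above.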
Q. Does the SCSI protocol have a max queue depth of 32?
A. No. The SCSI Architecture Model allows for up to 64 bits for the command identifier field, and each of the SCSI transports (iSCSI, SAS, ...) defines a maximum within that range. There may be implementation-dependent SCSI endpoints that define smaller ranges.

Q. How would a distributed software-defined storage technology deal with queue depth, and how can this be advantageous or not?
A. Interesting question. Distributed software-defined storage is by definition made up of multiple autonomous layers of software components orchestrated to provide stable storage. These types of systems will have many outstanding operations (queue depth) at multiple stages and layers. It's also not uncommon to see SDS file systems front-ended with block-based protocols, such as iSCSI, which enable the initiators to build up large queue depths of operations.

Q. Are queue depth and buffer the same?
A. No. Queue refers to command and response queues; buffers refer to in-flight data buffers. Command and response queues often contain pointers to these buffers embedded in the read or write commands.

Q. Are caches and buffers made of the same silicon that makes up SSD disks? Which one is faster?
A. As a general idea, yes: SSDs, RAM, caches, and buffers are all made from silicon. If we dig a little deeper, device caches and buffers are typically made of high-speed static random access memory (SRAM), which is faster than the slower and cheaper dynamic RAM (DRAM) used for main memory. Modern SSDs utilize an even slower memory, commonly known as Flash memory, and we differentiate that type of storage by its structure: Single-Level Cell (SLC), Multi-Level Cell (MLC), etc. There are some SSDs made out of DRAM, too, and then there are newer technologies, like NVDIMM and 3D XPoint. So, while the underlying physical material is still silicon, it's the architecture that makes all the difference.

Q. In PFC, if there are pending items in P1, can P2 or P3, etc., go ahead?
A. Yes. Priority Flow Control (PFC, also called Per Priority Pause, though rarely) is designed specifically to only pause traffic on one priority, allowing the remaining priority Classes of Service to work according to their configurations. So, for example, if PFC were to pause Priority Queue 1, and Priority Queue 3 also had a "no-drop" configuration but was not having any issues, PFC on Queue 1 would be triggered but PFC on Queue 3 would not. In reality, having more than one no-drop lane on a link is very, very rare, but it does illustrate that PFC operates on a per-priority basis, not on the whole link.

Q. Do all Ethernet-based NVMe-oF (NVMe over Fabrics) implementations require some form of Data Center Bridging (DCB)? Or are there versions of Ethernet-based NVMe-oF (RoCE & iWARP) that run over standard Ethernet without needing DCB?
A. Yes, both iWARP and RoCE can be run without DCB. To maintain peak performance, either DCB or other flow control mechanisms like ECN are recommended.

Q. Do server devices automatically honor the pause frame, or does it require configuration?
A. I am assuming "server devices" refers to Ethernet ports on a server. It depends on the default settings of the NIC or LOM, or those loaded by the driver during initialization. Generally speaking, NIC devices that support PFC also support DCBX (Data Center Bridging Exchange). DCBX is a protocol that allows an end device, like a NIC, to get its proper configuration settings from the switch. That means that in an environment where PFC needs to be assigned to a specific Class of Service (CoS), the switch will send the NIC the proper settings during the setup configuration.

Q. Is it mandatory for all devices in the network, host, and storage to have the same speed ports?
A. No.

Q. What are the theoretical devices for modeling and analyzing cache, buffer, or queue behaviors?
A. Computers with software :)

Q. What if I have really large-sized writes and they fill up the cache quickly? Is there a way to bypass the large-sized writes?
A. The time limit of the presentation constrained the amount of material we were able to share. One of the subjects we didn't talk about was the cache software algorithm. Most storage vendors manage the cache by not letting extremely large IOs be cached. Back in the spinning-storage era, an IO of 2MB would typically be considered too large to cache and would be sent directly to disk.
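To illustrate the "don't cache extremely large IOs" policy in that last answer, here is a toy simulation: an LRU block cache that bypasses any I/O larger than a threshold, so one huge transfer cannot evict the hot small blocks. Every parameter here is invented for illustration:

    from collections import OrderedDict

    class BypassLRUCache:
        # Toy LRU block cache that refuses to cache oversized I/Os.
        def __init__(self, capacity_blocks, bypass_threshold_blocks):
            self.capacity = capacity_blocks
            self.bypass_threshold = bypass_threshold_blocks
            self.lru = OrderedDict()   # block number -> None
            self.hits = self.misses = 0

        def access(self, start_block, length_blocks):
            if length_blocks > self.bypass_threshold:
                self.misses += length_blocks  # large I/O goes straight to disk
                return
            for block in range(start_block, start_block + length_blocks):
                if block in self.lru:
                    self.hits += 1
                    self.lru.move_to_end(block)        # refresh recency
                else:
                    self.misses += 1
                    self.lru[block] = None
                    if len(self.lru) > self.capacity:
                        self.lru.popitem(last=False)   # evict coldest block

    cache = BypassLRUCache(capacity_blocks=256, bypass_threshold_blocks=64)
    for _ in range(100):                          # hot, re-referenced small reads
        cache.access(start_block=0, length_blocks=8)
    cache.access(start_block=10000, length_blocks=4096)  # huge I/O: bypassed
    print(f"hits={cache.hits} misses={cache.misses}")    # hot set stays cached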
Q. What would be the use of cache in all-flash storage, since flash is the highest-performance disk?
A. See the answer to the earlier question, "Are caches and buffers made of the same silicon that makes up SSD disks? Which one is faster?" Hardware caches and buffers are typically made of the fastest memory; then comes RAM, and last are the SSDs, aka flash disks. Therefore, storing data on a faster layer is still beneficial to performance.

Q. Does the LUN queue depth include the queue depth discussed here?
A. Yes. SCSI LUN queue depth enables the initiator(s) to have multiple outstanding I/O operations in flight.

Q. Will you use a queuing algorithm to manage the IO queue? If your answer is yes, which algorithm will you use?
A. There are several storage protocols that define mechanisms for a target to dynamically adjust the queue depth available to the initiator through various forms of credit exchange. These types of mechanisms enable the target to implement multi-initiator load balancing across targets.

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.
