
Solid State Storage Highlighted at Storage Visions 2014

Marty Foltyn

Jan 10, 2014


The SSSI began 2014 at the Storage Visions Conference. With a theme of Fast is Beautiful, the 2014 conference explored the latest storage technologies and methods to improve performance and efficiency, lower costs, and improve reliability.

Solid state storage was a key component of exhibits and technology sessions.

SSSI showcased our latest white papers on solid state storage. See them here. SSSI members Fastor Systems and Greenliant Systems demonstrated innovative SSD solutions.

At Storage Visions, Paul Wassenberg, SNIA SSSI Chair, discussed the findings of a new SSSI survey rating SSD features. An unexpected finding was the need for more education on issues such as data encryption and power management. SSSI will work to provide more information on these issues to help users make informed choices about SSDs. Join our dialog on related topics via a dedicated LinkedIn group.

SNIA’s continuing work in NVM programming and PCIe SSDs created a buzz. Jim Pappas of Intel moderated a panel on Bringing Non-Volatile Memory to Tomorrow’s Storage Architectures; Adrian Proctor of Viking Technology gave an NVDIMM overview on the Storage Developments Drive New Storage Options panel; and Eden Kim of Calypso Systems spoke on PCIe SSD activities on the Future Content – What’s Ahead for Content Storage panel. Explore these areas further at the upcoming SNIA Annual Members Meeting, January 27-30, 2014, at the Sainte Claire Hotel in San Jose, CA, where a Storage Industry Summit focused on NVM will take place on Tuesday, January 28. Registration is complimentary.


Fibre Channel over Ethernet (FCoE): Hype vs. Reality


It’s been a bit of a bumpy ride for FCoE, which started out with more promise than it was able to deliver. In theory, the benefits of a single converged LAN/SAN network are fairly easy to see. The problem was, as is often the case with new technology, that most of the theoretical benefit was not available in the initial product releases. The idea that storage traffic was no longer confined to expensive SANs, but instead could run on more commoditized and easier-to-administer IP equipment, was intriguing. However, the new 10 Gbps Enhanced Ethernet switches were not exactly inexpensive, few products supported FCoE initially, and those that did support it did not play nicely with products from other vendors.

Keeping FCoE “On the Single-Hop”?

The adoption of FCoE to date has been almost exclusively “single-hop,” meaning that FCoE is deployed only between the server and the top-of-rack switch. From there, traffic continues to be broken out one way for IP and another way for FC. Even so, this limited deployment makes sense: by consolidating network adapters and cables, it adds value on the server access side.

A significant portion of FCoE switch ports comes from Cisco’s UCS platform, which runs FCoE inside the chassis. In terms of a complete end-to-end FCoE solution, there continues to be very little multi-hop FCoE deployed, and few FCoE ports shipping on storage arrays.

In addition, FCoE connections are more prevalent on blade servers than on stand-alone servers, for two main reasons.

  • First, blades are used more in a virtualized environment where different types of traffic can travel on the same link.
  • Second, the migration to 10 Gbps has been very slow so far on stand-alone servers; about 80% of these servers are actually still connected with 1 Gbps, which cannot support FCoE.

What portion of FCoE-enabled server ports are actually running storage traffic?

FCoE-enabled ports comprise about a third of total 10 Gbps controller and adapter ports shipped on servers. However, we would like to draw readers’ attention to the wide gap between the portion of 10 Gbps ports that is FCoE-enabled and the portion that is actually running storage traffic. We currently believe less than a third of the FCoE-enabled ports are being used to carry storage traffic. That’s because the FCoE port, in many cases, is provided by default with the server. That’s the case with HP blade servers as well as Cisco’s UCS servers, which together account for around 80% of the FCoE-enabled ports. We believe, however, that when users buy separate adapters they will most likely use those adapters to run storage traffic, but they will need to pay an additional premium of about 50% to 100% for the FCoE license.
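To put that premium in concrete terms, here is a quick back-of-the-envelope calculation in Python. The adapter price is hypothetical; only the 50%-100% premium range comes from the discussion above.

```python
# Rough cost illustration for the FCoE license premium discussed above.
# The base adapter price is hypothetical; only the 50%-100% premium range
# comes from the post.

def fcoe_adapter_cost(base_adapter_price: float, premium: float) -> float:
    """Return the total cost of a 10 GbE adapter once the FCoE license is added."""
    return base_adapter_price * (1.0 + premium)

if __name__ == "__main__":
    base = 500.00  # hypothetical list price of a plain 10 GbE adapter
    for premium in (0.50, 1.00):  # the 50% and 100% premiums cited above
        total = fcoe_adapter_cost(base, premium)
        print(f"{premium:.0%} premium: ${base:,.2f} adapter -> ${total:,.2f} with FCoE license")
```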

The Outlook

That said, whether FCoE-enabled ports are used to carry storage traffic or not, we believe they are being introduced at the expense of some FC adapters. If users deploy a server with an FCoE-enabled port, they most likely will not buy a FC adapter to carry storage traffic. Additionally, as Ethernet speeds reach 40 Gbps, the differential over FC will be too great and FC will be less likely to keep pace.

About the Authors

Casey Quillin is a Senior Analyst, Storage Area Network & Data Center Appliance Market Research with the Dell’Oro Group

Sameh Boujelbene is a Senior Analyst, Server and Controller & Adapter Market Research with the Dell’Oro Group


Your Chance to Learn about PCIe Storage Protocol Analysis Test and Connectivity Tools, and to Hear the Latest on NVDIMM On Demand

Marty Foltyn

Dec 6, 2013


Join the SNIA Solid State Storage Initiative for an Open SSSI PCIe SSD Committee call

Who:  John Weidemier, Teledyne LeCroy

What:  PCIe storage protocol analysis test and connectivity tools

When:  Monday December 9, 2013 at  4:00 PM PST

Where:  via WebEx at http://snia.webex.com (Meeting Number: 792 152 928, password: sssipcie) and by teleconference: 1-866-439-4480, Passcode: 57236696#

Why:  This OPEN call is an invitation for non-SSSI members to learn more about the exciting work of the PCIe Solid State Drive Committee and the SSSI’s 2014 activities.

Also, the great NVDIMM talk given on 12/5/13 as part of the BrightTALK Enterprise Storage Summit is now available for on-demand viewing! The session was rated 4.8 out of 5.0 for quality technical content, featuring information on a wide range of vendors who are creating NVDIMM products and solutions.

Click on this link: https://www.brighttalk.com/webcast/663/95329.


SNIA’s Events Strategy Today and Tomorrow

khauser

Dec 5, 2013

David Dale, SNIA Chairman

Last month Computerworld/IDG and the SNIA posted a notice to the SNW website stating that they have decided to conclude the production of SNW. The contract was expiring and both parties declined to renew. The IT industry has changed significantly in the 15 years since SNW was first launched, and both parties felt that their individual interests would be best served by pursuing separate events strategies.

For the SNIA, events are a strategically important vehicle for fulfilling its mission of developing standards, maintaining an active ecosystem of storage industry experts, and providing vendor-neutral educational materials to enable IT professionals to deal with and derive value from constant technology change. To address the first two categories, SNIA has a strong track record of producing Technical Symposia throughout the year, and the successful Storage Developer Conference in September.

To address the third category, IT professionals, SNIA has announced a new event to be held in Santa Clara, CA, from April 22-24: the Data Storage Innovation Conference. This event is targeted at IT decision-makers, technology implementers, and those expected to influence, implement and support data storage innovation as actual production solutions. See the press release and call for presentations for more information. We are excited to embark on developing this contemporary new event into an industry mainstay in the coming years.

Outside of the USA, events are also critically important vehicles for the autonomous SNIA Regional Affiliates to fulfill their mission. The audience there is typically weighted more towards business executives and IT managers, and over the years their events have evolved to incorporate adjacent technology areas, new developments and regional requirements. As an example of this evolution, SNIA Europe’s events partner, Angel Business Communications, recently announced that its very successful October event, SNW Europe/Datacenter Technologies/Virtualization World, will be known simply as Powering the Cloud starting in 2014, in order to unite the conference program and be more clearly relevant to today’s IT industry. See the press release for more details. Other Regional Affiliates have followed a similar path with events such as the Implementing Information Infrastructure Summit and the Information Infrastructure Conference, both tailored to meet regional needs.

The bottom line is that the SNIA is absolutely committed to a global events strategy to enable it to carry out its mission. We are excited about the evolution of our various events to meet the changing needs of the market and to continue delivering unique vendor-neutral content. IT professionals, partners, vendors and their customers around the globe can continue to rely on SNIA events to inform them about new technologies and developments and help them navigate the rapidly changing world of IT.


Learn about NVDIMM in BrightTalk’s Enterprise Storage Summit

Marty Foltyn

Dec 3, 2013


If 2013 was the year of software-defined everything, of everything-as-a-service, and of big data, what’s on the horizon for storage in 2014? What do we still need to know about the innovations of this year? What hard realities have been missing from the hype? In the Enterprise Storage Summit (https://www.brighttalk.com/summit/2271), a bi-annual event from BrightTALK, top thought leaders from around the globe gather to share their insights into one of the most difficult areas of IT infrastructure and shed light on some of the most crucial topics facing enterprises today, including backup/recovery, hardware innovations, the continued transition to cloud storage, and the implications of big data.

On December 5 at 9:00 am Pacific/12:00 noon Eastern, learn how Non-Volatile DIMMs, or NVDIMMs, provide a persistent memory solution with the endurance and performance of DRAM coupled with the non-volatility of Flash. This webinar, presented by Jeff Chang of AgigaTech, will provide a general overview of this emerging technology and why the industry is starting to take notice.

You will learn what an NVDIMM is, how it works, where it fits and why every system architect should consider them for their next generation enterprise server and storage designs.
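For readers new to the technology, the sketch below is a purely conceptual Python model of the idea described above: normal reads and writes run at DRAM speed, and the DRAM contents are preserved to flash only when power is lost. The class and method names are invented for illustration; real NVDIMMs perform the save/restore in hardware using backup power, not in software.

```python
# Conceptual model of an NVDIMM: DRAM-speed access, flash-backed persistence.
# This illustrates the idea only; the names and mechanism here are hypothetical.

class ToyNVDIMM:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # working memory: fast but volatile
        self.flash = bytes(size)      # backing store: slower but persistent

    def write(self, offset: int, data: bytes) -> None:
        self.dram[offset:offset + len(data)] = data  # normal writes touch DRAM only

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.dram[offset:offset + length])

    def on_power_fail(self) -> None:
        self.flash = bytes(self.dram)  # backup power flushes DRAM contents to flash

    def on_power_restore(self) -> None:
        self.dram[:] = self.flash      # contents reappear exactly as they were

if __name__ == "__main__":
    nv = ToyNVDIMM(64)
    nv.write(0, b"journal entry #42")
    nv.on_power_fail()      # simulate losing power
    nv.on_power_restore()   # ...and coming back up
    print(nv.read(0, 17))   # b'journal entry #42'
```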

Join this summit with other smart people from around the world to participate in live events and ask your questions. Register at https://www.brighttalk.com/webcast/663/95329 to attend live or be notified of on-demand viewing.

 


SMB 3.0 – Your Questions Asked and Answered

Alex McDonald

Nov 19, 2013


Last week we had a large and highly-engaged audience at our live Webcast: “SMB 3.0 – New Opportunities for Windows Environments.” We ran out of time answering all the questions during our event so, as promised, here is a recap of all the questions and answers to attendees’ questions. The Webcast is now available on demand at http://snia.org/forums/esf/knowledge/webcasts. You can also download a copy of the presentation slides there.

Q. Have you tested SMB Direct over 40Gb Ethernet or using RDMA?

A. SMB Direct has been demonstrated using 40Gb Ethernet using TCP or RDMA and Infiniband using RDMA.

Q. 100 iops, really?

A. If you look at the bottom right of slide 27 (Performance Test Results) you will see that the vertical axis is IOPs/sec (Normalized). This is a common method for comparing alternative storage access methods on the same storage server. I think we could have done a better job in making this clear by labeling the vertical axis as “IOPs (Normalized).”
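As a side note on what “normalized” means there: each protocol’s raw result is divided by a chosen baseline, so the chart shows relative rather than absolute performance. A minimal sketch of that calculation, using made-up numbers rather than the webcast’s data:

```python
# Normalizing IOPS results against a baseline protocol, as on slide 27.
# The raw numbers below are invented for illustration; only the idea of
# normalization comes from the webcast.

raw_iops = {"FC": 210_000, "iSCSI": 195_000, "SMB 3": 205_000}  # hypothetical

baseline = raw_iops["FC"]  # pick one protocol as the 1.0 reference
normalized = {proto: iops / baseline for proto, iops in raw_iops.items()}

for proto, value in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{proto:>6}: {value:.2f}x of baseline")
```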

Q. How does SMB 3.0 weigh against NFS-4.1 (with pNFS)?

A. That’s a deep question that probably deserves a webcast of its own. SMB 3 doesn’t have anything like pNFS. However many Windows workloads don’t need the sophisticated distributed namespace that pNFS provides. If they do, the namespace is stitched together on the client using mounts and DFS-N.

Q. In the iSCSI ODX case, how does server1 (source) know about the filesystem structure being stored on the LUN (server2) i.e. how does it know how to send the writes over to the LUN?

A. The SMB server (source) does not care about the filesystem structure on the LUN (destination). The token mechanism only loosely couples the two systems. They must agree that the client has permission to do the copy and then they perform the actual copy of a set of blocks. Metadata for the client’s file system representing the copied file on the LUN is part of the client workflow. Client drag/drops file from share to mounted LUN. Client subsystem determines that ODX is available. Client modifies file system metadata on the LUN as part of the copy operation including block maps. ODX is invoked and the servers are just moving blocks.
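The workflow in that answer amounts to a token exchange: the client obtains a token from the source for a set of blocks, hands it to the destination, and the servers move the data between themselves. The toy Python model below illustrates only that handshake; it is not the actual Windows ODX interface, and all class and method names in it are invented.

```python
# Toy model of ODX-style offloaded copy: the client never moves data itself,
# it only brokers an opaque token between source and destination servers.
# All names here are hypothetical.

import secrets

class SourceServer:
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks
        self._tokens: dict[str, list[int]] = {}

    def offload_read(self, block_ids: list[int]) -> str:
        """Validate access and hand back an opaque token for the block range."""
        token = secrets.token_hex(8)
        self._tokens[token] = block_ids
        return token

    def fetch(self, token: str) -> list[bytes]:
        """Called by the destination server, not by the client."""
        return [self.blocks[b] for b in self._tokens.pop(token)]

class DestinationServer:
    def __init__(self):
        self.blocks: list[bytes] = []

    def offload_write(self, source: SourceServer, token: str) -> None:
        self.blocks.extend(source.fetch(token))  # bulk data moves server-to-server

if __name__ == "__main__":
    src = SourceServer({0: b"AAAA", 1: b"BBBB", 2: b"CCCC"})
    dst = DestinationServer()
    token = src.offload_read([0, 2])   # client: "I want blocks 0 and 2"
    dst.offload_write(src, token)      # client: "destination, go get them"
    print(dst.blocks)                  # [b'AAAA', b'CCCC']
```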

Q. Can ODX copies be within the same share or only between?

A. There is no restriction to ODX in this respect. The resource and destination of the copy can be on same shares, different shares, or even completely different protocols as illustrated in the presentation.

Q. Does SMB 3 provide API for integration with storage vendor snapshot other MS VSS?

A. Each storage vendor has to support Microsoft Remote VSS protocol, which is part of SMB 3.0 protocol specification. In Windows 2012 or Windows 8 the VSS APIs were extended to support UNC share path.

Q. How does SMB 3 compare to iSCSI rather than FC?

A. Please examine slide 27, which compares SMB 3, FC and iSCSI on the same storage server configuration.

Q. I have a question between SMB and CIFS. I know both are the protocols used for sharing. But why is CIFS adopted by most of the storage vendors? We are using CIFS shares on our NetApps, and I have seen that most of the other storage vendors are also using CIFS on their NAS devices.

A. There has been confusion between the terms “SMB” and “CIFS” ever since CIFS was introduced in the 90s. Fundamentally, the protocol that manages the data transfer between client and server is SMB. Always has been. IMO CIFS was a marketing term created in response to Sun’s WebNFS. CIFS became popularized with most SMB server vendors calling their product a CIFS server. Usage is slowly changing, but if you have a CIFS server it talks SMB.

Q. What is required on the client? Is this a driver with multi-path capability? Is this at the file system level in the client? What is needed in transport layer for the failover?

A. No special software or driver is required on the client side as long as it is running Windows 8 and later operating environment.

Q. Are all these new features cross-platform or is it something only supported by Windows?

A. SMB 3 implementations by different storage vendors will have some set of these features.

Q. Are virtual servers (cloud based) vs. non-virtual transition speeds greatly different?

A. The speed of a transition, i.e. failover is dependent on two steps. The first is the time needed to detect the failure and the second is the time needed to recover from that failure. While both a virtual and a physical server support transition the speed can significantly vary due to different network configurations. See more with next question.

Q. Is there latency as it fails over?

A. Traditionally, SMB timeouts were associated with lower-level, i.e. TCP, timeouts. Client behavior has varied over the years, but a rule of thumb was detection of a failure in 45 seconds. This error would be passed up the stack to the user/application. With SMB 3 there is a new protocol called SMB Witness. A scale-out SMB server will include nodes providing SMB shares as well as nodes providing the Witness service. A client connects to both SMB and Witness. If the node hosting the SMB share fails, the Witness node will notify the client, indicating the new location for the SMB share. This can significantly reduce the time needed for detection. The scale-out SMB server can implement a proprietary mechanism to quickly detect node failure and trigger a Witness notification.
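To make the timing argument concrete, here is a toy Python sketch contrasting transport-timeout detection with an explicit Witness notification. The 45-second figure is the rule of thumb quoted above; the other numbers are illustrative assumptions, and the real Witness service is a separate protocol rather than anything this simple.

```python
# Toy comparison of failover detection: waiting for a transport timeout
# versus being told by a Witness node. Numbers other than the 45s rule of
# thumb are illustrative assumptions.

TCP_TIMEOUT_S = 45.0     # classic detection time from the answer above
WITNESS_NOTIFY_S = 1.0   # assumed near-immediate notification (hypothetical)

def detect_failure(has_witness: bool) -> float:
    """Return how long the client waits before it starts reconnecting."""
    return WITNESS_NOTIFY_S if has_witness else TCP_TIMEOUT_S

def failover_time(has_witness: bool, reconnect_s: float = 2.0) -> float:
    """Total outage = time to notice the failure + time to re-establish state."""
    return detect_failure(has_witness) + reconnect_s

if __name__ == "__main__":
    print(f"Without Witness: ~{failover_time(False):.0f}s outage")
    print(f"With Witness:    ~{failover_time(True):.0f}s outage")
```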

Q. Sync or Async?

A. Whether state movement between server nodes is sync or async depends on vendor implementation. Typically all updated state needs to be committed to stable storage before returning completion to the client.

Q. How fast is this transition with passing state id’s between hosts?

A. The time taken for the transition includes the time needed to detect the failure of Client A and the time needed to re-establish things using Client B. The time taken for both is highly dependent on the nature of the clustered app as well as the supported use case.

Q. We already have FC (using VMware), why drop down to SMB?

A. If you are using VMware with FC, then moving to SMB is not an option. VMware supports the use of NFS for hypervisor storage but not SMB.

Q. What are the top applications on SMB 3.0?

A. Hyper-V, MS-SQL, IIS

Q. How prevalent is true “multiprotocol sharing” taking place with common datasets being simultaneously accessed via SMB and NFS clients?

A. True “multiprotocol sharing” i.e. simultaneous access of a file by NFS & SMB clients is extremely rare. The NFS and SMB locking models don’t lend themselves to that. Sharing of a multiprotocol directory is an important use case. Users may want access to a common area from Linux, OS X and Windows. But this is sequential access by one OS/protocol at a time not all at once.

Q. Do we know growth % split between NFS and SMB?

A. There is no explicit industry tracker for the protocol split, and probably not much point in collecting one either, as the protocols aren’t really in competition. There is affinity among applications, OSes and protocols – MS products tend toward SMB (Hyper-V, SQL Server, …), and non-Microsoft toward NFS (VMware, Oracle, …). Cloud products at the point of consumption normally use RESTful HTTP protocols.


SUSE Announces NFSv4.1 and pNFS Support

Alex McDonald

Nov 6, 2013


SUSE, founded in 1992, provides an enterprise-ready Linux distribution in the form of SLES, the SUSE Linux Enterprise Server. Late last month (October 22, 2013), SUSE announced that SLES 11 Service Pack 3 now includes the Linux client for NFSv4.1 and pNFS. This major distribution joins Red Hat’s RHEL (Red Hat Enterprise Linux) 6.4, meaning there are now enterprise-quality Linux distributions with support for files-based NFSv4.1 and pNFS.

For the adventurous, block and object pNFS support is available in the upstream kernel. Most regularly maintained distributions based on a Linux 3.1 or later kernel (if not all distributions by now; check with the supplier of your distribution if you’re unsure) should provide the files-, block- and object-capable client directly in the download.
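If you are unsure what your client has actually negotiated, one quick check on a Linux system (a sketch, assuming an NFS export is already mounted) is to look for vers=4.1 in the mount options reported in /proc/mounts. Note that this only confirms the NFSv4.1 client is in use; whether pNFS layouts are actually granted depends on the server.

```python
# Report NFS mounts and the protocol version negotiated, by parsing
# /proc/mounts on a Linux client. Seeing vers=4.1 (or later) confirms the
# NFSv4.1 client is in use; pNFS layout use is up to the server.

def nfs_mounts(path: str = "/proc/mounts"):
    with open(path) as f:
        for line in f:
            device, mountpoint, fstype, options, *_ = line.split()
            if fstype in ("nfs", "nfs4"):
                opts = dict(
                    opt.split("=", 1) if "=" in opt else (opt, True)
                    for opt in options.split(",")
                )
                yield mountpoint, opts.get("vers", "unknown")

if __name__ == "__main__":
    for mountpoint, version in nfs_mounts():
        print(f"{mountpoint}: NFS version {version}")
```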

The future of pNFS looks very exciting. We now have a fully pNFS-compliant Linux client and a number of commercial file, block and object servers. Remember that although pNFS block and object support is available upstream, these distributions currently support only the pNFS files layout. For users who do not need pNFS block or object support and who require enterprise-quality support, SUSE and Red Hat are an excellent solution.
