Take the Leap to SMI-S 1.8 v3 for Streamlined Storage Management

Diane Marsili

Dec 11, 2018

Mike Walker, former chair, SNIA SMI TWG; former IBM engineer

Whether you’re a software provider or a hardware vendor, it’s a good time to check out the latest updates to the Storage Networking Industry Association’s (SNIA’s) Storage Management Initiative Specification (SMI-S) standard. The latest version, SMI-S 1.8 v3, is now a SNIA Technical Position that meets your current needs and offers enticing new enhancements for you and your potential new customers. This version will also be sent to the International Organization for Standardization (ISO) for approval, making it a valuable asset worldwide if accepted.

“IT system administrators who demand a choice in storage vendors and infrastructure while ensuring advanced feature enablement through interoperability have long benefitted from SMI-S,” says Don Deel, chairman of the SMI Technical Work Group and SMI Governing Board. “The standard streamlines storage management functions and features into a common set of tools that address the day-to-day tasks of the IT environment.”

Since it was first defined, SMI-S has been continuously updated with new storage management functionality and is now incorporated into over 1,000 storage products. Version 1.5 of the specification received approval by ISO and the International Electrotechnical Commission (IEC) in 2015 and is designated as ISO/IEC 24775.

SMI-S 1.8 v3 represents a significant effort to update the standard. The new version includes a number of editorial changes, clarifications and corrections. It also contains functional enhancements such as new indications, methods, properties and profiles.

Users of SMI-S are urged to move directly to SMI-S 1.8 v3, since v1.6.1 was the last version that was officially tested. In preparation for submission to ISO, SMI-S 1.8 v3 has been thoroughly reviewed and a number of corrections have been incorporated.

The SMI-S specification is divided into six books, which cover the autonomous profiles and component profiles used to manage physical and virtual storage and storage area network equipment.

The main new functions in SMI-S 1.8 v3 since v1.6.1 are summarized below, followed by a short example of checking which profiles a provider supports:

  • Fabric Book (Fabric and Switch)
    • Peer zoning enhancements
    • Enhancements to port speed
  • Block Book
    • New indications for component health and space management
    • Storage Pool Diagnostics
    • New method in Block Services
    • New methods in Group Masking and Mapping
    • Enhancements to Replication Services
    • New method in Volume Composition
    • Advanced Metrics in Block Server Performance
  • Common Profiles Book
    • New profile for WBEM Server Management
    • New method in iSCSI Target Ports Profile
  • Host Book
    • New profiles for memory configuration
  • Filesystem Book
    • New indications for component health and space management
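
Because SMI-S is built on the DMTF CIM/WBEM model, a management client can check which of these profiles a given provider actually implements by enumerating its registered profiles. The sketch below is a minimal example using the open-source pywbem library; the management address, credentials, and interop namespace are placeholders to be replaced with values from your vendor's documentation.

    import pywbem

    # Placeholder connection details for an SMI-S provider's WBEM server.
    conn = pywbem.WBEMConnection(
        "https://array-mgmt.example.com:5989",   # hypothetical management address
        creds=("smisuser", "smispassword"),      # hypothetical credentials
        default_namespace="interop",             # some providers use "root/interop"
        no_verification=True,                    # lab use only; verify certificates in production
    )

    # Each CIM_RegisteredProfile instance names a profile (and version) the
    # provider claims to support, e.g. Array, Block Services, Replication Services.
    for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
        print(profile["RegisteredOrganization"],
              profile["RegisteredName"],
              profile["RegisteredVersion"])

Comparing the versions reported here against the new-function list above, and against published CTP results, is a quick way to confirm that a product exposes the capabilities you care about.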

If you would like to hear more details on the recent changes, I recently covered the topic in depth in a webcast, available as an archived version on the free BrightTALK platform here.

SNIA SMI also offers a comprehensive SMI-S Conformance Testing Program (CTP) to test adherence to the standard. This program offers independent verification of compliance that customers can view directly on the SNIA website at http://www.snia.org/ctp/. Storage buyers can use this information to make sure they are getting software that complies with the latest version of the specification and contains the latest features, such as important security functions.

Don’t delay. Update to SMI-S 1.8 v3 today. The specification can be found here. Your one-stop shop for all SMI-S information is: https://www.snia.org/smis.

Get engaged! You can ask and answer questions on the SMI-S Developers Group here.

Networking Questions for Ethernet Scale-Out Storage

Fred Zhang

Dec 7, 2018

Unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. That's why the SNIA Networking Storage Forum (NSF) hosted a live webcast "Networking Requirements for Ethernet Scale-Out Storage." Our audience had some insightful questions. As promised, our experts are answering them in this blog.

Q. How does scale-out flash storage impact Ethernet networking requirements?

A. Scale-out flash storage demands higher bandwidth and lower latency than scale-out storage using hard drives. As noted in the webcast, it's more likely to run into problems with TCP Incast and congestion, especially with older or slower switches. For this reason it's more likely than scale-out HDD storage to benefit from higher-bandwidth networks and modern datacenter Ethernet solutions such as RDMA, congestion management, and QoS features.

Q. What are your thoughts on NVMe-oF TCP/IP and availability?

A. The NVMe over TCP specification was ratified in November 2018, so it is a new standard. Some vendors already offer this as a pre-standard implementation. We expect that several of the scale-out storage vendors who support block storage will support NVMe over TCP as a front-end (client connection) protocol in the near future. It's also possible some vendors will use NVMe over TCP as a back-end (cluster) networking protocol.

Q. Which is better: RoCE or iWARP?

A. SNIA is vendor-neutral and does not directly recommend one vendor or protocol over another. Both are RDMA protocols that run on Ethernet, are supported by multiple vendors, and can be used with Ethernet-based scale-out storage. You can learn more about this topic by viewing our recent Great Storage Debate webcast "RoCE vs. iWARP" and checking out the Q&A blog from that webcast.

Q. How would you compare use of TCP/IP and Ethernet RDMA networking for scale-out storage?

A. Ethernet RDMA can improve the performance of Ethernet-based scale-out storage for the front-end (client) and/or back-end (cluster) networks. RDMA generally offers higher throughput, lower latency, and reduced CPU utilization when compared to using normal (non-RDMA) TCP/IP networking. This can lead to faster storage performance and leave more storage node CPU cycles available for running storage software. However, high-performance RDMA requires choosing network adapters that support RDMA offloads and in some cases requires modifications to the network switch configurations. Some other types of non-Ethernet storage networking also offer various levels of direct memory access or networking offloads that can provide high-performance networking for scale-out storage.

Q. How does RDMA networking enable latency reduction?

A. RDMA typically bypasses the kernel TCP/IP stack and offloads networking tasks from the CPU to the network adapter. In essence it reduces the total path length, which consequently reduces the latency. Most RDMA NICs (rNICs) perform some level of networking acceleration in an ASIC or FPGA, including retransmissions, reordering, TCP operations, flow control, and congestion management.

Q. Do all scale-out storage solutions have a separate cluster network?

A. Logically, all scale-out storage systems have a cluster network. Sometimes it runs on a physically separate network and sometimes it runs on the same network as the front-end (client) traffic. Sometimes the client and cluster networks use different networking technologies.
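
As a follow-up to the NVMe over TCP question above, the sketch below shows one way a Linux client might attach to an NVMe/TCP target by driving the standard nvme-cli utility from Python. The target address, port, and NQN are placeholders, and it assumes a kernel and nvme-cli build that already include TCP transport support, which may require a very recent stack given how new the specification is.

    import subprocess

    # Hypothetical target details; replace with values from your storage vendor.
    TARGET_ADDR = "192.0.2.10"    # portal address of the NVMe/TCP subsystem
    TARGET_PORT = "4420"          # default NVMe-oF service ID
    TARGET_NQN = "nqn.2018-11.org.example:scaleout-pool1"

    def connect_nvme_tcp():
        """Attach this host to an NVMe over TCP subsystem using nvme-cli."""
        subprocess.run(
            ["nvme", "connect",
             "--transport=tcp",
             f"--traddr={TARGET_ADDR}",
             f"--trsvcid={TARGET_PORT}",
             f"--nqn={TARGET_NQN}"],
            check=True,
        )

    if __name__ == "__main__":
        connect_nvme_tcp()
        # Attached namespaces then show up as ordinary /dev/nvmeXnY devices.
        subprocess.run(["nvme", "list"], check=True)

Once attached, the namespaces behave like local block devices, so a scale-out storage client stack can consume them without protocol-specific changes.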

Why Become a SNIA Certified Information Architect?

Diane Marsili

Nov 28, 2018

If you’re a storage professional you are likely familiar with the many certifications available to prove competency in a given technical area. Many of the certification options are offered by major IT vendors as a natural extension of their product and service offerings. In fact, if you work with any of the major players it is likely that at some point you will be required to prove your technical skills by acquiring specific credentials through certification.

The component that is missing from these product certifications is vendor neutrality. That’s where the SNIA Storage Networking Certification Program (SNCP) comes in. SNIA certifications provide storage professionals with credentials that demonstrate industry expertise with a broad, big-picture skillset that enhances individual product certifications.

Continuing our history of offering globally recognized storage certifications, SNIA is excited to announce its newest advanced storage certification: the SNIA Certified Information Architect (SCIA). The SCIA is an advanced certification that demonstrates the holder has industry-accepted knowledge of how to design, plan, and architect a storage infrastructure, covering storage transport, back-end storage targets, and best practices for an efficient total cost of ownership.

Why Should I Become SNIA Certified?

  1. Credibility

Certifications in the IT market are a validation of your skills and proficiency in a certain technology area. Both clients and prospective employers will understand that you have breadth and depth in storage technologies.

  2. Personal Marketability

There are certain certifications that will drive your career in a particular direction, may give you an edge for a new job or assignment, or even increase your salary. IT certifications make career advancement more likely.

  3. Personal and Professional Development

New technologies are constantly introduced and most IT professionals are avid consumers of technology news and updates in order to stay current with trends and directions. Certifications are a way of testing that knowledge and demonstrating expertise.

A career in data storage

Data volumes continue to increase at exponential rates and that data has to be efficiently stored, managed and protected somewhere. Even as an increasing degree of orchestration and automation is applied to solving storage problems, with demands from the business to make data more accessible to the application yet at the same time more private and secure, the fundamental knowledge of how data storage technology works remains a highly valuable asset.

The only industry association providing independent, vendor-neutral education and worldwide certification is SNIA. Gaining the SNCP and SCIA credentials forms the basis of sound, industry-backed recognition of storage technology skills. Learn how you can become SNIA certified.

Virtualization and Storage Networking Best Practices from the Experts

J Metz

Nov 26, 2018

Ever make a mistake configuring a storage array or wonder if you're maximizing the value of your virtualized environment? With all the different storage arrays and connectivity protocols available today, knowing best practices can help improve operational efficiency and ensure resilient operations. That's why the SNIA Networking Storage Forum is kicking off 2019 with a live webcast "Virtualization and Storage Networking Best Practices." In this webcast, Jason Massae from VMware and Cody Hosterman from Pure Storage will share insights and lessons learned as reported by VMware's storage global services, discussing:
  • Common mistakes when setting up storage arrays
  • Why iSCSI is the number one storage configuration problem
  • Configuring adapters for iSCSI or iSER
    • How to verify your PSP matches your array requirements (a quick spot-check is sketched below)
  • NFS best practices
  • How to maximize the value of your array and virtualization
  • Troubleshooting recommendations
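
On the PSP point above, one lightweight spot-check is to run esxcli remotely and scan its output for the Path Selection Policy assigned to each device. The sketch below is a minimal example using the paramiko SSH library; the host name and credentials are placeholders, and your array vendor's documentation remains the authority on which PSP is required.

    import paramiko

    ESXI_HOST = "esxi01.example.com"   # hypothetical ESXi host
    ESXI_USER = "root"
    ESXI_PASS = "changeme"             # prefer SSH keys in real environments

    def list_psp_assignments():
        """Print each device identifier followed by its Path Selection Policy."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(ESXI_HOST, username=ESXI_USER, password=ESXI_PASS)
        try:
            _, stdout, _ = client.exec_command("esxcli storage nmp device list")
            for line in stdout.read().decode().splitlines():
                # Device identifiers are unindented; the PSP is an indented field.
                if (line.strip() and not line.startswith(" ")) or "Path Selection Policy:" in line:
                    print(line.strip())
        finally:
            client.close()

    if __name__ == "__main__":
        list_psp_assignments()
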
Register today to join us on January 17th. Whether you've been configuring storage for VMs for years or just getting started, we think you will pick up some useful tips to optimize your storage networking infrastructure.

RDMA for Persistent Memory over Fabrics – FAQ

John Kim

Nov 14, 2018

title of post
In our most recent SNIA Networking Storage Forum (NSF) webcast, Extending RDMA for Persistent Memory over Fabrics, our expert speakers, Tony Hurson and Rob Davis, outlined extensions to RDMA protocols that confirm persistence and can additionally order successive writes to different memories within the target system. Hundreds of people have seen the webcast and have given it a 4.8 rating on a scale of 1-5! If you missed it, you can watch it on-demand at your convenience. The webcast slides are also available for download. We had several interesting questions during the live event. Here are answers from our presenters:

Q. For the RDMA Message Extensions, does the client have to qualify a WRITE completion with only Atomic Write Response and not with Commit Response?

A. If an Atomic Write must be confirmed persistent, it must be followed by an additional Commit Request. Built-in confirmation of persistence was dropped from the Atomic Request because it adds latency and is not needed for some application streams.

Q. Why do you need confirmation for writes? From my point of view, the only thing required is ordering.

A. Agreed, but only if the entire target system is non-volatile! Explicit confirmation of persistence is required to cover the “gap” between the Write completing in the network and the data reaching persistence at the target.

Q. Where are these messages being generated? Does the NIC know when the data is flushed or committed?

A. They are generated by the application that has reserved the memory window on the remote node. It can write using RDMA Writes to that window all it wants, but to guarantee persistence it must send a flush.

Q. How is RPM presented on the client host?

A. The application using it sees it as memory it can read and write.

Q. Does this RDMA Commit Response implicitly ACK any previous RDMA Sends/Writes to the same or a different MR?

A. Yes, the new Commit (and Verify and Atomic Write) Responses have the same acknowledgement coalescing properties as the existing Read Response. That is, a Commit Response is explicit (non-coalesced), but it coalesces/implies acknowledgement of prior Write and/or Send Requests.

Q. Does this one still have the current RDMA Write ACK?

A. See the previous answer. Yes, a Commit Response implicitly acknowledges prior Writes.

Q. With respect to the race hazard explained to show the need for an explicit completion response, wouldn’t this be the case even with non-volatile memory, if the data were to be stored in non-volatile memory? Why is this completion status required only in the non-volatile case?

A. Most networked applications that write over the network to volatile memory do not require explicit confirmation at the writer endpoint that data has actually reached there. If so, additional handshake messages are usually exchanged between the endpoint applications. On the other hand, a writer to PERSISTENT memory across a network almost always needs assurance that data has reached persistence, thus the new extension.

Q. What if you are using multiple RNICs with multiple ports to multiple ports on a 100Gb fabric for server-to-server RDMA? How is order kept there: by CPU software, or “NIC teaming plus”?

A. This would depend on the RNIC vendor and their implementation.

Q. What is the time frame for these new RDMA messages to be available in the verbs API?

A. This depends on the IBTA standards approval process, which is not completely predictable; roughly sometime in the first half of 2019.

Q. Where could I find more details about the three new verbs (what are the arguments)?

A. Please poll/contact/Google the IBTA and IETF organizations towards the end of calendar year 2018, when first drafts of the extension documents are expected to be available.

Q. Do you see this technology used in a way similar to how hyperconverged systems now use storage, or could you see this used as a large shared memory subsystem in the network?

A. High-speed persistent memory, in either NVDIMM or SSD form factor, has enormous potential in speeding up hyperconverged write replication. It will, however, require a substantial re-write of such storage stacks, moving for example from traditional three-phase block storage protocols (command/data/response) to an RDMA write/confirm model. More generally, the RDMA extensions are useful for distributed shared PERSISTENT memory applications.

Q. What would be the most useful performance metrics to debug performance issues in such environments?

A. Within the RNIC, basic counts for the new message types would be a baseline. These, plus total stall times encountered by the RNIC awaiting Commit Responses from the local CPU subsystem, would be useful. Within the CPU platform, basic counts of device write and read requests targeting persistent memory would be useful.

Q. Do all the RDMA NICs have to update their firmware to support these new verbs? What is the expected performance improvement with the new Commit message?

A. Both answers would depend on the RNIC vendor and their implementation.

Q. Will the three new verbs be implemented in the RNIC alone, or will they require changes in other places (processor, memory controllers, etc.)?

A. The new Commit Request requires the CPU platform and its memory controllers to confirm that prior write data has reached persistence. The new Atomic Write and Verify messages, however, may be executed entirely within the RNIC.

Q. What about the future of NVMe over TCP? This would be much simpler for people to implement. Is this a good option?

A. Again, this would depend on the NIC vendor and their implementation. Different vendors have implemented various tests for performance. It is recommended that readers do their own due diligence.
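
To make the “gap” described in the race-hazard answer above more concrete, here is a small, purely illustrative Python model. It is not real RDMA verbs code (the new verbs are still working their way through IBTA standardization, as noted above); it simply models why a Write completion alone does not guarantee persistence, while the proposed Commit does.

    class TargetMemory:
        """Toy model: a volatile write buffer in front of persistent memory."""

        def __init__(self):
            self.volatile_buffer = {}
            self.persistent = {}

        def rdma_write(self, addr, data):
            # A Write completion only means the data reached the target's
            # memory subsystem; it may still sit in a volatile buffer.
            self.volatile_buffer[addr] = data

        def commit(self, addrs):
            # The proposed Commit asks the target platform to confirm that
            # prior writes to these addresses have reached persistence.
            for a in addrs:
                if a in self.volatile_buffer:
                    self.persistent[a] = self.volatile_buffer.pop(a)
            return True  # Commit Response: the listed data is now persistent

    mem = TargetMemory()
    mem.rdma_write(0x1000, b"journal record")
    # A power loss at the target here could drop the record even though the
    # initiator already saw the Write complete; this is the race hazard.
    assert 0x1000 not in mem.persistent
    mem.commit([0x1000])
    assert mem.persistent[0x1000] == b"journal record"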

How Scale-Out Storage Changes Networking Demands

Fred Zhang

Oct 23, 2018

Scale-out storage is increasingly popular for Cloud, High-Performance Computing, Machine Learning, and certain Enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines. But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicating with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it's based on flash. That's why the SNIA Networking Storage Forum (NSF) is hosting a live webcast "Networking Requirements for Scale-Out Storage" on November 14th. I hope you will join my NSF colleagues and me to learn about:
  • Scale-out storage solutions and what workloads they can address
  • How your network may need to evolve to support scale-out storage
  • Network considerations to ensure performance for demanding workloads
  • Key considerations for all flash scale-out storage solutions
Register today. Our NSF experts will be on hand to answer your questions.
