SNIA Swordfish™ - Your Questions Answered

Richelle Ahlvers

Aug 3, 2018

The Storage Networking Industry Association’s (SNIA’s) Storage Management Initiative (SMI) took on the topic of SNIA Swordfish™ in a live webcast titled “Introduction to SNIA Swordfish™ – Scalable Storage Management.” The replay is available here. SNIA experts Richelle Ahlvers and Don Deel responded to questions during the webcast. Here are those questions and responses:

Q. You talked about two different ways to add storage to Redfish – hosted service configuration and integrated service configuration. When would you use one configuration instead of the other?

A. The integrated services configuration was added to clarify support for direct-attach configurations using Swordfish constructs. If you have a server with a RAID card in it, and you want it to use a more complex storage configuration – storage pools and some notion of class of service – you would use the integrated service configuration. The hosted service configuration is used to model non-direct-attach configurations, such as external storage arrays or file services.
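In practice, the two configurations surface storage at different places in a service’s resource tree. Here is a minimal sketch in Python of reading both locations; the host, credentials, and system ID are placeholders, and the exact URIs depend on the Swordfish version the service implements.

```python
# A minimal sketch (Python + requests) of where each configuration exposes
# storage. Host, credentials, and system ID are placeholders.
import requests

BASE = "https://storage.example.com"  # hypothetical Swordfish service
AUTH = ("admin", "password")          # placeholder credentials

# Integrated service configuration: storage hangs off a ComputerSystem,
# e.g. a server with a RAID card.
integrated = requests.get(f"{BASE}/redfish/v1/Systems/1/Storage",
                          auth=AUTH, verify=False).json()

# Hosted service configuration: a standalone storage service (external
# array or file service) exposed at the service root.
hosted = requests.get(f"{BASE}/redfish/v1/StorageServices",
                      auth=AUTH, verify=False).json()

print(integrated.get("Name"), hosted.get("Name"))
```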

Q. Another service configuration question. For a pure JBOD, which would be the preferred approach?

A. A JBOD could start with either configuration, depending on whether it is a standalone system (HSC) or server-attached. If the JBOD has an embedded controller in an enclosure, it could be modeled using the HSC configuration.

Q. Are there provisions for adding custom data in the payloads that Swordfish supports? Is there a method to add vendor-specific parameters in the payload?

A. Previous standards did not have a good model for adding OEM-specific data.

As a result, Redfish, and its extensions such as Swordfish, have ensured that there is a very clean place to add OEM data. Every schema supports OEM extensions in two places: OEM extensions for properties, and OEM actions – a way to support functions that don’t map cleanly to REST operations.
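As a hedged illustration, here is what those two extension points typically look like in a resource payload, written as a Python dict. “Contoso” is the placeholder vendor name used throughout the Redfish examples; the property and action shown here are hypothetical.

```python
# Sketch of the two OEM extension points in a Redfish/Swordfish payload.
# "Contoso" is the conventional placeholder vendor; real payloads nest data
# under the vendor's registered name. WearLevel and Defragment are invented
# for illustration only.
volume = {
    "@odata.id": "/redfish/v1/StorageServices/1/Volumes/1",
    "Name": "Volume1",
    "CapacityBytes": 10737418240,
    # Extension point 1: vendor-specific properties
    "Oem": {
        "Contoso": {
            "WearLevel": 0.83,  # hypothetical vendor property
        }
    },
    # Extension point 2: vendor-specific actions
    "Actions": {
        "Oem": {
            "#Contoso.Defragment": {  # hypothetical vendor action
                "target": "/redfish/v1/StorageServices/1/Volumes/1"
                          "/Actions/Oem/Contoso.Defragment"
            }
        }
    },
}
```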

Q. Is there any work related to NVMe over Fabrics in development?

A. Both SNIA and the Distributed Management Task Force (DMTF, which developed Redfish) have been working on this. The DMTF’s Redfish Forum developed the base model for SAS/SATA and PCIe fabrics, which is being extended to include NVMe over Fabrics. SNIA is also working on adding NVMe over Fabrics device connections to its basic models to integrate the storage elements.

Q. I think of Redfish as talking to the Baseboard Management Controller (BMC). Where is Swordfish functionality located? Is it on the CPU running the OS or is it also out of band?

A. Where Swordfish runs is determined by the implementation. An implementation can choose to run either in band or out of band; in most cases this choice will be consistent with the vendor’s existing architecture. If a vendor’s existing architecture supports out-of-band management, then their Swordfish implementation will also likely be out of band. Note that the Swordfish implementation may leverage existing Redfish instrumentation on integrated components in either case, but this is a completely vendor-specific choice.

Q. What is meant by endpoint?

A. Endpoints are an abstraction of a connection. They describe the connections without needing to define everything about the underlying hardware.
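For a concrete feel, here is a hedged sketch of reading a fabric’s Endpoints collection over the Redfish REST interface. The host, credentials, and fabric ID are placeholders; EndpointProtocol is one of the properties the Redfish Endpoint schema defines for describing the connection.

```python
# Walk a fabric's Endpoints collection; each Endpoint describes a connection
# point rather than the full underlying hardware. Placeholders throughout.
import requests

BASE = "https://storage.example.com"
AUTH = ("admin", "password")

collection = requests.get(f"{BASE}/redfish/v1/Fabrics/SAS/Endpoints",
                          auth=AUTH, verify=False).json()
for member in collection.get("Members", []):
    ep = requests.get(f"{BASE}{member['@odata.id']}",
                      auth=AUTH, verify=False).json()
    print(ep.get("Name"), ep.get("EndpointProtocol"))
```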

Q. Since JBODs fall within the domain of server hardware, can software RAID solutions take full advantage of Swordfish?

A. Software RAID solutions can absolutely take full advantage of Swordfish. Remember that Swordfish is a schema extension to Redfish for storage functionality; therefore, it doesn’t care what underlying hardware it is running on. Note that many different types of storage solutions today run on “server hardware” – software-defined storage (SDS) solutions, for example, have no custom hardware and fall exclusively in this domain, yet are clearly storage solutions.

Q. Is Swordfish planning on staying an extension to Redfish? Does it have a goal of being integrated into the Redfish specification at some point?

A. Yes, Swordfish plans to remain an extension to Redfish. There isn’t a reason to integrate it into Redfish, as it is already tightly coupled with Redfish; the schemas are delivered publicly on the same site. SNIA will continue to own Swordfish content separately from DMTF in order to take advantage of the focused attention of the large body of storage domain experts in SNIA. To allow the Redfish ecosystem to grow to its maximum potential as quickly as possible, DMTF is partnering with other organizations to add features such as storage and networking to the standard.

Q. Do you have to be a SNIA member to contribute to the open source work?

A. No, you do not have to be a SNIA member to contribute to the open source projects. You will, however, need to sign the SNIA Contributor License Agreement, available at snia.org/cla, which releases your contributions to SNIA so that we can incorporate them back into the open source projects.

Q. Going through the specs for Redfish/Swordfish, I can see that only a few parameters of the schema are really mandatory for the vendor to support. Does that not break functionality where a client would be expecting data as per the entire schema?

A. The Scalable Storage Management Technical Work Group (SSM TWG) is working on the development of feature profiles, which will help clarify which functionality is required for specific clients, applications, and use cases. In addition to the functionality requirements in the Swordfish specification, the profile definitions will make much clearer to both clients and service implementations what functionality must be implemented for their specific configurations.

Additional information on SNIA Swordfish is available at www.snia.org/swordfish. This site contains resources including the latest specification, a Swordfish User Guide, a Swordfish Practical Guide, Swordfish mockups, and more.

You can also join the Redfish Specification Forum to ask and answer questions about Swordfish!


Remote Persistent Memory: It Takes a Village (or Perhaps a City)

Marty Foltyn

Aug 2, 2018

By Paul Grun, Chair, OpenFabrics Alliance and Senior Technologist, Cray, Inc.

Remote Persistent Memory (RPM) is rapidly emerging as an important new technology. But understanding a new technology, and grasping its significance, requires engagement across a wide range of industry organizations, companies, and individuals. It takes a village, as they say.

Technologies capable of bending the arc of server architecture come along only rarely. It’s sometimes hard to see one coming because it can be tough to discern between a shiny new thing, an insignificant evolution in a minor technology, and a serious contender for the Technical Disrupter of the Year award. Remote Persistent Memory is one such technology, the ultimate impact of which is only now coming into view.

Two relatively recent technologies serve to illustrate the point: the emergence of dedicated, high-performance networks beginning in the early 2000s and, more recently, the arrival of non-volatile memory technologies, both of which are leaving a significant mark on the evolution of computer systems. But what happens when those two technologies are combined to deliver access to persistent memory over a fabric? It seems likely that such a development will positively impact the well-understood memory hierarchies that are the basis of all computer systems today. And that, in turn, could cause system architects and application programmers to re-think the way that information is accessed, shared, and stored.

To help bring the subject of RPM into sharp focus, there is currently a concerted effort underway to put some clear definition around what is shaping up to be a significant disrupter. For those who aren’t familiar, Remote Persistent Memory refers to a persistent memory service that is accessed over a fabric or network. It may be a service shared among multiple users, or dedicated to one user or application. It is distinguished from local Persistent Memory, which refers to a memory device attached locally to the processor via a memory or I/O bus, in that RPM is accessed via a high-performance switched fabric. For our purposes, we’ll further refine our discussion to local fabrics, neglecting any discussion of accessing memory over the wide area.

Most important of all, Persistent Memory, including RPM, is definitely distinct from storage, whether that is file, object or block storage. That’s why we label this as a ‘memory’ service, to distinguish it from storage. The key distinction is that the consumer of the service recognizes and uses it as it would any other level in the memory hierarchy. Even though the service could be implemented using block or file-oriented non-volatile memory devices, the key is in the way that an application accesses and uses the service. This isn’t faster or better storage; it’s a whole different kettle of fish.

So how do we go about discovering the ultimate value of a new technology like RPM? So far, a lively discussion has been taking place across multiple venues and industry events. These aren’t ad hoc discussions, nor are they tightly scripted events; they are taking place in a loosely organized fashion designed to encourage lots of participation and keep the ball moving forward. Key discussions on the topic have hopscotched from SNIA’s Storage Developers Conference, to SNIA/SSSI’s Persistent Memory Summit, to the OpenFabrics Alliance (OFA) Workshop, and others. Each of these industry events has given the community at large an opportunity to discuss and develop the essential ideas surrounding RPM. The next installment will occur at the upcoming Flash Memory Summit in August, where there will be four sessions devoted to discussing Remote Persistent Memory.

Having frequent industry gatherings is a good thing, naturally, but that by itself doesn’t answer the question of how we go about progressing a discussion of Remote Persistent Memory in an orderly way. A pretty clear consensus has emerged that RPM represents a new layer in the memory hierarchy, and therefore the best way to approach it is to take a top-down perspective. That means starting with an examination of the various ways that an application could leverage this new player in the memory hierarchy. The idea is to identify and explore several key use cases. Of course, the technology is in its early infancy, so we’re relying on the best instincts of the industry at large to guide the discussion.

Once there is a clear idea of the ways that RPM could be applied to improve application performance, efficiency or resiliency, it’ll be time to describe how the features of an RPM service are exposed to an application. That means taking a hard look at network APIs to be sure they export the functions and features that applications will need to access the service. The API is key, because it defines the ways that an application actually accesses a new network service. Keep in mind that such a service may or may not be a natural fit for existing applications; in some cases it will fit naturally, meaning that an existing application can easily begin to utilize the service to improve performance or efficiency. For other applications, more work will be needed to fully exploit the new service.

Notice that the development of the API is being driven from the top down by application requirements. This is a clear break from traditional network design, where the underlying network and its associated API are defined roughly in tandem. Contrast that to the approach being taken with RPM, where the set of desired network characteristics is described in terms of how an application will actually use the network. Interesting!

Armed with a clear sense of how an application might use Remote Persistent Memory and the APIs needed to access it, it will be time for network architects and protocol designers to deliver enhanced network protocols and semantics that are best able to deliver the features defined by the new network APIs. And it’s time for hardware and software designers to get to work implementing the service and integrating it into server systems.

With all that in mind, here’s the current state of affairs for those who may be interested in participating. SNIA, through its NVM Programming Technical Working Group, has published a public document describing one very important use case for RPM: High Availability. The document describes the requirements that the SNIA NVM Programming Model, first released in December 2013, might place on a high-speed network. That document is available online. In keeping with the ‘top-down’ theme, SNIA’s work begins with an examination of the programming models that might leverage a Remote Persistent Memory service, and then explores the resulting impacts on network design. It is being used today to describe enhancements to existing APIs, including both the Verbs API and the libfabric API.

In addition, SNIA and the OFA have established a collaboration to explore other use cases, with the idea that those use cases will drive additional API enhancements. That collaboration is just now getting underway and is taking place during open, bi-weekly meetings of the OFA’s OpenFabrics Interfaces Working Group (OFIWG). There is also a dedicated mailing list, which you can join by going to www.lists.openfabrics.org and subscribing to the Ofa_remotepm mailing list. And finally, we’ll be discussing the topic at the upcoming Flash Memory Summit, August 7-9, 2018. Just go to the program section, click on the Persistent Memory major topic, and you’ll find a link to PMEM-202-1: Remote Persistent Memory. See you in Santa Clara!


RoCE vs. iWARP – The Next “Great Storage Debate”

John Kim

Jul 16, 2018

By now, we hope you’ve had a chance to watch one of the webcasts from the SNIA Ethernet Storage Forum’s “Great Storage Debate” webcast series. To date, our experts have had friendly, vendor-neutral debates on File vs. Block vs. Object Storage, Fibre Channel vs. iSCSI, and FCoE vs. iSCSI vs. iSER. The goal of this series is not to have a winner emerge, but rather to educate attendees on how the technologies work, the advantages of each, and common use cases. Our next great storage debate will be on August 22, 2018, when our experts will debate RoCE vs. iWARP. They will discuss these two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that can increase networking performance. Both reduce the amount of CPU overhead in transferring data among servers and storage systems to support network-intensive applications, like networked storage or clustered computing. Join us on August 22nd, as we’ll address questions like:
  • Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
  • What are the use cases for RoCE and iWARP, and what differentiates them?
  • UDP/IP and TCP/IP: which RDMA standard uses which protocol, and what are the advantages and disadvantages?
  • What are the software and hardware requirements for each?
  • What are the performance/latency differences of each?
Get this on your calendar by registering now. Our experts will be on-hand to answer your questions on the spot. We hope to see you there! Visit snia.org to learn about the work SNIA is doing to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards, and educational services that facilitate the efficient management, movement, and security of information.  


Dive Into SDC - A Chat with SNIA Technical Council Co-Chair Mark Carlson on Persistent Memory

khauser

Jul 12, 2018

The SNIA Storage Developer Conference (SDC) is coming up September 24-27, 2018 at the Hyatt Regency in Santa Clara, CA. The agenda is now live! SNIA on Storage is ready to dive into major themes of the 2018 conference, starting with Persistent Memory. The SNIA Technical Council takes a leadership role in developing the content for each SDC, so SNIA on Storage spoke with Mark Carlson, SNIA Technical Council Co-Chair and Principal Engineer, Industry Standards, Toshiba Memory America, to understand why SDC is bringing Persistent Memory to conference attendees.

SNIA on Storage (SOS) – Why has the Technical Council chosen Persistent Memory as a major topic for 2018?

Mark Carlson (MC) – For a number of years, SNIA has been a key contributor to industry activities driving system memory and storage into a single Persistent Memory entity. For developers just being introduced to this new technology, SNIA has a multitude of educational resources that can be used to come up to speed on Persistent Memory and make the most of their time at SDC 2018.

SOS – Where should attendees begin?

MC – The dominant form for delivering Persistent Memory products is the Non-Volatile DIMM (NVDIMM). SNIA Board Member Rob Peglar, along with Stephen Bates, SNIA NVM Programming Technical Work Group member, and Arthur Sainio, SNIA Persistent Memory and NVDIMM Special Interest Group Co-Chair, deliver a great explanation of Persistent Memory in this video. SNIA also has an infographic and a cookbook to get you started.

SOS – Where should those interested in developing Persistent Memory applications look for knowledge?

MC – Persistent Memory is having a large impact on both infrastructure and applications. Andy Rudoff of the SNIA NVM Programming Technical Work Group has a great introduction to this. SNIA has created the NVM Programming Model standard so that applications can take advantage of the increased performance. Andy explores this topic as well in this video. SNIA’s annual Persistent Memory Summit also has some great videos and talks on this topic, so attendees should explore that content as well. And check out our videos and BrightTalk presentations on this topic.

SOS – OK, so now I’ve done my homework and am ready for SDC 2018. What sessions should I look for?

MC – At SDC, don’t miss:

The Long & Winding Road to Persistent Memories, presented by Dr. Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis (Wednesday, September 26, 3:05 pm in Cypress). Persistent Memory is getting a lot of attention. SNIA has released a programming standard; NVDIMM makers, with the help of JEDEC, have created standardized hardware to develop and test PM; and chip makers continue to promote upcoming devices, although few are currently available. In this talk, industry analysts Jim Handy and Tom Coughlin will provide the state of Persistent Memory and show a realistic roadmap of what the industry can expect to see and when they can expect to see it. The presentation, based on three critical reports covering New Memory Technologies, NVDIMMs, and Intel’s 3D XPoint Memory (also known as Optane), will illustrate the Persistent Memory market, the technologies that vie to play a role, and the critical economic obstacles that continue to impede these technologies’ progress. We will also explore how advanced logic process technologies are likely to cause persistent memories to become a standard ingredient in embedded applications, such as IoT nodes, long before they make sense in servers.

Update on the SNIA Persistent Memory Programming Model in Theory and Practice, presented by Andy Rudoff of Intel (Thursday, September 27, 8:30 am in Cypress). As a charter member of the SNIA NVM Programming TWG, Andy has seen the Persistent Memory programming model through its multi-year life so far and has collaborated with various operating system vendors on their implementations. Andy will go over the current state of the programming model, including ongoing work and some interesting efforts in our future. While going over the theory published in the SNIA specifications, Andy will report on the actual features that were implemented and what’s available today in the ecosystem.

SNIA Nonvolatile Memory Programming TWG – Remote Persistent Memory, presented by Tom Talpey of Microsoft (Thursday, September 27, 9:30 am in Cypress). The SNIA NVMP TWG continues to make significant progress on defining the architecture for interfacing applications to PM. In this talk, Tom Talpey, an architect in remote storage and networking at Microsoft and an active member of the SNIA NVM Programming Technical Work Group, will focus on the important Remote Persistent Memory scenario and how the NVMP TWG’s programming model applies. Application use of these interfaces, along with fabric support such as RDMA and platform extensions, is part of this, and the talk will describe how the larger ecosystem fits together to support PM as low-latency remote storage.

SOS – You’ve convinced us to attend, so where do we go for more information?

MC – It will be great to see all those developing or looking to develop with Persistent Memory at SDC, where we will do our deep dive. Other talks will cover PM performance, available open source libraries, and integration with filesystems. Go here to register for the conference and learn more about the agenda and speakers.

SOS – Great to chat with you, and looking forward to our next dive: into orchestration.
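To make the programming model discussed above more concrete, here is a minimal sketch of the NVM.PM.FILE idea from the SNIA NVM Programming Model: persistent memory exposed as a memory-mapped file that the application stores into directly, with an explicit flush for durability. The DAX mount point is an assumption, and a real application would typically use PMDK’s libpmem for optimized cache-line flushing; Python’s stdlib mmap is used here only as a portable approximation.

```python
# Minimal sketch of the NVM.PM.FILE mode: map a file backed by persistent
# memory, store into it, flush to make the store durable. /mnt/pmem is an
# assumed DAX mount; real code would prefer PMDK's libpmem.
import mmap
import os

path = "/mnt/pmem/example.dat"  # assumed persistent-memory-backed file
size = 4096

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)

with mmap.mmap(fd, size) as pm:
    pm[0:13] = b"persist this!"  # a store directly into mapped memory
    pm.flush()                   # make the store durable (msync under the hood)

os.close(fd)
```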


A Q&A from the FCoE vs. iSCSI vs. iSER Debate

Tim Lustig

Jul 8, 2018

It’s become quite clear to those of us in the SNIA Ethernet Storage Forum (ESF) that everyone loves a great debate. We’ve proved that with our “Great Storage Debates” webcast series, which has had over 3,500 views in just a few months! Last month we had another friendly debate on FCoE vs. iSCSI vs. iSER. If you missed the live event, you can watch it now on-demand. Our live audience asked a lot of interesting questions. As promised, here are answers to them all.

Q. How often are iSCSI offload adapters used in customer environments as compared to software initiators? Can these adapters be used for all IP traffic, or do they only run iSCSI?

A. iSCSI offload adapters are ideally suited for enabling high-performance storage access at up to 100Gbps data rates for business-critical applications, for example, latency-sensitive transactional applications and large-file business intelligence applications. iSCSI offload adapters typically also support offload of other storage protocols such as NVMe-oF, iSER, and FCoE, as well as regular Ethernet traffic using offload or non-offload means.

Q. What you’ve missed with iSCSI is Jumbo Frames. That payload size is one of the biggest advantages over Fibre Channel. The biggest problem with both FCoE and iSCSI is they build the networks too complex, with too many hops, without true redundant isolation. Best practice with block-based FC is to keep the host and storage as close to each other as possible, and to have separate, isolated, redundant networks/fabrics.

A. The Jumbo Frame (JF) argument is quite contentious among iSCSI storage and network administrators, even beyond anything to do with Fibre Channel. The performance advantages of JFs are minimal, only a 3%-5% boost over the default MTU size of 1500. In mixed-workload environments (which dominate data center application deployments), JFs simply do not provide the kind of benefits that people expect in real-world scenarios. The only time JFs can “push the needle,” so to speak, is when you have massively scaled systems with 100s or 1000s of devices, but this raises other issues. One of those issues is that every device in the system needs to have JFs enabled. This can be something of a problem when systems get as large as they need to be in order to take advantage of JFs. Ensuring that every device is configured properly, especially over time, and especially when considering how iSCSI devices are added to networked environments, is a job that requires the coordination of the server/virtualization teams, the networking teams, and the storage teams. By and large, many people find QoS to be a more productive means of performance improvement for iSCSI systems than JFs.

Fibre Channel, on the other hand, has a maximum frame size of 2112 bytes. FCoE, then, only requires “baby jumbo” frames, for which the configuration is pushed from the switch to the end devices (~2.5k). What FC has that iSCSI does not have is the concept of “sequences” and “exchanges,” which ensure that a long flow of frames (regardless of their size) is sent as an entity. So, regardless of what the frame size is (2.5k or 9k), the data flow is sent with consistency and low jitter because of the way that the sequences and exchanges are handled.

The concern about “too complex” and “too many hops” is an interesting one, as Fibre Channel (and, correspondingly, FCoE) is deliberately kept as simple and straightforward as possible. An FC network, for instance, rarely goes beyond 2 hops (“hops” in FC are measured as the links between switches, whereas in Ethernet “hops” are measured as the switches themselves). Logically, then, there is usually, at most, an edge-core-edge topology with a predeterministic path to be followed thanks to Fibre Channel’s FSPF routing algorithm. iSCSI topologies, on the other hand, can be complex (as Ethernet topologies sometimes can be). For larger iSCSI environments, it is often recommended to isolate the storage traffic out into its own, simplified topology. iSCSI SANs that have grown organically, however, can sometimes struggle to be reined in over time.

Best practice for all storage, not just block, is to keep it as close to the host/source as is reasonably possible. In backup scenarios, for example, you want the storage far enough away to be safe from any catastrophe, but close enough to ensure recovery objectives. Keeping storage close to the host is a common architectural principle, and as mentioned in the webinar, it is important that architectural principles ensure high availability (HA) to compensate for the rigidity that block storage systems require to make up for weaker ULP recovery mechanisms.
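As a quick sanity check on that 3%-5% figure, here is a back-of-the-envelope calculation in Python comparing payload efficiency at the two MTU sizes for iSCSI over TCP/IPv4. The overhead constants are typical values, not exact for every configuration.

```python
# Compare wire efficiency of a standard 1500-byte MTU vs. a 9000-byte jumbo
# MTU. Overheads: Ethernet header (14) + FCS (4) + preamble (8) + interframe
# gap (12); IPv4 (20) + TCP (20) headers with no options.
ETH_OVERHEAD = 14 + 4 + 8 + 12
IP_TCP_OVERHEAD = 20 + 20

def efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_OVERHEAD   # bytes of application data per frame
    wire = mtu + ETH_OVERHEAD         # bytes actually consumed on the wire
    return payload / wire

std, jumbo = efficiency(1500), efficiency(9000)
print(f"1500 MTU: {std:.1%}, 9000 MTU: {jumbo:.1%}, gain: {jumbo/std - 1:.1%}")
# ~94.9% vs ~99.1%: roughly a 4% gain, consistent with the answer above.
```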
Q. Most servers today have enough compute power to not need offload adapters.

A. This statement might be true in some situations, but definitely not most. With more and more virtual machines being deployed on physical systems, and new storage technologies such as SSDs and NVMe devices greatly lowering latencies, servers are often CPU-bound when moving or retrieving data from storage. Offloading storage-related activities to an adapter frees the CPU and increases overall server performance.

Q. In which industry is each protocol (i.e., FCoE or iSCSI and iSER) widely used, and where?

A. iSCSI is the most widely supported Ethernet SAN protocol, with native initiator support integrated into all the major operating systems and hypervisors, built-in RDMA for high-performance offloaded implementations supporting up to 100Gbps, and support across major storage platforms. It is thus ideally suited for deployment across cloud and enterprise data center environments.

Q. Do iSCSI offload adapters provide the IPSec encryption, or is this done in software-only solutions? Please answer from both initiator and target perspectives.

A. Yes, iSCSI protocol offload adapters can optionally provide offload of IPSec encryption for both iSCSI (as well as NVMe-oF) initiator and target operation at data rates of up to 100 Gigabits per second. This results in overall higher server and target efficiency, including power, cooling, memory, and CPU savings.

Q. Does iSER support direct connections, or is a switch required?

A. A switch is not required.

Q. J, you left out the centralized management that Fibre Channel provides for FCoE as a positive.

A. I got there eventually! But you are correct, the Fibre Channel tools for a centralized management plane with the name server, regardless of the number of switches in the fabric, are a tremendous positive for FCoE/FC solutions at scale.

Q. Is multipath possible on the initiator with iSER, and will it scale with high IOPs?

A. Yes. Multipath is possible on the initiator with iSER and scales with high IOPs.

Q. FCoE has been around for a while, but I noticed that some storage vendors are dropping support for it. Do you still see a big future for FCoE?

A. As a protocol, FCoE has always been able to be used wherever and whenever needed. Almost all converged infrastructure systems use FCoE, for instance. Given that the key advantage of FCoE has been traffic/protocol consolidation, there is an extremely strong use case for FCoE at “the first hop,” that is, from the server to the first network switch.

Q. What is the MTU for iSER?

A. iSER is a protocol that sits above the Layer 2 Data Link Layer, which is where the MTU is set. As a result, iSER will accept/accommodate any MTU setting that is configured at that layer. Please see the answer earlier about Jumbo Frames for more information.

Ready for more great storage debates? Our next one will be RoCE vs. iWARP on August 22, 2018. Save your place by registering here. And you can check out our previous debates, “File vs. Block vs. Object Storage” and “Fibre Channel vs. iSCSI,” on-demand at your convenience too. Happy debating!


Simplifying the Movement of Data from Cloud to Cloud

Alex McDonald

Jul 5, 2018

We are increasingly living in a multi-cloud world, with potentially multiple private, public and hybrid cloud implementations supporting a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another. That means simplifying the movement of data from cloud to cloud. Data movement and data liberation – the seamless transfer of data from one cloud to another – has become a major requirement. On August 7, 2018, the SNIA Cloud Storage Technologies Initiative will tackle this issue in a live webcast, “Cloud Mobility and Data Movement.” We will explore some of these data movement and mobility issues and include real-world examples from the University of Michigan. We’ll discuss:
  • How do we secure data both at-rest and in-transit? (See the sketch after this list.)
  • What are the steps that can be followed to import data securely? What cloud processes and interfaces should we use to make data movement easier?
  • How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
  • Should the application of the data influence how (and even if) we move the data?
  • How can data in the cloud be leveraged for multiple use cases?
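On the first question above, at-rest protection when staging data for a cloud-to-cloud move often comes down to encrypting it client-side before it leaves the source cloud, with transport security such as TLS covering the in-transit leg. Here is a minimal sketch assuming the third-party Python cryptography package; key management, which is not shown, is the hard part in any real deployment.

```python
# Client-side encryption before a cloud-to-cloud copy, using the
# "cryptography" package (pip install cryptography). Where the key lives and
# how it rotates is out of scope here and is the hard part in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetch from a KMS, never generate ad hoc
fernet = Fernet(key)

plaintext = b"contents of a dataset being migrated..."
ciphertext = fernet.encrypt(plaintext)  # encrypted at rest before upload

# ...upload `ciphertext` to the destination cloud over HTTPS (in-transit
# security), then decrypt on the receiving side with the same key:
assert fernet.decrypt(ciphertext) == plaintext
```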
Register now for this live webcast. Our SNIA experts will be on-hand to answer your questions.
