
Managing Your Computing Ecosystem

Kristen Hauser

Apr 12, 2017


  By George Ericson, Distinguished Engineer, Dell EMC; Member, SNIA Scalable Storage Management Technical Working Group, @GEricson

Introduction

This blog is part one of a three-part series recently published on “The Data Cortex”, which represents the thoughts and opinions of members of the CTO Team of Dell EMC’s Data Protection Division. The author, George Ericson, has been actively participating in the SNIA Scalable Storage Management Technical Working Group, which has been developing the SNIA Swordfish™ storage management specification.

SNIA Swordfish is an extension of the Distributed Management Task Force’s (DMTF’s) open industry Redfish® standard, and the combination offers a unified approach to managing storage and servers in environments like hyperscale and cloud infrastructures. That close relationship makes a single feedback portal for both specifications convenient: SNIA’s Storage Management Initiative (SMI) has set up swordfishforum.com as an easy link to the Redfish Forum site. Please visit often and share your thoughts.

Overview

There is a very real opportunity to take a giant step towards universal and interoperable management interfaces that are defined in terms of what your clients want to achieve. In the process, the industry can evolve away from today’s complex, proprietary, and product-specific interfaces.

You’ve heard this promise before, but it’s never come to pass. What’s different this time? Major players are converging storage and servers. Functionality is commoditizing. Customers are demanding it more than ever.

Three industry-led open standards efforts have come together to collectively provide an easy-to-use and comprehensive API for managing all of the elements in your computing ecosystem, ranging from simple laptops to geographically distributed data centers.

This API is specified by:

  • the Open Data Protocol (OData) from OASIS
  • the Redfish Scalable Platforms Management API from the DMTF
  • the Swordfish Scalable Storage Management API from the SNIA

One can build a management service, conformant to the Redfish or Swordfish specifications, that provides a comprehensive interface for discovery of the managed physical infrastructure, as well as for provisioning, monitoring, and management of the environmental, compute, networking, and storage resources provided by that infrastructure. Such a management service is an OData-conformant data service.
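
To make this concrete, here is a minimal sketch (in Python, using the requests library) of how a client might begin discovering such a service. The management host name is hypothetical; the /redfish/v1/ service root, the $metadata document, and the @odata.id navigation links are defined by the Redfish and OData specifications.

```python
# Minimal discovery sketch for a Redfish/Swordfish management service.
# The host is hypothetical; verify=False is for lab use only -- real
# clients should validate TLS certificates.
import requests

BASE = "https://mgmt.example.com"  # hypothetical management controller

# Every Redfish service exposes its service root at a well-known URL.
root = requests.get(f"{BASE}/redfish/v1/", verify=False).json()
print(root.get("Name"), root.get("RedfishVersion"))

# Because the service is an OData data service, its entity data model is
# discoverable from the CSDL metadata document.
metadata = requests.get(f"{BASE}/redfish/v1/$metadata", verify=False)
print(metadata.headers.get("Content-Type"))

# Top-level resource collections (Systems, Chassis, Storage, ...) are
# reached by following @odata.id links in the service root payload.
for name, link in root.items():
    if isinstance(link, dict) and "@odata.id" in link:
        print(name, "->", link["@odata.id"])
```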

These specifications are evolving and certainly are not complete in all aspects. Nevertheless, they are already sufficient to provide comprehensive management of most features of products in the computing ecosystem.

This post and the following two will provide a short overview of each.

OData

The first effort is the definition of the Open Data Protocol (OData). OData v4 specifications are OASIS standards that have also begun the international standardization process with ISO.

Simply asserting that a data service has a RESTful API does nothing to ensure that it is interoperable with any other data service. More importantly, REST by itself makes no guarantee that a client of one RESTful data service will be able to discover, or even navigate, the RESTful API presented by some other data service.

OData enables interoperable use of RESTful data services. Such services allow resources, identified using Uniform Resource Locators (URLs) and defined in an Entity Data Model (EDM), to be published and edited by Web clients using simple HTTP messages. In addition to Redfish and Swordfish, described below, a growing number of applications support OData data services, e.g., Microsoft Azure, SAP NetWeaver, IBM WebSphere, and Salesforce.

The OData Common Schema Definition Language (CSDL) specifies a standard metamodel used to define the Entity Data Model over which an OData service acts. The metamodel defined by CSDL is consistent with common elements of the UML v2.5 metamodel, which enables reliable translation to the programming language of your choice.

OData standardizes the construction of RESTful APIs: it provides standards for navigation between resources, for request and response payloads, and for operation syntax. It specifies how a client discovers the entity data model of a data service and how the resources defined by that model can be discovered. While it does not standardize the APIs themselves, OData does standardize how payloads are constructed, along with a set of query options and many other items that often differ across current RESTful data services. The OData specifications use standard HTTP, AtomPub, and JSON, and standard URIs are used to address and access resources.
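
As an illustration of those standardized query options, here is a short sketch of a client querying a hypothetical OData v4 service. The service URL and the Volumes entity set are invented for the example; $select, $filter, $orderby, and $top are standard OData system query options, and the @odata.context and value members are standard features of OData v4 JSON payloads.

```python
# Querying a hypothetical OData v4 service with standard system query
# options defined by the OData specification.
import requests

SERVICE = "https://data.example.com/service"  # hypothetical OData root

params = {
    "$select": "Id,Name,CapacityBytes",        # project just these fields
    "$filter": "CapacityBytes gt 1000000000",  # server-side filtering
    "$orderby": "CapacityBytes desc",
    "$top": "10",                              # first ten matches only
}
resp = requests.get(f"{SERVICE}/Volumes", params=params,
                    headers={"Accept": "application/json"})
resp.raise_for_status()

payload = resp.json()
# OData v4 JSON payloads carry a context URL describing the result shape,
# and collection results arrive in the 'value' array.
print(payload.get("@odata.context"))
for entity in payload.get("value", []):
    print(entity["Id"], entity["Name"], entity["CapacityBytes"])
```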

The use of the OData protocol enables a client to access information from a variety of sources including relational databases, servers, storage systems, file systems, content management systems, traditional Web sites, and more.

Ubiquitous use will break down information silos and enable interoperability between producers and consumers. This will significantly increase the ability to provide new and richer functionality on top of OData services.

The OData specifications define:

  • the core protocol
  • conventions for constructing URLs
  • the Common Schema Definition Language (CSDL) used to describe entity data models
  • the JSON and Atom payload formats

Conclusion:

While REST is a useful architectural style, it is not a standard, and the variation among RESTful APIs that express similar functions means there is no standard way to interact with different systems. OData is laying the groundwork for interoperable management by standardizing the construction of RESTful APIs. Next up – Redfish.



Q&A on All Things iSCSI

Alex McDonald

Apr 7, 2017

In the recent SNIA Ethernet Storage Forum iSCSI pod webcast, from our "Everything You Wanted To Know About Storage But Were Too Proud To Ask" series, we discussed all things iSCSI. If you missed the live event, it's now available on-demand. As promised, we've compiled all the webcast questions with answers from our panel of experts. If you have additional questions, please feel free to ask them in the comment field of this blog. I also encourage you to check out the other on-demand webcasts in this "Too Proud To Ask" series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

Q. What does SPDK stand for?

A. SPDK stands for Storage Performance Development Kit. It comprises tools and libraries for developers to write high-performance, scalable storage applications in user mode. For details, see www.spdk.io.

Q. Can you elaborate on SPDK use? A quick search seems to indicate it is a "half-baked" solution, and available only on Linux systems.

A. SPDK isn't a solution, per se – it's a development kit, intended to provide common building blocks (NVMe drivers, NVMe over Fabrics targets and host/initiator, etc.) for solutions developers who care about latency, license (BSD), and efficiency.

Q. Is iSCSI ever going to be able to work with object storage?

A. iSCSI is a block storage protocol, while object storage is normally accessed using a RESTful API such as Amazon's S3 API or the Swift API. For this reason, iSCSI is unlikely to be used for direct access to object storage. However, an object storage system controller could use iSCSI—or other block protocols—to access remote storage enclosures or for data replication. There could also be storage systems that support both iSCSI/block and object storage access simultaneously.

Q. Does a high-density virtualized workload represent something better served with a full or partial offload solution?

A. Whether a workload is better served by full or partial offload really depends on what that workload is doing. If you are processing a lot of very large data segments, LSO or LRO might be very helpful. If you have a lot of smaller data sets, you might benefit from checksum or chimney offload. Unfortunately, the best way to find out is to test things out (but not on production, obviously).

Q. How does one determine if TOE NIC cards are worth the cost?

A. This is a really tough question to answer without context. The best approach is to dig into what your CPU and memory utilization and IO patterns look like on your servers and try to map that to TCP connections. If you have a lot of iSCSI IO and a large number of TCP connections on a server, that might be a candidate for TOE. That's just the technical response; then comes the really tricky part – quantifying how many dollars it is worth, which is far more challenging. For example, if I have a regular 10G NIC that costs $200 and a TOE card that costs 3x that and only saves 5% CPU, it may not have enough value. On the other hand, if that 5% CPU can be used by your application to transact enough business to pay for the extra $400, then it's worth it. (A worked version of this arithmetic appears after this Q&A.) Sorry to say that I have seen no scientific way to enumerate that value outside of specific hands-on testing of the solution with and without TOE NICs.

Q. What is the difference between a stateless and stateful TCP offload? Are RSS and TSS (receive-side and transmission-side scaling) offloads a type of TCP offload, or are they operating at a lower level like Layer 2?

A. Stateless offloading is basically any offload function that can be done without the NIC needing to maintain a connection state table; checksum offloads are an example. Stateful offloading is any offloading that requires the NIC to maintain a full connection state table. Receive Side Scaling distributes inbound connections so that connections coming into the server alternate across different CPUs on a multi-CPU server. There are also other performance enhancements such as RPS, RFS, XPS, and some others. These are more about how to get data from the network to the CPU and are not really TCP-specific functions, as they have to do with uniform processing, not necessarily with the TCP stack.

Q. Is using the host CPU to run iSCSI really a downside?

A. There may be applications where this is a problem, but you're generally right; it's not too much of an issue today. But there are iSCSI-based storage solutions coming up where a consistent 100s of nanoseconds to low microseconds of latency from the device is possible – and that's very fast indeed. So an iSCSI stack in these circumstances needs to ensure that its consumption of CPU doesn't increase the latency (even very efficient stacks can add 100s of micro- to milliseconds of latency) or cause contention for the CPU (busy CPUs mean you may queue for compute resources).

Q. Is the term "onload" for iSCSI new – never heard this before?

A. It was intended as a quick shorthand word to stand in contrast to iSCSI offload. It will probably not catch on!

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.
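
The back-of-the-envelope TOE arithmetic in the answer above is easy to capture in a few lines. Here is a minimal sketch in Python; the $200 NIC price, 3x TOE multiplier, and 5% CPU savings come from the answer, while the dollar value assigned to freed CPU is a purely hypothetical placeholder you would replace with numbers from your own testing.

```python
# Worked version of the TOE cost example: a $200 standard NIC vs. a TOE
# NIC at 3x the price that frees 5% of a server's CPU.
STANDARD_NIC_COST = 200.0
TOE_NIC_COST = 3 * STANDARD_NIC_COST      # $600
CPU_FREED = 0.05                          # 5% of one server's CPU

extra_cost = TOE_NIC_COST - STANDARD_NIC_COST   # the $400 premium

# Hypothetical: annual business value generated by 1% of server CPU.
VALUE_PER_CPU_PERCENT_PER_YEAR = 150.0

annual_value = CPU_FREED * 100 * VALUE_PER_CPU_PERCENT_PER_YEAR
print(f"Extra cost per server:     ${extra_cost:.2f}")
print(f"Annual value of freed CPU: ${annual_value:.2f}")
print("Worth it under these assumptions"
      if annual_value >= extra_cost
      else "Not worth it under these assumptions")
```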



Your Questions Answered on Non-Volatile DIMMs

Marty Foltyn

Apr 3, 2017

by Arthur Sainio, SNIA NVDIMM SIG Co-Chair, SMART Modular

SNIA’s Non-Volatile DIMM (NVDIMM) Special Interest Group (SIG) had a tremendous response to their most recent webcast: NVDIMM: Applications are Here! You can view the webcast on demand. Viewers had many questions during the webcast. In this blog, the NVDIMM SIG answers those questions and shares the SIG’s knowledge of NVDIMM technology. Have a question? Send it to nvdimmsigchair@snia.org.

1. What about 3D XPoint? How will this technology impact the market?

3D XPoint DIMMs will likely have a significant impact on the market. They are fast enough to use as a slower tier of memory between NAND and DRAM. It is still too early to tell, though.

2. What are good benchmark tools for DAX, and what are the differences between NVML applications and DAX-aware applications?

For benchmark tools, please see the answer to (11). NVML applications are written specifically for NVM (Non-Volatile Memory); they may use the open source NVML libraries (http://pmem.io/nvml). DAX is a file system feature that avoids the use of page cache buffers. DAX-aware applications know that their writes and reads go directly to the underlying NVM without being cached. (A minimal sketch of DAX-style access appears at the end of this Q&A.)

3. On the slide talking about NUMA, there was a mention of accessing NVDIMMs from a CPU on a different memory bus. The part about larger access times was clear enough. However, I came away with the impression that there is a correctness issue with handling of the ADR signal as well. Please clarify.

If this question is asking whether the NUMA-remote CPU will successfully flush ADR-protected buffers to memory connected to the NUMA-near CPU, then yes, there is the potential for a problem in this area. However, ADR is an Intel feature that is not specified in the JEDEC NVDIMM standard, so this is an Intel-specific implementation question. The question needs to be posed to Intel.

4. How common is an NVDIMM-compatible BIOS? How would one check?

They are becoming more common all the time. There are at least 8 server/storage systems from Intel and 22 from Supermicro that support NVDIMMs. Several other motherboard vendors have systems that support NVDIMMs. Most of the NVDIMM vendors have the lists posted on their websites.

5. How does a system go into save? What exactly does the BIOS have to do to get a system ready before asserting SAVE?

The BIOS does the initial checking to make sure the NVDIMM has a backup supply on power loss before it ARMs it. The BIOS also makes sure that any RESTORE of the previously saved data is properly done. This involves a set of operations that set appropriate registers in the NVDIMM module, all of which happens during boot-up initialization. On A/C power loss, the PCH (Platform Control Hub) detects the condition and initiates what is called the ADR (Asynchronous DRAM Refresh) sequence, terminating in the assertion of the SAVE signal by the CPLD. Without the BIOS ARM-ing the NVDIMM module, the module will not respond to the SAVE signal in a power loss situation.

6. Could you paint the picture of hardware costs at this point? How soon will NVDIMM-enabled systems become available to “the rest of us”?

NVDIMMs use DRAM, NAND flash, a controller, and many other parts in addition to what is used on standard RDIMMs. On that basis, the cost of an NVDIMM-N is higher than that of a standard RDIMM. NVDIMM-enabled systems have been available for several years and are shipping now.

7. Does RHEL 7.3 easily support Linux kernel 4.4?

RHEL 7.3 is still using the 3.10 version of the Linux kernel. For RHEL-related information, please check with Red Hat. You can also refer to: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/index.html. The distribution has drivers to support persistent memory, and it has also packaged the libraries for persistent memory.

8. What are the usual sizes for NVDIMMs available today?

4GB, 8GB, 16GB, and 32GB.

9. Are there any case studies of each of the NVDIMM-N applications mentioned?

You can find some examples of case studies at these websites: https://channel9.msdn.com/events/build/2016/p466 and https://msdn.com/events/build/2016/p470

10. What is the difference between pmem lib/PMFS in Linux and a DAX-enabled file system (like ext4-DAX)?

A DAX-based file system avoids the kernel page cache layer for caching its write data, so all of its write operations go directly to the underlying storage unit. One important thing to understand is that a DAX file system can still use block drivers for accessing its underlying storage. PMFS is a file system optimized for persistent memory; it completely avoids both the page cache and the block drivers and is designed to provide efficient access to persistent memory that is directly accessible via CPU load/store instructions. Refer to this link for more details: https://github.com/linux-pmfs/pmfs. PMFS, as of now, is only in the experimental stages.

11. What tool is used to measure the performance?

The performance measurement depends on what kind of application workload is to be characterized; this is a very complex topic, and no single benchmarking tool is good for all workload characteristics. For file system performance, SpecFS, Bonnie++, IOzone, FFSB, FileBench, etc. are good tools. SysBench is good for a variety of performance measurements. The Phoronix Test Suite (http://www.phoronix.com/scan.php?page=home) has a variety of tools for Linux-based performance measurements.

12. How similar do you expect the OS support for -P to be to the support for -N? I don’t see a lot of need for differences at this level (though there certainly will be differences in the BIOS).

As of now, the open source libraries (http://pmem.io) are designed to be agnostic about the underlying memory types; they are simply classified as persistent memory, meaning it could be “-N”, “-P”, or something else. The libraries are written for user space, and they assume that any underlying kernel support is transparent. The “-P” type has been thought of as supporting both DRAM and persistent access at the same time, which might need a separate set of drivers in the kernel.

13. Does the PM-based file system appear to be block addressable from the application?

A file system creates a layer of virtualization to support logical entities such as volumes, directories, and files. Typically, an application running in user space has no knowledge of the underlying mechanisms a file system uses to access its storage units, such as persistent memory. The access a file system provides to an application is typically a POSIX file system interface: open, close, read, write, seek, etc.

14. Is ADR a pin?

ADR stands for Asynchronous DRAM Refresh. ADR is a feature supported on Intel chipsets that triggers a hardware interrupt to the memory controller, which will flush the write-protected data buffers and place the DRAM in self-refresh. This process is critical during a power loss event or system crash to ensure the data is in a “safe” state when the NVDIMM takes control of the DRAM to back it up to flash. Note that ADR does not flush the processor cache; in order to do so, an NMI routine would need to be executed prior to ADR.
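
As a companion to questions 2, 10, and 13 above, here is a minimal sketch of what DAX-style access looks like from user space: memory-mapping a file and storing into it directly. The /mnt/pmem path is hypothetical and assumes an ext4 or XFS file system mounted with the dax option; on an ordinary file system the same code runs but goes through the page cache. Real persistent-memory applications would typically use the pmem.io (NVML) libraries, which expose finer-grained flush primitives.

```python
# Minimal sketch of DAX-style user-space access to persistent memory.
# /mnt/pmem is a hypothetical DAX-mounted file system.
import mmap
import os

path = "/mnt/pmem/example.dat"
size = 4096

# Create and size the backing file.
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)

# Map it into our address space. On a DAX mount, loads and stores hit
# the underlying NVM directly, with no page cache copy in between.
buf = mmap.mmap(fd, size)
buf[0:13] = b"hello, NVDIMM"

# Flushing still matters for persistence ordering; msync (via flush())
# is the portable call here.
buf.flush()

buf.close()
os.close(fd)
```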


SNIA Ranked #2 for Storage Certifications - and Now You Can Take Exams at 900 Locations Worldwide

khauser

Mar 29, 2017

The SNIA Storage Networking Certification Program (SNCP) provides a strong foundation of vendor-neutral, systems-level credentials that integrate with and complement individual vendor certifications. Its four credentials – SNIA Certified Storage Professional, SNIA Certified Storage Engineer, SNIA Certified Storage Architect, and SNIA Certified Storage Networking Expert – reflect the advancement and growth of storage networking technologies, and establish a uniform standard by which individual knowledge and skill sets can be evaluated, providing employers in the storage industry with an independent assessment of the individual.

Should you become certified? With heterogeneous data centers as the de facto environment today, IT certification can be of great value, especially in a career area where you are trying to advance – and even more so when it is a vendor-neutral certification that complements specific product skills. And with surveys suggesting IT storage professionals may anticipate six-figure salaries, pursuing certification seems like a good idea. Don’t just take our word for it: CIO Magazine has cited the SNIA Certified Storage Networking Expert (SCSN-E) as #2 of their top seven storage certifications – and the way to join an elite group of storage professionals at the top of their games.

SNIA now makes it even easier to take its three exams – Foundations; Management/Administration; and Assessment, Planning, and Design. Exams are now available to on-site test takers globally via a new relationship with the Kryterion Testing Network, which uses over 900 testing centers in 120 countries to securely proctor exams for SNIA certification candidates worldwide. To learn more about Kryterion or locate your nearest testing center, go to www.kryteriononline.com/Locate-Test-Center. For more information about SNIA’s SNCP, visit https://www.snia.org/education/certification.


How Many IOPS? Users Share Their 2017 Storage Performance Needs

Marty Foltyn

Mar 24, 2017

New on the Solid State Storage website is a whitepaper from analysts Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis detailing IT managers’ requirements for storage performance. The paper examines how requirements have changed over a four-year period for a range of applications, including databases, online transaction processing, cloud and storage services, and scientific and engineering computing. Users disclose how many IOPS are needed, how much storage capacity is required, and what system bottlenecks prevent them from getting the performance they need. You’ll want to read this report before signing up for a SNIA BrightTalk webcast at 2:00 pm ET/11:00 am PT on May 3, 2017, where Tom and Jim will discuss their research and provide answers to questions like:
  • Does a certain application really need the performance of an SSD?
  • How much should a performance SSD cost?
  • What have other IT managers found to be the right balance of performance and cost?
Register for the “How Many IOPS?  Users Share Their 2017 Storage Performance Needs” at https://www.brighttalk.com/webcast/663/252723



IP-Based Object Drives Now Have a Management Standard

Alex McDonald

Mar 9, 2017

The growing popularity of object-based storage has resulted in the development of Ethernet-connected storage devices, also referred to as IP-Based Drives, that support object interfaces and, in some cases, the ability to run applications on the drives themselves. These scale-out storage nodes consist of relatively inexpensive drive-sized enclosures with IP network connectivity, CPU, memory, and storage. While inexpensive to deploy, these solutions require more management than a traditional drive. In order to simplify management of these drives, SNIA has developed and approved the release of the IP-Based Drive Management Specification. On April 20th, the SNIA Cloud Storage Initiative is hosting a live webcast, “IP-Based Object Drives Now Have a Management Standard.” It will be a unique opportunity to learn about this specification from the authors who wrote it. In this webcast, we’ll discuss:
  • Major components of the IP-Based Drive Management Standard
  • How the standard leverages the DMTF Redfish management standard to manage IP-Based Drives
  • The standard management interface for drives that are part of JBOD (Just A Bunch Of Disks) or JBOF (Just A Bunch Of Flash) enclosures
This standard allows drive management to scale to data centers and beyond, enabling high degrees of automation and software-only management of data centers. Reserve your spot today to learn more and ask questions of the folks behind the spec. I hope to see you on April 20th. In the meantime, the sketch below gives a flavor of the Redfish-style management the specification builds on.
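
Since the specification leverages Redfish, here is a speculative sketch (in Python, using the requests library) of what enumerating drives through a Redfish-style interface might look like. The drive’s IP address is hypothetical; the /redfish/v1/ service root and Chassis collection are standard Redfish, and the Links.Drives references appear in recent Redfish Chassis schemas. The actual interface defined by the IP-Based Drive Management Specification may differ in its details.

```python
# Speculative sketch: walking a Redfish-style model on an IP-addressable
# drive or enclosure. The address is hypothetical; verify=False is for
# lab use only.
import requests

DRIVE = "https://192.0.2.42"  # hypothetical IP-Based Drive / enclosure

root = requests.get(f"{DRIVE}/redfish/v1/", verify=False).json()
chassis_col = requests.get(
    DRIVE + root["Chassis"]["@odata.id"], verify=False).json()

# Visit each chassis and report any drives it links to.
for member in chassis_col.get("Members", []):
    chassis = requests.get(DRIVE + member["@odata.id"], verify=False).json()
    for ref in chassis.get("Links", {}).get("Drives", []):
        drive = requests.get(DRIVE + ref["@odata.id"], verify=False).json()
        print(drive.get("Id"),
              drive.get("CapacityBytes"),
              drive.get("Status", {}).get("Health"))
```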
