Buffers, Queues and Caches Explained

John Kim

Apr 19, 2017

Finely tuning buffers, queues and caches can make your storage system hum. And that’s exactly what we discussed in our recent SNIA Ethernet Storage Forum webcast, “Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod.” If you missed it, it’s now available on-demand. In this blog, you’ll find detailed answers from our panel of experts to all the great questions we received during the live event. I also encourage you to check out the other on-demand webcasts in this “Too Proud To Ask” series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

Q. Question on cache – what would be the right size of cache at each point (clients / front-end connect / storage controller / back-end connect / physical storage)?

A. Great question! The main consideration for cache sizing at any point is the workload. If the workload is conducive to caching, then the more cache the merrier! However, when the workload is not conducive to caching, adding more cache capacity won’t be beneficial. For example, if the workload is 100% sequential reads of small 4K IOs, pre-loading the data into cache is extremely helpful, and increasing the size of that cache at the end-point will be good. If the workload is random and the IO size varies, pre-fetching data into cache may not be a good idea. Similarly, with write cache the benefit is realized two-fold: first, when the write is stored in cache and acknowledged back to the host (such a write is typically called “dirty” because it hasn’t yet been flushed back to the disk), and second, when the dirty write is overwritten by the host before it is flushed. Any other combination of workload and IO will get only partial benefit from the cache. Sizing cache is a very difficult exercise and there are no universal answers. Every implementation has its own pluses and minuses.

Q. Isn’t a higher queue depth increasing latency as well, so applications would run slower as they wait longer for IO to complete?

A. The answer is very dependent on the environment. In general, having more outstanding operations increases the load on the interconnects and storage media, which increases per-IO latency. The alternative, a small queue depth, may produce consistently lower per-IO latency at the expense of throughput and IOPS. There are numerous techniques for dealing with mixed storage traffic (low latency and high throughput), such as multiple queues, out-of-order completions, immediate and delayed in-line data transfers, ready-to-transfer, and policies. The NVM media latency roadmap is also helping with these latency-vs.-throughput decisions by enabling devices that achieve full throughput at very low queue depths.
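To make that trade-off concrete, Little’s Law ties the three quantities together: in-flight IOs = throughput × latency. Here is a minimal back-of-envelope sketch in Python; the device numbers are hypothetical, chosen only to show the shape of the trade-off:

```python
# Back-of-envelope: Little's Law relates queue depth, IOPS, and latency.
# All device numbers below are hypothetical, for illustration only.
def iops(queue_depth, latency_s):
    """Little's Law: concurrency = arrival_rate * time_in_system,
    so IOPS ~= queue_depth / per-IO latency."""
    return queue_depth / latency_s

# A device at 100 us per IO:
print(iops(1, 100e-6))    # QD=1  -> ~10,000 IOPS
print(iops(32, 100e-6))   # QD=32 -> ~320,000 IOPS, if the device keeps up

# If the device actually saturates at 200,000 IOPS, QD=32 pushes latency
# to 32 / 200,000 = 160 us: deeper queues buy throughput at a latency cost.
```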
Q. Does the SCSI protocol have a max queue depth of 32?

A. No. The SCSI Architecture Model allows up to 64 bits for the command identifier field, and each of the SCSI transports (iSCSI, SAS, …) defines a maximum within that range. There may be implementation-dependent SCSI endpoints that define smaller ranges.

Q. How would a distributed software-defined storage technology deal with queue depth, and how can this be advantageous or not advantageous?

A. Interesting question. Distributed software-defined storage is by definition made up of multiple autonomous layers of software components orchestrated to provide stable storage. These types of systems will have many outstanding operations (queue depth) at multiple stages and layers. It’s also not uncommon to see SDS file systems front-ended with block-based protocols, such as iSCSI, which enable the initiators to build up large queue depths of operations.

Q. Are queue depth and buffer the same?

A. No. Queues refer to command and response queues; buffers refer to in-flight data buffers. Command and response queues often contain pointers to these buffers embedded in the read or write commands.

Q. Are caches and buffers made of the same silicon that makes up SSD disks? Which one is faster?

A. As a general idea, yes: SSDs, RAM, caches, and buffers are all made from silicon. If we dig a little deeper, device caches and buffers are typically made of high-speed static RAM (SRAM), which is faster than the slower and cheaper dynamic RAM (DRAM) used for main memory. Modern SSDs utilize an even slower memory, commonly known as flash memory, and we differentiate that type of storage by its structure: Single-Level Cell (SLC), Multi-Level Cell (MLC), etc. There are some SSDs that are made out of DRAM, too. And then there are some newer technologies, like NVDIMM, 3D XPoint, etc. So, while the underlying physical material is still silicon, it’s the architecture that makes all the difference.

Q. In PFC, if there are pending items in P1, can P2 or P3 etc. go ahead?

A. Yes. Priority Flow Control (PFC, also called Per Priority Pause, though rarely) is designed to pause traffic on only one priority, allowing the remaining priority Classes of Service to work according to their configurations. So, for example, if PFC were to pause Priority Queue 1, and Priority Queue 3 also had a “no-drop” configuration but was not having any issues, PFC on Queue 1 would be triggered but PFC on Queue 3 would not. In reality, having more than one no-drop lane on a link is very, very rare, but it does illustrate that PFC operates on a per-priority basis, not on the whole link.

Q. Do all Ethernet-based NVMe-oF (NVMe over Fabrics) implementations require some form of Data Center Bridging (DCB)? Or are there versions of Ethernet-based NVMe-oF (RoCE & iWARP) that run over standard Ethernet without needing DCB?

A. Yes, both iWARP and RoCE can be run without DCB. To maintain peak performance, either DCB or other flow-control mechanisms like ECN are recommended.

Q. Do server devices automatically honor the pause frame or does it require configuration?

A. Assuming “server devices” refers to Ethernet ports on a server, it depends on the default settings of the NIC or LOM, or those loaded by the driver during initialization. Generally speaking, NIC devices that support PFC also support DCBX (Data Center Bridging Exchange). DCBX is a protocol that allows an end device, like a NIC, to get its proper configuration settings from the switch. That means that in an environment where PFC needs to be assigned to a specific Class of Service (CoS), the switch will send the NIC the proper settings during setup.

Q. Is it mandatory for all devices in the network, host and storage to have the same speed ports?

A. No.

Q. What are the theoretical devices for modeling and analyzing cache, buffer or queue behaviors?

A. Computers with software!

Q. What if I have really large writes and they fill up the cache quickly? Is there a way to bypass the large writes?

A. The time available for the presentation limited the amount of material we were able to share. One of the subjects we didn’t talk about was the cache software algorithm. Most storage vendors manage the cache by not letting extremely large IOs be cached. Back in the spinning-storage era, an IO of 2MB would typically be considered too large to cache, and would be sent directly to disk.
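To illustrate the two write-cache behaviors discussed above — acknowledging dirty writes from cache and bypassing oversized IOs — here is a minimal Python sketch. The 2MB cutoff and the structure are hypothetical, for illustration only; real arrays use far more sophisticated algorithms:

```python
# Minimal write-back cache sketch: small writes are acknowledged from
# cache ("dirty" until flushed); oversized IOs go straight to disk.
BYPASS_THRESHOLD = 2 * 1024 * 1024  # hypothetical 2 MB large-IO cutoff

class Disk:
    def write(self, lba, data):
        print(f"disk write lba={lba} len={len(data)}")

class WriteBackCache:
    def __init__(self, backend):
        self.backend = backend   # anything with a write(lba, data) method
        self.dirty = {}          # lba -> data, acknowledged but not flushed

    def write(self, lba, data):
        if len(data) >= BYPASS_THRESHOLD:
            self.backend.write(lba, data)  # too large: bypass the cache
        else:
            # Overwriting an already-dirty block is the second cache win:
            # the earlier version never needs to be flushed at all.
            self.dirty[lba] = data
        # in either case the host sees an immediate acknowledgement

    def flush(self):
        for lba, data in self.dirty.items():
            self.backend.write(lba, data)
        self.dirty.clear()

cache = WriteBackCache(Disk())
cache.write(0, b"x" * 4096)                 # cached, dirty
cache.write(0, b"y" * 4096)                 # overwrites dirty data in place
cache.write(100, b"z" * BYPASS_THRESHOLD)   # bypasses cache
cache.flush()                               # only lba 0 hits the disk here
```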
Q. What is the use of cache in all-flash storage, given that flash is the highest-performance disk?

A. See the answer to the question above, “Are caches and buffers made of the same silicon that makes up SSD disks? Which one is faster?” Hardware caches and buffers are typically made of the fastest memory, then comes RAM, and last are the SSDs, aka flash disks. Storing data on a faster layer is therefore still beneficial to performance.

Q. Does the LUN queue depth include the queue depth discussed here?

A. Yes. SCSI LUN queue depth enables the initiator(s) to have multiple outstanding I/O operations in flight.

Q. Will you use a queuing algorithm to manage the IO queue? If so, which algorithm?

A. There are several storage protocols that define mechanisms for a target to dynamically adjust the queue depth available to the initiator through various forms of credit exchange. Such mechanisms enable the target to implement multi-initiator load balancing across targets.
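Relating to the LUN queue depth question above: on Linux hosts, the per-LUN queue depth the SCSI midlayer allows is exposed through sysfs, so you can inspect it (and, with care, tune it). A minimal sketch, assuming a Linux host and a device named sda; the attribute path is standard, but whether writes are honored depends on the HBA driver:

```python
# Read (and optionally set) the SCSI LUN queue depth on Linux via sysfs.
from pathlib import Path

def lun_queue_depth(device="sda"):
    """Return the queue depth the kernel currently allows for this LUN."""
    path = Path(f"/sys/block/{device}/device/queue_depth")
    return int(path.read_text().strip())

def set_lun_queue_depth(device, depth):
    """Requires root; the driver may clamp the value to what the HBA supports."""
    path = Path(f"/sys/block/{device}/device/queue_depth")
    path.write_text(str(depth))

if __name__ == "__main__":
    print(lun_queue_depth("sda"))
```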

Storage Expert Takes on Hyperconverged Questions

John Kim

Apr 17, 2017

Last month, we were fortunate enough to have Greg Schulz, analyst and founder of Server Storage IO, as a guest speaker at our SNIA Ethernet Storage Forum webcast, "What Does Hyperconverged Mean to Storage." If you missed it, it's now available on-demand, and you can also download the webcast slides. Greg fielded many great questions during the live event, but we didn't have time to get to them all. So here they are:

Q. What is the difference between Converged Infrastructure (CI) and Hyperconverged Infrastructure (HCI)?

A. HCI is aggregated: you scale compute and storage in lock step. Converged is disaggregated: you can scale the compute independently of the storage. There are some software solutions that can support both hyper-converged (aggregated) and converged (disaggregated) deployments.

Q. What is your definition of "Little Data"?

A. Little Data is anything that's not Big Data. It encompasses traditional databases and traditional structured, semi-structured and even some unstructured data.

Q. With convergence, what is the impact on the IT organization?

A. There is an opportunity for organizations to converge how they manage data infrastructure resources and service delivery. In other words, the technology can be leveraged to help the organization itself converge. Another impact is how converged solutions are protected and backed up, and how BC/BR/DR and related management are done. Traditionally there are separate IT teams for compute, storage, and networking, especially in a large organization. New technology solutions may allow an organization to converge those teams.

Q. Is there a hybrid strategy, where a complete information system is composed of HCI/CI building blocks? If yes, what management tools would span these components?

A. Sure, why not? You can certainly converge your environment into a particular CI/HCI solution or approach; likewise, different CI/HCI solutions can co-exist along with other solutions in a given environment in hybrid ways. Have a hybrid strategy that looks at how technologies and solutions adapt to your needs and environment. Focus on how it's going to work for you, vs. you having to work for them.

Q. What does FUZE stand for?

A. FUZE is not an acronym. It is the actual fuzing, as in melding and bringing together – literally fuzing things together.

Q. Do HCI vendors re-balance (compute, I/O, storage) automatically as more nodes are added?

A. Solutions vary in how they rebalance workloads. Some are dynamic while others rebalance on intervals; how, when and what they rebalance varies. So, as you add capacity or make changes, you need to make sure resources are properly allocated to address performance.

Q. Can't you offload those CPU cycles caused by I/O to another CPU?

A. That's an interesting question. Yes, move the application to another CPU; there is software that will leverage the resources on another CPU. Most HCI and CI solutions are running on a stack that requires hardware somewhere.

Q. This discussion has touched on compute and storage scaling. What about the network, both between compute in the CI/HCI infrastructure and external to other compute, databases, or end-users?

A. Both CI and HCI need to connect to other resources, but in most cases the highest levels of network traffic are inside the CI or HCI stack, because the compute and storage resources are contained within. Their connections to outside clients or servers for data exchange, application integration, or client access are important but usually not very demanding on network bandwidth. (External connections for storage remote replication or backup could be bandwidth-intensive.)

Q. How can current Enterprise Storage products blend with either CI or HCI? Enterprise Storage is basically a centralized storage architecture, while HCI is built mostly on a distributed storage architecture. So how can current Enterprise Storage vendors show customers use cases for their Enterprise Storage, either as part of the HCI solution or alongside HCI?

A. Generally, enterprise storage products can be included in CI but are not blended with HCI. For example, Dell EMC, Cisco (with NetApp and other storage vendors), IBM and Oracle offer CI solutions that include enterprise storage arrays in the rack. Most HCI platforms do not interoperate with enterprise storage arrays because the HCI platforms include their own storage. They can co-exist with enterprise storage arrays, and that's how most customers deploy them—some workloads run on the HCI infrastructure while others continue to use enterprise storage arrays.

Q. One of the HCI selling points is simplicity and cost reduction versus a la carte. It seems from what is being presented that this may not be the case. Can you elaborate on where HCI may become more complex or costly?

A. It comes down to value. You can buy all the components yourself, glue them all together, and may come up with a lower total cost, but what is the value of your time? What is the cost of staff time to evaluate, test, deploy and maintain? The total value must be considered. It's possible that HCI will be more costly than a disaggregated deployment that separates compute and storage, but this depends heavily on the workload and the specific vendor's product implementation.

Q. Current HCI "full stack" solutions claim compute and storage convergence, but what about the network? Given the east/west traffic introduced by HCI solutions, what networking solutions should customers be looking at?

A. Most of the common HCI solutions are packaged with server, storage and compute, and most have networking included as well—typically the network adapters and sometimes also the switches. Some even have a back-end software-defined networking (SDN) capability as part of their stack.

Q. Related to the HCI answer, what about vendors who allow for storage growth and/or server (compute) and storage additions? This allows for aggregated and disaggregated, yes?

A. Most HCI vendors require compute and storage to be added simultaneously, though many support different node types with different ratios of compute and storage. This allows customers to change the ratio of compute and storage by adding different node types. And yes, some HCI vendors also support both a hyper-converged and a disaggregated model, with the disaggregated model allowing compute and storage to be added separately.

Q. What tools are available to make HCI work in a hybrid-load environment with different workload requirements, e.g. VDI and databases?

A. There are tools for moving and migrating applications, workloads, systems and VMs into CI/HCI environments, and likewise for tuning, optimizing, gaining insight, analytics and reporting. Most CI/HCI solutions have tools built in for optimizing PACE (Performance, Availability, Capacity, Economics) attributes along with server compute, memory, storage, and I/O resources. Some CI/HCI solutions are optimized for VDI/workspaces, while others can support general workloads including databases, and some even support HPC/SC or other specialized workloads.

Q. Does network performance affect HCI or CI performance?

A. Sometimes. Most hybrid HCI nodes are happy with the bandwidth of 10GbE, but if the nodes are all-flash or have many disks, then a faster speed may be required to avoid a network bottleneck. Network latency can also affect HCI or CI performance in some cases, especially with all-flash storage. Of course, a reliable network helps ensure reliable CI/HCI operations.
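For a rough feel of when 10GbE stops being enough for an all-flash node, here is a back-of-envelope sketch; the SSD count and per-SSD throughput are hypothetical, chosen only to show the arithmetic:

```python
# When does an all-flash HCI node outrun its network link?
# Hypothetical numbers, for illustration only.
GBE10_BYTES_PER_S = 10e9 / 8           # ~1.25 GB/s ceiling on one 10GbE link

ssds_per_node = 8
ssd_read_mb_s = 500                    # a modest SATA SSD sequential read

node_storage_bw = ssds_per_node * ssd_read_mb_s * 1e6   # bytes/s
print(node_storage_bw / GBE10_BYTES_PER_S)  # ~3.2x a single 10GbE link

# Even before protocol overhead and east/west replication traffic, this
# node could saturate three 10GbE links; 25/40/100GbE or bonded links
# would be needed to keep the network from becoming the bottleneck.
```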


Managing Your Computing Ecosystem

Kristen Hauser

Apr 12, 2017


By George Ericson, Distinguished Engineer, Dell EMC; Member, SNIA Scalable Storage Management Technical Working Group, @GEricson

Introduction

This blog is part one of a three-part series recently published on “The Data Cortex”, which represents the thoughts and opinions from members of the CTO Team of Dell EMC’s Data Protection Division. The author, George Ericson, has been actively participating on the SNIA Scalable Storage Management Technical Working Group which has been developing the SNIA Swordfish storage management specification.

SNIA Swordfish is an extension to the Distributed Management Task Force’s (DMTF’s) open industry Redfish® standard, and the combination offers a unified approach to managing storage
and servers in environments like hyperscale and cloud infrastructures. This also means a single portal can conveniently collect feedback on either specification. SNIA’s Storage Management Initiative (SMI) has set up swordfishforum.com as an easy link that goes to the Redfish Forum site. Please visit often and share your thoughts.

Overview

There is a very real opportunity to take a giant step towards universal and interoperable management interfaces that are defined in terms of what your clients want to achieve. In the process, the industry can evolve away from the current complex, proprietary and product specific interfaces.

You’ve heard this promise before, but it’s never come to pass. What’s different this time? Major players are converging storage and servers. Functionality is commoditizing. Customers are demanding it more than ever.

Three industry-led open standards efforts have come together to collectively provide an easy to use and comprehensive API for managing all of the elements in your computing ecosystem, ranging from simple laptops to geographically distributed data centers.

This API is specified by:

  • the Open Data Protocol (OData) from OASIS
  • the Redfish Scalable Platforms Management API from the DMTF
  • the Swordfish Scalable Storage Management API from the SNIA

One can build a management service that is conformant to the Redfish or Swordfish specifications that provides a comprehensive interface for the discovery of the managed physical infrastructure, as well as for the provisioning, monitoring, and management of the environmental, compute, networking, and storage resources provided by that infrastructure. That management service is an OData conformant data service.

These specifications are evolving and certainly are not complete in all aspects. Nevertheless, they are already sufficient to provide comprehensive management of most features of products in the computing ecosystem.

This post and the following two will provide a short overview of each.

OData

The first effort is the definition of the Open Data Protocol (OData). OData v4 specifications are OASIS standards that have also begun the international standardization process with ISO.

Simply asserting that a data service has a Restful API does nothing to assure that it is interoperable with any other data service. More importantly, Rest by itself makes no guarantee that a client of one Restful data service will be able to discover, or even know how to navigate, the Restful API presented by some other data service.

OData enables interoperable utilization of Restful data services. Such services allow resources, identified using Uniform Resource Locators (URLs) and defined in an Entity Data Model (EDM), to be published and edited by Web clients using simple HTTP messages. In addition to Redfish and Swordfish described below, a growing number of applications support OData data services, e.g. Microsoft Azure, SAP NetWeaver, IBM WebSphere, and Salesforce.

The OData Common Schema Definition Language (CSDL) specifies a standard metamodel used to define an Entity Data Model over which an OData service acts. The metamodel defined by CSDL is consistent with common elements of the UML v2.5 metamodel. This enables reliable translation to the programming language of your choice.

OData standardizes the construction of Restful APIs. OData provides standards for navigation between resources, for request and response payloads and for operation syntax. It specifies the discovery of the entity data model for the accessed data service. It also specifies how resources defined by the entity data model can be discovered. While it does not standardize the APIs themselves, OData does standardize how payloads are constructed and a set of query options and many other items that are often different across the many current Restful data services. OData specifications utilize standard HTTP, AtomPub, and JSON. Also, standard URIs are used to address and access resources.
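As a concrete taste of what that standardization buys a client, here is a minimal sketch that walks a Redfish (and therefore OData-conformant) service from its well-known root. The host name and credentials are hypothetical; /redfish/v1/ is the standard Redfish service root, and $select is a standard OData query option:

```python
# Minimal sketch: navigating a Redfish/OData service from its service root.
# Hypothetical host and credentials; requires the 'requests' package.
import requests

BASE = "https://bmc.example.com"      # hypothetical management endpoint
auth = ("admin", "password")          # hypothetical credentials

# Every Redfish service exposes the same well-known service root.
root = requests.get(f"{BASE}/redfish/v1/", auth=auth, verify=False).json()

# Navigation is data-driven: follow links published in the payload
# instead of hard-coding vendor-specific paths.
systems_url = root["Systems"]["@odata.id"]
systems = requests.get(f"{BASE}{systems_url}", auth=auth, verify=False).json()

for member in systems["Members"]:
    # OData query options like $select standardize trimming the payload.
    sys_url = member["@odata.id"]
    info = requests.get(f"{BASE}{sys_url}?$select=PowerState",
                        auth=auth, verify=False).json()
    print(sys_url, info.get("PowerState"))
```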

The use of the OData protocol enables a client to access information from a variety of sources including relational databases, servers, storage systems, file systems, content management systems, traditional Web sites, and more.

Ubiquitous use will break down information silos and will enable interoperability between producers and consumers. This will significantly increase the ability to provide new and richer functionality on top of the OData services.

Conclusion:

While Rest is a useful architectural style, it is not a “standard,” and the variance in how Restful APIs express similar functions means that there is no standard way to interact with different systems. OData is laying the groundwork for interoperable management by standardizing the construction of Restful APIs. Next up – Redfish.


Q&A on All Things iSCSI

Alex McDonald

Apr 7, 2017

In the recent SNIA Ethernet Storage Forum iSCSI pod webcast, from our "Everything You Wanted to Know About Storage But Were Too Proud To Ask" series, we discussed all things iSCSI. If you missed the live event, it's now available on-demand, and you can also download the webcast slides. As promised, we've compiled all the webcast questions with answers from our panel of experts. If you have additional questions, please feel free to ask them in the comment field of this blog. I also encourage you to check out the other on-demand webcasts in this "Too Proud To Ask" series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

Q. What does SPDK stand for?

A. SPDK stands for Storage Performance Development Kit. It comprises tools and libraries for developers to write high-performance, scalable storage applications in user mode. For details, see www.spdk.io.

Q. Can you elaborate on SPDK use? A quick search seems to indicate it is a "half-baked" solution, and available only on Linux systems.

A. SPDK isn't a solution, per se – it's a development kit, intended to provide common building blocks (NVMe drivers, NVMe over Fabrics targets and host/initiator, etc.) for solution developers who care about latency, license (BSD) and efficiency.

Q. Is iSCSI ever going to be able to work with object storage?

A. iSCSI is a block storage protocol, while object storage is normally accessed using a RESTful API such as Amazon's S3 API or the Swift API. For this reason, iSCSI is unlikely to be used for direct access to object storage. However, an object storage system controller could use iSCSI—or other block protocols—to access remote storage enclosures or for data replication. There could also be storage systems that support both iSCSI/block and object storage access simultaneously.

Q. Does a high-density virtualized workload represent something better served with a full or partial offload solution?

A. The type of workload that is better served with full or partial offload really depends on what that workload is doing. If you are processing a lot of very large data segments, LSO or LRO might be very helpful. If you have a lot of smaller data sets, you might benefit from checksum or chimney offload. Unfortunately, the best way to see is to test things out (but not on production, obviously).

Q. How does one determine if TOE NIC cards are worth the cost?

A. This is a really tough question to answer without context. The best way to look at it is to dig into what your CPU and memory utilization and IO patterns look like on your servers and try to map that to TCP connections. If you have a lot of iSCSI IO and a large number of TCP connections on a server, that might be a candidate for TOE. That's just the technical response; then comes the really tricky part – quantifying how many dollars it is worth... that's far more challenging. For example, if I have a regular 10G NIC that costs $200 and a TOE card that costs 3x that but only saves 5% CPU, it may not have enough value. On the other hand, if that 5% CPU can be used by your application to transact enough business to pay for the extra $400, then it's worth it. Sorry to say, I have seen no scientific way to enumerate that value outside of specific hands-on testing of the solution with and without TOE NICs.
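That $200-vs-$600 example is easy to turn into a small break-even model. A sketch using the hypothetical numbers from the answer above; plug in your own measurements:

```python
# Break-even sketch for a TCP offload (TOE) NIC, using the hypothetical
# numbers from the answer above. Replace with your own measurements.
def toe_net_value(nic_cost, toe_cost, cpu_saved_frac, value_per_cpu):
    """Net value of the TOE card over the plain NIC.

    value_per_cpu: dollars of business value generated per 1.0 (100%)
    of a server's CPU over the card's service life.
    """
    extra_cost = toe_cost - nic_cost
    reclaimed_value = cpu_saved_frac * value_per_cpu
    return reclaimed_value - extra_cost

# $200 NIC vs. $600 TOE card saving 5% CPU:
print(toe_net_value(200, 600, 0.05, 5_000))   # 250 - 400 = -150: not worth it
print(toe_net_value(200, 600, 0.05, 20_000))  # 1000 - 400 = 600: worth it
```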
Q. What is the difference between a stateless and a stateful TCP offload? Are RSS and TSS (receive-side and transmit-side scaling) offloads a type of TCP offload, or do they operate at a lower level, like Layer 2?

A. Stateless offloading is basically any offload function that can be done without the NIC needing to maintain a connection state table; checksum offloads are an example. Stateful offloading is any offloading that requires the NIC to maintain a full connection state table. Receive Side Scaling has to do with distributing inbound connections, alternating connections coming into the server across the CPUs of a multi-CPU server. There are also some other performance enhancements such as RPS, RFS, XPS and others. These are more about how to get data from the network to the CPU; they are not really TCP-specific functions, as they have to do with uniform processing, not necessarily with the TCP stack.

Q. Is using the host CPU to run iSCSI really a downside?

A. There may be applications where this is a problem, but you're generally right; it's not too much of an issue today. But there are iSCSI-based storage solutions coming where a consistent hundreds of nanoseconds to low microseconds of latency from the device is possible – and that's very fast indeed. An iSCSI stack in these circumstances needs to ensure that its consumption of CPU doesn't increase the latency (even very efficient stacks can add hundreds of microseconds to milliseconds of latency) or cause contention for the CPU (busy CPUs mean you may queue for compute resources).

Q. Is the term "onload" for iSCSI new? I've never heard it before.

A. It was intended as quick shorthand to stand in contrast to iSCSI offload. It will probably not catch on!

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.

Q&A on All Things iSCSI

AlexMcDonald

Apr 7, 2017

title of post
In the recent SNIA Ethernet Storage Forum iSCSI pod webcast, from our “Everything You Wanted To Know About Storage Part Were Too Proud to Ask” series, we discussed all things iSCSI. If you missed the live event, it’s now available on-demand. As promised, we’ve compiled all the webcast questions with answers from our panel of experts. If you have additional questions, please feel free to ask them in the comment field of this blog. I also encourage you to check out the other on-demand webcasts in this “Too Proud To Ask” series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF. Q. What does SPDK stand for? A. SPDK stands for Storage Performance Development Kit. It is comprised of tools and libraries for developers to write high performance and scalable storage applications in user-mode. For details, see www.spdk.io. Q. Can you elaborate on SPDK use? A quick search seems to indicate it is a “half-baked” solution, and available only on Linux systems. A. SPDK isn’t a solution, per se – it’s a development kit, intended to provide common building blocks (NVMe drivers, NVMe over Fabrics targets & host/initiator, etc.) for solutions developers who care about latency, license (BSD) and efficiency. Q. Is iSCSI ever going to be able to work with object storage? A. iSCSI is a block storage protocol while object storage is normally accessed using a RESTful API such as Amazon’s S3 API or the Swift API. For this reason, iSCSI is unlikely to be used for direct access to object storage. However, an object storage system controller could use iSCSI—or other block protocols–to access remote storage enclosures or for data replication. There also could be storage systems that support both iSCSI/block and object storage access simultaneously. Q. Does a high-density virtualized workload represent something better served with a full or partial offload solution? A. The type of workload that is better served with full or partial offload will really depend more on what that workload is doing. If you are processing a lot of very large data segments, LSO or LRO might be very helpful. If you have a lot of smaller data sets, you might be able to benefit from checksum or chimney offload. Unfortunately, the best way to see is to test things out (but not on production, obviously). Q. How does one determine if TOE NIC cards are worth the cost? A. This is a really tough question to answer without context. The best way to look at it is do some digging into what your CPU and memory utilization and IO patters look like on your servers and try to map that to TCP connections. If you have a lot of iSCSI IO and a large amount of TCP connections on a server, that might be a candidate for TOE. That’s just a technical response, but then comes the really tricky part – the QUANTITY measurement of how many dollars it is worth… that’s way more challenging. For example, if I have a regular 10G NIC that costs $200 and a TOE card that costs 3x that and only saves 5% CPU, then it may not have enough value. On the other hand, if that 5% CPU can be used by your application to transact enough business to pay for the extra $400, then it’s worth it. Sorry to say that I have seen no scientific way to enumerate that value outside of specific hands-on testing of the solution with and without TOE NICs. Q. What is the difference between a stateless and stateful TCP offload? 
Q. What is the difference between a stateless and a stateful TCP offload? Are RSS and TSS (receive-side and transmit-side scaling) offloads a type of TCP offload, or are they operating at a lower level, like Layer 2?

A. Stateless offloading is basically any offload function that can be done without the NIC needing to maintain a connection state table; checksum offloads are an example. Stateful offloading is any offloading that requires the NIC to maintain a full connection state table. Receive Side Scaling distributes inbound connections across the CPUs of a multi-CPU server, so that incoming traffic alternates among them. There are also other performance enhancements such as RPS, RFS, XPS and some others. These are more about how to get data from the network to the CPU and are not really TCP-specific functions; they have to do with uniform processing, not with the TCP stack itself. (A toy illustration of this flow-to-CPU mapping appears after this Q&A set.)

Q. Is using the host CPU to run iSCSI really a downside?

A. There may be applications where this is a problem, but you're generally right; it's not too much of an issue today. But there are iSCSI-based storage solutions coming up where a consistent latency of hundreds of nanoseconds to low microseconds from the device is possible, and that's very fast indeed. So an iSCSI stack in these circumstances needs to ensure that its consumption of CPU doesn't increase the latency (even very efficient stacks can add hundreds of microseconds to milliseconds of latency), or cause contention for the CPU (busy CPUs mean you may queue for compute resources).

Q. Is the term "onload" for iSCSI new? I've never heard this before.

A. It was intended as a quick shorthand word to stand in contrast to iSCSI offload. It will probably not catch on!
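As a toy illustration of the receive-side scaling idea described above: real NICs hash a packet's flow tuple with a Toeplitz hash and look up a driver-programmed indirection table. This Python sketch substitutes CRC32 purely to keep it short; the names and queue count are illustrative, not a real driver API. The point is that every packet of a given flow lands on the same receive queue (preserving per-flow ordering), while distinct flows spread across queues and therefore across CPUs.

```python
# Toy RSS illustration. Real NICs use a Toeplitz hash plus an
# indirection table; CRC32 stands in here only for brevity.
import zlib

NUM_RX_QUEUES = 4  # e.g., one RX queue per CPU core

def rx_queue_for_flow(src_ip: str, dst_ip: str,
                      src_port: int, dst_port: int) -> int:
    """Map a TCP/IP flow 4-tuple to a receive queue index."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_RX_QUEUES

# Six iSCSI connections (port 3260) from different ephemeral ports:
# same flow always hits the same queue; different flows spread out.
flows = [("10.0.0.1", "10.0.0.9", 49152 + i, 3260) for i in range(6)]
for f in flows:
    print(f, "-> RX queue", rx_queue_for_flow(*f))
```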


Your Questions Answered on Non-Volatile DIMMs

Marty Foltyn

Apr 3, 2017

by Arthur Sainio, SNIA NVDIMM SIG Co-Chair, SMART Modular

SNIA's Non-Volatile DIMM (NVDIMM) Special Interest Group (SIG) had a tremendous response to its most recent webcast: NVDIMM: Applications are Here! You can view the webcast on demand. Viewers had many questions during the webcast. In this blog, the NVDIMM SIG answers those questions and shares the SIG's knowledge of NVDIMM technology. Have a question? Send it to nvdimmsigchair@snia.org.

1. What about 3D XPoint? How will this technology impact the market?

3D XPoint DIMMs will likely have a significant impact on the market. They are fast enough to use as a slower tier of memory between NAND and DRAM. It is still too early to tell, though.

2. What are good benchmark tools for DAX, and what are the differences between NVML applications and DAX-aware applications?

For benchmark tools, please see the answer to question 11. NVML applications are written specifically for NVM (Non-Volatile Memory); they may use the open source NVML libraries (http://pmem.io/nvml). DAX is a file system feature that avoids the use of page cache buffers. DAX-aware applications know that their writes and reads go directly to the underlying NVM without being cached. (A minimal sketch of DAX-style direct access follows.)
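As a rough sketch of what "DAX-aware" means in practice, the following Python snippet memory-maps a file on a filesystem assumed to be mounted with the DAX option (for example, ext4 mounted with -o dax on persistent memory); the mount point /mnt/pmem0 and file name are hypothetical. On a DAX mount, stores into the mapping bypass the page cache, and the flush makes the update durable before the application relies on it.

```python
# Minimal sketch, assuming /mnt/pmem0 is a DAX-mounted filesystem on
# persistent memory; the path is a hypothetical example.
import mmap
import os

path = "/mnt/pmem0/example.dat"
size = 4096

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, size)  # make sure the file is large enough to map

with mmap.mmap(fd, size) as m:
    m[0:13] = b"hello, nvdimm"  # a store into the mapping: on a DAX
                                # mount this goes to the media without
                                # an intervening page cache copy
    m.flush()                   # msync: make the update durable
os.close(fd)
```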
3. On the slide about NUMA, there was a mention of accessing NVDIMMs from a CPU on a different memory bus. The part about larger access times was clear enough; however, I came away with the impression that there is a correctness issue with the handling of the ADR signal as well. Please clarify.

If this question is asking whether the NUMA remote CPU will successfully flush ADR-protected buffers to memory connected to the NUMA near CPU, then yes, there is the potential for a problem in this area. However, ADR is an Intel feature that is not specified in the JEDEC NVDIMM standard, so this is an Intel-specific implementation question. The question needs to be posed to Intel.

4. How common is an NVDIMM-compatible BIOS? How would one check?

They are becoming more common all the time. There are at least 8 server/storage systems from Intel and 22 from Supermicro that support NVDIMMs. Several other motherboard vendors have systems that support NVDIMMs. Most of the NVDIMM vendors post these lists on their websites.

5. How does a system go into save? What exactly does the BIOS have to do to get a system ready before SAVE is asserted?

The BIOS does the initial check to make sure the NVDIMM has a backup power supply for power loss before it ARMs the module. The BIOS also makes sure that any RESTORE of previously saved data is properly done. This involves a set of operations, setting the appropriate registers in the NVDIMM module, all of which happens during boot-up initialization. On A/C power loss, the PCH (Platform Control Hub) detects the condition and initiates what is called the ADR (Asynchronous DRAM Refresh) sequence, terminating in the assertion of the SAVE signal by the CPLD. Without the BIOS ARM-ing the NVDIMM module, the module will not respond to the SAVE signal in a power loss situation.

6. Could you paint the picture of hardware costs at this point? How soon will NVDIMM-enabled systems be within reach of "the rest of us"?

NVDIMMs use DRAM, NAND flash, and a controller, as well as many other parts beyond what is used on standard RDIMMs. On that basis, the cost of an NVDIMM-N is higher than that of a standard RDIMM. NVDIMM-enabled systems have been available for several years and are shipping now.

7. Does RHEL 7.3 easily support Linux Kernel 4.4?

RHEL 7.3 still uses the 3.10 version of the Linux kernel. For RHEL-related information, please check with Red Hat. You can also refer to: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/index.html. The distribution has drivers to support persistent memory, and it also packages the persistent memory libraries.

8. What are the usual sizes for NVDIMMs available today?

4GB, 8GB, 16GB, and 32GB.

9. Are there any case studies of the NVDIMM-N applications mentioned?

You can find some examples of case studies at these websites: https://channel9.msdn.com/events/build/2016/p466 and https://msdn.com/events/build/2016/p470

10. What is the difference between pmem lib/PMFS in Linux and a DAX-enabled file system (like ext4-DAX)?

A DAX-based file system avoids the kernel page cache layer for its write data, so all of its write operations go directly to the underlying storage unit. One important thing to understand is that a DAX file system can still use block drivers to access its underlying storage. PMFS is a file system optimized for persistent memory: it completely avoids both the page cache and the block drivers, and it is designed to provide efficient access to persistent memory that is directly accessible via CPU load/store instructions. Refer to https://github.com/linux-pmfs/pmfs for more details. PMFS, as of now, is only in the experimental stages.

11. What tool is used to measure the performance?

The performance measurement depends on what kind of application workload is to be characterized. This is a very complex topic, and no single benchmarking tool is good for all workload characteristics. For file system performance, SpecFS, Bonnie++, IOzone, FFSB, FileBench, etc. are good tools. SysBench is good for a variety of performance measurements. The Phoronix Test Suite (http://www.phoronix.com/scan.php?page=home) has a variety of tools for Linux-based performance measurements. (A toy latency-measurement sketch follows.)
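The tools named in question 11 are the right instruments for real characterization; the Python sketch below is only a toy showing the shape of such a measurement, timing synchronous 4KB writes and reporting average and tail latency. The file path is a hypothetical placeholder, and O_SYNC assumes a Linux-like platform.

```python
# Toy write-latency probe; use fio/IOzone/FileBench for real work.
import os
import statistics
import time

PATH = "/tmp/latency_probe.dat"  # hypothetical test file
BLOCK = b"\0" * 4096             # 4KB synchronous writes
SAMPLES = 1000

fd = os.open(PATH, os.O_CREAT | os.O_WRONLY | os.O_SYNC, 0o600)
latencies = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    os.write(fd, BLOCK)          # each write completes to stable media
    latencies.append(time.perf_counter() - t0)
os.close(fd)
os.unlink(PATH)

latencies.sort()
print(f"avg: {statistics.mean(latencies) * 1e6:.1f} us")
print(f"p99: {latencies[int(0.99 * SAMPLES)] * 1e6:.1f} us")
```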
12. How similar do you expect the OS support for -P to be to the support for -N? I don't see a lot of need for differences at this level (though there certainly will be differences in the BIOS).

As of now, the open source libraries (http://pmem.io) are designed to be agnostic about the underlying memory types. They are simply classified as persistent memory, meaning it could be "-N", "-P", or something else. The libraries are written for user space, and they assume that any underlying kernel support is transparent. The "-P" type has been conceived to support both DRAM and persistent access at the same time, which might need a separate set of drivers in the kernel.

13. Does the PM-based file system appear to be block addressable from the application?

A file system creates a layer of virtualization to support logical entities such as volumes, directories and files. Typically, an application running in user space has no knowledge of the underlying mechanisms a file system uses to access its storage units, such as persistent memory. The access a file system provides to an application is typically a POSIX file system interface: open, close, read, write, seek, etc.

14. Is ADR a pin?

ADR stands for Asynchronous DRAM Refresh. ADR is a feature supported on Intel chipsets that triggers a hardware interrupt to the memory controller, which flushes the write-protected data buffers and places the DRAM in self-refresh. This process is critical during a power loss event or system crash to ensure the data is in a "safe" state when the NVDIMM takes control of the DRAM to back it up to flash. Note that ADR does not flush the processor cache; to do so, an NMI routine would need to be executed prior to ADR.

