Linear Tape File System Now an International Standard

khauser

Jun 9, 2016

By David Pease, Co-Chair, SNIA Linear Tape File System Technical Working Group

In 2011 the Linear Tape File System (LTFS) earned IBM an Engineering Emmy Award after being recognized by FOX Networks for “improving the ability of media companies to capture, manage and exploit content in digital form, fundamentally changing the way that audio and video content is managed and stored.” Now, the International Organization for Standardization (ISO) has named LTFS an International Standard (ISO/IEC 20919:2016).

LTFS’s road to standardization was a long one. It started with IBM and the LTO (Linear Tape Open) Consortium jointly publishing the LTFS Format Specification as an open format in April 2010, the day that LTFS was announced at the NAB (National Association of Broadcasters) show in Las Vegas. In 2012, at the invitation of the Storage Networking Industry Association (SNIA), we formed the SNIA LTFS Technical Work Group, with a specific goal of moving toward international standardization. The LTFS TWG and SNIA proceeded to publish several revisions of the LTFS Format Specification, inviting all interested parties to join the work group and contribute, or to comment on the specification before formal publication. In 2014 SNIA helped the LTFS TWG format the then-current version of the specification (V2.2) to ISO standards and worked with the ISO organization to publish the specification as a draft standard and solicit comments. After review and comments, the LTFS Format Specification was approved by ISO as an international standard in April of 2016, just six years after it was first announced.

We are thrilled by the recognition of LTFS as an ISO standard; it is one more step toward guaranteeing that the LTFS format is a truly open standard that will continue to be available and usable for the foreseeable future. In my opinion, two of the major inhibitors to the widespread use of tape technology for data storage have been the lack of a standard format for data storage and interchange on tape, and its perceived difficulty of use. LTFS addresses both of these problems by providing a general-purpose, open format that can easily be used like any other storage medium. As the world’s data continues to grow at an increasing pace, and the need for affordable, large-scale storage becomes more important, the standardization of LTFS will make the use of tape for long-term, affordable storage easier and more attractive.

Use Case: Making Digital Media Storage Open and Future-Proof

Just as in personal photography, the last couple of decades have seen a major shift from analog and film technologies to digital ones in the Media and Entertainment industry, where modern cameras record directly to digital media. This has led to the need for new technologies to replace traditional film as a long-term storage medium for television and movies. Film has some specific advantages for the Media & Entertainment industry that a new technology needs to replicate, including long shelf life; inexpensive, zero-power storage; and a format that is “future-proof.” Tape storage is a perfect match for several of these criteria, including long (30+ years) shelf life and zero-power, inexpensive storage. However, a stumbling block for the widespread acceptance of tape for digital storage in the media and entertainment business had been the lack of an open, easy-to-use, future-proof standard for the format of the data on tape.
You can imagine an entertainment company using proprietary storage software, for example, only to run into problems like the provider going out of business or increasing its software costs to an unacceptable level. We created LTFS to be an open and future-proof format from the beginning: open, because when we published the format, we made it publicly available at no charge, and future-proof because the format is self-documenting and can be easily accessed without the need for proprietary software. Being an international standard should make anyone who is considering the use of LTFS even more comfortable with the fact that it is an open standard that is not owned or controlled by any single company, and is a format that will continue to be supported in the future.  As such, becoming an international standard has the potential to increase the use, and therefore the value, of LTFS across industries. For more information about the work of the SNIA LTFS TWG, please visit www.snia.org/ltfs.


Q&A on Exactly How iSCSI has Evolved

David Fair

Jun 3, 2016


Our recent SNIA ESF Webcast, “The Evolution of iSCSI” drew a big and diverse group of attendees. From beginners looking for iSCSI basics, to experts with a lot of iSCSI deployment experience, there were plenty of good questions. Our presenters, Andy Banta and Fred Knight, did a great job answering as many as they could during the live event, but we didn’t have time to get to them all. So here are answers to them all. And by the way, if you missed the Webcast, it’s now available on-demand.

Q. What are the top 3 reasons to choose iSCSI over FC SAN?

A. There are three main reasons:
  1. Use of commodity equipment and protocols. It means that you don’t have to set up a completely separate network. It means you don’t have to buy separate HBAs.
  2. Inherent networking capability. Built on top of TCP/IP, it benefits from any networking technology to come along. These include routing, tunneling, authentication, encryption, etc.
  3. Ease of automation and configuration. In its simplest form, an iSCSI host only needs to know the IP address of the target system. In more complex systems, hosts and storage provide APIs to allow automation through scripting or management tools (a minimal example follows this list).
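To make that last point concrete, here is a hedged sketch that drives the open-iscsi command-line tool (iscsiadm) from Python to discover and log in to a target, starting from nothing but a portal address. The portal IP is purely illustrative, and wrapping the CLI in subprocess is just one approach; larger deployments would typically use a vendor or orchestration API instead.

```python
import subprocess

# Illustrative portal address; in the simplest case this is the only piece
# of information the initiator needs to get started.
PORTAL = "192.168.10.50:3260"

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# SendTargets discovery: ask the portal which targets it offers.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# Log in to the discovered target nodes.
run(["iscsiadm", "-m", "node", "--login"])
```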

Q. Please comment on why SCSI went from being a widely used protocol for all sorts of devices to being focused as only essentially a storage protocol?

A. SCSI was originally designed as both a protocol and a bus (original Parallel SCSI). Because there were no other busses, the SCSI bus did it all: disks, tapes, scanners, printers, optical drives (CDs), media changers, etc. As other busses came onto the market (think USB), many of those devices moved to the new bus (CDs, printers, scanners, etc.). Commodity devices used commodity busses (IDE, SATA, USB), and enterprise devices used enterprise busses (FC, SAS); and so disks, tapes, and media changers mostly stayed on SCSI.

The name SCSI can be confusing for some, as the term originally was used for both the SCSI protocol and the SCSI bus. The term for the SCSI protocol is all that remains today; the SCSI bus (the old SCSI parallel bus) is no longer in wide use. Today, the FC bus, or the SAS bus, or the SoP bus, or the SRP bus are used to carry the SCSI protocol. The SCSI Architecture Model (SAM) describes a very distinct separation between the device layer (the SCSI protocol) and the transport layer (the bus).

And the SCSI command set has become the basis for many subsequent command sets. The JEDEC group used the SCSI command set as a model (JEDEC devices are in your cell phone), the ATAPI devices used SCSI commands, and many SCSI commands and SATA commands have a common heritage. The Mt. Fuji group (a standards group in Japan) also uses SCSI as the basis for new DVD and Blu-ray devices. So, while not widely known, the SCSI command family has grown well beyond what is managed by the ANSI/INCITS T10 committee that originally defined SCSI, into a broad set of capabilities that are used across the industry by a broad group of organizations. But, that all said, scanners and printers are still on USB, and SCSI is almost all about storage in one form or another.

Q. How does iSCSI support software-defined storage?

A. Answered during the talk. SDS provides more automation and knobs on the storage capabilities. But SDS still needs a way to transport the storage and iSCSI works perfectly fine for that. They are complementary technologies, not competing.

Q. With 40Gb and faster coming soon to a server near you, what kind of impact will that have on CPU utilization? Will smaller servers be able to push that much traffic?

A. More throughput simply requires more CPU. With good multithreaded drivers available, this can mean simply adding cores to keep the pipe as full as possible. As we mentioned near the end, using iSCSI with RDMA lightens the load on the CPU even more, so you’ll probably be seeing more of that.

Q. Is IPSec commonly supported on iSCSI targets?  

A. Yes, IPsec is required to be implemented on an iSCSI target for the device to be compliant. However, it is not commonly enabled by customers. Given that compliant implementations MUST provide IPsec, there are a lot of non-compliant initiators and targets on the market.

Q. I’m told direct connect with iSCSI is discouraged, that there should be a switch in place to handle the buffering, latency, acknowledgement, etc. Is this true, or is it a best practice to make sure switches are part of the design?

A. If you have no need to connect to multiple targets or multiple initiators, there’s no harm in direct connections.

Q. Ethernet was not designed to support storage traffic. The TCP/IP protocol suite was not designed to support storage traffic. SCSI was not designed to be encapsulated. So TCP/IP FTW? I think not. The reason iSCSI exists is [perceived] cost savings. I get fed up with people constantly looking for ways to squeeze another penny out of something. To me it illustrates that they’re not very creative. Fibre Channel is a stupid name, but it is a purpose-built protocol that works as designed.

A. Ethernet is a general purpose network. It is capable of handling lots of different traffic (including storage). By putting iSCSI onto an existing Ethernet infrastructure, it can (as you point out) create a substantial cost savings over installing a FC network (although that infrastructure savings comes with other costs – such as the impact of a shared wire). However, installing a dedicated Ethernet network provides many of the advantages of a dedicated FC network, but at an added cost over that of a shared Ethernet infrastructure. While most consider FC a purpose-built storage network, it is worth pointing out that some also consider it a general purpose network (for example FC-Avionics is built into Fighter Jets, and it’s not for storage). And while not designed to be encapsulated, (it was designed for a parallel bus), SCSI today is encapsulated on every transport that carries it (yes, that includes FCP and SAS).
There are many kinds of storage at different price points: USB storage, SATA devices, rotating media (at different RPMs), SSD devices, SAS devices, FC devices, single spindles, arrays, cloud, drop boxes, etc., all with the corresponding transport wires. iSCSI is one of those wires. Each protocol and wire offers specific advantages and disadvantages. There can be a lot of confusion about which to use, but just as everyone does not drive the same type of car (a Ford Fusion, for example), everyone does not need the same type of storage (FC devices/arrays). Yes, I drive a Ford Fusion, and I like FC storage, but I use a USB stick on my laptop, and I pray my bank never puts my financial records out in the cloud. Selecting the right storage (and wire) for the job at hand can be one of a system administrator’s most interesting problems to solve. As for the name – that is often what happens in committees…

Q. As a best practice for Windows servers, disable hardware acceleration features in NICs (TOE etc.)? Are any NIC features valuable given modern multicore CPUs?

A. Yes. Typically the only reason to disable TOE is that multiple or virtual TCP/IP stacks are going to be using the same NIC. TSO, LRO and jumbo frames will benefit any OS that can take advantage of them.
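As a rough illustration of checking and enabling those offloads on a Linux host, the sketch below shells out to ethtool and iproute2 from Python. The interface name is an assumption, jumbo frames only help if every hop supports the larger MTU, and TOE itself is a NIC/vendor feature that is not toggled this way.

```python
import subprocess

IFACE = "eth0"  # illustrative interface name

def sh(cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Show the current offload settings (TSO, LRO, checksum offloads, ...).
print(sh(["ethtool", "-k", IFACE]))

# Enable TCP segmentation offload and large receive offload.
sh(["ethtool", "-K", IFACE, "tso", "on", "lro", "on"])

# Use jumbo frames, assuming the whole path supports a 9000-byte MTU.
sh(["ip", "link", "set", "dev", IFACE, "mtu", "9000"])
```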

Q. What is the advantage of iSCSI when compared with NVMe?

A. NVMe and iSCSI are very different protocols. NVMe started life as a direct-attach protocol to communicate with native PCIe devices (not even outside the box). iSCSI was a network protocol from day one. iSCSI has to deal with the potential for long, network-induced delays and complex out-of-order error recovery issues. NVMe operates over an interlocked bus, and as such does not have those issues.

But NVMe is now being extended over fabrics. NVMe over a RoCE V1 transport will be a data center network (since there is no IP routing). NVMe over a RoCE V2 transport or an iWARP transport will have the same routing capabilities that iSCSI has. When it comes to the raw command set, they are very similar (but there are some differences). SCSI is a more full-featured command set than NVMe – it has been developed over a span of more than 25 years, and has developed solutions for all the problems that have been discovered during that time span. NVMe has a more limited (or more focused) command set (for example, there are no tape commands in the NVMe command set). iSCSI is available today, as is direct-attach NVMe, but NVMe over Fabrics is still in the development phase (the specification is expected to be available the first week of June, 2016). NVMe products will take some time to mature and to develop solutions for the problems they have not discovered yet. Another example of this is the ability to support shared storage – it existed on day one in iSCSI, but did not exist in the first NVMe specification. To support shared storage in NVMe over Fabrics, that capability has since been added, and it was done using a SCSI-compatible method (to make it easier for host S/W that already performs this function).

There is a large community working to develop NVMe over Fabrics. As memory-based storage devices get cheaper, and the solution space matures, NVMe will become more attractive.

Q. How often do iSCSI installations provide encryption of data in flight? How: IPsec, IKEv2-SCSI + ESP-SCSI, etc.?

A. Rarely. More often than not, if in-flight data security is needed, it will be run on an isolated network. Well under 100% of installations are 100% compliant. VMware never qualified IPsec with iSCSI and didn’t have any obvious switch to turn it on. Side note: we standards guys can be overly picky about words. Since the question is “provide,” the answer is that 100% of compliant installations PROVIDE encryption (IPsec v2 – see above); however, in practice, installations that require that type of security typically run on isolated networks rather than turn on encryption.

Q. How do multiple independent applications inside the same initiator map to iSCSI sessions to the same target? E.g., iSCSI session one-to-one with application?

A. There is no relationship between applications and sessions. When an iSCSI initiator discovers a target, the initiator logs in and establishes a session. If iSCSI MCS (multi connection session) is being used, multiple TCP connections may be established and used in parallel to process operations for that session.

Applications send reads and writes to the operating system. Those IO requests make their way through the file system and caching layers into the device driver. The device driver issues the IO request to the device (over the iSCSI session) and retains information about that IO. When a completion is received from a device (the WRITE command or READ command completed), it is matched up with the request. That completion status (success or error) is passed back through the operating system (file system, etc.) to the application. So it is the responsibility of the device driver to mux/demux the requests from all the applications out over the iSCSI session and track the responses as the operations are completed.
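The tag-based bookkeeping described above can be sketched as a toy model in Python. It is not a real driver; it only shows how IOs from several applications share one session and how each completion is matched back to its originating request by tag.

```python
import itertools

class ToySession:
    """Toy model of a driver multiplexing IOs from many apps over one iSCSI session."""

    def __init__(self):
        self._tags = itertools.count(1)   # initiator task tags
        self._outstanding = {}            # tag -> (app, request)

    def submit(self, app, request):
        tag = next(self._tags)
        self._outstanding[tag] = (app, request)
        # A real driver would now build a SCSI CDB, wrap it in an iSCSI PDU,
        # and send it on one of the session's TCP connections.
        return tag

    def complete(self, tag, status):
        # The target's response carries the same tag, so the completion is
        # demultiplexed back to the application that issued the request.
        app, request = self._outstanding.pop(tag)
        print(f"{app}: {request} finished with status {status}")

session = ToySession()
t1 = session.submit("database", "WRITE lba=100 len=8")
t2 = session.submit("backup",   "READ  lba=900 len=64")
session.complete(t2, "GOOD")   # completions may arrive in any order
session.complete(t1, "GOOD")
```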

When an operating system is using MPIO (multi-pathing), the device driver may create multiple sessions between the initiator and the target. This is where operating system MPIO policies such as round-robin, shortest queue, LRU, etc. come into play. In this case, the MPIO driver will send an IO operation to the device using what it considers to be the most appropriate path (based on the selected policy). But again, there is no relationship between the application and the path used for IO (any application can have its IO sent via any path).

Today, MPIO is used more commonly than MCS.
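A toy sketch of two common MPIO policies, round-robin and shortest queue, may help illustrate the path-selection step. The class and path names are invented for the example; real MPIO frameworks live inside the OS storage stack.

```python
import itertools

class RoundRobin:
    """Pick paths in strict rotation."""
    def __init__(self, paths):
        self._cycle = itertools.cycle(paths)
    def pick(self, paths):
        return next(self._cycle)

class ShortestQueue:
    """Pick the path with the fewest outstanding IOs."""
    def pick(self, paths):
        return min(paths, key=lambda p: p.outstanding)

class Path:
    def __init__(self, name):
        self.name = name
        self.outstanding = 0

paths = [Path("sessionA"), Path("sessionB")]
policy = ShortestQueue()            # or RoundRobin(paths)

def issue_io(io):
    path = policy.pick(paths)
    path.outstanding += 1           # decremented again when the IO completes
    print(f"{io} -> {path.name}")

issue_io("READ lba=0")
issue_io("WRITE lba=64")
```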

Q. Will Microsoft iSCSI implement iSER?

A. This is a question for Microsoft, or for an iSER-capable NIC vendor that provides Microsoft drivers.

Q. Zadara has some iSER deployments using Linux and VMware clients going to the Zadara cloud storage.

A. There’s an answer, all by itself.

Q. In the case of iWARP, the TCP layer takes care of out-of-order IP packet receptions. What layer does the out-of-order management of packets in RoCE?

A. RoCE headers contain a 24-bit “Packet Sequence Number” that is used to validate the required ordering and detect lost packets. As such, ordering still occurs, just in a different way.
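A simplified sketch of how a receiver might use the 24-bit PSN to spot gaps, including wraparound at 2^24, is shown below. The modular comparison is a common technique for wrapping sequence numbers and is only an approximation of the RoCE specification's exact rules.

```python
PSN_BITS = 24
PSN_MOD = 1 << PSN_BITS           # PSNs wrap at 2^24

def check_psn(expected, received):
    """Classify an incoming packet by its 24-bit PSN (simplified)."""
    if received == expected:
        return "in-order", (expected + 1) % PSN_MOD
    # How far ahead the received PSN is, modulo 2^24.
    gap = (received - expected) % PSN_MOD
    if gap < PSN_MOD // 2:
        return f"lost {gap} packet(s)", (received + 1) % PSN_MOD
    return "duplicate/old", expected   # behind us: already seen

expected = 0xFFFFFE
for psn in (0xFFFFFE, 0xFFFFFF, 0x000001):   # note the wrap past 2^24 - 1
    verdict, expected = check_psn(expected, psn)
    print(hex(psn), verdict)
```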

Q. Correction: RoCE is over Ethernet packets and is not routable. RoCEv2 is the one over UDP/IP and *is* routable.

A. You are correct. RoCE is not routable by IP. RoCE transmits raw Ethernet frames with just Ethernet MAC headers and no IP headers, and as such, it is not routable by IP. RoCE V2 puts the information into UDP packets (with appropriate IP headers), and therefore it is routable by IP.

Q. How prevalent is iSER today in deployment? And what are some of the typical applications that leverage iSER?

A. Not terribly prevalent today, but higher speed Ethernet might drive more adoption, due to the CPU savings demonstrated.



Podcasts Bring the Sounds of SNIA's Storage Developer Conference to Your Car, Boat, Train, or Plane!

khauser

May 26, 2016

SNIA's Storage Developer Conference (SDC) offers exactly what a developer of cloud, solid state, security, analytics, or big data applications is looking for - rich technical content delivered in a no-vendor-bias manner by today's leading technologists. The 2016 SDC agenda is being compiled, but now you can get a "sound bite" of what to expect by downloading SDC podcasts via iTunes, or visiting the SDC Podcast site at http://www.snia.org/podcasts to download the accompanying slides and/or listen to the MP3 version. Each podcast has been selected by the SNIA Technical Council from the 2015 SDC event, and topics include:
  • Preparing Applications for Persistent Memory from Hewlett Packard Enterprise
  • Managing the Next Generation Memory Subsystem from Intel Corporation
  • NVDIMM Cookbook - a Soup to Nuts Primer on Using NVDIMMs to Improve Your Storage Performance from AgigA Tech and Smart Modular Systems
  • Standardizing Storage Intelligence and the Performance and Endurance Enhancements It Provides from Samsung Corporation
  • Object Drives, a New Architectural Partitioning from Toshiba Corporation
  • Shingled Magnetic Recording- the Next Generation of Storage Technology from HGST, a Western Digital Company
  • SMB 3.1.1 Update from Microsoft
Eight podcasts are now available, with new ones added each week all the way up to SDC 2016 which begins September 19 at the Hyatt Regency Santa Clara.  Keep checking the SDC Podcast website, and remember that registration is now open for the 2016 event at http://www.snia.org/events/storage-developer/registration.  The SDC conference agenda will be up soon at the home page of http://www.storagedeveloper.org. Enjoy these great technical sessions, no matter where you may be!


Open Source Software-Only Storage – Really.

Mario Blandini

May 24, 2016


Virtually any storage solution is more parts software than hardware. Having said this, users don’t care as much about the percentage of hardware vs. software. They want their consumption experience to be easy and fast to start up, with a pay-as-you-grow model and with the ability to scale without limits. So, it should not be a shock that real IT organizations are using software-only on standard servers to deliver storage to their customers. What’s more, this type of storage can be powered by open source.

At the upcoming SNIA Data Storage Innovation Conference, we are looking forward to discussing software-defined storage (SDS) from a user experience perspective with examples of OpenStack Swift providing an engine for building SDS clusters with any mixed combination of standard server and HDD hardware in a way that is simple enough for any enterprise to dynamically scale.

Swift is a highly available, distributed, scalable object store available as open source.  It is designed to handle non-relational (that is, not just simple row-column data) or unstructured data at large scale with high availability and durability.  For example, it can be used to store files, videos, documents, analytics results, Web content, drawings, voice recordings, images, maps, musical scores, pictures, or multimedia. Organizations can use Swift to store large amounts of data efficiently, safely, and cheaply. It scales horizontally without any single point of failure.  It offers a single multi-tenant storage system for all applications, the ability to use low-cost industry-standard servers and drives, and a rich ecosystem of tools and libraries.  It can serve the needs of any service provider or enterprise working in a cloud environment, regardless of whether the installation is using other OpenStack components.
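As a small taste of how simple the consumption experience can be, here is a hedged sketch using the python-swiftclient library to store and fetch an object. The endpoint, credentials, container name, and TempAuth-style v1 authentication are all illustrative; many deployments authenticate through Keystone instead.

```python
from swiftclient.client import Connection

# Illustrative endpoint and credentials (TempAuth/v1 style); real clusters
# commonly use Keystone authentication instead.
conn = Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",
    user="demo:demo",
    key="secret",
)

conn.put_container("media")
conn.put_object("media", "notes/clip001.txt",
                contents=b"take 3, color graded",
                content_type="text/plain")

headers, body = conn.get_object("media", "notes/clip001.txt")
print(headers.get("etag"), body)
```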

I know what you are thinking: storage is too critical, so it will never work this way. But the same was said more than 25 years ago, when using RAID was seen as too risky because solutions would acknowledge writes while the data was still in cache, prior to being written to disk. The same was also said more than 15 years ago, when VMware was seen as not robust enough to run any manner of demanding or critical application. Replicas and erasure codes are analogous to RAID 1 and RAID 5 respectively, and the as-unique-as-possible distribution of data behind a single namespace abstracts standard hardware in much the same way server virtualization does.
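To make the RAID 5 analogy concrete, here is a toy single-parity example: lose any one fragment, and the survivors plus the XOR parity rebuild it. Swift's erasure-coding policies use real codes (for example Reed-Solomon) behind the same idea, so treat this purely as an illustration.

```python
def xor_parity(fragments):
    """Compute a single XOR parity fragment over equal-length fragments."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]        # three data fragments
parity = xor_parity(data)

# Lose one fragment, then rebuild it from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_parity(survivors)
assert rebuilt == data[1]
print("recovered:", rebuilt)
```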

Interested in hearing more? Come check out my DSI session, “Swift Use Cases with SwiftStack,” where we look forward to sharing how this new type of storage can work, and to suspend your disbelief that this storage can be enterprise-grade.


Your Questions Answered on NVDIMM

Marty Foltyn

May 23, 2016


The recent NVDIMM webcasts on the SNIA BrightTALK Channel sparked many questions from the almost 1,000 viewers who have watched them live or downloaded the on-demand casts. Now, NVDIMM SIG Chairs Arthur Sainio and Jeff Chang answer 35 of them in this blog. Did you miss the live broadcasts? No worries, you can view NVDIMM and other webcasts on the SNIA webcast channel https://www.brighttalk.com/channel/663/snia-webcasts.

FUTURES QUESTIONS

What timeframe do you see server hardware, OS, and applications readily adopting/supporting/recognizing NVDIMMs?

DDR4 server and storage platforms are ready now. There are many off-the-shelf server and/or storage motherboards that support NVDIMM-N.

Linux version 4.2 and beyond has native support for NVDIMMs. All the necessary drivers are supported in the OS.

NVDIMM adoption is in progress now.

Technical Preview 5 of Windows Server 2016 has NVDIMM-N support
 

How, if at all, does the positioning of NVDIMM-F change after the eventual introduction of new NVM technologies?

If 3DXP is successful it will likely have a big impact on NVDIMM-F. 3DXP could be seen as an advanced version of an NVDIMM-F product. It sits directly on the DDR4 bus and is byte addressable.

NVDIMM-F products have the challenge of making them BYTE ADDRESSABLE, depending on what kind of persistent media is used.

If NAND flash is used, it would take a lot of techniques and resources to make such a product BYTE ADDRESSABLE.

On the other hand, if the new NVM technologies bring out persistent media that are BYTE ADDRESSABLE then the NVDIMM-F could easily use them for their backend.
How does NVDIMM-N compare to Intel’s 3DXPoint technology?

At this point there is limited technical information available on 3DXP devices.

When the specifications become available the NVDIMM SIG can create a comparison table.

NVDIMM-N products are available now. 3DXP-based products are planned for 2017, 2018. Theoretically 3DXP devices could be used on NVDIMM-N type modules


PERFORMANCE AND ENDURANCE QUESTIONS

What are the NVDIMM performance and endurance requirements?

NVDIMM-N is no different from an RDIMM under normal operating conditions. The endurance of the Flash or NVM technology used on the NVDIMM-N is not a critical factor, since it is only used for backup.

NVDIMM-F would depend on various factors: (1) Is the backend going to be NAND Flash or some other entity? (2) What kind of access pattern is going to be generated by the application? The performance must be at least the same as that of NVDIMM-N.

Are there endurance requirements for NVDIMM-F? Won’t the flash wear out quickly when used as memory?

Yes, the aspect of Flash being used as a RANDOM access device with MEMORY access characteristics would definitely have an impact on the endurance.
NVDIMM-F – Don’t the performance limitations of NAND vs. DRAM affect the application?

In a device-to-device comparison, NAND Flash will never meet the performance of DRAM. But viewed as part of a wider solution, the traditional data path from DRAM data to persisted data incurs delays from the many software layers involved in making the data persistent, whereas on the NVDIMM-F the data is persistent almost instantly, at the cost of only a small additional latency.
Is there extra heat being generated? Does it need any other cooling (NVDIMM-F, NVDIMM-N)?

No
In general, our testing of NVDIMM-F vs PCIe based SSDs has not shown the expected value of NVDIMMs.  The PCIe based NVMe storage still outperforms the NVDIMMs.

TBD
What is the amount of overhead that NVDIMMs are adding on CPUs?

None at normal operation
What can you say about the time required typically to charge the supercaps?  Is the application aware of that status before charge is complete?

Approximately two minutes depending on the density of the NVDIMM and the vendor.

The NVDIMM will not report ready until charging is complete; the system BIOS waits on that status, and times out only if the NVDIMM is not functioning.

USE QUESTIONS

What will happen if a system crashes and then comes back before the NVDIMM finishes its backup? How does the OS know what to continue with, since the state in the registers and L1/L2/L3 caches is already lost?

When the system comes back up, it will check whether there is valid data backed up in the NVDIMM. If yes, the backed-up data will be restored first, before the BIOS sets up the system.

The OS can’t depend on the contents of the L1/L2/L3 cache. Applications must do I/O fencing, use commit points, etc. to guarantee data consistency.

The power supply should be able to hold power for at least 1 ms after the warning of AC power loss.
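The commit-point discipline mentioned above can be sketched as a simple write-ahead pattern: make the data durable first, then write and flush a small commit record, so recovery after a crash never trusts half-written data. The file paths and record format here are invented for the example; persistent-memory applications achieve the same effect with cache-flush and fence instructions rather than fsync.

```python
import os

DATA = "/tmp/example.data"      # illustrative paths
LOG  = "/tmp/example.commit"

def durable_append(path, payload):
    """Append bytes and force them to stable storage before returning."""
    with open(path, "ab") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())

def commit(record_id, payload):
    # Step 1: make the data itself durable.
    durable_append(DATA, payload)
    # Step 2: only then write the commit record. If we crash before this
    # point, recovery simply ignores the un-committed data.
    durable_append(LOG, f"COMMIT {record_id}\n".encode())

commit(1, b"account=42 balance=100\n")
```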

Is there garbage collection on NVDIMMs?

This depends on individual vendors. NVDIMM-N may have overprovisioning and wear-leveling management for the NAND Flash.

Garbage collection really only makes sense for NVDIMM-F.
How is byte addressing enabled for NAND storage?

By default, NAND storage can be addressed only through BLOCK mode addressing. If BYTE addressability is desired, then the DDR memory at the front must provide sophisticated CACHING TECHNIQUES to trick the host memory controller into thinking that it is actually accessing a larger-capacity DDR memory.
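Here is a deliberately tiny model of that caching idea: a small DRAM-like front absorbs byte reads and writes, and whole blocks are written back to the block-only backend on eviction. Block size, cache size, and the LRU policy are arbitrary choices for the sketch, and everything a real controller must handle (wear leveling, power-fail flush, coherency with the host memory controller) is ignored.

```python
from collections import OrderedDict

BLOCK = 64  # toy block size in bytes

class ByteAddressableFront:
    """DRAM-like cache that exposes byte access on top of block-only storage."""

    def __init__(self, backend_blocks, cache_blocks=4):
        self.backend = [bytearray(BLOCK) for _ in range(backend_blocks)]
        self.cache = OrderedDict()          # block number -> bytearray
        self.capacity = cache_blocks

    def _block(self, n):
        if n not in self.cache:
            if len(self.cache) >= self.capacity:
                victim, data = self.cache.popitem(last=False)
                self.backend[victim][:] = data           # write back a whole block
            self.cache[n] = bytearray(self.backend[n])   # fill from the NAND backend
        self.cache.move_to_end(n)
        return self.cache[n]

    def write_byte(self, addr, value):
        self._block(addr // BLOCK)[addr % BLOCK] = value

    def read_byte(self, addr):
        return self._block(addr // BLOCK)[addr % BLOCK]

mem = ByteAddressableFront(backend_blocks=16)
mem.write_byte(130, 0x7F)
print(hex(mem.read_byte(130)))
```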
Is the restore command issued over the I2C bus?  Is that also known as the SMBus?

Yes, and yes.
Could NVDIMM-F products be used as both storage and memory within the same server?

NVDIMM-F is by definition only block storage. NVDIMM-P is both (block) storage and memory.

 

COMPATIBILITY QUESTIONS

 

Is NVDIMM-N support built into the OS or do the NVDIMM vendors need to provide drivers? What OS’s (Windows version, Linux kernel version) have support?

In Linux, from kernel version 4.2 onward, generic NVDIMM-N support is available.

All the necessary drivers are provided in the OS itself.

Regarding the Linux distributions, only Fedora and Ubuntu have upgraded themselves to the 4.x kernel.

The crucial aspect is the BIOS/MRC support needed for the vendor-specific NVDIMM-N to be exposed to the host OS.

MS Windows has OS support; it needs to be downloaded.
What OS support is available for NVDIMM-F? I’m assuming some sort of drivers is required.

Diablo has said they worked with the BIOS vendors to enable their Memory1 product. We need to check with them.

For other NVDIMM-F vendors they would likely require drivers.

As of now no native OS support is available.
Will NVDIMMs work with typical Intel servers that are 2-3 years old?   What are the hardware requirements?

That depends on the CPU. For Haswell, Grantley, Broadwell, and Purley, NVDIMM-N is or will be supported.

The hardware requires that the CPLD, SAVE, and ADR signals are present

Is RDMA compatible with NVDIMM-F or NVDIMM-N?

RDMA (Remote Direct Memory Access) is not available by default for NVDIMM-N and NVDIMM-F.

A software layer/extension needs to be written to accommodate that. Work is in progress by the PMEM community (www.pmem.io) to make the RDMA feature available transparently to applications in the future.

SNIA Reference: http://www.snia.org/sites/default/files/SDC15_presentations/persistant_mem/ChetDouglas_RDMA_with_PM.pdf
What’s the highest capacity that an NVDIMM-N can support?

Currently 8GB and 16GB, but this depends on individual vendors’ roadmaps.

 

COST QUESTIONS

What is the NVDIMM cost going to look like compared to other flash type storage options?

This relates directly to what types and quantities of Flash, DRAM, controllers, and other components are used for each type.

 

MISCELLANEOUS QUESTIONS

How many vendors offer NVDIMM products?

AgigA Tech, Diablo, Hynix, Micron, Netlist, PNY, SMART, and Viking Technology are among the vendors offering NVDIMM products today.

 

Is encryption on the NVDIMM handled by the controller on the NVDIMM or the OS?

Encryption on the NVDIMM is under discussion at JEDEC. There has been no standard encryption method adopted yet.

If the OS encrypts data in memory, the contents of the NVDIMM backup would be encrypted, eliminating the need for the NVDIMM to perform encryption. However, because of the performance penalty of OS encryption, NVDIMM-level encryption is being considered by NVDIMM vendors.
Are memory operations what is known as DAX?

DAX means Direct Access and is an optimization used in modern file systems (particularly ext4) to eliminate the kernel cache for holding write data. With no intermediate cache buffers, the write operations go directly to the media. This makes the writes persistent as soon as they are committed.
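A hedged sketch of what DAX-style access looks like from user space: map a file that lives on a DAX-mounted filesystem and store into it directly. The mount point and file name are assumptions, and real persistent-memory code would typically use a library such as PMDK with cache-flush instructions rather than relying on mmap.flush().

```python
import mmap
import os

# Hypothetical path on an ext4 filesystem mounted with the "dax" option,
# backed by an NVDIMM exposed as /dev/pmem0 (names are illustrative).
PATH = "/mnt/pmem/example.dat"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# With DAX, the mapping goes straight to the persistent media; there is no
# page-cache copy in between.
buf = mmap.mmap(fd, SIZE)
buf[0:11] = b"hello pmem\n"   # an ordinary store into the mapping

# Flushing the mapping asks the kernel to make the stores durable.
buf.flush()

buf.close()
os.close(fd)
```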

Can you give some practical examples of where you would use NVDIMM-N, -F, and –P?

NVDIMM-N: load/store byte access for journaling, tiering, caching, write buffering and metadata storage

NVDIMM-F: block access for in-memory database (moving NAND to the memory channel eliminates traditional HDD/SSD SAS/PCIe link transfer, driver, and software overhead)

NVDIMM-P: can be used for either NVDIMM-N or -F applications
Are reads and writes all the same latency for NVDIMM-F?

The answer depends on what kind of persistent layer is used. If it is NAND flash, then random writes have higher latencies than reads. If a 3D XPoint kind of persistent layer is used, the difference might not be that big.

 

I have interest in the NVDIMMs being used as a replacement for SSD and concerns about clearing cache (including credentials) stored as data moves from NVM to PM on an end user device

The NVDIMM-N uses serialization and fencing with Intel instructions to guarantee data is in the NVDIMM before a power failure and ADR.

 

I am interested in how many banks of NVDIMMs can be added to create a very large SSD replacement in a server storage environment.

NVDIMMs are added to a system in memory module slots. The current maximum density is 16GB or 32GB. Server motherboards may have 16 or 24 slots. If 8 of those slots hold 16GB NVDIMMs, that is roughly equivalent to a 128GB SSD.
What are the environmental requirements for NVDIMMs (power, cooling, etc.)?

There are some components on NVDIMMs that have a lower operating temperature than RDIMMs like flash and FPGA devices. Refer to each vendor’s data sheet for more information. Backup Energy Sources based on ultracapacitors require health monitoring and a controlled thermal environment to ensure an extended product life.
How about data-at-rest protection management? Is the data in NVDIMM protected/encrypted? Complying with TCG and FIPS seems very challenging. What are the plans to align with these?

As of today, encryption has not been standardized by JEDEC. It is currently up to each NVDIMM vendor whether or not to provide encryption.

 

Could you explain the relationship between the NVDIMM and the IO stack?

In the PMEM mode, the Kernel presents the NVDIMM as a reserved memory, directly accessible by the Host Memory Controller.

In the Block Mode, the Kernel driver presents the NVDIMM as a block device to the IO Block Layer.
With NVDIMMs the data can be in memory or storage. How is the data fragmentation managed?

The NVDIMM-N is managed as regular memory. The same memory allocation fragmentation issues and handling apply. The NVDIMM-F behaves like an SSD. Fragmentation issues on an NVDIMM-F are handled like an SSD with garbage collection algorithms.

 

Is there a plan to support PI-type data protection for NVDIMM data? If not, E2E data protection cannot be attained.

As of today, encryption has not been standardized by JEDEC. It is currently up to each NVDIMM vendor whether or not to provide encryption.

 

Since NVDIMM is still slower than DRAM, do we still need DRAM in the system? We cannot get rid of DRAM yet?

With NVDIMM-N, DRAM is still being used. NVDIMM-N operates at the speed of a standard RDIMM.

With NVDIMM-F modules, DRAM memory modules are still needed in the system.

With NVDIMM-P modules, DRAM memory modules are still needed in the system.
Can you use NVMe over Ethernet?

NVMe over Fabrics is under discussion within SNIA http://www.snia.org/sites/default/files/SDC15_presentations/networking/WaelNoureddine_Implementing_%20NVMe_revision.pdf

 


Architectural Principles for Networked Solid State Storage Access

J Metz

May 20, 2016


There are many permutations of technologies, interconnects and application level approaches in play with solid state storage today.  In fact, it is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. That’s why the SNIA Ethernet Storage Forum, together with the SNIA Solid State Storage Initiative, is hosting a live Webcast, “Architectural Principles for Networked Solid State Storage Access,” on June 2nd at 10:00 a.m. PT.

As our presenter, we are fortunate to have Doug Voigt, chair of the SNIA NVM Programming Technical Working Group and a member of the SNIA Technical Council. Doug will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically, answering questions such as:

  • How do applications see IO and memory access differently?
  • What is the difference between a memory and an SSD technology?
  • How do application and technology views permute?
  • How do memory and network interconnects change the equation?
  • What are persistence domains and why are they important?

I hope you’ll register today and join us on June 2nd for an hour that is sure to be insightful.


Data Protection and OpenStack Mitaka

Thomas Rivera

May 20, 2016


Interested in data protection and storage-related features of OpenStack? Then please join us for a live SNIA Webcast “Data Protection and OpenStack Mitaka” on June 22nd. We’ve pulled together an expert team to discuss the data protection capabilities of the OpenStack Mitaka release, which includes multiple new resiliency features. Join Dr. Sam Fineberg, Distinguished Technologist (HPE), and Ben Swartzlander, Project Team Lead OpenStack Manila (NetApp), as they dive into:

  • Storage-related features of Mitaka
  • Data protection capabilities – Snapshots and Backup
  • Manila share replication
  • Live migration
  • Rolling upgrades
  • HA replication

Sam and Ben will be on-hand for a candid Q&A near the end of the Webcast, so please start thinking about your questions and register today. We hope to see you there!

This Webcast is co-sponsored by two groups within the Storage Networking Industry Association (SNIA): the Cloud Storage Initiative (CSI), and the Data Protection & Capacity Optimization Committee (DPCO).


Out and About with SNIA

khauser

May 6, 2016

SNIA welcomes friends and colleagues to join them at one or more upcoming conferences in May and June 2016.

The second annual In-Memory Computing Summit is May 23-24, 2016 at the Grand Hyatt San Francisco. This conference is the only industry-wide event of its kind, tailored to in-memory computing-related technologies and solutions. The IMC Summit is a multi-track conference that brings together in-memory computing visionaries, decision makers, experts and developers for the purpose of education, discussion and networking. The NVDIMM SIG of SNIA's Solid State Storage Initiative will feature an NVDIMM demonstration in their booth, and SIG Co-Chair Arthur Sainio will deliver a keynote. As a friend and colleague of SNIA, you are eligible for a 20% discount on the ticket price. Just register with the discount code SNIA20 to receive your discount.

Next is SNIA's 4th annual Data Storage Innovation (DSI) Conference, June 13-15, 2016 in San Mateo, CA. The 3-day conference agenda features over 80 sessions spanning 20 themed topics reflecting the leading-edge IT trends for next-generation data center networked storage and file system solutions, the continued shift of enterprise computing to hybrid cloud, increased business-driven big data analytics projects, and managing and optimizing these strategic IT investments. The full agenda is now posted and features SNIA tutorials and sessions from industry leaders and a busy Innovation Expo. Register here using the special code DSI16SocialMedia and receive $100 off the non-member registration fee. And here's a tip - extend your stay and prepare for one of the IT industry's most in-demand technical certifications, the SNIA Storage Professional Certification, with a Storage Foundations certification training class June 16-17 in San Mateo. Learn more at http://www.snia.org/education/courses/training_tc.

Finally, SNIA will wrap up June at the Creative Storage Conference in Culver City June 24, 2016, where digital storage aficionados will gather to explore the conference theme of "The Art of Storage". Attendees will find out the latest developments in digital storage for media and entertainment and network with industry professionals. Six sessions and four keynotes will cover topics like tomorrow's blockbusters, storage for artists, content on the move, archiving, accelerating workflows, and a conversation with independent storytellers. Join SNIA in the exhibit area to discuss long-term file storage, data protection and capacity optimization, and solid state storage data recovery and erase. Register at http://www.creativestorage.org.
