
SNIA’s Self-contained Information Retention Format (SIRF) v1.0 Published as an ISO Standard

Simona Rabinovici-Cohen, IBM Research - Haifa

The SNIA standard for a logical container format called the Self-contained Information Retention Format (SIRF) v1.0 has now been published as an ISO standard, thanks to the diligence and hard work of SNIA's Long Term Retention Technical Work Group (LTR TWG). This new ISO standard (ISO/IEC 23681:2019) gives long-term hard disk, cloud, and tape-based containers a way to effectively and efficiently preserve and secure digital information for many decades, even with the ever-changing technology landscape.

The demand for digital data preservation has increased in recent years. Maintaining a large amount of data for long periods of time (months, years, decades, or even forever) becomes even more important given government regulations such as HIPAA, Sarbanes-Oxley, OSHA, and many others that define specific preservation periods for critical records.

The SIRF standard addresses the technical challenges of long-term digital information retention and preservation, both physical and logical. It is a storage container of digital preservation objects that provides a catalog with metadata related to the entire contents of the container, the individual objects, and their relationships. This standardized metadata helps interpret the preservation objects in the future. Key value to the industry:
  • Serialization for the cloud is supported using OpenStack Swift object storage as an example, and SIRF serialization for tapes is supported using the LTFS ISO standard
  • Serialization for adapted industry technologies is provided in the specification
  • Plays a key role in the preservation and retention of critical data, and because it is interpretable by future data preservation systems, it has the benefit of greatly reducing the associated costs of digital preservation
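To make the catalog idea above concrete, here is a minimal sketch in Python of a container catalog that records metadata about the container, its individual objects, and their relationships. The field names and structure are purely illustrative assumptions and are not taken from the SIRF specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PreservationObject:
    """One object stored in the container, described by its own metadata."""
    object_id: str            # stable identifier within the container
    name: str                 # human-readable name
    checksum_sha256: str      # fixity information for future integrity checks
    related_to: List[str] = field(default_factory=list)  # IDs of related objects

@dataclass
class Catalog:
    """Container-level catalog: metadata about the container and every object in it."""
    container_id: str
    format_version: str
    objects: List[PreservationObject] = field(default_factory=list)

# Illustrative usage: a self-describing catalog a future reader could interpret on its own
catalog = Catalog(container_id="container-0001", format_version="1.0")
catalog.objects.append(PreservationObject(
    object_id="obj-42",
    name="records-2019.tar",
    checksum_sha256="<sha256-digest-placeholder>",  # fixity value computed at ingest time
    related_to=["obj-41"],                          # e.g., the previous yearly snapshot
))
```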
For more information, please visit https://www.snia.org/ltr. With the publication of the ISO standard, the work of the LTR TWG has been completed.  We would like to thank the TWG contributors and SNIA management for making this accomplishment happen.


SNIA at Flash Memory Summit 2019 - Your Guide Here!

Marty Foltyn

Jul 8, 2019


SNIA technical work and education advances will play a prominent role in the program at the 2019 Flash Memory Summit, August 5-8, 2019, in Santa Clara, CA.  Over 40 speakers will present on key standards activities and education initiatives, including the first ever FMS Persistent Memory Hackathon hosted by SNIA.  Check out your favorite technology (or all), and learn what SNIA is doing in these sessions:

SNIA-At-A-Glance

  • SNIA Solid State Storage Reception
    Monday, August 5, 5:30 pm, Room 209/210
  • SNIA Standards mainstage presentation by Michael Oros, SNIA Executive Director
    Tuesday, August 6, 2:50 pm, Mission City Ballroom
  • Beer and Pizza with SNIA Experts on Persistent Memory/NVDIMM, Remote Persistent Memory/Open Fabrics, SNIA Swordfish, and more
    Tuesday, August 6, 7:15 pm – 9:00 pm, Ballrooms A-C
  • SNIA Solid State Storage Initiative booth #820 featuring Persistent Memory demos and Performance, Computational Storage, and SNIA Swordfish discussions
    Tuesday, August 6, 4:00 pm – 7:00 pm; Wednesday, August 7, Noon to 7 pm; and Thursday, August 8, 10:00 am – 2:30 pm, Exhibit Hall

Persistent Memory

  • SNIA Persistent Memory Programming Tutorial and Introduction to the FMS Persistent Memory Hackathon hosted by SNIA
    Learn how programming persistent memory works and get started on your own “hacks”
    Monday, August 5, 1:00 pm – 5:00 pm, Room 209/210
  • Persistent Memory Hackathon hosted by SNIA
    Bring your laptop and drop by anytime over the two days. SNIA persistent memory experts will support software developers in a live coding exercise to better understand the various tiers and modes of persistent memory and explore existing best practices.
    Tuesday, August 6 and Wednesday, August 7, 8:30 am – 7:00 pm, Great America Ballroom Foyer
  • Persistent Memory Track sessions sponsored by SNIA, JEDEC, and the OpenFabrics Alliance
    See experts speak on Advances in Persistent Memory and PM Software and Applications in sessions PMEM-101-1 and PMEM-102-1
    Tuesday, August 6, 8:30 am – 10:50 am in Ballroom E and 3:40 pm – 6:00 pm in Great America Ballroom J
  • Persistent Memory Track sessions sponsored by SNIA, JEDEC, and the OpenFabrics Alliance
    The track continues with sessions on Remote Persistent Memory and the latest research in the field in sessions PMEM-201-1 and PMEM-202-1
    Wednesday, August 7, 8:30 am – 10:50 am and 3:20 pm – 5:45 pm, Great America Meeting Room 3

Computational Storage

  • Don’t miss the first ever Computational Storage track at FMS. This SNIA sponsored day features expert presentations and panels on Controllers and Technology, Deploying Solutions, Implementation Methods and Applications (COMP-301A-1; COMP-301B-1; COMP-302A-1; COMP-302B-1)
    Thursday, August 8, 8:30 am – 10:50 am and 3:20 pm – 5:45 pm, in Ballroom A

Form Factors

  • Learn what the SFF TA Technical Work Group has been doing in the session New Enterprise and Data Center SSD Form Factors (SSDS-201B-1)
    Wednesday, August 7, 9:45 am – 10:50 am, in Great America Ballroom K

SNIA Swordfish

  • Hear an update on Storage Management with Swordfish APIs for Open-Channel SSDs in session SOFT-201-1
    Wednesday, August 7, 9:45 am – 10:50 am, in Ballroom F

Object Drives

  • Learn about Standardization for a Key Value Interface Underway at NVM Express and SNIA in session NVME-201-1
    Wednesday, August 7, 8:30 am – 9:35 am, in Great America Meeting Room 2



Storage Congestion on the Network Q&A

Tim Lustig

Jul 1, 2019

As more storage traffic traverses the network, the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput has become common. That's why the SNIA Networking Storage Forum (NSF) hosted a live webcast earlier this month, Introduction to Incast, Head of Line Blocking, and Congestion Management. In this webcast (which is now available on-demand), our SNIA experts discussed how Ethernet, Fibre Channel and InfiniBand each handles increased traffic. The audience at the live event asked some great questions and, as promised, here are answers to them all.

Q. How many IP switch vendors today support Data Center TCP (DCTCP)?

A. In order to maintain vendor neutrality, we won't get into the details, but several IP switch vendors do support DCTCP. Note that many Ethernet switches support basic explicit congestion notification (ECN), but DCTCP requires a more detailed version of ECN marking on the switch and also requires that at least some of the endpoints (servers and storage) support DCTCP.

Q. One point I missed around ECN/DCTCP was that the configuration for DCTCP on the switches is virtually identical to what you need to set for DCQCN (RoCE) - but you'd still want two separate queues between DCTCP and RoCE since they don't really play along well.

A. Yes, RoCE congestion control also takes advantage of ECN and has some similarities to DCTCP. If using Priority Flow Control (PFC), where RoCE is being kept in a no-drop traffic class, you will want to ensure that RoCE storage traffic and TCP-based storage traffic are in separate priorities. If you are not using lossless transport for RoCE, however, using different priorities for DCTCP and RoCE traffic is recommended, but not required.

Q. Is over-subscription a case of the server and/or switch/endpoint being faster than the link?

A. Over-subscription is not usually caused when one server is faster than one link; in that case the fast server's throughput is simply limited to the link speed (like a 32G FC HBA plugged into a 16G FC switch port). But over-subscription can be caused when multiple nodes write or read more data than one switch port, switch, or switch uplink can handle. For example, if six 8G Fibre Channel nodes are connected to one 16G FC switch port, that port is 3X oversubscribed. Or if sixteen 16G FC servers connect to a switch and all of them simultaneously try to send or receive traffic to the rest of the network through two 64G FC switch uplinks, then those switch uplinks are 2X oversubscribed (16x16G is two times the bandwidth of 2x64G). Similar oversubscription scenarios can be created with Ethernet and InfiniBand. Oversubscription is not always bad, especially if the "downstream" links are not all expected to be handling data at full line rate all of the time.

Q. Can't the switch regulate the incoming flow?

A. Yes, if you have flow control or a lossless network, then each switch can pause incoming traffic on any port if its buffers are getting too full. However, if the switch pauses incoming traffic for too long in a lossless network, this can cause congestion to spread to nearby switches. In a lossy network, the switch could also selectively drop packets to signal congestion to the senders. While the lossless mechanism allows the switch to regulate the incoming flow and deal with congestion, it does not avoid congestion. To avoid too much traffic being generated in the first place, the traffic sources (server or storage) need to throttle the transmission rate. The aggregate traffic generated across all sources to one destination needs to stay below the link speed of the destination port to prevent oversubscription. The FC standards committee is working on exactly such a proposal; see the answer to the next question.

Q. Is the FC protocol considering a back-off mechanism like DCTCP?

A. The Fibre Channel standards organization T11 recently began investigating methods for providing notifications from the Fabric to the end devices to address issues associated with link integrity, congestion, and discarded frames. This effort began in December 2018 and is expected to be complete in 2019.

Q. Do long distance FC networks need to have giant buffers to handle all the data required to keep the link full for the time that it takes to release credit? If not, how is the long-distance capability supported at line speed, given the time delay to return credit?

A. As the link comes up, the transmitter is initialized with credits equal to the number of buffers available in the receiver. This preloaded credit count has to be high enough to cover the time it takes for credits to come back from the receiver. A longer delay in credit return requires a larger number of buffers/credits to maintain maximum performance on the link. In general, the credit delay increases with link distance because of the increased propagation delay for the frame from transmitter to receiver and for the credit from receiver to transmitter. So, yes, you do need more buffers for longer distances. This is true of any lossless network - Fibre Channel, InfiniBand and lossless Ethernet.
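To put rough numbers on that answer, here is a minimal back-of-the-envelope sketch in Python showing how the required credit count scales with distance. The propagation delay, frame size, and nominal line rate used here are illustrative assumptions, not figures from the webcast.

```python
def credits_needed(distance_km, line_rate_gbps, frame_bytes=2112,
                   fiber_delay_us_per_km=5.0):
    """Estimate the buffer credits needed to keep a lossless link busy.

    The transmitter must keep sending for a full round trip before the
    first credit returns, so the credit count has to cover the data in
    flight during that round-trip time.
    """
    round_trip_us = 2 * distance_km * fiber_delay_us_per_km
    bytes_per_us = line_rate_gbps * 1e9 / 8 / 1e6     # nominal line rate in bytes per microsecond
    frames_in_flight = round_trip_us * bytes_per_us / frame_bytes
    return int(frames_in_flight) + 1

# Illustrative only: a 10 km link at a nominal 32 Gb/s with full-size frames
# needs a couple of hundred credits, while a 50 m in-data-center link needs one.
print(credits_needed(distance_km=10, line_rate_gbps=32))    # -> 190
print(credits_needed(distance_km=0.05, line_rate_gbps=32))  # -> 1
```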
Q. Shouldn't storage systems have the same credit-based system to regulate the incoming flow to the switch from the storage systems?

A. Yes, in a credit-based lossless network (Fibre Channel or InfiniBand) every port, including the port on the storage system, is required to implement the credit-based system to maintain the lossless characteristics. This allows the switch to control how much traffic is sent by the storage system to the switch.

Q. Is the credit issuance from the switch or from the tx device?

A. The credit mechanism works both ways on a link, bidirectionally. So if a server is exchanging data with a switch, the switch uses credits to regulate traffic coming from the server and the server uses credits to regulate traffic coming from the switch. This mechanism is the same on every Fibre Channel link, be it server-to-switch, switch-to-switch or switch-to-server.

Q. Can you comment on DCTCP (datacenter TCP) and the current work at the IETF (L4S - low loss, low latency, scalable transport)?

A. There are several possible means by which congestion can be observed and quite a few ways of managing that congestion. ECN and DCTCP were selected for the simple reason that they are established technologies (even if not widely known) and have been completed. As the commenter notes, however, there are other means by which congestion is being handled. One of these is L4S, which is currently (as of this writing) a work in progress in the IETF. Learn more here.

Q. Virtual Lanes / Virtual Channels would be equivalent to Priority Flow Control - the trick is that in standard TCP/IP no one really uses different queues/PCP/QoS to differentiate between flows of the same application but different sessions, only between different applications (VoIP, data, storage, ...).

A. This is not quite correct. PFC has to do with the application of flow control upon a priority; it's not the same thing as a priority/virtual lane/virtual channel itself. The commenter is correct, however, that most people do not see a need for isolating storage applications on their own TCP priorities, but then they wonder why they're not getting stellar performance.

Q. Can every ECN-capable switch be configured to support DCTCP?

A. Switches are, by their nature, stateless. That means there is no need for a switch to be 'configured' for DCTCP, regardless of whether or not ECN is being used. So, in the strictest sense, any switch that is capable of ECN is already "configured" for DCTCP.

Q. Is it true that admission control (the FC buffer credit scheme) has the drawback of usually underutilizing the links, especially if your workload uses many small frames rather than full-sized frames?

A. This is correct in certain circumstances. Early in the presentation we discussed how it's important to plan for the application, not the protocol (see slide #9). As noted in the presentation, "the application is King." Part of the process of architecting a good FC design is to ensure that the proper oversubscription ratios are used (i.e., oversubscription involves the number of host devices that are allowed to connect to each storage device). These oversubscription ratios are identified by the applications that have specific requirements, such as databases, etc. If a deterministic network like Fibre Channel is not architected with this in mind, it will indeed seem like a drawback.



SNIA LTFS Format – New Version with Improved Capacity Efficiency

Diane Marsili

Jul 1, 2019

The SNIA Linear Tape File System (LTFS) Technical Work Group (TWG) is excited to announce that a new version of the LTFS Format Specification has just been approved. LTFS provides an industry standard format for recording data on modern magnetic tape. LTFS is a file system that allows those stored files to be accessed in a similar fashion to files on disk or removable flash drives.

The SNIA standard, also published as ISO standard ISO/IEC 20919:2016, defines the LTFS Format requirements for interchanged media that claims LTFS compliance. Those requirements are specified as the size and sequence of data blocks and file marks on the media, the content and form of special data constructs (the LTFS Label and LTFS Index), the content of the partition labels, and the use of MAM parameters. The data content (not the physical media) of the LTFS format shall be interchangeable among all data storage systems claiming conformance to this format. Physical media interchange is dependent on the compatibility of the physical media and the media access devices in use.

SNIA on Storage sat down with Takeshi Ishimoto, Co-Chair of the SNIA LTFS Technical Work Group, to learn what it all means.

Q. What is this standard all about?

A. Linear Tape File System (LTFS) utilizes modern tape storage hardware to store files and their metadata on the same tape and makes the data accessible through the file system interface. The SNIA LTFS Format Specification defines the data recording structure on tape and the metadata format in XML, so that an LTFS-based tape is interchangeable between LTFS implementations. Because LTFS is an open and self-describing format, the data on tape are transportable and accessible across different locations and on different OS platforms, and thus tape storage becomes suitable for long-term archival and bulk transfer at lower cost.

Q. What are the new revisions?

A. The LTFS Format Specification version 2.5 extends the standard to use a new incremental index method, in addition to the legacy full index method. This new incremental index method gives a journal-like capability where only changes to the index need to be recorded frequently. A complete index is required to be written at unmount time (but may also be written at any other time).

Q. What are the benefits of these revisions to the end users?

A. The incremental index will improve TCO by reducing the tape space occupied by the indexes, thereby allowing more user data to be stored on tape. It also improves the overall write performance of the file system by reducing the time required to write the index to tape. With the incremental index method, LTFS can be used for workloads with many small files, such as data from IoT sensors, without compromising space efficiency and performance. The incremental index method has been designed to be backwards compatible with previous generations for all normal usage.

Q. What problems will this new 2.5 revision solve?

A. With the evolution of tape-recording density, current tape hardware on the market can store more than 10TB in a single palm-size tape cartridge, and future recording technology is projected to go beyond 300TB in the same form factor. The new LTFS Format Specification version 2.5 addresses the challenges in storing a large number of files on a single large-capacity tape cartridge.
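To illustrate why a journal-like incremental index helps with many small files, here is a conceptual sketch in Python. It only models the general idea of recording deltas between full index writes; it is not the LTFS on-tape data structure, and the names used are illustrative.

```python
class IndexJournal:
    """Conceptual model: a full index snapshot plus journal-style increments."""

    def __init__(self):
        self.full_index = {}   # file name -> metadata, as of the last full index write
        self.increments = []   # changes recorded since the last full index write

    def record_change(self, name, metadata):
        # Cheap and frequent: only the delta is recorded, so each sync stays small.
        self.increments.append((name, metadata))

    def write_full_index(self):
        # Larger but self-contained: fold all deltas into a complete index,
        # analogous to writing a complete index at unmount time.
        for name, metadata in self.increments:
            self.full_index[name] = metadata
        self.increments.clear()
        return dict(self.full_index)

journal = IndexJournal()
journal.record_change("sensor-000001.dat", {"size": 512})
journal.record_change("sensor-000002.dat", {"size": 640})
complete = journal.write_full_index()   # a complete, standalone view of the contents
```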
LTFS is widely adopted by various tape hardware vendors as well as many software vendors. Visit their web pages for more information about LTFS success stories and implementations. The reference implementation of LTFS format version 2.4 is available as open source at the GitHub project here. Want more information on LTFS? This primer will quickly get you up to speed.


Register for the PIRL Conference Today

Marty Foltyn

Jun 25, 2019


Registration is now open for the upcoming Persistent Programming in Real Life (PIRL) Conference – July 22-23, 2019 on the campus of the University of California San Diego (UCSD).

The 2019 PIRL event features a collaboration between UCSD Computer Science and Engineering, the Non-Volatile Systems Laboratory, and the SNIA to bring industry leaders in programming and developing persistent memory applications together for a two-day discussion on their experiences.

PIRL is a small conference, with attendance limited to under 100 people, including speakers.  It will discuss what real developers have done, and want to do, with persistent memory. Most of the presentations will include demonstrations of live code showing new concepts.  The conference is designed to be a meet-up for developers seeking to gain and share knowledge in the growing area of Persistent Memory development.

PIRL features a program of 18 presentations and 5 keynotes from industry-leading developers who have built real systems using persistent memory.  They will share what they have done (and want to do) with persistent memory, what worked, what didn’t, what was hard, what was easy, what was surprising, and what they learned.

This year’s keynote presentations will be:

  • Pratap Subrahmanyam (VMware): Programming Persistent Memory In A Virtualized Environment Using Golang
  • Zuoyu Tao (Oracle): Exadata With Persistent Memory – An Epic Journey
  • Dan Williams (Intel Corporation): The 3rd Rail Of Linux Filesystems: A Survival Story
  • Stephen Bates (Eideticom): Successfully Deploying Persistent Memory and Acceleration Via Compute Express Link
  • Scott Miller (DreamWorks): Persistent Memory In Feature Animation Production

Other speakers include engineers from NetApp, Lawrence Livermore National Laboratory, Oracle, Sandia National Labs, Intel, SAP, Red Hat, and universities from around the world.  Full details are available at the PIRL website.

PIRL will be held on the University of California San Diego campus at Scripps Forum, a state-of-the-art conference facility just a few meters from the beach.  Discounted early registration ends July 10, so register today to ensure your seat.




Network Speeds Questions Answered

John Kim

Jun 25, 2019

Last month, the SNIA Networking Storage Forum (NSF) hosted a webcast on how increases in networking speeds are impacting storage. If you missed the live webcast, New Landscape of Network Speeds, it's now available on-demand. We received several interesting questions on this topic. Here are our experts' answers:

Q. What are the cable distances for 2.5 and 5G Ethernet?

A. 2.5GBASE-T and 5GBASE-T Ethernet are designed to run on existing UTP cabling, so they should reach 100 meters on both Cat5e and Cat6 cabling. The reach of 5GBASE-T on Cat5e may be less under some conditions, for example if many cables are bundled tightly together. Cabling guidelines and field test equipment are available to aid in the transition.

Q. Any comments on why U.2 drives are so rare/uncommon in desktop PC usage? M.2 is very common in laptops and some desktops, but U.2's larger capacity seems a better fit for desktops.

A. M.2 SSDs are more popular for laptops and tablets due to their small form factor and sufficient capacity. U.2 SSDs are used more often in servers, though some desktops and larger laptops also use a U.2 SSD for the larger capacity.

Q. What about using active copper cables to get a bit more reach over passive copper cables before switching to active optical cables?

A. Yes, active copper cables can provide longer reach than passive copper cables, but you have to look at the expense and power consumption. There may be many cases where using an active optical cable (AOC) will cost the same or less than an active copper cable.

Q. For 100Gb/s signaling (a future standard), is it expected to work over copper cable (passive or active) or only optical?

A. Yes, though the maximum distances will be shorter. With 25Gb/s signaling the maximum copper cable length is 5m. With 50Gb/s signaling the longest copper cables are 3m long. With 100Gb/s we expect the longest copper cables will be about 2m long.

Q. So what do you see as the most prevalent LAN speed today, and what do you see in the next year or two?

A. For Ethernet, we see desktops mostly on 1Gb with some moving to 2.5Gb, 5Gb or 10Gb. Older servers are largely 10Gb, but new servers are mostly using 25GbE or 50GbE, while the most demanding servers and fastest flash storage arrays have 100GbE connections. 200GbE will show up in a few servers starting in late 2019, but most 200GbE and 400GbE usage will be for switch-to-switch links during the next few years. In the world of Fibre Channel, most servers today are on 16G FC, with a few running 32G and a few of the most demanding servers or fastest flash storage arrays using 64G. 128G FC for now will likely be just for switch-to-switch links. Finally, for InfiniBand deployments, older servers are running FDR (56Gb/s) and newer servers are using EDR (100Gb/s). The very newest, fastest HPC and ML/AI servers are starting to use HDR (200Gb/s) InfiniBand.

If you're new to SNIA NSF, we encourage you to check out the SNIA NSF webcast library. There you'll find more than 60 educational, vendor-neutral on-demand webcasts produced by SNIA experts.
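As a quick reference for the copper-versus-optical answer above, here is a tiny helper in Python that encodes the passive-copper reach figures quoted in that answer. The 100 Gb/s value is the expected figure mentioned there, not a published specification, and the helper itself is just an illustrative sketch.

```python
# Maximum passive copper reach (meters) by per-lane signaling rate, as quoted above.
MAX_PASSIVE_COPPER_REACH_M = {
    25: 5,    # 25 Gb/s signaling: up to ~5 m of passive copper
    50: 3,    # 50 Gb/s signaling: up to ~3 m
    100: 2,   # 100 Gb/s signaling: ~2 m expected
}

def needs_optics(per_lane_gbps: int, distance_m: float) -> bool:
    """Return True if the run is too long for passive copper at this signaling rate."""
    return distance_m > MAX_PASSIVE_COPPER_REACH_M.get(per_lane_gbps, 0)

print(needs_optics(50, 2))   # False: a 2 m run fits within the ~3 m copper reach
print(needs_optics(25, 10))  # True: use active copper or an optical cable instead
```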

