
RoCE vs. iWARP – The Next “Great Storage Debate”

John Kim

Jul 16, 2018

By now, we hope you’ve had a chance to watch one of the webcasts from the SNIA Ethernet Storage Forum’s “Great Storage Debate” webcast series. To date, our experts have had friendly, vendor-neutral debates on File vs. Block vs. Object Storage, Fibre Channel vs. iSCSI, and FCoE vs. iSCSI vs. iSER. The goal of this series is not to have a winner emerge, but rather to educate attendees on how the technologies work, the advantages of each, and common use cases.

Our next great storage debate will be on August 22, 2018, when our experts will debate RoCE vs. iWARP. They will discuss these two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that can increase networking performance, and both reduce the amount of CPU overhead in transferring data among servers and storage systems to support network-intensive applications, like networked storage or clustered computing.

Join us on August 22nd, as we’ll address questions like:
  • Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
  • What are the use cases for RoCE and iWARP, and what differentiates them?
  • UDP/IP and TCP/IP: which RDMA standard uses which protocol, and what are the advantages and disadvantages?
  • What are the software and hardware requirements for each?
  • What are the performance/latency differences of each?
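A hint at one of those answers: applications consume both protocols through the same RDMA verbs API, so the differences live below that line. As a minimal sketch (assuming a Linux host with rdma-core/libibverbs installed; not part of the webcast material), the following C program lists local RDMA devices and reports each port’s link layer. Both RoCE and iWARP adapters report Ethernet here, and everything above this API is common code.

```c
/* Sketch: enumerate RDMA devices and report the link layer.
 * Both RoCE and iWARP NICs appear with an Ethernet link layer;
 * the verbs API above this level is the same for either protocol.
 * Build (assumes rdma-core/libibverbs): gcc rdma_probe.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {  /* ports start at 1 */
            printf("%s: link layer %s\n",
                   ibv_get_device_name(list[i]),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET ?
                       "Ethernet (RoCE or iWARP)" : "InfiniBand");
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```

Running something like this on a host with either adapter type is a quick way to confirm that RDMA-capable Ethernet hardware is present; which protocol runs underneath, and over UDP/IP or TCP/IP, is exactly what our experts will debate.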
Get this on your calendar by registering now. Our experts will be on-hand to answer your questions on the spot. We hope to see you there! Visit snia.org to learn about the work SNIA is doing to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards, and educational services that facilitate the efficient management, movement, and security of information.  


Dive Into SDC - A Chat with SNIA Technical Council Co-Chair Mark Carlson on Persistent Memory

khauser

Jul 12, 2018

The SNIA Storage Developer Conference (SDC) is coming up September 24-27, 2018 at the Hyatt Regency Santa Clara, CA, and the agenda is now live! SNIA on Storage is ready to dive into the major themes of the 2018 conference, starting with Persistent Memory. The SNIA Technical Council takes a leadership role in developing the content for each SDC, so SNIA on Storage spoke with Mark Carlson, SNIA Technical Council Co-Chair and Principal Engineer, Industry Standards, Toshiba Memory America, to understand why SDC is bringing Persistent Memory to conference attendees.

SNIA on Storage (SOS): Why has the Technical Council chosen Persistent Memory as a major topic for 2018?

Mark Carlson (MC): For a number of years, SNIA has been a key contributor to industry activities driving system memory and storage into a single Persistent Memory entity. For developers just being introduced to this new technology, SNIA has a multitude of educational resources that can be used to come up to speed on Persistent Memory and make the most of their time at SDC 2018.

SOS: Where should attendees begin?

MC: The dominant form for delivering Persistent Memory products is Non-Volatile DIMMs (NVDIMMs). SNIA Board Member Rob Peglar, along with Stephen Bates, SNIA NVM Programming Technical Work Group member, and Arthur Sainio, SNIA Persistent Memory and NVDIMM Special Interest Group Co-Chair, deliver a great explanation of Persistent Memory in this video. SNIA also has an infographic and a cookbook to get you started.

SOS: Where should those interested in developing Persistent Memory applications look for knowledge?

MC: Persistent Memory is having a large impact on both infrastructure and applications. Andy Rudoff of the SNIA NVM Programming Technical Work Group has a great introduction to this. SNIA has created the NVM Programming Model standard so that applications can take advantage of the increased performance; Andy explores this topic as well in this video. SNIA’s annual Persistent Memory Summit also had some great videos and talks on this topic, so attendees should explore that content as well. And check out our videos and BrightTALK presentations on this topic.

SOS: OK, so now I’ve done my homework and am ready for SDC 2018. What sessions should I look for?

MC: At SDC, don’t miss:

The Long & Winding Road to Persistent Memories, presented by Dr. Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis (Wednesday, September 26, 3:05 pm in Cypress). Persistent Memory is getting a lot of attention. SNIA has released a programming standard; NVDIMM makers, with the help of JEDEC, have created standardized hardware to develop and test PM; and chip makers continue to promote upcoming devices, although few are currently available. In this talk, industry analysts Jim Handy and Tom Coughlin will provide the state of Persistent Memory and show a realistic roadmap of what the industry can expect to see and when they can expect to see it. The presentation, based on three critical reports covering New Memory Technologies, NVDIMMs, and Intel’s 3D XPoint Memory (also known as Optane), will illustrate the Persistent Memory market, the technologies that vie to play a role, and the critical economic obstacles that continue to impede these technologies’ progress. It will also explore how advanced logic process technologies are likely to make persistent memories a standard ingredient in embedded applications, such as IoT nodes, long before they make sense in servers.

Update on the SNIA Persistent Memory Programming Model in Theory and Practice, presented by Andy Rudoff of Intel (Thursday, September 27, 8:30 am in Cypress). As a charter member of the SNIA NVM Programming TWG, Andy has seen the Persistent Memory programming model through its multi-year life so far and has collaborated with various operating system vendors on their implementations. Andy will go over the current state of the programming model, including ongoing work and some interesting efforts in our future. While going over the theory published in the SNIA specifications, Andy will report on the actual features that were implemented and what’s available today in the ecosystem.

SNIA Nonvolatile Memory Programming TWG – Remote Persistent Memory, presented by Tom Talpey of Microsoft (Thursday, September 27, 9:30 am in Cypress). The SNIA NVMP TWG continues to make significant progress on defining the architecture for interfacing applications to PM. In this talk, Tom Talpey, an architect in remote storage and networking at Microsoft and an active member of the SNIA NVM Programming Technical Work Group, will focus on the important Remote Persistent Memory scenario and how the NVMP TWG’s programming model applies. Application use of these interfaces, along with fabric support such as RDMA and platform extensions, is part of this, and the talk will describe how the larger ecosystem fits together to support PM as low-latency remote storage.

SOS: You’ve convinced us to attend, so where do we go for more information?

MC: It will be great to see all those developing or looking to develop with Persistent Memory at SDC, where we will do our deep dive. Other talks will cover PM performance, available open source libraries, and integration with filesystems. Go here to register for the conference and learn more about the agenda and speakers.

SOS: Great to chat with you, and looking forward to our next dive – into orchestration.
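If you want to arrive at SDC having already touched the programming model Andy describes, here is a minimal sketch of its map-and-flush idiom using PMDK’s open-source libpmem library. The path and size below are illustrative assumptions; a real deployment would use a DAX-mounted persistent-memory filesystem.

```c
/* Minimal sketch of the NVM Programming Model's map/flush idiom,
 * using PMDK's libpmem. The path is illustrative; a real system
 * would use a DAX-mounted persistent-memory filesystem.
 * Build: gcc pm_hello.c -lpmem
 */
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

#define POOL_SIZE (4 * 1024 * 1024)

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file into our address space; on real PM this gives
     * load/store access to the media with no page cache in between. */
    char *addr = pmem_map_file("/mnt/pmem0/hello", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (!addr) { perror("pmem_map_file"); return 1; }

    const char *msg = "persistent hello";
    strcpy(addr, msg);

    /* Flush the stores to the persistence domain. On true pmem this
     * uses CPU cache-flush instructions; otherwise fall back to msync. */
    if (is_pmem)
        pmem_persist(addr, strlen(msg) + 1);
    else
        pmem_msync(addr, strlen(msg) + 1);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The point of the model, as the sessions above explore, is that persistence becomes a memory operation (store, then flush) rather than a storage I/O, which is why both the OS vendors and the fabric designers have work to do.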


A Q&A from the FCoE vs. iSCSI vs. iSER Debate

Tim Lustig

Jul 8, 2018

It’s become quite clear to those of us in the SNIA Ethernet Storage Forum (ESF) that everyone loves a great debate. We’ve proved that with our “Great Storage Debates” webcast series, which has had over 3,500 views in just a few months! Last month we had another friendly debate on FCoE vs. iSCSI vs. iSER. If you missed the live event, you can watch it now on-demand. Our live audience asked a lot of interesting questions. As promised, here are answers to them all.

Q. How often are iSCSI offload adapters used in customer environments as compared to software initiators? Can these adapters be used for all IP traffic or do they only run iSCSI?

A. iSCSI offload adapters are ideally suited for enabling high-performance storage access at up to 100Gbps data rates for business-critical applications, for example, latency-sensitive transactional applications and large-file business intelligence applications. iSCSI offload adapters typically also support offload of other storage protocols such as NVMe-oF, iSER, and FCoE, as well as regular Ethernet traffic using offload or non-offload means.

Q. What you’ve missed with iSCSI is Jumbo Frames. That payload size is one of the biggest advantages over Fibre Channel. The biggest problem with both FCoE and iSCSI is they build the networks too complex, with too many hops, without true redundant isolation. Best practice with block-based FC is to keep the host and storage as close to each other as possible, and to have separate, isolated, redundant networks/fabrics.

A. The Jumbo Frame (JF) argument is quite contentious among iSCSI storage and network administrators, even beyond anything to do with Fibre Channel. The performance advantages of JFs are minimal – only a 3%-5% boost over the default MTU size of 1500. In mixed-workload environments (which dominate data center application deployments), JFs simply do not provide the kind of benefits that people expect in real-world scenarios. The only time JFs can “push the needle,” so to speak, is when you have massively scaled systems with 100s or 1000s of devices, but this raises other issues. One of those issues is that every device in the system needs to have JFs enabled. This can be something of a problem when systems get as large as they need to be in order to take advantage of JFs. Ensuring that every device is configured properly – especially over time, and especially when considering how iSCSI devices are added to networked environments – is a job that requires the coordination of the server/virtualization teams, the networking teams, and the storage teams. By and large, many people find QoS to be a more productive means of performance improvement for iSCSI systems than JFs.

Fibre Channel, on the other hand, has a maximum frame size of 2112 bytes. FCoE, then, only requires “baby jumbo” frames (~2.5k), for which the configuration is pushed from the switch to the end devices. What FC has that iSCSI does not have is the concept of “sequences” and “exchanges,” which ensure that a long flow of frames (regardless of their size) is sent as an entity. So, regardless of what the frame size is (2.5k or 9k), the data flow is sent with consistency and low jitter because of the way that the sequences and exchanges are handled.

The concern about “too complex” and “too many hops” is an interesting one, as Fibre Channel (and, correspondingly, FCoE) is deliberately kept as simple and straightforward as possible. An FC network, for instance, rarely goes beyond 2 hops (“hops” in FC are measured as the links between switches, whereas in Ethernet “hops” are measured as the switches themselves). Logically, then, there is usually, at most, an edge-core-edge topology with a deterministic path to be followed, thanks to Fibre Channel’s FSPF routing algorithm. iSCSI topologies, on the other hand, can be complex (as Ethernet topologies sometimes can be). For larger iSCSI environments, it is often recommended to isolate the storage traffic into its own, simplified topology. iSCSI SANs that have grown organically, however, can sometimes struggle to be reined in over time.

Best practice for all storage, not just block, is to keep it as close to the host/source as is reasonably possible. In backup scenarios, for example, you want the storage far enough away to be safe from any catastrophe, but close enough to ensure recovery objectives. Keeping storage close to the host is a common best practice, and as mentioned in the webinar, it is important that architectural principles ensure high availability (HA) to compensate for the rigidity that block storage systems require to make up for weaker ULP recovery mechanisms.

Q. Most servers today have enough compute power to not need offload adapters.

A. This statement might be true in some situations, but definitely not most. With more and more virtual machines being deployed on physical systems, and new storage technologies such as SSDs and NVMe devices greatly lowering latencies, servers are often CPU-bound when moving or retrieving data from storage. Offloading storage-related activities to an adapter frees the CPU and increases overall server performance.

Q. In which industry is each protocol (i.e., FCoE or iSCSI and iSER) widely used, and where?

A. iSCSI is the most widely supported Ethernet SAN protocol, with native initiator support integrated into all the major operating systems and hypervisors, built-in RDMA for high-performance offloaded implementations supporting up to 100Gbps, and support across major storage platforms. It is thus ideally suited for deployment across cloud and enterprise data center environments.

Q. Do iSCSI offload adapters provide the IPSec encryption, or is this done in software-only solutions? Please answer from both initiator and target perspectives.

A. Yes, iSCSI protocol offload adapters can optionally provide offload of IPSec encryption for both iSCSI (as well as NVMe-oF) initiator and target operation at data rates of up to 100 Gigabits per second. This results in overall higher server and target efficiency, including power, cooling, memory, and CPU savings.

Q. Does iSER support direct connection, or is a switch between initiator and target required?

A. A switch is not required.

Q. J, you left out the centralized management that Fibre Channel provides for FCoE as a positive.

A. I got there eventually! But you are correct, the Fibre Channel tools for a centralized management plane with the name server – regardless of the number of switches in the fabric – are a tremendous positive for FCoE/FC solutions at scale.

Q. Is multipath possible on the initiator with iSER, and will it scale with high IOPs?

A. Yes. Multipath is possible on the initiator with iSER and scales with high IOPs.

Q. FCoE has been around for a while, but I noticed that some storage vendors are dropping support for it. Do you still see a big future for FCoE?

A. As a protocol, FCoE has always been able to be used wherever and whenever needed. Almost all converged infrastructure systems use FCoE, for instance. Given that the key advantage of FCoE has been traffic/protocol consolidation, there is an extremely strong use case for FCoE at “the first hop” – that is, from the server to the first network switch.

Q. What is the MTU for iSER?

A. iSER is a protocol that sits above the Layer 2 Data Link Layer, which is where the MTU is set. As a result, iSER will accept/accommodate any MTU setting that is configured at that layer. Please see the answer earlier about Jumbo Frames for more information.

Ready for more great storage debates? Our next one will be RoCE vs. iWARP on August 22, 2018. Save your place by registering here. And you can check out our previous debates “File vs. Block vs. Object Storage” and “Fibre Channel vs. iSCSI” on-demand at your convenience too. Happy debating!
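A practical footnote to the Jumbo Frame answer above: the whole argument turns on every device in the path agreeing on MTU. As a small sketch of the kind of check an administrator might script across hosts (a Linux-only illustration, with the interface name as an assumption), the standard SIOCGIFMTU ioctl reports an interface’s configured MTU:

```c
/* Sketch: query an interface's MTU on Linux via SIOCGIFMTU.
 * The Jumbo Frame discussion hinges on every device in an iSCSI
 * path agreeing on MTU; a check like this is the building block
 * of that audit. Interface name defaults to an illustrative "eth0".
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth0";
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {
        perror("ioctl(SIOCGIFMTU)");
        close(fd);
        return 1;
    }

    printf("%s MTU = %d (%s)\n", ifname, ifr.ifr_mtu,
           ifr.ifr_mtu >= 9000 ? "jumbo frames" : "standard frames");
    close(fd);
    return 0;
}
```

Checking this one value on every initiator, switch port, and target is exactly the coordination burden the answer describes; when the numbers disagree, fragmentation or dropped frames quietly erase whatever benefit JFs promised.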


Simplifying the Movement of Data from Cloud to Cloud

Alex McDonald

Jul 5, 2018

We are increasingly living in a multi-cloud world, with potentially multiple private, public and hybrid cloud implementations supporting a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another. That means simplifying the movement of data from cloud to cloud.

Data movement and data liberation – the seamless transfer of data from one cloud to another – has become a major requirement. On August 7, 2018, the SNIA Cloud Storage Technologies Initiative will tackle this issue in a live webcast, “Cloud Mobility and Data Movement.” We will explore some of these data movement and mobility issues and include real-world examples from the University of Michigan. We’ll discuss:
  • How do we secure data both at-rest and in-transit?
  • What are the steps that can be followed to import data securely? What cloud processes and interfaces should we use to make data movement easier?
  • How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
  • Should the application of the data influence how (and even if) we move the data?
  • How can data in the cloud be leveraged for multiple use cases?
Register now for this live webcast. Our SNIA experts will be on-hand to answer your questions.

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.


Accelerating the Adoption of Next-Generation Storage Technologies

Michael Oros

Jun 26, 2018

Introduction to the Storage Networking Industry Association

The Storage Networking Industry Association (SNIA) is the largest storage industry association in existence, and one of the largest in IT. It is comprised of over 170 leading industry organizations and 2,500 contributing members that serve more than 50,000 IT and storage professionals worldwide.

During the nineties, the nascent storage networking field needed a strong voice to communicate the value of storage area networks (SANs) and the Fibre Channel (FC) protocol. SNIA emerged in 1997 when a handful of storage experts realized that there was a need for a unified voice and vendor-neutral education on these emerging technologies to ensure that storage networks became feature-complete, interoperable and trusted solutions across the IT landscape.

Since then, SNIA has earned a reputation for developing technologies that have emerged as industry standards. These standards relate to data, storage and information management, and address such challenges as interoperability, usability, complexity and security. Today, SNIA is the recognized authority for storage leadership, standards and technology expertise. As such, it is our job to develop and promote vendor-neutral architectures, standards, best practices, certification and educational services that facilitate the efficient management, movement and security of information. Through these and many other avenues, SNIA plays a pivotal role in technology acceleration and adoption. Our current areas of focus are centered upon nine strategic areas:
  • Physical storage: Solid state storage, hyperscale storage, and object drives, as well as related connectors, form factors and transceivers
  • Data management: Protection, integrity and retention of data
  • Data security: Storage security, privacy and data protection regulations
  • Cloud Storage Technologies: Data orchestration, and movement into and out of the cloud
  • Persistent memory: Non-Volatile Memory Programming Model, NVDIMMs and new persistent media
  • Power efficiency measurement: Solid state storage component & system performance, as well as the SNIA Emerald Power Efficiency program
  • New-Generation Data Centers: Software-defined storage, composable infrastructure and new-generation storage management Application Programming Interfaces (APIs)
  • Networked storage: Data access protocols and various networking technologies for storage
  • Storage management: Device and system management
These nine areas of focus are each actively supported by formed and functioning Technical Work Groups. Individuals from all facets of IT dedicate themselves to programs that unite the storage industry with the purpose of taking our technology to the next level.

As the Executive Director for SNIA, I find the future of storage to be promising and exciting! We’re in the middle of historic transformation and innovation – and SNIA is center stage for a brave new world of storage. Data is being generated at unprecedented rates, and still accelerating; technology has to change to support the new business models of today and tomorrow. When I look at the strong history and the robust technical focus of the organization and its members today, I know that together we can write a very bright future for the industry and the world. So join us in this dynamic journey to the next big discovery and the fundamental technologies that are the basis for recording and saving the world’s history and humanity’s memories.

Sincerely,
Michael Oros, Executive Director of the Storage Networking Industry Association

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.

Storage Controllers – Your Questions Answered

J Metz

Jun 4, 2018

The term controller is used constantly, but often has very different meanings. A controller that manages hardware has very different requirements from a controller that manages an entire system-wide control plane. You can even have controllers managing other controllers. It can all get pretty confusing very quickly. That's why the SNIA Ethernet Storage Forum (ESF) hosted our 9th "Too Proud to Ask" webcast. This time it was "Everything You Wanted to Know about Storage but were Too Proud to Ask: Part Aqua – Storage Controllers." Our experts from Microsemi, Cavium, Mellanox and Cisco did a great job explaining the differences between the many types of controllers, but of course there were still questions. Here are answers to all that we received during the live event, which you can now view on-demand.

Q. Is there a standard for things such as NVMe over TCP/IP?

A. NVMe™ is in the process of standardizing a TCP transport. It will be called NVMe over TCP (NVMe™/TCP), and the technical proposal should be completed and public later in 2018.

Q. What are the length limits on NVMe over fibre?

A. There are no length limits. Multiple Fibre Channel frames can be combined to create any length transfer needed. The Fibre Channel Industry Association has a very good presentation on Long-Distance Fibre Channel, which you can view here.

Q. What does the term "Fabrics" mean in the storage context?

A. Fabrics typically refers to the switch or switches interconnecting the hosts and storage devices. Specifically, a storage "fabric" maintains some knowledge about itself and the devices that are connected to it, but some people use it to mean any networked devices that provide storage. In this context, "Fabrics" is also shorthand for "NVMe over Fabrics," which refers to the ability to run the NVMe protocol over an agnostic networking transport, such as RDMA-based Ethernet, Fibre Channel, and InfiniBand (TCP/IP coming soon).

Q. How does DMA result in lower power consumption?

A. DMA is typically done using a hardware DMA engine on the controller. This offloads the transfer from the host CPU, which typically draws more power than the logic of the DMA engine.

Q. How does the latency of NVMe over Fibre compare to NVMe over PCIe?

A. The overall goal of having NVMe transported over any fabric is not to exceed 20us of latency above and beyond a PCIe-based NVMe solution. Having said that, there are many aspects of networked storage that can affect latency, including number of hops, topology size, oversubscription ratios, and cut-through/store-and-forward switching. Individual latency metrics are published by specific vendors. We recommend you contact your favorite Fibre Channel vendor for their numbers.

Q. Which of these technologies will grow and prevail over the next 5-10 years?

A. That is the $64,000 question, isn't it? The basic premise of this presentation was to help illuminate what controllers are, and the different types that exist within a storage environment. No matter what specific flavor becomes the most popular, these basic tenets will remain in effect for the foreseeable future.

Q. I am new to storage matters, but I have been an IT tech for almost 10 years. Can you explain block vs. file IO?

A. We're glad you asked! We highly recommend you take a look at another one of our webinars, Block vs. File vs. Object Storage, which covers that very subject!

If you have an idea for another topic you're "Too Proud to Ask" about, let us know by commenting on this blog.
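As a postscript to that last question, the difference is easy to see from an application's point of view. Here is a minimal Linux sketch (file and device names are illustrative, and reading a raw device node typically needs root): file I/O names data by path and offset through a filesystem, while block I/O addresses the raw device directly by byte offset, which is how block storage protocols such as iSCSI or Fibre Channel present data.

```c
/* Sketch: the same read expressed as file I/O vs. block I/O (Linux).
 * File I/O goes through a filesystem (path + offset); block I/O
 * addresses the raw device by byte offset (logical block address).
 * Names are illustrative; reading /dev/sda usually requires root.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* File I/O: the filesystem resolves the path and maps the
     * offset to blocks on the device for us. */
    int ffd = open("/etc/hostname", O_RDONLY);
    if (ffd >= 0) {
        ssize_t n = pread(ffd, buf, sizeof(buf), 0);
        printf("file I/O: read %zd bytes via the filesystem\n", n);
        close(ffd);
    }

    /* Block I/O: we address the device ourselves; offset 0 here is
     * logical block 0 (e.g., a partition table or superblock). */
    int bfd = open("/dev/sda", O_RDONLY);
    if (bfd >= 0) {
        ssize_t n = pread(bfd, buf, 512, 0);
        printf("block I/O: read %zd bytes from LBA 0\n", n);
        close(bfd);
    } else {
        perror("open /dev/sda (root needed?)");
    }
    return 0;
}
```

A block storage controller lives entirely in the second world; a file server (and its controller) layers the first on top of it.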

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.


Security, GDPR, SNIA and You

Diane Marsili

May 16, 2018

In April 2016, the European Union (EU) approved a new law called the General Data Protection Regulation (GDPR). This coming May 25th, however, is the start of enforcement, meaning that any out-of-compliance organization that does business in the EU could face large fines. Some companies are choosing not to conduct business in the EU as a result, including email services and online games.

The GDPR is applicable to any information classified as personal or that can be used to determine your identity, including your name, photo, email address, social media posts, personal medical information, IP addresses, bank details and more.

There are many key changes in the new regulations (they were revised from a 1995 EU directive). Companies must now get the consent of their customers to collect and/or use their data, and must do so in an understandable way. It must also be easy for customers to revoke their consent. If there is a data breach, companies must notify their customers within 72 hours of the discovery of such a breach. Consumers now have the right to access and obtain a copy of all their personal data, and they must also be able to request that their data be expunged from the company’s databases, called “the right to be forgotten.” Customer data must also be portable, in that personal data must be given back to customers in a “commonly used and machine-readable format” so they can send that data to a different company. Overall, the new GDPR requires that when designing new systems, privacy must be built into the design from the start, and not added later.

SNIA has been tracking the requirements of the GDPR for a while now, and can provide a host of helpful content to introduce the GDPR and explain key elements that are relevant to storage ecosystems. The organization’s latest applicable document, the Storage Security Data Protection Technical White Paper, was just released this past March; it contains information on ISO/IEC 27040 (Storage security) and is also relevant to protecting your customers’ data. There’s also a slide deck available that relates directly to the issue of Privacy vs. Data Protection and the impact of the new legislation on companies who wish to be compliant. SNIA has another set of informational slides, as presented originally by Eric Hibbard, that help explain the difference between Data Protection and Privacy as well, and how it relates to the new GDPR requirements. Even more specifically, SNIA members Thomas Rivera, Katie Dix Elsner and Eric Hibbard presented a webcast titled “GDPR & The Role of the DPO (Data Protection Officer).”

In addition to these specific resources, SNIA offers a wide range of white papers, tutorials, articles and other resources to help you make sure you and your company are ready for GDPR on May 25th.

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.

File, Block and Object Storage: Real-world Questions, Expert Answers

John Kim

May 16, 2018

More than 1,200 people have already watched our Ethernet Storage Forum (ESF) Great Storage Debate webcast "File vs. Block vs. Object Storage." If you haven't seen it yet, it's available on demand. This great debate generated many interesting questions. As promised, our experts have answered them all here.

Q. What about the encryption technologies on file storage? Do they exist, and how do they affect the performance compared to unencrypted storage?

A. Yes, encryption of file data at rest can be done by the storage software, operating system, or the drives themselves (self-encrypting drives). Encryption of file data on the wire can be done by the storage software, OS, or specialized network cards. These methods can usually also be applied to block and object storage. Encryption requires processing power, so if it's done by the main CPU it might affect performance. If encryption is offloaded to the HBA, drive, or SmartNIC, then it might not affect performance.

Q. Regarding block size, I thought that block size settings were also used to tune and optimize file protocol transfer, for example in NFS. Am I wrong?

A. That is correct; block size refers to the size of data in each I/O and can be applied to block, file and object storage, though it may not be used very often for object storage. NFS and SMB both let you specify the block I/O size.

Q. What is the main difference between object and file? Is it true that file has a hierarchical structure, while object does not?

A. Yes, that is one important difference. Another difference is the access method: folder/file/offset for files and key-value for objects. File storage also often allows access to specific data within a file and, in many cases, shared writes to the same file, while object storage typically offers only shared reads, and most object storage systems do not allow direct updates to existing objects.

Q. What is the best way to back up a local object store system?

A. Most object storage systems have built-in data protection using either replication or erasure coding, which often replicates the data to one or more remote locations. If you deploy local object storage that does not include any remote replication or erasure coding protection, you should implement some other form of backup or replication, perhaps at the hardware or operating system level.

Q. I feel that this discussion conflates object storage with cloud storage features, and presumes certain cloud features (for example security) that are not universally available or really part of object storage. This is a very common problem with discussions of objects – they typically become descriptions of one vendor's cloud features.

A. Cloud storage can be block, file, and/or object, though object storage is perhaps more popular in public and private cloud than it is in non-cloud environments. Security can be required and deployed in both enterprise and cloud storage environments, and for block, file and object storage. It was not the intention of this webinar to conflate cloud and object storage; we leave that to the SNIA Cloud Storage Initiative (CSI).

Q. How do open source block, file and object storage products play into the equation?

A. Open source software solutions are available for block, file and object storage. As is usually the case with open source, these solutions typically make storage (block, file or object) available at a lower acquisition cost than commercial storage software or appliances, but at the cost of higher complexity and higher integration/support effort by the end user. Thus customers who care most about simplicity and minimizing their integration/support work tend to buy commercial appliances or storage software, while large customers who have enough staff to do their own storage integration, testing and support may prefer open-source solutions so they don't have to pay software license fees.

Q. How is data [0s and 1s on a hard disk] converted to objects, or vice versa?

A. In the beginning there were electrons, with conductors, insulators, and semiconductors (we skipped the quantum physics level of explanation). Then there were chip companies, storage companies, and networking companies. Then the Storage Networking Industry Association (SNIA) came along... The short answer is that some software (running in the storage server, storage device, or the cloud) organizes the 0s and 1s into objects stored in a file system or object store. The software makes these objects (full of 0s and 1s) available via a key-value system and/or a RESTful API. You submit data (a stream of 1s and 0s) and get a key-value in return, or you submit a key-value and get the object (a stream of 1s and 0s) in return.

Q. What is the difference (from an operating system perspective, where the file/object resides) between a file on a mounted NFS drive and an object in, for example, Google Drive? Isn't object storage (under the hood) just a network file system with REST API access?

A. Correct – under the hood there are often similarities between file and object storage. Some object storage systems store the underlying data as files, and some file storage systems store the underlying data as objects. However, customers and applications usually just care about the access method, performance, and reliability/availability, not the underlying storage method.

Q. I've heard that an Achilles' heel of object is that if you lose the name/handle, then the object is essentially lost. If true, are there ways to mitigate this risk?

A. If you lose the name/handle or key-value, then you cannot access the object, but most solutions using object storage keep redundant copies of the name/handle to avoid this. In addition, many object storage systems also store metadata about each object and let you search the metadata, so if you lose the name/handle you can regain access to the object by searching the metadata.

Q. Why don't you mention concepts like time to first byte for object storage performance?

A. Time to first byte is an important performance metric for some applications, and that can be true for block, file, and object storage. When using object storage, an application that is streaming out the object (like online video streaming) or processing the object linearly from beginning to end might really care about time to first byte. But an application that needs to work on the entire object might care more about the time to load/copy the entire object instead of time to first byte.

Q. Could you describe how storage supports data temperatures?

A. Data temperatures describe how often data is accessed: "hot" data is accessed often, "warm" data occasionally, and "cold" data rarely. A storage system can tier data so the hottest data is on the fastest storage while the coldest data is on the least expensive (and presumably slowest) storage. This could mean using block storage for the hot data, file storage for the warm data, and object storage for the cold data, but that is just one option. For example, block storage could be for cold data while file storage is for hot data, or you could have three tiers of file storage.

Q. Fibre Channel uses SCSI. Does NVMe over Fibre Channel use SCSI too? That would diminish NVMe performance greatly.

A. NVMe over Fabrics over Fibre Channel does not use the Fibre Channel Protocol (FCP) and does not use SCSI. It runs the NVMe protocol over an FC-NVMe transport on top of the physical Fibre Channel network. In fact, none of the NVMe over Fabrics options use SCSI.

Q. I get confused when someone says block size for block storage, and also block size for NFS storage and object storage. Does block size mean something different for different storage types?

A. In this case "block size" refers to the size of the data access, and it can apply to block, file, or object storage. You can use a 4KB "block size" to access file data in 4KB chunks, even though you're accessing it through a folder/file/offset combination instead of a logical block address. Some implementations may limit which block sizes you can use. Object storage tends to use larger block sizes (128KB, 1MB, 4MB, etc.) than block storage, but this is not required.

Q. One could argue that a file system is not really a good match for big data. Would you agree?

A. It depends on the type of big data and the access patterns. Big data that consists of large SQL databases might work better on block storage if low latency is the most important criterion. Big data that consists of very large video or image files might be easiest to manage and protect on object storage. And big data for Hadoop or some machine learning applications might work best on file storage.

Q. It is my understanding that the unit for both file storage and object storage is a file, so what is the key/fundamental difference between the two?

A. The unit for file storage is a file (folder/file/offset or directory/file/offset) and the unit for object storage is an object (key-value or object name). They are similar but not identical. For example, file storage usually allows shared reads and writes to the same file, while object storage usually allows shared reads but not shared writes to the object. In fact, many object storage systems do not allow any writes or updates to the middle of an object – they either allow only appends to the end of the object or don't allow any changes to an object at all once it has been created.

Q. Why is a key-value store more efficient and less costly for PCIe SSD? Can you please expand?

A. If the SSD supports key-value storage directly, then the applications or storage servers don't have to perform the key-value translation. They simply submit the key value and then write or read the related data directly from the SSDs. This reduces the cost of the servers and software that would otherwise have to manage the key-value translations, and could also increase object storage performance. (Key-value storage is not inherently more efficient for PCIe SSDs than for other types of SSDs.)

Interested in more SNIA ESF Great Storage Debates? Check out our other webcasts in the series, available on demand. If you have an idea for another storage debate, let us know by commenting on this blog. Happy debating!
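To make the recurring key-value point concrete, here is a toy sketch (deliberately not any vendor's API) contrasting the two access methods discussed above: object-style access hands a key to the store and gets the whole blob back, while file-style access names a path plus a byte offset and may read just part of the data. All names and data are illustrative.

```c
/* Toy sketch (not a real object-store API): object access hands a
 * key to the store and gets the whole blob back; file access names
 * a path and a byte offset within it. This mirrors the access-method
 * contrast discussed above, not any particular product.
 */
#include <stdio.h>
#include <string.h>

struct object { const char *key; const char *data; };

static struct object store[] = {
    { "invoice-2018-07", "PDF bytes..." },
    { "video-0042",      "MP4 bytes..." },
};

/* Object-style: key in, whole object out. */
static const char *get_object(const char *key)
{
    for (size_t i = 0; i < sizeof(store) / sizeof(store[0]); i++)
        if (strcmp(store[i].key, key) == 0)
            return store[i].data;
    return NULL;
}

/* File-style: path plus offset; partial reads are the norm. */
static int read_file_at(const char *path, long offset, char *buf, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    fseek(f, offset, SEEK_SET);
    size_t n = fread(buf, 1, len, f);
    fclose(f);
    return (int)n;
}

int main(void)
{
    printf("object GET invoice-2018-07 -> %s\n",
           get_object("invoice-2018-07"));

    char buf[16] = {0};
    if (read_file_at("/etc/hostname", 0, buf, sizeof(buf) - 1) > 0)
        printf("file read /etc/hostname @0 -> %s\n", buf);
    return 0;
}
```

Notice that the object path has no notion of an offset: you name the whole thing or nothing, which is exactly why most object stores allow shared reads but not in-place partial writes.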

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.
