Simplifying the Movement of Data from Cloud to Cloud

Alex McDonald

Jul 5, 2018

We are increasingly living in a multi-cloud world, with potentially multiple private, public and hybrid cloud implementations supporting a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another. That means simplifying the movement of data from cloud to cloud. Data movement and data liberation – the seamless transfer of data from one cloud to another – have become a major requirement. On August 7, 2018, the SNIA Cloud Storage Technologies Initiative will tackle this issue in a live webcast, "Cloud Mobility and Data Movement." We will explore these data movement and mobility issues and include real-world examples from the University of Michigan. We'll discuss:
  • How do we secure data both at-rest and in-transit?
  • Why is data so hard to move? What cloud processes and interfaces should we use to make data movement easier?
  • How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
  • Should the application of the data influence how (and even if) we move the data?
  • How can data in the cloud be leveraged for multiple use cases?
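One theme of the webcast, securing data at rest and in transit, pairs naturally with a simpler safeguard when moving data between clouds: verifying that an object arrived intact. The following Python sketch is purely illustrative (it is not from the webcast): it fingerprints data before it leaves the source cloud and re-checks the digest at the destination. Encryption in transit would come from TLS or similar, which this sketch does not show.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content fingerprint computed before the data leaves the source cloud."""
    return hashlib.sha256(data).hexdigest()

def verify_after_transfer(source_digest: str, received: bytes) -> bool:
    """Re-hash the object at the destination and compare digests."""
    return hashlib.sha256(received).hexdigest() == source_digest

# Hypothetical payload standing in for an object being moved cloud to cloud.
payload = b"customer-records-2018.csv contents"
digest = sha256_of(payload)

assert verify_after_transfer(digest, payload)              # intact copy
assert not verify_after_transfer(digest, payload + b"x")   # altered copy detected
```

A digest check like this catches corruption or truncation during the move; it does not, by itself, provide confidentiality, which is why at-rest and in-transit encryption remain separate requirements.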
Register now for this live webcast. Our SNIA experts will be on hand to answer your questions.


Accelerating the Adoption of Next-Generation Storage Technologies

Michael Oros

Jun 26, 2018

Introduction to the Storage Networking Industry Association

The Storage Networking Industry Association (SNIA) is the largest storage industry association in existence, and one of the largest in IT. It comprises over 170 leading industry organizations and 2,500 contributing members that serve more than 50,000 IT and storage professionals worldwide.

During the nineties, the nascent storage networking field needed a strong voice to communicate the value of storage area networks (SANs) and the Fibre Channel (FC) protocol. SNIA emerged in 1997 when a handful of storage experts realized there was a need for a unified voice and vendor-neutral education on these emerging technologies, to ensure that storage networks became feature-complete, interoperable and trusted solutions across the IT landscape.

Since then, SNIA has earned a reputation for developing technologies that have emerged as industry standards. These standards relate to data, storage and information management, and address such challenges as interoperability, usability, complexity and security.

Today, SNIA is the recognized authority for storage leadership, standards and technology expertise. As such, it is our job to develop and promote vendor-neutral architectures, standards, best practices, certification and educational services that facilitate the efficient management, movement and security of information. Through these and many other avenues, SNIA plays a pivotal role in technology acceleration and adoption. Our current work is centered on nine strategic areas:
  • Physical storage: Solid state storage, hyperscale storage, and object drives, as well as related connectors, form factors and transceivers
  • Data management: Protection, integrity and retention of data
  • Data security: Storage security, privacy and data protection regulations
  • Cloud Storage Technologies: Data orchestration, and movement into and out of the cloud
  • Persistent memory: Non-Volatile Memory Programming Model, NVDIMMs and new persistent media
  • Power efficiency measurement: Solid state storage component & system performance, as well as the SNIA Emerald Power Efficiency program
  • New-Generation Data Centers: Software-defined storage, composable infrastructure and new-generation storage management Application Programming Interfaces (APIs)
  • Networked storage: Data access protocols and various networking technologies for storage
  • Storage management: Device and system management
These nine areas of focus are each actively supported by formed and functioning Technical Work Groups. Individuals from all facets of IT dedicate themselves to programs that unite the storage industry with the purpose of taking our technology to the next level.

As the Executive Director of SNIA, I find the future of storage to be promising and exciting! We're in the middle of historic transformation and innovation, and SNIA is center stage for a brave new world of storage. Data is being generated at unprecedented rates, and still accelerating; technology has to change to support the new business models of today and tomorrow. When I look at the strong history of the organization, and the robust technical focus of its members today, I know that together we can write a very bright future for the industry and the world.

So join us in this dynamic journey to the next big discovery and the fundamental technologies that are the basis for recording and saving the world's history and humanity's memories.

Sincerely,
Michael Oros, Executive Director of the Storage Networking Industry Association


Storage Controllers – Your Questions Answered

J Metz

Jun 4, 2018

The term "controller" is used constantly, but often with very different meanings. A controller that manages hardware has very different requirements from a controller that manages an entire system-wide control plane. You can even have controllers managing other controllers. It can all get pretty confusing very quickly. That's why the SNIA Ethernet Storage Forum (ESF) hosted our 9th "Too Proud to Ask" webcast. This time it was "Everything You Wanted to Know about Storage but were Too Proud to Ask: Part Aqua – Storage Controllers." Our experts from Microsemi, Cavium, Mellanox and Cisco did a great job explaining the differences between the many types of controllers, but of course there were still questions. Here are answers to all that we received during the live event, which you can now view on-demand.

Q. Is there a standard for things such as NVMe over TCP/IP?
A. NVMe™ is in the process of standardizing a TCP transport. It will be called NVMe over TCP (NVMe™/TCP), and the technical proposal should be completed and public later in 2018.

Q. What are the length limits on NVMe over Fibre Channel?
A. There are no length limits. Multiple Fibre Channel frames can be combined to create any length of transfer needed. The Fibre Channel Industry Association has a very good presentation on Long-Distance Fibre Channel, which you can view here.

Q. What does the term "Fabrics" mean in the storage context?
A. "Fabrics" typically applies to the switch or switches interconnecting the hosts and storage devices. Specifically, a storage "fabric" maintains some knowledge about itself and the devices connected to it, but some people use the term to mean any networked devices that provide storage. In this context, "Fabrics" is also shorthand for "NVMe over Fabrics," which refers to the ability to run the NVMe protocol over an agnostic networking transport, such as RDMA-based Ethernet, Fibre Channel, and InfiniBand (TCP/IP coming soon).

Q. How does DMA result in lower power consumption?
A. DMA is typically done using a hardware DMA engine on the controller. This offloads the transfer from the host CPU, which typically draws more power than the logic of the DMA engine.

Q. How does the latency of NVMe over Fibre Channel compare to NVMe over PCIe?
A. The overall goal of transporting NVMe over any fabric is not to exceed 20 microseconds of latency above and beyond a PCIe-based NVMe solution. Having said that, there are many aspects of networked storage that can affect latency, including number of hops, topology size, oversubscription ratios, and cut-through vs. store-and-forward switching. Individual latency metrics are published by specific vendors; we recommend you contact your favorite Fibre Channel vendor for their numbers.

Q. Which of these technologies will grow and prevail over the next 5-10 years?
A. That is the $64,000 question, isn't it? The basic premise of this presentation was to help illuminate what controllers are and the different types that exist within a storage environment. No matter which specific flavor becomes the most popular, these basic tenets will remain in effect for the foreseeable future.

Q. I am new to storage matters, but I have been an IT tech for almost 10 years. Can you explain block vs. file I/O?
A. We're glad you asked! We highly recommend you take a look at another one of our webinars, Block vs. File vs. Object Storage, which covers that very subject! If you have an idea for another topic you're "Too Proud to Ask" about, let us know by commenting on this blog.
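To give the block vs. file I/O question a concrete shape, here is a small, hypothetical Python sketch. It emulates block-style access (logical block address times a fixed block size) on top of an ordinary file; a real block device would be addressed the same way, while file I/O addresses data by path and offset. Note that `os.pread`/`os.pwrite` require a POSIX platform, and the 512-byte block size is just an illustrative choice.

```python
import os
import tempfile

# Block I/O addresses data by logical block address (LBA) in fixed-size
# blocks; here a flat temporary file stands in for a block device.
BLOCK_SIZE = 512

def write_block(fd: int, lba: int, data: bytes) -> None:
    """Write exactly one block at the given logical block address."""
    assert len(data) == BLOCK_SIZE
    os.pwrite(fd, data, lba * BLOCK_SIZE)

def read_block(fd: int, lba: int) -> bytes:
    """Read exactly one block back from the given logical block address."""
    return os.pread(fd, BLOCK_SIZE, lba * BLOCK_SIZE)

with tempfile.NamedTemporaryFile() as dev:
    block = b"A" * BLOCK_SIZE
    write_block(dev.fileno(), 3, block)   # write LBA 3
    assert read_block(dev.fileno(), 3) == block
```

File I/O, by contrast, would simply `open()` a path and read at an offset; the addressing model (path + offset vs. LBA) is the essential difference the webcast explores.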


GDPR, SNIA and You

Diane Marsili

May 16, 2018

In April 2016, the European Union (EU) approved a new law called the General Data Protection Regulation (GDPR). This coming May 25th, however, is the start of enforcement, meaning that any out-of-compliance organization that does business in the EU could face large fines. Some companies, including email services and online games, are choosing not to conduct business in the EU as a result.

The GDPR applies to any information classified as personal or that can be used to determine your identity, including your name, photo, email address, social media posts, personal medical information, IP address, bank details and more.

There are many key changes in the new regulations (which were revised from a 1995 EU directive). Companies must now get the consent of their customers to collect and/or use their data, and must do so in an understandable way. It must also be easy for customers to revoke their consent. If there is a data breach, companies must notify their customers within 72 hours of its discovery. Consumers now have the right to access and obtain a copy of all their personal data, and they must also be able to request that their data be expunged from the company's databases, called "the right to be forgotten." Customer data must also be portable: personal data must be given back to customers in a "commonly used and machine-readable format" so they can send it to a different company. Overall, the GDPR requires that when designing new systems, privacy must be built in from the start, not added later.

SNIA has been tracking the requirements of the GDPR for a while now, and can provide a host of helpful content to introduce the GDPR and explain key elements that are relevant to storage ecosystems.

The organization's latest applicable document, the Storage Security Data Protection Technical White Paper, was released this past March; it contains information on ISO/IEC 27040 (Storage security) and is also relevant to protecting your customers' data. There's also a slide deck available that relates directly to the issue of privacy vs. data protection and the impact of the new legislation on companies who wish to be compliant. SNIA has another set of informational slides, as presented originally by Eric Hibbard, that help explain the difference between data protection and privacy and how it relates to the new GDPR requirements. More specifically, SNIA members Thomas Rivera, Katie Dix Elsner and Eric Hibbard presented a webcast titled "GDPR & The Role of the DPO (Data Protection Officer)." In addition to these specific resources, SNIA offers a wide range of white papers, tutorials, articles and other resources to help you make sure you and your company are ready for GDPR on May 25th.
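To make the portability and "right to be forgotten" requirements concrete, here is a toy Python sketch (our illustration, not SNIA guidance or a compliance recipe). It services both requests against a single in-memory record store; a real system would have to reach every database, backup and log that holds the data subject's personal data.

```python
import json

# Toy in-memory store of personal data, keyed by user ID.
records = {
    "u123": {"name": "Jane Doe", "email": "jane@example.com"},
}

def export_personal_data(user_id: str) -> str:
    """Portability: return the subject's data in a commonly used,
    machine-readable format (JSON here)."""
    return json.dumps(records[user_id], indent=2)

def erase_personal_data(user_id: str) -> None:
    """Right to be forgotten: expunge the subject's records."""
    records.pop(user_id, None)

blob = export_personal_data("u123")   # hand the data back to the subject
erase_personal_data("u123")
assert "u123" not in records
```

The hard part in practice is not this lookup-and-delete logic but knowing everywhere the personal data lives, which is exactly why the GDPR pushes privacy into system design from the start.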


File, Block and Object Storage: Real-world Questions, Expert Answers

John Kim

May 16, 2018

More than 1,200 people have already watched our Ethernet Storage Forum (ESF) Great Storage Debate webcast "File vs. Block vs. Object Storage." If you haven't seen it yet, it's available on demand. This great debate generated many interesting questions. As promised, our experts have answered them all here.

Q. What about encryption technologies on file storage? Do they exist, and how do they affect performance compared to unencrypted storage?
A. Yes. Encryption of file data at rest can be done by the storage software, operating system, or the drives themselves (self-encrypting drives). Encryption of file data on the wire can be done by the storage software, OS, or specialized network cards. These methods can usually also be applied to block and object storage. Encryption requires processing power, so if it's done by the main CPU it might affect performance. If encryption is offloaded to the HBA, drive, or SmartNIC, it might not.

Q. Regarding block size, I thought that block size settings were also used to tune and optimize file protocol transfer, for example in NFS. Am I wrong?
A. That is correct: block size refers to the size of data in each I/O and can be applied to block, file and object storage, though it may not be used very often for object storage. NFS and SMB both let you specify the block I/O size.

Q. What is the main difference between object and file? Is it true that file has a hierarchical structure, while object does not?
A. Yes, that is one important difference. Another is the access method: folder/file/offset for files and key-value for objects. File storage also often allows access to specific data within a file and, in many cases, shared writes to the same file, while object storage typically offers only shared reads, and most object storage systems do not allow direct updates to existing objects.

Q. What is the best way to back up a local object store system?
A. Most object storage systems have built-in data protection using either replication or erasure coding, which often replicates the data to one or more remote locations. If you deploy local object storage that does not include any remote replication or erasure coding protection, you should implement some other form of backup or replication, perhaps at the hardware or operating system level.

Q. I feel that this discussion conflates object storage with cloud storage features, and presumes certain cloud features (for example, security) that are not universally available or really part of object storage. This is a very common problem with discussions of objects; they typically become descriptions of one vendor's cloud features.
A. Cloud storage can be block, file, and/or object, though object storage is perhaps more popular in public and private cloud than in non-cloud environments. Security can be required and deployed in both enterprise and cloud storage environments, and for block, file and object storage. It was not the intention of this webinar to conflate cloud and object storage; we leave that to the SNIA Cloud Storage Initiative (CSI).

Q. How do open source block, file and object storage products play into the equation?
A. Open source software solutions are available for block, file and object storage. As is usually the case with open source, these solutions typically make storage available at a lower acquisition cost than commercial storage software or appliances, but at the cost of higher complexity and a larger integration/support effort by the end user. Thus customers who care most about simplicity and minimizing their integration/support work tend to buy commercial appliances or storage software, while large customers who have enough staff to do their own storage integration, testing and support may prefer open source solutions so they don't have to pay software license fees.

Q. How is data [0s and 1s on a hard disk] converted to objects, or vice versa?
A. In the beginning there were electrons, with conductors, insulators, and semiconductors (we skipped the quantum physics level of explanation). Then there were chip companies, storage companies, and networking companies. Then the Storage Networking Industry Association (SNIA) came along... The short answer is that software (running in the storage server, storage device, or the cloud) organizes the 0s and 1s into objects stored in a file system or object store, and makes those objects available via a key-value system and/or a RESTful API. You submit data (a stream of 1s and 0s) and get a key-value in return. Or you submit a key-value and get the object (a stream of 1s and 0s) in return.

Q. What is the difference (from an operating system perspective, where the file/object resides) between a file on a mounted NFS drive and an object in, for example, Google Drive? Isn't object storage (under the hood) just a network file system with REST API access?
A. Correct: under the hood there are often similarities between file and object storage. Some object storage systems store the underlying data as files, and some file storage systems store the underlying data as objects. However, customers and applications usually just care about the access method, performance, and reliability/availability, not the underlying storage method.

Q. I've heard that an Achilles' heel of object is that if you lose the name/handle, the object is essentially lost. If true, are there ways to mitigate this risk?
A. If you lose the name/handle or key-value, then you cannot access the object, but most solutions using object storage keep redundant copies of the name/handle to avoid this. In addition, many object storage systems also store metadata about each object and let you search that metadata, so if you lose the name/handle you can regain access to the object by searching the metadata.

Q. Why don't you mention concepts like time to first byte for object storage performance?
A. Time to first byte is an important performance metric for some applications, and that can be true for block, file, and object storage. When using object storage, an application that is streaming out the object (like online video streaming) or processing it linearly from beginning to end might really care about time to first byte. But an application that needs to work on the entire object might care more about the time to load or copy the entire object instead.

Q. Could you describe how storage supports data temperatures?
A. Data temperatures describe how often data is accessed: "hot" data is accessed often, "warm" data occasionally, and "cold" data rarely. A storage system can tier data so the hottest data is on the fastest storage while the coldest data is on the least expensive (and presumably slowest) storage. This could mean using block storage for the hot data, file storage for the warm data, and object storage for the cold data, but that is just one option. For example, block storage could hold cold data while file storage holds hot data, or you could have three tiers of file storage.

Q. Fibre Channel uses SCSI. Does NVMe over Fibre Channel use SCSI too? That would diminish NVMe performance greatly.
A. NVMe over Fabrics over Fibre Channel does not use the Fibre Channel Protocol (FCP) and does not use SCSI. It runs the NVMe protocol over an FC-NVMe transport on top of the physical Fibre Channel network. In fact, none of the NVMe over Fabrics options use SCSI.

Q. I get confused when someone says block size for block storage, and also block size for NFS storage and object storage. Does block size mean something different for each storage type?
A. In this case "block size" refers to the size of the data access, and it can apply to block, file, or object storage. You can use a 4KB "block size" to access file data in 4KB chunks, even though you're accessing it through a folder/file/offset combination instead of a logical block address. Some implementations may limit which block sizes you can use. Object storage tends to use larger block sizes (128KB, 1MB, 4MB, etc.) than block storage, but this is not required.

Q. One could argue that a file system is not really a good match for big data. Would you agree?
A. It depends on the type of big data and the access patterns. Big data that consists of large SQL databases might work better on block storage if low latency is the most important criterion. Big data that consists of very large video or image files might be easiest to manage and protect on object storage. And big data for Hadoop or some machine learning applications might work best on file storage.

Q. It is my understanding that the unit for both file storage and object storage is the file, so what is the key/fundamental difference between the two?
A. The unit for file storage is a file (folder/file/offset or directory/file/offset) and the unit for object storage is an object (key-value or object name). They are similar but not identical. For example, file storage usually allows shared reads and writes to the same file, while object storage usually allows shared reads but not shared writes to the object. In fact, many object storage systems do not allow any writes or updates to the middle of an object; they either allow only appends to the end of the object or don't allow any changes at all once it has been created.

Q. Why is a key-value store more efficient and less costly for PCIe SSDs? Can you please expand?
A. If the SSD supports key-value storage directly, then the applications or storage servers don't have to perform the key-value translation. They simply submit the key value and then write or read the related data directly from the SSDs. This reduces the cost of the servers and software that would otherwise have to manage the key-value translations, and could also increase object storage performance. (Key-value storage is not inherently more efficient for PCIe SSDs than for other types of SSDs.)

Interested in more SNIA ESF Great Storage Debates? If you have an idea for another storage debate, let us know by commenting on this blog. Happy debating!
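Several of the answers above hinge on object storage's key-value access model. This minimal Python sketch (purely illustrative, not any vendor's API) shows whole-object put/get against a flat key namespace, with a content-derived key standing in for the name/handle; note there is no folder hierarchy, no offset, and no in-place update.

```python
import hashlib

class ObjectStore:
    """Flat key-value namespace: no folders, no offsets, whole-object put/get."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        """Submit a stream of 1s and 0s; get a key-value back."""
        key = hashlib.sha256(data).hexdigest()  # content-derived key
        self._objects[key] = data
        return key

    def get(self, key: str) -> bytes:
        """Submit a key-value; get the object back."""
        return self._objects[key]

store = ObjectStore()
key = store.put(b"holiday-photo bytes")
assert store.get(key) == b"holiday-photo bytes"
```

This also illustrates the "lost name/handle" Achilles' heel discussed above: without `key`, the bytes are unreachable, which is why real systems keep redundant copies of handles and searchable metadata.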

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.

File, Block and Object Storage: Real-world Questions, Expert Answers

John Kim

May 16, 2018

title of post
More than 1,200 people have already watched our Ethernet Storage Forum (ESF) Great Storage Debate webcast “File vs. Block vs. Object Storage.” If you haven’t seen it yet, it’s available on demand. This great debate generated many interesting questions. As promised, our experts have answered them all here. Q. What about the encryption technologies on file storage? Do they exist, and how do they affect the performance compared to unencrypted storage? A. Yes, encryption of file data at rest can be done by the storage software, operating system, or the drives themselves (self-encrypting drives). Encryption of file data on the wire can be done by the storage software, OS, or specialized network cards. These methods can usually also be applied to block and object storage. Encryption requires processing power so if it’s done by the main CPU it might affect performance. If encryption is offloaded to the HBA, drive, or SmartNIC then it might not affect performance. Q. Regarding block size, I thought that block size settings were also used to tune and optimize file protocol transfer, for example in NFS, am I wrong? A. That is correct, block size refers to the size of data in each I/O and can be applied to block, file and object storage, though it may not be used very often for object storage. NFS and SMB both let you specific block I/O size. Q. What is the main difference between object and file? Is it true that File has a hierarchical structure, while object does not? A. Yes that is one important difference. Another difference is the access method–folder/file/offset for files and key-value for objects.  File storage also often allows access to specific data within a file and in many cases shared writes to the same file, while object storage typically offers only shared reads and most object storage systems do not allow direct updates to existing objects. Q. What is the best way to backup a local Object store system? A. 
Most object storage systems have built-in data protection using either replication or erasure coding which often replicates the data to one or more remote locations. If you deploy local object storage that does not include any remote replication or erasure coding protection, you should implement some other form of backup or replication, perhaps at the hardware or operating system level. Q. I feel that this discussion conflates object storage with cloud storage features, and presumes certain cloud features (for example security) that are not universally available or really part of Object Storage.  This is a very common problem with discussions of objects — they typically become descriptions of one vendor’s cloud features. A. Cloud storage can be block, file, and/or object, though object storage is perhaps more popular in public and private cloud than it is in non-cloud environments. Security can be required and deployed in both enterprise and cloud storage environments, and for block, file and object storage. It was not the intention of this webinar to conflate cloud and object storage; we leave that to the SNIA Cloud Storage Initiative (CSI). Q. How do open source block, file and object storage products play into the equation? A. Open source software solutions are available for block, file and object storage. As is usually the case with other open-source, these solutions typically make storage (block, file or object) available at a lower acquisition cost than commercial storage software or appliances, but at the cost of higher complexity and higher integration/support effort by the end user. Thus customers who care most about simplicity and minimizing their integration/support work tend to buy commercial appliances or storage software, while large customers who have enough staff to do their own storage integration, testing and support may prefer open-source solutions so they don’t have to pay software license fees. Q. 
How is data [0s and 1s in hard disk] converted to objects or vice versa? A. In the beginning there were electrons, with conductors, insulators, and semi-conductors (we skipped the quantum physics level of explanation). Then there were chip companies, storage companies, and networking companies. Then The Storage Networking Industry Association (SNIA) came along… The short answer is some software (running in the storage server, storage device, or the cloud) organizes the 0s and 1s into objects stored in a file system or object store. The software makes these objects (full of 0s and 1s) available via a key-value systems and/or a RESTful API. You submit data (stream of 1s and 0s) and get a key-value in return. Or you submit a key-value and get the object (stream of 1s and 0s) in return. Q. What is the difference (from an operating system perspective where the file/object resides) between a file in mounted NFS drive and object in, for example Google drive? Isn’t object storage (under the hood) just network file system with rest API access? A. Correct–under the hood there are often similarities between file and object storage. Some object storage systems store the underlying data as file and some file storage systems store the underlying data as objects. However, customers and applications usually just care about the access method, performance, and reliability/availability, not the underlying storage method. Q. I’ve heard that an Achilles’ Heel of Object is that if you lose the name/handle, then the object is essentially lost.  If true, are there ways to mitigate this risk? A. If you lose the name/handle or key-value, then you cannot access the object, but most solutions using object storage keep redundant copies of the name/handle to avoid this. In addition, many object storage systems also store metadata about each object and let you search the metadata, so if you lose the name/handle you can regain access to the object by searching the metadata. Q. 
Why don’t you mention concepts like time to first byte for object storage performance? A. Time to first byte is an important performance metric for some applications, and that can be true for block, file, and object storage. When using object storage, an application that streams out the object (like online video streaming) or processes the object linearly from beginning to end might really care about time to first byte. But an application that needs to work on the entire object might care more about the time to load/copy the entire object than about time to first byte. Q. Could you describe how storage supports data temperatures? A. Data temperatures describe how often data is accessed: “hot” data is accessed often, “warm” data occasionally, and “cold” data rarely. A storage system can tier data so the hottest data is on the fastest storage while the coldest data is on the least expensive (and presumably slowest) storage. This could mean using block storage for hot data, file storage for warm data, and object storage for cold data, but that is just one option. For example, block storage could hold cold data while file storage holds hot data, or you could have three tiers of file storage. Q. Fibre Channel uses SCSI. Does NVMe over Fibre Channel use SCSI too? That would diminish NVMe performance greatly. A. NVMe over Fabrics over Fibre Channel does not use the Fibre Channel Protocol (FCP) and does not use SCSI. It runs the NVMe protocol over an FC-NVMe transport on top of the physical Fibre Channel network. In fact, none of the NVMe over Fabrics options use SCSI. Q. I get confused when someone says block size for block storage, and also block size for NFS storage and object storage. Does block size mean something different for each storage type? A. In this case “block size” refers to the size of the data access, and it can apply to block, file, or object storage. 
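To make that concrete, here is a minimal Python sketch (illustrative only, not tied to any particular storage product) showing that even ordinary folder/file access has an effective “block size”: the size of each read request.

```python
import os
import tempfile

CHUNK_SIZE = 4096  # a 4KB "block size" for file access

def read_in_chunks(path, chunk_size=CHUNK_SIZE):
    """Read a file sequentially in fixed-size chunks.

    The application addresses the data by folder/file/offset rather
    than by logical block address, but each I/O request still has a
    size -- the effective "block size" of the access.
    """
    chunks = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            chunks.append(chunk)
    return chunks

# Write 10,000 bytes, then read them back in 4KB chunks:
# 4096 + 4096 + 1808 bytes, i.e. three read requests.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 10_000)
    path = tmp.name

chunks = read_in_chunks(path)
print(len(chunks), [len(c) for c in chunks])  # -> 3 [4096, 4096, 1808]
os.unlink(path)
```

Changing `CHUNK_SIZE` to 1MB would mimic the larger access sizes typical of object storage, without changing how the data is named or addressed.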
You can use a 4KB “block size” to access file data in 4KB chunks, even though you’re accessing it through a folder/file/offset combination instead of a logical block address. Some implementations may limit which block sizes you can use. Object storage tends to use larger block sizes (128KB, 1MB, 4MB, etc.) than block storage, but this is not required. Q. One could argue that a file system is not really a good match for big data. Would you agree? A. It depends on the type of big data and the access patterns. Big data that consists of large SQL databases might work better on block storage if low latency is the most important criterion. Big data that consists of very large video or image files might be easiest to manage and protect on object storage. And big data for Hadoop or some machine learning applications might work best on file storage. Q. It is my understanding that the unit for both file storage and object storage is a file, so what is the key/fundamental difference between the two? A. The unit for file storage is a file (folder/file/offset or directory/file/offset) and the unit for object storage is an object (key-value or object name). They are similar but not identical. For example, file storage usually allows shared reads and writes to the same file, while object storage usually allows shared reads but not shared writes to an object. In fact, many object storage systems do not allow any writes or updates to the middle of an object: they either allow only appends to the end of the object or don’t allow any changes to an object at all once it has been created. Q. Why is a key-value store more efficient and less costly for PCIe SSDs? Can you please expand? A. If the SSD supports key-value storage directly, then the applications or storage servers don’t have to perform the key-value translation. They simply submit the key value and then write or read the related data directly from the SSDs. 
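To illustrate what that translation involves, here is a toy Python sketch (purely hypothetical; real drives expose NVMe or SCSI command sets, not Python classes) of the key-to-block mapping a storage server must maintain when the SSD is not key-value native:

```python
# Hypothetical sketch of the key-value translation layer a storage
# server performs when the drive only understands logical block
# addresses (LBAs). A KV-native SSD would do this mapping internally.

BLOCK_SIZE = 512

class BlockDevice:
    """Simulated block device: read/write fixed-size blocks by LBA."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks

    def write_block(self, lba, data):
        self.blocks[lba] = data.ljust(BLOCK_SIZE, b"\x00")

    def read_block(self, lba):
        return self.blocks[lba]

class KeyValueTranslator:
    """The host-side work a KV-native SSD would take over."""
    def __init__(self, device):
        self.device = device
        self.key_to_lba = {}   # key -> (lba, length)
        self.next_free = 0

    def put(self, key, value):
        lba = self.next_free
        self.device.write_block(lba, value)   # assumes value fits one block
        self.key_to_lba[key] = (lba, len(value))
        self.next_free += 1

    def get(self, key):
        lba, length = self.key_to_lba[key]
        return self.device.read_block(lba)[:length]

store = KeyValueTranslator(BlockDevice(num_blocks=8))
store.put("photo-123", b"...jpeg bytes...")
print(store.get("photo-123"))  # -> b'...jpeg bytes...'
```

The mapping table, free-space tracking, and padding logic here are exactly the bookkeeping that moves off the server when the drive accepts keys directly.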
This reduces the cost of the servers and software that would otherwise have to manage the key-value translations, and could also increase object storage performance. (Key-value storage is not inherently more efficient for PCIe SSDs than for other types of SSDs.) Interested in more SNIA ESF Great Storage Debates? Check out our previous webcasts, Fibre Channel vs. iSCSI and File vs. Block vs. Object Storage. If you have an idea for another storage debate, let us know by commenting on this blog. Happy debating!


FCoE vs. iSCSI vs. iSER: Get Ready for Another Great Storage Debate

Alex McDonald

May 1, 2018

As a follow-up to our first two hugely successful "Great Storage Debate" webcasts, Fibre Channel vs. iSCSI and File vs. Block vs. Object Storage, the SNIA Ethernet Storage Forum will be presenting another great storage debate on June 21, 2018. This time we'll take on FCoE vs. iSCSI vs. iSER. For those of you who've seen these webcasts, you know that the goal of these debates is not to have a winner emerge, but rather to provide unbiased education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions. Here's what you can expect from this session: One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective. Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet), which carries FC protocols over Ethernet, and Internet Small Computer System Interface (iSCSI), which transports SCSI commands over TCP/IP Ethernet networks. There are also newer Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying. That leads to several questions about FCoE, iSCSI and iSER:
  • If we can run various network storage protocols over Ethernet, what differentiates them?
  • What are the advantages and disadvantages of FCoE, iSCSI and iSER?
  • How are they structured?
  • What software and hardware do they require?
  • How are they implemented, configured and managed?
  • Do they perform differently?
  • What do you need to do to take advantage of them in the data center?
  • What are the best use cases for each?
Register today to join our SNIA experts as they answer all these questions and more on the next Great Storage Debate: FCoE vs. iSCSI vs. iSER. We look forward to seeing you on June 21st.  



Data Security is an Integral Part of any Business Endeavor

Diane Marsili

Apr 18, 2018

In the wake of all the data breaches, privacy scandals, and cybercrime in the world these days, it can be worrisome if you’re responsible for keeping your company and customer data safe. Sure, there are standards to help you plan and implement policies and procedures around data security, like ISO/IEC 27040:2015. It provides detailed technical guidance on how organizations can be consistent in their approach to planning, designing, documenting and implementing data storage security. While the ISO/IEC 27040 standard is fairly thorough, there are some specific elements in the area of data protection, including data preservation, data authenticity, archival security and data disposition, that the ISO document doesn’t fully cover. The Storage Networking Industry Association (SNIA) Security Technical Working Group (TWG) has released a white paper that addresses these specific topics in data protection. One of a series of educational documents provided by the TWG, this one extends, builds on, and complements the ISO 27040 standard, while also suggesting best practices.
SNIA’s Technical Work Group Activity for 2018
Data protection is an essential element of storage security, with many nuanced issues to work through. Data must be stored, it must be kept private, and clear decisions must be made about who needs access to the data, where that data resides, what types of devices and data exist in the system, how data is recovered during disasters or regular operations, and what best-practice technologies should be in place in your organization. SNIA’s new technical white paper addresses these issues in depth in order to raise awareness of data protection and help educate those in the storage security business (and most companies are, these days). The document also highlights relevant data protection guidance from ISO/IEC 27040 so that you can get a complete picture of the things you need to do to keep your data secure. Data security is an integral part of any business endeavor; making sure that your organization has considered and implemented as many best practices in the area of data security as possible is made easier by this publication, which comes from (and also benefits) the members of SNIA’s storage security technical working group. For more information about the work of SNIA’s storage security group, visit: www.snia.org/security. Click here to download the complete Storage Security: Data Protection white paper.
