
25 Questions (and Answers) on Ethernet-attached SSDs

Ted Vojnovich

Apr 14, 2020

The SNIA Networking Storage Forum celebrated St. Patrick’s Day by hosting a live webcast, “Ethernet-attached SSDs – Brilliant Idea or Storage Silliness?” Even though we didn’t serve green beer during the event, the response was impressive, with hundreds of live attendees who asked many great questions – 25 to be exact. Our expert presenters have answered them all here:

Q. Has a prototype drive been built today that includes the Ethernet controller inside the NVMe SSD?

A. There is an interposer board that extends the length by a small amount. Integrated functionality will come with volume and a business case. Some SSD vendors have plans to offer SSDs with fully integrated Ethernet controllers.

Q. Costs seem to be the initial concern… is it true apples to apples between JBOFs?

A. The difference is between a PCIe switch and an Ethernet switch. Ethernet switches usually cost more but provide more bandwidth than PCIe switches. An EBOF might cost more than a JBOF with the same number of SSDs and the same capacity, but the EBOF is likely to provide more performance than the JBOF.

Q. What are the specification names and numbers? Which standards groups are involved?

A. The Native NVMe-oF Drive Specification from SNIA is the primary specification. A public review version is available. Within that specification, multiple other standards are referenced from SFF, NVMe, and DMTF.

Q. How is this different from the “Kinetic” and “Object Storage” efforts of a few years ago? Is there any true production-quality open source available or planned (if so, when, by whom, and where)?

A. Kinetic drives were hard disks and thus did not need high-speed Ethernet. In fact, new lower-speed Ethernet was developed for that case. The pins chosen for Kinetic would not accommodate the higher Ethernet speeds that SSDs need, so the new standard re-uses the same lanes defined for PCIe for use by Ethernet. Kinetic was also a brand-new protocol and application interface rather than leveraging an existing standard interface such as NVMe-oF.

Q. Can Open-Channel SSDs be used in an EBOF?

A. To the extent that Open-Channel can work over NVMe-oF, it should work.

Q. Describe the signal integrity challenges of routing Ethernet at these speeds compared to PCIe.

A. The signal integrity of the SFF-8639 connector is considered good through 25Gb Ethernet. The SFF-TA-1002 connector has been tested to 50Gb speeds with good signal integrity and may go higher. Ethernet is able to carry data with good signal integrity much farther than a PCIe connection of similar speed.

Q. Is there a way to expose Intel Optane DC Persistent Memory through NVMe-oF?

A. For now, it would need to be a block-based NVMe device. Byte addressability might be available in the future.

Q. Will there be an interposer to send block IO directly over the switch?

A. For the Ethernet Drive itself, there is a dongle available that turns a standard PCIe SSD into an Ethernet Drive supporting block IO over NVMe-oF.

Q. Do NVMe drives fail? Where is HA implemented? I never saw VROC from Intel adopted. So, does the user add latency when adding their own HA?

A. Drive reliability is not impacted by the fact that the drive uses Ethernet. HA can be implemented with dual-port versions of Ethernet drives; dual-port dongles are available today. For host- or network-based data protection, the fact that Ethernet Drives can act as a secondary location for multiple hosts makes data protection easier.

Q. Ethernet is a contention protocol and TCP has overhead to deliver reliability. Is there any work going on to package something like Fibre Channel/QUIC or other solutions to eliminate the downsides of Ethernet and TCP?

A. FC-NVMe has been approved as a standard since 2017 and is available and maturing as a solution.
NVMe-oF on Ethernet can run on RoCE or TCP, with the option to use lossless Ethernet and/or congestion management to reduce contention, or to use accelerator NICs to reduce TCP overhead. QUIC is growing in popularity for web traffic, but it’s not clear yet whether QUIC will prove popular for storage traffic.

Q. Are Lenovo or other OEMs building standard EBOF storage servers? Does OCP have a work group on EBOF supporting hardware architecture and specification?

A. Currently, Lenovo does not offer an EBOF. However, many ODMs are offering JBOFs and a few are offering EBOFs. OCP is currently focusing on NVMe SSD specifics, including form factor. While several JBOFs have been introduced into OCP, we are not aware of an OCP EBOF specification per se. There are OCP initiatives to optimize the form factors of SSDs, and there are also OCP storage designs for JBOFs that could probably evolve into an Ethernet SSD enclosure with minimal changes.

Q. Is this an accurate statement on SAS latency? Where are you getting your data?

A. SAS is a transaction model, meaning the preceding transaction must complete before the next transaction can be started (queue depth does ameliorate this to some degree, but endpoints still have to wait). With the initiator and target having to wait for the steps to complete, overall throughput slows. SAS HDD = milliseconds per IO (governed by seek and rotation); SAS SSD = hundreds of microseconds (governed by the transactional nature); NVMe SSD = tens of microseconds (governed by the queuing paradigm).

Q. Regarding performance and scaling, a 50GbE port has less bandwidth than a PCIe Gen3 x4 connection. How is converting to Ethernet helping performance of the array? Doesn’t it face the same bottleneck of the NICs connecting the JBOF/EBOF to the rest of the network?

A. It eliminates the JBOF’s CPU and NIC(s) from the data path and replaces them with an Ethernet switch. The math: one 50GbE port = 5 GB/s, while one PCIe Gen3 x4 port = 4 GB/s, because PCIe Gen3 runs at 8 Gb/s per lane. A single 25GbE NIC is usually connected to 4 lanes of PCIe Gen3, and a single 50GbE NIC is usually connected to 8 lanes of PCIe Gen3 (or 4 lanes of PCIe Gen4). But that is half of the story; there are two other dimensions to consider. First, getting all this bandwidth out of a JBOF vs. an EBOF. Second, at the solution level, all these ports (connectivity) and scaling (bandwidth) present their own challenges.

Q. What about persistent memory? Can you present Optane DC through NVMe-oF?

A. Interesting idea! Today, persistent memory DIMMs sit on the memory bus, so they would not benefit directly from an Ethernet architecture. But with the advent of CXL and PCIe Gen 5, there may be a place for persistent memory in “bays” for a more NUMA-like architecture.

Q. For those of us who use Ceph, this might be an interesting vertical integration, but it feels like there’s more to the latency of “finding” and “balancing” the data on arrays of Ethernet-attached NVMe. Have any software suites accompanied these hardware changes, and are whitepapers published?

A. Ceph nodes are generally slower (for like-to-like hardware) than non-Ceph storage solutions, so Ceph might be less likely to benefit from Ethernet SSDs, especially NVMe-oF SSDs. That said, if the cost model for ESSDs works out (really cheap Ceph nodes to overcome “throwing HW at the problem”), one could look at Ceph solutions using ESSDs, either via NVMe-oF or by creating ESSDs with a key-value interface that can be accessed directly by Ceph.

Q. Can the traditional array functions be moved to the LAN switch layer, either included in the switch (like the Cisco MDS and IBM SVC “experiment”) or by connecting the controller functionality to the LAN switch backbone with the SSDs in a separate VLAN?

A. Many storage functions are software/firmware driven. Certainly, a LAN switch with a rich x86 complex could do this, or a server with a switch subsystem could.
I can see low-level storage functions (RAID XOR, compression, maybe snapshots) translated to switch hardware, but I don’t see a clear path for high-level functions (dedupe, replication, etc.) translated to switch hardware. However, since hyperscalers do not perform many high-level storage functions at the storage node, perhaps enough can be moved to switch hardware over time.

Q. ATA over Ethernet has been around for nearly 18 years now. What is the difference?

A. ATA over Ethernet is more of a work-group concept and has never gone mainstream (to be honest, your question is the first time I have heard of it since 2001). In any event, ATA does not take advantage of the queuing nature of NVMe, so it is still held hostage by transaction latency. Also, there is no high availability (HA) in ATA (at least I am not aware of any HA standards for ATA), which presents a challenge because HA at the box or storage-controller level does NOT solve the single-point-of-failure problem at the drive level.

Q. Request for comment – with Ethernet at 10G, 25G, 50G, and 100G per lane (all available today), and Ethernet MAC speeds of 10G, 25G, 40G, 50G, 100G, 200G, and 400G (all available today), Ethernet is far more scalable than PCIe. Comparing relative cost, an Ethernet switch is far more economical than a PCIe switch. Why shouldn’t we switch?

A. Yes, Ethernet is more scalable than PCIe, but three things need to happen. 1) Solution-level orchestration has to happen (putting an EBOF behind an RBOF is okay, but only a first step). 2) The Ethernet world has to start understanding how storage works (multipathing, ACLs, baseband drive management, etc.). 3) Lower cost needs to be proven – the jury is still out on cost (on paper, it’s a no-brainer, but the cost of the Ethernet switch in the I/O module can rival an x86 complex). Note that Ethernet with 100Gb/s per lane is not yet broadly available as of Q2 2020.

Q. We’ve seen issues with single network infrastructure from an availability perspective. Why would anyone put their business at risk in this manner? Second question: how will this work with multiple host or drive vendors, each having different specifications?

A. Customers already connect their traditional storage arrays to either single or dual fabrics, depending on their need for redundancy, and an Ethernet drive can do the same, so there is no rule that an Ethernet SSD must rely on a single network infrastructure. Some large cloud customers use data protection and recovery at the application level that spans multiple drives (or multiple EBOFs), providing high levels of data availability without needing dual-fabric connections to every JBOF or to every Ethernet drive. For the second part of the question, it seems likely that all Ethernet drives will support a standard Ethernet interface and most of them will support the NVMe-oF standard, so multiple host and drive vendors will interoperate using the same specifications. This has already been happening through UNH plugfests at the NIC/switch level. Areas where Ethernet SSDs might use different specifications include a key-value or object interface, computational storage APIs, and management tools (if the host or drive maker doesn’t follow one of the emerging SNIA specifications).

Q. Will there be a plugfest or certification test for Ethernet SSDs?

A. Those Ethernet SSDs that use the NVMe-oF interface will be able to join the existing UNH-IOL plugfests for NVMe-oF. Whether there are plugfests for other aspects of Ethernet SSDs – such as key-value or computational storage APIs – likely depends on how many customers want to use those aspects and how many SSD vendors support them.

Q. Do you anticipate any issues with mixing control (Redfish/Swordfish) and data over the same ports?

A. No, it should be fine to run control and data over the same Ethernet ports. The only reason to run management outside of the data connection would be to diagnose or power-cycle an SSD that is still alive but not responding on its Ethernet interface.
If out-of-band management of power connections is required, it could be done with a separate management Ethernet connection to the EBOF enclosure.

Q. We will require more switch ports – does that mean more investment? Also, how is the management of Ethernet SSDs done?

A. Deploying Ethernet SSDs will require more Ethernet switch ports, though it will likely decrease the needed number of other switch or repeater ports (PCIe, SAS, Fibre Channel, InfiniBand, etc.). Also, there are models showing that Ethernet SSDs have certain cost advantages over traditional storage arrays even after including the cost of the additional Ethernet switch ports. Management of the Ethernet SSDs can be done via standard Ethernet mechanisms (such as SNMP), through NVMe commands (for NVMe-oF SSDs), and through the evolving DMTF Redfish/SNIA Swordfish management frameworks mentioned by Mark Carlson during the webcast. You can find more information on SNIA Swordfish here.

Q. Is it assumed that Ethernet-connected SSDs need to implement/support congestion management, especially for cases of oversubscription in an EBOF (i.e., EBOF bandwidth is less than the sum of the underlying SSDs under it)? If so, is that standardized?

A. Yes, but both the NVMe/TCP and NVMe/RoCE protocols have congestion management as part of the protocol, so it is baked in. The eSSDs can connect either to a switch inside the EBOF enclosure or to an external top-of-rack (ToR) switch. That Ethernet switch may or may not be oversubscribed, but either way the protocol-based congestion management on the individual Ethernet SSDs will kick in if needed. And if the application does not access all the eSSDs in the enclosure at the same time, the aggregate throughput from the SSDs being used might not exceed the throughput of the switch. If most or all of the SSDs in the enclosure will be accessed simultaneously, then it could make sense to use a non-blocking switch (that will not be oversubscribed) or to rely on the protocol congestion management.

Q. Are the industry/standards groups developing application protocols (OSI layers 5 through 7) to allow customers to use existing OSes/apps without modification? If so, when will these be available, and via what delivery to the market, such as a new IETF application protocol or a consortium?

A. Applications that can directly use individual SSDs can access an NVMe-oF Ethernet SSD directly as block storage, without modification and without using any other protocols. There are also software-defined storage solutions that already manage and virtualize access to NVMe-oF arrays, and they could be modified to allow applications to access multiple Ethernet SSDs without modifications to the applications. At higher levels of the stack, the computational storage standard under development within SNIA or a key-value storage API could be other ways to allow applications to access Ethernet SSDs, though in some cases the applications might need to be modified to support the new computational storage and/or key-value APIs.

Q. In an eSSD implementation, what system element implements advanced features like data streaming and IO determinism? Maybe a better question is: does the standard support this at the drive level?

A. Any features such as these that are already part of NVMe will work on Ethernet drives.
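As a quick sanity check on the bandwidth math quoted earlier (one 50GbE port vs. a PCIe Gen3 x4 link), here is a back-of-the-envelope sketch. The constants are our own approximations of effective (post-encoding) rates, not figures from the webcast; PCIe Gen3 signals at 8 GT/s per lane with 128b/130b encoding, and Gen4 doubles the per-lane rate.

```python
# Rough effective-bandwidth comparison behind the EBOF vs. JBOF discussion.
# All figures are approximate usable rates, not raw signaling rates.

PCIE_GEN3_GB_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s per Gen3 lane
ETHERNET_EFFECTIVE_GB = {25: 2.5, 50: 5.0, 100: 10.0}  # GbE speed -> approx GB/s

def pcie_bandwidth_gbytes(lanes: int, gen: int = 3) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s.
    Each PCIe generation after Gen3 doubles the per-lane rate."""
    per_lane = PCIE_GEN3_GB_PER_LANE * (2 ** (gen - 3))
    return lanes * per_lane

# One 50GbE port (~5 GB/s) vs. one PCIe Gen3 x4 port (~4 GB/s):
print(f"50GbE:        {ETHERNET_EFFECTIVE_GB[50]:.1f} GB/s")
print(f"PCIe Gen3 x4: {pcie_bandwidth_gbytes(4):.1f} GB/s")
# Which is why a 50GbE NIC typically sits on Gen3 x8 or Gen4 x4 on the host side:
print(f"PCIe Gen3 x8: {pcie_bandwidth_gbytes(8):.1f} GB/s")
print(f"PCIe Gen4 x4: {pcie_bandwidth_gbytes(4, gen=4):.1f} GB/s")
```

The output matches the answer above: a single 50GbE port slightly out-runs a Gen3 x4 link, so matching NIC speed to host PCIe lanes is part of the JBOF bottleneck the EBOF design removes.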


The Challenges IoT Brings to Storage and Data Strategy

Alex McDonald

Apr 13, 2020


Data generated from the Internet of Things (IoT) is increasing exponentially. More and more, we are seeing compute and inference move to the edge. This is driven not only by the growing capability to generate data from sensors, devices, and people operating in the field, but also by the interaction between those devices.

This new source of IoT data and information brings with it unique challenges to the way we store and transmit data as well as the way we need to curate it. It’s the topic the SNIA Cloud Storage Technologies Initiative will tackle at our live webcast on May 14, 2020, The influence of IoT on Data Strategy. In this webcast we will look at:

  • New patterns generated by the explosion of the Internet of Things
  • How IoT is impacting storage and data strategies
  • Security and privacy issues and considerations
  • How to think about the lifecycle of our information in this new environment

The SNIA experts presenting are sure to offer new insights into the challenges IoT presents. And since this will be live, they’ll be on-hand to answer your questions on the spot. Register today. We hope you’ll join us.


Addressing Cloud Security Threats with Standards

Eric Hibbard

Apr 8, 2020

In a recent SNIA webinar, Cloud Standards: What They Are, Why You Should Care, the SNIA Cloud Storage Technologies Initiative (CSTI) highlighted some of the key cloud computing standards being developed and published by the ISO/IEC JTC 1/SC 38 (Cloud Computing and Distributed Platforms) and SC 27 (Information security, cybersecurity and privacy protection) standards committees. While ISO and IEC are not the only organizations producing cloud computing standards and specifications (e.g., ITU-T, OASIS, NIST, ENISA, SNIA, etc.), their standards, sometimes developed jointly with ITU-T, can play a role in addressing WTO Agreement on Technical Barriers to Trade (TBT) issues. More importantly, they provide a baseline of cloud terminology, concepts, guidance/requirements, and expectations that are recognized internationally.

Cloud Terminology

As highlighted in the SNIA CSTI webinar, establishing a common cloud vocabulary was an early concern because several software providers invoked a bit of cloud washing, which injected confusion into the market space. ISO/IEC 17788 | ITU-T Y.3500 (Cloud computing – Overview and vocabulary), which drew heavily on NIST Special Publication 800-145 (The NIST Definition of Cloud Computing), and ISO/IEC 17789 | ITU-T Y.3502 (Cloud computing – Reference architecture) clarified many aspects of cloud computing (e.g., key characteristics, deployment models, roles and activities, service categories, frameworks, etc.). Since their publication, however, there have been many developments and clarifications within cloud, so SC 38 is working to capture these details in a new multi-part standard, ISO/IEC 22123, with Part 1 focused on cloud terminology and Part 2 expanding the cloud concepts; look for Part 1 later in 2020.

Both ISO/IEC 17788 and ISO/IEC 17789 are available at no cost from the ISO web site (see https://standards.iso.org/ittf/PubliclyAvailableStandards/) as well as the ITU-T SG13 web site (see https://www.itu.int/en/ITU-T/studygroups/2017-2020/13/Pages/default.aspx).

Cloud Computing – SLA Framework

Another cloud standard highlighted in the SNIA CSTI webinar was the multi-part ISO/IEC 19086 (Cloud computing – Service level agreement (SLA) framework). This service- and vendor-neutral standard offers a unified set of considerations for organizations to help them make decisions about cloud adoption, as well as a common ground for comparing cloud service offerings. Part 1 establishes a set of common cloud SLA building blocks (concepts, terms, definitions, contexts) that can be used to create cloud SLAs. Part 2 defines a model for specifying metrics for cloud SLAs. Part 3 specifies the core conformance requirements for SLAs for cloud services based on Part 1, along with guidance on those requirements. Part 4 specifies conformance requirements for SLAs that address security and the protection of PII. Both ISO/IEC 19086-1 and ISO/IEC 19086-2 are available at no cost from the ISO web site (see https://standards.iso.org/ittf/PubliclyAvailableStandards/).

Security Techniques for Supplier Relationships

The next standard highlighted in the webinar was ISO/IEC 27036 (Security techniques – Information security for supplier relationships). As the title implies, this multi-part standard offers guidance on the evaluation and treatment of information risks involved in the acquisition of goods and services from suppliers (i.e., supply chain security).
  • Part 1 (Overview and concepts) provides general background information and introduces the key terms and concepts in relation to information security in supplier relationships, including information risks commonly arising from or relating to business relationships between acquirers and suppliers.
  • Part 2 (Requirements) specifies fundamental information security requirements pertaining to business relationships between suppliers and acquirers of various products (goods and services); although Part 2 contains requirements, the document explicitly states that it is not intended for certification purposes.
  • Part 3 (Guidelines for ICT supply chain security) guides both suppliers and acquirers of ICT goods and services on information risk management relating to the widely dispersed and complex supply chain (e.g., malware, counterfeit products, organizational risks); Part 3 does not address business continuity management.
  • Part 4 (Guidelines for security of cloud services) guides cloud providers and customers on gaining visibility into the information security risks associated with the use of cloud services, managing those risks effectively, and responding to risks specific to the acquisition or provision of cloud services that can have an information security impact on organizations using these services.

SC 27 has initiated efforts to revise ISO/IEC 27036, but new versions are unlikely to be available before 2023. ISO/IEC 27036-1 is available at no cost from the ISO web site (see https://standards.iso.org/ittf/PubliclyAvailableStandards/).
Cloud Security & Privacy

The last group of cloud standards covered in the webinar were a few from SC 27 related to cloud security and privacy. ISO/IEC 27017 | ITU-T X.1631 (Security techniques – Code of practice for information security controls based on ISO/IEC 27002 for cloud services) provides both cloud customers and providers with additional information security controls and implementation advice, beyond that provided in ISO/IEC 27002, in the cloud computing context; this document was not intended to certify the security of cloud service providers specifically, because they can be certified compliant with ISO/IEC 27001 like any other organization. ISO/IEC 27018 (Security techniques – Code of practice for protection of Personally Identifiable Information (PII) in public clouds acting as PII processors) expands upon ISO/IEC 27002 and provides guidance aimed at ensuring that public cloud service providers offer suitable information security controls to protect the privacy of their customers’ clients by securing the PII entrusted to them. ISO/IEC 27040 (Security techniques – Storage security) provides guidance on securing most forms of storage technology, on which cloud is often dependent, as well as specifically addressing cloud storage. SC 27 has initiated efforts to revise ISO/IEC 27040, but a new version is unlikely to be available before 2023. While not specific to cloud, the webinar also covered ISO/IEC 27701 (Security techniques — Extension to ISO/IEC 27001 and to ISO/IEC 27002 for privacy information management — Requirements and guidelines) because its recent publication is likely to have an impact on ISO/IEC 27018, especially since certified compliance with this standard is under discussion within SC 27.

But Wait, There’s More

There are several other published cloud standards, technical reports (TR), and technical specifications (TS) that were not addressed in the webinar, including:
  • ISO/IEC 17826:2012, Information technology — Cloud Data Management Interface (CDMI)
  • ISO/IEC 19941:2017, Information technology — Cloud computing — Interoperability and portability
  • ISO/IEC 19944:2017, Information technology — Cloud computing — Cloud services and devices: data flow, data categories and data use
  • ISO/IEC 22624:2020, Information technology — Cloud computing — Taxonomy based data handling for cloud services
  • ISO/IEC TR 22678:2019, Information technology — Cloud computing — Guidance for policy development
  • ISO/IEC TS 23167:2018, Information technology — Cloud computing — Common technologies and techniques
  • ISO/IEC TR 23186:2018, Information technology — Cloud computing — Framework of trust for processing of multi-sourced data
  • ISO/IEC TR 23188:2020, Information technology — Cloud computing — Edge computing landscape
Additionally, there are several other cloud projects in various stages of development, including:
  • ISO/IEC AWI TR 3445, Information technology — Cloud computing — Guidance and best practices for cloud audits
  • ISO/IEC TR 23187, Information technology — Cloud computing — Interacting with cloud service partners (CSNs)
  • ISO/IEC 23613, Information technology — Cloud computing — Cloud service metering elements and billing modes
  • ISO/IEC 23751, Information technology — Cloud computing and distributed platforms — Data sharing agreement (DSA) framework
  • ISO/IEC 23951, Information technology — Cloud computing — Guidance for using the cloud SLA metric model
Cloud standardization continues to be an active area of work for ISO and there are likely to be many more standards to come.


Object Storage Questions: Asked and Answered

John Kim

Mar 20, 2020

Last month, the SNIA Networking Storage Forum (NSF) hosted a live webcast, “Object Storage: What, How and Why.” As the title suggests, our NSF members and invited guest experts delivered foundational knowledge on object storage, explaining how object storage works, use cases, and standards. They even shared a little history on how object storage originated. If you missed the live event, you can watch the on-demand webcast or find it on our SNIAVideo YouTube Channel. We received some great questions from our live audience. As promised, here are the answers to them all.

Q. How can you get insights into object storage on premises, e.g. quota, who consumes what, auditing the data for data leaks, etc.? Is there a tool for understanding and managing object data solutions?

A. Yes, on-premises storage systems have quota management, including enforcement options (and even that can be a hard or soft enforcement). As for data leaks, object storage access and consumption are always logged, but even on-premises, the security model of the Internet should be used as well (take security seriously). If you’re unsure about where to start with security, may we suggest our Storage Security Series.

Q. Are object sizes a consideration in whether it makes sense to use object storage?

A. Yes, both the size of the objects and the overall amount of data are important, especially around egress from the public cloud. This is where modeling and testing a solution will provide valuable feedback on the performance and economics of object storage for a use case.

Q. I hear that object storage equals SLOW storage, i.e. for backup or archives, but can object storage have high performance, and if so, what use cases are there for high-performance object storage?

A. Object storage is not necessarily slow; in some cases it can be quite fast. It depends on how the application writes and reads data from object storage and on the media used for the object storage. A single monolithic file simply put in object storage will have a different behavior characteristic than a more data-specific placement in object storage. Different public cloud service levels (Hot, Cool, and Archive, for example, on Azure) make a difference in performance as well. Lastly, public cloud throttling can come into play.

Q. You mentioned that on-premises-deployed S3-compatible object storage solutions support object locking or enforced retention (like Amazon’s Object Lock feature). Cloudian’s HyperStore supports a fully compliant SEC 17a-4/FINRA Object Lock WORM solution. Does NetApp StorageGRID support Object Lock?

A. Some on-premises solutions can do object lock as well. We recommend that you take a specific look at each vendor’s support specifications for more details.

Q. What is the minimum size of data below which object storage becomes inefficient, and other types of block and file-system storage are more efficient?

A. There is no single number that would answer this question for every application.

Q. How do current customers define a common metadata format when you have a variety of data that is hard to group?

A. This would really be determined at the application level; specifically, by the application that is reading and writing data from object storage.

Q. Can object storage be enabled with versioning capabilities? Is there any limit on the total number of versions of an object?

A. In some cases (for instance, at least for S3 storage), each version is a copy of the object, and each copy has a version. Whether there are limits on the number of versions depends on the specific vendor’s solution, and it is best to check the vendor’s available information for their implementation details.

Q. At a high level, how would you differentiate on-premises object storage solutions from public cloud offerings?

A. The main differences are that with on-premises solutions the customers have full control, whereas the public cloud and service provider offerings are more globally accessible. Additionally, public offerings may have additional features, functionality, and service levels available.

Q. Fundamentally, object storage eventually lives on block storage. For things like erasure coding for geo-distributed access and protection, does the object storage engine handle that replication of data blocks on SSD/HDD storage up at the application layer?

A. Each object storage provider will ensure the availability of the data in their own way at the hardware control-plane level. The public cloud providers intentionally abstract the details of the hardware in many cases; the shared responsibility model, however, puts the ultimate control of the data on the tenant.

Q. What are the leading object storage solutions in the Gartner benchmark?

A. As recently as 2019, Gartner issued a Magic Quadrant for Distributed File Systems and Object Storage, which showcases the industry solutions for non-hyperscale object storage implementations. Many of the vendors allow reprints, and we recommend you read the full report for those implementations.
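The versioning behavior described above (each write creates a new copy with its own version, and older versions remain retrievable) can be illustrated with a toy in-memory model. This is purely an illustrative sketch, not a real S3 client; the class and method names are invented for the example.

```python
from collections import defaultdict

class ToyObjectStore:
    """Minimal in-memory model of a versioned object store: a flat key
    namespace, per-object metadata, and every PUT creating a new version."""

    def __init__(self):
        # key -> list of (version_id, data, metadata), oldest first
        self._versions = defaultdict(list)

    def put(self, key, data, metadata=None):
        """Store a new version of the object and return its version id."""
        version_id = len(self._versions[key]) + 1
        self._versions[key].append((version_id, data, dict(metadata or {})))
        return version_id

    def get(self, key, version_id=None):
        """Return the latest version by default, or a specific older one."""
        versions = self._versions[key]
        if not versions:
            raise KeyError(key)
        if version_id is None:
            return versions[-1][1]
        return versions[version_id - 1][1]

store = ToyObjectStore()
store.put("reports/q1.csv", b"rev A", {"content-type": "text/csv"})
store.put("reports/q1.csv", b"rev B")   # a new version, not an overwrite
latest = store.get("reports/q1.csv")    # b"rev B"
first = store.get("reports/q1.csv", 1)  # b"rev A" is still retrievable
```

A real object store adds durability, access control, and per-vendor version limits on top of this basic model, which is why the answer above points readers to each vendor's implementation details.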


How AI Impacts Storage and IT

Alex McDonald

Mar 13, 2020

Artificial intelligence (AI) and machine learning (ML) have had quite an impact on most industries in the last couple of years, but what about the effect on our own IT industry? On April 1, 2020, the SNIA Cloud Storage Technologies Initiative will host a live webcast, “The Impact of Artificial Intelligence on Storage and IT,” where our experts will explore how AI is changing the nature of applications, the shape of the data center, and its demands on storage. Learn how the rise of ML can deliver new insights and capabilities for IT operations. In this webcast, we will explore:
  • What is meant by Artificial Intelligence, Machine Learning and Deep Learning?
  • The AI market opportunity
  • The anatomy of an AI Solution
  • Typical storage requirements of AI and the demands on the supporting infrastructure
  • The growing field of IT operations leveraging AI (aka AIOps)
Yes, we know this is on April 1st, but it’s no joke! So, don’t be fooled and find out why everyone is talking about AI now. Register today.



Storage Networking Security Series: Protecting Data at Rest

Steve Vanderlinden

Mar 13, 2020

Contrary to popular belief, securing “data at rest” does not simply mean encrypting the data prior to storage. While it is true that data encryption plays a major role in securing “data at rest,” several other factors come into play that are just as important as encryption. This is the next topic the SNIA Networking Storage Forum (NSF) will cover in our Storage Networking Security Series. On April 29, 2020, we will host a live webcast, “Storage Networking Security Series: Protecting Data at Rest,” where we will cover the end-to-end process of securing “data at rest,” discuss the factors and trade-offs that must be considered, and outline some of the general risks that need to be mitigated. As this series shows, there are many places along the chain where a weak link can break the entire process. One of the key aspects of keeping data secure – and probably the place where most people think of security – is what happens when the data is “at rest,” or being stored on some sort of stable media. Join us as we break down the aspects of securing data at rest as part of the overall goal of understanding storage security. In particular, we’ll be looking at:
  • How the requirements for “data at rest” differ from “data in flight”
  • Understanding the costs of ransomware
  • How to protect cryptographic keys from malicious actors
  • Using key managers to properly manage cryptographic keys
  • Strengths and weaknesses of relying on government security recommendations
  • The importance of validating data backups… how stable is your media?
As the process for storing data securely is involved, this Storage Networking Security Series is dedicated to providing ongoing education for placing these very important parts into the much larger whole. We hope you are able to join us on April 29th as we spend some time on this very important piece of the puzzle. Register today.
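On the last bullet, validating data backups can be as simple as recording a cryptographic digest when the backup is written and re-checking it before you need to restore. A minimal, stdlib-only sketch (the file name here is illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large backups never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(path: Path, recorded_digest: str) -> bool:
    """True if the backup still matches the digest recorded at write time."""
    return sha256_of(path) == recorded_digest

# Record the digest at backup time, store it separately from the backup itself.
backup = Path("backup.img")
backup.write_bytes(b"important data")
recorded = sha256_of(backup)
```

Note that a plain digest detects media decay or accidental corruption, but not a malicious actor who can rewrite the digest along with the data; defending against that is where keyed integrity checks, and the key managers discussed above, come in.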


The Potential Impact of QUIC – Will it Replace TCP/IP?

Tim Lustig

Mar 3, 2020

Have you heard about QUIC? Although initially proposed as the acronym for “Quick UDP Internet Connections,” the IETF’s use of the word QUIC is not an acronym; it is merely the name of the protocol. QUIC is a new UDP-based transport protocol for the Internet, and specifically, the web. Originally designed and deployed by Google, it already makes up 35% of Google’s egress traffic, which corresponds to about 7% of all Internet traffic. Due to its ability to improve connection-oriented web application performance, it is gaining enthusiastic interest from many other large Internet players in the ongoing IETF standardization process, which is likely to lead to even greater deployment. The SNIA Networking Storage Forum (NSF) is going to explore the potential impact of QUIC in our live webcast on April 2, 2020, “QUIC – Will it Replace TCP/IP?” In this session, Lars Eggert, Chair of the QUIC Working Group within the IETF, will discuss:
  • Unique design aspects of QUIC
  • Differences to the conventional HTTP/TLS/TCP web stack
  • Early performance numbers
  • Potential side effects of a broader deployment of QUIC
It should be an insightful overview of this interesting technology. Please register today to save your spot. We hope to see you on April 2nd.
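One intuition for why QUIC differs from the conventional HTTP/TLS/TCP stack is its combined transport-and-crypto handshake. The back-of-the-envelope model below uses commonly cited round-trip counts (one RTT for the TCP handshake plus two for a full TLS 1.2 handshake, versus a single combined QUIC handshake) and deliberately ignores packet loss, TLS 1.3, and 0-RTT resumption:

```python
def time_to_first_byte(rtt_ms: float, handshake_rtts: int) -> float:
    """Milliseconds until the first response byte, counting only round trips."""
    # Handshake round trips, plus one more round trip for request/response.
    return (handshake_rtts + 1) * rtt_ms

rtt = 50.0  # an illustrative wide-area RTT in milliseconds

# TCP handshake (1 RTT) + full TLS 1.2 handshake (2 RTTs) before any data:
tcp_tls12 = time_to_first_byte(rtt, handshake_rtts=3)

# QUIC folds transport and crypto setup into one round trip:
quic = time_to_first_byte(rtt, handshake_rtts=1)
```

On this simplified model the QUIC connection returns its first byte in half the time; the webcast’s early performance numbers cover what this looks like in practice.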


Tracking Consumer Personal Data – A Major Headache for Data Administrators

Thomas Rivera

Feb 25, 2020


I was reading a recent article from ThreatPost, entitled “California’s Tough New Privacy Law and its Biggest Challenges,” and I realized that it brought up something I had been thinking about even before the California Consumer Privacy Act (CCPA) was enacted at the beginning of this year (2020).

First, it is now well understood that the CCPA* mandates strict requirements for companies to notify users about how their data will be used, along with giving customers the ability to “Opt Out” and request that their data be deleted, mirroring some of the primary aspects of the EU GDPR legislation known as the “right to be forgotten.”

*CCPA applies to companies that are storing 50,000+ records worth of consumer data.

The interesting part is that companies may have quite a hard time keeping track of the actual stored location of the user data they initially collected.

The example cited in the article is the difficulty of tracking data that has been collected and then placed in a database, or even given to a third party to carry out a marketing campaign. It may be a marketing database, or just a month-long program offering a special promotion to encourage people to register; once the campaign is over, the data is hard to find, especially the older it is.

There are likely to be many such examples where consumer data does not carry sophisticated tracking, making it difficult to prove compliance when the legislation demands it. Businesses will be expected to show:

1.    How consumer data is going to be used

2.    How consumer data is going to be protected while being used

3.    How consumer data will be deleted

4.    Proof of all the above

Ultimately, how well a company tracks the data it collects, along with the associated processes and procedures to prove that these activities are being performed, will dictate their success or failure in complying with the CCPA.
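At its core, the tracking problem described above is a data-inventory problem: knowing, for each consumer, every location where a copy of their data landed, and being able to prove what was done with it. A minimal sketch of such an inventory with an opt-out deletion path (all names here are hypothetical, and this is purely illustrative, not a compliance tool):

```python
from collections import defaultdict

class ConsumerDataInventory:
    """Track where each consumer's data was stored, so deletion
    requests can be fulfilled and the fulfillment can be proven."""

    def __init__(self):
        self._locations = defaultdict(set)  # consumer_id -> storage locations
        self._audit_log = []                # proof of actions taken

    def record_copy(self, consumer_id, location):
        """Log every place a copy of this consumer's data is stored."""
        self._locations[consumer_id].add(location)
        self._audit_log.append(("stored", consumer_id, location))

    def handle_opt_out(self, consumer_id):
        """Return every location that must be purged, and log the deletions."""
        locations = sorted(self._locations.pop(consumer_id, set()))
        for loc in locations:
            self._audit_log.append(("deleted", consumer_id, loc))
        return locations

inventory = ConsumerDataInventory()
inventory.record_copy("user-123", "crm-db")
inventory.record_copy("user-123", "q3-promo-campaign")  # the copy that gets forgotten
```

The hard part in practice is the `record_copy` discipline: every hand-off to a third party or short-lived campaign database has to be recorded at the moment it happens, or the inventory, and therefore the proof of deletion, is incomplete.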

