
Business Resiliency in a Kubernetes World

Jim Fister

May 12, 2020

At the 2018 KubeCon keynote, Monzo Bank explained the potential risk of running a single massive Kubernetes cluster. A minor conflict between etcd and Java led to an outage during one of their busiest business days, prompting questions like, "If a cluster goes down, can our business keep functioning?" Understanding the business continuity implications of multiple Kubernetes clusters is an important topic and a key area of debate. It's an opportunity for the SNIA Cloud Storage Technologies Initiative (CSTI) to host "A Multi-tenant, Multi-cluster Kubernetes 'Datapocalypse' is Coming" – a live webcast on June 23, 2020, where Kubernetes expert Paul Burt will dive into:
  • The history of multi-cluster Kubernetes
  • How multi-cluster setups could affect data heavy workloads (such as multiple microservices backed by independent data stores)
  • Managing multiple clusters
  • Keeping the business functioning if a cluster goes down
  • How to prepare for the coming “datapocalypse”
Multi-cluster Kubernetes that provides robustness and resiliency is rapidly moving from "best practice" to "must have." Register today to save your spot on June 23rd to learn more and have your questions answered. Need a refresher on Kubernetes? Check out the CSTI's three-part Kubernetes in the Cloud series to get up to speed.
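As a toy illustration of the "keeping the business functioning if a cluster goes down" question above: with multiple clusters, some controller must decide where traffic goes when health probes fail. The cluster names, probe format, and preference-order policy below are invented for this sketch; they are not taken from the webcast or from any real multi-cluster controller.

```python
# Toy failover selection: given health-probe results from several clusters,
# pick the first healthy cluster in preference order.

def pick_serving_cluster(health, preferred):
    """Return the first healthy cluster in preference order, or None."""
    for name in preferred:
        if health.get(name):
            return name
    return None

# Primary cluster down: traffic should move to the first healthy standby.
health = {"us-east": False, "us-west": True, "eu-central": True}
print(pick_serving_cluster(health, ["us-east", "us-west", "eu-central"]))  # us-west
```

Real multi-cluster setups push this decision into DNS, a global load balancer, or a service mesh, but the business question is the same: is there always a healthy cluster to fall back to?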


J Metz

May 12, 2020


There's a lot that goes into effective key management. In order to properly use cryptography to protect information, one has to ensure that the associated cryptographic keys themselves are also protected. Careful attention must be paid to how cryptographic keys are generated, distributed, used, stored, replaced and destroyed in order to ensure that the security of cryptographic implementations is not compromised.

It's the next topic the SNIA Networking Storage Forum is going to cover in our Storage Networking Security Webcast Series. Join us on June 10, 2020 for Key Management 101 where security expert and Dell Technologies distinguished engineer, Judith Furlong, will introduce the fundamentals of cryptographic key management.

Key (see what I did there?) topics will include:

  • Key lifecycles
  • Key generation
  • Key distribution
  • Symmetric vs. asymmetric key management, and
  • Integrated vs. centralized key management models
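The lifecycle stages in the list above can be sketched as a toy key store. The class and method names here are illustrative inventions, and a real key manager would keep key material in an HSM or KMS rather than in process memory; this sketch only shows the generate/rotate/destroy flow.

```python
# Minimal sketch of a symmetric key lifecycle (generate -> use -> rotate ->
# destroy) using only the standard library.
import secrets

class ToyKeyStore:
    def __init__(self):
        self._keys = {}       # key_id -> key bytes
        self._active = None   # key_id used for new encryptions

    def generate(self):
        key_id = secrets.token_hex(8)
        self._keys[key_id] = secrets.token_bytes(32)  # 256-bit key
        self._active = key_id
        return key_id

    def rotate(self):
        """Make a fresh key active; old keys remain for decryption only."""
        return self.generate()

    def destroy(self, key_id):
        """End of life: remove the key material entirely."""
        del self._keys[key_id]

store = ToyKeyStore()
old = store.generate()
new = store.rotate()     # new data now uses the fresh key
store.destroy(old)       # anything encrypted only under 'old' is unrecoverable
```

The last line is the point of the "destruction" stage: destroying a key is also an irreversible way of destroying the data it protects, which is why lifecycle policy matters as much as the cryptography itself.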

In addition, Judith will dive into relevant standards, protocols and industry best practices. Register today to save your spot for June 10th. We hope to see you there.



SNIA Exhibits at OCP Virtual Summit May 12-15, 2020 – SFF Standards featured in Thursday Sessions

Marty Foltyn

May 6, 2020


All SNIA members and colleagues are welcome to register to attend the free online Open Compute Project (OCP) Virtual Summit, May 12-15, 2020.

SNIA SFF standards will be represented in the following presentations (all times Pacific):

 Thursday, May 14:

The SNIA booth (link live at 9:00 am May 12) will be open from 9:00 am to 4:00 pm each day of the OCP Summit and will feature Chat with SNIA Experts: scheduled times when SNIA volunteer leaders will answer questions on SNIA technical work, education, standards and adoption, and vision for 2020 and beyond.

The current schedule is below – and is continually being updated with more speakers and topics – so be sure to bookmark this page.

And let us know if you want to add a topic to discuss!

Tuesday, May 12:

  • 10:00 am - 11:00 am - SNIA Education, Standards, and Technology Adoption, Erin Weiner, SNIA Membership Services Manager
  • 11:00 am - 12:00 pm - Uniting Compute, Memory, and Storage, Jenni Dietz, Intel/SNIA CMSI Co-Chair
  • 11:00 am – 12:00 pm – Computational Storage standards and direction, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 11:00 am - 12:00 pm - SNIA Swordfish™ and Redfish Standards and Adoption, Don Deel, NetApp, SNIA SMI Chair
  • 12:00 pm – 1:00 pm – SSDs and Form Factors, Cameron Brett, KIOXIA, SNIA SSD SIG Co-Chair
  • 12:00 pm - 1:00 pm - Persistent Memory Standards and Adoption, Jim Fister, SNIA Persistent Memory Enablement Director
  • 1:00 pm – 2:00 pm - SNIA Technical Activities and Direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 1:00 pm - 2:00 pm - SNIA Education, Standards, Promotion, Technology Adoption, Michael Meleedy, SNIA Business Operations Director

Wednesday, May 13:

  • 11:00 am – 12:00 pm – SNIA Education, Standards, and Technology Adoption, Arnold Jones, SNIA Technical Council Managing Director
  • 11:00 am – 12:00 pm – Persistent Memory Standards and Adoption, Jim Fister, SNIA Persistent Memory Enablement Director
  • 12:00 pm – 1:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 12:00 pm - 1:00 pm - SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 12:00 pm - 1:00 pm - SSDs and Form Factors, Jonmichael Hands, Intel, SNIA SSD SIG Co-Chair
  • 1:00 pm – 2:00 pm – SNIA NVMe and NVMe-oF standards and direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 1:00 pm - 2:00 pm - SNIA Swordfish™ and Redfish standards and direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair, and Barry Kittner, Intel, SNIA SMI Marketing Chair

Thursday, May 14

  • 11:00 am - 12:00 pm - Uniting Compute, Memory, and Storage, Jenni Dietz, Intel, SNIA CMSI Co-Chair
  • 11:00 am - 12:00 pm - SNIA Education, Standards, and Technology Adoption, Erin Weiner, SNIA Membership Services Manager
  • 12:00 pm – 1:00 pm – SNIA Swordfish™ and Redfish standards activities and direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair, and Barry Kittner, Intel, SNIA SMI Marketing Chair
  • 12:00 pm - 1:00 pm - SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 1:00 pm - 2:00 pm - SNIA NVMe and NVMe-oF standards activities and direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair

Friday, May 15: 

  • 11:00 am – 12:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 11:00 am - 12:00 pm - SNIA Swordfish™ and Redfish standards and direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair
  • 12:00 pm – 1:00 pm – SNIA NVMe and NVMe-oF standards activities and direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 1:00 pm - 2:00 pm - SNIA Education, Standards and Technology Adoption, Michael Meleedy, SNIA Business Services Director


Register today to attend the OCP Virtual Summit. Registration is free for all attendees and is open for everyone, not just those who were registered for the in-person Summit. The SNIA exhibit will be found here once the Summit is live.
 
Please note, the virtual summit will be a 3D environment that is best experienced on a laptop or desktop computer; however, a simplified mobile-responsive version will also be available for attendees. No additional hardware, software or plugins are required.



Your Questions Answered on Persistent Memory Programming

Jim Fister

May 5, 2020

On April 14, the SNIA Compute Memory and Storage Initiative (CMSI) held a webcast asking the question: Do You Wanna Program Persistent Memory? We had some answers in the affirmative from participants in the NVDIMM Programming Challenge. The Challenge utilizes a set of systems SNIA provides for the development of applications that take advantage of persistent memory. These systems support persistent memory types that can utilize the SNIA Persistent Memory Programming Model and that are also supported by the Persistent Memory Development Kit (PMDK) libraries.

The NVDIMM Programming Challenge seeks innovative applications and tools that showcase the features persistent memory will enable. Submissions are judged by a panel of SNIA leaders and individual contest sponsors. Judging is scheduled at the convenience of the submitter and judges, and done via conference call. The program or results should be able to be visually demonstrated using remote access to a PM-enabled server.

Challenge participant Steve Heller of Chrysalis Software joined the webcast to discuss the Three Misses Hash Table, which uses persistent memory to store large amounts of data and greatly increases the speed of data access for programs that use it. A small number of questions came up during the webcast; this blog answers them, along with questions stimulated by our conversation.

Q: What are the rules/conditions for accessing the SNIA PM hardware test system to get hands-on experience? What kind of PM hardware is there? Windows/Linux?

A: Persistent memory, such as NVDIMM or Intel Optane memory, enables many new capabilities in server systems. The speed of storage in the memory tier is one example, as is the ability to hold and recover data over system or application resets. The programming challenge is seeking innovative applications and tools that showcase the features persistent memory will enable. The specific systems for the different challenges will vary depending on the focus. The current system is built using NVDIMM-N. Users are given their own Linux container with simple examples in a web-based interface, and can also work directly in the Linux shell if they are comfortable with it.

Q: During the presentation there was a discussion of why it is important to look for "corner cases" when developing programs that use persistent memory instead of regular storage. Would you elaborate on this?

A: As you can see in the chart at the top of the blog post, persistent memory significantly reduces the time needed to access a piece of stored data. As a result, the time the program takes to process the data becomes much more important. Programs accustomed to slow data retrieval can occasionally absorb a "processing" performance hit, such as an extended data sort; with persistent memory they no longer can. Simply porting file system accesses to persistent memory could result in strange performance bottlenecks, and potentially introduce race conditions or obscure bugs in the software. The reward for fixing these issues is significant performance, as demonstrated in the webcast.

Q: Can you please comment on the scalability of your HashMap implementation, both on a single socket and across multiple sockets?

A: The implementation is single-threaded. Multi-threading poses lots of overhead and opportunity for mistakes, and it is easy to saturate the performance that only persistent memory can provide, so there is likely no benefit to the hash table in going multi-threaded. It is not impossible – one could, for example, use a hash table per volume. I have run across multiple sockets that were slower, with an 8% to 10% variation in performance in an earlier version. There are potential cache-pollution issues with going multi-threaded as well. The existing implementation will scale from one to 15 billion records, and we would see the same behavior given enough storage. The implementation does not use much RAM if it does not cache the index; it uses only about 100 MB of RAM for test data.

Q: How would you compare your approach to having smarter compilers that are aware of "preferred" addresses to exploit faster memories?

A: The Three Misses implementation invented three new storage management algorithms. I don't believe that compilers can invent new storage algorithms. Compilers are much improved since their beginnings 50+ years ago, when you could not mix integers and floating-point numbers, but they cannot figure out how to minimize accesses. Smart compilers will probably not help solve this specific problem.

The SNIA CMSI is continuing its efforts on persistent memory programming. If you're interested in learning more, contact us at pmhackathon@snia.org for updates on existing contests or programming workshops. Additionally, SNIA would be willing to work with you to host physical or virtual programming workshops. Please view the webcast and contact us with any questions.
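To make the byte-addressable programming model concrete: PMDK maps persistent memory into the application's address space so that ordinary loads and stores (plus flushes) become persistence. As a rough stand-in, this sketch memory-maps an ordinary file and updates a single counter in place, with no block-sized read-modify-write. It illustrates the programming model only; it is not PMDK, and a real PM program would use PMDK's transactional APIs to stay crash-consistent.

```python
# Update one 64-bit counter in place through a memory mapping.
import mmap
import os
import struct
import tempfile

path = os.path.join(tempfile.gettempdir(), "pm_sketch.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 8)                        # room for one 64-bit counter

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 8) as m:
        (count,) = struct.unpack_from("<Q", m, 0)
        struct.pack_into("<Q", m, 0, count + 1)  # in-place "store"
        m.flush()                                # analogous to a PM flush/fence

with open(path, "rb") as f:
    print(struct.unpack("<Q", f.read())[0])
```

The `flush()` call is where the corner cases discussed above live: between the store and the flush, a crash can leave the update only partially durable, which is exactly the class of bug that porting file-based code to PM can expose.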


Standards Watch: Storage Security Update

Eric Hibbard

May 5, 2020


The world of storage security standards continues to evolve. In fact, it can be hard to keep up with all that’s happening. Here’s a quick recap of SNIA’s involvement and impact on some notable storage security work – past, present and future.

The Storage Security ISO/IEC 27040 standard provides security techniques and detailed technical guidance on how organizations can define an appropriate level of risk mitigation by employing a well-proven and consistent approach to the planning, design, documentation, and implementation of data storage security. SNIA has been a key industry advocate of this standard by providing many of the concepts and best practices dating back to 2006. Recently, the SNIA Storage Security Technical Work Group (TWG) authored a series of white papers that explored a range of topics covered by the ISO/IEC 27040 standard. 

At the recent ISO/IEC JTC 1/SC 27 (Information security, cybersecurity, and privacy protection) meeting, there were several developments that are likely to have impacts on the storage industry and consumers of storage technology in the future. In particular, three projects are worth noting: 

  • The first is ISO/IEC 27050-4 (Information technology — Electronic discovery — Part 4: Technical readiness), which includes guidance on data/ESI preservation and retention that was derived in part from the SNIA Storage Security: Data Protection White Paper; this project progressed to draft international standard (DIS) and is expected to be published in early 2021. 
  • Next, steps were taken to restart the revision of the ISO/IEC 27031 (Information technology — Security techniques — Guidelines for information and communication technology readiness for business continuity) with an initial focus on what constitutes ICT readiness; recent events (i.e., Covid-19) have highlighted the need for this readiness. SNIA has an obvious interest in this area and anticipates offering contributions. 
  • Last, but not least, work has started on the revision of the ISO/IEC 27040 (Information technology — Storage security) standard. The decision to include requirements (i.e., allowing for conformance) has already been made and will likely increase the importance of the standard when it is published. 

SNIA has published several technical white papers associated with the current ISO/IEC 27040 standard, and it is expected that SNIA-identified issues and suggestions will be addressed in the revised standard. For example, concerns raised about the staleness of the media-specific guidance for sanitization have already resulted in IEEE initiating a project authorization request (PAR) for a new standard focused solely on storage sanitization.

Interested in getting involved in this important work? You can contact the SNIA Storage Security TWG and/or the SNIA Data Protection and Privacy Committee by sending an email to the SNIA Technical Council Managing Director at tcmd@snia.org.


Disaster Recovery in Times of Crisis – What Role Can the Cloud Play?

Mounir Elmously

Apr 30, 2020


While humanity has always hoped to avoid disasters, the last few months have shown that we live in uncertain times, and that the impact of global warming and pandemics can reach all corners of the globe.

Learning to assess the risks and build resilience for the future could be as simple as finding an alternative way to perform vital tasks, or having additional resources to perform those same tasks.

Following advances in networking, processor and storage technologies, and as the IT industry adopts more standardized components (e.g., Ethernet, x86, Linux), information technology has become increasingly dependent on the cloud. The cloud has its advantages in flexibility and cost, especially where IT resources are externally hosted, eliminating the need to build and sustain in-house IT infrastructure and the associated lead time to build it.

The Evolution of IT Disaster Recovery

Since the inception of the computer in the middle of the last century, data protection has been an obsession for IT organizations, and we now find ourselves with data-sensitive applications where timely access to data is a business imperative. Traditional data protection (backup) has many issues and constraints that, in many situations, cannot meet organizations' recoverability requirements in terms of speed of IT service resumption and potential loss of in-flight data at the time of disaster. Additionally, in many disaster scenarios the IT infrastructure itself might become unavailable or inaccessible; hence, recovering from backup requires a new set of hardware.

Investing in disaster recovery was often seen as an unjustifiable expense, considering the potentially prohibitive cost of standing up an alternate IT infrastructure that in most cases sits idle in expectation of a disaster.

Over time, many technologies emerged with innovative approaches for creating recovery tiers and prioritizing those tiers based on how critical each business application is. Regardless of these efforts, however, disaster recovery remained a high-cost item that was first to be dropped from the budget.

However, with the introduction of the internet, followed by eCommerce, almost all organizations changed their business model to provide non-stop services to their clients. This put additional pressure on ensuring undisrupted business even following a major disaster.

While the term "disaster recovery as a service" is relatively new, the concept has been broadly accepted as a low-cost option for organizations to gain some level of guarantee that business will resume. The challenge is that when a major event occurs, multiple organizations can be impacted, and if they subscribe to the same provider, they end up competing for the same infrastructure resources.

Since Infrastructure as a Service (the cloud) has evolved and matured, it is now readily available in "utility-like" models that eliminate the scalability limitations, and organizations can look to extend their IT disaster recovery capabilities with their preferred cloud provider.

There are a number of factors that have accelerated this change:

  • Social media gives organizations high visibility: a simple glitch in an organization's ability to conduct business becomes a public media incident that can severely damage its reputation.
  • Dramatic enhancements in WAN and LAN technology, along with consistent reductions in WAN service costs, gave organizations a lower cost of entry into off-site disaster recovery.
  • The commoditization of server and storage hardware, coupled with virtualization, has resulted in much lower costs to build an alternate infrastructure, which has led to a proliferation of service organizations with major household brands.

So, in most situations, even if an organization has an active disaster recovery site, keeping it current as the IT infrastructure undergoes technology refreshes is extremely difficult, and potentially cost-prohibitive to manage, support and keep operational.

In 2002, the first infrastructure-as-a-service provider (the cloud as we now know it) was introduced, and the market has expanded rapidly to provide scalable, secure infrastructure services at an extremely low-cost entry point.

The cloud concept has introduced two major values that are pivotal for disaster recovery:

  1. Infinite scalability: eliminates contention for infrastructure resources between subscribers during a large-scale disaster incident, so organizations can always access infrastructure when needed.
  2. On-demand: organizations are charged on an hourly or daily basis only while they use the cloud infrastructure; upon completing recovery back to the original data center, there is no further billing and no need for operational support beyond the use case.

These values have given small and medium organizations a smooth entry into disaster recovery with virtually zero capital expenditure.

It all sounds great; however, there are still issues holding back adoption:

  1. Security: with on-demand cloud services comes multi-tenancy, where a DR-as-a-service client shares hardware with other clients. This creates a level of concern such that, in many cases, CIOs decide to keep DR under their own control. The alternative is dedicated cloud hardware, which defeats the purpose of "on-demand" and brings DR TCO back to the traditional cost of ownership.
  2. Data locality, privacy legislation and safe harbor of data: many organizations require assurance that their data will remain within a certain geography. Under the cloud concept, data locality can become uncertain, depending on the cloud provider.
  3. Legacy infrastructure: despite the ongoing transformation of IT, most organizations still run many business applications on legacy infrastructure (e.g., z/OS, AIX, Solaris, HP-UX).
  4. Storage-based replication: since its introduction in the late nineties, storage-array-based replication has become the de facto standard for data replication and the foundation of most in-house disaster recovery. It is tightly coupled with the underlying infrastructure, which creates vendor lock-in and constitutes a major obstacle to any cloud-based DR, since cloud vendors do not support proprietary hardware.

These obstacles have kept many regulated industries (e.g., financial services, healthcare, utilities) away from DR as a service for now and for the foreseeable future.

So, what will happen in the future? As discussed above, disaster recovery to the cloud is an eye-opener for the efficient disaster recovery plan that is the ultimate goal of every organization, regardless of industry sector.

As all industry sectors undergo technology transformation, they will need to keep their eyes on the cloud as the path of least financial resistance, especially for disaster recovery. As the technology matures, every organization should also consider the following:

  1. Consider migrating workloads from legacy infrastructure to hypervisor-based technology. In some situations this transformation might be extremely challenging and may extend over several years.
  2. Consider minimizing dependency on hardware replication and evaluate software-based replication (e.g., hypervisor-based replication, database replication). This frees the organization from hardware vendor lock-in and paves the way to cloud-based disaster recovery.
  3. Carefully consider the regulatory compliance and security aspects of placing sensitive data in the cloud, and deploy encryption for data in flight and data at rest.


About the SNIA Data Protection & Privacy Committee

The SNIA Data Protection & Privacy Committee (DPPC) exists to further the awareness and adoption of data protection technology, and to provide education, best practices and technology guidance on all matters related to the protection and privacy of data.

Within SNIA, data protection is defined as the assurance that data is usable and accessible for authorized purposes only, with acceptable performance and in compliance with applicable requirements. The technology behind data protection will remain a primary focus for the DPPC. However, there is now also a wider context, driven by increasing legislation to keep personal data private. The term data protection also extends into resilience to cyber attacks and threat management. To join the DPPC, log in to SNIA and request to join the DPPC Governing Committee.

Written by Mounir Elmously, Governing Committee Member, SNIA Data Protection & Privacy Committee and Executive, Advisory Services, Ernst & Young, LLP

Why did I join SNIA DPPC?

With my long history with storage and data protection technologies, along with my current roles and responsibilities, I can bring my expertise to bear on storage industry education and drive industry awareness. During my days with storage vendors, I did not have the freedom to critique specific storage technologies or products. With SNIA I enjoy the independence to critique, and to use my knowledge and expertise to help others improve their understanding.


What’s New with SNIA Swordfish™?


If you haven’t caught the new wave in storage management, it’s time to dive in and catch up on the latest developments of the SNIA Swordfish specification.

First, a quick recap.

SNIA Swordfish is the storage extension to the DMTF Redfish® specification providing customer-centric, easy integration of scalable storage management solutions. This unified, RESTful approach provides IT administrators, DevOps and others the ability to manage storage equipment, data services and servers in converged, hyperconverged, hyperscale or cloud infrastructure environments as well as traditional data centers.

So, what’s new?

SNIA Swordfish v1.1.0b is now a SNIA Technical Position, available for immediate download here. This version of the specification has been updated to include Features and Profiles.  There are significant enhancements in volumes, storage pools and consistency groups to support management for all scale of devices, from direct-attach to external storage. We have also moved the class of service functionality to become a value-added, optional feature set. The 1.1.0b version also includes a new type of schema that enables the Redfish Device Enablement (RDE) over PLDM Specification. Please see the release bundle for full v1.1.0b change details.
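Because Swordfish resources are plain JSON over a RESTful interface, a client is little more than an HTTP GET plus JSON parsing. The sketch below skips the HTTP step and parses a hand-written sample shaped like a Redfish/Swordfish Volume; the property names follow the public schema, but the values (and the helper function) are invented for illustration.

```python
# Parse a sample Swordfish-style Volume resource and summarize it.
import json

sample_volume = json.loads("""
{
  "@odata.id": "/redfish/v1/Storage/1/Volumes/1",
  "Id": "1",
  "Name": "Virtual Disk 1",
  "CapacityBytes": 1099511627776
}
""")

def describe_volume(vol):
    """Return a one-line summary of a Volume resource."""
    gib = vol["CapacityBytes"] / 2**30
    return f'{vol["Name"]} ({gib:.0f} GiB) at {vol["@odata.id"]}'

print(describe_volume(sample_volume))
# Virtual Disk 1 (1024 GiB) at /redfish/v1/Storage/1/Volumes/1
```

Against a live service, the same dictionary would come from a GET on the volume's `@odata.id` URL; the uniform resource shape is what makes the "easy integration" described above possible.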

Get involved!

If you’re still feeling left behind, we’d love to take you deep-sea fishing. There are several ways you can speed your Swordfish implementation:

  • Join SNIA’s Scalable Storage Management Technical Working Group (SSM TWG) and help shape future revisions of Swordfish. Send us an email at storagemanagement@snia.org.
  • Download the Swordfish User’s Guide here and the Swordfish Practical Guide here.
  • Watch the Swordfish School Videos on our YouTube channel here.
  • Check out the Swordfish Open Source Tools on SNIA’s GitHub here.
  • Your one-stop place for all Swordfish information is http://www.snia.org/swordfish.


Feedback Needed on New Persistent Memory Performance White Paper

Marty Foltyn

Apr 28, 2020


A new SNIA Technical Work draft is now available for public review and comment – the SNIA Persistent Memory Performance Test Specification (PTS) White Paper.

A companion to the SNIA NVM Programming Model, the SNIA PM PTS White Paper (PM PTS WP) focuses on describing the relationship between traditional block IO NVMe SSD based storage and the migration to Persistent Memory block and byte addressable storage.

The PM PTS WP reviews the history and need for storage performance benchmarking, beginning with Hard Disk Drive corner case stress tests, the increasing gap between CPU/SW/HW Stack performance and storage performance, and the resulting need for faster storage tiers and storage products.

The PM PTS WP discusses the introduction of NAND Flash SSD performance testing that incorporates pre-conditioning and steady state measurement (as described in the SNIA Solid State Storage PTS), the effects of – and need for testing using – Real World Workloads on Datacenter Storage (as described in the SNIA Real World Storage Workload PTS for Datacenter Storage), the development of the NVM Programming model, the introduction of PM storage and the need for a Persistent Memory PTS.
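The pre-conditioning and steady-state methodology mentioned above can be sketched as a simple window check. The SNIA SSS PTS (roughly) declares steady state when, over a measurement window, the data excursion stays within 20% of the window average and the total change implied by the best linear fit stays within 10% of it; treat the thresholds and window handling here as an approximation of the spec, not a reference implementation.

```python
# Check whether a window of per-round throughput samples is in steady state.

def is_steady_state(window, data_tol=0.20, slope_tol=0.10):
    n = len(window)
    avg = sum(window) / n
    # Data excursion: max-min must stay within data_tol of the average.
    if max(window) - min(window) > data_tol * avg:
        return False
    # Least-squares slope over x = 0..n-1.
    xbar = (n - 1) / 2
    num = sum((i - xbar) * (y - avg) for i, y in enumerate(window))
    den = sum((i - xbar) ** 2 for i in range(n))
    slope = num / den
    # Total change implied by the fit across the window.
    return abs(slope * (n - 1)) <= slope_tol * avg

print(is_steady_state([100, 101, 99, 100, 101]))   # True: flat IOPS
print(is_steady_state([140, 130, 120, 110, 100]))  # False: still settling
```

This is why pre-conditioning matters: fresh-out-of-box devices trend downward for many rounds, and measurements taken before the window passes a check like this one are not repeatable.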

The PM PTS focuses on the characterization, optimization, and test of persistent memory storage architectures – including 3D XPoint, NVDIMM-N/P, DRAM, Phase Change Memory, MRAM, ReRAM, STRAM, and others – using both synthetic and real-world workloads. It includes test settings, metrics, methodologies, benchmarks, and reference options to provide reliable and repeatable test results. Future tests would use the framework established in the first tests.

The SNIA PM PTS White Paper targets storage professionals involved with:

  1. Traditional NAND Flash based SSD storage over the PCIe bus;
  2. PM storage utilizing PM aware drivers that convert block IO access to loads and stores; and
  3. Direct in-memory storage and applications that take full advantage of the speed and persistence of PM storage and technologies.

The PM PTS WP discussion of the differences between byte and block addressable storage is intended to help professionals optimize application and storage technologies, and to help storage professionals understand the market and technical roadmap for PM storage.

Eden Kim, chair of the SNIA Solid State Storage TWG and a co-author, explained that SNIA is seeking comment from Cloud Infrastructure, IT, and Data Center professionals looking to balance server and application loads, integrate PM storage for in-memory applications, and understand how response time and latency spikes are being influenced by applications, storage and the SW/HW stack.

The SNIA Solid State Storage Technical Work Group (TWG) has published several papers on performance testing and real-world workloads, and the SNIA PM PTS White Paper includes both synthetic and real-world workload tests. The authors are seeking comment on the PM PTS WP from industry professionals, researchers, academics and other interested parties, and welcome anyone interested in participating in development of the PM PTS.

Use the SNIA Feedback Portal to submit your comments.

