
Show Your Persistent Stuff – and Win!

Marty Foltyn

Oct 4, 2019


Persistent Memory software development has been a source of server innovation for the last couple of years.  The availability of the open source PMDK libraries (http://pmem.io/pmdk/) has provided a common interface for developing across PM types as well as server architectures.  Innovation beyond PMDK also continues to grow, as more experimentation yields open and closed source products and tools.

However, there is still hesitation to develop without physical systems.  While systems are available from a variety of outlets, the costs of those systems and the memory can still be a barrier for small developers.  Recognizing that there’s a need to grow both outlets and opportunity, the Storage Networking Industry Association (SNIA) is announcing the availability of NVDIMM-based Persistent Memory systems for developers, along with a programming challenge.

Interested developers can get credentials to access systems in the SNIA Technology Center in Colorado Springs, CO for development and testing of innovative applications or tools that can utilize persistent memory.  The challenge is open to any developer or community interested in testing code.

Participants will have the opportunity to demonstrate their output to a panel of judges.  The most innovative solutions will have a showcase opportunity at upcoming SNIA events in 2020. The first opportunity will be the SNIA Persistent Memory Summit.  Judges will be looking for applications and tools that best highlight the values of persistent memory, including persistence in the memory tier, improved performance of applications using PM, and crash resilience and recovery of data across application or system restarts.

To register, contact SNIA at pmhackathon@snia.org.  The challenge will be available starting immediately through at least the first half of 2020.

Check out the Persistent Programming in Real Life (PIRL) blog as well for information on this challenge and other upcoming activities.


What Does Software Defined Storage Mean for Storage Networking?

Tim Lustig

Sep 27, 2019


Software defined storage (SDS) is growing in popularity in both cloud and enterprise accounts. But why is it appealing to some customers, and what is the impact on storage networking? Find out at our SNIA Networking Storage Forum webcast on October 22, 2019, “What Software Defined Storage Means for Storage Networking,” where our experts will discuss:

  • What makes SDS different from traditional storage arrays?
  • Does SDS have different networking requirements than traditional storage appliances?
  • Does SDS really save money?
  • Does SDS support block, file and object storage access?
  • How is data availability managed in SDS vs. traditional storage?
  • What are potential issues when deploying SDS?

Register today to save your spot on Oct. 22nd.  This event is live, so, as always, our SNIA experts will be on hand to answer your questions.


Answering Your Kubernetes Storage Questions

Paul Burt

Sep 25, 2019


Our recent SNIA Cloud Storage Technologies Initiative (CSTI) Kubernetes in the Cloud series generated a lot of interest, but also more than a few questions. The interest is a great indicator of Kubernetes’ rising profile in the world of computing.

Following the third episode in the series, we’ve chosen a few questions that might help to better explain (or bring additional context to) our presentation. This post is our answer to your very important questions.

If you’re new to this webcast series about running Kubernetes in the cloud, you can catch the three parts here:

The rest of this article includes your top questions from, and our answers to, Part 3:

Q. What databases are best suited to run on Kubernetes?

A. The databases best suited are the ones that have leaned into the container revolution (which has been happening over the past five years). Typically, those databases have a native clustering capability. For example, CockroachDB has great documentation and examples available showing how to set it up on Kubernetes.
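To make that concrete, here is a minimal, hypothetical sketch using the official Kubernetes Python client to declare a clustered database as a StatefulSet with per-replica persistent storage. It is not the CockroachDB project’s recommended deployment (its documentation and manifests cover the full production setup); the image tag, port, replica count, and storage size below are illustrative placeholders.

```python
# Sketch: a three-node clustered database as a StatefulSet with a
# volumeClaimTemplate, so each replica gets its own persistent volume.
# Assumes the `kubernetes` Python client and a local kubeconfig; all
# names, the image tag, and sizes are placeholders for illustration.
from kubernetes import client, config

config.load_kube_config()

sts = client.V1StatefulSet(
    api_version="apps/v1",
    kind="StatefulSet",
    metadata=client.V1ObjectMeta(name="demo-db"),
    spec=client.V1StatefulSetSpec(
        service_name="demo-db",          # headless Service (must exist separately)
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-db"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-db"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="db",
                    image="cockroachdb/cockroach:v19.1.4",   # placeholder tag
                    ports=[client.V1ContainerPort(container_port=26257)],
                    volume_mounts=[client.V1VolumeMount(
                        name="datadir",
                        mount_path="/cockroach/cockroach-data")],
                )
            ]),
        ),
        # Each replica gets its own PersistentVolumeClaim from this template.
        volume_claim_templates=[client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="datadir"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                resources=client.V1ResourceRequirements(
                    requests={"storage": "10Gi"}),
            ),
        )],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=sts)
```

In a real deployment, the headless Service named in service_name would also need to exist, and the project’s own manifests handle the cluster-join configuration that this sketch omits.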

On the other hand, Vitess provides clustering capabilities on top of MariaDB or MySQL to better enable them to run on Kubernetes. It has been accepted as an incubation project by the Cloud Native Computing Foundation (CNCF), so there is a lot of development expertise behind it to ensure it runs smoothly on Kubernetes.

Traditional relational databases like Postgres or single-node databases like Neo4j are fine for Kubernetes. The big caveat is that these are not designed to easily scale. That means the responsibility is on you to understand the limits of that DB, and any services it might support. Scaling these pre-cloud solutions tends to require sharding, or other similarly tricky techniques.

As long as you maintain a comfortable distance from the point where you’d need to scale a pre-cloud DB, you should be fine.

Q. When is it appropriate to run Kubernetes containerized applications on-premises versus in the cloud?

A. In the cloud you tend to benefit most from the managed service or elasticity. All of the major cloud providers have database offerings which run in Kubernetes environments. For example, Amazon Aurora, Google Cloud SQL, and Microsoft Azure DB. These offerings can be appropriate if you are a smaller shop without a lot of database architects.

The decision to run on-premises is usually dictated by external factors. Regulatory requirements like GDPR, country requirements, or customer requirements may require you to run on-premises. This is often where the concept of data gravity comes into effect – it is easier to move your compute to your data, versus moving your data to your compute.

This is one of the reasons why Kubernetes is popular. Wherever your data lives, you’ll find you can bring your compute closer (with minimal modifications), thanks to the consistency provided by Kubernetes and containers.

Q. Why did the third presentation feature old quotes about the challenges of Kubernetes and databases?

A. Part of that is simply my (the presenter’s) personal bias, and I apologize. I was primarily interested in finding a quote by recognizable brands. Kelsey Hightower and Uber are both highly visible in the container community, so the immediate interest was in finding the right words from them.

That does raise a relevant question, though. Has something changed recently? Are containers and Kubernetes becoming a better fit for stateful workloads? The answer is yes and no.

Running stateful workloads on Kubernetes has remained about the same for a while now. As an example, a text search for “stateful” on the 1.16 changelog returns no results. If a change had been made for StatefulSets or “stateful workloads,” we would hope to see a result. Of course, free text search is never quite that easy. Additional searches for “persist” show only results for CSI- and ingress-related changes.

The point is that stateful workloads are running the same way now as they were in prior versions. Part of the reason for the lack of change is that stateful workloads are difficult to generalize. In the language of Fred Brooks, we could say that stateful workloads are essentially complex. In other words, the difficulty of generalizing is unavoidable, and it must be addressed with a complex solution.

Solutions like operators are tackling some of this complexity, and some changes to storage components indicate that progress is occurring elsewhere. A good example is in the 1.14 release (March 2019), where local persistent volumes graduated to general availability (GA). That’s a nice resolution to this blog from 2016, which said,

“Currently, we recommend using StatefulSets with remote storage. Therefore, you must be ready to tolerate the performance implications of network attached storage. Even with storage optimized instances, you won’t likely realize the same performance as locally attached, solid state storage media.”

Local persistent volumes fix some of the performance concerns around running stateful workloads on Kubernetes. Has Kubernetes made progress? Undeniably. Are stateful workloads still difficult? Absolutely.
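As a rough illustration of what the local persistent volume feature involves, the hedged sketch below (again via the Python client, with made-up names, device path, and node hostname) creates a StorageClass with delayed binding plus one local PersistentVolume pinned to a specific node. The recommended workflows and provisioners are described in the Kubernetes documentation; this is only a sketch.

```python
# Sketch: a StorageClass for statically provisioned local volumes plus one
# local PersistentVolume pinned to a node. The class name, device path, and
# node hostname are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="local-ssd"),
    provisioner="kubernetes.io/no-provisioner",   # static provisioning
    volume_binding_mode="WaitForFirstConsumer",   # bind when a pod is scheduled
)
client.StorageV1Api().create_storage_class(body=storage_class)

local_pv = client.V1PersistentVolume(
    api_version="v1",
    kind="PersistentVolume",
    metadata=client.V1ObjectMeta(name="local-pv-node1"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-ssd",
        local=client.V1LocalVolumeSource(path="/mnt/disks/ssd1"),
        # Local volumes must declare which node the disk actually lives on.
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(node_selector_terms=[
                client.V1NodeSelectorTerm(match_expressions=[
                    client.V1NodeSelectorRequirement(
                        key="kubernetes.io/hostname",
                        operator="In",
                        values=["node-1"]),
                ])
            ])
        ),
    ),
)
client.CoreV1Api().create_persistent_volume(body=local_pv)
```

The WaitForFirstConsumer binding mode delays claim binding until a pod is scheduled, so the claim ends up on the node that actually has the disk, which is what makes local volumes useful for performance-sensitive stateful workloads.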

Transitioning databases and other complex, stateful work to Kubernetes still involves a steep learning curve. Running a database through your current setup requires no additional knowledge: your current database works, and you and your team already know everything you need to keep it going.

Running stateful applications on Kubernetes requires you to learn about init containers, persistent volumes (PVs), persistent volume claims (PVCs), storage classes, service accounts, services, pods, config maps, and more. This large learning requirement has remained one of the biggest challenges. That means Kelsey and Uber’s cautionary notes remain relevant.
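To give a feel for the kind of boilerplate that learning curve entails, here is a small, hypothetical sketch of just two of those objects: a PersistentVolumeClaim and a pod that mounts it. The names, image, and sizes are placeholders, and a real deployment would involve several of the other objects listed above.

```python
# Sketch: a PersistentVolumeClaim plus a pod that mounts it. This covers only
# two of the objects named above; names, image, and sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="local-ssd",   # class from the previous sketch
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="nginx:1.17",   # stand-in workload
            volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="demo-claim"),
        )],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```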

It’s a big undertaking to run stateful workloads on Kubernetes. It can be very rewarding, but it also comes with a large cost.

Q. If Kubernetes introduces so much complexity, why should we use it?

A. One of the main points of part three of our Kubernetes in the Cloud series is that you can pick and choose. Kubernetes is flexible, and if you find running projects outside of it easier, you still have that option. Kubernetes is designed so that it’s easy to mix and match with other solutions.

Aside from the steep learning curve, it can seem like there are a number of other major challenges that come with Kubernetes. These challenges are often due to other design choices, like a move towards microservices, infrastructure as code, and so on. These other philosophies are shifts in perspective that change how we and our teams have to think about infrastructure.

As an example, James Lewis and Martin Fowler note how a shift towards microservices will complicate storage:

“As well as decentralizing decisions about conceptual models, microservices also decentralize data storage decisions. While monolithic applications prefer a single logical database for persistent data, enterprises often prefer a single database across a range of applications - many of these decisions driven through vendor's commercial models around licensing. Microservices prefer letting each service manage its own database, either different instances of the same database technology, or entirely different database systems - an approach called Polyglot Persistence.”

Failing to move from a single enormous database to a unique datastore per service can lead to a distributed monolith. That is, an architecture that looks superficially like microservices but quietly still contains many of the problems of a monolithic architecture.

Kubernetes and containers align well with newer cloud native philosophies like microservices. It’s no surprise, then, that a project to move towards microservices will often involve Kubernetes. A lot of the apparent complexity of Kubernetes is actually due to these accompanying philosophies. They’re often paired with Kubernetes, and anytime we stumble over a rough area, it can be easy to blame Kubernetes for the issue.

Kubernetes and these newer philosophies are popular for a number of reasons. They’ve been proven to work at mega-scales, at companies like Google and Netflix (e.g., Borg and the microservices re-architecture on AWS). When done right, development teams also seem more productive.

If you are working at a larger scale and struggling, this probably sounds great. On the other hand, if you have yet to feel the pain of scaling, all of this Kubernetes and microservices stuff might seem a bit silly. There are a number of good reasons to avoid Kubernetes. There are also a number of great reasons to embrace it. We should be mindful that the value of Kubernetes and the associated philosophies is very dependent on where your business is on “feeling the pain of scaling.”

Q. Where can I learn more about Kubernetes?

A. Glad you asked! We provided more than 25 links to useful and informative resources during our live webcast. You can access all of them here.

Conclusion

Thanks for sending in your questions. In addition to each of the webcasts being available on the SNIAVideo YouTube channel, you can also download PDFs of the presentations and check out the previous Q&A blogs from this series to learn more.


Introducing the Storage Networking Security Webcast Series

J Metz

Sep 3, 2019

This series of webcasts, hosted by the SNIA Networking Storage Forum, is going to tackle an ambitious project – the scope of securing data, namely storage systems and storage networks. Many of the concepts covered in this series are broadly applicable to all kinds of data protection, but there are some aspects of security that have a unique impact on storage, storage systems, and storage networks. Because security is a holistic concern, there has to be more than "naming the parts." It's important to understand how the pieces fit together, because it's where those joints exist that many of the threats become real.

Understanding Storage Security and Threats

This presentation is a broad introduction to security principles in general. It will cover some of the main aspects of security, including defining the terms you must know if you hope to have a good grasp of what makes something secure or not. We'll be talking about the scope of security, including threats, vulnerabilities, and attacks – and what that means in real storage terms.

Securing the Data at Rest

When you look at the holistic concept of security, one of the most obvious places to start is the threats to the physical realm. Among the topics here we will include ransomware, physical security, self-encrypting drives, and other aspects of how data and media are secured at the hardware level. In particular, we'll be focusing on the systems and mechanisms of securing the data, and we'll even touch on some of the requirements being placed on the industry by government security recommendations.

Storage Encryption

This is a subject so important that it deserves its own specific session. Encryption is a fundamental element that affects hardware, software, data-in-flight, data-at-rest, and regulations. In this session, we're going to lay down the taxonomy of what encryption is (and isn't), how it works, what the trade-offs are, and how storage professionals choose between the different options for their particular needs. This session is the "deep dive" that explains what goes on underneath the covers when encryption is used for data in flight or at rest.

Key Management

In order to effectively use cryptography to protect information, one has to ensure that the associated cryptographic keys are also protected. Attention must be paid to how cryptographic keys are generated, distributed, used, stored, replaced, and destroyed in order to ensure that the security of cryptographic implementations is not compromised. This webinar will introduce the fundamentals of cryptographic key management, including key lifecycles, key generation, key distribution, symmetric vs. asymmetric key management, and integrated vs. centralized key management models. Relevant standards, protocols, and industry best practices will also be presented.
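As a concrete, if simplified, taste of the topics these two sessions will cover, the sketch below uses Python's third-party cryptography package to generate a symmetric key and perform authenticated encryption. It is an illustration only, not material from the webcast series, and in a real deployment the key would come from, and be rotated by, a proper key-management system.

```python
# Illustrative only: symmetric key generation and authenticated encryption
# with AES-GCM, via the third-party `cryptography` package. In practice the
# key would be issued and rotated by a key-management system, not a script.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # key generation (one lifecycle step)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # never reuse a nonce with the same key
plaintext = b"block of data at rest"
associated_data = b"volume-42"              # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext
```

Where that key lives, how it is rotated, and who can read it are exactly the lifecycle questions the Key Management session addresses.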
Securing Data in Flight

Getting from here to there, securely and safely. Whether it's you in a car, plane, or train – or your data going across a network – it's critical to make sure that you get there in one piece. Just like you, your data must be safe and sound as it makes its journey. This webcast will talk about the threats to your data as it's transmitted, how interference happens along the way, and the methods of protecting that data in transit.

Securing the Protocol

Different storage networks have different means for creating security beyond just encrypting the wire. We'll be discussing some of the particular threats to storage that specifically target data-in-flight vulnerabilities. Here we will discuss the security features of Ethernet and Fibre Channel that secure data in flight at the protocol level, including (but not limited to) MACsec, IPsec, and FC-SP-2.

Security Regulations

It's impossible to discuss storage security without examining the repercussions at the regulatory level. In this webcast, we're going to take a look at some of the common regulatory requirements that call for specific storage security configurations, and what those rules mean in a practical sense. In other words, how do you turn those requirements into practical reality? GDPR, the California Consumer Privacy Act (CCPA), and other individual US states' laws all require more than just ticking a checkbox. What do these things mean in terms of applying them to storage and storage networking?

Securing the System: Hardening Methods

"Hardening" is something that you do to an implementation, which means understanding how all of the pieces fit together. We'll be talking about different methods and mechanisms for creating secure end-to-end implementations. Topics such as PCI compliance, operating system hardening, and others will be included.

Obviously, storage security is a huge subject. This ambitious project certainly doesn't end here, and there will always be additional topics to cover. For now, however, we want to provide you with the industry's best experts in storage and security to help you navigate the labyrinthine maze of rules and technology... in plain English. Please join us and register for the first webcast in the series, Understanding Storage Security and Threats, on October 8th.


It’s a Wrap for SNIA and the Solid State Storage Initiative at Flash Memory Summit 2019

Marty Foltyn

Aug 30, 2019


A Best of Show award, over 12 hours of content, three days of demos, and the completion of a new program drawing attention to persistent memory programming – Flash Memory Summit 2019 is officially a success!

SNIA volunteers were again recognized for their hard work developing standards for datacenters and storage professionals with a “Most Innovative Flash Memory Technology” FMS Best of Show award. This year, it was SNIA’s Object Drive Technical Work Group that received kudos for the SNIA Technical Position Key Value Storage API Specification.  Jay Kramer, head of the FMS awards program, presented the award to Bill Martin, Chair of the Object Drive TWG, commenting, “Key value store technology can enable NVM storage devices to map and store data more efficiently and with enhanced performance, which is of paramount significance to facilitate computational storage.  Flash Memory Summit is proud to recognize the SNIA Object Drive Technical Work Group (TWG) for creating the SNIA Technical Position Key Value Storage API Specification Version 1.0, defining an application programming interface (API) for key value storage devices, and making this available to the public for download.”

SNIA Sessions at FMS Now Available for Viewing and Download

SNIA Executive Director Michael Oros again took the main stage to present “Standards that Can Change Your Job and Your Life,” encapsulating SNIA work in three core areas: persistent memory, computational storage, and storage management.

Also at Flash Memory Summit, SNIA work and volunteers were on display in eight sessions on persistent memory, highlighting advances in persistent memory, PM software and applications, remote persistent memory, and current research in PM, sponsored by SNIA, JEDEC, and the OpenFabrics Alliance.   A new 2019 SNIA-sponsored track on computational storage featured four sessions on controllers and technology, deploying solutions, implementation methods, and applications.

SNIA’s SFF Technology Affiliate highlighted their work on the Enterprise and Datacenter 1U Short SSD Form Factor (E1.S) specification SFF-TA 1006,  while the Object Drive TWG expanded on their work in standardization for a key value storage interface underway at SNIA and NVM Express.   SNIA also presented a preconference seminar tutorial on persistent memory and NVDIMM, and a session on Storage Management with Swordfish APIs for Open Channel SSDs.

A new session on programming to persistent memory featured a tutorial (video available soon) and a 2 ½ day Persistent Memory Programming Hackathon where attendees programmed to persistent memory systems and discussed their applications.  Next up for the Hackathon series – a 2-day event at SNIA Storage Developer Conference.

Find PDFs of these sessions by clicking on Flash Memory Summit 2019 under Associated Event in the SNIA Educational Library.

We continued our discussions on the exhibit floor featuring JEDEC-compliant NVDIMM-Ns from SNIA Persistent Memory and NVDIMM SIG members AgigA Tech, Micron, SMART Modular Technologies, and Viking in a Supermicro box running an open source performance demonstration.  If you missed it, the SIG will showcase a similar demonstration at the upcoming SNIA Storage Developer Conference September 23-26, 2019, and at the 2020 SNIA Persistent Memory Summit January 23, 2020, both at the Hyatt Regency Santa Clara.  Click on the conference names to register for both events.


Kubernetes Links & Resources to Keep You in the Know

Mike Jochimsen

Aug 20, 2019


Our recent SNIA CSTI webcast, “Kubernetes in the Cloud (Part 3): (Almost) Everything You Need to Know about Stateful Workloads” offered a wealth of insight on how to address the challenges of running stateful workloads in Kubernetes. This webcast was the third installment of our Kubernetes in the Cloud webcast series, and it is now available on-demand, as are “Kubernetes in the Cloud (Part 1)” and “Kubernetes in the Cloud (Part 2).”

Our expert presenters, Paul Burt and Ingo Fuchs, have provided additional resources to help keep you in the know on Kubernetes. Here they all are:

If you know of some Kubernetes resources to share, please comment on this blog and we’ll add them to our list.
