Cloud Analytics Drives Airplanes-as-a-Service Business

Jim Fister

Feb 25, 2021

On-demand flying through an app sounds like something for only the rich and famous, yet the use of cloud analytics is making flexible flying a reality at start-up airline KinectAir. On April 7, 2021, the CTO of KinectAir, Ben Howard, will join the SNIA Cloud Storage Technologies Initiative (CSTI) for a fascinating discussion on first-hand experiences of leveraging cloud analytics methods to bring new business models to life that are competitive and profitable. And since start-up companies may not have legacy data and analytics to consider, we’ll also explore what established businesses using traditional analytics methods can learn from this use case. Join us on April 7th for our live webcast “Adapting Cloud Analytics for Practical Business Use” for views from both start-up and established companies on how to revisit the analytics decision process, with a discussion on:
  • How to build and take advantage of a data ecosystem
  • Overcoming challenges and roadblocks
  • How to use cloud resources in unique ways to accomplish business and engineering goals
  • Considerations for business requirements and developing technical metrics
  • Thoughts on when to start new vs. adapt existing analytics processes
  • Real-world examples of cloud analytics and AI
Register today. Our panelists will be on hand to answer questions. We hope to see you there.

SMI-S Storage Management Quick Start Guide Series Kicks-Off

Mike Walker

Feb 24, 2021

Twenty-year SNIA veteran Mike Walker has created a series of videos titled “SMI-S Quick Start Guides” that provides developers using the SMI-S storage management specification instructions on how to find useful information in an SMI-S server using the Python-based PyWBEM open source tool.

“Using the PyWBEM tool, I created a set of mock SMI-S 1.8 servers which I have shared with the world on GitHub,” said Walker. “I also created a set of PDFs called ‘Quick Start Guides’ and a series of videos demonstrating some of the most recent capabilities of the SMI-S 1.8 specification. Storage equipment vendors and management software vendors seeking to address the day-to-day tasks of the IT environment can use this information to work with SMI-S 1.8.”

The first two videos of this series now available on the SNIA Video YouTube channel are listed below. Be sure to check back or subscribe to the SNIA Video YouTube channel for future video installments.

• A short trailer explaining the content you can expect to see in the series here.
• A SNIA SMI-S Storage Management Spec. Mockups, Installation and Setup video here.

The Quick Start Guide PDFs and a set of mock WBEM servers that support SMI-S 1.8 storage management can be found on GitHub here. You can also learn more about PyWBEM here.
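For a flavor of what the videos walk through, here is a minimal sketch (not taken from the guides themselves) of using pywbem to ask an SMI-S server which profiles it supports. The server address, credentials, and interop namespace below are placeholders you would replace with values for your own SMI-S provider.

```python
import pywbem

# Placeholder connection details: point these at your own SMI-S provider.
# SMI-S servers conventionally listen on port 5988 (HTTP) or 5989 (HTTPS).
conn = pywbem.WBEMConnection(
    "http://localhost:5988",        # hypothetical server address
    ("admin", "password"),          # hypothetical credentials
    default_namespace="interop",    # typical SMI-S interop namespace
)

# An SMI-S server advertises the profiles it implements as instances of
# CIM_RegisteredProfile in the interop namespace.
for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
    print(profile["RegisteredName"], profile["RegisteredVersion"])
```

Running the same enumeration against one of the mock servers from the GitHub repository is a convenient way to explore the model before pointing the tool at real hardware.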

About the SMI-S Storage Management Specification

SMI-S was first approved as an ISO standard in 2002. Today, it has been implemented in over 1,350 storage products that provide access to common storage management functions and features.
During its lifetime, several versions of the SMI-S standard have been approved by ISO. The current international standard for SMI-S was based on SMI-S v1.5, which was completed in 2011, submitted for ISO approval in 2012, and formally adopted in 2014 as the latest revision of ISO/IEC 24775.

SMI-S 1.8 rev 5 was sent to ISO as an update to ISO/IEC 24775 and is expected to become an internationally recognized standard in the first half of 2021. SMI-S 1.8 rev 5 is the recommended and final version of the specification as no further updates are planned.

Subscribe to the SNIA Matters Newsletter here to stay up-to-date on all SNIA announcements and be one of the first to learn the ISO approval status of the SMI-S 1.8 rev 5 storage specification.

NVMe® over Fabrics for Absolute Beginners

J Metz

Feb 23, 2021

A while back I wrote an article entitled “NVMe™ for Absolute Beginners.” It seems to have resonated with a lot of people and it appears there might be a call for doing the same thing for NVMe® over Fabrics (NVMe-oF™).

This article is for absolute beginners. If you are a seasoned (or even moderately-experienced) technical person, this probably won’t be news to you. However, you are free (and encouraged!) to point people to this article who need Plain English™ to get started.

A Quick Refresher

Any time an application on a computer (or server, or even a consumer device like a phone) needs to talk to a storage device, there are a few things that you need to have. First, you need memory (like RAM), you need a CPU, and you also need something that can hold onto your data for the long haul (also called storage).

Another thing you need to have is a way for the CPU to talk to the memory device (on one hand) and the storage device (on the other). Thing is, CPUs talk a very specific language, and historically memory could speak that language, but storage could not.

For many years, things ambled along in this way. The CPU would talk natively with memory, which made it very fast but also was somewhat risky because memory was considered volatile. That is, if there was a power blip (or the power went out completely), any data in memory would be wiped out.

Not fun.

So, you wanted to have your data stored somewhere permanently, i.e., on a non-volatile medium. For many years, that meant hard disk drives (HDDs). This was great, and worked well, but didn’t really work fast.

Solid State Disks, or SSDs, changed all that. SSDs don’t have moving parts, which ultimately meant that you could get your data to and from the device faster. Much faster. However, as they got faster, it became clear that because the CPU didn’t talk to SSDs natively using the same language – and needed an adapter of some kind – we weren’t getting as fast as we wanted to be.

Enter Non-Volatile Memory Express (NVMe).

NVMe changed the nature of the game completely. For one, it removed the need for an adapter by allowing the CPU to talk to the storage natively. (In technical terms, what it did was allow the CPU to treat storage as if it were memory, with which it could already speak natively through a protocol called PCIe).

The second thing that was pretty cool: NVMe changed the nature of the relationship with storage. What had necessarily been a 1:1 relationship between a host and a storage device became one where you could have more than one relationship between devices.

Very cool.

Since I wrote the “NVMe for Absolute Beginners” article a few years ago, the technology has taken off like wildfire. In only a few short years, there have been more NVMe storage drives shipped than the previous go-to technology (i.e., SATA).

By this point, there are many, many more articles written about NVMe than there were back then. Now, however, there are a lot of questions about what happens when you want to go outside of the range of PCIe.

NVMe® over Fabrics

Thing is, NVMe using PCIe is a technology that is best used inside a computer or server. PCIe is not generally regarded as a “fabric” technology.

So what is a “fabric” technology, and what makes it so special?

Like anything else, there are trade-offs when it comes to technology. The great thing about NVMe using PCIe is that it is wicked fast. The not-so-great thing about NVMe using PCIe is that it’s contained inside of a single computer. If you want to go outside of the computer, well, things get tricky… unless you do something special.

In general terms, a “fabric” is that “something special.” It’s not as easy as putting a storage device at the end of a wire and calling it quits. Oh no; there is so much more that needs to be done.

Any time you want to go outside of a computer or server, you need to be extra careful, because there are a lot more things that can go wrong. As in, an exponential number of things can go wrong. Not only do you need to try your best to make sure that things don’t go wrong in the first place, but you need to put systems in place to handle those problems when they do.

The good news is that there are a lot of choices when it comes to solving this problem. Storage networks have long been the tried-and-true means by which people have handled storage solutions at scale. Technologies like Fibre Channel, Ethernet, and InfiniBand have been used to connect servers and storage for years. Each one has its place, and each one has its fans (and with good reason).

Because of this, there was no reason for the NVM Express group (the people behind the NVMe protocol) to create its own, new fabric. Why re-invent the wheel? Instead, it was much better to use the battle-hardened technologies that were already available.

That’s why it’s called NVMe over Fabrics; we are simply using the NVMe protocol to piggy-back on top of networking technologies that already exist.

The Magic of Bindings

Imagine you’re rebuilding a Jeep. At a high level, you have two basic parts to a Jeep’s structure: you have the chassis, and you have the body. As you can imagine, you can’t simply place the body on top of the chassis and start driving around. The body is going to eventually slide right off the chassis. Not exactly safe.

By the same logic, we can’t simply place the NVMe commands on top of a Fabric and expect, magically, that everything is going to work out all the time. Just like our Jeep body, there needs to be a strong connection with what happens underneath.

In NVMe-oF parlance, these are called bindings.

Bindings solve a number of problems. They are the glue that holds the NVMe communication language to the underlying fabric transport (whether it is Fibre Channel, InfiniBand, or various forms of Ethernet).

In particular, Bindings:

  • Define the establishment of a connection between NVMe and the transport/fabric
  • Restrict capabilities based upon what the transport fabric can (or can’t) do
  • Identify how NVMe is managed, administratively, using the transport/fabric
  • Establish requirements of size, authentication, type of information, etc., depending upon specific transport fabric methods

With networking technology we think in terms of layers: the NVMe over Fabrics bindings sit on top of the transport fabric layer, and it is the responsibility of the organizations that represent those transport fabrics to make sure that there are appropriate connections into the bindings from their side.

For instance, the T11 standards body is responsible for making the changes to the Fibre Channel standards so that Fibre Channel can interact with the bindings appropriately, not simply sling the NVMe commands from one side to the other.

You can find out more about how this works in the Fibre Channel case by watching the FCIA BrightTALK webinar “Introduction to FC-NVMe,” presented by yours truly and Craig Carlson (of Cavium, now Marvell).

Types of Fabrics

Now, I’ve given you an example of one type of fabric that can be used for NVMe-oF, but Fibre Channel is not the only one. In fact, the magic of NVMe-oF is that you can choose from a number of transport types.

Picture the host at the top and the storage at the bottom, with all of the different networking options that could be used to connect them sitting in the middle.

Now, the interesting thing here is that NVMe-oF is not those different types of transports. On the contrary, there are different technology bodies that work on those different transports. Instead, the magic of NVMe over Fabrics is the binding layer that sits between the NVMe commands and each of those underlying transports.
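For the hands-on reader, here is one small, hedged illustration: on a Linux host you can see which transport each NVMe controller is using by reading sysfs (the /sys/class/nvme layout assumed below is how the Linux NVMe driver exposes controllers). A local drive reports pcie, while a fabric-attached controller reports something like tcp, rdma, or fc.

```python
from pathlib import Path

# Each NVMe controller on a Linux host appears as /sys/class/nvme/nvme<N>.
# Its 'transport' attribute says whether it is local PCIe or a fabric
# transport such as tcp, rdma, or fc.
nvme_class = Path("/sys/class/nvme")
if not nvme_class.exists():
    print("No NVMe controllers found (or not a Linux host).")
else:
    for ctrl in sorted(nvme_class.glob("nvme*")):
        transport = (ctrl / "transport").read_text().strip()
        model = (ctrl / "model").read_text().strip()
        print(f"{ctrl.name}: {model} over {transport}")
```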

To Bind or Not To Bind

Now, it’s important to know that just because the NVM Express group defines the bindings format for NVMe-oF™ (the “™” is intentional here), it doesn’t mean that this is the only way to do it. In fact, before the NVMe over Fabrics standard was ratified, there were quite a few companies that created their own forms of moving NVMe commands from one device to another.

Let me be absolutely clear here: there is nothing wrong with this!

Just because someone has a solution that isn’t standardized does not mean that they are doing something wrong or, worse, doing something nefarious. All it means is that they have figured out a different way to handle the means by which they send NVMe commands from one place to another.

However…

It’s valuable to know whether a company is using a standardized version of NVMe over Fabrics or a proprietary way of using a fabric to transport NVMe. The reason it’s important is that storage is an end-to-end problem that needs solving, and you need to know how all of the parts fit together and what (if any) special attention needs to be paid in order to make everything work together seamlessly.

For that reason, even though the acronym NVMe-oF™ looks funny[1], it is the official acronym for NVMe™ over Fabrics. There are a number of other popular acronyms, however, that have been used to represent networked NVMe:

  • NVM/f
  • NVMe/F
  • NVMf
  • NVMe-F
  • NVMe-oE (“over Ethernet”)
  • And so on…

Most of the time these are innocent and harmless mistakes, or simply affectations for a particular type of acronym. The problem comes when a vendor uses a different acronym to make it look like they are using a standardized version of the bindings when in fact they are not.

Taking advantage of people’s ignorance of the proper terminology in order to make your product look like something it isn’t is, well, uncool. You should especially beware if someone uses a trademark symbol (“™”) with an incorrect acronym.

Bottom Line

NVMe over Fabrics is a way of extending NVMe outside of a computer/server. It is more than simply slapping the commands onto a network, and it still helps to know the pros and cons of each transport fabric as it applies to what you need to do.

Remember, there is no such thing as a panacea for storage. Storage still has a very, very hard job:

Give me back the correct bit I asked you to hold on to for me.

Everything that happens inside of NVMe and NVMe-oF is designed to help make sure that happens.

If you are interested in learning more about NVMe, and NVMe over Fabrics, may I recommend some additional reading and videos (whichever you prefer) from the SNIA Educational Library.

[1] The acronym was chosen to be consistent with the other forms of NVMe. For instance, the NVMe Management Interface is known as NVMe-MI™, and the group wanted consistency across all the acronyms.

Does this Look Outdated to You?

Tom Friend

Feb 22, 2021

Last month, the SNIA Networking Storage Forum (NSF) took a different perspective on the storage networking technologies we cover by discussing technologies and practices that you may want to reconsider. The webcast was called “Storage Technologies & Practices Ripe for Refresh.”  I encourage you to watch it on-demand.  It was an interesting session where my colleagues Eric Hibbard, John Kim, and Alex McDonald explored security problems, aging network protocols, and NAS protocols. It was quite popular. In fact, we’re planning more in this series, so stay tuned.

The audience asked us some great questions during the live event and, as promised, here are our answers:

Q. How can I tell if my SSH connections are secure?

A. Short of doing a security scan of a server’s SSH port (typically TCP/IP port 22), it can be difficult to know if your connection is secure. In general, the following are recommended, and a quick way to check several of them is sketched after the list:

  1. Use SSH version 2 or later
  2. Disable server SSH root logins
  3. Authenticate clients to servers by using SSH key pairs (don’t use the same keys on multiple systems)
  4. Change the default SSH port
  5. Filter connections using TCP wrappers or similar network filtering
  6. Set idle timeouts that close SSH connections. If you don’t need SSH on a server, make sure it is disabled.
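To make the list a bit more concrete, here is a minimal sketch in Python that checks a server’s sshd_config for a few of these settings. The file path and the directives it looks for reflect a typical OpenSSH installation (an assumption), and it is no substitute for a proper security scan.

```python
from pathlib import Path

CONFIG = Path("/etc/ssh/sshd_config")  # typical location; adjust for your system

# Collect the effective (non-comment) directives from sshd_config.
settings = {}
for raw in CONFIG.read_text().splitlines():
    line = raw.strip()
    if not line or line.startswith("#"):
        continue
    parts = line.split(None, 1)
    settings[parts[0].lower()] = parts[1].strip() if len(parts) > 1 else ""

# Flag a few of the recommendations from the list above. Anything not set
# explicitly is treated as worth a look rather than silently assumed safe.
if settings.get("permitrootlogin", "").lower() != "no":
    print("PermitRootLogin is not 'no': disable direct root logins.")
if settings.get("passwordauthentication", "").lower() != "no":
    print("PasswordAuthentication is not 'no': prefer SSH key pairs.")
if settings.get("port", "22") == "22":
    print("SSH is listening on the default port 22: consider changing it.")
if "clientaliveinterval" not in settings:
    print("No ClientAliveInterval set: idle sessions will not time out.")
```

A real deployment would pair a check like this with the regular security scans discussed in the next answer.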

Q.  How can customers determine if they are using updated security technologies? 

A. Security technologies can be both security features/capabilities as well as elements that address the security posture of a system at any given point in time. From a feature perspective, it is often difficult to change or add them, so it is important to consider requirements for things like encryption, key management, access controls, etc. up front; assume that what you start with is probably all that you will get going forward. Security posture, on the other hand, can be very different. It typically involves configuration changes (e.g., enabling/disabling a security feature), applying patches to operating systems and applications, and updating software to newer versions when security patches are no longer available or are inadequate. Performing regular security scans of systems is also an important element because they will help verify the system is being maintained properly as well as to provide alerts for new problems as the threat landscape changes.

Q. This is not really a question, but rather a comment on NAS protocols: their security is only as good as the authorization on the files, e.g., 777 or “everyone”-type ACLs.

A. The NFSv4 and SMB3 protocols are as secure as you want to make them. Assigning inappropriate authorization is a user error, not a protocol problem.
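As a practical aside, that kind of over-permissive authorization is easy to audit from a client. Here is a minimal sketch (the mount point is a placeholder) that walks a mounted share and flags world-writable files and directories, the POSIX-mode analogue of an “everyone can write” ACL.

```python
import os
import stat

EXPORT = "/mnt/nfs_share"  # placeholder: a mounted NFS or SMB share

# Walk the tree and flag anything world-writable (the 'o+w' bit), which is
# the POSIX-permission counterpart of an "everyone" ACL entry.
for root, dirs, files in os.walk(EXPORT):
    for name in files + dirs:
        path = os.path.join(root, name)
        try:
            mode = os.lstat(path).st_mode
        except OSError:
            continue  # skip entries that disappear or cannot be stat'ed
        if mode & stat.S_IWOTH:
            print(f"world-writable: {path} ({stat.filemode(mode)})")
```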

Q. Can most modern storage systems and operating systems support NFSv4 and SMBv3?         

A. The majority of NAS systems from most vendors can support NFSv4 and SMB3, and many will allow access to the same files with either protocol (but see the caveats below). There’s the open source Samba implementation of SMB3 for Linux and Unix (see here), and Microsoft Windows Server supports NFS v2, v3, and v4.1.

Q. Do obsolete protocols have an impact on multi-protocol (NFS + SMB) access to data? 

A. Yes, in several areas; the two biggies are security and locking. On security, NFS and SMB share the same terminology (ACLs, or access control lists) to describe the security on objects like files and directories, but the underlying security models are different. See this NFS4 ACL overview for more details. Locking is a complex area, and the general rule is: don’t share files between SMB and NFS unless you’re fully aware of how locking works. Obsolete protocols definitely don’t help here, so they are best avoided. Even with up-to-date protocol stacks there are lots of other gotchas. If you must share between NFS and SMB, involve the vendor of the system that is providing you with this capability, and adhere to their best practices.

From a security perspective, multi-protocol access to data is fraught with access control problems because the access privilege models can vary significantly. This can lead to a situation where an escalation of privileges can occur, granting someone access to data that they should not be allowed to access. Adding obsolete protocols to this mix can further expose data because of the granularity of the access privilege model or complete lack of one.

Q: Could we use a robust logging system, real-time analysis, and real-time configuration at the transport layer?

A: The network transport layer is Layer 4 in the 7-layer OSI model, most commonly using the TCP or UDP protocols. Both packet logging and filtering tools can be used to monitor Layer 4 traffic, and real-time analysis can be done by a packet analyzer, firewall, or intrusion detection/prevention system (IDS/IPS). These tools typically allow capture or filtering of packets based on a combination of their source and destination IP addresses, source and destination ports, and the protocol type (TCP/UDP). More sophisticated networking equipment might also track connections and use deep packet inspection to identify applications at OSI layers 5-7 in the network traffic. Doing such analysis can identify the use of obsolete protocols or applications or detect malware or suspicious activity. Real-time configuration could be used to turn off obsolete or unneeded protocols on servers that no longer need them or to block their traffic from using the network.
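As a small, hedged illustration of that kind of Layer 4 visibility, here is a sketch using the Scapy packet library to watch for traffic to ports associated with obsolete cleartext protocols. The port list is an assumption to adjust for your own environment, and capturing packets generally requires root privileges.

```python
from scapy.all import IP, TCP, sniff

# Ports commonly associated with obsolete cleartext protocols; adjust the
# list for your own environment.
LEGACY_PORTS = {21: "FTP", 23: "Telnet", 512: "rexec", 513: "rlogin"}

def flag_legacy(pkt):
    """Print a note whenever a TCP packet targets one of the legacy ports."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport in LEGACY_PORTS:
        name = LEGACY_PORTS[pkt[TCP].dport]
        print(f"{name} traffic: {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")

# Requires root (or equivalent capture privileges); watches 100 TCP packets.
sniff(filter="tcp", prn=flag_legacy, count=100)
```

In practice this job belongs to a firewall or IDS/IPS, as the answer above notes; the sketch is only meant to show how little it takes to get basic Layer 4 visibility.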

A Q&A on Protecting Data from New COVID Threats

Alex McDonald

Feb 17, 2021

The SNIA Cloud Storage Technologies Initiative began 2021 discussing the topic that has been on everyone’s mind for the last year: COVID-19. But rather than talking about positive cases or vaccine availability, our experts, Eric Hibbard and Mounir Elmously, explored how COVID has increased cybersecurity concerns and impacted the way organizations must adapt their security practices in order to ensure data privacy and data protection. If you missed our live webcast “Data Privacy and Data Protection in the COVID Era,” it’s available on-demand.

As expected, the session raised several questions on how to mitigate the risks from increased social engineering and ransomware attacks and how to limit increased vulnerabilities from the flood of remote workers. Here are answers to the session’s questions from our experts.

Q: Do you have any recommendations for structuring a rapid response to an ongoing security threat?

A: When considering rapid responses to threats, an organization must develop an incident response plan. Waiting to do this in the middle of an incident all but guarantees mistakes and inadequate responses (and possibly liabilities). As part of this planning, we’ve seen some companies form a rapid response security team. This consists of IT and business managers, security teams, communications and public relations personnel, and potentially legal representatives. The goal of the team is to assemble in response to an emergency to cut across different responsibilities and make faster decisions. This would enable a mix of responses such as isolating infected areas of the network, putting business continuity plans in place, and even potentially securing physical assets or sending teams to update systems. In addition to mitigating problems, the organization may need to handle public and/or regulatory disclosures.

Q: Isn’t there a tension between a continuously connected backup and ransomware protections? Are there other conflicts with regulations or policies that could reduce or compromise data security?

A: In security, there will always be some aspect of compromise. For instance, the fact that your backup system is connected to the network and continuously updating places it at risk of being involved in a ransomware attack. This can be mitigated with additional offline backups, but the value of the connected backup is the recovery time involved in resetting your environment. There are always conflicts in a complex security scheme, and care should be taken to examine all vectors of attack to mitigate risk. It is worth noting that the National Institute of Standards and Technology (NIST), in its recently published NIST SP 800-209 (Security Guidelines for Storage Infrastructure), recommends that cyber-attack recovery (e.g., due to a ransomware attack) be handled independently of non-malicious recovery (i.e., the backups for each type of recovery are completely separate).

Q: What about air gapping the backups in the vault?

A: Air gapping the backup in the vault is a valid option; however, most IT shops provide end users with access to backups in case they experience limited data loss. Applying an air gap to such backups will take this capability away from users. On the other hand, air gapping database backups will have a significant payoff, since no user access is provided except for a very limited admin group.

Q: Is ransomware a good reason to go back to real tape backup, or at least some form of unconnected archive?

A: Tape provides an excellent air-gap medium that is extremely cost effective. On the other hand, managing tape has proven to be a very cumbersome process that has been rejected by most IT shops. Even if you decide to use tape as an air gap, do not abandon your backup to disk, since this is your first line of defense. Tape should be a tertiary copy of backup, and ideally you should not consider moving it off site due to potential multiple tape handling problems.

Q: Is the current threat landscape larger or smaller with all the distributed work-from-home efforts ongoing?

A: The current threat landscape has increased many times over due to:
  1. The Internet of Things
  2. Exponential growth of work-from-home remote users
  3. Bring your own device
  4. 5G connectivity

Q: I often hear IT pros say something like, “It’s secure enough, let’s deploy.” Once deployed, how often should security be re-assessed, and do you have any methodologies for that?

A: As was discussed in this presentation, no matter how much you invest in the police and associated technology, there will be a bad actor confident that he can get away with the crime. Similarly, the threat landscape is evolving almost by the minute, and ransomware has proven to be an excellent way to make money. So you should revisit your security as frequently as your budget permits, keep your security tools updated as soon as an update is available, and create a strict patch update schedule for your environment, including the OS, databases, and drivers.

Q: The information presented at this session was pretty basic. I thought it would be more in-depth.

A: Since the level of expertise of our audience varies widely, and with the potential for first-time attendees, we needed to start with a foundation to ensure no ambiguity on the subject. As more follow-up webcasts occur on related topics, we will pick up where we left off in the previous session.

We expect to continue this discussion. Follow us on Twitter @sniacloud_com so that you don’t miss any announcements on upcoming events.

Cutting Edge Persistent Memory Education – Hear from the Experts!

Jim Fister

Feb 16, 2021

Most of the US is currently experiencing an epic winter. So much for 2021 being less interesting than 2020. Meanwhile, large portions of the world are also still locked down waiting for vaccine production. So much for 2020 ending in 2020. What, oh what, can possibly take our minds off the boredom?

Here’s an idea: what about some education in persistent memory programming? SNIA and UCSD recently hosted an online conference on Persistent Programming In Real Life (PIRL), and the videos of all the sessions are now available online. There are nearly 20 hours of content, including panel discussions and academic and industry presentations. Recordings and PDFs of the presentations have been posted on the PIRL site as well as in the SNIA Educational Library.

In addition, SNIA is now planning our April 21-22, 2021 virtual Persistent Memory and Computational Storage Summit, where we’ll be featuring the latest content from the data center to the edge. Complimentary registration is now open. If you’re interested in helping us plan, or in proposing content, you can contact us to provide input.

Spring will be here soon, with some freedom from cold, lockdown, and boredom. We hope to see you virtually at the summit, full of knowledge from your perusal of SNIA education content.
