
Evaluator Group to Share Hybrid Cloud Research

Alex McDonald

Nov 17, 2017

In a recent survey of enterprise hybrid cloud users, the Evaluator Group found that nearly 60% of respondents cited lack of interoperability as a significant technology issue they must overcome to move forward. In fact, lack of interoperability was the number one issue, surpassing public cloud security and network security as significant inhibitors.

The SNIA Cloud Storage Initiative (CSI) is pleased that John Webster, Senior Partner at Evaluator Group, will join us on December 12th for a live webcast to dive into the findings of this research. In this webcast, Multi-Cloud Storage: Addressing the Need for Portability and Interoperability, my SNIA Cloud colleague, Mark Carlson, and John will discuss enterprise hybrid cloud objectives and barriers to adoption. They will focus on cloud interoperability within the storage domain and the CSI’s work to promote interoperability and portability of data stored in the cloud.

As moderator of this webcast, I’ll make sure we offer insights on real-world cloud deployment challenges, and as always, we will be available to answer your questions on the spot. I encourage you to register today. We hope to see you on the 12th.



The OpenFabrics Alliance and the Pursuit of Efficient Access to Persistent Memory over Fabrics

Marty Foltyn

Nov 13, 2017

Guest Columnist: Paul Grun, Advanced Technology Development, Cray, Inc. and Vice-Chair, OpenFabrics Alliance (OFA)

Earlier this year, SNIA hosted its one-day Persistent Memory Summit in San Jose; it was my pleasure to be invited to participate by delivering a presentation on behalf of the OpenFabrics Alliance. Check it out here. The day-long Summit program was chock full of deeply technical, detailed information about the state of the art in persistent memory technology, coupled with previews of some possible future directions this exciting technology could take. The Summit played to a completely packed house, including an auxiliary room equipped with a remote video feed. Quite the event!

But why would the OpenFabrics Alliance (OFA) be offering a presentation at a Persistent Memory (PM) Summit, you ask? Fabrics! Which just happens to be the OFA’s forte. For several years now, SNIA’s NVM Programming Model Technical Working Group (NVMP TWG) has been describing programming models designed to deliver high availability, the primary thesis for which is simply stated: data isn’t truly ‘highly available’ until it is stored persistently in at least two places. Hence the need to access remote persistent memory via a fabric in a highly efficient and performant manner. And that’s where the OFA comes in.

For those unfamiliar with us, the OFA concerns itself with developing open source network software to allow applications to get the most performance possible from the network. Historically, that has meant that the OFA has developed libraries and kernel modules that conform to the Verbs specification as defined in the InfiniBand Architecture specifications. Over time, the suite has expanded to include software for derivative specifications such as RoCE (RDMA over Converged Ethernet) and iWARP (RDMA over TCP/IP). In today’s world, much of the work of maintaining the Verbs API has been assumed by the open source community itself. Success!

Several years ago, the OFA began an effort called the OpenFabrics Interfaces project to define a network API now known as ‘libfabric’. This API complements the Verbs API; Verbs continues into the foreseeable future as the API of choice for verbs-based fabrics such as InfiniBand. The idea was that the libfabric API would be driven mainly by the unique requirements of the consumers of network services. The result would be networking solutions that are transport independent and that meet the needs of application and middleware developers through a freely available open source API.

So, what does all this have to do with persistent memory? A great deal! By now, we have come to the realization that remote, fabric-attached persistent memory, while very much like local memory in many respects, has some unique characteristics. To state the obvious, it has persistence characteristics akin to those found in classical file systems, but it has the potential to be accessed using fast memory semantics instead of conventional file-based POSIX semantics. Accomplishing this implies a need for new features exposed to consumers through the API, giving the consumer greater control over the persistence of data written to the remote memory. Fortunately, the libfabric framework was designed from the outset for flexibility, which should make it straightforward to define the new structures needed to support persistent memory accesses over a high-performance, switched fabric.

My presentation at the Persistent Memory Summit had two main goals: the first was to introduce the OpenFabrics Alliance’s approach to API development; the second was to begin the discussion of API requirements to support persistent memory. For example, during the talk we drew a distinction between ‘Data Storage’ (what it means to access conventional storage) and ‘Data Access’ (accessing persistent memory over a fabric). As the slides in the presentation make clear, these are two very different use cases, and yet the enhanced libfabric API must support both equally well. At the end of the presentation, we presented some ideas for what a converged I/O stack designed to support both use cases might look like. Naturally, this is just the beginning of the story. There is much work to do.

The work on the libfabric API is now underway in earnest in the OpenFabrics Interfaces Working Group (OFIWG), and as with the original libfabric work, we are beginning with a requirements-gathering phase. We want to be sure that the resulting enhancements to the libfabric API meet the needs of applications accessing remote persistent memory. The OFA is looking forward to its presentation at the next Persistent Memory Summit, to be held January 24, 2018 at the Westin in San Jose, CA, where we will provide updates on OFA activities. Details on the Summit can be found here.

Being an open source organization, the OFA welcomes input from all interested parties in our efforts to support the emergence of this exciting new technology. For more information on how to get involved, please go to the OpenFabrics website (https://openfabrics.org) to find information about regular working group meetings. Or, feel free to reach out to me directly – grun@cray.com.


A Q&A on Storage Management – These Folks Weren’t Too Proud to Ask!

Richelle Ahlvers

Oct 26, 2017


The most recent installment of our SNIA ESF webcast series “Everything You Wanted To Know About Storage But Were Too Proud To Ask” took on a broad topic – storage management. Our experts, Richelle Ahlvers, Mark Rogov and Alex McDonald, did a great job explaining the basics, and they have now answered the questions attendees asked here in this blog. If you missed the live webcast, check it out on-demand and download a copy of the slides if you’d like to take notes.

Q: What is the difference between storage and a database? Could a database be considered storage?

A: The short answer is no. The long answer relies on the fact that a database doesn’t just store data: it modifies the data to fit into its schema (tables, indexes, etc.). A storage solution doesn’t mutate the data in any way—the data is always preserved as is.

Q: Doesn’t provisioning a storage array mean setting it up?

A: Within the storage community, provisioning is akin to serving a cake at a party. To provision storage to a server means cutting a slice of usable capacity and allocating it to a very specific server. The particular pairing is carefully recorded.

Q: Does deduplication fall into Configuration? Even when it is done only on cold data?

A: Great question! Deduplication is one of the services that a storage array may offer, so enabling it is an act of configuring that service. As for the second part of your question, the point at which deduplication happens is irrelevant: it may happen with cold data (data that is stored on the array but that applications haven’t accessed in a long time), or with hot or in-flight data (frequently accessed data or data inside a cache).

Q. Do Hyperscale vendors (like AWS) use any of the storage management?

A. Hyperscale vendors, like all consumers of storage, use storage management to configure their storage. They use a combination of vendor device tools and custom development scripts/tools, but are not heavy consumers of industry standard storage interfaces today. Swordfish’s RESTful interface will provide an easy-to-consume API for hyperscale vendors to integrate into their management ecosystem as vendors start delivering Swordfish-based solutions.

Q. It was mentioned that there was a ‘steep learning curve’ for the previous SNIA storage management model. Any idea how much easier this is to learn?

A. One of the major advantages of Swordfish is that the RESTful APIs are standardized and can take advantage of readily available tools and infrastructure. With the JSON-based payload, you can use standard plug-ins for browsers, as well as Python scripting, to interact with the Swordfish APIs immediately. This is a distinct difference from the SMI-S APIs which, although also standardized, are XML-based and required custom or third-party tools to interact with the SMI-S providers.
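As an illustration – a minimal sketch rather than a definitive recipe – the snippet below uses Python’s requests library against a hypothetical Swordfish endpoint; the host name and credentials are placeholders, not part of the specification:

# Minimal sketch: read a Swordfish (Redfish-based) service from Python.
# The host name and credentials below are placeholders, not a real service.
import requests

BASE = "https://swordfish.example.com"   # hypothetical Swordfish endpoint
session = requests.Session()
session.auth = ("admin", "password")     # placeholder credentials
session.verify = False                   # acceptable for a lab mockup; verify certificates in production

# The Redfish/Swordfish service root is conventionally served at /redfish/v1/
root = session.get(BASE + "/redfish/v1/").json()
print("Service:", root.get("Name"), "Redfish version:", root.get("RedfishVersion"))

# List the top-level collections the service root advertises
for key, value in root.items():
    if isinstance(value, dict) and "@odata.id" in value:
        print(key, "->", value["@odata.id"])

Because the payload is plain JSON, the same request works equally well from a browser REST plug-in or curl, without any vendor-specific tooling.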

Q. You talked about how Swordfish is being designed as more user and client centric. How are you doing this?

A. We are starting with very specific use cases and scenarios (rather than looking at “what is all the functionality we could expose”) as we build both the structure of the API and the amount of information returned. We’ve also documented a lot of the basic use cases, and who might like to do them, in a user’s guide, and published that alongside the Swordfish specification. You can find links to this at the SNIA Swordfish page: snia.org/swordfish

Q. You weren’t specific on storage management tools, and I was expecting more detail. I’m wondering why you did this at such a high level, as this really hasn’t helped me.

A. We were primarily referring to ITIL (the Information Technology Infrastructure Library), a framework designed to standardize the selection, planning, delivery and support of IT services to a business. Learn more here.

Q. While most of the products today support SMI-S, it’s not something that DevOps or Storage Admins use directly. How, or is, Swordfish going to be different?

A. There are two primary ways we see the Swordfish API being much more accessible directly to individual admins. First, as a RESTful interface, it is very easy to access and traverse with the tools they use daily – from web browsers directly, to REST plug-ins, to simple (or complex) Python scripts. The learning curve to start interacting with Swordfish is extremely small. You can get a sense by going to an online “mockup” site here: http://swordfishmockups.com – there are some simple instructions to either browse the system directly or use some standard clients to make it easier. That will give you an idea of how easy it will be to start interacting with Swordfish (plus security for a real system, of course).
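To show how little code the traversal itself takes, here is a hedged sketch; the base URL is a placeholder (point it at the mockup service mentioned above, or at a real Swordfish endpoint), and resource layouts vary by implementation:

# Sketch: shallow traversal of a Redfish/Swordfish resource tree.
# BASE is a placeholder; substitute the mockup service or a real endpoint.
import requests

BASE = "http://localhost:8000"   # placeholder address

def fetch(path):
    # Fetch one resource and return its decoded JSON body
    return requests.get(BASE + path).json()

def walk(path, depth=2, indent=""):
    # Print each resource's path and Name, following @odata.id links a few levels deep
    resource = fetch(path)
    print(indent + path, "(" + str(resource.get("Name", "?")) + ")")
    if depth == 0:
        return
    members = resource.get("Members") or [
        value for value in resource.values()
        if isinstance(value, dict) and "@odata.id" in value
    ]
    for link in members:
        walk(link["@odata.id"], depth - 1, indent + "  ")

walk("/redfish/v1/")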

Remember, “Everything You Wanted To Know About Storage But Were Too Proud To Ask” is a series. We’ve covered 8 storage topics to date and have a library of on-demand webcasts you can watch at your convenience. Happy viewing!


Expert Answers to Cloud Object Storage and Gateways Questions

Alex McDonald

Oct 24, 2017

In our most recent SNIA Cloud webcast, “Cloud Object Storage and the Use of Gateways,” we discussed market trends toward the adoption of object storage and the use of gateways to execute on a cloud strategy. If you missed the live event, it’s now available on-demand together with the webcast slides. There were many good questions at the live event and our expert, Dan Albright, has graciously answered them in this blog.

Q. Can object storage be accessed by tools for use with big data?

A. Yes. Technically, big data tools can access object storage in real time through HDFS connectors such as S3, but this is conditional on latency; if the object store is based on hard drives, it should not be used as the primary storage because it would run very slowly. The guidance is to use hard-drive-based object storage either as an online archive or as a backup target for HDFS.

Q. Will current block storage or NAS be replaced with cloud object storage + gateway?

A. Yes and no. It’s dependent on the use case. For ILM (Information Lifecycle Management) uses, only the aged and infrequently accessed data is moved to the gateway + cloud object storage to take advantage of a lower cost tier of storage, while the more recent and active data remains on the primary block or file storage (a minimal sketch of this archive-tier pattern appears at the end of this post). For file sync and share, the small office/remote office data is moved off of the local NAS and consolidated, centralized and managed on the gateway file system. In practice, these methods will vary based on the enterprise’s requirements.

Q. Can we use cloud object storage for IoT storage that may require high IOPS?

A. High IOPS workloads are best supported by local SSD-based object, block or NAS storage. Remote or hard-drive-based object storage is better deployed with low IOPS workloads.

Q. What about software defined storage?

A. Cloud object storage may be implemented as SDS (Software Defined Storage) but may also be implemented by dedicated appliances. Most cloud object storage services are SDS based.

Q. Can you please define NAS?

A. The SNIA Dictionary defines Network Attached Storage (NAS) as: 1. [Storage System] A term used to refer to storage devices that connect to a network and provide file access services to computer systems. These devices generally consist of an engine that implements the file services, and one or more devices, on which data is stored. 2. [Network] A class of systems that provide file services to host computers using file access protocols such as NFS or CIFS.

Q. What are the challenges with NAS gateways into object storage? Aren’t there latency issues that NAS requires that aren’t available in a typical object store solution?

A. The key factor to consider is workload. If the workload of applications accessing data residing on NAS involves a high frequency of reads and writes, then that data is not a good candidate for remote or hard-drive-based object storage. However, it is commonly known that up to 80% of data residing on NAS is infrequently accessed. It is this data that is best suited for migration to remote object storage.

Thanks for all the great questions. Please check out our library of SNIA Cloud webcasts to learn more. And follow us on Twitter @sniacloud_com for announcements of future webcasts.
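As promised above, here is a minimal sketch of the archive-tier pattern, assuming the boto3 library and an S3-compatible endpoint exposed by a gateway or cloud object store; the endpoint, bucket name and file paths are placeholders rather than part of any product:

# Sketch: move an infrequently accessed file to an S3-compatible object store
# (the kind of archive tier a cloud storage gateway typically fronts).
# Endpoint, bucket, and file paths below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # gateway or cloud endpoint (placeholder)
)

bucket = "cold-archive"                       # placeholder bucket
local_path = "/data/nas/reports/2016-q4.csv"  # placeholder cold file on the NAS tier
object_key = "reports/2016-q4.csv"

# Upload the cold file; it can then be removed or stubbed on the primary NAS tier.
s3.upload_file(local_path, bucket, object_key)

# Later, infrequent reads pull the object back on demand.
s3.download_file(bucket, object_key, "/tmp/2016-q4.csv")

A real gateway would automate the tiering policy and metadata handling; the point of the sketch is simply that the cold data ends up behind an S3 API rather than consuming primary NAS capacity.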
