
A Q&A on Storage Management – These Folks Weren't Too Proud to Ask!

J Metz

Oct 26, 2017

The most recent installment of our SNIA ESF webcast series "Everything You Wanted To Know About Storage But Were Too Proud To Ask" took on a broad topic: storage management. Our experts, Richelle Ahlvers, Mark Rogov and Alex McDonald did a great job explaining the basics and have now answered the questions that attendees asked here in this blog. If you missed the live webcast, check it out on-demand and download a copy of the slides if you'd like to take notes.

Q. What is the difference between storage and a database? Could a database be considered storage?

A. The short answer is no. The long answer relies on the fact that a database doesn't just store data: it modifies the data to fit into its schema (table, index, etc.). A storage solution doesn't mutate the data in any way; the data is always preserved as is.

Q. Doesn't provisioning a storage array mean setting it up?

A. Within the storage community, provisioning is akin to serving a cake at a party. To provision storage to a server means cutting a slice of usable capacity and allocating it to a very specific server. The particular pairing is carefully recorded.

Q. Does deduplication fall into configuration? Even when it is done only on cold data?

A. Great question! Deduplication is one of the services that a storage array may offer, so enabling it is configuring that service. As for the second part of your question, the point at which deduplication happens is irrelevant: it may happen on cold data (data that is stored on the array but that applications haven't accessed in a long time), or on hot or in-flight data (frequently accessed data or data inside cache).

Q. Do hyperscale vendors (like AWS) use any of this storage management?

A. Hyperscale vendors, like all consumers of storage, use storage management to configure their storage. They use a combination of vendor device tools and custom-developed scripts/tools, but are not heavy consumers of industry-standard storage interfaces today. Swordfish's RESTful interface will provide an easy-to-consume API for hyperscale vendors to integrate into their management ecosystems as vendors start delivering Swordfish-based solutions.

Q. It was mentioned that there was a "steep learning curve" for the previous SNIA storage management model. Any idea how much easier this is to learn?

A. One of the major advantages of Swordfish is that the RESTful APIs are standardized and can take advantage of readily available tools and infrastructure. With the JSON-based payload, you can use standard plug-ins for browsers, as well as Python scripts, to immediately interact with the Swordfish APIs. This is a distinct difference from the SMI-S APIs, which, being XML-based, required custom or third-party tools to interact with SMI-S providers.

Q. You talked about how Swordfish is being designed as more user and client centric. How are you doing this?

A. We are starting with very specific use cases and scenarios (rather than looking at "what is all the functionality we could expose") as we build both the structure of the API and the amount of information returned. We've also documented a lot of the basic use cases, and who might like to do them, in a user's guide, and published that alongside the Swordfish specification. You can find links to this at the SNIA Swordfish page: snia.org/swordfish

Q. You weren't specific on storage management tools, and I was expecting more detail. I'm wondering why you did this at such a high level, as this really hasn't helped me.

A. We were primarily referring to ITIL (the Information Technology Infrastructure Library). It's a framework designed to standardize the selection, planning, delivery and support of IT services to a business. Learn more here.

Q. While most of the products today support SMI-S, it's not something that DevOps or storage admins use directly. How, if at all, is Swordfish going to be different?

A. There are two primary ways we see the Swordfish API being much more accessible directly to individual admins. First, as a RESTful interface, it is very easy to access and traverse with the tools they use daily: from web browsers directly, to REST plugins, to simple (or complex) Python scripts. The learning curve to start interacting with Swordfish is extremely small. You can get a sense by going to the online mockup site at http://swordfishmockups.com, where there are simple instructions to either browse the system directly or use some standard clients to make it easier. That will give you an idea of how easy it will be to start interacting with Swordfish (plus security for a real system, of course).

Remember, "Everything You Wanted To Know About Storage But Were Too Proud To Ask" is a series. We've covered 8 storage topics to date and have a library of on-demand webcasts you can watch at your convenience. Happy viewing!
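Because Swordfish rides on a standard RESTful/JSON interface, a few lines of stock Python are enough to walk a service. The sketch below uses only the standard library; the service root and collection paths follow Redfish/Swordfish conventions, but treat the exact URLs as illustrative rather than a guaranteed layout of the mockup site.

```python
import json
import urllib.request

BASE = "http://swordfishmockups.com/redfish/v1"  # public mockup; path layout is illustrative

def http_fetch(url):
    """GET a URL and parse the JSON body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def list_collection_members(fetch, collection_url):
    """Return the @odata.id link of every member of a Redfish/Swordfish
    collection. `fetch` is any callable mapping a URL to a parsed JSON
    dict, so the traversal logic works against a live service or a stub."""
    body = fetch(collection_url)
    return [member["@odata.id"] for member in body.get("Members", [])]

# Against a live service you would browse down from the root, e.g.:
#   list_collection_members(http_fetch, BASE + "/StorageServices")
```

Each member URL returned can be fetched the same way, which is why a web browser, curl, or a short Python script all traverse the same model.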


Expert Answers to Cloud Object Storage and Gateways Questions

Alex McDonald

Oct 24, 2017

In our most recent SNIA Cloud webcast, "Cloud Object Storage and the Use of Gateways," we discussed market trends toward the adoption of object storage and the use of gateways to execute on a cloud strategy. If you missed the live event, it's now available on-demand together with the webcast slides. There were many good questions at the live event and our expert, Dan Albright, has graciously answered them in this blog.

Q. Can object storage be accessed by tools for use with big data?

A. Yes. Technically, big data can be accessed in real time through HDFS connectors such as S3, but this is conditional on latency: if the object storage is based on hard drives, it should not be used as primary storage, as it would run very slowly. The guidance is to use hard-drive-based object storage either as an online archive or as a backup target for HDFS.

Q. Will current block storage or NAS be replaced with cloud object storage + gateway?

A. Yes and no. It depends on the use case. For ILM (Information Lifecycle Management) uses, only the aged and infrequently accessed data is moved to the gateway + cloud object storage, to take advantage of a lower-cost tier of storage, while the more recent and active data remains on the primary block or file storage. For file sync and share, the small office/remote office data is moved off of the local NAS and consolidated, centralized and managed on the gateway file system. In practice, these methods will vary based on the enterprise's requirements.

Q. Can we use cloud object storage for IoT storage that may require high IOPS?

A. High-IOPS workloads are best supported by local SSD-based object, block or NAS storage. Remote or hard-drive-based object storage is better deployed with low-IOPS workloads.

Q. What about software-defined storage?

A. Cloud object storage may be implemented as SDS (Software Defined Storage) but may also be implemented by dedicated appliances. Most cloud object storage services are SDS based.

Q. Can you please define NAS?

A. The SNIA Dictionary defines Network Attached Storage (NAS) as:
1. [Storage System] A term used to refer to storage devices that connect to a network and provide file access services to computer systems. These devices generally consist of an engine that implements the file services, and one or more devices, on which data is stored.
2. [Network] A class of systems that provide file services to host computers using file access protocols such as NFS or CIFS.

Q. What are the challenges with NAS gateways into object storage? Aren't there latency issues, given that NAS requires latencies a typical object store can't deliver?

A. The key factor to consider is workload. If the applications accessing data residing on NAS generate a high frequency of reads and writes, then that data is not a good candidate for remote or hard-drive-based object storage. However, it is commonly known that up to 80% of data residing on NAS is infrequently accessed. It is this data that is best suited for migration to remote object storage.

Thanks for all the great questions. Please check out our library of SNIA Cloud webcasts to learn more. And follow us on Twitter @sniacloud_com for announcements of future webcasts.
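The ILM split described above (active data on primary block/file storage, aged data on gateway-fronted object storage) boils down to an age-based placement policy. Here is a minimal sketch; the 90-day threshold is our own illustration, not a number from the webcast:

```python
from datetime import datetime, timedelta

def pick_tier(last_accessed: datetime, now: datetime,
              archive_after: timedelta = timedelta(days=90)) -> str:
    """Decide where a file should live under a simple ILM policy:
    data untouched for `archive_after` or longer moves to cloud object
    storage via the gateway; everything else stays on primary storage."""
    return "cloud-object" if now - last_accessed >= archive_after else "primary"

now = datetime(2017, 10, 24)
pick_tier(datetime(2017, 10, 1), now)   # recent data stays on "primary"
pick_tier(datetime(2017, 1, 15), now)   # cold data goes to "cloud-object"
```

A real gateway applies a policy like this continuously, which is how the commonly cited 80% of infrequently accessed NAS data ends up on the cheaper tier.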


It's a Wrap! SNIA’s 20th Storage Developer Conference a Success!

khauser

Oct 5, 2017

Reviews are in for the 20th Storage Developer Conference (SDC) and they are thumbs up! The 2017 SDC was the largest ever, expanding to four full days with seven keynotes, five SNIA Tutorials, and 92 sessions. The SNIA Technical Council, which oversees conference content, compiled a rich agenda of 18 topic categories focused on the technology growth markets of physical storage, storage management, data, object storage, and cloud storage. Storage Architecture led the way with 20 individual sessions, followed by 15 Solid State/Non-Volatile Memory sessions, eight SMB sessions, and six NVMe sessions. Storage management, including SMI and SNIA Swordfish® presentations, had an entire track with eight sessions on Thursday.

Attendees called out a number of sessions as their favorites, and the ratings proved it. But don't be alarmed if you missed out: SNIA has recordings and downloads of the presentations at your fingertips. Attendees were also treated to NVMe and Persistent Memory demonstrations by SDC sponsors, three Plugfests, and a host of networking conversations happening up and down the hallways. The "high caliber speakers and presentation content," learning about "recent developments in the industry," and "connecting directly with other developers who are tackling the same problems" were cited by attendees as some of the most beneficial aspects of the conference.

Whether you participated in person or virtually by viewing videos, downloading presentations, or listening to podcasts, let us know what you would like to see for future SDCs. Is it Modern Storage for Modern Data Centers? NVMe and NVMe-oF? Persistent Memory and PM-oF? Artificial intelligence? New directions? We want to know! Watch for our Post Event Survey to be sent out shortly. And thank you for contributing to a great 2017 SDC!


Storage for Transactional Systems: From Banking to Facebook

J Metz

Oct 5, 2017

We're all accustomed to transferring money from one bank account to another; a debit to the payer becomes a credit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. Today, we're also well acquainted with ordering goods online, reserving an airline seat over the Internet, or simply updating a photograph on Facebook. Can these applications use the same banking models, or are new techniques required? It's a question we'll tackle at our next Ethernet Storage Forum webcast on October 31st, "Transactional Models & Their Storage Requirements."

One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission-critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It's a topic full of interesting concepts (and lots of acronyms!). In this webcast, we'll provide a brief tour of traditional transactional systems and their use of storage, we'll explain new application techniques and transaction models, and we'll discuss what storage systems need to look like to support these new advances. And yes, we'll decode all the acronyms and nomenclature too. You will learn:
  • A brief history of transactional systems from banking to Facebook
  • How the Internet and distributed systems have changed and how we view transactions
  • An explanation of the terminology, from ACID to CAP and beyond
  • How applications, networks & particularly storage have changed to meet these demands
You may have noticed this webcast is on Halloween, October 31st. We promise it will be a treat not a trick! I encourage you to register today.
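The bank-transfer model is easy to see in miniature. Here is a hedged sketch using Python's built-in sqlite3 (the table and account names are invented for illustration): the debit and the credit either both commit or both roll back, which is the atomicity in ACID.

```python
import sqlite3

def transfer(conn, payer, payee, amount):
    """Move `amount` from payer to payee atomically: if anything fails
    mid-way, the transaction rolls back and neither balance changes."""
    with conn:  # commits on success, rolls back on any exception
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE name = ? AND balance >= ?",
            (amount, payer, amount))
        if cur.rowcount != 1:
            raise ValueError("unknown payer or insufficient funds")
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = ?",
            (amount, payee))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
transfer(conn, "alice", "bob", 30)  # alice: 70, bob: 30
```

Distributed systems relax exactly this guarantee, which is where CAP and the newer transaction models the webcast covers come in.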


Take the 2017 Archive Requirements Survey!

khauser

Sep 19, 2017

by Samuel A. Fineberg, Co-chair, SNIA LTR TWG

Ten years ago, a SNIA Task Force undertook a 100 Year Archive Requirements Survey with a goal to determine requirements for long-term digital retention in the data center. The Task Force hypothesized that the practitioner survey respondents would have experience with terabyte archive systems adequate to define business and operating system requirements for petabyte-sized information repositories in the data center.

Time flies while you're having fun. Now it's 2017, and the SNIA Long-Term Retention Technical Working Group (LTR TWG) and the SNIA Data Protection & Capacity Optimization Committee have teamed up to launch the 2017 SNIA Archive Survey.

Back in the first decade of the 21st century, practitioners struggled with logical and physical retention, but for the most part generally understood their problems. Eighty percent of organizations participating in the 2007 survey had a need to retain information over 50 years, while 68% reported a need of over 100 years. However, "long term" realistically extended only to about 2017-2022 for migrating data and retaining readability. After that, survey respondents felt that processes would fail and/or become too costly under an expected avalanche of information.

Fast forward to 2017: new standards, storage formats, and software are in play, and markets like cloud services offer choices that did not exist 10 years ago. Migration and retention solutions are becoming available, but these solutions are not widely used except in government agencies, libraries, and highly regulated industries. Understanding what is needed and why is a focus of SNIA's new survey.

The 2017 survey seeks to assess who needs to retain long-term information and what information needs to be retained, with appropriate policies. The focus will now be on IT best practices, not just business requirements. How is long-term information stored, secured, and preserved? Does the cloud impact long-term retention requirements?

SNIA's 2017 Archive Survey launched at the September 2017 Storage Developer Conference. We're sending out the call. Are you a member of an IT staff associated with archives? In Records and Information Management (RIM)? An academic? In Legal or Finance? If long-term data preservation is near and dear to your heart, you'll want to take the survey, which covers business drivers, policies, storage, practices, preservation, security, and more. Help SNIA understand how archive practices have evolved in the last 10 years, what changes have taken place in corporate practices, and what technology changes have impacted daily operations.

Take the survey and join us at Storage Visions in Milpitas, CA on October 16, 2017, where we'll be discussing SNIA's work in long-term retention and data protection. Finally, stay tuned: we'll be publishing our results in early 2018!


How Gateways Benefit Cloud Object Storage

Alex McDonald

Aug 29, 2017

The use of cloud object storage is ramping up sharply, especially in the public cloud, where its simplicity can significantly reduce capital budgets and operating expenses. And while it makes good economic sense, enterprises are challenged with legacy applications that do not support standard protocols to move data to and from the cloud. That's why the SNIA Cloud Storage Initiative is hosting a live webcast on September 26th, "Cloud Object Storage and the Use of Gateways."

Object storage is a secure, simple, scalable, and cost-effective means of managing the explosive growth of unstructured data enterprises generate every day. Enterprises have developed data strategies specific to the public cloud: improved data protection, long-term archive, application development, DevOps, Data Science, and cognitive artificial intelligence, to name a few. However, these same organizations have legacy applications and infrastructure that are not object storage friendly, but use file protocols like NFS and SMB. Gateways enable SMB and NFS data transfers to be converted to Amazon's S3 protocol while optimizing data with deduplication, providing QoS (quality of service), and delivering efficiencies on the data path to the cloud.

This webcast will highlight the market trends toward the adoption of object storage and the use of gateways to execute a cloud strategy, the benefits of object storage when gateways are deployed, and the use cases that are best suited to leverage this solution. You will learn:
  • The benefits of object storage when gateways are deployed
  • Primary use cases for using object storage and gateways in private, public or hybrid cloud
  • How gateways can help achieve the goals of your cloud strategy without retooling your on-premise infrastructure and applications
We plan to share some pearls of wisdom on the challenges organizations are facing with object storage in the cloud from a vendor-neutral, SNIA perspective. If you need a firm background on cloud object storage before September 26th, I encourage you to watch the SNIA Cloud on-demand webcast, “Cloud Object Storage 101.” It will provide you with a foundation to get even more out of this upcoming webcast. I hope you will join us on September 26th. Register now to save your spot.
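Among the gateway optimizations mentioned above, deduplication is the easiest to picture in code. Here is a minimal fixed-block sketch (the block size and hashing scheme are illustrative; real gateways typically use more sophisticated variable-length chunking): each unique block is stored once, and an ordered list of hashes is enough to rebuild the file.

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, keeping one copy per unique
    block. Returns (store, recipe): `store` maps hash -> block bytes,
    `recipe` is the ordered list of hashes that reconstructs the data."""
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy is kept
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original bytes from the deduplicated store."""
    return b"".join(store[d] for d in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # 4 blocks, only 2 unique
store, recipe = dedupe(data)
```

In the example, four blocks collapse to two stored copies, which is the kind of reduction that makes the data path to the cloud cheaper and faster.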
