New SNIA-CSI Webcast: LTFS Bulk Transfer Standard

Alex McDonald

Feb 2, 2015

Mark your calendar for February 10th as we conclude our Cloud Developer’s series by hosting a live Webcast on the LTFS Bulk Transfer Standard. LTFS (Linear Tape File System) technology provides compelling economics for the bulk transportation of data between enterprise and cloud storage.

This Webcast will provide an update on the joint work of the LTFS and Cloud Technical Working Groups on a bulk transfer standard that uses LTFS to allow reliable movement of bulk data in and out of the cloud, along with mechanisms for verification, error handling, and the management of namespaces. Register now to hear David Slik, Co-Chair of the SNIA Cloud Storage Technical Work Group, discuss:
  • LTFS standard mandate and history
  • LTFS adoption and use cases
  • LTFS bulk transfer to, from, and between clouds
  • Error handling and recovery
  • Security considerations
I’ll be hosting the event, taking your questions, and hopefully shedding some light on the importance of this standard. I hope you’ll join us.  



MRAM Topic of Open SSSI TechDev Committee Call Monday February 2 at 2:00 pm PT

Marty Foltyn

Jan 30, 2015


As part of its educational offerings, the SNIA Solid State Storage Initiative (SSSI) TechDev Committee will feature Barry Hoberman speaking on Spin Transfer and MRAM.

This conference call and SNIA WebEx session, held at 2:00 pm Pacific time on February 2, 2015, is open to the public. Find the answers to your questions on Spin Transfer/MRAM, including:

  • What are the drivers pushing emergence / adoption of Spin Transfer / MRAM?
  • What are the compelling advantages of Spin Transfer / MRAM?
  • What are the key applications that will be able to take advantage of MRAM?
  • What has to happen for Spin Transfer to find traction and deployment?
  • When will Spin Transfer / MRAM market adoption take place?

Dial-in: snia.webex.com, meeting number 794 116 066, password TechDev2015
Teleconference: 1-866-439-4480, passcode 57236696#

Looking forward to seeing you!


New ESF Webcast: Benefits of RDMA in Accelerating Ethernet Storage Connectivity

David Fair

Jan 30, 2015


We’re kicking off our 2015 ESF Webcasts on March 4th with what we believe is an intriguing topic – how RDMA technologies can accelerate Ethernet Storage. Remote Direct Memory Access (RDMA) has existed for many years as an interconnect technology, providing low latency and high bandwidth in computing clusters. More recently, RDMA has gained traction as a method for accelerating storage connectivity and interconnectivity on Ethernet. In this Webcast, experts from Emulex, Intel and Microsoft will discuss:

  • Storage protocols that take advantage of RDMA
  • Overview of iSER for block storage
  • Deep dive into SMB Direct for file storage
  • Benefits of available RDMA technologies to accelerate your Ethernet storage connectivity, both iWARP and RoCE

Register now. This live Webcast will provide attendees with a vendor-neutral look at RDMA technologies and should prove to be an interactive and informative event. I hope you’ll join us!




Volunteers Honored at SNIA Symposium

Marty Foltyn

Jan 23, 2015


The SNIA Solid State Storage Initiative (SSSI), its affiliated Technical Work Groups (TWGs), and its individual members were honored at this week’s SNIA Member Symposium in San Jose, California.


From left, Paul Wassenberg, Marvell (SNIA SSSI Chair);
Paul Von Behren, Intel (SNIA NVM Programming TWG Co-Chair);
Jim Ryan, Intel (SNIA SSSI Marketing Chair);
and Arthur Sainio, SMART Modular (SNIA NVDIMM SIG Co-Chair)

At Wednesday’s SNIA member recognition event, five awards were bestowed based on votes by their colleagues in the Association:

1. MOST OUTSTANDING ACHIEVEMENTS OF A SNIA TECHNOLOGY COMMUNITY – SSSI (for the second year in a row)

2. MOST SIGNIFICANT CONTRIBUTIONS BY A COMMITTEE – NVDIMM SIG (in their first year of existence)

3. MOST SIGNIFICANT IMPACT BY A TECHNICAL WORK GROUP – NVM Programming TWG (adding to all the other awards it has received in the past 2 years)

4. VOLUNTEER OF THE YEAR – Jim Ryan (who, in addition to managing the Storage Industry Summit for two years in a row, is also the SSSI Marketing co-chair)

5. INDUSTRY IMPACT AWARD – Paul von Behren, NVM Programming TWG chair (and tireless advocate of NVM technology)

In addition, Phil Mills, SNIA SSSI Founding Chair, was inducted into the SNIA Hall of Fame.

More details are available at www.snia.org/about/awards.

Congratulations to these SNIA volunteers and groups, who are poised for a great 2015!


OpenStack Cloud Storage Q&A

Sam Fineberg

Jan 21, 2015


More than 300 people have seen our Webcast “OpenStack Cloud Storage.” If you missed it, it’s now available on demand. It was a great session with a lot of questions from attendees. We did not have time to address them all – so here is a complete Q&A. If you think of any others, please comment on this blog. Also, mark your calendar for January 29th when the SNIA Cloud Storage Initiative will continue its Developers Tutorial Series with a live Webcast on OpenStack Manila.

Q. Is it correct to say that one can use OpenStack on any vendor’s hardware?

A. Servers, yes, assuming the hardware is supported by Linux. Block storage requires a driver, and not all vendor systems have Cinder drivers.

Q. Is there any OpenStack investigation and/or development in the storage networking area?

A. Cinder includes support for FC and iSCSI. As of Icehouse, the FC support also includes auto-zoning. 

Q. Is there any monetization going on around OpenStack, like we see for distros of Linux?

A. Yes, there are already several commercial distributions available.

Q. Is erasure code needed to get a positive business case for Swift, when compared with traditional storage systems?

A. It is a way to reduce the cost of replication. Traditional storage systems typically already have erasure coding, in the form of RAID. Systems without erasure coding end up using more storage to achieve the same level of protection because they rely on 3-way replication: every byte is stored three times (3x raw capacity), whereas a typical erasure code (say, 10 data fragments plus 4 parity fragments) stores roughly 1.4x while still tolerating multiple failures.

Q. Is erasure code currently implemented in the current Swift release?

A. No, it is a separate development stream, which has not been merged yet.

Q. Any limitation on the number of objects per container or total number of objects per Swift cluster?

A. Technically there are no limits. However, in practice, the fact that containers are implemented using SQLite limits their size to a million, or maybe a few million, objects per container. However, due to the way that Swift partitions its metadata, each user can also have millions of containers, and there can be millions of users. So practically speaking, the total system can support an unlimited number of objects.
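
To make that container-partitioning point concrete, here is a minimal sketch (not from the Webcast) using the python-swiftclient library to spread objects across many containers instead of piling them into one. The auth endpoint, credentials, and naming scheme are assumptions for illustration only.

```python
# Minimal sketch (assumed endpoint, credentials, and naming scheme):
# spread objects across many containers so that no single container's
# SQLite database has to hold tens of millions of rows.
import hashlib

from swiftclient.client import Connection

conn = Connection(
    authurl="http://keystone.example.com:5000/v2.0",  # hypothetical Keystone endpoint
    user="demo:demo",
    key="secret",
    auth_version="2",
)

NUM_CONTAINERS = 64  # number of container "shards"


def container_for(object_name):
    """Pick a container deterministically by hashing the object name."""
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    return "photos-%02d" % (int(digest, 16) % NUM_CONTAINERS)


object_name = "vacation/img_0001.jpg"
container = container_for(object_name)

conn.put_container(container)  # creating a container that already exists is a no-op
conn.put_object(container, object_name, contents=b"...image bytes...")
```

This is the same idea the answer above alludes to: many containers, each holding a manageable number of objects.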

Q. What are some of the technical reasons for an enterprise to select Swift vs. Amazon S3? In other words, are they pretty much direct alternatives, or does each have its own preferred use cases?

A. They are more or less direct alternatives. There are some minor differences, but they are made for the same purpose. That said, S3 is only available from Amazon. There are some S3 compatible systems, but most of those also support Swift. Swift, on the other hand, is available open source or from multiple vendors. So if you want to run it in your own data center, or in a public cloud other than Amazon, you probably want Swift.

Q. If I wanted to play around with OpenStack, Cinder, and Swift in a lab environment (or in my basement), what do I need and how do I get started?

A. openstack.org is the best place to start. The “devstack” distribution is also good for playing around.

Q. Will you be showing any features for Kilo?

A. The “Futures” I showed will likely be Kilo features, though the final decision of what will be in Kilo won’t happen until just before release.

 Q. Are there any plans to implement data encryption in Cinder?

A. I believe some of the back ends can support encryption already. Cinder is really just a provisioning and orchestration layer. Encryption is a data path feature, so it would need to be implemented in the back end.

Q. Some time back I heard OpenStack Swift is going to come up with block storage as well, any timeline for that?

A. I haven’t heard this; Swift is object storage.

Q. The performance characteristics of Cinder block services can vary quite widely. Is there any standard measure proposed within OpenStack to inform Nova or the application about the underlying Cinder block performance characteristics?

A. Volume types were designed to enable clouds to provide different levels of service. The meaning of these types is up to the cloud administrator. That said, Cinder does expose QoS features like minimum/maximum IOPS.
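
As a rough illustration of how volume types and QoS specs surface to a developer, here is a hypothetical sketch using the python-cinderclient library. The credentials, tier name, and IOPS values are assumptions, and which QoS keys are actually honored depends on the back-end driver.

```python
# Sketch (assumed credentials and limits): publish a service level as a
# Cinder volume type with QoS specs attached, then request a volume of it.
from cinderclient import client

cinder = client.Client(
    "2",                                          # Cinder API version
    "demo",                                       # username   (hypothetical)
    "secret",                                     # password   (hypothetical)
    "demo",                                       # project    (hypothetical)
    "http://keystone.example.com:5000/v2.0",      # auth URL   (hypothetical)
)

# The cloud administrator defines a "gold" tier and attaches IOPS limits.
gold = cinder.volume_types.create("gold")
qos = cinder.qos_specs.create("gold-iops", {"minIOPS": "500", "maxIOPS": "5000"})
cinder.qos_specs.associate(qos, gold.id)

# Applications (or Nova, on their behalf) simply ask for the type by name.
vol = cinder.volumes.create(size=10, name="db-data", volume_type="gold")
print(vol.id, vol.status)
```

The key point from the answer above still holds: what "gold" actually means is whatever the cloud administrator decides it means.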

Q. Is the hypervisor talking to a Cinder volume or to (for example) a NetApp or EMC volume?

A. The hypervisor talks to a volume the same way it does outside of OpenStack. For example, the KVM hypervisor can talk to volumes through LVM, or can mount SAN volumes directly.

Q. Which of these projects are most production-ready?

A. This is a hard question, and depends on your definition of production ready. It’s hard to do much without Nova, Glance, and Horizon. Most people use Cinder too, and Swift has been in production at HP and Rackspace for years. Neutron has a lot of complexity, so some people still use Nova network, but that has many limitations. For toy clouds you can avoid using Keystone, but you need it for a “production” cluster. The best way to get a “production ready” OpenStack is to get a supported commercial distribution.

Q. Are there any Plugfests?

A. No, however, the Cinder team has a fairly extensive and continuous integration process that drivers need to pass through. Swift does not because it doesn’t officially "support" any plugins.



OpenStack Manila Webcast – Shared File Services for the Cloud

Alex McDonald

Jan 7, 2015


On January 29th, we continue our Cloud Developer’s series by hosting a live Webcast on OpenStack Manila – the OpenStack file share service. Manila provides the management of file shares (for example, NFS and CIFS) as a core service to OpenStack. Manila currently works with a variety of vendors’ storage products, including NetApp, Red Hat, EMC, IBM, and with the Linux NFS server.
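
To give a concrete feel for what "management of file shares as a core service" looks like to a developer, here is a minimal, hypothetical sketch using the python-manilaclient library. The credentials, share size, and client subnet are assumptions, and exact method signatures vary by client release.

```python
# Sketch (assumed credentials and values): ask Manila for an NFS share and
# allow a client subnet to access it. The back end (NetApp, Red Hat, EMC,
# IBM, the generic Linux NFS driver, ...) is chosen by the Manila scheduler.
from manilaclient import client

manila = client.Client(
    "1",                                          # Manila API version
    "demo",                                       # username   (hypothetical)
    "secret",                                     # password   (hypothetical)
    "demo",                                       # project    (hypothetical)
    "http://keystone.example.com:5000/v2.0",      # auth URL   (hypothetical)
)

# Create a 1 GB NFS share.
share = manila.shares.create(share_proto="NFS", size=1, name="projects")

# Grant a client network access to the share; clients then mount the
# export location it reports (e.g. mount -t nfs <export> /mnt/projects).
manila.shares.allow(share.id, "ip", "10.0.0.0/24")
```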

In this Webcast we will:

  • Introduce the Manila file share service
  • Review key Manila concepts
  • Describe the logical architecture of Manila and its API structure
  • Explain what’s new in Juno, the latest release of OpenStack
  • Highlight the roadmap for Manila in the next release, OpenStack Kilo, and beyond

Register now for this live event that we expect will be informative and interactive. I hope you’ll join us.

