
New Webcast: The Life of a Storage Packet (Walk)

J Metz

Nov 10, 2015

Wonder how storage really works? When we talk about "Storage" in the context of data centers, it can mean different things to different people. Someone who is developing applications will have a very different perspective than someone who is responsible for managing that data on some form of media. Moreover, someone who is responsible for transporting data from one place to another has their own view that is related to, and yet different from, the previous two. Add in virtualization and layers of abstraction, from file systems to storage protocols, and things can get very confusing very quickly. Pretty soon people don't even know the right questions to ask!

That's why we're hosting our next SNIA Ethernet Storage Webcast, "Life of a Storage Packet (Walk)." Join us on November 19th to learn how applications and workloads get information. Find out what happens when you need more of it, or faster access to it, or need to move it far away. This Webcast will take a step back and look at "storage" with a "big picture" perspective, looking at the whole piece and attempting to fill in some of the blanks for you. We'll be talking about:
  • Applications and RAM
  • Servers and Disks
  • Networks and Storage Types
  • Storage and Distances
  • Tools of the Trade/Offs
The goal of the Webcast is not to make specific recommendations, but to equip you with information that will help you ask the relevant questions, as well as gain keener insight into the consequences of storage choices. As always, this event is live, so please bring your questions; we'll answer as many as we can on the spot. I encourage you to register today. Hope to see you on November 19th!

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.

Olivia Rhye

Product Manager, SNIA


Assessing SSD Performance in the Data Center

Marty Foltyn

Oct 20, 2015

By Marty Foltyn

As solid state drives (SSDs) are deployed in datacenters around the world in both hybrid HDD/SSD and all flash arrays (AFAs), it is becoming increasingly important to understand which metrics are relevant for assessing SSD datacenter performance. While the traditional metrics of IO operations per second (IOPS), Bandwidth, and Response Times are commonly used, it is becoming more important to report and understand the 'Quality of Service' of those metrics.

Eden Kim, Chair of the SNIA Solid State Storage Technical Working Group, has recently authored an article on Understanding Data Center Workloads. In it, he defines workloads, and specifically data center workloads, describes how they are tested, and shows how to measure workloads for performance analysis. Industry standard test methodologies that ensure fair and accurate testing of SSDs at both the device and system level are described, along with how to use them on a reference test platform. Eden also describes in depth Response Time Confidence levels and how an understanding of Demand Variation and Demand Intensity can help the IT administrator assess how a given SSD or array will perform relative to the requirements of an application workload, or relative to a specific Response Time Ceiling, thus helping in overall system optimization, design, and deployment.

Read Eden's full article on the SNIA Solid State Storage Education page at http://www.snia.org/forums/sssi/knowledge/education. Scroll down to "Performance" to find this and a whole range of white papers, tech notes, webcasts, and presentations on this important Solid State Storage topic.
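
The quality-of-service point is easy to see with a toy example. The numbers and the nearest-rank percentile helper below are our own illustration, not from Eden's article; they simply show how a response-time ceiling at a high confidence level can differ wildly from the averages that IOPS-only reporting implies:

```python
import math
import statistics

def latency_ceiling(latencies_ms, confidence):
    """Smallest latency that `confidence` of all IOs complete within
    (nearest-rank percentile)."""
    ranked = sorted(latencies_ms)
    idx = math.ceil(confidence * len(ranked)) - 1
    return ranked[max(idx, 0)]

# Hypothetical 1-second sample: mostly fast IOs with a small slow tail.
sample = [0.2] * 9900 + [5.0] * 90 + [50.0] * 10

print("IOPS:          ", len(sample))
print("mean latency:  ", round(statistics.mean(sample), 3), "ms")
print("99% ceiling:   ", latency_ceiling(sample, 0.99), "ms")
print("99.9% ceiling: ", latency_ceiling(sample, 0.999), "ms")
print("99.99% ceiling:", latency_ceiling(sample, 0.9999), "ms")
```

Here the mean latency is a healthy 0.293 ms, yet one IO in a thousand takes 50 ms; an application with a strict response-time ceiling cares about the latter, which is exactly what confidence-level reporting captures.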


New White Paper: An Updated Overview of NFSv4

Alex McDonald

Oct 20, 2015


Maybe you’ve asked yourself recently, “Hmm, I wonder what’s new in NFSv4?” Maybe (and more likely) you haven’t; but you should.

During the last few years, NFSv4 has become the version of choice for many users, and there are lots of great reasons for making the transition from NFSv3 to NFSv4, not the least of which is that it’s a relatively straightforward transition.

But there’s more; NFSv4 offers features unavailable in NFSv3. Parallelization, better security, WAN awareness and many other features make it suitable as a file protocol for the next generation of applications. As a proof point, lately we’ve seen new clients of NFSv4 servers beyond the standard Linux client, including support in VMware’s vSphere for virtual machine datastores accessible via NFSv4.

In this updated white paper, An Updated Overview of NFSv4, we explain how NFSv4 is better suited to a wide range of datacenter and high performance compute (HPC) uses than its predecessor NFSv3, and provide resources for migrating from v3 to v4.

You’ll learn:

  • How NFSv4 overcomes statelessness issues associated with NFSv3
  • Advantages and features of NFSv4.1 & NFSv4.2
  • What parallel NFS (pNFS) and layouts do
  • How NFSv4 supports performant WAN access
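
For a feel of what pNFS layouts buy you, here is a deliberately simplified sketch (our own illustration, not the actual protocol or any real client code): a metadata server hands the client a layout mapping stripes of a file to data servers, and the client can then fetch those stripes in parallel instead of funneling every byte through one server.

```python
STRIPE_SIZE = 4  # bytes; tiny on purpose, just for illustration

def layout_for(file_bytes, data_servers):
    """Map each stripe of the file to a data server, round-robin."""
    layout = []
    for stripe_no in range(0, len(file_bytes), STRIPE_SIZE):
        server = data_servers[(stripe_no // STRIPE_SIZE) % len(data_servers)]
        layout.append((stripe_no, server))
    return layout

data = b"NFSv4.1 pNFS layouts!"
servers = ["ds1", "ds2", "ds3"]  # hypothetical data servers
for offset, server in layout_for(data, servers):
    chunk = data[offset:offset + STRIPE_SIZE]
    print(f"{server}: bytes {offset}-{offset + len(chunk) - 1} {chunk!r}")
```

The real protocol adds layout types, recalls, and commit semantics, but the core idea is this separation of metadata (who holds which range) from the data path.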

We believe this document makes the argument that users should, at the very least, be evaluating and deploying NFSv4 for use in new projects; and ideally, should be using it wholesale in their existing environments. The information in this white paper is meant to be comprehensive and educational and we hope you find it helpful.

If you have questions or comments after reading this white paper, please comment on this blog and we’ll get back to you as soon as possible.


OpenStack Manila – A Q&A on Liberty and Mitaka

Alex McDonald

Oct 16, 2015


Our recent Webcast with OpenStack Manila Project Team Lead (PTL) Ben Swartzlander generated a lot of great questions. As promised, we’ve compiled answers for all of the questions that came in. If you think of additional questions, please feel free to comment on this blog. And if you missed the live Webcast, it’s now available on-demand.

Q. Is Hitachi Data Systems contributing to the Manila project?

A. Yes, Hitachi contributed a new driver and also contributed a major new feature (migration) during Liberty. HDS was also active during the Kilo release with a different driver which is unfortunately no longer maintained.

Q. EMC has open sourced ViPR as CoprHD. Do you see any overlap between Manila/Cinder on the one hand and CoprHD on the other?

A. I’m not familiar enough with CoprHD to answer authoritatively, but I understand that there is definitely some overlap between it and Cinder, and I also expect there is some overlap with Manila. Assuming there is some overlap, I think that’s a great thing because competition within open source drives greater quality, and it’s confirmation that there is real demand for what we’re building.

Q. Could Manila be used stand-alone (without OpenStack) to create a fileshare server?

A. Yes, the only OpenStack service Manila depends on is Keystone (for authentication). Running Manila in a stand-alone fashion is a specific use case the team supports.

Q. If we are mapping the snapshot images, what is the guarantee of data integrity?

A. Snapshots are typically crash-consistent copies of the filesystem at a point in time. In reality the exact guarantee depends on the backend used, and that’s something we’d like to avoid, so that the snapshot semantics are clear to the user. In the future, backends which cannot meet the crash-consistent guarantee will probably be forced to advertise a different capability so end users are aware of what they’re getting.
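
To make "crash-consistent copy at a point in time" concrete, here is a toy copy-on-write-style volume (our illustration, not Manila code): the snapshot records the block map as it stood, so later writes to the live share never leak into it. Note that nothing here flushes application buffers, which is precisely why crash consistency is a weaker guarantee than application consistency.

```python
class CowVolume:
    """Toy copy-on-write volume: a snapshot captures a point in time
    and is unaffected by later writes (crash-consistent, not
    application-consistent)."""
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))
        self.snapshots = {}

    def snapshot(self, name):
        # Record the current block map; unchanged blocks stay shared.
        self.snapshots[name] = dict(self.blocks)

    def write(self, block_no, data):
        # The live volume gets new data; snapshot maps keep the old block.
        self.blocks[block_no] = data

vol = CowVolume(["a", "b", "c"])
vol.snapshot("before-upgrade")
vol.write(1, "B")
print(vol.blocks[1])                       # live volume sees the new data
print(vol.snapshots["before-upgrade"][1])  # snapshot still sees the old data
```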

Q. Is there Manila automation with Ansible?

A. As far as I know this hasn’t been done yet.

Q. For Kilo deployed in production, does it work for all commercial drivers, or is there a chart that says which commercial drivers support Kilo?

A. The developer doc now has a table which attempts to answer this question. However, the most reliable way to see which drivers are part of the stable/kilo release would be to look at the driver directory of the code. This is an area where the docs need to improve.

Q. Could you explain consistency groups?

A. Consistency groups are a mechanism to ensure that 2 or more shares can be snapshotted in a single operation. Without CGs, you can take 2 snapshots of 2 shares but there is no guarantee that those snapshots will represent the same point in time. CGs allow you to guarantee that the snapshots are synchronized, which makes it possible to use multiple shares together for a single application and to take snapshots of that application’s data in a consistent way.
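
The guarantee can be sketched in a few lines of Python (a toy model of ours, not Manila's implementation). Two shares, say a database log and its data files, are written over time; snapshotting them one at a time lets a write slip in between, while a group snapshot captures both with no intervening writes:

```python
import itertools

clock = itertools.count()  # global write sequence across both shares

class Share:
    def __init__(self, name):
        self.name, self.last_write = name, -1
    def write(self):
        self.last_write = next(clock)
    def snapshot(self):
        return (self.name, self.last_write)

def cg_snapshot(shares):
    """Quiesce all shares, then snapshot: one point in time for the group."""
    return [s.snapshot() for s in shares]  # no writes interleave here

log_share, data_share = Share("log"), Share("data")
log_share.write(); data_share.write()

# Independent snapshots: a write can sneak in between the two.
snap1 = log_share.snapshot()
data_share.write()              # application writes between the snapshots
snap2 = data_share.snapshot()
print(snap1, snap2)             # the two capture different points in time

# CG snapshot: both members captured with no intervening writes.
print(cg_snapshot([log_share, data_share]))
```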

Q. How is the consistency group in Manila different from Cinder? Is it similar?

A. The designs are very similar. There are some semantic differences in terms of how you modify the membership of the CGs, but the snapshot functionality is identical.

Q. Are you considering pNFS? I guess this will be hard since it has requirements on the client as well.

A. Manila is agnostic to the data protocol so if the backend supports pNFS and Manila is asked to create an NFS share, it may very well get a share with pNFS support. Certainly Manila supports shares with multiple export locations so that on a system with multiple network interfaces, or a clustered system, Manila will tell the clients about all of the paths to the share. In the future we may want Manila to actually know the capabilities of the backends w.r.t. what version of NFS they support so that if a user requires a minimum version we can guarantee that they get that version or get a sensible error if it’s not possible.

Q. Share Replication. In what mode, Async and/or Sync?

A. We plan to support both, and the choice of which is used will be up to the administrator. Communication about which is used and any relevant information like RPO time would be out of band from Manila. The goal of the feature in Manila is to make Manila able to configure the replication relationship, and able to initiate failovers. The intention is for planned failovers to be disruptive but with no data loss, and for unplanned failovers to be disruptive, with data loss corresponding to the RPO that the administrator configured (which would be zero for synchronous replication).
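
The RPO arithmetic behind that trade-off is simple enough to sketch (the write rate and lag below are hypothetical numbers of ours):

```python
def worst_case_data_loss_mb(write_rate_mbps, replication_lag_s, synchronous):
    """Upper bound on data lost in an unplanned failover: everything
    written since the last acknowledged replication."""
    return 0.0 if synchronous else write_rate_mbps * replication_lag_s

# 50 MB/s of writes, replica trailing by up to 30 seconds.
print(worst_case_data_loss_mb(50, 30, synchronous=False))  # async: up to 1500 MB at risk
print(worst_case_data_loss_mb(50, 30, synchronous=True))   # sync: zero RPO
```

This is why the choice is left to the administrator: synchronous replication buys zero RPO at the cost of write latency, while asynchronous replication trades a bounded loss window for performance.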

Q. Can you point me to any resources with SNIA available for OpenStack? Where can I download document, videos, etc?

A. You can find several informative OpenStack on-demand Webcasts on the SNIA BrightTalk channel here.

Moving Data Protection to the Cloud: Key Considerations

Alex McDonald

Oct 13, 2015


Leveraging the cloud for data protection can be an advantageous and viable option for many organizations, but first you must understand the pros and cons of different approaches. Join us on Nov. 17th for our live Webcast, “Moving Data Protection to the Cloud: Trends, Challenges and Strategies,” where we’ll discuss the experiences of others, with advice on how to avoid the pitfalls, especially during the transition from strictly local resources to cloud resources. We’ve pulled together a unique panel of SNIA experts, as well as perspectives from leading vendor experts at Acronis, Asigra and SolidFire, who’ll discuss and debate:

  • Critical cloud data protection challenges
  • How to use the cloud for data protection
  • Pros and cons of various cloud data protection strategies
  • Experiences of others to avoid common pitfalls
  • Cloud standards in use – and why you need them

Register now for this live and interactive event. Our entire panel will be available to answer your questions. I hope you’ll join us!

Storage Performance Benchmarking – The Sequel

J Metz

Oct 7, 2015


We at the Ethernet Storage Forum heard you loud and clear: you need more info on storage performance benchmarking. Our first Webcast, “Storage Performance Benchmarking: Introduction and Fundamentals,” was tremendously popular, reaching over 2x our average audience, while hundreds more have read our Q&A blog on the same topic. So, back by popular demand, Mark Rogov, Advisory Systems Engineer at EMC, Ken Cantrell, Performance Engineering Manager at NetApp, and I will move past the basics in the second Webcast in this series, Storage Performance Benchmarking Part 2. With a focus on the System Under Test (SUT), we’ll cover:

  • Commonalities and differences between basic Block and File terminology
  • Basic file components and the meaning of data workloads
  • Main characteristics of various workloads and their respective dependencies, assumptions and environmental components
  • The complexity of the technology benchmark interpretations
  • The importance of the System Under Test:
    • What are the elements of a SUT?
    • Why are caches so important to understanding performance of a SUT?
    • Bottlenecks and threads and why they matter
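
Why do caches matter so much to a SUT? A one-line model makes the point (the latencies below are invented for illustration): with a cache that is orders of magnitude faster than the backing store, the hit ratio, not the media, dominates what a benchmark measures.

```python
def mean_service_time_us(hit_ratio, cache_us, backend_us):
    """Average service time for a cache in front of a slower backend."""
    return hit_ratio * cache_us + (1 - hit_ratio) * backend_us

# Hypothetical 10 us cache in front of a 5000 us backend.
for hit in (0.0, 0.90, 0.99):
    print(f"hit ratio {hit:.0%}: {mean_service_time_us(hit, 10, 5000):.0f} us")
```

Two runs of the same benchmark against the same hardware can differ by an order of magnitude purely because one started with a warm cache, which is one reason interpreting results without understanding the SUT is so hazardous.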

I hope you’ll join us on October 21st at 9:00 a.m. PT to learn why file performance benchmarking truly is an art. My colleagues and I plan to deliver another informative and interactive hour. Please register today and bring your questions. I hope to see you there.

Update: This webcast is part of a series on storage performance benchmarking.

