Alex McDonald

May 28, 2015


We received several great questions at our What’s New in NFS 4.2 Webcast. We did not have time to answer them all, so here is a complete Q&A from the live event. If you missed it, it’s now available on demand.

Q. Are there commercial Linux or Windows distributions available which have adopted pNFS?

A. Yes. Red Hat RHEL 6.2, SUSE SLES 11.3 and Ubuntu 14.10 all support the pNFS-capable client. There aren't any pNFS servers on Linux so far, but commercial systems such as NetApp (file pNFS), EMC (block pNFS), Panasas (object pNFS) and perhaps others support pNFS servers. Microsoft Windows has no client or server support for pNFS.

Q. Are we able to prevent it from falling back to NFSv3 if we want to ensure file lock management?

A. An NFSv4 mount (mount -t nfs4) won't fall back to an NFSv3 mount. See man mount for details.
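To make the version explicit, the client can pin NFSv4.x in the mount invocation or in /etc/fstab; a minimal sketch (the server name and export path are hypothetical):

```shell
# Explicit NFSv4 filesystem type: the mount fails rather than
# falling back to NFSv3 if the server cannot negotiate v4
mount -t nfs4 server.example.com:/export /mnt/data

# Equivalent pinning via mount options (NFSv4.1 here)
mount -t nfs -o vers=4.1 server.example.com:/export /mnt/data

# Or as a persistent /etc/fstab entry
# server.example.com:/export  /mnt/data  nfs4  vers=4.1,_netdev  0  0
```

Without an explicit vers= option, older mount implementations may negotiate downward, so pinning the version is the reliable way to guarantee NFSv4 lock semantics.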

Q. Can pNFS metadata servers forward clients to other metadata servers?

A. No, not currently.

Q. Can pNFS provide something similar to synchronous writes, so that data is instantly safe in at least two locations?

A. No; that kind of replication is a feature of the data servers. It’s not covered by the NFSv4.1 or pNFS specification.

Q. Does hole punching depend on the underlying file system on the server?

A. If the underlying file system on the server supports it, then hole punching will be supported. The client and server do this silently; a user of the mount isn't aware that it's happening.
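The same operation can be observed on a local Linux filesystem with the util-linux fallocate tool; a minimal sketch (the file path is hypothetical, and the filesystem must support hole punching, e.g. ext4, XFS or tmpfs):

```shell
# Create a 16 KiB file of non-zero (0xFF) bytes
tr '\0' '\377' < /dev/zero | head -c 16384 > /tmp/holetest

# Punch a 4 KiB hole at offset 4096; --punch-hole implies --keep-size,
# so the file length is unchanged, but the punched range is
# deallocated and reads back as zeros
fallocate --punch-hole --offset 4096 --length 4096 /tmp/holetest
```

What NFSv4.2 adds is a way for the client to request exactly this deallocation over the wire, so the space savings happen on the server's file system rather than only locally.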

Q. How are Ethernet Trunks formed? By the OS or by the NFS client or NFS Server or other?

A. Currently, they’re not! Although trunking is specified and is optional, there are no servers that support it.

Q. How do you think vVols could impact NFS and VMware’s use of NFS?

A. VMware has committed to supporting NFSv4.1 and there is currently support in vSphere 6. vVols adds another opportunity for clients to inform the server with IO hints; it is an area of active development.

Q. In pNFS, must the callback to the client come from the originally contacted metadata server?

A. Yes, the callback originates from the MDS.

Q. Are holes punched in block units?

A. That depends on the server.

Q. Is there any functionality like SMB continuous availability?

A. Since it’s a function of the server, and much of the server’s capabilities are unspecified in NFSv4, the answer is: it depends. It’s a question for the vendor of your server.

Q. NFS has historically not been used in large HPC cluster environments for cluster-wide storage, for performance reasons. Do you see these changes as potentially improving this situation?

A. Yes. There’s much work being done on the performance side, and the cluster parallelism that pNFS brings will have it outperform NFSv3 once clients employ more of its capabilities.

Q. Regarding Amazon’s adoption of NFSv4.0: do you have any insight into, or a guess as to, why Amazon did not select NFSv4.1, which has many more performance and scalability advantages over NFSv4.0?

A. No, none at all.

Olivia Rhye

Product Manager, SNIA


New Webcast: Block Storage in the Open Source Cloud called OpenStack

Alex McDonald

May 26, 2015


On June 3rd at 10:00 a.m. SNIA-ESF will present its next live Webcast, “Block Storage in the Open Source Cloud called OpenStack.” Storage is a major component of any cloud computing platform. OpenStack is one of the largest and most widely supported open source cloud computing platforms in the market today. The OpenStack block storage service (Cinder) provides persistent block storage resources that OpenStack Nova compute instances can consume.
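From the client side, the Cinder-to-Nova relationship looks roughly like the following sketch using the OpenStack CLI (the volume and server names are hypothetical, and a running OpenStack deployment with credentials loaded is assumed):

```shell
# Ask Cinder for a 10 GiB persistent block volume
openstack volume create --size 10 data-vol

# Attach it to a running Nova compute instance; inside the guest it
# appears as a regular block device (e.g. /dev/vdb) whose data
# persists independently of the instance's lifecycle
openstack server add volume my-instance data-vol

# Detach when finished; the volume and its data remain in Cinder
openstack server remove volume my-instance data-vol
```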

I will be moderating this Webcast, presented by a core member of the OpenStack Cinder team, Walt Boring. Join us as we dive into:

  • Relevant components of OpenStack Cinder
  • How block storage is managed by OpenStack
  • What storage protocols are currently supported
  • How it all works together with compute instances

I encourage you to register now to block your calendar. This will be a live and interactive Webcast, so please bring your questions. I look forward to “seeing” you on June 3rd.


New SNIA SSSI Webcast May 28 on Persistent Memory Advances

Marty Foltyn

May 22, 2015

Join the NVDIMM Special Interest Group for an informative SNIA BrightTALK webcast on Persistent Memory Advances: Solutions with Endurance, Performance & Non-Volatility on Thursday, May 28, 2015 at 12:00 noon Eastern/9:00 am Pacific. Register at http://www.snia.org/news_events/multimedia#webcasts

Mario Martinez of Netlist, a SNIA SSSI NVDIMM SIG member, will discuss how persistent memory solutions deliver the endurance and performance of DRAM coupled with the non-volatility of Flash. This webinar will also update you on the latest solutions for enterprise server and storage designs, and provide insights into future persistent memory advances. A specific focus will be NVDIMM solutions, with examples from the member companies of the SNIA NVDIMM Special Interest Group.


Swift, S3 or CDMI – Your Questions Answered

mac

May 13, 2015


Last week’s live SNIA Cloud Webcast “Swift, S3 or CDMI – Why Choose?” is now available on demand. Thanks to all the folks who attended the live event. We had some great questions from attendees; in case you missed it, here is a complete Q&A.

Q. How do you tag the data? Is that a manual operation?

A. The data is tagged as part of the CDMI API by supplying key-value pairs in the JSON object. Since it is an API, you can put a user interface in front of it to manually tag the data, but you can also develop software to automatically tag the data. We envision an entire ecosystem of software that would use this interface to better manage data in the future.
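As a rough illustration of what those key-value pairs look like on the wire, here is a hedged sketch of a CDMI data object PUT carrying user metadata (the server URL, container path and tag names are all hypothetical):

```shell
# Store an object with arbitrary user metadata; the "value" and
# "metadata" fields and the CDMI headers are defined by the SNIA
# CDMI specification
curl -X PUT https://cdmi.example.com/container/report.txt \
  -H "X-CDMI-Specification-Version: 1.0.2" \
  -H "Content-Type: application/cdmi-object" \
  -H "Accept: application/cdmi-object" \
  -d '{
        "value": "quarterly results",
        "metadata": {
          "department": "finance",
          "retention_class": "7-years"
        }
      }'
```

A tagging tool or automated classifier would issue requests of this shape; the server indexes the metadata so it can later be queried.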

Q. Which vendors support CDMI today?

A. We have a page that lists all the publicly announced CDMI implementations here. We also plan to start testing implementations with standardized tests to certify them as conformant; this will be a separate list.

Q. FC3 Common Services layer vs. SWIFT, S3, & CDMI – Will it fully integrate with encryption at rest vendors?

A. Amazon does offer encryption at rest, for example, but does not (yet) allow you to choose the algorithm. CDMI allows vendors to expose a list of algorithms so that users can pick the one they want.

Q. You mentioned NFS and other interfaces for compatibility, but often “native” NFS deployments can be pretty high performance. Object storage doesn’t really focus on performance, does it? How is it addressed for customers moving to the object model?

A. CDMI implementations are responsible for the performance, not the standard itself, and there is nothing in an object interface that would make it inherently slower. But if the NFS interface implementation is faster, customers can use that interface for apps with those performance needs. The compatibility means they can use whatever interface makes sense for each application type.

Q. Is it possible to query the user-metadata on a container level for listing all the data objects that have that user-metadata set?

A. Yes. Metadata query is key, and it can be scoped however you like. Data system metadata is also hierarchical and inherited, meaning that you can override the parent container settings.

Q. So would it be reasonable to say that any current object storage should be expected to implement one or more of these metadata models? What if the object store wasn’t necessarily meant to play in a cloud? Would it be at a disadvantage if its metadata model was proprietary?

A. Yes, but as an add-on that would not interfere with the existing API/access method. Eventually as CDMI becomes ubiquitous, products would be at a disadvantage if they did not add this type of interface.


New Webcast: Hierarchical Erasure Coding: Making Erasure Coding Usable

Glyn Bowden

May 11, 2015


On May 14th the SNIA-CSI (Cloud Storage Initiative) will be hosting a live Webcast, “Hierarchical Erasure Coding: Making erasure coding usable.” This technical talk, presented by Vishnu Vardhan, Sr. Manager, Object Storage at NetApp, and myself, will cover two different approaches to erasure coding – a flat erasure code across a JBOD, and a hierarchical code with an inner code and an outer code. This Webcast, part of the SNIA-CSI developer’s series, will compare the two approaches on the different parameters that impact the IT business and provide guidance on evaluating object storage solutions. You’ll learn:

  • Industry dynamics
  • Erasure coding vs. RAID – Which is better?
  • When is erasure coding a good fit?
  • Hierarchical erasure coding – the next generation
  • How hierarchical codes make growth easier
  • Key areas where hierarchical coding is better than flat erasure codes

Register now and bring your questions. Vishnu and I will look forward to answering them.

