Moving Genomics to the Cloud

Alex McDonald

Jul 27, 2021

The study of genomics in modern biology has revolutionized the discovery of medicines, and the COVID-19 pandemic response has accelerated genetic research and driven the rapid development of vaccines. Genomics, however, requires a significant amount of compute power and data storage to make new discoveries possible. Making sure compute and storage are not a roadblock for genomics innovation will be the topic of discussion at the SNIA Cloud Storage Technologies Initiative live webcast “Moving Genomics to the Cloud: Compute and Storage Considerations.” This session will feature expert viewpoints from both bioinformatics and technology perspectives, with a focus on some of the compute and data storage challenges for genomics workflows. We will discuss:
  • How to best store and manage large genomics datasets
  • Methods for sharing large datasets for collaborative analysis
  • Legal and ethical implications of storing shareable data in the cloud
  • Transferring large data sets and the impact on storage and networking
Join us for this live event on September 9, 2021 for a fascinating discussion on an area of technology that is rapidly evolving and changing the world.
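One of the webcast topics above, transferring large datasets, lends itself to a short illustration. The sketch below uses the AWS boto3 SDK’s managed transfer configuration to upload a large genomics file (for example, a BAM or FASTQ file) to object storage with concurrent multipart uploads. The bucket name, object key, and tuning values are hypothetical, and this is only one reasonable transfer approach, not something drawn from the webcast itself.

```python
# Minimal sketch: uploading a large genomics file to S3-compatible object
# storage with multipart, concurrent transfers via boto3. Bucket, key, and
# tuning values are illustrative, not recommendations.
import boto3
from boto3.s3.transfer import TransferConfig

def upload_genomics_file(path: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MiB
        multipart_chunksize=64 * 1024 * 1024,   # 64 MiB parts
        max_concurrency=8,                      # parallel part uploads
        use_threads=True,
    )
    s3.upload_file(path, bucket, key, Config=config)

if __name__ == "__main__":
    # Hypothetical names; a real pipeline would also set encryption and tags.
    upload_genomics_file("sample_001.bam", "genomics-research-data", "cohort1/sample_001.bam")
```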


Extending Storage to the Edge

Jim Fister

Jul 19, 2021

Data gravity has pulled computing to the Edge and enabled significant advances in hybrid cloud deployments. The ability to run analytics from the datacenter to the Edge, where the data is generated and lives, also creates new use cases for nearly every industry and company. However, this movement of compute to the Edge is not the only pattern to have emerged. How might other use cases impact your storage strategy? That’s the topic of our next SNIA Cloud Storage Technologies Initiative (CSTI) live webcast on August 25, 2021, “Extending Storage to the Edge – How It Should Affect Your Storage Strategy,” where our experts, Erin Farr, Senior Technical Staff Member, IBM Storage CTO Innovation Team, and Vincent Hsu, IBM Fellow, VP & CTO for Storage, will join us for an interactive session that will cover:
  • Emerging patterns of data movement and the use cases that drive them
  • Cloud Bursting
  • Federated Learning across the Edge and Hybrid Cloud
  • Considerations for distributed cloud storage architectures to match these emerging patterns
It is sure to be a fascinating and insightful discussion. Register today. Our esteemed experts will be on hand to answer your questions.
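The federated learning topic in the list above can be illustrated with a tiny federated-averaging sketch: each edge site trains on its local data, and only model parameters, never the raw data, travel to the aggregation point. This is a generic illustration under simplifying assumptions (a linear model and synthetic data), not a description of the presenters’ approach.

```python
# Toy federated-averaging sketch: edge sites fit a local linear model and
# share only their parameters; the hub averages them weighted by sample
# count. Purely illustrative; data and model are synthetic.
import numpy as np

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares fit at one edge site; only the weights leave the site."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(weights: list, counts: list) -> np.ndarray:
    """Hub-side aggregation: sample-count-weighted average of site models."""
    total = sum(counts)
    return sum(w * (n / total) for w, n in zip(weights, counts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    site_weights, site_counts = [], []
    for n in (200, 500, 80):                      # three edge sites, uneven data
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        site_weights.append(local_fit(X, y))
        site_counts.append(n)
    print("global model:", federated_average(site_weights, site_counts))
```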



A Storage Debate Q&A: Hyperconverged vs. Disaggregated vs. Centralized

John Kim

Jul 12, 2021

The SNIA Networking Storage Forum recently hosted another webcast in our Great Storage Debate series. This time, our SNIA experts debated three competing visions of how storage should be done: Hyperconverged Infrastructure (HCI), Disaggregated Storage, and Centralized Storage. If you missed the live event, it’s available on-demand. Questions from the webcast attendees made the panel debate quite lively. As promised, here are answers to those questions.

Q. Can you imagine a realistic scenario where the different storage types are used as storage tiers? How interoperable are they?
A. Most HCI solutions already have a tiering/caching structure built in. However, a user could use HCI for hot-to-warm data and tier less frequently accessed data out to a separate backup/archive. Some HCI solutions have close partnerships with backup/archive vendors just for this purpose.

Q. Does Hyperconverged (HCI) use primarily object storage with erasure coding for managing the distributed storage, such as vSAN for VxRail (from Dell)?
A. That is accurate for vSAN, but other HCI solutions are not necessarily object based. Even if object-based, the object interface is rarely exposed. Erasure coding is a common method of distributing data across the cluster for increased durability with efficient space sharing.

Q. Is it possible for two or more classifications of storage to co-exist or be deployed together? Examples please?
A. IT organizations often have multiple types of storage deployed in their data centers, particularly over time as various legacy systems accumulate. Also, HCI solutions that support iSCSI can interface with these legacy systems to enable better sharing of data and avoid silos.

Q. How would you classify HPC deployments, given that they tend to use distributed file systems and converged storage? Do they need a new classification?
A. HPC storage is often deployed on large, distributed file systems (e.g. Lustre), which I would classify as distributed, scale-out storage, but not hyperconverged, as the compute is still on separate servers.

Q. A lot of HCI solutions already allow heterogeneous nodes within a cluster. What about these “new” disaggregated HCI solutions that use “traditional” storage arrays in the solution (thus not using a software-defined storage solution)? Doesn’t that sound like a step back? It seems most of the innovation comes from the software.
A. The solutions marketed as disaggregated HCI are not HCI. They are traditional servers and storage combined in a chassis. This would meet the definition of converged, but not hyperconverged.

Q. Why is HCI growing so quickly, and why does it seem so popular of late? It seems to be one of the fastest growing “data storage” use cases.
A. HCI has many advantages, as I shared in the slides up front. The #1 reason for the growth and popularity is the ease of deployment and management. Any IT person who is familiar with deploying and managing a VM can now easily deploy and manage the storage with the VM. No specialized storage system skillsets are required, which makes better use of limited IT staff and reduces OpEx.

Q. Where do you categorize newer deployments like Vast Data? Is that considered NAS since it presents as NFS and CIFS?
A. I would categorize Vast Data as scale-out, software-defined storage. HCI is also a type of scale-out, software-defined storage, but with compute as well, so that is the key difference.

Q. So what happens when HCI works with ANY storage, including centralized solutions? What is HCI then?
A. I believe this question is referencing the iSCSI interface support. HCI solutions that support iSCSI can interface with other types of storage systems to enable better sharing of data and avoid silos.

Q. With NVMe/RoCE becoming more available, offering DAS-like performance while massively reducing CPU usage on the hosts and potentially saving license costs (we are only in the pilot phase), does the ball swing back towards disaggregated?
A. I’m not sure I fully understand the question, but RDMA can be used to streamline the inter-node traffic across the HCI cluster. Network performance becomes more critical as the size of the cluster, and therefore the traffic between nodes, increases, and RDMA can reduce any network bottlenecks. RoCEv2 is popular, and some HCI solutions also support iWARP. Therefore, as HCI solutions adopt RDMA, this is not a driver toward disaggregated.

Q. HCI was initially targeted at SMB and had difficulty scaling beyond 16 nodes. Why would HCI be the choice for large-scale enterprise implementations?
A. HCI has proven itself capable of running a broad range of workloads in small to large data center environments at this point. Each HCI solution can scale to different numbers of nodes, but usage data shows that single clusters rarely exceed about 12 nodes, at which point users start a new cluster. There is a mix of reasons for this: concerns about the size of failure domains, departmental or remote-site deployment size requirements, but often it is the software license fees for the applications running on the HCI infrastructure that limit typical cluster sizes in practice.

Q. SPC (Storage Performance Council) benchmarks are still the gold standard (maybe?) and my understanding is they typically use an FC SAN. Is that changing? I understand that the underlying hardware is what determines performance, but I’m not aware of SPC benchmarks using anything other than SAN.
A. Myriad benchmarks are used to measure HCI performance across a cluster. I/O benchmarks that are variants of FIO are common for measuring storage performance, and compute performance is often measured using other benchmarks, such as TPC benchmarks for database performance, LoginVSI for VDI performance, etc.

Q. What is the current implementation mix ratio in the industry? What is the long-term projected mix ratio?
A. Today the enterprise is dominated by centralized storage, with HCI in second place and growing more rapidly. Large cloud service providers and hyperscalers are dominated by disaggregated storage, but they also use some centralized storage, and some have their own customized HCI implementations for specific workloads. HPC and AI customers use a mix of disaggregated and centralized storage. In the long term, it’s possible that disaggregated will have the largest overall share since cloud storage is growing the most, with centralized storage and HCI splitting the rest.

Q. Is the latency higher for HCI vs. disaggregated vs. centralized?
A. It depends on the implementation. HCI and disaggregated might have slightly higher latency than centralized storage if they distribute writes across nodes before acknowledging them or if they must retrieve reads from multiple nodes. But HCI and disaggregated storage can also be implemented in a way that offers the same latency as centralized.

Q. What about GPUDirect?
A. GPUDirect Storage allows GPUs to access storage more directly to reduce latency. Currently it is supported by some types of centralized and disaggregated storage. In the future, it might be supported with HCI as well.

Q. Splitting so many hairs here. Each of the three storage types is more about HOW the storage is consumed by the user/application versus the actual architecture.
A. Yes, that is largely correct, but the storage architecture can also affect how it’s consumed.

Q. Besides technical qualities, is there a financial differentiator between solutions? For example, OpEx and CapEx, ROI?
A. For very large-scale storage implementations, disaggregated generally has the lowest CapEx and OpEx because the higher initial cost of managing distributed storage software is amortized across many nodes and many terabytes. For medium to large implementations, centralized storage usually has the best CapEx and OpEx. For small to medium implementations, HCI usually has the lowest CapEx and OpEx because it’s easy and fast to acquire and deploy. However, it always depends on the specific type of storage and the skill set or expertise of the team managing the storage.

Q. Why wouldn’t disaggregating storage, compute, and memory be the next trend? The hyperscalers have already done it. What are we waiting for?
A. Disaggregating compute is indeed happening, supported by VMs, containers, and faster network links. However, disaggregating memory across different physical machines is more challenging because even today’s very fast network links have much higher latency than memory. For now, memory disaggregation is largely limited to being done “inside the box” or within one rack with links like PCIe, or to cases where the compute and memory stick together and are disaggregated as a unit.

Q. Storage lends itself as a first choice for disaggregation, as mentioned before. What about disaggregation of other resources (such as networking, GPU, memory) in the future, and how do you believe it will impact the selection of centralized vs. disaggregated storage? Will Ethernet stay the first choice for the disaggregation fabric?
A. See the above answer about disaggregating memory. Networking can be disaggregated within a rack by using a very low-latency fabric, for example PCIe, but usually networking is used to support disaggregation of other resources. GPUs can be disaggregated but normally still travel with some CPU and memory in the same box, though this could change in the near future. Ethernet will indeed remain the first networking choice for disaggregation, but other network types will also be used (InfiniBand, Fibre Channel, Ethernet with RDMA, etc.).

Don’t forget to check out our other great storage debates, including: File vs. Block vs. Object Storage, Fibre Channel vs. iSCSI, FCoE vs. iSCSI vs. iSER, RoCE vs. iWARP, and Centralized vs. Distributed. You can view them all on our SNIAVideo YouTube Channel.
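The benchmark answer above mentions FIO-based I/O tests. As a rough illustration, the sketch below drives a random-read fio job from Python and reads the JSON results. The target path, job parameters, and the assumption of a fio 3.x JSON layout are illustrative choices, not a recommended benchmark profile for any particular HCI product.

```python
# Minimal sketch: driving a fio random-read test from Python and parsing
# the JSON output. Assumes fio is installed and that /mnt/hci_datastore/testfile
# is a file on the storage under test (both are illustrative choices).
import json
import subprocess

def run_fio_randread(target="/mnt/hci_datastore/testfile", runtime_s=60):
    cmd = [
        "fio",
        "--name=randread",          # job name
        f"--filename={target}",     # file or block device to exercise
        "--rw=randread",            # random reads
        "--bs=4k",                  # 4 KiB blocks
        "--iodepth=32",             # queue depth per job
        "--numjobs=4",              # parallel workers
        "--ioengine=libaio",        # Linux async I/O engine
        "--direct=1",               # bypass the page cache
        "--size=10G",
        f"--runtime={runtime_s}",
        "--time_based",
        "--group_reporting",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]   # field names follow fio 3.x JSON output
    read = job["read"]
    print(f"IOPS: {read['iops']:.0f}, mean latency: {read['lat_ns']['mean'] / 1000:.1f} us")

if __name__ == "__main__":
    run_fio_randread()
```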


An Easy Path to Confidential Computing

Michael Hoard

Jul 6, 2021

To counter the ever-increasing likelihood of catastrophic disruption and cost due to enterprise IT security threats, data center decision makers need to be vigilant in protecting their organization’s data. Confidential Computing is architected to provide security for data in use to meet this critical need for enterprises today. The next webcast in our Confidential Computing series is “How to Easily Deploy Confidential Computing.” It will provide insight into how data center, cloud and edge applications may easily benefit from cost-effective, real-world Confidential Computing solutions. This educational discussion on July 28, 2021 will provide end-user examples, tips on how to assess systems before and after deployment, as well as key steps to complete along the journey to mitigate threat exposure. Presenting will be Steve Van Lare (Anjuna), Anand Kashyap (Fortanix), and Michael Hoard (Intel), who will discuss:
  • What would it take to build your own Confidential Computing solution?
  • The emergence of easily deployable, cost-effective Confidential Computing solutions
  • Real-world usage examples and key technical, business and investment insights
Hosted by the SNIA Cloud Storage Technologies Initiative (CSTI), this webinar acts as a grand finale to our three-part Confidential Computing series. Earlier we covered an introduction, “What is Confidential Computing and Why Should I Care?” and how Confidential Computing works in multi-tenant cloud environments in “Confidential Computing: Protecting Data in Use.” Please join us on July 28th for this exciting discussion.


Confidential Computing FAQ

Jim Fister

Jun 25, 2021

Recently, the SNIA Cloud Storage Technologies Initiative (CSTI) hosted a lively panel discussion, “What is Confidential Computing and Why Should I Care?” It was the first in a three-part series of Confidential Computing security discussions. You can learn about the series here. The webcast featured three experts who are working to define the Confidential Computing architecture: Mike Bursell of the Enarx Project, David Kaplan at AMD, and Ronald Perez from Intel. This session served as an introduction to the concept of Confidential Computing and examined the technology and its initial uses. The audience asked several interesting questions. We’re answering some of the more basic questions here, as well as some that did not get addressed directly during the live event.

Q. What is Confidential Computing? How does it complement existing security efforts, such as the Trusted Platform Module (TPM)?
A. Confidential Computing is an architectural approach to security that uses virtualization to create a Trusted Execution Environment (TEE). This environment can run any amount of code within it, though in practice only selected code is placed in the protected environment. This allows data to be completely protected, even from other code and data running in the system.

Q. Is Confidential Computing only for a CPU architecture?
A. The current architecture is focused on delivering this capability via the CPU, but nothing limits other system components such as a GPU, FPGA, or the like from implementing a similar architecture.

Q. It was mentioned that with Confidential Computing, one only needs to trust their own code along with the hardware. With the prevalence of microarchitectural attacks that break down various isolation mechanisms, can the hardware really be trusted?
A. Most of the implementations used to create a TEE rely on fairly well-tested hardware and security infrastructure. As such, the threat profile is fairly low. However, any implementation in the market does need to ensure that it follows proper protocol to best protect data. An example would be ensuring that data in the TEE is only used or accessed there and is not passed to non-trusted execution areas.

Q. Are there potential pitfalls in the TEE implementations that might become security issues later, similar to speculative execution? Are there potential side-channel attacks using TEE?
A. No security solution is 100% secure and there is always a risk of vulnerabilities in any product. But perfect cannot be the enemy of good, and TEEs are a great defense-in-depth tool to provide an additional layer of isolation on top of existing security controls, making data that much more secure. Additionally, the recent trend has been to consider security much earlier in the design process and perform targeted security testing to try to identify and mitigate issues as early as possible.

Q. Is this just a new technology, or is there a bigger value proposition? What’s in it for the CISO or the CIO?
A. There are a variety of answers to this. One is that running a TEE in the cloud provides protection for vital workloads that otherwise would not be able to run on a shared system. Another benefit is that key secrets can be secured while much of the rest of the code runs at a lower privilege level, which helps with costs. In terms of many security initiatives, Confidential Computing might be one that is easier to explain to the management team.

Q. Anybody have a guess at what a regulation/law might look like? A certification test analogous to FCC requirements (obviously more complex)? Other approaches?
A. This technology is a response to the need for stronger security and privacy, which includes legal compliance with regulations being passed by states like California. But this has not taken the form of certifications at this time. Individual vendors will retain the necessary functions of their virtualization products and may consider security as one of the characteristics within their certification.

To hear answers to all the questions that our esteemed panel addressed during the live event, please watch this session on-demand.
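The question above about trusting the hardware touches on a common Confidential Computing pattern: a relying party verifies a measurement of the execution environment before releasing secrets into it. The sketch below is a deliberately generic illustration of that trust decision; real deployments verify a signed hardware quote rather than a bare hash, and the expected measurement and secret shown here are hypothetical.

```python
# Simplified, generic illustration of the attestation-style trust check that
# precedes releasing a secret into a TEE. Real deployments verify a signed
# quote from the hardware vendor; here we only compare a measurement hash.
import hashlib
import hmac
from typing import Optional

# Hypothetical "golden" measurement of the code we expect to run in the TEE,
# e.g. a SHA-256 over the enclave/VM image (value is illustrative).
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-workload-image-v1").hexdigest()

def release_secret_if_trusted(reported_measurement: str, secret: bytes) -> Optional[bytes]:
    """Return the secret only if the reported measurement matches expectations."""
    # Constant-time comparison to avoid leaking information via timing.
    if hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        return secret          # in practice this would be wrapped/encrypted for the TEE
    return None                # refuse: the environment is not running the code we trust

if __name__ == "__main__":
    good = hashlib.sha256(b"trusted-workload-image-v1").hexdigest()
    bad = hashlib.sha256(b"tampered-image").hexdigest()
    print(release_secret_if_trusted(good, b"database-encryption-key"))  # released
    print(release_secret_if_trusted(bad, b"database-encryption-key"))   # None
```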


Accelerating Disaggregated Storage to Optimize Data-Intensive Workloads

SNIA CMS Community

Jun 21, 2021


Thanks to big data, artificial intelligence (AI), the Internet of Things (IoT), and 5G, demand for data storage continues to grow significantly. This rapid growth is causing storage- and database-specific processing challenges within current storage architectures. New architectures, designed for millisecond latency and high throughput, offer in-network and in-storage computational processing to offload and accelerate data-intensive workloads.

On June 29, 2021, the SNIA Compute, Memory and Storage Initiative will host a lively webcast discussion on today’s storage challenges in an aggregated storage world and whether a disaggregated storage model could optimize data-intensive workloads. We’ll talk about the concept of a Data Processing Unit (DPU) and whether a DPU should be combined with a storage data processor to accelerate compute-intensive functions. We’ll also introduce the concept of key value and how it can be an enabler to solve storage problems.

Join moderator Tim Lustig, Co-Chair of the CMSI Marketing Committee, and speakers John Kim from NVIDIA and Kfir Wolfson from Pliops as we shift into overdrive to accelerate disaggregated storage. Register now for this free webcast.
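The key-value concept mentioned above can be sketched in a few lines: instead of reading and writing fixed-size blocks at logical block addresses, an application stores variable-length values under keys and lets the device or service handle placement. The toy interface below is a generic illustration of that idea, not any particular key-value storage API.

```python
# Generic illustration of a key-value storage interface versus block-style
# access: callers address data by key, not by logical block address, and
# values can be variable length. An in-memory dict stands in for the device.
from typing import Dict, Optional

class ToyKeyValueStore:
    """Minimal put/get/delete interface in the spirit of key-value storage."""
    def __init__(self) -> None:
        self._data: Dict[bytes, bytes] = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._data[key] = value          # device/service decides physical placement

    def get(self, key: bytes) -> Optional[bytes]:
        return self._data.get(key)

    def delete(self, key: bytes) -> None:
        self._data.pop(key, None)

if __name__ == "__main__":
    kv = ToyKeyValueStore()
    kv.put(b"sensor/42/2021-06-29", b'{"temp_c": 21.5}')   # no LBA math, no fixed block size
    print(kv.get(b"sensor/42/2021-06-29"))
```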


Storage Technologies & Practices Ripe for Refresh – Part 2

Alex McDonald

Jun 7, 2021

So much of what we discuss in SNIA is the latest emerging technology in storage. While it’s good to know all about the latest and greatest technologies, it’s also important to understand the technologies being sunsetted. In this SNIA Networking Storage Forum (NSF) webcast series, “Storage Technologies & Practices Ripe for Refresh,” we cover technologies that are at, or close to, the end of their useful life. On June 22, 2021, we’ll host the second installment of this series, Storage Technologies & Practices Ripe for Refresh – Part 2, where we’ll discuss obsolete hardware, protocols, interfaces and other aspects of storage. We’ll offer advice on how to replace these older technologies in production environments as well as why these changes are recommended. We’ll also cover protocols that you should consider removing from your networks, either older versions of protocols where only newer versions should be used, or protocols that have been supplanted by superior options and should be discontinued entirely. Finally, we will look at physical networking interfaces and cabling that are popular today but face an uncertain future as networking speeds grow ever faster. Join us on June 22nd to learn if there is anything ripe for refresh in your data center. And if you missed the first webcast in this series, you can view it on demand here.


How SNIA Swordfish™ Expanded with NVMe® and NVMe-oF™

Richelle Ahlvers

Jun 2, 2021


The SNIA Swordfish™ specification and ecosystem are growing in scope to include full enablement and alignment for NVMe® and NVMe-oF™ client workloads and use cases. By partnering with other industry-standard organizations, including DMTF®, NVM Express, and the OpenFabrics Alliance (OFA), SNIA’s Scalable Storage Management Technical Work Group has updated the Swordfish bundles (version 1.2.1 and later) to cover an expanding range of NVMe and NVMe-oF functionality, including NVMe device management and storage fabric technology management and administration.

The Need
Large-scale computing designs are increasingly multi-node and linked together through high-speed networks. These networks may comprise different types of technologies and are fungible and continually morphing. Over time, many different types of high-performance networking devices will evolve to participate in these modern, coupled-computing platforms. New fabric management capabilities, orchestration, and automation will be required to deploy, secure, and optimally maintain these high-speed networks.

The NVMe and NVMe-oF specifications provide comprehensive management for NVMe devices at an individual level; however, when you want to manage these devices at a system or data center level, DMTF Redfish and SNIA Swordfish are the industry’s gold standards. Together, Redfish and Swordfish enable a comprehensive view across the system, data center, and enterprise, with NVMe and NVMe-oF instrumenting the device-level view. This complete approach provides a way to manage your entire environment across technologies with standards-based management, making it more cost-effective and easier to operate.

The Approach
The expanded NVMe resource management within SNIA Swordfish consists of a mapping between the DMTF Redfish, Swordfish, and NVMe specifications, enabling developers to construct a standard implementation within the Redfish and Swordfish service for any NVMe and NVMe-oF managed device.

The architectural approach to creating the SNIA Swordfish 1.2.1 version of the standard began with a deep dive into the existing management models of systems and servers for Redfish, storage for Swordfish, and fabrics management within NVM Express. After evaluating each approach, there was a step-by-step walkthrough to map the models. From that, we created mockups and a comprehensive mapping guide using examples of object and property level mapping between the standard ecosystems.
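As a rough illustration of what that object- and property-level mapping looks like from a client’s point of view, the sketch below walks a Redfish/Swordfish service with plain HTTP GETs and lists drives that report the NVMe protocol. The service address and credentials are placeholders, and the exact resources exposed will vary by implementation; the paths follow common Redfish conventions rather than a definitive map of the Swordfish NVMe model.

```python
# Rough sketch of a client walking a Redfish/Swordfish service to find
# NVMe drives. Host, credentials, and available resources are assumptions;
# a real service may expose storage under different collections.
import requests

BASE = "https://bmc.example.com"          # hypothetical management endpoint
AUTH = ("admin", "password")              # placeholder credentials

def get(path: str) -> dict:
    # verify=False only because many BMCs ship self-signed certificates.
    r = requests.get(BASE + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

def list_nvme_drives() -> None:
    systems = get("/redfish/v1/Systems")
    for sys_ref in systems.get("Members", []):
        storage_col = get(sys_ref["@odata.id"] + "/Storage")
        for stor_ref in storage_col.get("Members", []):
            storage = get(stor_ref["@odata.id"])
            for drive_ref in storage.get("Drives", []):
                drive = get(drive_ref["@odata.id"])
                if drive.get("Protocol") == "NVMe":
                    print(drive.get("Id"), drive.get("Model"), drive.get("CapacityBytes"))

if __name__ == "__main__":
    list_nvme_drives()
```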

In addition, Swordfish profiles were created that provide a comprehensive representation of required properties for implementations. These profiles have been incorporated into the new Swordfish Conformance Test Program (CTP), to support NVMe capabilities. Through its set of test suites, the CTP validates that a company’s products conform to a specified version of the Swordfish specification. CTP supports conformance testing against multiple versions of Swordfish.

What’s Next?
In 2021, the Swordfish specification will continue to be enhanced to fully capitalize on the fabrics model by extending fabric technology-specific use cases and creating more profiles for additional device types.

Want to learn more?
Watch SNIA’s on-demand webcast, “Universal Fabric Management for Tomorrow’s Data Centers,” where Phil Cayton, Senior Staff Software Engineer, Intel, and Richelle Ahlvers, Storage Technology Enablement Architect at Intel Corporation and member of the SNIA Board of Directors, provide insights into how standards organizations are working together to improve and promote the vendor-neutral, standards-based management of open source fabrics and remote network services or devices in high-performance data center infrastructures.
