
Scale-Out File Systems FAQ

John Kim

Mar 8, 2019

On February 28th, the SNIA Networking Storage Forum (NSF) took a look at what's happening in Scale-Out File Systems. We discussed general principles, design considerations, challenges, benchmarks and more. If you missed the live webcast, it's now available on-demand. We did not have time to answer all the questions we received at the live event, so here are answers to them all.

Q. Can scale-out file systems do erasure coding?

A. Yes. Erasure coding is a common method to improve resilience.

Q. How does one address the problem of a specific disk going down? Where does scale-out architecture provide redundancy?

A. Disk failures are typically handled by RAID software. Some scale-out file systems also keep multiple replicas of the data to mitigate the impact of disk failures.

Q. Are there use cases where a hybrid of these two styles is needed?

A. Yes. For example, in some environments the foundation layer might use dedicated storage servers to form a large storage pool (the first style), and then export LUNs or virtual disks to the compute nodes (either physical or virtual) that run the applications (the second style).

Q. Which scale-out file systems are available on Windows and Linux platforms?

A. Some scale-out file systems provide native client software across multiple platforms. Another approach is to use Samba to build SMB gateways that make the scale-out file system available to Windows computers.

Q. Is Amazon Elastic File System (EFS) on AWS a scale-out file system?

A. Please see: https://docs.aws.amazon.com/efs/latest/ug/performance.html "Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data. The distributed design of Amazon EFS avoids the bottlenecks and constraints inherent to traditional file servers."

Q. Where are the most cost-effective price/performance uses of NVMe?

A. NVMe can support very high IOPS as well as very high throughput. The best use case is to couple NVMe with high-performance storage software that does not limit what the NVMe devices can deliver.


Composable Infrastructure Q&A

Alex McDonald

Mar 5, 2019


On February 13, 2019, the SNIA Cloud Storage Technologies Initiative (CSTI) presented a live webcast, Why Composable Infrastructure? Our goal was to clearly explain the reasoning behind, and the benefits of, composable infrastructure in an educational, vendor-neutral way. We believe our speakers, Philip Kufeldt and Mike Jochimsen, did just that. Now, as promised, Philip and Mike have answered the many interesting questions we received during the live event.

Q. Are composable infrastructure solutions incompatible with virtualized or containerized environments? Will these solutions only serve bare metal environments?

A. Composable infrastructure solutions will eventually work across any environment that supports the orchestration toolsets. There are no compatibility issues between virtualization/containerization and composable infrastructure, even if they fundamentally look at allocation of resources within a defined resource differently. For example, in a virtualized environment if the need for network bandwidth or storage capacity exceeds the capability of a given resource, a "larger" resource could be composed using the orchestration tools. It would then be managed within the virtualization layer like any other resource.

Q. Typically new technology adoption is slowed due to support within commercial operating systems. Are there changes needed in the major OS environments (Linux, Windows, VMware, etc.), that will need to be released before composable infrastructure solutions will be supported?

A. The PCIe- and Ethernet-based fabrics are already well established and have great OS support; the storage and networking worlds already deploy composable infrastructure over them. However, the newer standards such as Gen-Z, OpenCAPI, and CCIX will need both hardware and software support. Arm SoCs are showing up with CCIX hardware and OpenCAPI is in the Power architecture, but this is just the early stage: switches, enclosures and components that support these standards are still in the offing. Furthermore, OS support for these standards is not yet available, and the management mechanisms and composability software are also still undefined. So we are still a good distance from the larger composable infrastructure being available.

Q. Are the data center orchestration tools currently on the market capable of building a composable infrastructure solution like you described?

A. The tools on the market are still in the early stages of providing this capability. Some are purpose built for specific environments while others encompass a wider set of environments, but lack some of the dynamic/automation capabilities. Also, there is work going on, or starting up, in standards bodies to define the APIs needed for orchestration to work in truly heterogeneous application, operating system and hardware environments.

Q. In composable environments, how does security scale with it, specifically, encryption? Encrypt everything? Or some subset of jobs that truly are only jobs requiring encryption?

A. Fabrics can be configured to be local and private, relieving the need for encrypted transfers. However, there will be new issues to contend with. For example, consider memory that was previously used in one configuration that was disassembled and then reused in another. Ensuring that memory is cleaned before reuse will be required to prevent information leakage.
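The "clean before reuse" point can be sketched as a toy allocator that zero-fills buffers before handing them to the next tenant. The `ScrubbingPool` class is invented for illustration; real composable-memory managers enforce this at the fabric and page-table level, not in application code.

```python
# Toy memory pool that scrubs (zero-fills) every buffer before handing
# it out, so data from a previous composition cannot leak to the next.

class ScrubbingPool:
    def __init__(self, buffer_size: int, count: int):
        self._free = [bytearray(buffer_size) for _ in range(count)]

    def allocate(self) -> bytearray:
        buf = self._free.pop()
        for i in range(len(buf)):   # zero the buffer before reuse
            buf[i] = 0
        return buf

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)      # contents scrubbed on next allocate

pool = ScrubbingPool(buffer_size=8, count=2)
buf = pool.allocate()
buf[:] = b"secret!!"                # first tenant writes sensitive data
pool.release(buf)

buf2 = pool.allocate()              # may be the same physical buffer
assert bytes(buf2) == b"\x00" * 8   # previous tenant's data is gone
```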

Q. For Gen-Z, Pooled Memory or Memory from different Racks, what about the Latency issues? Local memories don’t have issues with latency?

A. Although Gen-Z supports longer distance interconnects, it does not mean that only long distance configurations will be utilized. Think of it as a set of tools in a toolbox. Some memories will be close for lower latencies and others will be farther to provide for 4th or 5th level caching.

Q. Is it more about declarative mapping of the components? At this point software and hardware are decoupled, so the messaging and logic are really the requirement for orchestration.

A. The orchestration layer provides a translation between a declarative and imperative state in composable infrastructure. It is responsible for gathering the requirements from the application (declarative - "this is what I want"), then identifying the capabilities of the components on the network and logically mapping them to create a virtual infrastructure (imperative - "this is how to do it").
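That declarative-to-imperative translation can be shown with a minimal sketch. The component inventory and matching policy below are invented for illustration; real orchestrators work against standardized management models (e.g. Redfish/Swordfish-style APIs) rather than this toy matcher.

```python
# Hedged sketch: the "orchestrator" takes a declarative request
# ("this is what I want") and maps it onto an inventory of
# fabric-attached components ("this is how to do it").

inventory = [
    {"id": "nvme-0", "kind": "storage", "capacity_gb": 800},
    {"id": "nvme-1", "kind": "storage", "capacity_gb": 1600},
    {"id": "mem-0",  "kind": "memory",  "capacity_gb": 256},
]

def compose(request: dict) -> list[str]:
    """Pick the smallest component of each kind that satisfies the request."""
    chosen = []
    for kind, need in request.items():
        candidates = [c for c in inventory
                      if c["kind"] == kind and c["capacity_gb"] >= need]
        if not candidates:
            raise RuntimeError(f"cannot compose: no {kind} with {need} GB")
        best = min(candidates, key=lambda c: c["capacity_gb"])
        chosen.append(best["id"])
    return chosen

# Declarative: "I want 1 TB of storage and 128 GB of memory."
assert compose({"storage": 1000, "memory": 128}) == ["nvme-1", "mem-0"]
```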

Q. As apps start to be built from microservices, which may run across different physical nodes, I would think this further raises performance challenges on disaggregated infrastructure. How this will impact things would be an interesting next topic.

A. Agreed, although I believe microservices will actually be enhanced by composable infrastructure. Composable infrastructure in general will create smaller systems that more closely fit the needs of the service or classes of services that will run on them. Just as in a bin-packing problem, having smaller units tends to provide better utilization of the container.
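The bin-packing analogy can be made concrete with the classic first-fit-decreasing heuristic; the item sizes and bin capacities below are illustrative only.

```python
# First-fit decreasing: sort items largest-first, place each into the
# first bin with room, opening a new bin only when none fits.

def first_fit_decreasing(items: list[int], bin_size: int) -> list[list[int]]:
    bins: list[list[int]] = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= bin_size:
                b.append(item)
                break
        else:
            bins.append([item])     # no bin had room: open a new one
    return bins

# Coarse-grained services: 200 units of demand as three large items
# needs three 100-unit systems, stranding 100 units of capacity...
assert len(first_fit_decreasing([70, 70, 60], 100)) == 3

# ...while the same 200 units split into finer-grained services packs
# into two fully utilized systems.
assert len(first_fit_decreasing([50, 50, 50, 50], 100)) == 2
```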

Got more questions? Feel free to comment on this blog and we’ll answer.




Got Questions on Container Storage? We’ve Got Answers!

Alex McDonald

Feb 27, 2019

Keeping up with changes in the world of container storage is not easy. That's why the SNIA Cloud Storage Technologies Initiative invited expert Keith Hudgins of Docker for a live webcast, "What's New in Container Storage." I encourage you to watch it on-demand. It's well worth the approximately half-hour investment to get up to speed on container storage. As promised during the live event, here are answers to the questions we received:

Q. How does the new Container Storage Interface fit in here?

A. Container Storage Interface (CSI) is one of the three persistent storage interfaces for Kubernetes. It's also gaining a bit of traction for non-Kubernetes use as well: Pivotal and Mesos have announced their intention to use the API for volume support. You can learn more at the CSI main project page.

Q. Where does LXD/LXC fit into this discussion?

A. Not very well. LXC technology was used in earlier versions of Docker, prior to Docker Engine 1.10. There is some provision under LXC for both persistent volumes and overlay storage, but I'm honestly not that familiar with the pluggable APIs for that container tech. Here's a link to some docs for the LXD persistent storage interface.

Q. How do hardware-RAID created volumes play a role in Kubernetes? Do hardware RAID volumes need an out-of-tree plugin for Kubernetes persistent volumes?

A. Hardware RAID devices can provide volumes for containers running under Kubernetes. Like any installation, the method you use will depend on your requirements. You can use basic, in-tree drivers for most cases; Kubernetes has built-in support for NFS and iSCSI. Depending on your needs, you can also build a custom driver using FlexVolume or CSI.

Q. Are there plans to add Docker support for persistent memory?

A. It's a very new technology. We're interested in the applications, but it's new, and we're waiting to see how the market matures.

Q. Is FlexVolume persistent storage?

A. Yes, absolutely. FlexVolume is one of the three persistent storage APIs available for use with Kubernetes.

You can learn more about container storage in the other three container webcasts the CSTI has hosted. If you have questions, please leave comments in this blog. Happy viewing!
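As a small illustration of the in-tree NFS support mentioned above, here is a minimal PersistentVolume manifest built as a Python dict and serialized to JSON (which kubectl accepts alongside YAML). The server name and export path are placeholders.

```python
# A minimal in-tree NFS PersistentVolume manifest; no out-of-tree
# plugin is needed because the NFS driver ships with Kubernetes.
import json

pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "nfs-pv-example"},
    "spec": {
        "capacity": {"storage": "10Gi"},
        "accessModes": ["ReadWriteMany"],
        "nfs": {                              # in-tree NFS volume source
            "server": "nfs.example.internal", # placeholder server
            "path": "/exports/data",          # placeholder export path
        },
    },
}

manifest = json.dumps(pv, indent=2)
# Apply with: kubectl apply -f pv.json
assert '"kind": "PersistentVolume"' in manifest
```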




What Drives SNIA Technical Work?

khauser

Feb 25, 2019

In 2019, SNIA On Storage is partnering with the SNIA Technical Council Co-Chairs Bill Martin and Mark Carlson to chat about SNIA technical activities. In this blog, we chat with Mark on the catalysts for new SNIA work. SNIA also invites you to check out our event calendar to meet SNIA in person across the globe in the next months.

SNIA On Storage (SOS): A new year always brings new ideas, and I'm sure SNIA has some exciting activities planned for 2019. But just how are new SNIA efforts created?

Mark Carlson (MC): SNIA does not just sit back and dream up work to do, but rather relies on industry input and requirements. A great example of how SNIA work is created is the 2018 launch of the Computational Storage Technical Work Group (TWG). It all started at the Flash Memory Summit industry event with a simple Birds-of-a-Feather session on a concept called "computational storage." Interest definitely was high (the room was packed), and the individuals assembled decided to use SNIA as the vehicle for the definition and standardization of this new concept.

SOS: What made them choose SNIA?

MC: The industry views SNIA as a place where they can come to get agreement on technical issues. SNIA is committed to developing and promoting standards in the broad and complex fields of digital storage and information management to allow solutions to be more easily produced. Technical Work Groups (TWGs) are collaborative technical groups within SNIA that work on specific areas of technology development. In the case of computational storage, it was clear that the industry saw SNIA as a catalyst to get things off and running in the standards world: six months after FMS we have a new Technical Work Group with 90 individuals from 30 member companies starting their work and evangelizing computational storage at events like March's OCP Summit.

SOS: What are some of the changing storage industry trends that might create a need for new SNIA efforts?

MC: We see some exciting activity in the areas of composable infrastructure, next generation fabrics, and new data center innovations. Standard computer architectures in data centers today essentially use servers as the building blocks for assembling data-center-sized solutions. But imagine if CPUs, memory, storage, and networks were individual components on next generation fabrics, making more granular building blocks. The definition of a server would then be an on-the-fly assemblage of these components, and scaling any individual platform instance would become a matter of combining sufficient components for the task.

SOS: What role would SNIA play?

MC: Composable infrastructure will need to be managed in a different way than when servers are the building block. Rather than having a single point of management for multiple components in a box, management now needs to be component-based and handled by the components themselves. Redfish from the DMTF is already architected for this very issue of scale-out, and SNIA's Swordfish is continuing to extend this into storage architectures. SNIA is already a pioneer in storage management, and is anticipating this coming possible disruption by moving its standards to modern protocols with alliance partners like DMTF and NVM Express. DMTF Redfish and SNIA Swordfish are examples of this work, where Object Drive and other work items create a Redfish profile for NVMe drives. NVMe drives will be a storage component and a peer of the CPU, either attached to the host or outside the box. Next generation fabrics are enabling these composable infrastructures, making new data center innovation possible. But the new fabrics themselves need to be managed as well.

SOS: Where can folks learn about these activities in person?

MC: Check out the Computational Storage Birds-of-a-Feather at USENIX FAST this week. In March, SNIA will present persistent memory content, including a PM Programming Tutorial and Hackathon, at the Non-Volatile Memory Workshop at UC San Diego; Computational Storage presentations at OCP Summit; and NVM alliance activities at the NVM Annual Members Meeting and Developer Day.

SOS: Exciting news indeed! Thanks, Mark, and we'll look forward to seeing SNIA at upcoming events.



Marty Foltyn

Feb 22, 2019


It's now less than three weeks until the next SNIA Persistent Memory Hackathon and Workshop. Our next workshop will be held in conjunction with the 10th Annual Non-Volatile Memory Workshop (http://nvmw.ucsd.edu/) at the University of California, San Diego on Sunday, March 10th from 2:00pm to 5:30pm.

The Hackathon at NVMW19 provides software developers with an understanding of the different tiers and modes of persistent memory, and gives an overview of the standard software libraries that are available to access persistent memory. Attendees will have access to systems configured with persistent memory, software libraries, and sample source code. A variety of mentors will be available to provide tutorials and guide participants in the development of code. Learn more here.
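For readers who want a feel for the programming model before the event, here is a hedged sketch: an ordinary file plus mmap stands in for a real DAX-mapped persistent memory pool. With a library such as PMDK, the durability step would be pmem_persist() rather than mmap's flush(); everything else here is plain Python standard library.

```python
# Sketch of the persistent memory load/store model: map a "pool",
# store into it with ordinary memory writes, then flush for durability.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem-pool")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # size the stand-in "pool"

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"                # a store into the mapped region
    m.flush()                        # make it durable (cf. pmem_persist)
    m.close()

with open(path, "rb") as f:          # data survives the mapping
    assert f.read(5) == b"hello"
```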

In the last workshop, the feedback from the attendees pointed to a desire to work longer on code after the tutorial ended.  We will ensure that all the Hackathon attendees will have access to their environment through the length of the conference. So any participant in the Sunday session will be able to continue work until the conference completion on Tuesday afternoon.  While there won’t be an opportunity for formal follow-up, we’re planning an informal meet-up the final day of the conference. Stay tuned for details.

For those not familiar with NVMW, the program is replete with the latest in non-volatile memory research, which enables attendees to understand the practical advances in software development for persistence.  The workshop facilitates the exchange of ideas and advances collaboration and knowledge of the use of persistent memory. Registration for the conference is affordable, and grants are available for university student attendees.

For those not able to get to San Diego in March, enjoy the weather that obviously won't be anywhere near as nice where you live. Oh, sorry. For those not able to get to San Diego in March, SNIA is working on the next opportunities for a formal hackathon. But we can't do it alone. If you have a group of programmers interested in learning persistent memory development, SNIA would consider coming to you with a Host-a-Hackathon event. We can provide, or even train, mentors to get you started, and show you how to build your own cloud-based development environment. You'll get an introduction to coding, and you'll be left with some great examples to build your own applications. Contact SNIA at PMhackathon@snia.org for more details and visit our PM Programming Hackathon webpage for the latest updates.


What Are the Networking Requirements for HCI?

J Metz

Feb 20, 2019

Hyperconverged infrastructures (also known as "HCI") are designed to be easy to set up and manage. All you need to do is add networking. In practice, the "add networking" part has been more difficult than most anticipated. That's why the SNIA Networking Storage Forum (NSF) hosted a live webcast, "The Networking Requirements for Hyperconverged Infrastructure," where we covered what HCI is, storage characteristics of HCI, and important networking considerations. If you missed it, it's available on-demand. We had some interesting questions during the live webcast and, as we promised during the live presentation, here are answers from our expert presenters:

Q. An HCI configuration ought to consist of 3 or more nodes, or have I misunderstood this? In an earlier slide I saw HCI with 1 and 2 nodes.

A. You are correct that HCI typically requires 3 or more nodes with resources pooled together to ensure data is distributed through the cluster in a durable fashion. Some vendors have released 2-node versions appropriate for edge locations or SMBs, but these revert to a more traditional failover approach between the two nodes rather than a true HCI configuration.

Q. Does NVMe-oF mean running NVMe over Fibre Channel, or something else?

A. The "F" in "NVMe-oF" stands for "Fabrics." As of this writing, there are three "official" fabric transports explicitly outlined in the specification: RDMA-based (InfiniBand, RoCE, iWARP), TCP, and Fibre Channel. HCI, however, is a topology that is almost exclusively Ethernet-based, and Fibre Channel is a less likely storage networking transport for the solution. The spec for NVMe-oF using TCP was recently ratified, and may gain traction quickly given the broad deployment of TCP and comfort level with the technology in IT. You can learn more about NVMe-oF in the webinar "Under the Hood with NVMe over Fabrics" and about NVMe/TCP in the NSF webcast "What NVMe™/TCP Means to Networked Storage."

Q. In the past we have seen vendors leverage RDMA within the host but not take it to the fabric, i.e., RDMA yes, RDMA over fabric maybe not. Within HCI, do you see fabrics being required to be RDMA-aware, and if so, who do you think will ultimately decide this: the HCI vendor, the applications vendor, the customer, or someone else?

A. The premise of HCI systems is that there is an entire ecosystem "under one roof," so to speak. Vendors with HCI solutions on the market have their choice of the networking protocols that work best with their levels of virtualization and abstraction. To that end, RDMA-capable fabrics may become more common as workload demands on the network increase and IT looks for various ways to optimize traffic. Hyperconverged infrastructure, with lots of east-west traffic between nodes, can take advantage of RDMA and NVMe-oF to improve performance and alleviate certain bottlenecks in the solution. It is, however, only one component of the overall picture. The HCI solution needs to know how to take advantage of these fabrics, as do switches, etc., for an end-to-end solution, and in some cases other transport forms may be more appropriate.

Q. What is a metadata network? I had never heard that term before.

A. Metadata is the data about the data. That is, HCI systems need to know where the data is located, when it was written, and how to access it. That information about the data is called metadata. As systems grow over time, the amount of metadata in the system grows as well. In fact, it is not uncommon for the metadata quantity and traffic to exceed the data traffic. For that reason, some vendors recommend establishing a completely separate network for handling the metadata traffic that traverses the system.
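The first answer above (3 or more nodes so data can be distributed durably) can be illustrated with a toy hash-based replica placement. The scheme below is invented purely for illustration; real HCI products use their own distribution and consistency mechanisms.

```python
# Toy replica placement: each block lands on N distinct nodes, ranked
# by a per-block hash, so losing any single node leaves live copies.
import hashlib

def place(block_id: str, nodes: list[str], replicas: int = 3) -> list[str]:
    """Choose `replicas` distinct nodes for a block, spread by hash."""
    if replicas > len(nodes):
        raise ValueError("need at least as many nodes as replicas")
    ranked = sorted(nodes, key=lambda n: hashlib.sha256(
        (block_id + n).encode()).hexdigest())
    return ranked[:replicas]

nodes = ["node-a", "node-b", "node-c", "node-d"]
owners = place("block-0042", nodes)
assert len(set(owners)) == 3          # three distinct copies

# Losing any one node still leaves at least two live replicas.
for failed in nodes:
    assert len([n for n in owners if n != failed]) >= 2
```

Placement like this is itself metadata, which hints at why metadata volume grows with the cluster.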


Persistently Fun Once Again – SNIA’s 7th Persistent Memory Summit is a Wrap!

kristin.hauser

Jan 28, 2019

Leave it to Rob Peglar, SNIA Board Member and the MC of SNIA's 7th annual Persistent Memory Summit, to capture the Summit day as persistently fun, with a metric boatload of great presentations and speakers! And indeed it was a great day, with fourteen sessions presented by 23 speakers covering the breadth of where PM is in 2019: real world, application-focused, and supported by multiple operating systems. Find a great recap on the Forbes blog by Tom Coughlin of Coughlin Associates.

Attendees enjoyed live demos of persistent memory technologies from AgigA Tech, Intel, SMART Modular, the SNIA Solid State Storage Initiative, and Xilinx. Learn more about what they presented here.

And for the first time as a part of the Persistent Memory Summit, SNIA hosted a Persistent Memory Programming Hackathon sponsored by Google Cloud, where SNIA PM experts mentored software developers in live coding to understand the various tiers and modes of PM and the existing methods available to access them. Upcoming SNIA SSSI blogs on solid state storage will give details and insights into "PM hacking." Also sign up for the SNIA Matters monthly newsletter to learn more, and stay tuned for upcoming Hackathons; the next one is March 10-11 in San Diego.

Missed out on the live sessions? Not to worry: each session was videotaped and can be found on the SNIA YouTube Channel. Download the slides for each session on the PM Summit agenda at www.snia.org/pm-summit. Thanks to our presenters from Advanced Computation and Storage, Arm, Avalanche Technology, Calypso Systems, Coughlin Associates, Dell, Everspin Technologies, In-Cog Solutions, Intel, Mellanox Technologies, MemVerge, Microsoft, Objective Analysis, Sony Semiconductor Solutions Corporation, Tencent Cloud, Western Digital, and Xilinx. And thanks also to our great audience and their questions: your enthusiasm and support will keep us persistently having even more fun!

