Marty Foltyn

Feb 4, 2020

The SNIA Persistent Memory and NVDIMM Special Interest Group announced a programming challenge for NVDIMM-based systems in Q4 of 2019. Participants get free online access to persistent memory systems at the SNIA Technology Center, built on NVDIMM-Ns provided by SIG members AgigA Tech, Intel, SMART Modular, and Supermicro. The goal of the challenge is to spark developer interest in this new technology and build a clearer understanding of how persistent memory applications can be developed and applied in 2020 environments and beyond. Response to the NVDIMM Programming Challenge has been very positive. Entrants to date range from developers with no persistent memory programming experience to those who build persistent memory applications as part of their day jobs.
At the January 2020 Persistent Memory Summit, the SIG announced the first NVDIMM Programming Challenge winner: Steve Heller of Chrysalis Software Corporation. Steve submitted a closed-source project, the Three Misses Persistent Hash Table (www.threemisses.com), a key-value store that uses persistent memory to enable significantly faster start-up and shut-down. Its use of the DRAM-speed NVDIMM modules also enables faster look-up performance.

Steve’s project met the challenge criteria as reviewed by the judges, including exercising multiple aspects of NVDIMM/persistent memory capabilities and using persistence to enable new features that appeal across multiple aspects of a system beyond persistence itself. The Three Misses Persistent Hash Table also advanced the cause of persistent memory and applies to all types of NVDIMM/persistent memory systems.

Jim Fister, who directs the SNIA Hackathon Program, provided a lively summary of Steve’s winning entry during his talk Introduction to PM Hackathons at the Persistent Memory Summit. Look for the details about 9 minutes, 30 seconds into the video. You can watch all of the day’s videos on the SNIA Video Channel PM Summit playlist. Steve also provided a live demonstration of his work during the day at the Persistent Memory Summit.

SNIA congratulates Steve and reminds you that the NVDIMM Programming Challenge is still LIVE! Additional participants and submissions are welcome through March 31, 2020, and will be featured at upcoming SNIA events. Send an email to PMhackathon@snia.org to get your credentials. Read more about challenge details, and watch this space for future winners, as well as more challenge opportunities!
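The Three Misses code itself is closed-source, but the core trick a persistent hash table exploits is easy to sketch. The toy Python below is an illustration only, not Steve's design; the /mnt/pmem path, slot layout, and sizes are all assumptions. It keeps its table in a memory-mapped file, so there is no load step at start-up and no save step at shut-down; on a DAX-mounted persistent memory filesystem the mapped bytes live directly on the NVDIMMs.

```python
import hashlib
import mmap
import os
import struct

# Toy sketch only -- NOT the Three Misses implementation. A hash table living
# in a memory-mapped file needs no load or save pass, so start-up and
# shut-down are near-instant; on a DAX-mounted persistent memory filesystem
# (assumed mount point below) loads and stores go straight to the NVDIMMs.
PATH = "/mnt/pmem/toy_table"        # hypothetical DAX mount point
SLOTS = 1024
SLOT_FMT = "16s48s"                 # fixed-size key and value per slot
SLOT_SIZE = struct.calcsize(SLOT_FMT)

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SLOTS * SLOT_SIZE)
table = mmap.mmap(fd, SLOTS * SLOT_SIZE)

def _slot(key: bytes) -> int:
    # Deterministic hash so the same key maps to the same slot across runs.
    return int.from_bytes(hashlib.sha1(key).digest()[:8], "little") % SLOTS

def put(key: bytes, value: bytes) -> None:
    off = _slot(key) * SLOT_SIZE    # collision handling elided for brevity
    table[off:off + SLOT_SIZE] = struct.pack(SLOT_FMT, key, value)
    table.flush()                   # ask the OS to make the update durable

def get(key: bytes):
    k, v = struct.unpack_from(SLOT_FMT, table, _slot(key) * SLOT_SIZE)
    return v.rstrip(b"\x00") if k.rstrip(b"\x00") == key else None

put(b"greeting", b"hello, persistent world")
print(get(b"greeting"))             # survives process restarts
```

A real design would add collision handling and crash-consistent updates (for example, carefully ordered flushes), which is where much of the engineering effort in projects like this goes.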


Pay Attention to These Cloud Standards

Alex McDonald

Jan 16, 2020


What’s going on in the world of cloud standards? Since the initial publication of the National Institute of Standards and Technology (NIST) definition of cloud computing in NIST SP 800-145 in 2011, international standards development organizations (SDOs) have sought to refine and expand the cloud computing landscape. On February 13, 2020, at our next live SNIA Cloud Storage Technologies Initiative webcast, “Cloud Standards: What They Are, Why You Should Care,” Eric Hibbard, standards expert and ISO editor, will dive into the cloud standards worth noting, discussing:

  • Key published and draft cloud standards
  • Interdependencies of cloud standards and their importance
  • Potential future work
  • Related technologies: virtualization, federation and fog/edge computing

Lastly, Eric will explore the relevance of these standards to help organizations understand how the documents can be put to use.

Register today to join us on February 13, 2020, at 10:00 am PST for what is sure to be an insightful discussion on the state of cloud standards.



Tim Lustig

Jan 15, 2020

We kicked off our 2020 webcast program by diving into how the Storage Performance Development Kit (SPDK) fits in the NVMe landscape. Our SPDK experts, Jim Harris and Ben Walker, did an outstanding job presenting on this topic. In fact, their webcast, “Where Does SPDK Fit in the NVMe-oF Landscape,” received a 4.9 rating on a scale of 1-5 from the live audience. If you missed the webcast, I highly encourage you to watch it on-demand. We had some great questions from the attendees and here are answers to them all:

Q. Which CPU architectures does SPDK support?

A. SPDK supports x86, ARM and Power CPU architectures.

Q. Are there plans to extend SPDK support to additional architectures?

A. If someone has interest in using SPDK on additional architectures, they may develop the necessary SPDK patches and submit them for review. Please note that SPDK relies on the Data Plane Development Kit (DPDK) for some aspects of CPU architecture support, so DPDK patches would also be required.

Q. Will SPDK NVMe-oF support QUIC? What advantages does it have compared to RDMA and TCP transports?

A. SPDK currently has implementations for all of the transports that are part of the NVMe and related specifications – RDMA, TCP and Fibre Channel (target only). If NVMe added QUIC (a new UDP-based transport protocol for the Internet) as a new transport, SPDK would likely add support. QUIC could be a more efficient transport than TCP, since it is a reliable transport based on multiplexed connections over UDP. On that note, the SNIA Networking Storage Forum will be hosting a webcast on April 2, 2020, “QUIC – Will it Replace TCP/IP?” You can register for it here.

Q. How do I map a locally attached NVMe SSD to an NVMe-oF subsystem?

A. Use the bdev_nvme_attach_controller RPC to create SPDK block devices for the NVMe namespaces. You can then attach those block devices to an existing subsystem using the nvmf_subsystem_add_ns RPC. You can find additional details on SPDK nvmf RPCs here.

Q. How can I present a regular file as a block device over NVMe-oF?

A. Use the bdev_aio_create RPC to create an SPDK block device for the desired file. You can then attach this block device to an existing subsystem using the nvmf_subsystem_add_ns RPC. You can find additional details on SPDK nvmf RPCs here.
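To make those last two answers concrete, here is a minimal sketch of driving both RPCs from Python. Assumptions: a running SPDK NVMe-oF target listening on its default JSON-RPC Unix socket at /var/tmp/spdk.sock, an already-created subsystem whose NQN is made up here, and the parameter shapes used by SPDK releases of this era; in practice SPDK’s bundled scripts/rpc.py wraps these same calls.

```python
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"        # SPDK's default JSON-RPC Unix socket

def spdk_rpc(method: str, params: dict) -> dict:
    """Send a single JSON-RPC 2.0 request to a running SPDK target."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:                     # accumulate until the reply parses as JSON
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a complete reply")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue

NQN = "nqn.2019-07.io.spdk:cnode1"      # hypothetical, pre-existing subsystem

# Map a locally attached NVMe SSD: create bdevs from the local controller,
# then expose the resulting namespace bdev through the subsystem.
spdk_rpc("bdev_nvme_attach_controller",
         {"name": "Nvme0", "trtype": "PCIe", "traddr": "0000:01:00.0"})
spdk_rpc("nvmf_subsystem_add_ns",
         {"nqn": NQN, "namespace": {"bdev_name": "Nvme0n1"}})

# Present a regular file: wrap it in an AIO bdev, then export it the same way.
spdk_rpc("bdev_aio_create",
         {"name": "Aio0", "filename": "/var/tmp/backing_file", "block_size": 4096})
spdk_rpc("nvmf_subsystem_add_ns",
         {"nqn": NQN, "namespace": {"bdev_name": "Aio0"}})
```

SPDK’s scripts/rpc.py offers the same calls as one-line commands, which is usually the more convenient route; the raw socket version above just shows there is no magic underneath.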


Why Object Storage is Important

John Kim

Jan 3, 2020

Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day. Object storage adoption is on the rise. That’s why the SNIA Networking Storage Forum (NSF) is hosting “Object Storage: What, How and Why.” This webcast, with experts Chris Evans of Bookend LTD, Rick Vanover of Veeam, and Alex McDonald, Vice Chair of SNIA NSF and NetApp, will explain how object storage works, its benefits and why it’s important. Like other storage technologies, object storage brings its own set of unique characteristics to the market. Join us on February 19th at 10:00 am PT/1:00 pm ET to learn:
  • What object storage is and what it does
  • How to use object storage
  • Essential characteristics of typical consumption
  • Why object storage is important to data creators and consumers everywhere
Save your place now and register for this webcast on February 19th. We look forward to seeing you.


24G SAS Feature Focus: Inherent Scalability and Flexibility

STA Forum

Dec 12, 2019

[Editor’s Note: This is part of a series of blog posts discussing 24G SAS features.]

By: Jeremiah Tussey, Vice President, SCSI Trade Association; Alliances Manager, Product Marketing, Data Center Solutions, Microchip Technology, December 12, 2019

Serial Attached SCSI (SAS) and the SCSI protocol are undergoing an evolution, transitioning to 24G SAS and implementing new features and enhancements to a well-established storage technology. SAS has set the bar in the storage industry with inherent scalability and flexibility in modern data center, server and storage topologies. This tested ability carries forward to the latest update in the SAS technology roadmap: 24G SAS.

Unsurpassed scalability

SAS technology addresses a very large and continuously growing market, natively supporting many host connections to thousands of target devices in a single topology. It enables optimized capacity storage solutions and provides interconnect distances that empower SAS fabric solutions. Today, SAS serves cold cloud data centers, high-performance computing and direct server markets. SAS is effectively the persistent storage interface, with flexibility in attached media solutions.

A variety of interconnect solutions support SAS, with copper for short to medium reach and fiber optics for long reach applications. These cabling options utilize mini SAS-HD and Quad Small Form-Factor Pluggable (QSFP), as well as the recent SlimSAS™ and MiniLink definitions added in the latest SAS standard.

Additionally, native failover and dual-port features allow multiple domains to be configured that automatically take over operations in the case of component failures, broken cable connections or domain failures of any form. These inherent features were built into the protocol with high reliability, availability and scalability (RAS) in mind. Overall, they have maintained per-lane performance, data integrity, flexible architectures and a native hot-plugging ability to add or remove enterprise servers and storage for decades.
 

[Figure: Scalability in External Storage Architectures]

Flexibility for the future

What makes this support all come together is the flexibility built into SAS. SAS supports simultaneous SAS and SATA end devices, using the Serial SCSI Protocol (SSP) for SAS connections and the Serial ATA Tunneling Protocol (STP) for SATA connections. Because SAS technology can support two transport protocols simultaneously, SAS has spanned, and will continue to span, all storage types. This flexibility allows high-performance SAS SSDs with MultiLink connections for optimized throughput in demanding applications with SAS infrastructure. Traditional enterprise SAS hard disk drives (HDDs) are still effectively utilized in medium IO-intensive, cost-sensitive applications, and with the rising use of cost-optimized SAS and SATA SSD alternatives, SSDs are increasingly deployed in those applications as well. SAS has effortlessly enabled that media transition, too.

Better yet, SAS enables data centers that are filled with capacity-optimized SAS and SATA HDDs, which still maintain a nearly 10 times cost advantage over NAND flash-based solutions. New standards and technology enhancements have been introduced in HDDs to continue supporting the exponential growth in storage demands that cannot be effectively covered with SSD solutions. SMR and hybrid HDD solutions with dynamic configuration (only available as a SCSI feature) are recent introductions ideal for these capacity-demanding applications. Last but not least, SAS topologies still support backup technologies such as SAS or SATA tape products, allowing access to historical data storage and continued use of the latest backup solutions based on tape, DVD or Blu-ray backup storage arrays, even in new 24G storage deployments.

SAS Innovations in HDD and SSD Technologies
  • SSHD (Solid State Hard Drive) – hybridization of performance and capacity
  • Storage Intelligence – SSD optimization, including garbage collection
  • TDMR (Two-Dimensional Magnetic Recording) – faster writes
  • Helium – capacity optimization
  • SMR (Shingled Magnetic Recording) – capacity optimization
  • MultiLink – bandwidth
  • Multiple Actuator – IOPs per TB
  • MAMR (Microwave-Assisted Magnetic Recording) – capacity optimization
  • HAMR (Heat-Assisted Magnetic Recording) – capacity optimization
  • Hybrid SMR – Configurable SMR vs. standard capacity; helps with SKU management
The ecosystem is on track for 24G SAS production readiness for the upcoming deployment of next-generation server platforms. Analyzers and test equipment have been available for some time now, with cables and connectors in existing and new form factors ready for 24G SAS. Next-generation SAS controller and expander products are aligned with upcoming PCIe Gen4 platform launches. New 12G SAS and 6G SATA HDD/SSD capabilities that will intersect with the 24G SAS ecosystem include MultiLink SAS™ SSDs and dual-actuator HDDs for increased IOPs/bandwidth, hybrid SMR for flexible and increased capacity, and HAMR/MAMR technologies for increased capacity. SAS continues to evolve through innovation, targeting enhanced performance and features while continuing its inherent scalability and flexibility in modern data center, server and storage topologies.
[Figure: SAS – Preserving the Past, Creating the Future]



Hyperscalers Take on NVMe™ Cloud Storage Questions

J Metz

Dec 2, 2019


Our recent webcast on how hyperscalers Facebook and Microsoft are working together to merge their SSD drive requirements generated a lot of interesting questions. If you missed "How Facebook & Microsoft Leverage NVMe Cloud Storage" you can watch it on-demand. As promised at our live event, here are answers to the questions we received.

Q. How does Facebook or Microsoft see Zoned Name Spaces being used?

A. Zoned Name Spaces are how we will consume QLC NAND broadly. The ability to write to the NAND sequentially in large increments that lay out nicely on the media allows for very little write amplification in the device.
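A rough back-of-the-envelope illustration of why that matters (made-up round numbers, not Facebook or Microsoft measurements):

```python
# Illustrative write-amplification arithmetic (invented numbers, not vendor data).
# WAF = bytes physically written to NAND / bytes the host asked to write.
host_writes_gb = 100

# Random 4 KiB writes scattered across a conventional SSD force garbage
# collection to relocate still-valid pages, so the device writes extra data.
gc_relocated_gb = 150
waf_random = (host_writes_gb + gc_relocated_gb) / host_writes_gb   # 2.5

# Zoned writes fill a zone sequentially and reset it as a whole, so almost
# nothing needs relocating.
waf_zoned = (host_writes_gb + 2) / host_writes_gb                  # ~1.02

print(f"conventional WAF ~{waf_random}, zoned WAF ~{waf_zoned:.2f}")
```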

Q. How high a priority is firmware malware? Are there automated & remote management methods for detection and fixing at scale?

A. Security in the data center is one of the highest priorities. There are tools to monitor and manage the fleet including firmware checking and updating.

Q. If I understood correctly, the need for NVMe arose from the need to communicate at faster speeds with different components in the network. Currently, at which speed is NVMe going to see no more benefit with higher speed because of the latencies in individual components? Which component is most gating/concerning at this point?

A. In today's SSDs, the NAND latency dominates. This can be mitigated by adding backend channels to the controller and optimization of data placement across the media. There are applications that are direct connect to the CPU where performance scales very well with PCIe lane speeds and do not have to deal with network latencies.
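A ballpark latency budget makes the point (round, illustrative numbers, not measurements from the presenters):

```python
# Ballpark latency budget for one 4 KiB random read from an NVMe SSD
# (illustrative round numbers only).
nand_read_us = 80.0                 # TLC NAND page read, order of magnitude
controller_us = 5.0                 # assumed firmware + ECC + DMA overhead
transfer_us = 4096 / 3.9e9 * 1e6    # 4 KiB over ~3.9 GB/s usable PCIe Gen3 x4

total_us = nand_read_us + controller_us + transfer_us
# The media dominates; a faster link shaves only ~1 us off the total.
print(f"total ~{total_us:.0f} us, NAND share {nand_read_us / total_us:.0%}")
```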

Q. Where does zipline fit? Does Microsoft expect Azure to default to zipline at both ends of the Azure network?

A. Microsoft has donated the RTL for the Zipline compression ASIC to Open Compute so that multiple endpoints can take advantage of "bump in the wire" inline compression.

Q. What other protocols exist that are competing with NVMe? What are the pros and cons for these to be successful?

A. SATA and SAS are the legacy protocols that NVMe was designed to replace. These protocols still have their place in HDD deployments.

Q. Where do you see U.2 form factor for NVMe?

A. Many enterprise solutions use U.2 in their 2U offerings. Hyperscale servers are mostly focused on 1U server form factors where the compact heights of E1.S and E1.L allow for vertical placement on the front of the server.

Q. Is E1.L form factor too big (32 drives) for failure domain in a single node as a storage target?

A. E1.L allows for very high density storage. The storage application must take into account the possibility of device failure via redundancy (mirroring, erasure coding, etc.) and rapid rebuild. In the future, the ability for the SSD to slowly lose capacity over time will be required.
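As a generic illustration of that redundancy math (plain erasure-coding arithmetic with assumed shard counts and drive capacity, not any particular vendor's layout):

```python
# Generic erasure-coding arithmetic (illustrative; not a specific deployment).
# With k data shards + m parity shards spread across separate E1.L nodes,
# any m whole-node failures are survivable.
k, m = 8, 2
overhead = (k + m) / k              # raw bytes stored per user byte -> 1.25x
drives_per_node = 32
drive_tb = 15.36                    # assumed per-drive capacity
node_raw_tb = drives_per_node * drive_tb

print(f"storage overhead: {overhead:.2f}x, tolerates {m} node failures")
print(f"one node failure puts {node_raw_tb:.0f} TB into rebuild")
```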

Q. What have been the biggest pain points in using NVMe SSDs since inception/adoption, especially since Microsoft and Facebook started using them?

A. As discussed in the live Q&A, in the early days of NVMe the lack of standard drivers for both Windows and Linux hampered adoption. This has since been resolved with standard in-box driver offerings.

Q. Has FB or Microsoft considered allowing drives to lose data if they lose power on an edge server? If the server is rebuilt on a power down, this can reduce SSD costs.

A. There are certainly interesting use cases where Power Loss Protection is not needed.

Q. Do zoned namespaces make the Denali spec obsolete or dropped by Microsoft? How do they impact/compete with open channel initiatives by Facebook?

A. Zoned Name Spaces incorporates probably 75% of the Denali functionality in an NVMe standardized way.

Q. How stable are NVMe PCIe hot plug devices (unmanaged hot plug)?

A. Quite stable.

Q. How do you see Ethernet SSDs impacting cloud storage adoption?

A. Not clear yet if Ethernet is the right connection mechanism for storage disaggregation.  CXL is becoming interesting.

Q. Thoughts on E3? What problems are being solved with E3?

A. E3 is meant more for 2U servers.

Q. ZNS has a lot of QoS implications as we load up so many dies on E1.L FF. Given the challenge how does ZNS address the performance requirements from regular cloud requirements?

A. With QLC, the end to end systems need to be designed to meet the application's requirements. This is not limited to the ZNS device itself, but needs to take into account the entire system.

If you're looking for more resources on any of the topics addressed in this blog, check out the SNIA Educational Library where you'll find over 2,000 vendor-neutral presentations, white papers, videos, technical specifications, webcasts and more.  


SPDK in the NVMe-oF™ Landscape

Tim Lustig

Nov 25, 2019

The Storage Performance Development Kit (SPDK) has gained industry-wide recognition as a framework for building highly performant and efficient storage software with a focus on NVMe™. This includes software drivers and libraries for building NVMe over Fabrics (NVMe-oF) host and target solutions. On January 9, 2020, the SNIA Networking Storage Forum is going to kick off its 2020 webcast program by diving into this topic with a live webcast, “Where Does SPDK Fit in the NVMe-oF Landscape.” In this presentation, SNIA members and technical leaders from SPDK, Jim Harris and Ben Walker, will provide an overview of the SPDK project, NVMe-oF use cases that are best suited for SPDK, and insights into how SPDK achieves its storage networking performance and efficiency, discussing:
  • An overview of the SPDK project
  • Key NVMe-oF use cases for SPDK
  • Examples of NVMe-oF use cases not suited for SPDK
  • NVMe-oF target architecture and design
  • NVMe-oF host architecture and design
  • Performance data
I hope you will join us on January 9th by registering today for what is sure to be an insightful discussion about this timely topic.


SAS Lives On and Here’s Why

STA Forum

Nov 22, 2019

By: Rohit Gupta, Enterprise Segment Management, Western Digital

In the Zettabyte Age, data is the new lifeline. Data is transforming anything and everything at an unprecedented rate in the globally connected world. Businesses are constantly looking for new ways of collecting, storing, and transforming various forms of data to extract intelligence for better decisions, improve processes and efficiencies, develop technologies and innovative products, and ultimately maximize business profitability. With data creating so much economic value, it is putting pressure on IT requirements to support peak workloads, storage tiering, data mining, and running complex algorithms using on-premises or hybrid cloud environments.

Today, industry talk is mostly around NVMe™ SSDs to help meet these stringent workload demands; however, not all drives and workloads are created equal. Over the decades, enterprise OEMs, channel partners, and ecosystem players have continued to support and utilize Serial Attached SCSI (SAS) to address performance, reliability, availability, and data service challenges for traditional enterprise server and storage workloads.

SAS — A reliable, proven interface supporting high-availability storage environments

Originally debuting in 2004, SAS has evolved over the decades, delivering distinguishing and proven enterprise features such as high reliability, fault tolerance (dual-port), speed, and efficiency. As a result, it has become a desirable and popular choice for running traditional, mission-critical enterprise applications such as OLTP and OLAP, hyperconverged infrastructure (HCI) and software-defined storage (SDS) workloads.

And it still has traction. Leading industry analyst firm IDC* projects SAS SSD demand to grow at over a 24% CAGR in petabytes shipped through 2022, contributing to a combined 52% CAGR for the data center SSD market as a whole. “SAS continues to be a valued and trusted interface standard for enterprise storage customers worldwide,” said Jeff Janukowicz, vice president, IDC. “With a robust feature set, backward compatibility, and an evolving roadmap, SAS will remain a vital storage interconnect for demanding mission-critical data center workloads today and into the future.”

Capitalizing on its decades of innovation and SAS expertise, Western Digital is announcing the Ultrastar® DC SS540 SAS SSDs, its 6th generation of SAS SSDs, which provide exceptional performance by delivering up to 470K/240K random read/write IOPS. The DC SS540 offers reliability of 2.5 million hours mean time between failures (MTBF) with 96-layer 3D NAND from Western Digital’s joint venture with Intel. The new SAS SSD family is the ideal drive of choice for all-flash arrays (AFAs), caching tiers, HPC and SDS environments.

The Ultrastar DC SS540 leverages the existing SAS platform architecture, reliability, performance leadership, and various security and encryption options, which will lead to faster qualifications and time-to-market for enterprise and private cloud customers. It offers performance, enterprise-grade reliability, dual/single port, and multiple power options. It comes in capacities up to 15.36TB with soft SKU options to cover mainstream endurance choices and also reduce SKU management efforts for OEMs and channel customers. The Ultrastar DC SS540 is currently sampling and in qualification with select customers, with mass production scheduled for CYQ1 2020.

Learn more: Ultrastar SAS Series
Forward-Looking Statements

Certain blog and other posts on this website may contain forward-looking statements, including statements relating to expectations for our product portfolio, the market for our products, product development efforts, and the capacities, capabilities and applications of our products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.

Source: *IDC, Worldwide Solid State Drive Forecast, 2019–2023, Doc #US43828819, May 2019

