Three Truths About Hard Drives and SSDs

Jason Feist

May 23, 2024

An examination of the claim that flash will replace hard drives in the data center.

“Hard drives will soon be a thing of the past.” “The data center of the future is all-flash.” Such predictions foretelling hard drives’ demise, perennially uttered by a few vocal proponents of flash-only technology, have not aged well. Without question, flash storage is well suited to applications that require high performance and speed. And flash revenue is growing, as is all-flash-array (AFA) revenue. But not at the expense of hard drives.

We are living in an era where the ubiquity of the cloud and the emergence of AI use cases have driven up the value of massive data sets. Hard drives, which today store by far the majority of the world’s exabytes (EB), are more indispensable to data center operators than ever. Industry analysts expect hard drives to be the primary beneficiary of continued EB growth, especially in enterprise and large cloud data centers—where the vast majority of the world’s data sets reside.

Myth: SSD pricing will soon match the pricing of hard drives.

Truth: SSD and hard drive pricing will not converge at any point in the next decade.

Hard drives hold a firm cost-per-terabyte (TB) advantage over SSDs, which positions them as the unquestionable cornerstone of data center storage infrastructure. Analysis of research by IDC, TRENDFOCUS, and Forward Insights confirms that hard drives will remain the most cost-effective option for most enterprise tasks. The price-per-TB difference between enterprise SSDs and enterprise hard drives is projected to remain at or above a 6-to-1 premium through at least 2027. This differential is particularly evident in the data center, where device acquisition cost is by far the dominant component of total cost of ownership (TCO). Taking all storage system costs into consideration—including device acquisition, power, networking, and compute—hard drive-based systems deliver a far superior TCO on a per-TB basis.

Myth: The supply of NAND can ramp to replace all hard drive capacity.

Truth: Entirely replacing hard drives with NAND would require untenable CapEx investments.

The notion that the NAND industry would or could rapidly increase its supply to replace all hard drive capacity isn’t just optimistic—such an attempt would lead to financial ruin. According to the Q4 2023 NAND Market Monitor report from industry analyst Yole Intelligence, the entire NAND industry shipped 3.1 zettabytes (ZB) from 2015 to 2023, while having to invest a staggering $208 billion in CapEx—approximately 47% of its combined revenue. In contrast, the hard drive industry addresses the vast majority—almost 90%—of large-scale data center storage needs in a highly capital-efficient manner.

To help crystallize the point, let’s do a thought experiment. There are three hard drive manufacturers in the world; take one of them, for which we have the numbers, as an example. The chart below compares the byte production efficiency of the NAND and hard drive industries, using this hard drive manufacturer as a proxy. Even with just one manufacturer represented, it is easy to see that the hard drive industry is far more efficient at delivering ZBs to the data center. Could the flash industry fully replace the entire hard drive industry’s capacity output by 2028?

The Yole Intelligence report cited above indicates that from 2025 to 2027, the NAND industry will invest about $73 billion, which is estimated to yield 963 EB of output for enterprise SSDs as well as other NAND products for tablets and phones. This translates to an investment of about $76 per TB of flash storage output. Applying that same capital cost per bit, it would require a staggering $206 billion in additional investment to support the 2.723 ZB of hard drive capacity forecast to ship in 2027. In total, that’s nearly $279 billion of investment for a total addressable market of approximately $25 billion. A 10:1 loss. This level of investment is unlikely for an industry facing uncertain returns, especially after losing money throughout 2023.
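To make this arithmetic easy to verify, here is a minimal back-of-envelope sketch in Python. The inputs are the figures quoted above (the Yole Intelligence estimates and the 2027 shipment forecast); the only additions are the decimal unit conversions between TB, EB, and ZB.

```python
# Back-of-envelope check of the NAND CapEx thought experiment.
# Inputs are the figures quoted in the article (Yole Intelligence, Q4 2023).

nand_capex = 73e9                 # ~$73B NAND industry CapEx, 2025-2027
nand_output_eb = 963              # ~963 EB of NAND output from that CapEx
capex_per_tb = nand_capex / (nand_output_eb * 1e6)    # 1 EB = 1e6 TB
print(f"CapEx per TB of flash: ${capex_per_tb:.0f}")  # ~$76

hdd_2027_tb = 2.723 * 1e9         # 2.723 ZB forecast for 2027; 1 ZB = 1e9 TB
extra_capex = capex_per_tb * hdd_2027_tb
print(f"Additional CapEx to replace hard drives: ${extra_capex / 1e9:.0f}B")  # ~$206B

total = nand_capex + extra_capex  # ~$279B in total
tam = 25e9                        # ~$25B total addressable market
# The exact quotient is ~11:1; the article rounds it to "a 10:1 loss."
print(f"${total / 1e9:.0f}B invested against a ${tam / 1e9:.0f}B market "
      f"({total / tam:.0f}:1)")
```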
Myth: Only AFAs can meet the performance requirements of modern enterprise workloads.

Truth: Enterprise storage architectures usually mix media types to optimize for the cost, capacity, and performance needs of specific workloads.

At issue here is a false dichotomy. All-flash vendors advise enterprises to “simplify” and “future-proof” by going all-in on flash for high performance. Otherwise, they posit, enterprises risk finding themselves unable to keep pace with the performance demands of modern workloads. This zero-sum logic fails because:
  1. Most modern workloads do not require the performance advantage offered by flash.
  2. Enterprises must balance capacity and cost, as well as performance.
  3. The purported simplicity of a single-tier storage architecture is a solution in search of a problem.
Let’s address these one by one.

First, most of the world’s data resides in the cloud and large data centers, where only a small percentage of the workload requires a significant percentage of the performance. This is why, according to IDC,[1] hard drives have accounted for almost 90% of the storage installed base at cloud service providers and hyperscale data centers over the last five years. In some cases, all-flash systems are not required at all, even for the highest-performance solutions: there are hybrid storage systems that perform as well as or faster than all-flash.

Second, TCO considerations are key to most data center infrastructure decisions, which forces a balance of cost, capacity, and performance. Optimal TCO is achieved by aligning the most cost-effective media—hard drive, flash, or tape—to the workload requirement. Hard drives and hybrid arrays (built from hard drives and SSDs) are a great fit for most enterprise and cloud storage and application use cases. While flash storage excels in read-intensive scenarios, its endurance diminishes with increased write activity. Manufacturers address this with error correction and overprovisioning—extra, unseen storage used to replace worn cells. However, overprovisioning greatly increases the embedded product cost, and constant power is needed to avoid data loss, posing cost challenges in data centers (see the sketch at the end of this section). Additionally, while technologies like triple-level cell (TLC) and quad-level cell (QLC) allow flash to handle data-heavy workloads as hard drives do, the economic rationale weakens for larger data sets or long-term retention. In these cases, disk drives, with their growing areal density, offer a more cost-effective solution.

Third, the claim that using an AFA is “simpler” than adopting a mix of media types in a tiered architecture is a solution in search of a problem. Many hybrid storage systems employ a well-proven and finely tuned software-defined architecture that seamlessly integrates and harnesses the strengths of diverse media types in a single unit. In scale-out private or public cloud data center architectures, file systems or software-defined storage manage the data storage workloads across data center locations and regions. AFAs and SSDs are a great fit for high-performance, read-intensive workloads. But it is a mistake to extrapolate from niche use cases or small-scale deployments to the mass market and hyperscale, where AFAs provide an unnecessarily expensive way to do what hard drives already deliver at a much lower TCO.

The data bears this out. Analysis of data from IDC and TRENDFOCUS predicts an almost 250% increase in the EB outlook for hard drives by 2028, and extrapolating further out in time, that ratio holds well into the next decade. Hard drives, indeed, are here to stay—in synergy with flash storage. And the continued use of hard drives and hybrid array solutions supports, in turn, the continued use of SAS technologies in data center infrastructure.
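As noted in the second point above, here is a minimal sketch of the overprovisioning cost effect. The raw cost per TB and the overprovisioning ratios are illustrative assumptions, not vendor figures; the structural point is that holding back spare NAND raises the cost per usable TB in direct proportion.

```python
# How overprovisioning raises the effective cost per usable TB of flash.
# The raw $/TB and the overprovisioning ratios are illustrative assumptions.

def cost_per_usable_tb(raw_cost_per_tb: float, overprovision: float) -> float:
    """Cost per usable TB when a fraction of raw NAND is held in reserve."""
    return raw_cost_per_tb / (1.0 - overprovision)

raw = 40.0  # assumed raw NAND cost in $/TB
for op in (0.07, 0.28):  # e.g., read-intensive vs write-intensive designs
    print(f"{op:.0%} overprovisioning: ${cost_per_usable_tb(raw, op):.2f} per usable TB")
```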

About the Author

Jason Feist is senior vice president of marketing, products and markets, at Seagate Technology.

[1] IDC, Multi-Client Study, Cloud Infrastructure Index 2023: Compute and Storage Consumption by 100 Service Providers, November 2023

30 Speakers Highlight AI, Memory, Sustainability, and More at the May 21-22 Summit!

SNIA CMS Community

May 1, 2024

SNIA Compute, Memory, and Storage Summit is where solutions, architectures, and community come together. Our 2024 Summit, taking place virtually on May 21-22, 2024, is the best example to date, featuring a stellar lineup of 30 speakers in sessions on artificial intelligence, the future of memory, sustainability, critical storage security issues, the latest on CXL®, UCIe™, and Ultra Ethernet, and more.

“We’re excited to welcome executives, architects, developers, implementers, and users to our 12th annual Summit,” said David McIntyre, Compute, Memory, and Storage Summit Chair and member of the SNIA Board of Directors. “Our event features technology leaders from companies like Dell, IBM, Intel, Meta, Samsung, and many more to bring us the latest developments in AI, compute, memory, storage, and security in our free online event. We hope you will attend live to ask questions of our experts as they present, and watch those you miss on-demand.”

Artificial intelligence sessions sponsored by the SNIA Data, Networking & Storage Forum feature J Michel Metz of the Ultra Ethernet Consortium (UEC) on powering AI’s future with the UEC, John Cardente of Dell on storage requirements for AI, Jeff White of Dell on edgenuity, and Garima Desai of Samsung on creating a sustainable semiconductor industry for the AI era. Other AI sessions include Manoj Wadekar of Meta on the evolution of hyperscale data centers from CPU-centric to GPU-accelerated AI, Paul McLeod of Supermicro on storage architecture optimized for AI, and Prasad Venkatachar of Pliops on generative AI data architecture.

Memory sessions begin with Jim Handy and Tom Coughlin on how memories are driving big architectural changes. Ahmed Medhioub of Astera Labs will discuss breaking through the memory wall with CXL, and Sudhir Balasubramanian and Arvind Jagannath of VMware will share their memory vision for real-world applications.

Compute sessions include Andy Walls of IBM on computational storage and real-time ransomware detection, JB Baker of ScaleFlux on computational storage real-world deployments, Dominic Manno of Los Alamos National Labs on streamlining scientific workflows with computational storage, and Bill Martin and Jason Molgaard of the SNIA Computational Storage Technical Work Group on computational storage standards.

CXL and UCIe will be featured in a CXL Consortium panel on increasing AI and HPC application performance with CXL fabrics and a session from Samsung and Broadcom on bringing unique customer value with CXL accelerator-based memory solutions. Richelle Ahlvers and Brian Rea of the UCI Express will discuss enabling an open chiplet system with UCIe.

The Summit will also dive into security with a number of presentations on this important topic. And there is much more, including a memory Birds-of-a-Feather session, a live Memory Workshop and Hackathon featuring CXL exercises, and opportunities to chat with our experts! Check out the agenda and register for free!


Power Efficiency Measurement – Our Experts Make It Clear – Part 4

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization.

Join us for the final installment of our journey to better power efficiency, Part 4: Impact of Storage Architectures on Power Efficiency Measurement. And if you missed our earlier segments, click on the titles to read them: Part 1: Key Issues in Power Efficiency Measurement, Part 2: Impact of Workloads on Power Efficiency Measurement, and Part 3: Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives. Bookmark this blog series and explore the topic further in the SNIA Green Storage Knowledge Center.

Impact of Storage Architectures on Power Efficiency Measurement

Ultimately, the interplay between hardware and software storage architectures can have a substantial impact on power consumption. Optimizing these architectures based on workload characteristics and performance requirements can lead to better power efficiency and overall system performance. Different hardware and software storage architectures can lead to varying levels of power efficiency. Here’s how they impact power consumption.

Hardware Storage Architectures
  1. HDDs vs SSDs: Solid State Drives (SSDs) are generally more power-efficient than Hard Disk Drives (HDDs) due to their lack of moving parts and faster access times. SSDs consume less power during both idle and active states.
  2. NVMe® vs SATA SSDs: NVMe (Non-Volatile Memory Express) SSDs often have better power efficiency compared to SATA SSDs. NVMe’s direct connection to the PCIe bus allows for faster data transfers, reducing the time components need to be active and consuming power. NVMe SSDs are also performance-optimized for different power states.
  3. Tiered Storage: Systems that incorporate tiered storage with a combination of SSDs and HDDs optimize power consumption by placing frequently accessed data on SSDs for quicker retrieval and minimizing the power-hungry spinning of HDDs.
  4. RAID Configurations: Redundant Array of Independent Disks (RAID) setups can affect power efficiency. RAID levels like 0 (striping) and 1 (mirroring) may have different power profiles due to how data is distributed and mirrored across drives.
Software Storage Architectures
  1. Compression and Deduplication: Storage systems using compression and deduplication techniques can affect power consumption. Compressing data before storage can reduce the amount of data that needs to be read and written, potentially saving power.
  2. Caching: Caching mechanisms store frequently accessed data in faster storage layers, such as SSDs. This reduces the need to access power-hungry HDDs or higher-latency storage devices, contributing to better power efficiency.
  3. Data Tiering: Similar to caching, data tiering involves moving data between different storage tiers based on access patterns. Hot data (frequently accessed) is placed on more power-efficient storage layers. (A simple power model of tiering appears after this list.)
  4. Virtualization: Virtualized environments can lead to resource contention and inefficiencies that impact power consumption. Proper resource allocation and management are crucial to optimizing power efficiency.
  5. Load Balancing: In storage clusters, load balancing ensures even distribution of data and workloads. Efficient load balancing prevents overutilization of certain components, helping to distribute power consumption evenly.
  6. Thin Provisioning: Allocating storage on demand rather than pre-allocating can lead to more efficient use of storage resources, which indirectly affects power efficiency.
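To show how these architectural choices roll up to a system-level number, here is a toy power model for a two-tier system, as referenced in the data tiering item above. Every device figure is an assumption invented for illustration (capacities, wattages, duty cycle); the structural point is that HDDs draw power even when idle, while SSD power scales mostly with activity.

```python
# Toy model: average power of a two-tier (SSD + HDD) storage system.
# All device figures are illustrative assumptions, not measurements.

HDD_TB, HDD_W = 20, 8.0                          # per-HDD capacity, average draw
SSD_TB, SSD_ACTIVE_W, SSD_IDLE_W = 15.36, 12.0, 2.0

def system_watts(total_tb: float, hot_fraction: float, ssd_duty: float) -> float:
    """Average power when `hot_fraction` of capacity lives on the SSD tier.

    ssd_duty is the fraction of time the SSDs spend active rather than idle.
    HDDs are modeled at constant draw: the platters spin even when idle.
    """
    ssd_count = total_tb * hot_fraction / SSD_TB
    hdd_count = total_tb * (1 - hot_fraction) / HDD_TB
    ssd_w = ssd_count * (ssd_duty * SSD_ACTIVE_W + (1 - ssd_duty) * SSD_IDLE_W)
    return ssd_w + hdd_count * HDD_W

for hot in (0.0, 0.1, 0.2):
    print(f"{hot:4.0%} hot data on SSD: {system_watts(1000, hot, 0.3):6.1f} W")
```

Swapping in measured device numbers, and adding controllers, fans, and networking, is exactly the kind of workload-dependent accounting this series describes.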



2024 Year of the Summit Kicks Off – Meet us at MemCon

SNIA CMS Community

Mar 6, 2024

2023 was a great year for SNIA CMSI to meet with IT professionals and end users in “Summits” to discuss technologies, innovations, challenges, and solutions. Our outreach at six industry events reached over 16,000 people, and we thank all who engaged with our CMSI members.

We are excited to continue a second “Year of the Summit” with a variety of opportunities to network and converse with you. Our first networking event will take place March 26-27, 2024 at MemCon in Mountain View, CA. MemCon 2024 focuses on systems design for the data-centric era: working with data-intensive workloads, integrating emerging technologies, and overcoming data movement and management challenges. It’s the perfect event to discuss SNIA’s focus on developing global standards and delivering education on all technologies related to data. SNIA and MemCon have prepared a video highlighting several of the key topics to be discussed.
MemCon 2024 Video Preview
At MemCon, SNIA CMSI member and SDXI Technical Work Group Chair Shyam Iyer of Dell will moderate a panel discussion on “How are Memory Innovations Impacting the Total Cost of Ownership in Scaling-Up and Power Consumption,” discussing impacts on hyperscalers, AI/ML compute, and cost/power. SNIA Board member David McIntyre will participate in a panel on “How are Increased Adoption of CXL, HBM, and Memory Protocol Expected to Change the Way Memory and Storage is Used and Assembled?,” with insights on the markets and emerging memory innovations. The full MemCon agenda is here.

In the exhibit area, SNIA leaders will be on hand to demonstrate updates to the SNIA Persistent Memory Programming Workshop featuring new CXL® memory modules (get an early look at our programming exercises here) and to provide a first look at a Smart Data Accelerator Interface (SDXI) specification implementation. We’ll also provide updates on SNIA technical work on form factors like those used for CXL. We will feature a drawing for gift cards at the SNIA-hosted coffee receptions and at the Tuesday evening networking reception.

SNIA colleagues and friends can register for MemCon with a 15% discount using code SNIA15.

And stay tuned for engaging with SNIA at upcoming events in 2024, including a return of the SNIA Compute, Memory, and Storage Summit in May 2024; FMS (the Future of Memory and Storage) in August 2024; SNIA SDC in September; and SC24 in Atlanta in November 2024. We’ll discuss each of these in depth in our Year of the Summit blog series.


Power Efficiency Measurement – Our Experts Make It Clear – Part 3

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization.

Join us on our journey to better power efficiency as we continue with Part 3: Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives. And if you missed our earlier segments, click on the titles to read them: Part 1: Key Issues in Power Efficiency Measurement and Part 2: Impact of Workloads on Power Efficiency Measurement. Bookmark this blog and check back in April for the final installment of our four-part series. And explore the topic further in the SNIA Green Storage Knowledge Center.

Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives

There are significant differences in power efficiency between Hard Disk Drives (HDDs) and Solid State Drives (SSDs). While some commentators have examined differences in power efficiency measurement for HDDs vs SSDs, much of the analysis has not accounted for the key contributing factors outlined in this blog.

As a simple generalization at the individual storage device level, HDDs show higher power consumption than SSDs. In addition, SSDs have higher performance (IOPS and MB/s), often by an order of magnitude or more. Hence, a cursory device-level power efficiency measurement, expressed as IOPS/W or MB/s/W, will typically favor the faster SSD with its lower device power consumption. On the other hand, depending on the workload and IO transfer size, HDD devices and systems may exhibit better IOPS/W and MB/s/W when measured with large block sequential read/write workloads, where head actuators can reside on the disk OD (outer diameter) with limited seek accesses.

These traditional HDD and SSD power efficiency considerations can be described at the device level by the following key points:

HDDs (Hard Disk Drives):
  1. Mechanical Components: HDDs consist of spinning disks and mechanical read/write heads. These moving parts consume a substantial amount of power, especially during startup and when seeking data.
  2. Idle Power Consumption: Even when not actively reading or writing data, HDDs still consume a notable amount of power to keep the disks spinning and ready to access data.
  3. Access Time Impact: The mechanical nature of HDDs leads to longer access times compared to SSDs. This means the drive remains active for longer periods during data access, contributing to higher power consumption.
SSDs (Solid State Drives):
  1. No Moving Parts: SSDs are entirely electronic and have no moving parts. As a result, they consume less power during both idle and active states compared to HDDs.
  2. Faster Access Times: SSDs have much faster access times since there are no mechanical delays. This results in quicker data retrieval and reduced active time, contributing to lower power consumption.
  3. Energy Efficiency: SSDs are generally more energy-efficient, as they consume less power during read and write operations. This is especially noticeable in laptops and portable devices, where battery life is critical.
  4. Less Heat Generation: Due to their lack of moving parts, SSDs generate less heat during operation, which can lead to better thermal efficiency in systems.
In summary, SSDs tend to be more power-efficient than HDDs due to their lack of mechanical components, faster access times, and lower energy consumption during both active and idle states. This power efficiency advantage is one of the reasons why SSDs have become increasingly popular in various computing devices, from laptops to data centers.
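To make the IOPS/W and MB/s/W arithmetic above concrete, here is a small sketch. The device figures are hypothetical, invented for illustration rather than taken from benchmarks; what the output shows is how strongly the efficiency ranking depends on the workload chosen for the measurement.

```python
# Power efficiency metrics from the post: IOPS/W and MB/s/W.
# Device figures are hypothetical, chosen only to illustrate the arithmetic.

devices = [
    # (device, workload, IOPS, MB/s, average watts)
    ("SSD", "4K random read",       800_000, 3200.0, 12.0),
    ("HDD", "4K random read",           400,    1.6,  8.0),
    ("HDD", "large-block seq (OD)",   2_000,  260.0,  7.0),
]

for device, workload, iops, mbps, watts in devices:
    print(f"{device}  {workload:21}  "
          f"{iops / watts:9,.0f} IOPS/W  {mbps / watts:6.1f} MB/s/W")

# With these numbers the SSD's IOPS/W advantage is over 1000x on random IO
# but shrinks to single digits on sequential transfers at the outer diameter;
# with other device- or system-level figures the sequential comparison can
# tip either way, as the post notes.
```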



So Just What Is An SSD?

Jonmichael Hands

Jul 19, 2023

It seems like an easy enough question, “What is an SSD?” Surprisingly, though, most search results for it quickly become confused about media, controllers, form factors, storage interfaces, performance, reliability, and different market segments. The SNIA SSD SIG has spent time demystifying various SSD topics like endurance, form factors, and the different classifications of SSDs, from consumer to enterprise and hyperscale SSDs.

“Solid state drive is a general term that covers many market segments, and the SNIA SSD SIG has developed a new overview of ‘What is an SSD?’,” said Jonmichael Hands, SNIA SSD Special Interest Group (SIG) Co-Chair. “We are committed to helping make storage technology topics, like endurance and form factors, much easier to understand, coming straight from the industry experts defining the specifications.”

The “What is an SSD?” page offers a concise description of what SSDs do, how they perform, and how they connect, and it provides a jumping-off point for more in-depth clarification of the many aspects of SSDs. It joins an ever-growing category of 20 one-page “What Is?” answers that provide a clear, concise, vendor-neutral definition of often-asked technology terms, a description of what they are, and how each of these technologies works. Check out all the “What Is?” entries at https://www.snia.org/education/what-is

And don’t miss other topics of interest from the SNIA SSD SIG, including the Total Cost of Ownership Model for Storage and SSD videos and presentations in the SNIA Educational Library. Your comments and feedback on this page are welcomed. Send them to askcmsi@snia.org.
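One of the topics the SIG demystifies is endurance, which is usually quoted either as TBW (terabytes written) or DWPD (drive writes per day over the warranty period). The two ratings are related by a simple conversion, sketched below; the drive capacities and ratings in the example are hypothetical, not any product’s specification.

```python
# Convert between the two common SSD endurance ratings.
# TBW = DWPD * capacity (TB) * 365 * warranty years.
# Example parameters are hypothetical, not any specific product's rating.

def dwpd_to_tbw(dwpd: float, capacity_tb: float, years: float = 5) -> float:
    """Total terabytes written implied by a DWPD rating."""
    return dwpd * capacity_tb * 365 * years

def tbw_to_dwpd(tbw: float, capacity_tb: float, years: float = 5) -> float:
    """DWPD implied by a TBW rating."""
    return tbw / (capacity_tb * 365 * years)

# A hypothetical 7.68 TB read-intensive drive rated at 1 DWPD:
print(f"{dwpd_to_tbw(1, 7.68):,.0f} TBW")       # ~14,016 TBW
# A hypothetical 3.84 TB drive rated for 7,000 TBW:
print(f"{tbw_to_dwpd(7000, 3.84):.1f} DWPD")    # ~1.0 DWPD
```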


It’s A Wrap – But Networking and Education Continue From Our C+M+S Summit!

SNIA CMSI

May 1, 2023

Our 2023 SNIA Compute+Memory+Storage Summit was a success! The event featured 50 speakers in 40 sessions over two days. Over 25 SNIA member companies and alliance partners participated in creating content on computational storage, CXL™ memory, storage, security, and UCIe™. All presentations and videos are free to view at www.snia.org/cms-summit.

“For 2023, the Summit scope expanded to examine how the latest advances within and across compute, memory and storage technologies should be optimized and configured to meet the requirements of end customer applications and the developers that create them,” said David McIntyre, Co-Chair of the Summit. “We invited our SNIA Alliance Partners Compute Express Link™ and Universal Chiplet Interconnect Express™ to contribute to a holistic view of application requirements and the infrastructure resources that are required to support them,” McIntyre continued. “Their panel on the CXL device ecosystem and usage models and presentation on UCIe innovations at the package level, along with three other sessions on CXL, added great value to the event.”

Thirteen computational storage presentations covered what is happening in NVMe™ and SNIA to support computational storage devices and define new interfaces with computational storage APIs that work across different hardware architectures. New applications for high-performance data analytics, discussions of how to integrate computational storage into high performance computing designs, and new approaches to integrate compute, data, and I/O acceleration closely with storage systems and data nodes were only a few of the topics covered.

“The rules by which the memory game is played are changing rapidly and we received great feedback on our nine presentations in this area,” said Willie Nelson, Co-Chair of the Summit. “SNIA colleagues Jim Handy and Tom Coughlin always bring surprising conclusions and opportunities for SNIA members to keep abreast of new memory technologies, and their outlook was complemented by updates on SNIA standards on memory-to-memory data movement and on JEDEC memory standards; presentations on thinking memory, fabric-attached memory, and optimizing memory systems using simulations; a panel examining where the industry is going with persistent memory; and much more.”

Additional highlights included an EDSFF panel covering the latest SNIA specifications that support these form factors, sharing an overview of platforms that are EDSFF-enabled, and discussing the future for new product and application introductions; a discussion on NVMe as a cloud interface; and a session on computational storage detecting ransomware.

New to the 2023 Summit, and continuing to get great views, was a “mini track” on Security, led by Eric Hibbard, chair of the SNIA Storage Security Technical Work Group, with contributions from IEEE Security Work Group members, including presentations on cybersecurity, fine-grain encryption, storage sanitization, and zero trust architecture.

Co-Chairs McIntyre and Nelson encourage everyone to check out the video playlist and send your feedback to askcmsi@snia.org. The “Year of the Summit” continues with networking opportunities at the upcoming SmartNIC Summit (June), Flash Memory Summit (August), and SNIA Storage Developer Conference (September). Details on all these events and more are on the SNIA Event Calendar page. See you soon!


