How to Connect a SAS System

The new Serial Attached SCSI (SAS) 4.1 (INCITS 567) technology is being deployed across practically every industry, including hyperscale data centers, banking, education, government, healthcare, and manufacturing. It maintains backwards compatibility with previous-generation SAS implementations, which means that older drives remain compatible with newer storage controller and SAS expander products.
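One way to picture that backwards compatibility is link-rate negotiation: a link settles on the fastest rate both ends support. The Python sketch below is only an illustration of that idea; the rate table and function names are made up for this example, not taken from the SAS specification.

# Illustrative sketch: a SAS link negotiates the highest rate both
# ends support. Rates in Gb/s; device names are hypothetical.
SUPPORTED_RATES = {
    "sas-2-drive": [3.0, 6.0],
    "sas-3-drive": [3.0, 6.0, 12.0],
    "sas-4-expander": [3.0, 6.0, 12.0, 22.5],
}

def negotiate_link_rate(end_a, end_b):
    """Return the fastest rate common to both ends, or None."""
    common = set(SUPPORTED_RATES[end_a]) & set(SUPPORTED_RATES[end_b])
    return max(common) if common else None

# An older SAS-2 drive still links up behind a SAS-4 expander, at 6 Gb/s:
print(negotiate_link_rate("sas-2-drive", "sas-4-expander"))  # 6.0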

Facts, figures and insights from 250,000 hard drives

For the last eight years Backblaze has collected daily operational data from the drives in our data centers: daily SMART statistics from over 250,000 hard drives and SSDs, totaling nearly two exabytes of storage and over 200 million data points. We'll use this data to examine the following:
- the lifetime failure statistics for all the hard drives we have ever used;
- how temperature affects the failure rate of hard drives;
- a comparison of the failure rates of helium-filled versus air-filled hard drives.
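For context, the standard way to turn daily drive records into a failure statistic is the annualized failure rate: AFR = failures / drive-days x 365 x 100%. A minimal Python sketch of that arithmetic (the inputs are assumed aggregates, not Backblaze's actual schema):

# Minimal sketch of the standard reliability metric derived from
# daily drive records; inputs are assumed aggregates.
def annualized_failure_rate(drive_days, failures):
    """AFR (%) = failures / drive-days * 365 * 100."""
    return 100.0 * failures * 365.0 / drive_days

# Example: 2 failures over 10,000 drive-days -> AFR of 7.3%
print(annualized_failure_rate(10_000, 2))  # 7.3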

Hardening OpenZFS to Further Mitigate Ransomware

While the Open Source cross-platform OpenZFS file system and volume manager provides advanced block-level features, including checksumming, snapshotting, and replication, that mitigate ransomware at the POSIX level, the evolving nature of ransomware dictates that no technology rest on its laurels. This developer- and administrator-focused talk will explore strategies for hardening OpenZFS and the operating systems that support it, with the goal of storage that cannot be altered or destroyed without explicit authorization.
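One concrete building block in that direction is pairing frequent snapshots with snapshot holds, so history cannot be casually destroyed. A minimal sketch, assuming a local zfs CLI and a dataset named tank/data (both assumptions); note this alone does not constrain a fully privileged attacker, which is exactly the gap the hardening strategies above target:

import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"  # assumed dataset name

def snapshot_and_hold():
    """Take a snapshot and place a hold on it; a held snapshot
    cannot be destroyed until the hold is released (zfs release)."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    snap = f"{DATASET}@auto-{stamp}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    subprocess.run(["zfs", "hold", "ransomware-guard", snap], check=True)
    return snap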

DNA storage with ADS Codex: the Adaptive Codec for Organic Molecular Archives

The information explosion is making the storage industry look to new media to meet increasing demands. Molecular storage, and specifically synthetic DNA, is a rapidly evolving technology that provides high levels of physical data density and longevity for archival storage systems. Major challenges to its adoption are the higher error rates for synthesis and sequencing, as well as the different nature of the errors. Unlike traditional storage media, erroneous insertions and deletions are a common source of errors in DNA-based storage systems.
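To make that error model concrete: conventional codes assume substitutions at fixed positions, while an inserted or deleted base shifts everything that follows it. A toy Python sketch of a basic 2-bits-per-nucleotide mapping, showing how one deletion corrupts the rest of the read (the encoding is illustrative only, not ADS Codex itself):

# Toy 2-bits-per-base encoding; illustrative only, not ADS Codex.
BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {v: k for k, v in BASE.items()}

def encode(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # big-endian 2-bit chunks
            out.append(BASE[(byte >> shift) & 0b11])
    return "".join(out)

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand) - len(strand) % 4, 4):
        byte = 0
        for b in strand[i:i + 4]:
            byte = (byte << 2) | DECODE[b]
        out.append(byte)
    return bytes(out)

print(decode(encode(b"OK")))      # b'OK'
print(decode(encode(b"OK")[1:]))  # one deleted base shifts the frame: garbage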

Be On Time: Command Duration Limits Feature Support in Linux

Delivering high throughput and/or high I/O rates while minimizing command execution latency is a common problem for many storage applications. Achieving the low command latency needed for a responsive system is often at odds with high performance. The ATA NCQ I/O priority feature set, supported today by many SATA hard disks, provides a coarse solution to this problem: commands that require low latency can be assigned a high priority, while background disk accesses are assigned a normal priority level.
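On Linux, this mechanism is reachable from user space through I/O priority classes: with the device's ncq_prio_enable sysfs attribute set, real-time-class I/O is issued as high-priority NCQ commands. A hedged sketch via the raw ioprio_set syscall (the syscall number assumes Linux on x86_64):

import ctypes, os

# Give the calling process the real-time I/O class so that, with
# /sys/block/<dev>/device/ncq_prio_enable set, its I/O is issued as
# high-priority NCQ commands. Syscall number 251 assumes x86_64.
SYS_IOPRIO_SET = 251
IOPRIO_WHO_PROCESS = 1
IOPRIO_CLASS_RT = 1
IOPRIO_CLASS_SHIFT = 13

libc = ctypes.CDLL(None, use_errno=True)

def set_rt_io_priority(level: int = 0) -> None:
    ioprio = (IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) | level
    if libc.syscall(SYS_IOPRIO_SET, IOPRIO_WHO_PROCESS,
                    os.getpid(), ioprio) < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))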

NVMe Computational Storage Update

Learn what is happening in NVMe to support Computational Storage devices. The development is ongoing and not finalized, but this presentation will describe the direction the proposal is taking. Kim and Stephen will describe the high-level architecture being defined in NVMe for Computational Storage, which provides for programs based on standardized eBPF. We will describe how this new command set fits within the NVMe I/O Command Set architecture, and cover the commands necessary for Computational Storage.
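Because the command set was not finalized, any concrete example is speculative; the sketch below only models the architectural shape described above, with separate program-load and execute steps against a compute-capable namespace. Every name in it is hypothetical, and plain Python callables stand in for standardized eBPF bytecode:

# Purely hypothetical model of the shape described above; none of
# these names come from a ratified NVMe specification.
class ComputeNamespace:
    def __init__(self):
        self.slots = {}                     # slot id -> program

    def load_program(self, slot, program):
        self.slots[slot] = program          # cf. a 'load program' command

    def execute(self, slot, data: bytes) -> bytes:
        return self.slots[slot](data)       # cf. an 'execute' I/O command

# Example benefit: offload a filter so only matching records cross the bus.
ns = ComputeNamespace()
ns.load_program(0, lambda buf: b"\n".join(
    line for line in buf.split(b"\n") if b"ERROR" in line))
print(ns.execute(0, b"INFO ok\nERROR disk\nINFO fine"))  # b'ERROR disk'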

Scalable and Dynamic File Operations for DNA-based Data Storage

DNA-based data storage systems have the potential to offer unprecedented increases in density and longevity over conventional storage media. Starting from the assumption that advances in synthesis and sequencing technology will soon make DNA-based storage cost-competitive with conventional media, we will need ways of organizing, accessing, and manipulating the data stored in DNA to harness its full potential. There is a wide range of possible storage system designs. This talk will cover three systems that the speaker co-developed and prototyped at NC State / DNAli Data Technologies.

InfiniBand/RoCE RDMA Specification Update

The first phase of the IBTA Memory Placement Extensions (MPE), supporting low-latency RDMA access to persistent memory on InfiniBand and RoCE networks, was published in August. In this talk, the MPE will be introduced, the motivations for the additions discussed, and the performance advantages of the MPE over current techniques reviewed. In addition to these new MPE protocol enhancements in the new specification, additional operations, currently under development and planned for the next version, will also be presented.
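The performance argument is easiest to see against the workaround it replaces: today, making an RDMA write durable in remote persistent memory typically costs an extra round trip (for example, a trailing read) just to force completion, whereas an MPE-style flush expresses durability directly. A schematic Python sketch; the stub functions are hypothetical placeholders that merely record operations, not a real verbs API:

# Schematic only: these stubs record what would be posted to a queue
# pair. Names are hypothetical, not a real RDMA verbs API.
ops = []

def rdma_write(addr, data):  ops.append(("WRITE", addr, len(data)))
def rdma_read(addr, length): ops.append(("READ", addr, length))
def rdma_flush(addr, length): ops.append(("FLUSH", addr, length))

def persist_read_after_write(addr, data):
    """Pre-MPE workaround: a trailing read forces the write to
    complete, costing an extra full network round trip."""
    rdma_write(addr, data)
    rdma_read(addr, 1)

def persist_mpe_flush(addr, data):
    """MPE-style: flush asks the responder to place the data into
    the persistence domain, with no dummy read."""
    rdma_write(addr, data)
    rdma_flush(addr, len(data))

persist_mpe_flush(0x1000, b"log record")
print(ops)  # [('WRITE', 4096, 10), ('FLUSH', 4096, 10)]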

Ozone - Architecture and Performance at billions’ scale

Object stores are known for ease of use and massive scalability. Unlike other storage solutions such as file systems and block stores, object stores handle data growth without an increase in complexity or developer intervention. Apache Hadoop Ozone is a highly scalable object store that extends the design principles of HDFS while scaling 10-100x beyond it. It can store billions of keys and hundreds of petabytes of data. At that scale, it must deliver very high throughput while maintaining low latency.
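For a sense of the developer-facing simplicity, Ozone exposes an S3-compatible gateway, so a stock S3 client works unchanged. In the sketch below, the endpoint, bucket name, and credentials are illustrative assumptions:

import boto3

# Point a standard S3 client at an Ozone S3 gateway; the endpoint,
# bucket, and credentials here are assumptions for illustration.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ozone-s3g.example.com:9878",
    aws_access_key_id="testuser",
    aws_secret_access_key="secret",
)
s3.create_bucket(Bucket="demo")
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"stored in Ozone")
print(s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read())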

Apache Ozone - Balancing and Deleting Data At Scale

Apache Ozone is an object store that scales to tens of billions of objects, hundreds of petabytes of data, and thousands of datanodes. Ozone supports not only high-throughput data ingestion but also high-throughput deletion, with performance similar to HDFS. Furthermore, at massive scale, data can become non-uniformly distributed due to the addition of new datanodes, deletion of data, and so on. Non-uniform distribution leads to lower resource utilization and can affect the overall throughput of the cluster.
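The balancing problem reduces to moving data from over-utilized datanodes to under-utilized ones until every node sits within a threshold of the cluster-average utilization. A simplified Python sketch of that loop (container selection and the actual move mechanism are abstracted away; this is not Ozone's implementation):

# Simplified threshold-based balancing sketch: shift one step of
# data from the fullest to the emptiest datanode until all nodes
# are within `threshold` of the cluster-average utilization.
def balance(utilization, threshold=0.10, step=0.01):
    """utilization: dict of datanode -> fraction of capacity used."""
    avg = sum(utilization.values()) / len(utilization)
    while True:
        over = max(utilization, key=utilization.get)
        under = min(utilization, key=utilization.get)
        if (utilization[over] - avg <= threshold
                and avg - utilization[under] <= threshold):
            return utilization
        utilization[over] -= step    # stands in for moving a container
        utilization[under] += step

print(balance({"dn1": 0.90, "dn2": 0.40, "dn3": 0.50}))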
