Evolving Storage for a New Generation of AI/ML

AI/ML is not new, but innovations in ML model development have made it possible to process data at unprecedented speeds. Data scientists have used standard POSIX file systems for years, but as scale and performance requirements have grown, many face new storage challenges. Samsung has been working with customers on new ways of approaching storage issues with object storage designed for use with AI/ML. Hear how software and hardware are evolving to allow unprecedented performance and scale of storage for machine learning.

Lessons Learned (the hard way) from Building a Global, Decentralized Storage Network

Durability and performance in an S3-alternative storage platform are complex problems, and not owning the hard drives adds another level of difficulty. At Storj, we have 13,500+ independent node operators putting their unutilized hard drive space to work by joining our decentralized network. Learn how Storj developed an architecture that could meet the demands of an S3 workload while also ensuring durability. Assuming that any node operator could be malicious required a focus on encryption.

An approach for impact analysis of flash behavior on QoS in DC/Enterprise SSDs

In both the consumer and enterprise worlds, SSD performance is a primary quality constraint. SSD performance is characterized in terms of IOPS, throughput, latency, and quality of service (QoS). SSDs process millions of bytes of data with a given latency and throughput for read, write, and mixed operations, but QoS is typically not guaranteed by SSD vendors for a single user. Enterprise/server SSD storage, however, must meet a specified QoS level to ensure steady-state performance over a long period of time.
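QoS targets of this kind are usually expressed as tail-latency percentiles over a steady-state measurement window. A minimal sketch of how such percentiles might be computed from per-I/O latency samples (function and variable names are illustrative, not from any vendor tool):

```python
import math

def qos_percentiles(latencies_us, percentiles=(99.0, 99.9, 99.99)):
    """Return {percentile: latency_us} using the nearest-rank method.

    latencies_us: per-I/O completion latencies in microseconds.
    """
    if not latencies_us:
        raise ValueError("no latency samples")
    ordered = sorted(latencies_us)
    n = len(ordered)
    out = {}
    for p in percentiles:
        # nearest-rank: the smallest sample such that at least p% of
        # all samples are <= it (1-based rank, clamped to >= 1)
        rank = max(1, math.ceil(p / 100.0 * n))
        out[p] = ordered[rank - 1]
    return out

# Example: 99% of I/Os complete in 100 us, 1% stall at 5000 us.
samples = [100] * 9900 + [5000] * 100
print(qos_percentiles(samples))
```

A workload like the one above illustrates why average latency alone is not a QoS guarantee: the mean stays near 149 us while the 99.9th percentile sits at 5000 us.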

Green Computing with Computational Storage Devices

Power consumption in data center systems is currently one of the biggest concerns, and green computing is a major industry interest. Recent research found that more than 60% of a server's power is consumed by the CPU. The Samsung Smart SSD, compliant with the SNIA and NVMe computational storage standards, achieves highly energy-efficient computing by offloading computation from the CPU to the SSD. A DB SCAN acceleration engine in the Smart SSD demonstrated that it can process data internally at full speed and greatly improve energy efficiency.

Challenges and opportunities in developing a hash table optimized for persistent memory

Most programs have traditionally been divisible into either "in-memory" or "database" applications. When writing in-memory applications, all that matters for speed is the efficiency of the algorithms and data structures in the CPU/memory system during the run of the program, whereas with database applications the main determinant of speed is the number and pattern of accesses to storage (e.g., random accesses vs. sequential ones).
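The distinction can be made concrete with a toy model: if records live in fixed-size storage blocks behind a one-block cache, a sequential scan fetches each block once, while random access misses on nearly every record. A minimal sketch (illustrative only, not code from the talk; the block size and cache policy are assumptions):

```python
import random

BLOCK_SIZE = 64  # records per storage block (illustrative choice)

def block_reads(record_ids, block_size=BLOCK_SIZE):
    """Count storage-block fetches for an access sequence, assuming a
    cache that holds only the most recently fetched block."""
    reads = 0
    cached = None
    for rid in record_ids:
        block = rid // block_size
        if block != cached:  # cache miss: fetch the block
            reads += 1
            cached = block
    return reads

n = 10_000
sequential = list(range(n))
shuffled = random.sample(range(n), n)
print(block_reads(sequential))  # one fetch per block: ceil(n / 64)
print(block_reads(shuffled))    # close to one fetch per record
```

The same record set costs roughly 64x more storage accesses when visited in random order, which is why access pattern, not instruction count, dominates database-style workloads.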

The Challenges in Creating a Clustered Software Defined Fileserver from Scratch on HCI

This presentation will outline the trials and tribulations encountered over the last five years creating the first software-defined fileserver on our hyperconverged platform. We believe this architecture brings real value on a private cloud platform but poses unique challenges. Working closely with developers and performance engineers, we gleaned insights through performance experimentation and troubleshooting; those insights then drove architectural changes as we iterated on the design.

Cloud Storage Acceleration Layer (CSAL): Enabling Unprecedented Performance and Capacity Values with Optane and QLC Flash

Cloud service providers offer a range of storage services to enterprises, such as block storage and object storage. Currently, more data is being created, stored, and analyzed than ever before. Cloud service providers tend to extend storage capacity to meet the requirements of high-growth technologies, such as big data analytics, real-time databases, and high-performance computing. All these technologies demand not only high performance but also ever-expanding data volumes.

Implementation of Persistent Write Log Cache with Replication in Ceph

Ceph is an open source distributed storage solution. This presentation shows the implementation of a Persistent Write Log cache (PWL cache) on the Ceph client side to absorb burst writes and reduce backend pressure. The PWL cache is a kind of write-back cache, but it leverages a log to preserve I/O order. Unlike a traditional volatile cache, the PWL cache persists data to persistent memory (PMEM). To prevent data loss under abnormal conditions, we implement replication for the PWL cache.
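The ordering idea behind a write log cache can be sketched simply: writes are appended to a log in arrival order and later replayed to the backend in that same order, so the backend never observes reordered updates. The following is a simplified illustration under those assumptions, not Ceph's actual PWL implementation (class and method names are hypothetical):

```python
class WriteLogCache:
    """Simplified write-back log cache: writes are acknowledged after
    being appended to the log; flushing replays them in log order."""

    def __init__(self, backend):
        self.backend = backend  # dict standing in for backend storage
        self.log = []           # append-only list of (offset, data)

    def write(self, offset, data):
        # In a real PWL cache this log entry would be persisted to
        # PMEM (and replicated) before acknowledging the client.
        self.log.append((offset, data))

    def flush(self):
        # Replay in arrival order, so the backend converges to the
        # same final state the client produced.
        for offset, data in self.log:
            self.backend[offset] = data
        self.log.clear()

backend = {}
cache = WriteLogCache(backend)
cache.write(0, b"old")
cache.write(0, b"new")  # later write to the same offset
cache.flush()
print(backend[0])       # b"new": log order decides the final state
```

Because the log, not the backend, is the durability point, bursts of writes can be acknowledged at PMEM speed and drained to the backend later.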

Demystifying Edge Devices Cloud Native Storage Services for Different Data Sources

Edge is becoming the new core. More and more data will live in edge environments rather than in traditional data centers or clouds, for reasons ranging from data gravity to data sovereignty. Different data sources (block, object, streaming, etc.) require different kinds of storage architecture at the edge. Data movement and data storage are key components of edge computing, and taking the cloud operating model to the edge is gaining momentum. In this talk we will demystify the different cloud native storage services that can be used at edge nodes for different data types, and the advantages they bring.