The Challenges in Creating a Clustered Software Defined Fileserver from Scratch on HCI

Submitted by Anonymous (not verified) on

This presentation will outline the trials and tribulations encountered over the last five years creating the first software-defined fileserver on our hyperconverged platform. We believe this architecture brings real value on a private cloud platform but poses unique challenges. Working closely in partnership with developers and performance engineers, we gleaned insights from performance experimentation and troubleshooting, and those insights drove architectural changes as we iterated on the design.

Cloud Storage Acceleration Layer (CSAL): Enabling Unprecedented Performance and Capacity Values with Optane and QLC Flash

Cloud service providers offer a range of storage services to enterprises, such as block storage and object storage. More data is being created, stored, and analyzed than ever before, and cloud service providers continue to extend storage capacity to meet the requirements of high-growth technologies such as big data analytics, real-time databases, and high-performance computing. All of these technologies demand not only high performance but also ever-expanding data volumes.

Implementation of Persistent Write Log Cache with Replication in Ceph

Ceph is an open source distributed storage solution. This presentation shows the implementation of a Persistent Write Log cache (PWL cache) on the Ceph client side to absorb burst writes and reduce backend pressure. The PWL cache is a kind of write-back cache, but it leverages a log to preserve I/O ordering. Unlike a traditional volatile cache, the PWL cache persists data to persistent memory (PMEM). To prevent data loss caused by abnormal conditions, we also implement replication for the PWL cache.
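The core idea of a log-ordered write-back cache can be illustrated with a minimal sketch. This is a hypothetical toy model, not Ceph's actual PWL implementation: writes are acknowledged after being appended to an ordered log (standing in for the PMEM log), and a flush replays the log to the backend in the same order the client issued the writes.

```python
import collections

class WriteLogCache:
    """Toy sketch of a log-ordered write-back cache (illustrative only,
    not Ceph's PWL code). The log preserves I/O order so the backend
    sees writes in the sequence the client issued them."""

    def __init__(self, backend):
        self.backend = backend          # dict-like backing store
        self.log = collections.deque()  # ordered (offset, data) entries

    def write(self, offset, data):
        # Acknowledge immediately; durability would come from the log
        # living on PMEM (modeled here as an in-memory deque).
        self.log.append((offset, data))

    def flush(self):
        # Replay the log strictly in arrival order, so a later write
        # to the same offset correctly overwrites an earlier one.
        while self.log:
            offset, data = self.log.popleft()
            self.backend[offset] = data

# Usage: two writes to the same offset; log order decides the winner.
backend = {}
cache = WriteLogCache(backend)
cache.write(0, b"aaaa")
cache.write(0, b"bbbb")
cache.flush()
assert backend[0] == b"bbbb"
```

In the real design the log lives in PMEM and is additionally replicated to a second node, so a crash before flush does not lose acknowledged writes.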

Demystifying Edge Devices' Cloud Native Storage Services for Different Data Sources

Edge is becoming the new core. More and more data will live in edge environments rather than in traditional data centers or the cloud, for reasons ranging from data gravity to data sovereignty. Different data sources (block, object, streaming, etc.) require different kinds of storage architecture at the edge. Data movement and data storage are key components of edge computing, and taking the cloud operating model to the edge is gaining momentum. In this talk we will try to demystify the different cloud native storage services that can be used at edge nodes for different data types, and the advantages they bring.

Converging PCIe and TSN Ethernet for Composable Infrastructure in High-Performance In-Vehicle Embedded Systems

The costs and risks of implementing high-performance embedded systems, such as centralized car servers for autonomous vehicles, can be reduced by borrowing from modern datacenter technology. PCIe and Multi-Gigabit Ethernet have therefore become a foundation for automotive in-vehicle infrastructure. While storage needs in automotive are somewhat relaxed compared to datacenters, automotive requires "unconventional" storage connectivity, such as many sensors connecting through a few CPUs to a single SSD.

The Quest for an Autonomous Storage Fabric

While consistently delivering on bandwidth needs, many storage fabrics have struggled to keep pace with customer expectations around storage infrastructures powered by AIOps, intelligent tiering, and automated provisioning. Building an autonomous, self-driving storage fabric requires shared intelligence between endpoints, strong awareness of fabric conditions, and decisive action that automates best practices.

Next Generation Architecture for Scale-out Block Storage

We are in the midst of a major technology shift, as storage devices and networks are outpacing general-purpose compute. At the same time, storage usage models have become more diverse across requirements for performance, data protection, and security. This perfect storm creates a challenge for storage solutions: meeting application demands while efficiently utilizing hardware resources. A modern data center requires a new architecture for scale-out block storage that can provide this flexibility in a cost-effective manner.

HDD Computational Storage Benchmarking

This presentation examines a computational storage use case within the Human Cell Atlas genomics research project and shows why the deployed hardware CS engine proved insufficient. The presentation traces the journey from standard system benchmarking to micro-benchmarking, specifically instructions-per-cycle (IPC) analysis. It also details the programming techniques used along the way, including SIMD intrinsics and inline assembler.
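The step from whole-system benchmarking down to inner-loop analysis can be illustrated with a small, hypothetical micro-benchmark (not taken from the talk; the actual work used SIMD intrinsics and inline assembler in native code). Here a byte-at-a-time scan of a synthetic genomic sequence stands in for a scalar implementation, and Python's bulk `bytes.count` stands in for a vector-widened inner loop; timing the two in isolation exposes the gap that an end-to-end benchmark would blur.

```python
import timeit

# Synthetic 1 MB sequence (hypothetical workload, not HCA data).
seq = b"ACGT" * 250_000

def gc_count_loop(s):
    # Byte-at-a-time scan: analogous to a naive scalar inner loop.
    n = 0
    for b in s:
        if b in (71, 67):  # ord('G'), ord('C')
            n += 1
    return n

def gc_count_bulk(s):
    # Bulk counting: analogous to a SIMD-widened loop that
    # processes many bytes per instruction.
    return s.count(71) + s.count(67)

# Sanity check, then time each candidate in isolation.
assert gc_count_loop(seq) == gc_count_bulk(seq)
t_loop = timeit.timeit(lambda: gc_count_loop(seq), number=3)
t_bulk = timeit.timeit(lambda: gc_count_bulk(seq), number=3)
print(f"scalar-style: {t_loop:.4f}s  bulk-style: {t_bulk:.4f}s")
```

The bulk version is typically orders of magnitude faster on the same data, which is the kind of per-loop observation that IPC analysis quantifies at the hardware-counter level.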

NVMe/FC or NVMe/TCP: An In-Depth NVMe Packet and Flow-Level Comparison Between the Two Transport Options

All major storage vendors have started to offer NVMe/FC on their storage arrays. Almost all of them also offer an IP storage option via 25G Ethernet iSCSI. Now, with the introduction of NVMe/TCP, which offers FC-like services (discovery, notification, zoning), storage vendors can provide a software upgrade that adds NVMe/TCP as an option. Customers have started to evaluate the NVMe transport options and are asking which infrastructure (Ethernet or FC) they should invest in going forward.

Data Loss Mitigation through 2-Factor Authentication

Ransomware attack mitigation has been a high-profile problem and is getting more visibility in recent years due to the high ransoms victims pay to have their data released. This proposal implements a series of 'recognition' triggers within a layered file system on Windows, which force a caller through a form of 2FA to potentially reduce the impact of an attack. The approach taken by Thales, within its layered file system implementation for data protection, leverages several layers to recognize when a potential threat is executing.
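The trigger-then-challenge idea can be sketched in a few lines. This is a hypothetical illustration, not the Thales implementation: a filter layer counts writes per caller in a sliding window, and once a caller exceeds a burst threshold (one plausible "recognition" signal for ransomware-style mass encryption), further writes are blocked until a second-factor check succeeds.

```python
import time

class RecognitionFilter:
    """Hypothetical sketch of a 'recognition trigger' in a layered file
    system (illustrative only, not the Thales design). A write burst
    from one caller trips a threshold; further writes are then blocked
    until a 2FA callback succeeds."""

    def __init__(self, backend_write, verify_2fa, threshold=100, window=1.0):
        self.backend_write = backend_write  # lower file-system layer
        self.verify_2fa = verify_2fa        # callback -> True if 2FA passed
        self.threshold = threshold          # writes per window before triggering
        self.window = window                # sliding window, seconds
        self.history = {}                   # caller -> recent write timestamps
        self.trusted = set()                # callers that already passed 2FA

    def write(self, caller, path, data):
        now = time.monotonic()
        recent = [t for t in self.history.get(caller, []) if now - t < self.window]
        recent.append(now)
        self.history[caller] = recent
        # Recognition trigger: an unusual write burst from this caller.
        if len(recent) > self.threshold and caller not in self.trusted:
            if not self.verify_2fa(caller):
                raise PermissionError(f"write burst from {caller} blocked pending 2FA")
            self.trusted.add(caller)
        self.backend_write(path, data)
```

A real filter driver would combine several such signals (entropy of written data, extension renames, shadow-copy deletion) rather than write rate alone; the burst counter is just the simplest trigger to show.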
