Samba Multi-Channel/io_uring Status Update

Samba has had experimental support for multi-channel for quite a while. SMB3 defines several concepts for replaying requests safely, and we now implement them completely (and in parts better than a Windows Server does). The talk will explain how we implemented the missing features. With ever-increasing network throughput, we will reach a point where the data copies are too much for a single CPU core to handle. This talk gives an overview of how the io_uring infrastructure of the Linux kernel could be used to avoid copying data, as well as to spread the load across CPU cores.
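
For readers unfamiliar with io_uring, the following minimal sketch (our illustration, not Samba's code) shows the basic submit/complete flow using liburing; the file name is a hypothetical placeholder:

```c
/* Minimal liburing sketch: submit one asynchronous read and reap its
 * completion. Build with -luring. Illustrative only. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    char buf[4096];

    int fd = open("data.bin", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    if (io_uring_queue_init(8, &ring, 0) < 0) { perror("queue_init"); return 1; }

    sqe = io_uring_get_sqe(&ring);          /* grab a submission queue entry */
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);                 /* one syscall submits the I/O */

    if (io_uring_wait_cqe(&ring, &cqe) == 0) {
        printf("read returned %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);      /* mark the completion consumed */
    }

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```

Features such as registered buffers and SQPOLL build on this same flow to cut per-I/O copying and syscall overhead, which is presumably the direction the talk explores.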

Accelerating File Systems and Data Services with Computational Storage

Standardized computational storage services are frequently touted as the Next Big Thing in building faster, cheaper file systems and data services for large-scale data centers. However, many developers, storage architects, and data center managers are still unclear on how best to deploy computational storage services, and on whether computational storage offers real promise in delivering faster, cheaper, and more efficient storage systems. In this talk we describe Los Alamos National Laboratory's ongoing efforts to deploy computational storage into the HPC data center.

Sanitization – Forensic-proofing Your Data Deletion

Almost everyone understands that systems and data have lifecycles that typically include a disposal phase (i.e., what you do when you no longer need something). Conceptually, data needs to be eliminated as part of this disposal, either from a single system or entirely (everywhere it is stored). Simply hitting the delete key may seem like the right approach, but in reality eliminating data can be difficult. Additionally, failing to correctly eliminate certain data can result in costly data breach scenarios.
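
To make the "delete key is not enough" point concrete, here is a hedged sketch (the path and block size are illustrative) that overwrites a file's contents before unlinking it. Even this is not reliable sanitization: on flash media, wear leveling and block remapping can preserve old copies, so real disposal should follow a standard such as NIST SP 800-88 (e.g., device-level secure erase).

```c
/* Illustrative only: overwrite a file in place, flush, then unlink.
 * A plain unlink() removes only the directory entry; the data blocks
 * remain recoverable until they are overwritten or discarded. */
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static int overwrite_and_unlink(const char *path)
{
    struct stat st;
    char zeros[4096];
    memset(zeros, 0, sizeof(zeros));

    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }

    for (off_t off = 0; off < st.st_size; off += (off_t)sizeof(zeros)) {
        size_t chunk = sizeof(zeros);
        if (st.st_size - off < (off_t)chunk)
            chunk = (size_t)(st.st_size - off);   /* don't grow the file */
        if (pwrite(fd, zeros, chunk, off) < 0) { close(fd); return -1; }
    }
    fsync(fd);           /* force the overwrite to stable storage */
    close(fd);
    return unlink(path); /* only now remove the directory entry */
}

int main(void)
{
    /* "secret.dat" is a hypothetical example path */
    return overwrite_and_unlink("secret.dat") == 0 ? 0 : 1;
}
```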

From NASD to DeltaFS: CMU and Los Alamos's Efforts in Building Large-Scale Filesystem Metadata

It has been a tradition that, every once in a while, we stop and reassess whether we need to build our next filesystems differently. A key previous effort was Carnegie Mellon University's NASD project, which decoupled filesystem data communication from metadata management and leveraged object storage devices for scalable data access. Now, as we enter the exascale age, we once again need bold ideas to advance parallel filesystem performance if we are to keep up with the rapidly increasing scale of today's massively parallel computing environments.

Rethinking Software Defined Memory (SDM) for large-scale applications with faster interconnects and memory technologies

Software-Defined Memory (SDM) is an emerging architectural paradigm that provides a software abstraction between applications and the underlying memory resources, with dynamic memory provisioning to achieve the desired SLAs. With the emergence of newer memory technologies and faster interconnects, it is possible to optimize memory resources deployed in cloud infrastructure while achieving the best possible TCO. SDM provides a mechanism to pool disjoint memory domains into a unified memory namespace.
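
One hedged illustration of what pooling disjoint memory domains can look like today is tiered allocation with the memkind library, placing hot data in local DRAM and cold data in a slower tier (e.g., CXL or persistent memory onlined as system RAM via the kernel's KMEM DAX driver). The tiering policy shown is our assumption for illustration, not necessarily the approach the talk describes.

```c
/* Hedged sketch of tiered allocation with memkind: hot data in DRAM,
 * cold data in a KMEM DAX tier. Build with -lmemkind. */
#include <memkind.h>
#include <stdio.h>

int main(void)
{
    /* Fast tier: default DRAM on the local node. */
    double *hot = memkind_malloc(MEMKIND_DEFAULT, 1024 * sizeof(double));

    /* Capacity tier: memory exposed through the KMEM DAX driver. */
    double *cold = memkind_malloc(MEMKIND_DAX_KMEM, 1024 * 1024 * sizeof(double));

    if (!hot || !cold) {
        fprintf(stderr, "allocation failed (is a DAX KMEM node online?)\n");
        return 1;
    }

    hot[0] = 1.0;    /* latency-sensitive working set */
    cold[0] = 2.0;   /* rarely touched bulk data */

    memkind_free(MEMKIND_DEFAULT, hot);
    memkind_free(MEMKIND_DAX_KMEM, cold);
    return 0;
}
```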

Stop Wasting 80% of Your Infrastructure Investment!

There is a new architectural approach to accelerating storage-intensive databases and applications that takes advantage of new techniques to dramatically accelerate database performance, improve response times, and reduce infrastructure cost at massive scale. This new architecture efficiently stores fundamental data structures, increasing space savings (up to 80%) over host- and software-based techniques that cannot address inherent inefficiencies in today's fastest SSD technology.

Scalable Storage Performance for High Density Applications

Real-time analytics and data-intensive applications have driven the adoption of high-performance, low-latency, highly parallel NVMe solutions, and we are now on the cusp of wide adoption of NVMe over Fabrics (NVMe-oF) storage in both on-prem and cloud data centers. Since the introduction of NVMe-oF, a great deal of investment and effort has gone into improving overall storage performance and features, resulting in substantial improvements in IOPS and CPU utilization and lowering the TCO for the customer.

Data Protection for Stateful Applications: the Open Source Way!

Stateful applications currently account for the largest share (more than 50%) of enterprise containerized deployments. To maintain uninterrupted availability of these containerized applications, enterprises need a strategy for backing them up and recovering them so that they are DR-ready. One important aspect is the ability to back up and recover into an alternative environment, by which we mean a hybrid scenario: for example, an application in a Kubernetes environment managed by Cloud1 being recovered into one managed by Cloud2.

NextGen Connected Cloud Datacenter with programmable and flexible infrastructure

Many businesses like ours are challenged to provision infrastructure on demand at the speed of software while the datacenter footprint is shrinking. Pure Storage as a company is getting out of the datacenter business and is adopting more of an OpEx cost model, with strict legal and data-compliance guidelines for many of our business application pipelines. We chose a programmable infrastructure and Connected Cloud architecture with open APIs.

Power-Efficient Data Processing with Software-Defined Computational Storage

CPU performance improvements based on Dennard scaling and Moore's Law have already reached their limits, and domain-specific computing has been considered an alternative to overcome the limitations of the traditional CPU-centric computing model. Domain-specific computing, first seen in early graphics and network cards, has expanded into accelerators such as GPGPUs, TPUs, and FPGAs as machine learning and blockchain technologies have become more common.
