Slack channel: https://app.slack.com/client/T02DWHYB4P7/C02DZKJV6JX

Managing Cloud infrastructure by using Terraform HCL

Terraform is an infrastructure-as-code tool from HashiCorp for building, changing, and managing infrastructure. It can manage multi-cloud environments through the HashiCorp Configuration Language (HCL), which codifies cloud APIs into declarative configuration files. We will learn how to write Terraform configuration files that can run a single application or manage an entire data center: first defining a plan, then executing it to build the described infrastructure.
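
To give a flavor of what such a file looks like, here is a minimal HCL sketch that declares a single virtual machine. The AWS provider, region, AMI ID, and instance type are illustrative assumptions, not values from the talk.

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }

    # The region is an illustrative choice.
    provider "aws" {
      region = "us-east-1"
    }

    # Declarative desired state: one small virtual machine.
    # The AMI ID is a placeholder; substitute a current one for your region.
    resource "aws_instance" "app_server" {
      ami           = "ami-0c55b159cbfafe1f0"
      instance_type = "t3.micro"

      tags = {
        Name = "hcl-demo"
      }
    }

Running "terraform plan" previews the changes this file implies, and "terraform apply" executes them, which is the define-then-execute workflow described above.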

Ozone - Architecture and Performance at billions’ scale

Object stores are known for ease of use and massive scalability. Unlike file systems and block stores, object stores can absorb data growth without an increase in complexity or developer intervention. Apache Hadoop Ozone is a highly scalable object store that extends the design principles of HDFS while scaling 10-100x beyond it. It can store billions of keys and hundreds of petabytes of data. At that scale, it must sustain very high throughput while maintaining low latency.
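
Ozone also exposes an S3-compatible gateway, so standard object-store clients can talk to it. The sketch below assumes a local test cluster with the gateway on its default port 9878; the credentials, bucket, and key names are illustrative.

    import boto3

    # Point a standard S3 client at Ozone's S3-compatible gateway.
    # Endpoint and credentials assume a local test cluster.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9878",
        aws_access_key_id="testuser",
        aws_secret_access_key="testsecret",
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello ozone")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())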

Apache Ozone - Balancing and Deleting Data At Scale

Apache Ozone is an object store that scales to tens of billions of objects, hundreds of petabytes of data, and thousands of datanodes. Ozone supports not only high-throughput data ingestion but also high-throughput deletion, with performance similar to HDFS. Furthermore, at massive scale data can become non-uniformly distributed as new datanodes are added and data is deleted. Non-uniform distribution lowers resource utilization and can reduce the overall throughput of the cluster.
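
The rebalancing idea itself can be stated in a few lines. The toy sketch below plans moves from over-utilized to under-utilized datanodes relative to the cluster mean; it illustrates the concept only and is not Ozone's actual Container Balancer algorithm.

    # Toy sketch: move data from over-utilized to under-utilized datanodes
    # until every node is within a threshold of the cluster mean.
    def plan_moves(utilization: dict[str, float], threshold: float = 0.10):
        mean = sum(utilization.values()) / len(utilization)
        over = {n: u for n, u in utilization.items() if u > mean + threshold}
        under = {n: u for n, u in utilization.items() if u < mean - threshold}
        # Pair the fullest sources with the emptiest targets first.
        return [(src, dst)
                for src in sorted(over, key=over.get, reverse=True)
                for dst in sorted(under, key=under.get)]

    # One new, nearly empty datanode attracts data from the fullest node.
    print(plan_moves({"dn1": 0.90, "dn2": 0.55, "dn3": 0.05}))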

Compacting Smaller Objects in Cloud for Higher Yield

In file systems, large sequential writes perform better than small random writes, which is why many storage systems implement a log-structured file system. In the same way, the cloud favors large objects over small ones. Cloud providers place throttling limits on PUTs and GETs, so uploading many small objects takes significantly longer than uploading a single large object of the aggregate size. Moreover, each small object incurs a per-PUT request cost. At Netflix, a large volume of media assets and their associated metadata is generated and pushed to the cloud.
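
The remedy the title points to is compaction: pack many small blobs into one large object plus an offset index, so one PUT replaces many. The sketch below is an illustration of that idea, not Netflix's system; the bucket, key names, and index format are assumptions.

    import io
    import json
    import boto3

    def put_compacted(s3, bucket: str, key: str, blobs: dict[str, bytes]):
        """Write many small blobs as one object plus a JSON offset index."""
        buf = io.BytesIO()
        index = {}
        for name, data in blobs.items():
            index[name] = (buf.tell(), len(data))  # (offset, length)
            buf.write(data)
        s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())
        s3.put_object(Bucket=bucket, Key=key + ".index",
                      Body=json.dumps(index).encode())

    def get_one(s3, bucket: str, key: str, index: dict, name: str) -> bytes:
        """Fetch a single blob back out of the compacted object."""
        off, length = index[name]
        rng = f"bytes={off}-{off + length - 1}"
        return s3.get_object(Bucket=bucket, Key=key, Range=rng)["Body"].read()

    # Example (assumes credentials and an existing bucket):
    # s3 = boto3.client("s3", region_name="us-east-1")
    # put_compacted(s3, "example-bucket", "packed/batch-0001",
    #               {"a.json": b'{"id": 1}', "b.json": b'{"id": 2}'})

The ranged GET preserves random access: a reader pays one small request for one blob instead of downloading the whole compacted object.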

Compression, Deduplication & Encryption conundrums for Cloud Storage

Cloud storage footprints are measured in exabytes and growing exponentially, and companies pay billions of dollars to store and retrieve data. In this talk, we will cover some of the space and time optimizations that have historically been applied to on-premises file storage and how they can be applied to objects stored in the cloud. Deduplication and compression are techniques traditionally used to reduce the amount of storage applications consume.
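
As a concrete, simplified picture of the two techniques, here is a toy content-addressed store: identical chunks hash to the same key and are stored once, and each unique chunk is compressed before being kept. Fixed-size chunking and the in-memory dictionary are simplifying assumptions; production systems typically use variable-size chunking and persistent media.

    import hashlib
    import zlib

    CHUNK = 4096
    store: dict[str, bytes] = {}  # chunk digest -> compressed chunk

    def write(data: bytes) -> list[str]:
        """Store data; return the recipe of chunk digests to rebuild it."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:          # dedup: keep each chunk once
                store[digest] = zlib.compress(chunk)
            recipe.append(digest)
        return recipe

    def read(recipe: list[str]) -> bytes:
        return b"".join(zlib.decompress(store[d]) for d in recipe)

    data = b"A" * 10000   # highly redundant input dedupes and compresses well
    recipe = write(data)
    assert read(recipe) == data
    print(len(data), "bytes in,", sum(len(v) for v in store.values()), "stored")

The sketch also hints at the conundrum in the title: if clients encrypt data before upload, identical plaintexts no longer yield identical chunks, which defeats deduplication and leaves little redundancy for compression to exploit.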

Transparent Encryption and Dual End Point Access Controls to Secure AWS S3 buckets

Amazon S3 storage is widely deployed to store everything from customer data and server logs to software repositories. Poorly secured S3 buckets have resulted in many publicized data breaches. The cloud service provider's shared responsibility model places the burden of protecting the confidentiality, availability, and integrity of data on the customer. Thales CipherTrust Encryption Cloud Object Storage for S3 secures S3 objects by enabling advanced encryption along with dual endpoint access controls.
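
The Thales product itself is proprietary, so the sketch below is only a generic illustration of one of the layers involved: requesting server-side KMS encryption for an object at upload time with boto3. The bucket, key, and KMS alias are assumptions, and this is not the CipherTrust API.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")  # illustrative region

    # Ask S3 to encrypt this object at rest under a KMS key.
    s3.put_object(
        Bucket="example-bucket",
        Key="customer-data.csv",
        Body=b"...",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/example-key",
    )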

Behind the Scenes for Azure Block Storage Unique Capabilities

Azure Block Storage, also referred to as Azure Disks, is the persistent block storage for Azure Virtual Machines and a core pillar of Azure IaaS infrastructure. Azure offers unique block storage capabilities that differentiate it from other cloud block storage offerings. In this talk, we will use a few of these capabilities as examples to reveal the technical designs behind them and show how they are tied to our XStore storage architecture.

Direct Drive - Azure's Next-generation Block Storage Architecture

Azure Disks provide block storage for Azure Virtual Machines and are a core pillar of the Azure IaaS platform. In this talk, we will provide an overview of Direct Drive, Azure's next-generation block storage architecture. Direct Drive forms the foundation for a new family of Azure disk offerings, starting with Ultra Disk (Azure's highest-performance disks). We will describe the challenges of providing durable, highly available, high-performance disks at cloud scale, as well as the software and hardware innovations that allow us to overcome them.

Lessons Learned (the hard way) from Building a Global, Decentralized Storage Network

Durability and performance in an S3-alternative storage platform are complex problems, and not owning the hard drives adds another level of difficulty. At Storj, we have more than 13,500 independent node operators putting their unused hard drive space to work by joining our decentralized network. Learn how Storj developed an architecture that can meet the demands of an S3 workload while also ensuring durability. Assuming that any node operator could be malicious required a focus on encryption.
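
That trust model has a direct consequence: data must be encrypted client-side before any piece reaches a node, so operators only ever hold ciphertext. The sketch below illustrates the idea with AES-GCM; the key handling is simplified, the byte striping is a stand-in for real erasure coding, and none of it is Storj's actual implementation.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # held by the data owner only
    nonce = os.urandom(12)

    plaintext = b"segment of user data"
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

    # Only ciphertext is split into pieces and shipped to node operators.
    pieces = [ciphertext[i::3] for i in range(3)]  # stand-in for erasure coding

    # The owner, holding the key, reassembles and decrypts.
    reassembled = bytearray(len(ciphertext))
    for i, piece in enumerate(pieces):
        reassembled[i::3] = piece
    assert AESGCM(key).decrypt(nonce, bytes(reassembled), None) == plaintext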
