Abstract
Moving big data in and out of the cloud remains a significant challenge for organizations looking to leverage the cloud for big data applications. Typical file transfer acceleration "gateways" upload data to cloud object storage in two phases: data first lands on local staging disk and is then copied to object storage. This introduces significant delays, limits the size of the data that can be transferred, and increases local storage costs as well as machine compute time and cost. This session will describe direct-to-cloud capabilities that achieve maximum end-to-end transfer speeds and storage scale-out through direct integration with the underlying object storage interfaces, enabling transferred data to be written directly to object storage and to be available immediately when the transfer completes. It will explore how organizations across different industries are using direct-to-cloud technology for applications that require the movement of gigabytes, terabytes, or petabytes of data in, out, and across the cloud.
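The difference between the two patterns can be sketched in a few lines. The following is a minimal, hypothetical illustration of the direct-to-cloud idea: each chunk of an incoming transfer is streamed straight into an object-store multipart upload, so no intermediate local staging file is needed and the object becomes readable the moment the final part is committed. The `ObjectStore` class here is an in-memory stand-in for a real object-storage API (such as S3 multipart upload), not an actual SDK.

```python
class ObjectStore:
    """In-memory stand-in for a cloud object store (assumed interface)."""
    def __init__(self):
        self.objects = {}   # completed, visible objects
        self._parts = {}    # in-flight multipart uploads

    def start_multipart(self, key):
        self._parts[key] = []

    def upload_part(self, key, data):
        self._parts[key].append(data)

    def complete_multipart(self, key):
        # The object becomes visible atomically when the upload completes.
        self.objects[key] = b"".join(self._parts.pop(key))


def direct_upload(chunks, store, key):
    """Write incoming chunks directly to object storage -- no local staging copy."""
    store.start_multipart(key)
    for chunk in chunks:
        store.upload_part(key, chunk)   # each part goes to the store as it arrives
    store.complete_multipart(key)       # data is available immediately on completion


store = ObjectStore()
direct_upload([b"header,", b"row1,", b"row2"], store, "dataset.csv")
print(store.objects["dataset.csv"])  # the object is readable right away
```

In the two-phase gateway pattern, the entire dataset would instead be written to local disk before the copy to object storage even begins, which is the source of the delay, size limits, and extra storage cost described above.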
Learning Objectives
An understanding of the root causes of technical bottlenecks associated with using cloud-based services
Methods to overcome these technical bottlenecks and speed up cloud-based big data workflows
Insight into how organizations across different industries have successfully deployed cloud-based big data applications