Abstract
Businesses are extracting value from more data, from more sources, and at increasingly real-time rates; Spark and HANA are just the beginning. This presentation details existing and emerging in-memory computing solutions that address this market trend, and the disruptions that occur when big data (petabytes) is combined with in-memory/real-time requirements. It also presents use cases and survey results from users who have implemented in-memory computing applications. The session provides an overview of the trade-offs among key solutions (Hadoop/Spark, Tachyon, HANA, in-memory NoSQL, etc.) and related infrastructure (DRAM, NAND, 3D XPoint, NV-DIMMs, high-speed networking), and examines the disruption to infrastructure design and operations when "tiered memory" replaces "tiered storage". It also includes real customer data on how organizations are addressing and planning for this transition with this architectural framework in mind. Attendees will leave with a framework to evaluate and plan for their adoption of in-memory computing.
Learning Objectives
Learn what it takes to evaluate, plan, and implement in-memory computing applications