See You (Online) at SDC!

Marty Foltyn

Sep 15, 2020

We’re going virtual in 2020, and compute, memory, and storage are important topics at the upcoming SNIA Storage Developer Conference. SNIA CMSI is a sponsor of SDC 2020, so visit our booth for the latest information and a chance to chat with our experts. With over 120 sessions available to watch live during the event and later on demand, live Birds of a Feather chats, and a Persistent Memory Bootcamp accessing new PM systems in the cloud, we want to make sure you don’t miss anything! Register here to see sessions live, or on demand on your own schedule. Agenda highlights include:

Computational Storage Talks
  • Deploying Computational Storage at the Edge – discusses the deployment of small form factor, ASIC-based solutions, including a use case.
  • Next Generation Datacenters Require Composable Architecture Enablers and Deterministic Programmable Intelligence – explains why determinism, parallel programming, and ease of programming are important.
  • Computational Storage Birds of a Feather LIVE Session – ask your questions of our experts and see live demos of computational storage production systems. Tuesday, September 22, 2020, 3:00 pm – 4:00 pm PDT (UTC-7).

Persistent Memory Presentations
  • Caching on PMEM: an Iterative Approach – discusses Twitter’s approach to exploring in-memory caching.
  • Challenges and Opportunities as Persistence Moves Up the Memory/Storage Hierarchy – shows how and why memory at all levels will become persistent.
  • Persistent Memory on eADR System – describes how the SNIA Persistent Memory Programming Model will include the possibility of platforms where the CPU caches are considered permanent and need no flushing.
  • Persistent Memory Birds of a Feather LIVE Session – ask our experts about your bootcamp progress, how to program PM, or what PM is shipping today. Tuesday, September 22, 2020, 4:00 pm – 5:00 pm PDT (UTC-7).

Solid State Storage Sessions
  • Enabling Ethernet Drives – provides a glimpse into a new SNIA standard that enables SSDs to have an Ethernet interface, and discusses the latest management standards for NVMe-oF drives.
  • An SSD for Automotive Applications – details efforts under way in JEDEC to define a new Automotive SSD standard.

J Metz

Sep 14, 2020

Last month, the SNIA Cloud Storage Technologies Initiative was fortunate to have artificial intelligence (AI) expert Parviz Peiravi explore the topic of AI Operations (AIOps) at our live webcast, “IT Modernization with AIOps: The Journey.” Parviz explained why the journey to cloud native and microservices, and the complexity that comes along with it, requires a rethinking of enterprise architecture. If you missed the live presentation, it’s now available on demand together with the webcast slides. We had some interesting questions from our live audience. As promised, here are answers to them all:

Q. Can you please define the data lake and how it differs from other data storage models?

A. A data lake is another form of data repository with a specific capability: it allows data ingestion from different sources with different data types (structured, unstructured, and semi-structured), with the data stored as is and not transformed. Its transformation process, Extract, Load, Transform (ELT), follows a schema-on-read model, in contrast to the schema-on-write Extract, Transform, Load (ETL) process used in traditional database management systems. See the definition of data lake in the SNIA Dictionary here. In 2005, Roger Mougalas coined the term Big Data; it refers to the large-volume, high-velocity data generated by the Internet and billions of connected intelligent devices that was impossible to store, manage, process, and analyze with traditional database management and business intelligence systems. The need for high-performance data management systems and advanced analytics that can deal with a new generation of applications, such as Internet of Things (IoT), real-time, and streaming apps, led to the development of data lake technologies. Initially, the term “data lake” referred to the Hadoop framework and its distributed computing and file system, which bring storage and compute together and allow faster data ingestion, processing, and analysis.

In today’s environment, “data lake” could refer to both physical and logical forms: a logical data lake could include Hadoop, a data warehouse (SQL/NoSQL), and object-based storage, for instance.

Q. One of the aspects of replacing and enhancing a brownfield environment is that there are different teams in the midst of different budget cycles. This makes greenfield very appealing. On the other hand, greenfield requires a massive capital outlay. How do you see the percentages of either scenario working out in the short term?

A. I do not have an exact percentage, but the majority of enterprises using a brownfield implementation strategy have been in place for a long time. In order to develop and deliver new capabilities with velocity, greenfield approaches are gaining significant traction. Most new application development based on microservices/cloud native is being implemented in greenfield to reduce risk and cost, using the cloud resources available today at smaller scale at first and adding more resources later.

Q. There is a heavy reliance upon mainframes in banking environments. Quite a bit of error has been eliminated through decades of best practices. How do we ensure that we don’t build in error because these models are so new?

A. The compelling reasons behind mainframe migration – besides cost – are the ability to develop and deliver new application capabilities and business services, and making data available to all other applications. There are four methods for mainframe migration:
  • Data migration only
  • Re-platforming
  • Re-architecting
  • Re-factoring
Each approach provides enterprises different degrees of risk and freedom. Applying best practices to both application design/development and operational management is the best way to ensure smooth application migration from a monolith to a new distributed environment such as microservices/cloud native. Data architecture plays a pivotal role in the design process, in addition to applying a Continuous Integration and Continuous Delivery (CI/CD) process.

Q. With the changes into a monolithic data lake, will we be seeing different data lakes with different security parameters, which just means that each lake is simply another data repository?

A. If we follow a domain-driven design principle, you could have multiple data lakes with specific governance and security policies appropriate to each domain. Multiple data lakes could be accessed through data virtualization to mimic a monolithic data lake; this approach is based on a logical data lake architecture.

Q. What’s the difference between multiple data lakes and multiple data repositories? Isn’t it just a matter of quantity?

A. Looking from a Big Data perspective, a data lake not only stores data but also provides capabilities to process and analyze it (e.g., the Hadoop framework/HDFS). New trends are emerging that separate storage and compute (e.g., disaggregated storage architectures); hence some vendors use the term “data lake” loosely and offer only storage capability, while others provide both storage and data processing capabilities as an integrated solution. More important than the definition of data lake are your usage and specific application requirements, which determine which solution is a good fit for your environment.
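To make the schema-on-write vs. schema-on-read contrast above concrete, here is a minimal Python sketch. The store/lake lists, the schema dictionary, and the record shapes are invented for illustration; they are not any particular product's API.

```python
def etl_load(store, raw_records, schema):
    """Schema-on-write (ETL): validate/transform before loading; the store
    only ever holds clean rows, and malformed input fails at load time."""
    for rec in raw_records:
        row = {field: cast(rec[field]) for field, cast in schema.items()}
        store.append(row)

def elt_load(lake, raw_records):
    """Schema-on-read (ELT): load data as is; any shape is accepted."""
    lake.extend(raw_records)

def elt_read(lake, schema):
    """Apply the schema at query time, skipping records that don't fit."""
    for rec in lake:
        if all(field in rec for field in schema):
            yield {field: cast(rec[field]) for field, cast in schema.items()}

schema = {"device_id": str, "temp_c": float}
raw = [{"device_id": 7, "temp_c": "21.5"},   # coercible to the schema
       {"note": "malformed"}]                # doesn't match the schema

warehouse, lake = [], []
try:
    etl_load(warehouse, raw, schema)  # rejects the malformed record on write
except KeyError:
    pass

elt_load(lake, raw)                   # both records land in the lake as is
clean = list(elt_read(lake, schema))  # schema applied on read
```

The tradeoff in miniature: the lake accepts everything and defers interpretation; the warehouse enforces the schema up front and surfaces bad data immediately.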

The Impact and Implications of Internet of Payments

Jim Fister

Sep 2, 2020

Electronic payments, once the purview of a few companies, have expanded to include a variety of financial and technology companies. Internet of Payments (IoP) enables payment processing over many kinds of IoT devices and has also led to the emergence of the micro-transaction. The growth of independent payment services offering e-commerce solutions, such as Square, and the entry of new ways to pay, such as Apple Pay, mean that a variety of devices and technologies have also come into wide use. This is the topic that the SNIA Cloud Storage Technologies Initiative is going to examine at our live webcast on October 14, 2020, “Technology Implications of Internet of Payments.” Along with the rise and dispersal of the payment ecosystem, more of the assets that we exchange for payment are becoming digitized as well. When digital ownership is equivalent to physical ownership, security and scrutiny of those digital platforms and methods take a leap forward in significance. Assets and funds are now widely distributed across multiple organizations. Physical asset ownership is even being shared among many stakeholders, resulting in more ownership opportunities for less investment, but in a distributed way. In this webcast we will look at the impact of all of these new principles across multiple use cases and how they affect not only the consumers driving this behavior but also the underlying infrastructure that supports and enables it. We’ll examine:
  • The cloud network, applications and storage implications of IoP
  • Use of emerging blockchain capabilities for payment histories and smart contracts
  • Identity and security challenges at the device in addition to point of payment
  • Considerations on architecting IoP solutions for future scale
Register today and please bring your questions for our expert presenters.

Achieving Data Literacy

Jim Fister

Aug 24, 2020

We’re all spending our days living in the pandemic and understanding the cultural changes on a personal level. That keening wail you hear is not some outside siren; it’s you staring out the window at the world that used to be. But with all that, have you thought about the insight that you could be applying to your business? If the pandemic has taught data professionals one essential thing, it’s this: data is like water; when it escapes, it reaches every aspect of the community it inhabits. This fact becomes apparent when the general public has access to statistics, assessments, analyses, and even medical journals related to the pandemic at a scale never seen before. But having access to data does not automatically grant the reader knowledge of how to interpret that data or the ability to derive insight. In fact, it can be quite challenging to judge the accuracy or value of that data. Gaining insight into information in context, insight that extends beyond just the facts presented and instead enables reasonable predictions and suppositions about new instances of that data, requires a skill called data literacy. Join us on September 17, 2020 for a live SNIA Cloud Storage Technologies Initiative webcast, “Using Data Literacy to Drive Insight,” where we will explore the importance of data literacy and examine:
  • How data literacy is defined by the ability to interpret and apply context
  • How a data scientist approaches new data sources and the questions they ask of it
  • How to seek out supporting or challenging data to validate its accuracy and value for providing insight
  • How this impacts underlying information systems
  • How data platforms need to adjust to this multi-purpose data ecosystem where data sources are no longer single use
Register today for what is sure to be an “insightful” look at this important topic.

Understanding the NVMe Key-Value Standard

John Kim

Aug 17, 2020

The storage industry has many applications that rely on storing data as objects. In fact, it’s the most popular way that unstructured data (for example, photos, videos, and archived messages) is accessed.

At the drive level, however, the devil is in the details. Normally, storage devices like drives or storage systems store information as blocks, not objects. This means that there is some translation that goes on between the data as it is ingested or consumed (i.e., objects) and the data that is stored (i.e., blocks).

Naturally, storing objects from applications as objects on storage would be more efficient, which brings performance boosts, and simpler, which means there are fewer things that can go wrong. Moving toward storing key-value pairs, and away from the traditional block storage paradigm, makes it easier and simpler to access objects. But nobody wants a marketplace where each storage vendor has its own key-value API.

Both the NVM Express™ group and SNIA have done quite a bit of work in standardizing this approach:

  • NVM Express has completed standardization of the Key Value Command Set
  • SNIA has standardized a Key Value API
  • Spoiler alert: these two work very well together!

What does this mean? And why should you care? Find out on September 1, 2020 at our live SNIA Networking Storage Forum webcast, “The Key to Value: Understanding the NVMe Key-Value Standard,” when Bill Martin, SNIA Technical Council Co-Chair, will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set, and explain how it interacts with the NVMe standards. He will also cover the SNIA KV-API and the open source work that is available to take advantage of Key Value storage, and discuss:

  • How this approach is different than traditional block-based storage
  • Why doing this makes sense for certain types of data (and, of course, why doing this may not make sense for certain types of data)
  • How this simplifies the storage stack
  • Who should care about this, why they should care about this, and whether or not you are in that group

The event is live and, time permitting, Bill will be ready to answer questions on the spot. Register today to join us on September 1st.
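As a toy illustration of the translation overhead described above, the sketch below contrasts a block device, where the host must split an object across fixed-size blocks and track the mapping itself, with a key-value device that stores the object whole. The class and method names are invented for illustration; they are not the NVMe-KV Command Set or the SNIA KV API.

```python
BLOCK = 512  # fixed block size of the toy block device

class BlockDevice:
    """Block paradigm: the host does object-to-block translation."""
    def __init__(self):
        self.blocks = {}  # LBA -> 512-byte block
    def write_object(self, lba, data):
        # Split the object into blocks and pad the tail block.
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            self.blocks[lba + i // BLOCK] = chunk.ljust(BLOCK, b"\x00")
        return (lba, len(data))  # host must remember LBA and length
    def read_object(self, lba, length):
        nblocks = -(-length // BLOCK)  # ceiling division
        raw = b"".join(self.blocks[lba + i] for i in range(nblocks))
        return raw[:length]            # strip the padding back off

class KVDevice:
    """Key-value paradigm: the device stores the object as one value."""
    def __init__(self):
        self.store = {}
    def put(self, key, value):
        self.store[key] = value
    def get(self, key):
        return self.store[key]

obj = b"x" * 1500
bd, kv = BlockDevice(), KVDevice()
lba, length = bd.write_object(0, obj)  # 3 blocks plus host-side bookkeeping
kv.put(b"photo-001", obj)              # one command, no translation layer
```

The block path needs padding, ceiling arithmetic, and external metadata for a single 1,500-byte object; the key-value path is a single put/get, which is the simplification the standards aim to expose.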

Compression Puts the Squeeze on Storage

Ilker Cebeli

Aug 12, 2020

Everyone knows data volumes are exploding faster than IT budgets. And customers are increasingly moving to flash storage, which is faster and easier to use than hard drives, but still more expensive. To cope with this conundrum and squeeze more efficiency from storage, storage vendors and customers can turn to data reduction techniques such as compression, deduplication, thin provisioning and snapshots.

On September 2, 2020, the SNIA Networking Storage Forum will specifically focus on data compression in our live webcast, “Compression: Putting the Squeeze on Storage.” Compression can be done at different times, at different stages in the storage process, and using different techniques. We’ll discuss:

  • Where compression can be done: at the client, on the network, on the storage controller, or within the storage devices
  • What types of data should be compressed
  • When to compress: real-time compression vs. post-process compression
  • Different compression techniques
  • How compression affects performance
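As a rough illustration of the “what types of data should be compressed” question, Python’s built-in zlib shows the gap between repetitive and already-random data (the sample data here is invented for illustration):

```python
import os
import zlib

# Repetitive, log-like data compresses dramatically; already-random data
# does not, yet still costs CPU time to run through the compressor.
text_like = b"GET /index.html HTTP/1.1\r\n" * 1000  # highly repetitive
random_like = os.urandom(26_000)                    # effectively incompressible

small = zlib.compress(text_like, level=6)
big = zlib.compress(random_like, level=6)

ratio_text = len(text_like) / len(small)   # large ratio: worth compressing
ratio_rand = len(random_like) / len(big)   # ~1.0 or worse: skip it

assert zlib.decompress(small) == text_like  # compression is lossless
```

This is why real systems often detect incompressible data (already-compressed media, encrypted blocks) and store it as is rather than burn CPU for no gain.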

Join me and my SNIA colleagues, John Kim and Brian Will, for this compact and informative webcast! We hope to see you on September 2nd. Register today.

Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors

Tom Friend

Aug 11, 2020

NVMe over Fabrics technology is gaining momentum and getting more traction in data centers, but there are three kinds of Ethernet based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP.

How do we optimize NVMe over Fabrics performance with different Ethernet transports? That will be the discussion topic at our SNIA Networking Storage Forum webcast, “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors” on September 16, 2020.

Setting aside the considerations of network infrastructure, scalability, security requirements, and the complete solution stack, this webcast will explore the performance of different Ethernet-based transports for NVMe over Fabrics at the detailed benchmark level. We will show three key performance indicators: IOPS, throughput, and latency, with different workloads including sequential read/write, random read/write, and 70% read/30% write, all with different data sizes. We will compare the results of the three Ethernet-based transports: iWARP, RoCEv2, and TCP.

Further, we will dig a little bit deeper to talk about the variables that impact the performance of different Ethernet transports. There are a lot of variables that you can tune, but these variables will impact the performance of each transport differently. We will cover the variables including:

  1. How many CPU cores are needed (and how many am I willing to give)?
  2. Optane SSD or 3D NAND SSD?
  3. How deep should the Queue-Depth be?
  4. Do I need to care about MTU?

This discussion won’t tell you which transport is the best. Instead, we will unfold the performance of each transport and tell you what it would take for each one to achieve its best performance, so that you can make the best choice for your NVMe over Fabrics solutions.
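For back-of-envelope intuition on how the three indicators relate, and why the queue-depth question matters, Little’s Law ties them together. The numbers below are illustrative only, not benchmark results:

```python
# Little's Law: average outstanding I/Os (queue depth) = IOPS x avg latency.
# Throughput follows directly from IOPS and the I/O size.

def throughput_MBps(iops, io_size_bytes):
    """MB/s moved at a given IOPS rate and I/O size."""
    return iops * io_size_bytes / 1e6

def queue_depth(iops, avg_latency_s):
    """Average outstanding I/Os needed to sustain that IOPS at that latency."""
    return iops * avg_latency_s

# e.g. 500k 4 KiB random-read IOPS at 100 microseconds average latency:
tp = throughput_MBps(500_000, 4096)  # ~2048 MB/s
qd = queue_depth(500_000, 100e-6)    # ~50 outstanding I/Os
```

The practical reading: if a transport adds latency, you need a proportionally deeper queue to hold the same IOPS, which is exactly the kind of per-transport difference the webcast digs into.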

I hope you will join us on September 16th for this live session that is sure to be informative. Register today.
