Kubernetes in the Cloud Q&A

Michelle Tidwell

Aug 6, 2019

Kubernetes is a hot topic these days, generating lots of interest and questions. The goal of our SNIA Cloud Storage Technologies Initiative Kubernetes in the Cloud webcast series is to cut through the hype and provide a vendor-neutral look at what Kubernetes is and how it is being used. Our most recent webcast, Kubernetes in the Cloud (Part 2), generated some interesting questions. Here are answers from our expert presenters.

Q. If I’m running my Kubernetes infrastructure at a cloud service provider, do I need CSI support from the cloud provider? If it is not available, I will need a virtual storage array that provides CSI on top of the underlying cloud storage. Do you know whether there are solutions on the market that I can deploy as a virtual machine at my cloud provider?

A. Solutions using the CSI interface for public cloud storage are not available at this point. It will be up to each cloud provider to decide whether to support those interfaces to its storage layers.

Q. Does each pod run on one CPU core? I am trying to understand how to size the server configuration.

A. Containers use Linux cgroups to limit the amount of CPU and memory a container can consume, and this is exposed in Kubernetes as limits that you can set, as shown in the sketch below.
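
As a minimal sketch of how those cgroup-backed limits are expressed, here is a Pod spec built in Python and emitted as YAML; the pod name, image, and resource values are illustrative, not from the webcast.

```python
# Minimal sketch: CPU/memory limits in a Pod spec. Kubernetes translates
# "limits" into Linux cgroup caps on the node. Name, image, and values are
# illustrative. Requires: pip install pyyaml
import yaml

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "limited-pod"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "nginx:1.17",
            "resources": {
                # "requests" guide scheduling; "limits" become cgroup caps.
                "requests": {"cpu": "250m", "memory": "128Mi"},
                # 500m = half a core; a pod is not pinned to one CPU core.
                "limits": {"cpu": "500m", "memory": "256Mi"},
            },
        }],
    },
}

print(yaml.safe_dump(pod, sort_keys=False))  # pipe to: kubectl apply -f -
```
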
Q. In today’s environment for Kubernetes Flex storage, what is the suggested process to “back up” these stateful PVs, or is that no longer necessary?

A. Backups are still as important with containers as they are with traditional applications. There are many different approaches available for backing up containers: storage snapshots via native storage interfaces, deployment of backup clients in containers, application-level backups, etc.

Q. You mentioned Kubernetes a lot, but what is the status of native Docker CSI support? Can I use CSI within Docker without deploying Kubernetes? And if yes, can I then get rid of the need for Docker volume drivers?

A. Docker Universal Control Plane (UCP) support for CSI is currently in beta. Once that support is generally available, we’ll be able to answer your question in more detail.

Interested in more Kubernetes in the Cloud information? Watch our first installment, Kubernetes in the Cloud (Part 1), on demand at your convenience and sign up for our next webcast, Kubernetes in the Cloud (Part 3): Stateful Workloads, which will be live on August 20, 2019 and available on demand after that.

Stateful Workloads on Kubernetes: (Almost) Everything You Need to Know

Mike Jochimsen

Jul 26, 2019


Kubernetes is great for running stateless workloads, like web servers. It’ll run health checks, restart containers when they crash, and do all sorts of other wonderful things. So, what about stateful workloads? Large implementers like Uber say to avoid it if you can [1], and gurus like Kelsey Hightower echo that sentiment [2].

It’s the topic we’ll address on August 20th at our live SNIA Cloud Storage Technologies Initiative webcast “Kubernetes in the Cloud (Part 3): Stateful Workloads.” In this session, we’ll explore when it’s appropriate to run a stateful workload inside the cluster and when it’s better to run it outside. We’ll discuss the best options for running a workload like a database in the cloud or in the cluster, and what’s needed to set that up.

We’ll cover:

  • Secrets management
  • Running a database on a VM and connecting it to Kubernetes as a service
  • Running a database in Kubernetes using a `StatefulSet` (see the sketch after this list)
  • Running a database in Kubernetes using an Operator
  • Running a database on a cloud managed service
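
To preview the `StatefulSet` option, here is a minimal sketch of such a manifest, built in Python and emitted as YAML; the database image, volume size, and names are our own illustrative choices, not an official example.

```python
# Minimal sketch of a StatefulSet for a single-node database, assuming a
# cluster with a default StorageClass. Requires: pip install pyyaml
import yaml

statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",  # headless Service that gives pods stable DNS
        "replicas": 1,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {
                "containers": [{
                    "name": "postgres",
                    "image": "postgres:11",
                    "volumeMounts": [{"name": "data",
                                      "mountPath": "/var/lib/postgresql/data"}],
                }],
            },
        },
        # Each replica gets its own PersistentVolumeClaim, which survives
        # pod restarts and rescheduling -- the heart of "stateful".
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {"accessModes": ["ReadWriteOnce"],
                     "resources": {"requests": {"storage": "10Gi"}}},
        }],
    },
}

print(yaml.safe_dump(statefulset, sort_keys=False))
```
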

Register today to save your place on August 20th. This is the 3rd installment of our Kubernetes in the Cloud webcast series. Kubernetes in the Cloud (Part 1) and Kubernetes in the Cloud (Part 2) are available on demand. I encourage you to check them out for great information and demonstrations on Kubernetes.

[1] https://eng.uber.com/dockerizing-mysql/

[2] https://twitter.com/kelseyhightower/status/963413508300812295

Join SNIA at Pure//Accelerate 2019: Austin, September 15-18

Richelle Ahlvers

Jul 23, 2019


Equal parts education, information, and inspiration, Pure//Accelerate 2019 is where technology and innovation meet. It’s a place to learn about new products, solutions, and integrations, and a place for technology enthusiasts to explore industry trends, network with like-minded companies, and map out how to stay ahead as the tech landscape rapidly changes.

SNIA Board Member and Chair of the Scalable Storage Management Technical Work Group Richelle Ahlvers will be joining SNIA Storage Management Initiative Board Member “Barkz” at Pure//Accelerate on Wednesday, September 18, 2019 from 2:00 p.m. – 2:45 p.m. for a presentation titled “Reel It In: SNIA Swordfish™ Scalable Storage Management.”

By extending the DMTF Redfish® API protocol and schema, SNIA Swordfish™ helps provide a unified approach for the management of storage equipment, data services, and servers. Learn how Pure Storage is using the Swordfish RESTful interface to support the implementation of fast, efficient storage products.
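
As a rough illustration of what a Swordfish client interaction can look like, here is a sketch that walks a Redfish/Swordfish service root over HTTP; the host, credentials, and fallback path are assumptions, and a real client should follow the links the service advertises rather than hard-coding paths.

```python
# Sketch: list storage services from a Redfish/Swordfish endpoint.
# Host, credentials, and paths are illustrative. Requires: pip install requests
import requests

BASE = "https://storage.example.com"
auth = ("admin", "password")

# Every Redfish (and thus Swordfish) service exposes a root at /redfish/v1.
root = requests.get(f"{BASE}/redfish/v1", auth=auth, verify=False).json()

# Swordfish extends the model with storage-centric collections; follow the
# link advertised by the service root when it is present.
link = root.get("StorageServices", {}).get("@odata.id",
                                           "/redfish/v1/StorageServices")
services = requests.get(f"{BASE}{link}", auth=auth, verify=False).json()

for member in services.get("Members", []):
    print(member["@odata.id"])  # e.g. /redfish/v1/StorageServices/1
```
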

Take advantage of special pricing for SNIA members. Register here.

The Blurred Lines of Memory and Storage – A Q&A

John Kim

Jul 22, 2019

The lines are blurring as new memory technologies challenge the way we build and use storage to meet application demands. That’s why the SNIA Networking Storage Forum (NSF) hosted a “Memory Pod” webcast in our series, “Everything You Wanted to Know about Storage, but were too Proud to Ask.” If you missed it, you can watch it on-demand here along with the presentation slides. We promised to answer the questions raised during the live event; here they are.

Q. Do tools exist to do secure data overwrite for security purposes?

A. The most popular tools use cryptographic signing of the data, where you can effectively erase the data by throwing away the keys. A number of technologies are available; for example, BitLocker (part of Windows 10) can tie an NVDIMM-P to a specific motherboard. There are others where the data is encrypted as it is moved from NVDIMM DRAM to flash for the NVDIMM-N type. Other forms of persistent memory may offer their own solutions. SNIA is working on a security model for persistent memory, and there is a presentation on our work here.

Q. Do you need to do any modification on the OS or application to support Direct Access (DAX)?

A. No, DAX is a feature of the OS (both Windows and Linux support it). DAX enables direct access to files stored in persistent memory or on a block device. Without DAX support in a file system, the page cache is generally used to buffer reads and writes to files, and DAX avoids that extra copy operation by performing reads and writes directly to the storage device.

Q. What is the holdup on finalizing the NVDIMM-P standard? Timeline?

A. The DDR5 NVDIMM-P standard is under development.

Q. Do you have a webcast on persistent memory (PM) hardware too?

A. Yes. The snia.org website has an educational library with over 2,000 educational assets. You can search for material on any storage-related topic; for instance, a search on persistent memory will get you all the presentations about persistent memory.

Q. Must persistent memory have Data Loss Protection (DLP)?

A. Since it’s persistent, the relevant kind of DLP is the kind that applies to other classes of storage. This presentation on the SNIA Persistent Memory Security Threat Model covers some of this.

Q. Traditional SSDs are subject to “long tail” latencies, especially as SSDs fill and writes must be preceded by erasures. Is this “long tail” issue reduced or avoided in persistent memory?

A. Because PM is byte addressable and doesn’t require large block erasures, the flash kind of long-tail latencies will be avoided. However, there are a number of proposed technologies for PM, and the read and write latencies and any possible long-tail “stutters” will depend on their characteristics.

Q. Does PM have any Write Amplification Factor (WAF) issues similar to SSDs?

A. The write amplification (WA) associated with non-volatile memory (NVM) technologies comes from two sources:
  1. When the NVM material cannot be modified in place but requires some type of “erase before write” mechanism where the erasure domain (in bytes) is larger than the writes from the host to that domain.
  2. When the atomic unit of data placement on the NVM is larger than the size of incoming writes. Note the term used to denote this atomic unit can differ but is often referred to as a page or sector.
NVM technologies like the NAND used in SSDs suffer from both sources 1 and 2. This leads to very high write amplification under certain workloads, the worst being small random writes. It can also require over-provisioning; that is, requiring more NVM internally than is exposed to the user externally. Persistent memory technologies (for example, Intel’s 3D XPoint) suffer only from source 2 and can in theory incur WA when the writes are small. The severity of the write amplification depends on how the memory controller interacts with the media. For example, current PM technologies are generally accessed over a DDR4 channel by an x86 processor. x86 processors send 64 bytes at a time down to a memory controller, and can send more in certain cases (e.g. interleaving, multiple-channel parallel writes, etc.). This makes WA far more complex to account for than a simplistic random byte-write model, or than writing to a block device.
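
To make source 2 concrete, here is a back-of-envelope sketch; the 256-byte media write unit is an assumed figure for illustration, not a property of any particular product.

```python
# Back-of-envelope write amplification from source 2: the atomic placement
# unit on the media is larger than the host write. The 256-byte unit below
# is an assumption purely for illustration.
def write_amplification(host_write_bytes: int, media_unit_bytes: int) -> float:
    """Bytes the media must write divided by bytes the host asked to write."""
    units_touched = -(-host_write_bytes // media_unit_bytes)  # ceiling division
    return units_touched * media_unit_bytes / host_write_bytes

# A 64-byte cache-line store into a 256-byte unit rewrites the whole unit:
print(write_amplification(64, 256))    # 4.0
# Writes aligned to the unit size incur no amplification from this source:
print(write_amplification(4096, 256))  # 1.0
```
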
Q. Persistent memory can provide faster access in comparison to NAND flash, but the cost is higher. What do you think about the usability of this technology in the future?

A. Very good. See the presentation “MRAM, XPoint, ReRAM PM Fuel to Propel Tomorrow’s Computing Advances” by analysts Tom Coughlin and Jim Handy for an in-depth treatment.

Q. Does PM have a ‘lifespan’ similar to SSDs (e.g. 3 years with heavy writes, 5 years)?

A. Yes, but it will vary by device technology and manufacturer. We expect the endurance to be very high; comparable to or better than the best of flash technologies.

Q. What is the performance difference between a fast SSD and “PM as DAX”?

A. As you might expect us to say: it depends. PM via DAX is meant as a bridge to using PM natively, but you might expect improved performance from PM compared with a flash-based SSD over NVMe, as the latency of PM is much lower than that of flash; microseconds as opposed to low milliseconds.

Q. Does DAX work the same as SSDs?

A. No, but it is similar. DAX enables efficient block operations on PM, similar to block operations on an SSD.

Q. Do we have any security challenges with PM?

A. Yes, and JEDEC is addressing them. Also see the Security Threat Model presentation here.

Q. On the presentation slide of what is or is not persistent memory, are you saying that in order for something to be PM it must follow the SNIA persistent memory programming model? If it doesn’t follow that model, what is it?

A. No, the model is a way of consuming this new technology. PM is anything that looks like memory (it is byte addressable via CPU load and store operations) and is persistent (it doesn’t require any external power source to retain information).

Q. DRAM is basically a capacitor. Without power, the capacitor discharges, so the data is volatile. What exactly is persistent memory? Does it store data inside DRAM, or does it use flash to store data?

A. The presentation discusses two types of NVDIMM: one is based on DRAM with a flash backup that provides the persistence (that is NVDIMM-N), and the other is based on PM technologies (that is NVDIMM-P) that are themselves persistent, unlike DRAM.

Q. Slide 15: if persistent memory is fast and can appear as byte-addressable memory to applications, why bother with PM needing to be block addressed like disks?

A. Because it’s going to be much easier to support applications from day one if PM can be consumed like very fast disks. Eventually, we expect PM to be consumed directly by applications, but that will require them to be upgraded to take advantage of it.

Q. Can you please elaborate on byte and block addressable?

A. Block addressable is the way we do I/O; that is, data is read and written in large blocks of data, typically 4KB in size. Disk interfaces like SCSI or NVMe take commands to read and write these blocks of data to the external device by transferring the data to and from CPU memory, normally DRAM. Byte addressable means that we’re not doing any I/O at all; the CPU instructions for loading and storing fast registers from memory are used directly on PM. This removes an entire software stack for doing the I/O, and means we can efficiently work on much smaller units of data, down to the byte, as opposed to the fixed 4KB demanded by I/O interfaces. The sketch below contrasts the two access styles. You can learn more in our presentation “File vs. Block vs. Object Storage.”
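
Here is a small sketch contrasting the two styles, using an ordinary memory-mapped file as a stand-in for PM; the mount point mentioned in the comments is a hypothetical DAX-mounted file system, and the offsets are arbitrary.

```python
# Sketch: block-style I/O vs byte-addressable access. An ordinary mmap'd
# file stands in for PM; on real PM the file would live on a DAX-mounted
# file system (e.g. a hypothetical /mnt/pmem0) so accesses reach the media
# without the page cache in between.
import mmap
import os

path = "demo.dat"  # illustrative; imagine /mnt/pmem0/demo.dat on PM
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)

# Block addressable: the I/O stack moves whole 4 KiB blocks per request.
os.pwrite(fd, b"\x00" * 4096, 0)   # one write = one full-block transfer
block = os.pread(fd, 4096, 0)      # one read = one full-block transfer

# Byte addressable: map the file and use plain loads/stores on single
# bytes; no per-access system call or 4 KiB transfer is involved.
buf = mmap.mmap(fd, 4096, mmap.MAP_SHARED)
buf[17] = 0x42                     # store one byte
value = buf[17]                    # load one byte
buf.close()
os.close(fd)
```
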
There are now 10 installments of the “Too Proud to Ask” webcast series. If you have an idea for an “Everything You Wanted to Know about Storage, but were too Proud to Ask” presentation, please comment on this blog and the NSF team will put it up for consideration.

Your Questions Answered - Now You Can Be a Part of the Real World Workload Revolution!

Marty Foltyn

Jul 17, 2019


The SNIA Solid State Storage Initiative would like to thank everyone who attended our webcast: How To Be Part of the Real World Workload Revolution.  If you haven’t seen it yet, you can view the on demand version here.  You can find the slides here.

Eden Kim and Jim Fister led a discussion on the testmyworkload (TMW) tool and data repository, discussing how a collection of real-world workload data captures can revolutionize design and configuration of hardware, software and systems for the industry.   A new SNIA white paper available in both English and Chinese authored by Eden Kim, with an introduction by Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis, discusses how we can all benefit by sharing traces of our digital workloads through the SNIA SSSI Real-World Workload Capture program.

In an environment where workloads are becoming more complex -- and the choices of hardware configuration for solid-state storage are growing -- the opportunity to better understand the characteristics of data transfers to and from the storage systems is critical.  By sharing real-world workloads on the Test My Workload repository, the industry can benefit overall in design and development at every level from SSD development to system configuration in the datacenter.

There were several questions asked in and after the webcast.  Here are some of the answers.  Any additional questions can be addressed to asksssi@snia.org.

Q: Shouldn't real world workloads have concurrent applications?  Also, wouldn’t any SQL workloads also log or journal sequential writes?

A: Yes.  Each capture shows all of the IO Streams that are being applied to each logical storage recognized by the OS.  These IO Streams are comprised of IOs generated by System activities as well as a variety of drivers, applications and OS activities.  The IOProfiler toolset allows you to not only see all of the IO Stream activity that occurs during a capture, but also allows you to parse, or filter, the capture to see just the IO Streams (and other metrics) that are of interest.

Q: Is there any collaboration with the SNIA IOTTA Technical Work Group on workload or trace uploading?

A:  While IOTTA TWG and SSS TWG work closely together, an IO Capture is fundamentally different from an IO Trace and hence is not able to be presented on the IOTTA trace repository.  An IO Trace collects all of the data streams that occur during the IO Trace capture period and results in a very large file.  An IO Capture, on the other hand, captures statistics on the observed IO Streams and saves these statistics to a table.  Hence, no actual personal or user data is captured in an IO Capture, only the statistics on the IO Streams. Because IO Captures are a series of record tables for individual time steps, the format is not compatible with a repository for the streaming data captured in an IO Trace.

For example, an IO Trace could do a capture where 50,000 RND 4K Writes and 50,000 RND 4K Reads are recorded, resulting in 100,000 4K transfers, or roughly 400 MB of data. An IO Capture that collects statistics, on the other hand, would simply log the fact that 50,000 RND 4K Writes and 50,000 RND 4K Reads occurred: a two-item entry in a table. Of course, the IOPS, Response Times, Queue Depths and LBA Ranges could also be tracked, resulting in a table of 100,000 entries times the above 4 metrics, but 400,000 table entries is still far smaller than 400 MB of data. The sketch below runs the arithmetic.
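
Here is that arithmetic as a quick sketch; the bytes-per-table-entry figure is our assumption, purely to give a sense of scale.

```python
# Back-of-envelope: an IO Trace stores the transferred data, while an IO
# Capture stores a statistics table. The 16-byte entry size is an assumed
# figure for scale, not a Calypso format detail.
io_count = 100_000            # 50k RND 4K writes + 50k RND 4K reads
block_bytes = 4096

trace_bytes = io_count * block_bytes        # data the trace must keep
capture_entries = io_count * 4              # IOPS, latency, QD, LBA range
capture_bytes = capture_entries * 16        # assumed bytes per table entry

print(f"IO Trace:   ~{trace_bytes / 1e6:.0f} MB")    # ~410 MB
print(f"IO Capture: ~{capture_bytes / 1e6:.1f} MB")  # ~6.4 MB
```
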

Both of these activities are useful, and the SNIA supports both.

Q: Can the traces capture a cluster workload or just single server?

A: IO Captures capture the IO Streams that are observed going from User space to all logical storage recognized by the OS.  Accordingly, for clusters, there will be an individual capture for each logical unit.  Note that all logical device captures can be aggregated into a single capture for analysis with the advanced analytics offered by the commercial IOProfiler tools.

Q: Have you seen situations where the IO size on the wire does not match what the application requested? For example, the application requests 256K but the driver chops the IO into multiple 16K IOs before sending them to storage. How would we verify this type of issue?

A: Yes, this is a common situation. Applications may generate a large-block SEQ IO Stream, for example for video on demand. However, that large-block SEQ IO Stream is often fragmented into concurrent RND block sizes. For example, in Linux, a 1MB file is often fragmented into random concurrent 128K block sizes for transmission to and from storage, and then coalesced back into a single 1024K block size in user space.

Q: Will you be sharing the costs for your tools or systems?

A: The tool demonstrated in the webcast is available free at testmyworkload.com (TMW).  This is done to build the repository of workloads at the TMW site.  Calypso Systems does have a set of Pro tools built around the TMW application.  Contact Calypso for specific details.

Q: Can the capture be replayed on different drives?

A: Yes.  In fact, this is one of the reasons that the tool was created.  The tool and repository of workloads are intended to be used as a way to compare drive and system performance, as well as tune software for real-world conditions.

Q: How are you tracking compressibility & duplication if the user does not turn on compression or dedupe?

A: The user must turn on compression or deduplication tracking at the beginning of the capture to see these metrics.

Q: An end user can readily use this to see what their real world workload looks like.  But, how could an SSD vendor mimic the real world workload or get a more "realworld-like" workload for use in common benchmarking tools like FIO & Sysbench?

A: The benchmarking tools mentioned generate synthetic workloads and write a predictable stream to and from the drive. IO Captures ideally are run as a replay test that recreates the sequence of changing IO Stream combinations and Queue Depths observed during the capture. While the Calypso toolset can do this automatically, free benchmark tools like fio and sysbench may not be able to change QDs and IO Stream combinations from step to step in a test script. However, the IO Capture also provides a cumulative workload that lists the dominant IO Streams and their percentage of occurrence. This list of dominant IO Streams can be used with fio or sysbench to create a synthetic composite IO stream workload, as sketched below.
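
As one way to do that last step, here is a sketch that turns a hypothetical dominant-stream list into an fio job file; the device path, stream mix, and total IOPS budget are all assumptions.

```python
# Sketch: build a composite fio job file from a capture's dominant IO
# streams. The stream mix, device path, and IOPS budget are assumptions,
# not output from an actual capture.
TOTAL_IOPS = 10_000
dominant_streams = [
    # (fio rw pattern, block size, observed share of IOs)
    ("randwrite", "4k", 0.5),
    ("randread", "4k", 0.3),
    ("write", "128k", 0.2),
]

lines = ["[global]", "filename=/dev/nvme0n1", "ioengine=libaio", "direct=1",
         "time_based=1", "runtime=120", "group_reporting=1", ""]
for i, (pattern, bs, share) in enumerate(dominant_streams):
    lines += [f"[stream{i}]",
              f"rw={pattern}",
              f"bs={bs}",
              # cap each concurrent job's IOPS in proportion to its share
              f"rate_iops={int(share * TOTAL_IOPS)}",
              ""]

with open("composite.fio", "w") as f:
    f.write("\n".join(lines))
print("wrote composite.fio; run with: fio composite.fio")
```
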

Q: Is it possible to use the tool to track CPU State such as IOWAIT or AWAIT based on the various streams?

A: Yes, IO Captures contain statistics on CPU usage such as CPU System Usage %, CPU IO Wait, CPU User usage, etc.

Q: Can we get more explanation of demand intensity and comparison to queue depth?

A: Demand Intensity (DI) refers to the number of outstanding IOs at a given level of the software/hardware stack. It may be expressed simply as the outstanding Queue Depth (QD), or as a combination of outstanding Thread Count (TC) and QD. The relevance of TC depends on where in the stack you are measuring the DI. QD varies from level to level and depends on what each layer of abstraction is doing. Usually, attention is focused on the IO Scheduler and the total outstanding IOs at the block IO level. Regardless of nomenclature, it is important to understand the DI as your workload traverses the IO stack and to be able to minimize bottlenecks due to high DI.

Q: Do these RWSW application traces include non-media command percentages, such as identify and read log page (SMART), sleep states, etc.? Depending on the storage interface and firmware, these can adversely affect performance/QoS.

A: IO Capture metrics are the IO Streams at the logical storage level and thus do not include protocol-level commands. Non-performance IO commands such as TRIMs can be recorded, and SMART logs can be tracked if access to the physical storage is provided.

Q: Isn't latency a key performance metric for these workloads? Collecting only a 2-minute burst might not show latency anomalies.

A: IO Captures average the statistics over a selected time window. Each individual IO Stream and its metrics are recorded and tabulated in a table, but the time-window average is what is displayed on the IO Stream map. The min and max Response Times over the 2-minute window are displayed, but the individual IO latencies are not. In order to track IO Bursts, the time window resolution should be set to a narrow range, such as 100 ms or less, in order to distinguish IO Bursts from Host Idle times.
