Continuous Delivery: Cloud Software Development on Speed

Alex McDonald

Mar 23, 2021

It happens with more frequency these days. Two companies merge, and the IT departments breathe a small sigh of relief as they learn that they both use the same infrastructure software, though one is on-premises and one is in the cloud. Their relief slowly dissolves as they discover that the cloud-provisioned workers are using features in the software that have yet to be integrated into the on-prem version. Now both sides have to adapt, and it seems that no one is happy. So, what’s the best way to get these versions in sync?

A Continuous Delivery model is increasingly being adopted to put software development on a pace that keeps up with business demands. The Continuous Delivery model results in a development organization that looks much like a modern manufacturing operation, with effective workers, modern machines, and just-in-time inventory. Even large software companies are starting to embrace this cloud delivery methodology to create a continuous stream of new revisions.

On April 20, 2021, the SNIA Cloud Storage Technologies Initiative will explore why Continuous Delivery is a valuable addition to the software development toolbox at our live webcast “Continuous Delivery: Cloud Software Development on Speed.” By adapting some of the principles of modern manufacturing to software development, a Continuous Delivery methodology keeps the product’s feature set streamlined while continuously delivering value to the customer via the cloud. Webcast attendees will learn:
  • Structuring development and testing resources for Continuous Delivery
  • A flexible software planning cycle for driving new features throughout the process
  • A set of simple guidelines for tracking success
  • Ways to ensure new features are delivered before moving to the next plan
Register today. Our expert speakers, Davis Frank (co-creator of the Jasmine Test Framework and former Associate Director at Pivotal Labs) and Glyn Bowden (CTO, AI & Data Practice at HPE), will be on hand to answer your questions.

Q&A from “SAS 201: An Introduction to Enterprise Features” Webcast

STA Forum

Mar 10, 2021

Questions from SAS 201 Webinar Answered

In an effort to provide ongoing educational content to the industry, the SCSI Trade Association (STA) tackled the basics of Serial Attached SCSI (SAS) in a webinar titled “SAS 201: An Introduction to Enterprise Features,” now available on the STA YouTube channel here. Immediately following their presentations, our experts Rick Kutcipal of Broadcom and Tim Symons of Microchip Technology held a Q&A session. In this blog, we’ve captured the questions asked and answers given to provide insight into the recent evolutions in SAS enterprise features, including 24G SAS, and what they mean to system designers.

Q1. Do you think 24G SAS will only be applicable to SSDs, and HDDs will remain at 12Gb/s?

A1. Rick: At this time, hard disk drives (HDDs) can’t take advantage of the bandwidth that’s available in 24G SAS. Right now the technology itself is focused on the backbone and on solid-state drive (SSD) connectivity. Currently, that’s the way we see it shaping up.

Tim: If we go back about eight years or so, someone asked me the same type of question when we went from 3Gb/s SAS to 6Gb/s SAS, and the answer was “the platters don’t get data off that quickly.” Well, look where we are now.

Q2. How does SAS-4 deal with the higher-value AC blocking capacitor specified in U.3?

A2. Tim: This is really getting into the details. U.3 allows you to interconnect SAS devices and PCIe devices in a similar backplane environment. All SAS devices are AC coupled, so you’ve got a capacitor that sits between the transmitter and receiver, and its value differs between technologies. What we did for SAS, as is common for a lot of receivers, was de-rate the AC blocking capacitor values. This does not have a significant effect on the signal, so we’re able to accommodate multiple technologies changing the AC capacitor value without a significant change in error correction. If you look at the U.3 specification, you’ll see a slightly different capacitor value than is specified for SAS, but that value has been endorsed by SAS and has no impact on it.

Q3. To achieve an 18-inch trace on the backplane plus 6-meter cables, what budget was assigned to the host adapter and the media?

A3. Tim: In SAS, we don’t assign particular budgets to particular parts of the subsystem. In the back of the SAS specification you’ll find an example in the appendix: “What if we had 4.0 dB loss in the disk drive and 2.5 to 3.0 dB loss on the host before we got to the cable?” But those are just examples, not requirements. Essentially, the channel end to end, from transmitter to receiver, is a 30 dB loss channel, and how you use that is up to you. Sometimes, when the disk drive is very close to your host, you may choose to spend that budget on a lower-cost material and have a 30 dB loss channel in a 12-inch connection. SAS is very flexible in that way, so we don’t assign a specific budget to any specific portion of the channel.
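Editorial note: the budget arithmetic above is simple enough to check mechanically. The sketch below uses the appendix-style example figures quoted in the answer; the helper name and the segment breakdown are our own illustration, not values from the specification.

```python
# Hypothetical illustration of a SAS-4 channel loss budget check.
# The 30 dB end-to-end figure and the drive/host example losses come
# from the answer above; the breakdown itself is illustrative only.

SAS4_CHANNEL_BUDGET_DB = 30.0  # end-to-end transmitter-to-receiver budget

def fits_budget(segment_losses_db: dict[str, float],
                budget_db: float = SAS4_CHANNEL_BUDGET_DB) -> bool:
    """Return True if the summed segment losses fit within the channel budget."""
    total = sum(segment_losses_db.values())
    print(f"total loss: {total:.1f} dB of {budget_db:.1f} dB budget")
    return total <= budget_db

# Example allocation in the spirit of the spec's appendix example:
channel = {
    "disk drive": 4.0,       # example loss inside the target device
    "host adapter": 3.0,     # example loss before reaching the cable
    "backplane trace": 5.0,  # illustrative value for an 18-inch trace
    "cable": 17.0,           # whatever remains for the 6 m cable run
}
print(fits_budget(channel))  # total 29.0 dB -> True, inside the budget
```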
Q4. How do you see 24G SAS and x4 NVMe Gen 5 drives co-existing?

A4. Tim: Speaking from the 24G side of an array, disk drives themselves can have multiple links on them. The x4 nature of an NVMe drive just gives you more bandwidth on a Gen 5 system. Gen 5 is 32 Gb/s on NVMe, whereas in SAS we’re looking at 24 Gb/s, and it’s quite reasonable technically to add x4 links. So, the technology and the bandwidth are pretty similar between the two.

Rick: One thing Tim just went through was some of the investments and improvements we have made in 24G SAS to account for these higher data rates, so there will be a difference in the reliability of those particular devices. In general, they will coexist, probably targeting slightly different market spaces.

Q5. In large Gen 2 or Gen 3 SAS domains, the end devices can suffer response-time issues due to retiming delays introduced by the expanders in the path. Is Gen 4 looking at ways to reduce these delays?

A5. Tim: That’s a great question about fairness enhancements. The observation is that as you daisy-chain expanders, when a transaction finishes, the first expander tells all its attached devices, “Hey, you’ve got some available bandwidth.” What can happen in a heavily loaded, congested system is that the device closest to the host gets serviced first. So, what we did in SAS-4, and SPL-4 specifically, and beyond, was add fairness enhancements: rather than simply noting that a device is waiting for available bandwidth, each request carries an age. That ensures that it doesn’t matter where you are in the infrastructure; you will get a fair crack at bandwidth as it frees up. That is a change from Gen 3 to Gen 4, and it becomes more important at higher performance levels because you’re attaching more devices and sharing that bandwidth among them.

Q6. Could you explain a little bit more about the need for Forward Error Correction?

A6. Tim: At 12Gb/s SAS and previous generations, 6Gb/s and earlier, transmission from transmitter to receiver, including through a PCB, was affected mainly by crosstalk and reflections, whereas at the higher frequencies of 24G we get more disruption in the channel. Without forward error correction, we would be down to one-third the cable lengths and one-third the PCB trace lengths, we would need quite exotic materials, Megtron 10 and beyond, and we would probably have to change the interconnect to all the disk drives as well to support 24G SAS. It would have been quite a disruption. We needed a technology that could guarantee the delivery of valid, uncorrupted data, so we turned to forward error correction. It has been proven in other technologies, such as Ethernet, which has had it for quite a while. So, we weren’t reinventing the wheel; we took a concept successfully used in other technologies and applied it to SAS. As a result, we were able to keep latency low and channel costs down, and continue to support the same ecosystem requirements of six-meter cables and backplanes.
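Editorial aside: the principle behind forward error correction, redundant parity bits that let a receiver locate and repair a corrupted bit without retransmission, can be shown with a classic Hamming(7,4) code in a few lines of Python. This is only a toy; 24G SAS defines its own, stronger FEC, and none of this code comes from the SAS specification.

```python
# Toy Hamming(7,4) forward error correction: 4 data bits are encoded
# into 7 bits, and any single flipped bit can be located and repaired.
# Illustrates the principle only; 24G SAS uses a different, stronger code.

def encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    # recompute parity; the syndrome is the 1-based index of the bad bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # non-zero -> correct the flipped bit
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # extract the original data bits

word = [1, 0, 1, 1]
received = encode(word)
received[4] ^= 1                      # channel corrupts one bit in flight
assert decode(received) == word       # receiver repairs it, no retransmit
```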
Q7. What is the outlook for HDDs, given the ongoing acceptance of SSDs in the enterprise?

A7. Rick: In my section of the presentation, I did talk a lot about HDDs, and SSDs are gaining quite a bit of market share. However, they serve different needs. In my examples, I showed warm and cold tiers of storage and how important dollar-per-gigabyte is in those particular applications, and that’s all serviced by HDDs today. A lot of the innovations we talked about during the presentation are optimizing for capacity. Right now there is still a sizeable gap in dollar-per-gigabyte between equivalent SSDs and HDDs. Will it always be that way? Probably not. But for the foreseeable future, HDDs are going to play a very important role in enterprise storage.

Tim: When comparing HDDs and SSDs, we talked about warm, cold, intermediate, and hot storage. Rotating media delivers one performance level; SSDs deliver a slightly different one, and this is why we’re seeing NVMe work hand in hand with SAS. They don’t replace each other, because they have different performance characteristics. Disk drives are still, by a long way, the best cost-per-gigabyte (or cost-per-terabyte) storage, and large cold-storage systems require that.

Q8. In terms of scalability, how large a topology is possible?

A8. Rick: For SAS, it’s some unrealistic number like 64K devices. More practical cases are limited by the routing tables in the expanders, where we’re seeing just north of a thousand connected devices.

Tim: You may break the topology into segments so you have total accessibility to tens of thousands of drives, but you really only want hundreds, up to a thousand, per regional zone just to preserve your bandwidth.

Q9. Can you comment on the performance implications of Shingled Magnetic Recording?

A9. Rick: During the presentation, I talked about Shingled Magnetic Recording (SMR) and what we’re doing in T10 to support it with the Zoned Block Commands, etc. In the press, SMR has gotten significant feedback on performance. The important part is to understand the type of SMR being used. In the enterprise, it’s all host-managed SMR, meaning the OS or the application manages the zone allocations and the streaming of data to make sure the shingles, the overlapping tracks, are handled correctly. In drive-managed SMR, all of this is managed inside the drive, which can have performance implications, but that technology is not used in the enterprise.
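Editorial aside: the host-managed constraint Rick describes can be modeled in a few lines. Each zone exposes a write pointer, and the host must write sequentially at it. The sketch below is hypothetical, with made-up names and sizes; it is not the T10 Zoned Block Commands interface itself.

```python
# Minimal toy model of a host-managed SMR zone: writes must land
# sequentially at the zone's write pointer, in the spirit of T10 Zoned
# Block Commands. Real ZBC zone sizes and error reporting differ.

class Zone:
    def __init__(self, start_lba: int, size: int):
        self.start = start_lba
        self.size = size
        self.write_pointer = start_lba   # next LBA that may be written

    def write(self, lba: int, blocks: int) -> None:
        if lba != self.write_pointer:
            # a real drive would reject this as an unaligned write
            raise ValueError(f"unaligned write at {lba}, expected {self.write_pointer}")
        if lba + blocks > self.start + self.size:
            raise ValueError("write crosses the zone boundary")
        self.write_pointer += blocks

    def reset(self) -> None:
        """Rewind the zone; host-managed SMR rewrites a zone from the top."""
        self.write_pointer = self.start

zone = Zone(start_lba=0, size=65536)
zone.write(0, 128)      # sequential write at the pointer: accepted
zone.write(128, 128)    # continues at the pointer: accepted
# zone.write(1024, 8)   # would raise: random writes are the host's problem
```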


Q&A from “SAS 101: The Most Widely-Deployed Storage Interface” Webcast

STA Forum

Mar 9, 2021

Questions from SAS 101 Webcast Answered

In an effort to provide ongoing educational content to the industry, the SCSI Trade Association (STA) tackled the basics of Serial Attached SCSI (SAS) in a webinar titled “SAS 101: The Most Widely-Deployed Storage Interface,” now available on demand here. Immediately following their presentations, our experts Jeremiah Tussey of Microchip Technology, current STA Vice President, and former STA board member Jeff Mason of TE Connectivity held a Q&A session. In this blog, we’ve captured the questions asked and answers given to provide the extra insight into why SAS remains the protocol of choice in the data center.

Q1. You mentioned that SAS supports up to 64K devices. Has anyone been able to test a SAS topology of that scale?

A1. Although the technology is capable of supporting 64K devices, realistic implementations are bounded by the routing-table designs within today’s expanders and controllers. Typically you see 1K to 2K attached devices per topology, but many of these topologies run in parallel, so the total number of devices supported can certainly scale toward that level. The practical reality of I/O flow, RAID, and different applications and data-transfer factors means average topologies probably have no more than 2K devices.

Q2. What are the key gaps in NVMe technology that are in place for SAS?

A2. NVMe, from a performance and latency standpoint, is top of its class with its typical shipping x4 interface. However, as we develop and extend our innovations in SAS, the majority of SAS market deployments use rotating media, so inherent scalability and the flexibility to dynamically add a mixture of SAS and SATA targets (SSDs, HDDs, tape, etc.) to new or existing configurations is where SAS topologies excel. SATA deployments generally deliver higher capacities at lower cost, while not quite matching the performance and reliability of SAS deployments intended for more mission-critical applications or balanced workloads. Overall, SAS is a technology for the enterprise and cloud, not for the PC world or the other higher-volume, lower-cost markets where NVMe is becoming the go-to choice.

Q3. What’s the difference at the device level between a SAS SSD and an NVMe SSD?

A3. Overall, on a lane-by-lane basis, SAS is a faster interface in today’s typical applications. NVMe drives are deployed as a x4 interface in most applications, maybe even x2, so they have some native advantages there. But there are capabilities built into SAS drives that have been hardened over many years; for example, SAS supports surprise hot-plug in environments that need to swap drives dynamically. Those features are natively built into SAS, and NVMe drives are making progress there.

Q4. What’s the maximum length of external 24G SAS copper cables?

A4. In general, we’re going to follow the same lengths as past generations: 1.5 meters. Beyond that, you might be looking at active cable technology, but overall that is the safe distance for copper cables.

Q5. What’s the real-world bandwidth of a x4 24G SAS external port?

A5. Overall, the real-world bandwidth of a x4 24G SAS external port is 9.6 GB/s, which is full bandwidth supporting x4 connectivity. There are new cables now that support x8, so that bandwidth can be doubled even further.
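Editorial note: the 9.6 GB/s figure can be sanity-checked with a quick calculation. The sketch below assumes the commonly published 24G SAS parameters, a 22.5 Gbit/s per-lane line rate with 128b/150b encoding, which the answer above does not state explicitly.

```python
# Back-of-the-envelope check of the 9.6 GB/s figure for a x4 24G SAS port,
# assuming a 22.5 Gbit/s per-lane line rate and 128b/150b encoding
# (payload bits per transmitted bits, including FEC overhead).
# Illustrative only; consult the SAS-4 spec for authoritative numbers.

line_rate_gbps = 22.5            # physical line rate per lane, Gbit/s
encoding_efficiency = 128 / 150  # 128b/150b encoding
lanes = 4

payload_gbps_per_lane = line_rate_gbps * encoding_efficiency  # 19.2 Gbit/s
gbytes_per_lane = payload_gbps_per_lane / 8                   # 2.4 GB/s
port_bandwidth = gbytes_per_lane * lanes                      # 9.6 GB/s

print(f"{port_bandwidth:.1f} GB/s")  # -> 9.6 GB/s, matching A5 above
```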
Q6. Is T10 working on the tri-mode U.3 specification?

A6. The way we define our standards, we are focused on SAS technology. The U.3 specification has been standardized through the Small Form Factor (SFF) Technical Work Group, which is now under the SNIA umbrella. We don’t directly drive the requirements for it, but we are certainly open to supporting these types of applications that allow different types of connectivity, including SAS products. STA has several members that contribute to SFF standardization work, typically supporting SAS and SATA inclusion in the standards that directly apply.

Q7. In one of your topology diagrams, you showed a mix of SAS HDDs, SAS SSDs, and SATA HDDs. Could you discuss again why someone would mix drive types like that?

A7. Certainly, it’s really a choice of the implementer, but one idea behind mixing media types is that data center applications require different tiers with different metrics for hot, warm, and cold data. When you need higher performance and lower latency, you typically use SSDs. Depending on your performance and cost requirements, you can use SAS SSDs for the highest performance at an added cost, or SATA SSDs for slightly lower performance at a lower cost point. What you typically see is an overlap across the tiers: SAS SSDs at the top of the line, a mixture of SATA SSDs with SAS or SATA HDDs in a cached JBOD providing a medium, warm-access data platform, and, further down the spectrum, colder data on nearline SATA and SAS HDDs, all the way down to SATA HDDs with SMR. The SMR technology provides serialization and striping of data that gives you the lowest cost per gigabyte. There are even tiers lower than that, including tape and CD technologies, which are certainly part of the ecosystem that SAS infrastructure can support.

Q8. What is SMR?

A8. SMR stands for Shingled Magnetic Recording. It’s a technology that many hard drive manufacturers are deploying today, specifically in cloud data center applications that need the lowest cost per unit of data. It allows tracks on the disk platters to overlap to a degree, so more data fits on each platter. SMR has a specific use case: it requires the data streams to and from the drive to be serialized, so more management is needed from the host, meaning more oversight and control of how data is placed on the drive. It’s not as well suited to random IOPS, but it certainly provides a more compact method of recording data.


Computational Storage in the Real World

Eli Tiomkin

Mar 3, 2021

Computational storage has arrived, with real-world applications meeting the goal of enabling parallel computation and/or alleviating constraints on existing compute, memory, storage, and I/O. The SNIA Computational Storage Special Interest Group has gathered examples of computational storage use cases that demonstrate improvements in application performance and infrastructure efficiency through the integration of compute resources, either directly with storage or between the host and the storage. First up in the SNIA Computational Storage Demo Series are our SIG member companies Eideticom Communications and NGD Systems. Their examples demonstrate proof of computational storage concepts. They also illustrate how SNIA and the Compute, Memory and Storage Initiative (CMSI) member companies are advancing the SNIA Computational Storage Architecture and Programming Model, which defines recommended behavior for hardware and software that supports computational storage.

The NGD Systems use case highlights a Microsoft Azure IoT system running on a computational storage device. The video walks through the steps to establish connections to agents and hubs, then shows how to set up event monitors and perform image analysis, storing images from the web on the computational storage device.
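For readers who want to see what the agent/hub connection step looks like in code, here is a minimal device-side sketch using Microsoft’s azure-iot-device Python SDK. The connection string is a placeholder, and the snippet stands in for that step generically; it is not code from the NGD Systems demo.

```python
# Minimal Azure IoT Hub device client: connect and send one telemetry
# event, standing in for the "connect to agents and hubs" step in the
# demo. Requires the azure-iot-device package; the connection string
# below is a placeholder, not a real credential.
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=<your-hub>.azure-devices.net;DeviceId=<dev>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

msg = Message('{"event": "image_stored", "bytes": 204800}')
msg.content_type = "application/json"
client.send_message(msg)  # appears on the hub as device-to-cloud telemetry

client.shutdown()
```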

The Eideticom Communications use case highlights transparent compression via a stacked file system and an NVMe-based computational storage processor. The video walks through the steps to mount a NoLoad file system and run sysadmin commands that read and write files to disk with compression, illustrating both speed and application transparency.
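The stacked-file-system idea, compress on write, decompress on read, with the application none the wiser, can be sketched in a few lines of Python. This toy uses zlib on the host CPU purely for illustration; in the Eideticom demo, the compression work is offloaded to the NVMe computational storage processor.

```python
# Toy model of transparent compression: the caller uses write_file and
# read_file like ordinary file I/O, and compression happens invisibly
# underneath. In the Eideticom demo, this work is offloaded to an NVMe
# computational storage processor; zlib on the host CPU stands in here.
import zlib

def write_file(path: str, data: bytes) -> float:
    compressed = zlib.compress(data, level=6)
    with open(path, "wb") as f:
        f.write(compressed)
    return len(compressed) / len(data)    # achieved compression ratio

def read_file(path: str) -> bytes:
    with open(path, "rb") as f:
        return zlib.decompress(f.read())  # caller sees the original bytes

payload = b"log line: all systems nominal\n" * 10_000
ratio = write_file("/tmp/demo.z", payload)
assert read_file("/tmp/demo.z") == payload  # application-transparent
print(f"stored at {ratio:.1%} of original size")
```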

We invite you to visit our Computational Storage Use Cases page for these and more examples of real-world computational storage applications. Questions? Send them to askcmsi@snia.org.


Cloud Analytics Drives Airplanes-as-a-Service Business

Jim Fister

Feb 25, 2021

On-demand flying through an app sounds like something for only the rich and famous, yet the use of cloud analytics is making flexible flying a reality at start-up airline KinectAir. On April 7, 2021, the CTO of KinectAir, Ben Howard, will join the SNIA Cloud Storage Technologies Initiative (CSTI) for a fascinating discussion of first-hand experiences leveraging cloud analytics methods to bring competitive, profitable new business models to life. And since start-up companies may not have legacy data and analytics to consider, we’ll also explore what established businesses using traditional analytics methods can learn from this use case.

Join us on April 7th for our live webcast “Adapting Cloud Analytics for Practical Business Use” for views from both start-up and established companies on how to revisit the analytics decision process, with a discussion on:
  • How to build and take advantage of a data ecosystem
  • Overcoming challenges and roadblocks
  • How to use cloud resources in unique ways to accomplish business and engineering goals
  • Considerations for business requirements and developing technical metrics
  • Thoughts on when to start new vs. adapt existing analytics processes
  • Real-world examples of cloud analytics and AI
Register today. Our panelists will be on hand to answer questions. We hope to see you there.


SMI-S Storage Management Quick Start Guide Series Kicks-Off

Mike Walker

Feb 24, 2021

Twenty-year SNIA veteran Mike Walker has created a series of videos titled “SMI-S Quick Start Guides” that shows developers using the SMI-S storage management specification how to find useful information in an SMI-S server with the Python-based PyWBEM open-source tool.

“Using the PyWBEM tool, I created a set of mock SMI-S 1.8 servers which I have shared with the world on GitHub,” said Walker. “I also created a set of PDFs called ‘Quick Start Guides’ and a series of videos demonstrating some of the most recent capabilities of the SMI-S 1.8 specification. Storage equipment vendors and management software vendors seeking to address the day-to-day tasks of the IT environment can use this information to work with SMI-S 1.8.”

The first two videos of this series now available on the SNIA Video YouTube channel are listed below. Be sure to check back or subscribe to the SNIA Video YouTube channel for future video installments.

• A short trailer explaining the content you can expect to see in the series here.
• A SNIA SMI-S Storage Management Spec. Mockups, Installation and Setup video here.

The Quick Start Guide PDFs and a set of mock WBEM servers supporting SMI-S 1.8 storage management can be found on GitHub here. You can also learn more about PyWBEM here.
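To give a flavor of what talking to one of these servers with PyWBEM looks like, here is a minimal sketch. The URL and credentials are placeholders, and the snippet is our illustration rather than code from the Quick Start Guides; if the mock servers from the repo are loaded in-process (for example, via PyWBEM’s pywbem_mock module), they answer the same enumeration calls.

```python
# Minimal PyWBEM sketch: connect to a WBEM/SMI-S server and list the
# registered SMI-S profiles it advertises. The URL and credentials are
# placeholders; point them at a real or mock SMI-S server.
import pywbem

conn = pywbem.WBEMConnection(
    "http://localhost:15988",        # placeholder server address
    ("user", "password"),            # placeholder credentials
    default_namespace="interop",     # SMI-S profile registration lives here
)

# Enumerate the advertised management profiles (standard SMI-S discovery).
for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
    print(profile["RegisteredName"], profile["RegisteredVersion"])
```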

About the SMI-S Storage Management Specification

SMI-S was first approved as an ISO standard in 2002. Today, it has been implemented in over 1,350 storage products that provide access to common storage management functions and features.
During its lifetime, several versions of the SMI-S standard have been approved by ISO. The current international standard for SMI-S was based on SMI-S v1.5, which was completed in 2011, submitted for ISO approval in 2012, and formally adopted in 2014 as the latest revision of ISO/IEC 24775.

SMI-S 1.8 rev 5 was sent to ISO as an update to ISO/IEC 24775 and is expected to become an internationally recognized standard in the first half of 2021. SMI-S 1.8 rev 5 is the recommended and final version of the specification as no further updates are planned.

Subscribe to the SNIA Matters Newsletter here to stay up-to-date on all SNIA announcements and be one of the first to learn the ISO approval status of the SMI-S 1.8 rev 5 storage specification.
