
Another Great Storage Debate: Hyperconverged vs. Disaggregated vs. Centralized

David McIntyre

Mar 26, 2021


The SNIA Networking Storage Forum’s “Great Storage Debate” webcast series is back! This time, SNIA experts will be discussing the ongoing evolution of the data center, in particular how storage is allocated and managed. There are three competing visions about how storage should be done: Hyperconverged Infrastructure (HCI), Disaggregated Storage, and Centralized Storage. Join us on May 4, 2021 for our live webcast Great Storage Debate: Hyperconverged vs. Disaggregated vs. Centralized.

IT architects, storage vendors, and industry analysts argue constantly over which is the best approach and even the exact definition of each. Isn’t Hyperconverged constrained? Is Disaggregated designed only for large cloud service providers? Is Centralized storage only for legacy applications?

Tune in to debate these questions and more:  

  • What is the difference between centralized, hyperconverged, and disaggregated infrastructure, when it comes to storage?
  • Where does the storage controller or storage intelligence live in each?
  • How and where can the storage capacity and intelligence be distributed?
  • What is the difference between distributing the compute or application and distributing the storage?
  • What is the role of a JBOF or EBOF (Just a Bunch of Flash or Ethernet Bunch of Flash) in these storage models?
  • What are the implications for data center, cloud, and edge?  
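
One common way to frame the first two bullets is by where the capacity and the storage controller ("intelligence") live in each model. As a rough, illustrative summary only (the webcast exists precisely because these definitions are debated, so treat these one-liners as assumptions rather than settled answers):

```python
# Illustrative summary only; the exact definitions are the subject of
# the debate. The wording below is an assumption, not SNIA's position.
STORAGE_MODELS = {
    "centralized": "capacity and intelligence in a dedicated array/SAN, "
                   "shared by many servers",
    "hyperconverged": "capacity and intelligence distributed across the "
                      "same nodes that run compute",
    "disaggregated": "capacity pooled separately (e.g., JBOF/EBOF) and "
                     "accessed over a fabric, with intelligence on hosts "
                     "or in the fabric",
}

def describe(model):
    """Return the one-line characterization of a storage model."""
    return STORAGE_MODELS[model]
```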

Register today as leading storage minds converge to argue the definitions and merits of where to put the storage and storage intelligence.

For anyone not familiar with the Great Storage Debates, it's important to note that this series isn't about winners and losers; it's about providing essential compare-and-contrast information between similar technologies. We won't settle any arguments as to which is better, but we will debate the arguments, point out advantages and disadvantages, and make the case for specific use cases.

To date, the SNIA NSF has hosted several great storage debates, including: File vs. Block vs. Object Storage, Fibre Channel vs. iSCSI, FCoE vs. iSCSI vs. iSER, RoCE vs. iWARP, and Centralized vs. Distributed. You can view them all on our SNIAVideo YouTube Channel.



Q&A on the Ethics of AI

Jim Fister

Mar 25, 2021

Earlier this month, the SNIA Cloud Storage Technologies Initiative (CSTI) hosted an intriguing discussion on the Ethics of Artificial Intelligence (AI). Our experts, Rob Enderle, Founder of The Enderle Group, and Eric Hibbard, Chair of the SNIA Security Technical Work Group, shared their experiences and insights on what it takes to keep AI ethical. If you missed the live event, it is available on-demand along with the presentation slides at the SNIA Educational Library. As promised during the live event, our experts have provided written answers to the questions from this session, many of which we did not have time to get to.

Q. The webcast cited a few areas where AI as an attacker could make a potential cyber breach worse. Are there also areas where AI as a defender could make cybersecurity or general welfare more dangerous for humans?

A. Indeed, we addressed several scenarios where an AI operating at the speed of thought reacts much faster than a human could. One area we didn't address is the impact of AI on general cybersecurity. Phishing attacks using AI are getting more sophisticated, and an AI that can compromise systems with cameras or microphones can pick up significant amounts of information from users. As we continue to automate responses to attacks, there could be situations where an attacker is misidentified and an innocent person is charged by mistake. AI operates at large scale, sometimes making decisions on data in ways that are not apparent to humans looking at the same data. This might cause an issue where an AI believes a human is in the wrong in ways that we could not otherwise see. An AI might also overreact to an attack: for instance, noticing an attempt to hack into a company's infrastructure and shutting that infrastructure down in an abundance of caution could leave workers with no power, lights, or air conditioning. Some water-cooling systems will burst if shut down suddenly, which could cause both safety problems and severe damage.

Q. What are some of the technical and legal standards currently in place that try to regulate AI from an ethics standpoint? Are legal experts actually familiar enough with AI technology and bias training to make informed decisions?

A. The legal community is definitely aware of AI. As an example, the American Bar Association Science and Technology Law Section's (ABA SciTech) Artificial Intelligence & Robotics Committee has been active since at least 2008. ABA SciTech is currently planning its third National Institute on Artificial Intelligence (AI) and Robotics for October 2021, in which AI ethics will figure prominently. That said, case law on AI ethics/bias in the U.S. is still limited, but it is expected to grow as AI becomes more prevalent in business decisions and operations. It is also worth noting that international standards on AI ethics/bias either exist or are under development. For example, the IEEE 7000 Standards Working Groups are already developing standards for the future of ethical intelligent and autonomous technologies. In addition, ISO/IEC JTC 1/SC 42 is developing AI and Machine Learning standards that include ethics/bias as an element.

Q. The webcast talked a lot about automated vehicles and the work done by companies in terms of safety as well as liability protection. Is there a possibility that these two conflict?

A. In the webcast we discussed the fact that autonomous vehicle safety requires a multi-layered approach that could include connectivity in-vehicle, with other vehicles, with smart city infrastructure, and with individuals' schedules and personal information. This is obviously a complex environment, and current liability processes make it difficult for companies and municipalities to work together without encountering legal risk. For instance, let's say an autonomous car sees a pedestrian in danger and could place itself between the pedestrian and that danger, but it doesn't, because the resulting accident could expose the vehicle to liability. Or, upon hitting ice on a corner, it turns control over to the driver so that the driver is clearly responsible for the accident, even though the autonomous system might have been more effective at reducing the chance of a fatal outcome.

Q. You didn't discuss much on AI as a teacher. Is there a possibility that AI could be used to educate students, and what are some of the ethical implications of AI teaching humans?

A. An AI can scale to individually focused, custom teaching plans far better than a human could. However, AIs aren't inherently unbiased, and where they're corrupted through their training they will perform consistently with that training. If the training promotes unethical behavior, that is what the AI will teach.

Q. Could an ethical issue involving AI become unsolvable by current human ethical standards? What is an example of that, and what are some steps to mitigate that circumstance?

A. Certainly. Ethics are grounded in rules, and those rules aren't consistent and are in flux. These two conditions make it virtually impossible to assure that an AI is truly ethical, because the related standard is fluid. Machines like immutable rules; ethics rules aren't immutable.

Q. I can't believe that nobody's brought up HAL from Arthur C. Clarke's 2001 book. Wasn't this a prototype of AI ethics issues?

A. We spent some time on this at the end of the session, where Jim mentioned that our "Socratic forebears" were some of the early science fiction writers such as Clarke and Isaac Asimov. We discussed Asimov's Three Laws of Robotics and how Asimov and others later theorized ways smart robots could get around the three laws. In truth, there have been decades of thought on the ethics of artificial intelligence, and we're fortunate to be able to build on that as we address what are now real-world problems.



Continuous Delivery: Cloud Software Development on Speed

Alex McDonald

Mar 23, 2021

It happens with more frequency these days. Two companies merge, and the IT departments breathe a small sigh of relief as they learn that they both use the same infrastructure software, though one runs it on-premises and one in the cloud. Their relief slowly dissolves as they discover that the cloud-provisioned workers are using features in the software that have yet to be integrated into the on-prem version. Now both have to adapt, and it seems that no one is happy. So, what's the best way to get these versions in sync?

A Continuous Delivery model is increasingly being adopted to keep software development at a pace that meets business demands. The Continuous Delivery model results in a development organization that looks much like a modern manufacturing process, with effective workers, modern machines, and just-in-time inventory. Even large software companies are starting to embrace this cloud delivery methodology to create a continuous stream of new revisions.

On April 20, 2021, the SNIA Cloud Storage Technologies Initiative will explore why Continuous Delivery is a valuable addition to the software development toolbox at our live webcast "Continuous Delivery: Cloud Software Development on Speed." By adapting principles of modern manufacturing to software development, a Continuous Delivery methodology ensures that the product is streamlined in its feature set while delivering constant value to the customer via the cloud. Webcast attendees will learn:
  • Structuring development and testing resources for Continuous Delivery
  • A flexible software planning cycle for driving new features throughout the process
  • A set of simple guidelines for tracking success
  • Ways to ensure new features are delivered before moving to the next plan
Register today. Our expert speakers, Davis Frank, Co-creator of the Jasmine Test Framework & Former Associate Director at Pivotal Labs and Glyn Bowden, CTO, AI & Data Practice at HPE will be on hand to answer your questions.



Q&A from “SAS 201: An Introduction to Enterprise Features” Webcast

STA Forum

Mar 10, 2021


Questions from SAS 201 Webinar Answered

In an effort to provide ongoing educational content to the industry, the SCSI Trade Association (STA) tackled the basics of Serial Attached SCSI (SAS) in a webinar titled "SAS 201: An Introduction to Enterprise Features," now available on the STA YouTube channel here. Immediately following their presentations, our experts Rick Kutcipal of Broadcom and Tim Symons of Microchip Technology held a Q&A session. In this blog, we've captured the questions asked and the answers given to provide insight into recent evolutions in SAS enterprise features, including 24G SAS, and what they mean to system designers.

Q1. Do you think 24G SAS will only be applicable to SSDs, and HDDs will remain at 12Gb/s?

A1. Rick: At this time, hard disk drives (HDDs) can't take advantage of the bandwidth that's available in 24G SAS. Right now, the technology itself is focused on the backbone and then solid-state drive (SSD) connectivity. Currently, that's the way we see it shaping up.
Tim: If we go back about eight years or so, someone asked me the same type of question when we went from 3Gb/s SAS to 6Gb/s SAS, and the answer was "the platters don't get data off that quickly." Well, look where we are now.

Q2. How does SAS-4 deal with the higher-value AC blocking capacitor specified in U.3?

A2. Tim: This is really getting into the details. U.3 allows you to interconnect SAS devices and PCI devices in a similar backplane environment. All SAS devices are AC-coupled, so you've got a capacitor that sits between the transmitter and receiver, and the value differs between technologies. What we did for SAS, and it's common for a lot of receivers, is de-rate the blocking AC capacitor values. This does not have a very significant effect on the signal. Consequently, we're able to accommodate multiple technologies changing the AC capacitor value without a significant change in error correction. So, if you look at the U.3 specification, you'll see a slightly different capacitor value than is specified in the SAS environment. However, that has been endorsed by SAS and does not have any impact on it.

Q3. To achieve an 18" trace on the backplane plus 6m cables, what budget was assigned to the host adapter and the media?

A3. Tim: In SAS, we don't assign particular budgets to particular parts of the subsystem. In the back of the SAS specification you'll find an example in the Appendix. It calls out, "What if we had 4.0 dB loss in the disk drive and 2.5-3.0 dB loss on the host before we got to the cable?" But those are just examples, not requirements. Essentially, the channel end-to-end from transmitter to receiver is a 30 dB loss channel, and how you use that is really up to you. Sometimes, when the disk drive is very close to your host, you may choose to spend that budget on a lower-cost material, and you'll have a 30 dB loss channel in a 12-inch connection. SAS is very flexible in that way, so we don't assign a specific budget to any specific portion of the channel.

Q4. How do you see 24G SAS and x4 NVMe Gen 5 drives co-existing?

A4. Tim: Speaking from the 24G side of an array, disk drives themselves can have multi-links on them. Because of the x4 nature of NVMe, that just gives you more bandwidth on a Gen5 system. Gen5 is 32Gb/s per lane on NVMe, whereas in SAS we're looking at 24Gb/s. It's quite reasonable technically to add x4 links to that, so the technology and the bandwidth are pretty similar between the two.
Rick: One thing Tim just went through was some of the investments and improvements we have made in 24G SAS to account for these higher data rates, so there will be a difference in the reliability of those particular devices. In general, they will coexist, probably targeting slightly different market spaces.

Q5. In large Gen 2 or Gen 3 SAS domains, the end devices can suffer response-time issues due to retiming delays introduced by the expanders in the path. Is Gen 4 looking at ways to reduce these delays?

A5. Tim: That's a great question about fairness enhancements. The observation is that as you add an expander and daisy-chain to other expanders, when a transaction finishes, the first expander tells all its attached devices, "Hey, you've got some available bandwidth." What can happen in a very heavily loaded, congested system is that the device closest to the host gets serviced first. So what we did in SAS-4, and SPL-4 specifically and beyond, was add fairness enhancements: rather than just noting that a device is waiting for available bandwidth or an available transaction, each request comes with an age. That ensures that it doesn't matter where you are in the infrastructure; you will get a fair crack at bandwidth as it frees up. So that is a change from Gen 3 to Gen 4, and it becomes more impactful at higher performance levels because you're attaching more devices and sharing the bandwidth among them.

Q6. Could you explain a little bit more about the need for Forward Error Correction?

A6. Tim: At 12Gb/s SAS and previous generations (6Gb/s and earlier), the noise characteristics of a transmitter to a receiver, and of transmissions through a PCB, were dominated by crosstalk and reflections. As we go up into the higher frequency range of 24G, we get more disruption to the channel. The real need for forward error correction is that, without it, we would be down to one-third the cable lengths and one-third the PCB trace lengths. We'd also require quite exotic materials, Megtron 10 and beyond, and we'd probably have to change our interconnect to all the disk drives as well to support 24G SAS. It would have been quite a disruption. We needed a technology that could give us data integrity and delivery of valid, uncorrupted data, and that's why we turned to forward error correction. It has been proven in other technologies, such as Ethernet, which has had it for quite a while. So we weren't reinventing the wheel; we took a concept successfully used in other technologies and applied it to SAS. As a result, we were able to keep latency low, keep channel costs down, and continue to support the same ecosystem requirements of six-meter cables and backplanes.

Q7. What is the outlook for HDDs, given the ongoing acceptance of SSDs in the enterprise?

A7. Rick: In my section of the presentation, I did talk a lot about HDDs, and SSDs are gaining quite a bit of market share. However, they serve different needs. In my examples, I showed warm and cold tiers of storage and how important dollar-per-gigabyte is in those applications, and that's all serviced by HDDs today. For the foreseeable future, a lot of the innovations we talked about during the presentation are optimizing for capacity. Right now there is still a sizeable gap in dollar-per-gigabyte between equivalent SSDs and HDDs. Will it always be that way? Probably not. But for the foreseeable future, HDDs are going to play a very important role in enterprise storage.
Tim: When comparing HDDs and SSDs, we talked about warm storage, cold storage, and intermediate and hot storage. Rotating media offers one performance level; SSDs offer a different one, and this is why we're seeing NVMe work hand-in-hand with SAS. They don't replace each other because they have different performance characteristics. Disk drives are still, by a long way, the best cost-per-gigabyte (or cost-per-terabyte) storage, and in large cold-storage systems that's what is required.

Q8. In terms of scalability, how large of a topology is possible?

A8. Rick: For SAS, it's some unrealistic number like 64K. Practical cases are limited by the route tables in the expanders, where we're seeing just north of a thousand connected devices.
Tim: You may break it into segments so you have total accessibility to tens of thousands of drives, but you really only want hundreds up to a thousand per regional zone, just to preserve your bandwidth.

Q9. Can you comment on the performance implications of Shingled Magnetic Recording?

A9. Rick: During the presentation, I talked about Shingled Magnetic Recording (SMR) and what we're doing in T10 to support it with the Zoned Block Commands, etc. In the press, SMR has gotten significant feedback on performance. The important part is that you have to understand the type of SMR being used. In the enterprise, it's all host-managed SMR, meaning the OS or the application manages the zone allocations and the streaming of data to make sure you're handling the shingles (the overlapping tracks) correctly. In drive-managed SMR, this is all managed in the drive, and that can have performance implications, but that technology is not used in the enterprise.
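
The age-based fairness idea described for SAS-4/SPL-4 (each pending request carries an age, and the oldest request wins arbitration regardless of how close the device sits to the host) can be sketched as a toy model. The data structure and names here are illustrative assumptions, not the spec's actual arbitration machinery:

```python
# Toy model of age-based fairness arbitration. Each pending request is
# an (age, device_id) tuple; the expander grants the oldest request
# first, so distant devices are not starved by devices near the host.
# This is a conceptual sketch, not the SPL-4 state machine.

def grant_next(pending):
    """Grant the oldest pending request and remove it from the queue.

    pending: list of (age, device_id) tuples; mutated in place.
    Returns the device_id that wins arbitration.
    """
    oldest = max(pending)        # largest age = waited the longest
    pending.remove(oldest)
    return oldest[1]
```

Without the age field, a plausible implementation would simply service whichever attached device responded first, which is exactly the near-host bias the enhancement removes.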


Q&A from “SAS 101: The Most Widely-Deployed Storage Interface” Webcast

STA Forum

Mar 9, 2021


Questions from SAS 101 Webcast Answered

In an effort to provide ongoing educational content to the industry, the SCSI Trade Association (STA) tackled the basics of Serial Attached SCSI (SAS) in a webinar titled “SAS 101: The Most Widely-Deployed Storage Interface,” now available on demand here. Immediately following their presentations, our experts Jeremiah Tussey of Microchip Technology and current STA Vice President; and former STA board member Jeff Mason of TE Connectivity, held a Q&A session. In this blog, we’ve captured the questions asked and answers given to provide you with the extra insight needed on why SAS remains the protocol of choice in the data center. Q1. You mentioned that SAS supports up to 64K devices. Has anyone been able to test a SAS topology of that scale? A1. Although the technology is capable of supporting 64K devices, the realistic implementations are based on routing table designs within expanders and controllers of today. Typically, you see quantities of 1K to 2K attached devices per topology, but you’ll have a lot of these topologies in parallel, so the amount of devices you support can certainly span out to that level of devices. But the practical reality with the I/O flow, RAID, and different applications and data transfer factors, results in the average topologies probably having no more than 2K. Q2. What are the key gaps in NVMe technology that are in place for SAS? A2. NVMe from a performance and latency standpoint is top of its class with its typical shipping x4 interface. However, the fact remains that as we develop and extend our innovations in SAS, the majority of SAS market deployments are utilizing rotating media, so inherent scalability and the flexibility to dynamically add on a mixture of SAS and SATA targets (SSDs, HDDs, tape, etc.) to new or existing configurations is where SAS topologies excel. 
The SATA deployments are generally higher capacity at lower cost, while not quite matching the performance and reliability of SAS deployments intended for more mission-critical applications or balanced workloads. Overall, SAS is a technology that’s going to be implemented in enterprise and cloud, not in the PC world or the other higher-volume, lower-cost markets where NVMe is becoming the go-to choice.

Q3. What’s the difference at the device level between a SAS SSD and an NVMe SSD?

A3. Overall, if you look at it on a lane-by-lane basis, SAS is the faster interface in today’s typical applications. NVMe drives are deployed as a x4 interface in most applications, maybe even x2, so they have some advantages there natively. But there are capabilities built into SAS drives that have been hardened over many years. For example, SAS supports surprise hot plug in environments that need to swap drives dynamically. Those features are natively built into SAS; NVMe drives are still making progress there.

Q4. What’s the maximum length of external 24G SAS copper cables?

A4. In general, we’re going to follow the same lengths we had in past generations: 1.5 meters. Beyond that, you might be looking at active cable technology, but overall that is the safe distance for copper cables.

Q5. What’s the real-world bandwidth of a x4 24G SAS external port?

A5. The real-world bandwidth of a x4 24G SAS external port is 9.6 GB/s, which is full bandwidth supporting x4 connectivity. There are new cables now that support x8, so that bandwidth can be doubled even further.

Q6. Is T10 working on the tri-mode U.3 specification?

A6. The way we define our standards, we are focused on SAS technology. The U.3 specification has been standardized through the Small Form Factor (SFF) Technical Work Group, which is now under the SNIA umbrella.
Therefore, we don’t directly drive the requirements for that, but we are certainly open to supporting these types of applications that allow different types of connectivity, including SAS products. STA has several members that contribute to SFF standardization work, typically supporting SAS and SATA inclusion in the standards that directly apply.

Q7. In one of your topology diagrams, you showed a mix of SAS HDDs, SAS SSDs, and SATA HDDs. Could you discuss again why someone would mix drive types like that?

A7. It’s really a choice of the implementer, but some of the reasoning behind mixing media types relates to data center applications requiring different tiers, with varying metrics for hot data versus warm data versus cold data. When you need higher performance and lower latency, that’s typically where you would use SSDs. Depending on your performance and cost requirements, you can use SAS SSDs for the highest performance at an added cost, or SATA SSDs that give you slightly lower performance at a lower cost point. What you typically see is some overlap across the tiers: SAS SSDs at the top of the line, and a mixture of SATA SSDs with SAS or SATA HDDs in a cached type of JBOD that provides a medium-level, warm-access data platform. Further down the spectrum is your colder data, on nearline SATA and SAS HDDs, all the way down to SATA HDDs with SMR. SMR technology serializes and stripes data to give you the lowest cost per GB. There are even tiers below that, including tape and CD technologies, which are certainly part of the ecosystem that can be supported with SAS infrastructure.

Q8. What is SMR?

A8. SMR stands for Shingled Magnetic Recording.
This is a technology that a lot of hard drive manufacturers are deploying today, specifically in cloud data center applications where you need the lowest cost per unit of data. It stripes data onto the disk platters themselves, with tracks actually overlapping to a degree, so you get a more compact layout of data on the platters. SMR has a specific use case: it requires more serialization of the data streams to and from the drives, so more management is needed from the host, with a bit more oversight and control of how the data is placed on the drive. It’s not as well-suited to random I/O, but it certainly provides a more compact method of recording the data.
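The 9.6 GB/s figure quoted for a x4 port above can be sanity-checked with quick arithmetic: 24G SAS (SAS-4) signals at 22.5 Gbaud per lane and uses 128b/150b encoding, so each lane delivers about 2.4 GB/s of payload. A minimal sketch (the line-rate and encoding numbers come from the SAS-4 specification; the helper names are illustrative, not from any SAS library):

```python
# Back-of-the-envelope check of 24G SAS (SAS-4) port bandwidth.
LINE_RATE_GBAUD = 22.5            # physical signaling rate per lane
ENCODING_EFFICIENCY = 128 / 150   # 128b/150b line encoding

def lane_bandwidth_gbytes(line_rate=LINE_RATE_GBAUD,
                          efficiency=ENCODING_EFFICIENCY):
    """Effective payload bandwidth of one lane, in GB/s."""
    return line_rate * efficiency / 8   # bits -> bytes

def port_bandwidth_gbytes(lanes):
    """Aggregate bandwidth of a multi-lane ("wide") port, in GB/s."""
    return lanes * lane_bandwidth_gbytes()

print(f"x4 port: {port_bandwidth_gbytes(4):.1f} GB/s")  # ~9.6 GB/s
print(f"x8 port: {port_bandwidth_gbytes(8):.1f} GB/s")  # ~19.2 GB/s
```

The same arithmetic explains the doubled bandwidth of the new x8 cables mentioned in A5: twice the lanes, twice the aggregate throughput.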


Computational Storage in the Real World

Eli Tiomkin

Mar 3, 2021

title of post

Computational storage has arrived, with real-world applications meeting the goal of enabling parallel computation and/or alleviating constraints on existing compute, memory, storage, and I/O. The SNIA Computational Storage Special Interest Group has gathered examples of computational storage use cases which demonstrate improvements in application performance and infrastructure efficiency through the integration of compute resources either directly with storage or between the host and the storage. First up in the SNIA Computational Storage Demo Series are our SIG member companies Eideticom Communications and NGD Systems. Their examples demonstrate proof of computational storage concepts. They also illustrate how SNIA and the Compute, Memory and Storage Initiative (CMSI) member companies are advancing the SNIA Computational Storage Architecture and Programming Model, which defines recommended behavior for hardware and software that supports computational storage.

The NGD Systems use case highlights a Microsoft Azure IoT System running on a computational storage device. The video walks through the steps to establish connections to agents and hubs, and shows how to establish event monitors and do image analysis to store images from the web into a computational storage device.

The Eideticom Communications use case highlights transparent compression via a stacked file system and an NVMe-based computational storage processor. The video walks through the steps to mount a NoLoad file system and run sysadmin commands to read/write files to disk with compression, illustrating speed and application transparency.
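The transparency described above, where applications read and write plain data while the storage layer compresses it underneath, can be modeled with a short sketch. This is purely illustrative and not Eideticom's implementation; it just shows the compress-on-write, decompress-on-read pattern that a stacked file system applies beneath the application:

```python
import zlib

class TransparentStore:
    """Toy model of transparent compression: callers read and write
    plain bytes while the backing store holds compressed blocks."""

    def __init__(self):
        self._blocks = {}  # name -> compressed bytes

    def write(self, name: str, data: bytes) -> None:
        self._blocks[name] = zlib.compress(data)    # compress on write

    def read(self, name: str) -> bytes:
        return zlib.decompress(self._blocks[name])  # decompress on read

    def stored_size(self, name: str) -> int:
        return len(self._blocks[name])

store = TransparentStore()
payload = b"2021-03-03 INFO request served\n" * 1000  # compressible data
store.write("app.log", payload)

# The application gets its original bytes back, unaware of compression,
# while the stored footprint is far smaller.
assert store.read("app.log") == payload
print(store.stored_size("app.log"), "bytes stored for", len(payload), "bytes written")
```

In the computational storage case, the compression step runs on the NVMe-attached processor rather than on the host CPU, but the application-facing contract is the same: plain data in, plain data out.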

We invite you to visit our Computational Storage Use Cases page for these and more examples of real world computational storage applications. Questions? Send them to askcmsi@snia.org.


