
24G SAS: an Overview of the Technology & Products

STA Forum

Apr 17, 2023


Hyperscale and enterprise data centers continue to grow rapidly, and they continue to use SAS products as a backbone. Why SAS, and which specific SAS products are helping these data centers grow? This article briefly discusses the technology evolution of SAS, bringing us to our latest generation, 24G SAS. We will examine recent market data from TRENDFOCUS, underscoring the established and growing trajectory of SAS products. We will highlight our latest plugfest, which smoothed the way for 24G SAS to enter the existing data storage ecosystem seamlessly. Finally, we will help the reader understand the breadth and depth of 24G SAS products available today, and where to get them.

24G SAS: Naturally Evolving from a Long Line of SCSI

The first generation of the parallel SCSI interface, known as SCSI-1, was introduced in 1986 at 5MB/s. Subsequent generations of SCSI followed, effectively doubling the bandwidth each time. The SCSI Trade Association was formed in 1996 and began marketing efforts to promote SCSI technology and its use in enterprise deployments. Ultra640 SCSI was discussed in the early 2000s, but cable lengths became impractical at those data rates. That pushed the industry to transition to a serial version of the technology, with the first Serial Attached SCSI (SAS) generation being introduced in 2004 at 3Gb/s.

Let’s fast forward to today. Leveraging the SAS-4 spec, 24G SAS is the latest generation to hit the market, increasing data rates to a maximum of 24Gb/s. With close to thirty years’ worth of products installed throughout global data centers, customers want to keep their current infrastructure while scaling up for more throughput. SAS continues to be a widely used interface for storage in hyperscale and enterprise data centers.
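To put those generational jumps in perspective, here is a small back-of-the-envelope sketch in Python. The encoding assumptions are not taken from this article: they use the commonly cited figures of 8b/10b line coding for the 3/6/12Gb/s generations and a 22.5Gb/s physical rate with a more efficient 128b/150b-style encoding for 24G SAS, which is roughly how the generation reaches about double the usable per-lane throughput of 12Gb/s SAS. Treat the numbers as illustrative approximations.

# Rough, illustrative per-lane throughput for SAS generations.
# Assumptions (not from the article): 8b/10b line coding for the
# 3/6/12 Gb/s generations, and ~22.5 Gb/s with a 128b/150b-style
# encoding for 24G SAS.

GENERATIONS = [
    # (name, nominal line rate in Gb/s, coding efficiency)
    ("SAS-1 (3Gb/s)",  3.0,  8 / 10),
    ("SAS-2 (6Gb/s)",  6.0,  8 / 10),
    ("SAS-3 (12Gb/s)", 12.0, 8 / 10),
    ("24G SAS",        22.5, 128 / 150),  # marketed as 24G
]

def per_lane_mb_per_s(line_rate_gbps: float, efficiency: float) -> float:
    """Approximate usable payload bandwidth of one lane in MB/s."""
    return line_rate_gbps * 1000 / 8 * efficiency

if __name__ == "__main__":
    for name, rate, eff in GENERATIONS:
        print(f"{name:16s} ~{per_lane_mb_per_s(rate, eff):7.0f} MB/s per lane")

Running the sketch prints roughly 300, 600, 1200, and 2400 MB/s per lane, which matches the pattern of each generation approximately doubling the usable bandwidth of the one before it.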


Fig. 1: Technology Timeline

SAS Continues to Be Deployed: The Data Analysts Weigh In

TRENDFOCUS, a market research and consulting firm for the data storage industry, has shown us industry numbers that support our trade association's assertion that the SAS market continues to grow. "SAS will remain the storage infrastructure of choice in the enterprise market," said Don Jeanette, Vice President of data storage market research firm TRENDFOCUS. "With SAS' entrenched presence throughout data centers, it is key for STA and its member companies to continue to support this existing infrastructure while developing for the future."


Fig. 2: Enterprise Storage Capacity (by Technology)
Source: TRENDFOCUS, January 2023

Industry Plugfests Keep Products Playing Together

As discussed earlier, SAS technology has increased in speed and features over many years while maintaining its essential industry-wide compatibility. To help make that happen, STA has conducted nineteen multi-day test events, called "plugfests," to ensure SAS interoperability across vendors and the ecosystem.

The second 24G SAS plugfest was held in late 2021. These events are always a logistical challenge, bringing new technology, products, engineers, and other personnel together to make sure products interoperate before they reach the market. This plugfest was especially challenging in the environment immediately following the global pandemic, when many companies were hesitant to commit personnel to travel.

STA overcame this hurdle with a hybrid format hosted at two member company facilities (Broadcom and Microchip), while maintaining credibility through independent auditing of test results by the University of New Hampshire InterOperability Laboratory (UNH-IOL). The test event included 10 companies, both STA members and non-members, representing all the pieces of infrastructure needed to make a complete SAS network.

With the number of new 24G SAS products on the market continuing to increase, completion of this STA plugfest was another proof point that the 24G SAS ecosystem is maturing. The vendor interoperability testing demonstrated that SAS is a reliable choice, giving buyers confidence in continued reliability and support. Active optical cables were also featured in the testing, demonstrating that the SAS interface can reach distances far beyond those afforded by passive copper cables.

The 2021 plugfest had an aggressive list of activities to accomplish. Engineers at the test facilities checked off:

  • Interoperability of 24G SAS products in a realistic working environment, as specified by the standard.
  • In-depth testing of both the SAS-4 and SPL-4 protocol and electrical specifications.
  • Successful interoperability testing with packetization.
  • Testing of passive cables with lengths up to 6 meters, active optical cables with lengths up to 100 meters, and various interconnects and backplanes, including:
    • multiple vendor cables and interconnects
    • long and short channels
    • cable EEPROM identification/access
    • Mini SAS HD and SlimSAS 24G connectors

24G SAS Products: Available Now & Working for You

Today, end users can choose from a wide array of 24G SAS products on the market. All parts of the data storage ecosystem are available to end users, from storage devices to the necessary connecting products. In this section, we will highlight the availability of 24G SAS products from several of our STA member companies.

Broadcom, a STA Principal Member company, is a global technology leader that designs, develops and supplies a broad range of semiconductor and infrastructure software solutions. As such, they offer a wide range of SAS products, from SAS expanders to HBAs to RAID-on-a-Chip solutions. Broadcom’s 24G SAS products were featured at STA’s 2022 Flash Memory Summit booth in a live demo, giving our booth visitors an opportunity to see the technology up close. Here are direct links to many of their 24G SAS products available now:

KIOXIA America, Inc., also a STA Principal Member, is the U.S.-based subsidiary of KIOXIA Corporation, a leading worldwide supplier of flash memory and solid-state drives (SSDs). KIOXIA SSDs were also in STA's 2022 Flash Memory Summit booth in the live demo, giving our booth visitors an opportunity to see SAS SSDs. Thanks to SAS' backward compatibility, end users can mix KIOXIA's 12Gb/s SAS and 24G SAS SSDs in the same system. Here are direct links to SSDs available today that will work in a 24G SAS environment:

Samsung Semiconductor, also a STA Principal Member, provides innovative semiconductor solutions, including DRAM, SSDs, processors, and image sensors, across a wide-ranging portfolio of trending technologies. Samsung announced the launch of its 24G SAS SSD in 2021; product details and a link are here:

The PM1653 is Samsung's 24G SAS SSD, optimized for next-generation server and storage applications. It is the first Samsung SSD to support the 24G SAS interface, and its new Rhino controller doubles the drive's bandwidth, delivering roughly twice the performance and more than double the random speeds of the previous 12Gb/s SAS generation. Dual ports provide greater reliability, while Samsung's sixth-generation, 128-layer V-NAND enables capacities as high as 30.72TB. The PM1653's 24G SAS support streamlines storage infrastructure by maximizing throughput.

Seagate, a STA Principal Member, has been a global technology leader for nearly 45 years and has shipped almost four billion terabytes of data capacity. Its 12Gb/s SAS and 6Gb/s SATA enterprise HDDs both work in SAS infrastructure, giving enterprise data systems greater flexibility. Seagate's Exos E Series hard drives offer a wide range of HDDs to fulfill the needs of the SAS user. According to Seagate, the always-on, always-working Exos E series is loaded with advanced options for optimal performance, reliability, security, and user-definable storage management. Built on generations of industry-defining innovation, Exos E is designed to work and perform consistently in enterprise-class workloads.
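Because SAS infrastructure routinely mixes SAS and SATA devices across generations behind the same expanders, it is often useful to confirm what link rate each phy actually negotiated. The short Python sketch below is one possible host-side check on Linux, which exposes the SAS transport class under /sys/class/sas_phy; it is illustrative only, and attribute availability and value formatting can vary by kernel version and HBA driver.

# Illustrative sketch: list negotiated link rates for SAS phys on a
# Linux host via the SAS transport class in sysfs. Paths and value
# strings (e.g. "12.0 Gbit") depend on the kernel and HBA driver, so
# treat this as a starting point rather than a definitive tool.
from pathlib import Path

SAS_PHY_ROOT = Path("/sys/class/sas_phy")

def negotiated_link_rates() -> dict[str, str]:
    """Map each SAS phy name to its negotiated link rate string."""
    rates = {}
    if not SAS_PHY_ROOT.exists():
        return rates  # no SAS transport class on this host
    for phy in sorted(SAS_PHY_ROOT.iterdir()):
        attr = phy / "negotiated_linkrate"
        if attr.is_file():
            rates[phy.name] = attr.read_text().strip()
    return rates

if __name__ == "__main__":
    for phy, rate in negotiated_link_rates().items():
        print(f"{phy}: {rate}")

A mixed system would typically show different rates on different phys, which is exactly the backward compatibility the SAS ecosystem is built around.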

Teledyne LeCroy is a STA Promotional Member and a leading manufacturer of advanced oscilloscopes, protocol analyzers, and other test instruments that verify performance, validate compliance, and debug complex electronic systems quickly and thoroughly. Their participation in our many interoperability test events has helped pave the way for SAS’ continual smooth introduction into the marketplace, generation after generation. They offer protocol analyzers specifically for 24G SAS products:

  • The Sierra T244 is a SAS 4.0 protocol analyzer designed to non-intrusively capture up to four 24Gb/s SAS logical links, providing unmatched analysis and debug capabilities for developers working on next-generation storage systems, devices and software.
  • The Sierra M244 is the industry’s first SAS 4.0 protocol analyzer / jammer / exerciser system for testing next generation storage systems, devices and software.
  • Austin Labs Testing & Training provides a four-day comprehensive SAS protocol training course covering 24G SAS, legacy data rates, and includes hands-on lab time for each protocol class. Along with training there are also options for testing and analysis for SAS products.

In addition to our member companies, many other companies manufacture, market, and sell 24G SAS products today. The figure below shows a range of companies that are making and selling 24G SAS products right now.


Fig. 3: A Sampling of Companies Providing 24G SAS Products Today

Learn More about SAS, and Stay Informed

Thanks to the efforts of the STA and the T10 Technical Committee, Serial Attached SCSI continues to evolve to meet market needs. Stay abreast of the latest SAS technology developments by watching free STA webcasts and other educational videos on the organization's YouTube channel, and follow our social media channels on LinkedIn and Twitter. To find links to all of these and more, visit the STA website at http://www.scsita.org.


Survey Says…Here are Data & Cloud Storage Trends Worth Noting

Michael Hoard

Apr 7, 2023

With the move to cloud continuing, application modernization, and related challenges such as hybrid and multi-cloud adoption and regulatory compliance requirements, enterprises must ensure they understand the current data and storage landscape. The SODA Foundation's annual global survey on data and storage trends does just that, providing a comprehensive look at the intersection of cloud computing, data and storage management, the configuration of environments that end-user organizations are gravitating to, and the priorities of selected capabilities over the next several years. On April 13, 2023, the SNIA Cloud Storage Technologies Initiative (CSTI) is pleased to host SODA in a live webcast, "Top 12 Trends in Data and Cloud Storage," where the SODA members who led this research will share key findings. I hope you will join us for a live discussion and in-depth look at this important research to hear the trends that are driving data and storage decisions, including:
  • The top 12 trends in data and storage
  • Data security challenges facing container deployments
  • Approaches to public cloud deployment
  • Challenges for storage observability
  • Focus on hybrid and multi-cloud deployments
  • Top use cases for cloud storage services
  • The impact of open source working with data and storage
Register here and bring your questions. You will also have the opportunity to download the 42-page full report. We look forward to your joining us!      



50 Speakers Featured at the 2023 SNIA Compute+Memory+Storage Summit

SNIA CMS Community

Apr 3, 2023

SNIA's Compute+Memory+Storage Summit is where architectures, solutions, and community come together. Our 2023 Summit – taking place virtually on April 11-12, 2023 – is the best example to date, featuring a stellar lineup of 50 speakers in 40 sessions covering topics including computational storage real-world applications, the future of memory, critical storage security issues, and the latest on SSD form factors, CXL™, and UCIe™.

"We're excited to welcome executives, architects, developers, implementers, and users to our 11th annual Summit," said David McIntyre, C+M+S Summit Co-Chair and member of the SNIA Board of Directors. "We've gathered the technology leaders to bring us the latest developments in compute, memory, storage, and security in our free online event. We hope you will watch live to ask questions of our experts as they present, and check out those sessions you miss on-demand."

Memory sessions begin with Watch Out – Memory's Changing!, where Jim Handy and Tom Coughlin will discuss the memory technologies vying for the designer's attention, with CXL™ and UCIe™ poised to completely change the rules. Speakers will also cover thinking memory, optimizing memory using simulations, providing capacity and TCO to applications using software memory tiering, and fabric-attached memory.

Compute sessions include Steven Yuan of StorageX discussing the Efficiency of Data Centric Computing, and presentations on the computational storage and compute market, big-disk computational storage arrays for data analytics, NVMe as a cloud interface, improving storage systems for simulation science with computational storage, and updates on SNIA and NVM Express work on computational storage standards.

CXL and UCIe will be featured with presentations on CXL 3.0 and Universal Chiplet Interconnect Express™ (UCIe™) On-Package Innovation Slot for Compute, Memory, and Storage Applications. The Summit will also dive into security with an introductory view of today's storage security landscape and additional sessions on zero trust architecture, storage sanitization, encryption, and cyber recovery and resilience.

For 2023, the Summit is delighted to present three panels – one on Exploring the Compute Express Link™ (CXL™) Device Ecosystem and Usage Models moderated by Kurtis Bowman of the CXL Consortium, one on Persistent Memory Trends moderated by Dave Eggleston of Microchip, and one on Form Factor Updates moderated by Cameron Brett of the SNIA SSD Special Interest Group.

We will also feature the popular SNIA Birds-of-a-Feather sessions. On Tuesday, April 11 at 4:00 pm PDT/7:00 pm EDT, you can join to discuss the latest compute, memory, and storage developments, and on Wednesday, April 12 at 3:00 pm PDT/6:00 pm EDT, we'll be talking about memory advances.

Learn more in our Summit preview video, check out the agenda, and register for free to access our Summit platform!


Live Panel: Sustainability in the Data Center

Wayne Adams

Mar 30, 2023

As our data-driven global economy continues to expand with new workloads such as proven digital assets and currency, artificial intelligence and advanced healthcare, our data centers continue to evolve with denser computational systems and increased data stores. This creates challenges for sustainable growth and managing costs. On April 25, 2023, the SNIA Networking Storage Forum will explore this topic with a live webinar, "Sustainability in the Data Center Ecosystem." We've convened a panel of experts who will cover a wide range of topics, including delivering more power efficiency per capacity, revolutionizing cooling to reduce heat, increasing system processing to enhance performance, infrastructure consolidation to reduce the physical and carbon footprint, and applying current and new metrics for carbon footprint and resource efficiency. Beginning with a definition of sustainability, they will discuss:
  • Why sustainability matters to IT
  • Sustainability for storage & networking
  • The importance of measurement and KPIs
  • Sustainability vs. efficiency
  • Best practices - now vs. future
  • Bringing your IT, Facilities and Sustainability organizations together
It's a big topic; in fact, we expect to present more on it this year. Register today to join us on April 25th. Our panel will be on hand to answer your questions.



“Year of the Summit” Kicks Off with Live and Virtual Events

SNIA CMS Community

Mar 24, 2023

For 11 years, the SNIA Compute, Memory and Storage Initiative (CMSI) has presented a Summit featuring industry leaders speaking on the key topics of the day. In the early years, it was persistent memory-focused, educating audiences on the benefits and uses of persistent memory. In 2020 it expanded to a Persistent Memory+Computational Storage Summit, examining that new technology, its architecture, and use cases. Now in 2023, the Summit is expanding again to focus on compute, memory, and storage. In fact, we're calling 2023 the Year of the Summit – a year to get back to meeting in person and offering a variety of ways to listen to leaders, learn about technology, and network to discuss innovations, challenges, solutions, and futures.

We're delighted that our first event of the Year of the Summit is a networking event at MemCon, taking place March 28-29 at the Computer History Museum in Mountain View, CA. At MemCon, SNIA CMSI member and IEEE President-elect Tom Coughlin of Coughlin Associates will moderate a panel discussion on Compute, Memory, and Storage Technology Trends for the Application Developer. Panel members Debendra Das Sharma of Intel and the CXL™ Consortium, David McIntyre of Samsung and the SNIA Board of Directors, Arthur Sainio of SMART Modular and the SNIA Persistent Memory Special Interest Group, and Arvind Jaganath of VMware and SNIA CMSI will examine how applications and solutions available today offer ways to address enterprise and cloud provider challenges – and they'll provide a look to the future. SNIA leaders will be on hand to discuss work in computational storage, smart data acceleration interface (SDXI), SSD form factor advances, and persistent memory trends. Share a libation or two at the SNIA-hosted networking reception on Tuesday evening, March 28. This inaugural MemCon event is perfect to start the conversation, as it focuses on the intersection between systems design, memory innovation (emerging memories, storage & CXL) and other enabling technologies. SNIA colleagues and friends can register for MemCon with a 15% discount using code SNIA15.

April 2023 Networking! We will continue the Year with a newly expanded SNIA Compute+Memory+Storage Summit coming up April 11-12 as a virtual event. Complimentary registration is now open for a stellar lineup of speakers, including Stephen Bates of Huawei, Debendra Das Sharma of Universal Chiplet Interconnect Express™, Jim Handy of Objective Analysis, Shyam Iyer of Dell, Bill Martin of Samsung, Jake Oshins of Microsoft, Andy Rudoff of Intel, Andy Walls of IBM, and Steven Yuan of StorageX. Summit topics include Memory's Headed for Change, High Performance Data Analytics, CXL 3.0, Detecting Ransomware, Meeting Scaling Challenges, Open Standards for Innovation at the Package Level, and Standardizing Memory to Memory Data Movement. Great panel discussions are on tap as well. Kurt Lender of the CXL Consortium will lead a discussion on Exploring the CXL Device Ecosystem and Usage Models, Dave Eggleston of Microchip will lead a panel with Samsung and SMART Modular on Persistent Memory Trends, and Cameron Brett of KIOXIA will lead an SSD Form Factors Update. More details at www.snia.org/cms-summit.

Later in 2023… Opportunities for networking will continue throughout 2023. We look forward to seeing you at the SmartNIC Summit (June 13-15), Flash Memory Summit (August 8-10), SNIA Storage Developer Conference (September 18-21), OCP Global Summit (October 17-19), and SC23 (November 12-17). Details on SNIA participation coming soon!


Michael Hoard

Mar 9, 2023

A digital twin (DT) is a virtual representation of an object, system or process that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making. Digital twins can be used to help answer what-if AI-analytics questions, yield insights on business objectives and make recommendations on how to control or improve outcomes. It's a fascinating technology that the SNIA Cloud Storage Technologies Initiative (CSTI) discussed at our live webcast "Journey to the Center of Massive Data: Digital Twins." If you missed the presentation, you can watch it on-demand and access a PDF of the slides at the SNIA Educational Library. Our audience asked several interesting questions, which are answered here in this blog.

Q. Will a digital twin make the physical twin more or less secure?

A. It depends on the implementation. If DTs are developed with security in mind, a DT can help augment the physical twin. For example, if the physical and digital twins are connected via an encrypted tunnel that carries all the control, management, and configuration traffic, then a firmware update of a simple sensor or actuator can include multi-factor authentication of the admin or strong authentication of the control application via features running in the DT, which augments the constrained environment of the physical twin. However, because DTs are usually hosted on systems that are connected to the internet, ill-protected servers could expose a physical twin to a remote intruder. Therefore, security must be designed in from the start.

Q. What are some of the challenges of deploying digital twins?

A. Without AI frameworks and real-time interconnected pipelines in place, digital twins' value is limited.

Q. How do you see digital twins evolving in the future?

A. Here are a series of evolutionary steps:
  • From discrete DTs (for both pre- and post-production), followed by composite DTs (e.g., assembly lines, transportation systems), to organization DTs (e.g., supply chains, political parties).
  • From pre-production simulation, to operational dashboards of current state with human decisions and control, to autonomous limited control functions which ultimately eliminate the need for individual device manager software separate from the DT.
  • In parallel, 2D DT content displayed on smartphones, tablets, and PCs, moving to 3D rendered content on the same, moving selectively to wearables (AR/VR) as the wearable market matures, leading to visualized live data that can be manipulated by voice and gesture.
  • Over the next 10 years, I believe DTs become the de facto graphical user interface for machines, buildings, etc., in addition to the GUI for consumer and commercial process management.
Q. Can you expand on your example of data ingestion at the edge, please? Are you referring to data capture for transfer to a data center, or actual edge data capture and processing for the digital twin? If the latter, what use cases might benefit?

A. Where DTs are hosted and where AI processes are computed, like inference or training on time-series data, don't have to occur in the same server or even the same location. Nevertheless, the expected time-to-action and time-to-insight, plus how much data needs to be processed and the cost of moving that data, will dictate where digital twins are placed and how they are integrated within the control path and data path. For example, a high-speed robotic arm that must stop if a human puts their hand in the wrong space will likely have an attached or integrated smart camera capable of identifying (inferring) a foreign object. It will stop itself, and an associated DT will receive notice of the event after the fact. A digital twin of the entire assembly line may learn of the event from the robotic arm's DT and inject control commands to the rest of the assembly line to gracefully slow down or stop. Both the DT of the discrete robotic arm and the composite DT of the entire assembly line are likely executing on compute infrastructure on the premises in order to react quickly. In contrast, the "what if" capabilities of both types of DTs may run in the cloud or a local data center, as the optional simulation capability of the DT is not subject to real or near-real-time round-trip time-to-action constraints and may require more compute and storage capacity than is locally available. The point is that the "Edge" is a key part of the calculus to determine where DTs operate. Time-actionable insights, the cost of data movement, governance restrictions on data movement, and the availability and cost of compute and storage infrastructure, plus access to data scientists, IT professionals, and AI frameworks, are increasingly driving more and more automation processing to the "Edge," and it's natural for DTs to follow the data.

Q. Isn't Google Maps also an example of a digital twin (especially when we use it to drive based on the directions we input and start driving based on its inputs)?

A. Good question! It is a digital representation of a physical process (a route to a destination) that ingests data from sensors (other vehicles whose operators are using Google Maps driving instructions along some portion of the route). So, yes. DTs are digital representations of physical things, processes or organizations that share data. But Google Maps is an interesting example of a self-organizing composite DT, meaning lots of users acting both as sensors (aka discrete DTs) and as selective digital viewers of the behavior of many physical cars moving through a shared space.

Q. You brought up an interesting subject around regulations and compliance. Considering that some constructions would require approvals from regulatory authorities, how would a digital twin (especially when we have pics that re-construct / re-model soft copies of the blueprints based on modifications identified through the 14-1500 pics) comply with regulatory requirements?

A. Some safety regulations in various regions of the world apply to processes, e.g., worker safety in factories. Time to certify is very slow, as lots of documentation is compiled and analyzed by humans. DTs could use live data to accelerate documentation, simulation or replays of real data within digital twins, and could potentially enable self-certification of new or reconfigured processes, assuming that regulatory bodies evolve.

Q. A digital twin captures the state of its partner in real time. What happens to aging data? Do we need to store data indefinitely?

A. Data retention can shrink as DTs and AI frameworks evolve to perform ongoing distributed AI model refreshing. As AI models refresh more dynamically, the increasingly fewer anomalous events become the gold used for the next model refresh. In short, DTs should help reduce how much data is retained. Part of what a DT can be built to do is to filter out compliance data for long-term archival.

Q. Do we not run a high risk when model and reality do not align? What if we trust the twin too much?

A. Your question targets more general challenges of AI. There is a small but growing cottage industry evolving in parallel with DT and AI. Analysts refer to it as Explainable AI, whose intent is to explain to mere mortals how and why an AI model results in the predictions and decisions it makes. Your concern is valid, and for this reason we should expect that humans will likely be in the control loop, wherein the DT doesn't act autonomically for non-real-time control functions.
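To make the edge placement discussion above a bit more concrete, here is a deliberately simplified Python sketch of a discrete digital twin for the robotic-arm example. Everything in it (class names, thresholds, the notify callback) is hypothetical and invented for illustration; it is not code from the webcast, just a minimal model of a twin that mirrors local sensor state and escalates safety events to a composite (assembly-line) twin.

# Minimal, hypothetical sketch of a discrete digital twin that mirrors
# a robotic arm's sensor state and reports safety events upstream to a
# composite (assembly-line) twin. Purely illustrative; names and
# thresholds are invented for this example.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SensorReading:
    timestamp: float          # seconds since start of shift
    proximity_cm: float       # distance to nearest detected object

@dataclass
class RoboticArmTwin:
    """Discrete DT: holds the last known state of one physical arm."""
    arm_id: str
    stop_threshold_cm: float = 10.0
    notify_upstream: Callable[[str, SensorReading], None] = print
    history: List[SensorReading] = field(default_factory=list)

    def ingest(self, reading: SensorReading) -> None:
        """Update twin state from a real-time reading and escalate events."""
        self.history.append(reading)
        if reading.proximity_cm < self.stop_threshold_cm:
            # The physical arm has already stopped itself; the twin
            # records the event and informs the composite twin.
            self.notify_upstream(self.arm_id, reading)

if __name__ == "__main__":
    events = []
    twin = RoboticArmTwin("arm-07", notify_upstream=lambda a, r: events.append((a, r)))
    twin.ingest(SensorReading(timestamp=1.0, proximity_cm=42.0))
    twin.ingest(SensorReading(timestamp=2.0, proximity_cm=6.5))   # safety event
    print(f"{len(events)} event(s) escalated to the assembly-line twin")

The design point mirrors the answer above: the discrete twin runs close to the machine for fast reaction, while heavier what-if simulation over the accumulated history could run elsewhere.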



A Q&A on the Open Programmable Infrastructure (OPI) Project

Joseph White

Feb 23, 2023

Last month, the SNIA Networking Storage Forum hosted several experts leading the Open Programmable Infrastructure (OPI) project with a live webcast, "An Introduction to the OPI (Open Programmable Infrastructure) Project." The project has been created to address a new class of cloud and data center infrastructure component. This new infrastructure element, often referred to as a Data Processing Unit (DPU), Infrastructure Processing Unit (IPU) or, as a general term, xPU, takes the form of a server-hosted PCIe add-in card or on-board chip(s), containing one or more ASICs or FPGAs, usually anchored around a single powerful SoC device. Our OPI experts provided an introduction to the OPI Project and then explained lifecycle provisioning, APIs, use cases, the proof of concept and the developer platform. If you missed the live presentation, you can watch it on demand and download a PDF of the slides at the SNIA Educational Library. The attendees at the live session asked several interesting questions. Here are answers to them from our presenters.

Q. Are there any plans for OPI to use GraphQL for API definitions, since GraphQL has a good development environment, better security, and a well-defined, typed, schema approach?

A. GraphQL is a good choice for frontend/backend services, with many benefits as stated in the question. These benefits are particularly compelling for data fetching. For communications between different microservices, OPI still sees gRPC as the better choice. gRPC has a strong ecosystem in cloud and K8S systems with fast execution, strong typing, and polyglot endpoints. We see gRPC as the best choice for most OPI APIs due to the strong containerized approach and the ease of building schemas with Protocol Buffers. We do keep alternatives like GraphQL in mind for specific cases.

Q. Will OPI add APIs for less common use cases like hypervisor offload, application verification, video streaming, storage virtualization, time synchronization, etc.?

A. OPI will continue to add APIs for various use cases, including less common ones. The initial focus of the APIs is to address the major areas of networking, storage, and security, and then expand to address other cases. The API discussions today are already expanding to consider virtualization (containers, virtual machines, etc.) as a key area to address.

Q. Do you communicate with the CXL™ Consortium too?

A. We have not communicated formally with the Compute Express Link (CXL) Consortium, though there have been a few conversations with CXL interested parties. We will need to engage in discussions with the CXL Consortium like we have with SNIA, DASH, and others.

Q. Can you elaborate on the purpose of APIs for AI/ML?

A. DPU solutions contain accelerators and capabilities that can be leveraged by AI/ML type solutions, and we will need to consider what APIs should be exposed to take advantage of these capabilities. OPI believes there is a set of data movement and co-processor APIs to support DPU incorporation into AI/ML solutions. In keeping with its core mission, OPI is not going to attempt to redefine the existing core AI/ML APIs. We may look at how to incorporate those into DPUs directly as well.

Q. Have you considered creating a TEE (Trusted Execution Environment) oriented API?

A. This is something that has been considered and is a possibility in the future. There are two different sides to this: 1) OPI itself using a TEE on the DPU. This may be interesting, although we'd need a compelling use case. 2) Enabling OPI users to utilize a TEE via a vendor-neutral interface. This will likely be interesting, but potentially challenging for DPUs as OPI is considering them. We are currently focused on enabling applications running in containers on DPUs, and securing containers via TEE is currently a research area in the industry. For example, there is this project at the "sandbox" maturity level: https://www.cncf.io/projects/confidential-containers/

Q. Will OPI support integration with the OCP Caliptra project for ensuring silicon-level hardware authentication during boot? Reference: https://siliconangle.com/2022/10/18/open-compute-project-announces-caliptra-new-standard-hardware-root-trust/

A. OPI hasn't looked at Caliptra yet. As Caliptra matures, OPI will follow the wider industry ecosystem direction in this area. We currently follow https://www.dmtf.org/standards/spdm for attestation, plus IEEE 802.1AR – Secure Device Identity and https://www.rfc-editor.org/rfc/pdfrfc/rfc8572.txt.pdf for secure device zero-touch provisioning and onboarding.

Q. When testing NVIDIA DPUs on some server models, the temperature of the DPU was often high because of a lack of server cooling, resulting in the DPU shutting itself down. First question: is there an open API to read sensors from the DPU card itself? Second question: what happens when the DPU shuts down, then cools, and comes back to life again? Will the server be notified as per standards, and will the DPU be usable again?

A. Qualified DPU servers from major manufacturers integrate closed-loop thermals to make sure that cooling is appropriate and temperature readout is implemented. If a DPU is used in a non-supported server, you may see the challenges that you experienced, with overheating and high temperatures causing DPU shutdowns. Since the server is still in charge of the chassis, PDUs, fans and other components, it is the BMC's responsibility to take care of overall server cooling and temperature readouts. There are several different ways to measure temperature, like SMBUS, PLDM and others already widely used with standard NICs, GPUs and other devices. OPI is looking into which is the best specification to adopt for handling temperature readout, DPU reboot, and overall thermal management. OPI is not looking to define any new standards in this area.

If you are interested in learning more about DPUs/xPUs, SNIA has covered this topic extensively in the last year or so. You can find all the recent presentations at the SNIAVideo YouTube Channel.
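On the sensor-readout question above, one host-side convention worth knowing is the Linux hwmon sysfs interface, which many board and NIC drivers use to expose temperatures. The Python sketch below reads whatever hwmon temperature sensors the host kernel exposes; whether a particular DPU publishes its sensors through hwmon (rather than, say, its own BMC or a PLDM path) depends entirely on the vendor driver, so treat this only as an illustration of the general mechanism, not as an OPI API.

# Illustrative sketch: enumerate temperature sensors exposed through the
# Linux hwmon sysfs interface (/sys/class/hwmon). Values in temp*_input
# are in millidegrees Celsius. Whether a given DPU exposes its sensors
# here depends on the vendor driver; this is a generic host-side example.
from pathlib import Path

HWMON_ROOT = Path("/sys/class/hwmon")

def read_hwmon_temps() -> list:
    """Return (device name, sensor file, degrees C) for each temp sensor."""
    temps = []
    if not HWMON_ROOT.exists():
        return temps
    for hwmon in sorted(HWMON_ROOT.iterdir()):
        name_file = hwmon / "name"
        device = name_file.read_text().strip() if name_file.is_file() else hwmon.name
        for sensor in sorted(hwmon.glob("temp*_input")):
            try:
                millideg = int(sensor.read_text().strip())
            except (OSError, ValueError):
                continue  # sensor unreadable or not a plain integer
            temps.append((device, sensor.name, millideg / 1000.0))
    return temps

if __name__ == "__main__":
    for device, sensor, celsius in read_hwmon_temps():
        print(f"{device:16s} {sensor:12s} {celsius:5.1f} C")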

