
Storage for Automotive Q&A

Tom Friend

Jan 10, 2022

At our recent SNIA Networking Storage Forum (NSF) webcast “Revving Up Storage for Automotive,” our expert presenters, Ryan Suzuki and John Kim, discussed the storage implications of vehicles turning into data centers on wheels. If you missed the live event, it is available on-demand together with the presentation slides. Our audience asked several interesting questions about this quickly evolving industry. Here are John and Ryan’s answers.

Q: What do you think the current storage landscape is missing to support the future of IoV [Internet of Vehicles]? Are there any identified cases of missing features from storage (edge/cloud) which are preventing certain ideas from being implemented and deployed?

[Ryan] I would have to say no; currently there are no missing features in edge or cloud storage that are preventing ideas from being implemented. If anything, more vehicles need to adopt both wireless connectivity and the associated systems (IVI, ADAS/AD) to truly realize IoV. This will take some time, as these technologies are just beginning to be offered in vehicles today. There are 200 million vehicles on the road in the US, while in a typical year only 17 million new vehicles are sold.

[John] My personal opinion is no. The development of the IoV is currently limited by a combination of AI training power in the data center, compute power within the vehicles, wireless bandwidth (such as waiting for the broader rollout of 5G), and the development of software for new vehicles. Possibly the biggest limit is the slow rate of replacement of existing non-connected vehicles with IoV-capable ones. The IoV will definitely require more, and possibly smarter, storage in the data center, cloud and edge, but that storage is not what is limiting or blocking a faster rollout of the IoV.

Q: Taking a long-term view, is on-board storage the way to go, or will we shift to storage at the network edge as high-bandwidth networks like 5G flourish?

[Ryan] On-board storage will remain in vehicles and continue to grow, because vehicles must be fully operational from a driving perspective even if a wireless connection (5G or otherwise) cannot be established. For example, systems in the vehicle required for safe driving (ADAS/AD) must operate independently of an outside connection. In addition, data collected during operation may need to be stored locally in the event of a slow or intermittent connection to avoid loss of data.

Q: What is the anticipated hourly storage needed? At one point this was in the multiple-TB range.

[John] HD video (1080p at 30 frames per second) requires 2-4 GB/hour and 4K video requires 15-20 GB/hour, so if a car has 6 HD cameras and a few additional sensors being recorded, the hourly storage need for a normal ADAS would be 8-30 GB/hour. However, a car being used to train, develop or test ADAS/AD systems would collect multiple video angles, more types of data, and higher-resolution video/audio/radar/lidar/performance data, possibly requiring 1-5 TB per hour.
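To make that arithmetic concrete, here is a minimal Python sketch that aggregates per-stream data rates into an hourly storage estimate. The per-camera GB/hour figures come from John’s answer above; the allowance for other sensors is a hypothetical assumption, not a quoted number.

```python
# Rough hourly storage estimate for an ADAS sensor suite.
# Per-stream rates follow the figures quoted above:
# HD (1080p30) video: 2-4 GB/h; 4K video: 15-20 GB/h.
HD_GB_PER_HOUR = (2, 4)        # (low, high) per HD camera
FOUR_K_GB_PER_HOUR = (15, 20)  # (low, high) per 4K camera

def hourly_storage(hd_cams: int, four_k_cams: int,
                   other_sensors_gb: float = 2.0) -> tuple[float, float]:
    """Return a (low, high) GB/hour estimate for a given sensor mix.

    other_sensors_gb is a hypothetical allowance for radar, GPS and
    CAN-bus logs -- an assumption, not a figure from the webcast.
    """
    low = hd_cams * HD_GB_PER_HOUR[0] + four_k_cams * FOUR_K_GB_PER_HOUR[0]
    high = hd_cams * HD_GB_PER_HOUR[1] + four_k_cams * FOUR_K_GB_PER_HOUR[1]
    return low + other_sensors_gb, high + other_sensors_gb

# 6 HD cameras plus a small sensor allowance lands inside the
# 8-30 GB/hour range quoted for a normal ADAS.
print(hourly_storage(hd_cams=6, four_k_cams=0))  # (14.0, 26.0)
```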
Q: Do you know of any specific storage requirement or design, in the car or the backend, for meeting UNECE 155/156? It’s specifically about software updates, hence the storage question.

[Ryan] Currently, there are no specific automotive requirements for storage products to meet UNECE 155/156. This regulation was developed by a regional commission of the UN focused on Europe. While security is a concern and will grow as cars become more connected, in my opinion an international regulation/standard needs to be agreed upon to ensure a consistent level of security for all vehicles in all regions.

Q: Does automotive storage need to be ASIL-B or ASIL-D certified?

[Ryan] Individual storage components are not ASIL certified, as the certification is completed at the system level. For example, systems like vision ADAS, anti-lock braking, and power steering (self-steering) require ASIL-D certification, the highest compliance level. Typically, components that mention a specific level of ASIL compliance have been evaluated at a system hardware level.

Q: What type of endurance does automotive storage need, given the average or 99th-percentile lifespan of a modern car?

[Ryan] It depends on how the storage device is being used. If the device is used for code/application storage, such as AI inference, the endurance requirement will be relatively low, as it only needs to support periodic updates of the code and of high-definition maps. Storage devices used for data logging, on the other hand, require a higher endurance level, as data is written during vehicle operation, uploaded to the cloud later (typically through a Wi-Fi connection) and then erased. This cycle is repeated every time the vehicle is driven.
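As a back-of-the-envelope illustration of that endurance point, the sketch below converts the daily write/upload/erase cycle Ryan describes into a lifetime total-bytes-written (TBW) requirement. All input values are illustrative assumptions, not figures from the webcast.

```python
# Estimate total bytes written (TBW) for a data-logging device that is
# filled while driving and erased after each upload, per the cycle
# described above. All numbers below are illustrative assumptions.
def required_tbw(gb_logged_per_day: float,
                 driving_days_per_year: int = 300,
                 vehicle_life_years: int = 15) -> float:
    """Return the lifetime write requirement in terabytes."""
    total_gb = gb_logged_per_day * driving_days_per_year * vehicle_life_years
    return total_gb / 1000.0  # GB -> TB

# Logging 20 GB per driving day over a 15-year vehicle life:
print(f"{required_tbw(20):.0f} TBW")  # -> 90 TBW
```

Under these assumed numbers a data logger would need roughly 90 TBW of endurance, while a code/map device updated only a few times a year would need a small fraction of that.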
Q: Will 5G change how much data vehicles can send and receive while driving?

[John] Eventually yes, because 5G allows higher wireless/cellular data rates. However, 5G antennas also have shorter range, so more antennas and base stations are required for coverage. This means 5G will roll out first in urban centers and will take time to reach more rural areas, so vehicles that drive to rural areas will not be able to count on always having the higher 5G data rates. 5G will also be used to connect vehicles in defined environments such as a school campus, bus/truck depot, factory, warehouse or police station. For example, a robot operating only within a warehouse could count on having 5G access all the time, and a bus, police car or ADAS/AD training car could store terabytes of data in the vehicle and upload it easily over a local 5G connection once it returns to the garage or station.

Q: In autonomous driving, are all the AI compute capabilities and AI rules or training stored inside each car? Or do AD cars rely somewhat on AI running somewhere in the cloud?

[John] Most of the AI rules for actually driving (AI inferencing) must be stored inside each car, because there isn’t enough time to consult a computer (or additional rules) stored in the cloud and still make real-time driving decisions. The training data and machine learning algorithms used to create those rules are typically stored in the cloud or in a corporate data center. Updated inferencing rules, navigation data, and vehicle system software updates can all be stored in the cloud and pushed out to vehicles periodically. Traffic or weather data can be stored in the cloud and sent to vehicles (or to phones in vehicles) as often as several times each minute.

Q: Does the chip shortage mean car companies are putting less storage inside new cars than they think they should?

[Ryan] Not from what I have seen. For vehicles currently in production, the designs are locked, and with a limited number of vehicles OEMs can produce, they have shifted production to higher-end models to maximize profit. This means the systems in these vehicles may actually use higher amounts of storage to support those models’ features. For new vehicle development, storage capacities continue to grow in order to enable new applications, including IVI and ADAS.

[John] Generally no; the manufacturers are still putting in whatever amount of storage they originally planned for each vehicle and simply limiting the number of vehicles built based on the supply of semiconductors, and the limitations tend to be across several types of chips, not just memory or storage chips. It’s possible some cars are using older, different, or more expensive storage components than originally planned in order to get around chip shortages, but the total amount of storage is unlikely to decrease.

Q: Can typical data storage inside a car be upgraded or expanded?

[Ryan] Due to the shock and vibration vehicles encounter during operation, storage devices typically come in a BGA package and are soldered onto a PCB for higher reliability. Increasing the density would require replacing the PCB with a new board carrying a higher-capacity storage device. Some new vehicles do include external USB ports that can take USB drives to store non-critical information, such as security camera footage recorded while the vehicle is parked.

Q: Given the critical nature of AD systems or even engine control software, do car makers do anything special with their storage to ensure high availability or high uptime? How does a car deal with storage failure?

[Ryan] Autonomous driving is a safety-critical system, and its reliability is examined at the system level. An AD system typically contains multiple SoCs, not only to handle the complex computational tasks but also for redundancy. In the event the main SoC fails, another SoC can take over to ensure the vehicle continues to operate safely. From a storage standpoint, each SoC typically uses its own storage device.

Q: You know those black boxes they put in planes (or cars) to record data in case of a crash? Those boxes are designed to survive crashes. Why can’t they build the whole car out of the same stuff?

[Ryan] While this would provide the ultimate level of safety for passengers, it is unfortunately not economically feasible. Scaling a black box from the approximate volume of a 2.5” hard drive up to the more than 120 ft³ of interior passenger and cargo volume in a standard mid-size vehicle would be cost prohibitive.

[John] It would be too expensive, and possibly too heavy, to build the entire car like a “black box” data recorder. Also, a black box only needs to make one small component or data store highly survivable, while the entire car needs to act as an impact-protection and energy-absorption system that maximizes the survivability of the occupants during and after an accident.

Q: What prevents hackers from breaching automotive systems and modifying the car’s software or deleting critical data?

[John] Automotive systems are typically designed with fewer remote access paths and tighter security to make them harder to breach. Usually, the systems require encrypted keys from the vehicle manufacturer for remote access, and some updates or data deletion may be possible only with physical access to the car’s data port. Also, certain data may be stored on flash or persistent memory within the vehicle to make it harder to delete.
Still, even with these precautions, a mistake or bug in the vehicle’s software or firmware could, in rare cases, allow a hacker to gain unauthorized access.

Q: Would most automotive storage run as block, file, or object storage?

[John] Most of the local storage inside a vehicle, and anything storing standardized databases or small logs, would probably be block storage, as that is typically easiest to use for local storage and/or structured data. Data center storage for AI or ADAS training, vehicle design, or aerodynamic/crash/FEA simulation is usually file-based storage, to allow easy sharing and technical computing across multiple servers. Archived data for vehicle design, training, simulation, videos or telemetry that is stored outside the vehicle is most likely to be object storage, because these are typically larger files of unstructured data that don’t change after creation and need to be retained for a long time.

Q: Does automotive storage need to use redundancy like RAID or erasure coding?

[Ryan] No, current single-device storage solutions with built-in ECC provide the required reliability. Implementing a RAID system or erasure coding would require multiple drives, significantly driving up the cost. Electronics currently account for 40% of a new vehicle’s total cost, and that share is expected to continue growing. Switching from an existing solution that meets system requirements to a storage solution that costs multiple times as much is not practical.
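To make John’s earlier block/file/object rule of thumb a bit more concrete, here is a toy Python helper that maps data characteristics to a storage type. The decision rules simply paraphrase his answer; the function and its inputs are hypothetical.

```python
def suggest_storage_type(immutable_archive: bool,
                         shared_across_servers: bool) -> str:
    """Map data traits to block/file/object per the rule of thumb above."""
    if immutable_archive:
        # Large unstructured files, write-once, long retention.
        return "object"
    if shared_across_servers:
        # AI/ADAS training and simulation shared across many servers.
        return "file"
    # Local and/or structured data: in-vehicle storage, databases, logs.
    return "block"

print(suggest_storage_type(False, False))  # block (in-vehicle, local)
print(suggest_storage_type(False, True))   # file (training/simulation)
print(suggest_storage_type(True, False))   # object (long-term archive)
```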


5G Industrial Private Networks and Edge Data Pipelines

Alex McDonald

Jan 5, 2022

The convergence of 5G, Edge Compute and Artificial Intelligence (AI) promises to be a catalyst for continued digital transformation. For many industries, it will be a game-changer in terms of how business is conducted. On January 27, 2022, the SNIA Cloud Storage Technologies Initiative (CSTI) will take on this topic at our live webcast “5G Industrial Private Networks and Edge Data Pipelines.” Advanced 5G is specifically designed to address the needs of verticals with capabilities like enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC) that enable near real-time distributed intelligence applications: for example, automated guided vehicles and autonomous mobile robots (AGVs/AMRs), wireless cameras, augmented reality for connected workers, and smart sensors across many verticals ranging from healthcare and immersive media to factory automation. Using this data, manufacturers are looking to maximize operational efficiency and process optimization by leveraging AI and machine learning. To do that, they need to understand and effectively manage the sources and trustworthiness of timely data. In this presentation, our SNIA experts will take a deep dive into how:
  • Edge can be defined and the current state of the industry
  • Industrial Edge is being transformed
  • 5G and Time-Sensitive Networking (TSN) play a foundational role in Industry 4.0
  • The convergence of high-performance wireless connectivity and AI create new data-intensive use cases
  • The right data pipeline layer provides persistent, trustworthy storage from edge to cloud
I encourage you to register today. Our experts will be ready to answer your questions.


Storage Life on the Edge

Tom Friend

Dec 20, 2021


Cloud to Edge infrastructures are rapidly growing. It is expected that by 2025, up to 75% of all data generated will be created at the Edge. However, “Edge” is a tricky word, and you’ll get a different definition depending on who you ask. The physical edge could be in a factory, retail store, hospital, car, or plane, at the cell tower, or on your mobile device. The network edge could be a top-of-rack switch, a server running host-based networking, or a 5G base station.

The Edge means putting servers, storage, and other devices outside the core data center and closer to both the data sources and the users of that data—both edge sources and edge users could be people or machines.

 This trilogy of SNIA Networking Storage Forum (NSF) webcasts will provide:

  1. An overview of Cloud to Edge infrastructures and performance, cost and scalability considerations
  2. Application use cases and examples of edge infrastructure deployments
  3. Cloud to Edge performance acceleration options

Attendees will leave with an improved understanding of compute, storage and networking resource optimization to better support Cloud to Edge applications and solutions.

At our first webcast in this series on January 26, 2022, “Storage Life on the Edge: Managing Data from the Edge to the Cloud and Back,” you’ll learn:

  • Data and compute pressure points: aggregation, near & far Edge
  • Supporting IoT data
  • Analytics and AI considerations
  • Understanding data lifecycle to generate insights
  • Governance, security & privacy overview
  • Managing multiple Edge sites in a unified way

Register today! We look forward to seeing you on January 26th.


A Q&A on Big Data in the Cloud

Chip Maurer

Dec 8, 2021

The title of our recent live SNIA Cloud Storage Technologies webcast, “Cloud Storage and Big Data, A Marriage Made in the Clouds,” might lead you to believe we were producing a new reality show, but of course that was not the case. This webcast with SNIA experts Chip Maurer, Vincent Hsu and Andy Longworth examined modernization challenges related to Big Data and key considerations for storing Big Data as workloads evolve. Our audience asked great questions during the live event. As promised, here are our experts’ answers.

Q. Is there much movement with open source object storage solutions, such as the OpenStack suite (Swift, etc.)?

A. Yes, there is no shortage of open source storage solutions. The decision depends upon your organization’s expertise, reliability, cost, application availability and location, and your overall storage strategy.

Q. What drives organizations to modernize?

A. Modernization decisions are based on an organization’s strategic priorities. Cost, performance and scalability are also frequently key factors.

Q. What pushes an organization to on-premises vs. cloud?

A. Government regulations, where data cannot leave the data center due to data privacy and data protection concerns, are a frequent reason for staying on-prem. Cost is another reason. Despite the benefits of cloud, in many cases it is less costly to keep data on-prem.

Q. Are universities producing graduates with the required current “Big Data” skills?

A. It seems to be university-specific. Many are on the cutting edge and offer several data science certificates and advanced degree programs.


Why Use Multiple Clouds?

Alex McDonald

Nov 22, 2021

As storing data in the cloud has become ubiquitous and mature, many organizations have adopted a multi-cloud strategy. Eliminating dependence on a single cloud platform makes a compelling case, with benefits including increased reliability, availability and performance, and avoidance of vendor lock-in and specific-vendor vulnerabilities, to name a few. In short, spanning multiple clouds ensures a business does not have all its eggs (i.e. its data) in one basket.

But multi-cloud environments are not without challenges. Taking advantage of the benefits without increasing complexity requires a strategy that ensures applications are not tightly coupled to cloud-specific technologies. Supporting a storage abstraction layer that insulates the application from the underlying cloud providers’ interfaces allows an application to be easily used with multiple clouds. It allows storage features specific to a cloud to be exposed in a standardized manner, and it enables data to be transparently accessed and migrated as needed in order to take advantage of cloud-specific features without the application being aware of the underlying mechanics, reducing or eliminating the limits and vulnerabilities of any one cloud (a toy sketch of such a layer follows the list below). How, and why, to support a storage abstraction layer will be the focus of our live SNIA Cloud Storage Technologies (CSTI) webcast on January 11, 2022, “Why Use Multiple Clouds?” where our experts will cover:
  • Risk mitigation of multiple clouds
  • Transparent movement of data from cloud to cloud
  • Political, regulatory and compliance considerations
  • Multi-cloud as part of a business continuity strategy
  • Exit cost reduction
  • Running work in parallel across clouds
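To illustrate the abstraction-layer idea, here is a minimal Python sketch of a cloud-agnostic storage interface with interchangeable backends. The class and method names are hypothetical; a real deployment would wrap each provider’s actual SDK behind the same interface.

```python
from abc import ABC, abstractmethod

class CloudObjectStore(ABC):
    """Cloud-agnostic storage interface: applications code against this,
    never against a specific provider's API."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(CloudObjectStore):
    """Stand-in backend; a real one would call a provider's SDK."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def migrate(src: CloudObjectStore, dst: CloudObjectStore,
            keys: list[str]) -> None:
    """Transparent cloud-to-cloud movement: copy objects between backends
    without the application knowing which clouds are involved."""
    for key in keys:
        dst.put(key, src.get(key))

# The app sees only CloudObjectStore, so swapping providers (or running
# work in parallel across clouds) requires no application changes.
primary, secondary = InMemoryStore(), InMemoryStore()
primary.put("report.csv", b"q1,q2\n1,2\n")
migrate(primary, secondary, ["report.csv"])
assert secondary.get("report.csv") == primary.get("report.csv")
```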
Register today. Our experts will be on-hand to answer your questions.


A Q&A on Discovery Automation for NVMe-oF IP-Based SANs

Tom Friend

Nov 22, 2021

In order to fully unlock the potential of NVMe® IP-based SANs, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe hosts and NVM subsystems. Several leading companies in the industry have joined together through NVM Express to collaborate on innovations to simplify and automate this discovery process. This was the topic of discussion at our recent SNIA Networking Storage Forum webcast “NVMe-oF: Discovery Automation for IP-based SANs,” where our experts, Erik Smith and Curtis Ballard, took a deep dive into the work that is being done to address these issues. If you missed the live event, you can watch it on demand here and get a copy of the slides. Erik and Curtis did not have time to answer all the questions during the live presentation. As promised, here are answers to them all.

Q. Is the Centralized Discovery Controller (CDC) highly available, and is this visible to the hosts? Do they see a pair of CDCs on the network and retry requests to a secondary if a primary is not available?

A. Each CDC instance is intended to be highly available. How this is accomplished will be specific to the vendor and deployment type. For example, a CDC running inside of an ESX-based VM can leverage VMware’s high availability (HA) and fault tolerance (FT) functionality. For most implementations the HA functionality for a specific CDC is expected to be implemented using methods that are not visible to the hosts. In addition, each host will be able to access multiple CDC instances (e.g., one per “IP-based SAN”). This ensures any problems encountered with any single CDC instance will not impact all paths between the host and storage. One point to note: it is not expected that there will be multiple CDC instances visible to each host via a single host interface. Although this is allowed per the specification, it does make it much harder for administrators to effectively manage connectivity.

Q. First: isn’t the CDC the perfect denial-of-service (DoS) attack target? Being the “name server” of NVMe-oF, when the CDC is compromised, no storage is available anymore. Second: the CDC should run as a multi-instance cluster to achieve high availability, or even better, be distributed like the name server in Fibre Channel (FC).

A. With regard to denial-of-service attacks: both FC’s name server and NVMe-oF’s CDC are susceptible to this type of problem, and both have the ability to mitigate these types of concerns. FC can fence or shut a port that has a misbehaving end device attached to it. The same can be done with Ethernet, especially when the “underlay config service” mentioned during the presentation is in use. In addition, the CDC’s role is slightly different from FC’s name server. If a denial-of-service attack were successfully executed against a CDC instance, existing host-to-storage connections would remain intact. Hosts that are rebooted, or disconnected and reconnected, could have a problem connecting to the CDC and reconnecting to storage via the IP SAN that is experiencing the DoS attack. For the second concern, it’s all about the implementation. Nothing in the standard prevents the CDC from running in an HA or FT mode. When the CDC is deployed as a VM, hypervisor-resident tools can be leveraged to provide HA and FT functionality. When the CDC is deployed as a collection of microservices running on the switches in an IP SAN, the services will be distributed.
One implementation available today uses distributed microservices to enable scaling to meet or exceed what is possible with an FC SAN today.

Q. Would the Dell SFSS CDC be offered as generic open source in the public domain so other third parties don’t have to develop their own CDC? If third parties do not use Dell’s CDC and develop their own, how would multi-CDC control work for a customer with multi-vendor storage arrays and multiple CDCs?

A. The Dell SFSS CDC will not be open source. However, the reason Dell worked with HPE and other storage vendors on the 8009 and 8010 specifications is to ensure that, whichever CDC instance the customer chooses to deploy, all storage vendors will be able to discover and interoperate with it. Dell’s goal is to create an NVMe IP-based SAN ecosystem. As a result, Dell will work to make its CDC implementation (SFSS) interoperate with every NVMe IP-based product, regardless of the vendor. The last thing anyone wants is for customers to have to worry about basic compatibility.

Q. Does this work only for IPv4? We’re moving toward IPv6-only environments.

A. Both IPv4 and IPv6 can be used.

Q. Will the discovery domains be limited to a single Ethernet broadcast domain, or will there be mechanisms to scale discovery out to other subnets, as we see with DHCP and DHCP relay?

A. By default, mDNS is constrained to a single broadcast domain. As a result, when you deploy a CDC as a VM, if the IP SAN consists of multiple broadcast domains per IP SAN instance (e.g., IP SAN A = VLANs/subnets 1, 2 and 3; IP SAN B = VLANs/subnets 10, 11 and 13), then you’ll need to ensure an interface from the VM is attached to each VLAN/subnet. However, creating a VM interface for each subnet is a sub-optimal user experience, and as a result there is the concept of an mDNS proxy in the standard. The mDNS proxy is just an mDNS responder that resides on the switch (a similar concept to a DHCP proxy) and can respond to mDNS requests on each broadcast domain, pointing the end devices to the IP address of the CDC (which could be on a different subnet). When you are selecting a switch vendor for your IP SAN, ask whether they support an mDNS proxy. If they do not, you will need to do extra work to get your IP SAN configured properly. When you deploy the CDC as a collection of services running on the switches in an IP SAN, one of these services could be an mDNS responder. This is how Dell’s SFSS will handle this situation. One final point about IP SANs that span multiple subnets: historically, these types of configurations have been difficult to administer because of the need to configure and maintain static route table entries on each host. NVM Express has done an extensive amount of work in 8010 to ensure that we can eliminate the need to configure static routes. For more information about the solution to this problem, take a look at nvme-stas on GitHub.

Q. A question for Erik: if mDNS turned out to be a problem, how did you work around it?

A. mDNS is actually a problem right now, because none of the implementations in the first release support it; this limitation is resolved in the second release. In any case, the only issue I am expecting with mDNS will be environments that don’t want to use it (for one reason or another) or can’t use it (because the switch vendor does not support an mDNS proxy). In these situations, you can administratively configure the IP address of the CDC on the host and storage subsystem.
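For readers curious what mDNS-based discovery looks like from the host side, below is a minimal Python sketch using the python-zeroconf library to browse for discovery controllers. Treat the `_nvme-disc._tcp` service-type string and the code itself as an illustrative assumption rather than a reference implementation; nvme-stas on GitHub is the production discovery client.

```python
# Minimal sketch: browse mDNS for NVMe-oF discovery controllers.
# Requires: pip install zeroconf. The service-type string below is an
# assumption for illustration; nvme-stas is the real implementation.
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

NVME_DISC_SERVICE = "_nvme-disc._tcp.local."

class CdcListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            # A responder (the CDC itself, or an mDNS proxy on the switch)
            # advertises the address/port the host should use for discovery.
            print(f"found {name}: {info.parsed_addresses()} port {info.port}")

    def remove_service(self, zc, type_, name) -> None: pass
    def update_service(self, zc, type_, name) -> None: pass

zc = Zeroconf()
browser = ServiceBrowser(zc, NVME_DISC_SERVICE, CdcListener())
input("Browsing for discovery controllers; press Enter to exit.\n")
zc.close()
```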
Q. Just a comment: in my humble opinion, slide 15 is the most important slide for helping people see, at a glance, what these things are. Nice slide.

A. Thanks. Slide 15 was one of the earliest diagrams we created to communicate the concept of what we’re trying to do.

Q. With Fibre Channel there are really only two HBA vendors and two switch vendors, so interoperability, even with vendor-specific implementations, is manageable. With Ethernet, there are many NIC and switch vendors. How will interoperability be ensured in this more complex ecosystem?

A. The FC discovery protocol is very stable. We have not seen any basic interop issues related to login and discovery for years. However, back in the early days (’98-’02), we needed to validate each HBA/driver/firmware combination, and do so for each OS we wanted to support. With NVMe/TCP, each discovery client is software-based and OS-specific (not HBA-specific). As a result, we will only have two host-based discovery client implementations for now (ESX and Linux; see nvme-stas) and a discovery client for each storage OS. To date, we have been pleasantly surprised at the lack of interoperability issues as storage platforms have started integrating with CDC instances, although it is likely we will see some issues as storage vendors start to integrate with CDC instances from other vendors.

Q. A lot of companies will want to use NVMe-oF over IP/Ethernet in a micro-segmented network, where there are many L3/routing hops to reach the target. This presentation went into the scalability of discovery, but not this aspect of scalability. Today, networks are L3 with smaller and smaller subnets and many L3 points.

A. This is a fair point. We didn’t have time to go into the work we have done to address the types of routing issues we’re anticipating and what we’ve done to mitigate them. However, nvme-stas, the Dell-sponsored open-source discovery client for Linux, demonstrates how we use the CDC to work around these types of issues. Erik is planning to write about this topic on his blog, brasstacksblog.typepad.com. You can follow him on Twitter @provandal to make sure you don’t miss it.


Cabling, Connectors and Transceivers Questions Answered

Tim Lustig

Nov 9, 2021


Our recent live SNIA Networking Storage Forum webcast, “Next-generation Interconnects: The Critical Importance of Connectors and Cables,” provided an outstanding tutorial on the latest in the impressive array of data center infrastructure components designed to address expanding requirements for higher bandwidth and lower power. Our presenters covered common pluggable connectors and media types, copper cabling and transceivers, and real-world use cases. If you missed the live event, it is available on-demand.

We ran out of time to answer all the questions from the live audience. As promised, here are answers to them all.

Q. For 25GbE, is the industry consolidating on one of the three options?

A. The first version of 25 GbE (i.e. 25GBASE-CR1) was specified by the Ethernet Technology Consortium around 2014.  This was followed approximately two years later by the IEEE 802.3 versions of 25 GbE (i.e. 25GBASE-CR-S and 25GBASE-CR). As a result of the timing, the first 25 GbE capable products to market only supported consortium mode. More recently developed switches and server products support both consortium and IEEE 802.3 modes, with the link establishment protocol favoring the IEEE 802.3 mode when it is available. Therefore, IEEE 802.3 will likely be the incumbent over the long term.  

Please note that there is no practical difference between the consortium and IEEE 802.3 modes for short reaches (<=3m) between end points, where a CA-25G-N cable is used. The longer cable reaches above 3m (CA-25G-L) require the IEEE 802.3 modes with forward error correction, which adds latency. The CA-25G-S type of cable is the least common. See slides 16 and 17 from the presentation.

Q. What's the maximum DAC cable length (passive and/or active) for 100G PAM4?

A. The IEEE P802.3ck specification for 100 Gb/s/lane copper cable PHYs targets a reach of at least 2 meters for a passive copper cable. Because the specification is still in development, the exact reach is still being worked out. Expect >2 meters for passive cables and 3-4 meters for active cables. Note that this is a reduction of reach from previous rates, as illustrated on slide 27, and that DAC cables are not for long range and are generally used for very short interconnections between devices. For longer reaches, AOC cables are preferred.

The passive copper cable length is primarily driven by the performance of the SERDES in the host (i.e. switch, server, FPGA, etc) and the construction materials of the cable assembly. Active copper cables (ACCs or AECs) use several different methods of signal conditioning to increase the cable reach; more sophisticated solutions have greater reach, greater latency and greater cost. See slide 35.

Q. What's the latency difference between active and passive DAC (PAM4 encoding)?

A. Passive copper cables do not contain signal conditioning electronics. Active copper cables (ACCs or AECs) use several different methods of signal conditioning to increase the cable reach; more sophisticated solutions have greater reach, greater latency and greater cost. See slide 35. A simple active cable may use one or more linear equalizer ICs, whereas a complex cable uses a full retimer that may have FEC logic embedded inside.

For 50G PAM4 rates, the difference in one-way latency between a passive copper cable and a simple active copper cable is ~20 nsec. The difference between a passive copper cable and a complex active copper cable could be as high as ~80 nsec.
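As a quick sanity check on those numbers, the sketch below combines an assumed copper propagation delay of roughly 5 ns per meter (a round-figure assumption, not from the webcast) with the ~20 ns and ~80 ns conditioning penalties quoted above to compare total one-way cable latency.

```python
# One-way cable latency estimate: propagation + signal conditioning.
# ~5 ns/m propagation in copper is an assumed round figure; the
# conditioning penalties come from the answer above.
PROPAGATION_NS_PER_M = 5.0
CONDITIONING_NS = {"passive": 0.0, "simple_active": 20.0,
                   "complex_active": 80.0}

def cable_latency_ns(length_m: float, kind: str) -> float:
    return length_m * PROPAGATION_NS_PER_M + CONDITIONING_NS[kind]

for kind in CONDITIONING_NS:
    print(f"3 m {kind}: {cable_latency_ns(3, kind):.0f} ns")
# 3 m passive: 15 ns; simple active: 35 ns; complex active: 95 ns
```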

Q. Can you comment on "gearbox" cables (200G 4-lane @56G to 100G 4-lane @28G)?

A. A few companies are supplying cables that have 50G PAM4 on one end and 25G NRZ on the other, with a gearbox between them. We see these as a niche product used to link new equipment to older equipment.

Q. Are you showing QSFP instead of OSFP on slide 29, in the image at the top right?

A. Good catch. That was a mistake on the slide. It has been corrected. Thanks.



Revving Up Storage for Automotive

Tom Friend

Nov 8, 2021


Each year cars become smarter and more automated. In fact, the automotive industry is effectively transforming the vehicle into a data center on wheels. Connectedness, autonomous driving, and media & entertainment all bring more and more storage onboard and into networked data centers. But all the storage in (and for) a car is not created equal. There are tens if not hundreds of different processors in a car today. Some are attached to storage, some are not, and each application demands different characteristics from the storage device.

The SNIA Networking Storage Forum (NSF) is exploring this fascinating topic on December 7, 2021 at our live webcast “Revving Up Storage for Automotive” where industry experts from both the storage and automotive worlds will discuss:

  • What’s driving growth in automotive storage?  
  • Special requirements for autonomous vehicles
  • Where is automotive data typically stored?
  • Special use cases
  • Vehicle networking & compute changes and challenges

Start your engines and register today to join us as we drive into the future!

