
Judging Has Begun – Submit Your Entry for the NVDIMM Programming Challenge!

Marty Foltyn

Nov 19, 2019


We’re 11 months into the Persistent Memory Hackathon program, and over 150 software developers have taken the tutorial and tried their hand at programming to persistent memory systems. AgigA Tech, Intel, SMART Modular, and Supermicro, members of the SNIA Persistent Memory and NVDIMM SIG, have now placed persistent memory systems with NVDIMM-Ns into the SNIA Technology Center as the backbone of the first SNIA NVDIMM Programming Challenge.

Interested in participating? Send an email to PMhackathon@snia.org to get your credentials. And do so quickly, as the first round of review for the SNIA NVDIMM Programming Challenge is now open. Any entrants who have progressed to a point where they would like a review are welcome to contact SNIA at PMhackathon@snia.org to request a time slot. SNIA will be opening review times in December and January as well. Submissions that meet a significant amount of the judging criteria described below, as determined by the panel, will be eligible for a demonstration slot to show the 400+ attendees at the January 23, 2020 Persistent Memory Summit in Santa Clara, CA.

Your program or results should be demonstrable visually over remote access to a PM-enabled server. Submissions will be judged by a panel of SNIA experts. Reviews will be scheduled at the convenience of the submitter and judges and conducted via conference call.

NVDIMM Programming Challenge Judging Criteria include:

Use of multiple aspects of NVDIMM/PM capabilities, for example:

  1. Use of larger DRAM/NVDIMM memory sizes
  2. Use of the DRAM speed of NVDIMM PMEM for performance
  3. Speed-up of application shut down or restart using PM where appropriate
  4. Recovery from crash/failure
  5. Storage of data across application or system restarts

Demonstrates other innovative aspects for a program or tool, for example:

  1. Uses persistence to enable new features
  2. Appeals across multiple aspects of a system, beyond persistence

Advances the cause of PM in some obvious way:

  1. Encourages the update of systems to broadly support PM
  2. Makes PM an incremental need in IT deployments

Program or results apply to all types of NVDIMM/PM systems, though exact results may vary across memory types.
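
To make criteria such as crash recovery and data that survives restarts more concrete, here is a minimal sketch that keeps a run counter in persistent memory using libpmem from the Persistent Memory Development Kit (PMDK). It is only one possible approach, and the pool path (/mnt/pmem/challenge-counter) is a hypothetical example; entries are free to use libpmemobj, memory-mapped DAX files, or any other method.

#include <libpmem.h>
#include <stdint.h>
#include <stdio.h>

#define POOL_SIZE (4 * 1024 * 1024)

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create or open a file on a DAX-mounted persistent memory filesystem
     * (the path below is a hypothetical example). */
    uint64_t *counter = pmem_map_file("/mnt/pmem/challenge-counter", POOL_SIZE,
                                      PMEM_FILE_CREATE, 0666,
                                      &mapped_len, &is_pmem);
    if (counter == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* The value written on a previous run is still here, even after an
     * application restart or a crash that happened after the flush. */
    (*counter)++;

    /* Flush CPU caches so the update is durable before we rely on it. */
    if (is_pmem)
        pmem_persist(counter, sizeof(*counter));
    else
        pmem_msync(counter, sizeof(*counter));

    printf("this program has run %llu times\n", (unsigned long long)*counter);
    pmem_unmap(counter, mapped_len);
    return 0;
}

Built with cc counter.c -lpmem on a system with PMDK installed, the newly created pool starts zeroed, so the counter reads 1 on the first run and keeps climbing across restarts.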

Questions? Contact Jim Fister, SNIA Hackathon Program Director, at pmhackathon@snia.org, and happy coding!


24G SAS Feature Focus: Active PHY Transmitter Adjustment

STA Forum

Nov 12, 2019

[Editor’s Note: This is part of a series of upcoming blog posts discussing 24G SAS features.]

By: Rick Kutcipal (SCSI Trade Association), with contributions from Tim Symons (Microchip), November 12, 2019

The first 24G SAS plugfest was held the week of June 24, 2019, and products are expected to begin to trickle out into the marketplace in 2020. Production solutions typically follow 12-18 months after the first plugfest. These storage technologies are tied to OEM server and storage vendors, which release solutions at a regular annual cadence, sometimes tied to next-generation Intel or AMD processors. Servers almost always lead the way, as they push the performance envelope and keep pace with faster processors, faster networking and faster storage.

To address the growing performance requirements of modern data centers, 24G SAS doubles the effective bandwidth of the previous SAS generation, 12Gb/s SAS. To achieve this bandwidth, the physical layer experiences significant electrical demands that technologies such as forward error correction and Active PHY Transmitter Adjustment (APTA) specifically address. These enhancements are required to maintain the rigorous quality and reliability demands of IT professionals managing enterprise and data center operations.

To ensure optimized transceiver operation, APTA automatically adjusts the SAS PHY transmitter coefficients to compensate for changes in environmental conditions that happen over time. This compensation is accomplished continuously, with no performance implications. In the SAS-4 standard, APTA defines specific binary primitives that the receiver PHY and the transmitter PHY exchange. These primitives request changes to the transmitter PHY coefficient settings and report transmitter status without requiring that the PHY terminate connections and enter a PHY reset sequence, ensuring continuous operation.

To ensure that no disruption to data transfer occurs and to allow the SAS PHY receiver to adapt to changes in the transmitter PHY settings, the time between each APTA change request is at least 1 ms. Making small, single-step changes allows the SAS PHY receiver to adjust to each change over a long period of active data bits. Small changes give the SAS PHY receiver time to equalize before the receiver algorithm calculates the next change request, if any changes are forthcoming. Small changes over a long time period ensure that the channel does not diverge from an operational range that could cause a link reset sequence.

The SAS link is not disrupted by the APTA process, and all connections, as well as data transfers, continue uninterrupted, yet optimized, by APTA. APTA states cannot be processed when the PHY is in the SP15: SAS_Phy_Ready state. The APTA process might take several seconds to complete; however, making small adjustments avoids SAS connection disruption and enables a PHY transmitter and PHY receiver to return to optimal signal integrity conditions for reliable data transfer.

While performance requirements continually increase within the modern IT infrastructure, the quality and reliability metrics remain unchanged. Technologies such as APTA enable 24G SAS to meet these requirements and ensure the most reliable and robust storage connectivity. Look for future installments of 24G SAS Feature Focus, as the SCSI Trade Association takes a closer look at key enhancements that make your storage faster and more reliable.
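
As a thought experiment only (this is not code from the SAS-4 specification), the loop below models the APTA idea described above: a receiver asks for one single-step transmitter coefficient change at a time, waits at least 1 ms between requests, and stops when its figure of merit no longer improves. The APTA primitives, coefficient encodings and PHY state machine are not modelled, and the figure-of-merit function is a made-up stand-in for a receiver's real equalizer metrics.

#include <stdio.h>
#include <unistd.h>   /* usleep() */

/* Hypothetical stand-in for the receiver's measured signal quality at a
 * given transmitter coefficient setting; real hardware would read its
 * equalizer/eye metrics instead. */
static double figure_of_merit(int coeff)
{
    int distance = coeff - 7;             /* pretend 7 is the optimal setting */
    return 100.0 - (double)(distance * distance);
}

int main(void)
{
    int coeff = 0;                        /* current transmitter coefficient */
    double best = figure_of_merit(coeff);

    for (;;) {
        usleep(1000);                     /* at least 1 ms between requests */

        double next = figure_of_merit(coeff + 1);
        if (next <= best)
            break;                        /* no further improvement: hold */

        coeff++;                          /* single-step change request */
        best = next;
        printf("APTA request: coefficient -> %d (figure of merit %.1f)\n",
               coeff, best);
    }

    printf("settled at coefficient %d after small, gradual adjustments\n", coeff);
    return 0;
}

The point of the small steps and the 1 ms spacing is the same as in the text: the far-end receiver gets many bit times to re-equalize after each change, so the link never drifts out of its operating range.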

24G SAS Enhancements: Active PHY Transmitter Adjustment (APTA)


A Q&A to Better Understand Storage Security

Steve Vanderlinden

Nov 1, 2019

Truly understanding storage security issues is no small task, but the SNIA Networking Storage Forum (NSF) is taking that task on in our Storage Networking Security Webcast Series. Earlier this month, we hosted the first in this series, “Understanding Storage Security and Threats,” where my SNIA colleagues and I examined the big picture of storage security, relevant terminology and key concepts. If you missed the live event, you can watch it on-demand. Our audience asked some great questions during the live event. Here are answers to them all.

Q. If I just deploy self-encrypting drives, doesn’t that take care of all my security concerns?

A. No, it does not. Self-encrypting drives can protect data if a drive gets misplaced or stolen, but they don’t protect it if the operating system or application that accesses data on those drives is compromised.

Q. What does “zero trust” mean?

A. “Zero Trust” is a security model that works on the principle that organizations should not automatically trust anything inside their network (typically inside the firewalls). In fact, they should not automatically trust any group of users, applications, or servers. Instead, all access should be authenticated and verified. What this typically means is granting the least amount of privileges and access needed based on who is asking for access, the context of the request, and the risk associated.

Q. What does white hat vs. black hat mean?

A. In the world of hackers, a “black hat” is a malicious actor who actually wants to steal your data or compromise your system for their own purposes. A “white hat” is an ethical (or reformed) hacker who attempts to compromise your security systems with your permission, in order to verify their security or find vulnerabilities so you can fix them. There are entire companies and industry groups that make looking for security vulnerabilities a full-time job.

Q. Do I need to hire someone to try to hack into my systems to see if they are secure?

A. To achieve the highest levels of information security, it is often helpful to hire a “white hat” hacker to scan your systems and try to break into them. Some organizations are required, by regulation, to do this periodically to verify the security of their systems. This is sometimes referred to as “penetration testing” or “ethical hacking” and can include physical as well as electronic testing of an infrastructure, or even directing suspicious calls and emails to employees to test their security training. All known IT security vulnerabilities are eventually documented and published. You might have your own dedicated security team that regularly tests your operating systems, applications and network for known vulnerabilities and performs penetration testing, or you can hire independent third parties to do this. Some security companies sell tools you can use to test your network and systems for known security vulnerabilities.

Q. Can you go over the difference between authorization and authentication again?

A. Authorization is a mechanism for verifying that a person or application has the authority to perform certain actions or access specific data. Authentication is a mechanism or process for verifying a person is who he or she claims to be. For example, use of passwords, secure tokens/badges, or fingerprint/iris scans upon login (or physical entry) can authenticate who a person is. After login or entry, the use of access control lists, color-coded badges, or permissions tables can determine what that person is authorized to do.

Q. Can you explain what non-repudiation means, and how you can implement it?

A. Non-repudiation is a method or technology that guarantees the accuracy or authenticity of information or an action, preventing it from being repudiated (successfully disputed or challenged). For example, a hash could ensure that a retrieved file is authentic, or a combination of biometric authentication with an audit log could prove that a particular person was the one who logged into a system or accessed a file.

Q. Why would an attacker want to infiltrate data into a data center, as opposed to exfiltrating (stealing) data out of the data center?

A. Usually malicious actors (hackers) want to exfiltrate (remove) valuable data. But sometimes they want to infiltrate (insert) malware into the target’s data center so this malware can carry out other attacks.

Q. What is ransomware, and how does it work?

A. Ransomware typically encrypts, hides or blocks access to an organization’s critical data; the malicious actor who sent it then demands payment or action from the organization in return for sharing the password that will unlock the ransomed data.

Q. Can you suggest some ways to track and report attacking resources?

A. Continuous monitoring tools such as Splunk can be used.

Q. Does “trust nobody” mean don’t trust the root/admin user as well?

A. Trust nobody means there should be no presumption of trust; instead we should authenticate and authorize all users and requests. For example, it could mean changing the default root/admin password, requiring most administrative work to use specific accounts (instead of the root/admin account), and monitoring all users (including root/admin) to detect inappropriate behavior.

Q. How do I determine my greatest vulnerability or the weakest link in my security?

A. Activities such as threat modeling and security assessments can assist in determining the weakest links.

Q. What does a “trust boundary” mean?

A. A trust boundary is a boundary where program data or execution changes its level of “trust,” for example, Internet vs. intranet.

We are busy planning out the rest of this webcast series. Please follow us on Twitter @SNIANSF for notifications of dates and times for each presentation.
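
To make the hash portion of the non-repudiation answer concrete, here is a minimal sketch in C using OpenSSL's EVP interface that prints the SHA-256 digest of a file. Publishing or signing that digest lets a later reader confirm the retrieved file is unchanged; a full non-repudiation scheme would add a digital signature and an audit trail, which this sketch does not show.

#include <openssl/evp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    /* Stream the file through a SHA-256 digest context. */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, digest, &len);

    /* Print the digest as hex, followed by the file name. */
    for (unsigned int i = 0; i < len; i++)
        printf("%02x", digest[i]);
    printf("  %s\n", argv[1]);

    EVP_MD_CTX_free(ctx);
    fclose(f);
    return 0;
}

Build with cc sha256.c -lcrypto; if the printed digest matches the one recorded when the file was stored, the file has not been altered.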


Software Defined Storage Q&A

Tim Lustig

Oct 25, 2019

The SNIA Networking Storage Forum (NSF) recently hosted a live webcast, What Software Defined Storage Means for Storage Networking, where our experts, Ted Vojnovich and Fred Bower, explained what makes software defined storage (SDS) different from traditional storage. If you missed the live event, you can watch it on-demand at your convenience. We had several questions at the live event, and here are our experts’ answers to them all:

Q. Are there cases where SDS can still work with legacy storage, so that high-priority flows such as online transaction processing (OLTP) use the SAN on legacy storage while lower-priority and backup data flows use the SDS infrastructure?

A. The simple answer is yes. Like anything else, companies are using different methods and architectures to resolve their compute and storage requirements, just as public cloud may be used for some non-sensitive data and in-house cloud or traditional storage for sensitive data. Of course, this adds costs, so benefits need to be weighed against the additional expense.

Q. What is the best way to mitigate unpredictable network latency that can go outside the bounds of a storage service level agreement (SLA)?

A. There are several ways to mitigate latency. Generally speaking, increased bandwidth contributes to better network speed because the “pipe” is essentially larger and more data can travel through it. There are other means as well to reduce latency, such as the use of offloads and accelerators. Remote Direct Memory Access (RDMA) is one of these and is being used by many storage companies to help handle the increased capacity and bandwidth needed in flash storage environments. Edge computing should also be added to this list, as it relocates key data processing and access points from the center of the network to the edge, where data can be gathered and delivered more efficiently.

Q. Can you please elaborate on SDS scaling in comparison with traditional storage?

A. Most SDS solutions are designed to scale out both performance and capacity to avoid bottlenecks, whereas most traditional storage has always had limited scalability, scaling up in capacity only. This is because as a scale-up storage system begins to reach capacity, the controller becomes saturated and performance suffers. The workaround for this problem with traditional storage is to upgrade the storage controller or purchase more arrays, which can often lead to unproductive and hard-to-manage silos.

Q. You didn’t talk much about distributed storage management and namespaces (i.e., NFS or AFS)?

A. Storage management consists of monitoring and maintaining storage health, platform health, and drive health. It also includes storage provisioning such as creating each LUN/share/etc., or binding LUNs to controllers and servers. On top of that, storage management involves storage services like disk groups, snapshot, dedupe, replication, etc. This is true for both SDS and traditional storage (Converged Infrastructure and Hyper-Converged Infrastructure will leverage this ability in storage). NFS is predominantly a non-Windows (Linux, Unix, VMware) file storage protocol, while AFS is no longer popular in the data center and has been replaced as a file storage protocol by either NFS or SMB (in fact, it’s been a long time since anybody mentioned AFS).

Q. How does SDS affect storage networking? Are SAN vendors going to lose customers?

A. SAN vendors aren’t going anywhere because of the large existing installed base, which isn’t going quietly into the night. Most SDS solutions focus on Ethernet connectivity (as the diagrams show), while traditional storage is split between Fibre Channel and Ethernet; InfiniBand is more of a niche storage play for HPC and some AI or machine learning customers.

Q. Storage costs for SDS are highly dependent on scale and replication or erasure coding. An erasure-coded multi-petabyte solution can be significantly less expensive than a traditional storage solution.

A. It’s a tradeoff between processing complexity and the cost of additional capacity. Erasure coding is processing-intensive but requires less storage capacity. Making copies uses less processing power but consumes more capacity. It is true to say replicating copies uses more network bandwidth. Erasure coding tends to be used more often for storage of large objects or files, and less often for latency-sensitive block storage.

If you have more questions on SDS, let us know in the comment box.
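
As a rough illustration of that last tradeoff, the snippet below compares the raw capacity needed to hold 1 PB of usable data under 3-way replication versus a hypothetical 10+2 erasure-coding layout. The layouts and the 1 PB figure are assumptions chosen for the arithmetic, not numbers from the webcast.

#include <stdio.h>

int main(void)
{
    double usable_pb = 1.0;                     /* data the application sees */

    double replica_raw = usable_pb * 3.0;       /* three full copies */
    double ec_raw = usable_pb * (10.0 + 2.0) / 10.0;  /* 10 data + 2 parity */

    printf("3x replication : %.1f PB raw (%.0f%% overhead)\n",
           replica_raw, (replica_raw / usable_pb - 1.0) * 100.0);
    printf("10+2 erasure   : %.1f PB raw (%.0f%% overhead)\n",
           ec_raw, (ec_raw / usable_pb - 1.0) * 100.0);
    return 0;
}

The output (3.0 PB raw versus 1.2 PB raw) shows why erasure coding wins on capacity at scale, while the extra encode/decode work and rebuild traffic are the price paid for that saving.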


Learn the Latest on Persistence at the 2020 Persistent Memory Summit

Marty Foltyn

Oct 21, 2019

The 2020 SNIA Persistent Memory Summit is coming to the Hyatt Regency Santa Clara on Thursday, January 23, 2020. The day before, on January 22, an expanded version of the SNIA Persistent Memory Hackathon will return, co-located again with the SNIA Annual Members Symposium. We’ll share Hackathon details in an upcoming SNIA Solid State blog.

For those who have already attended a Persistent Memory Summit, you will find significant changes in the makeup of the agenda. For those who have never attended, the new agenda might also be an opportunity to learn more about development options and experiences for persistent memory. The focus of the 2020 Summit will be on tool and application development for systems with persistent memory. While there is significant momentum for applications, some companies and individuals are still hesitant.

The recent Persistent Programming in Real Life (PIRL) conference in San Diego focused on corporate and academic development efforts, specifically diving into the experience of developing for persistent memory. A great example of the presentations at PIRL is one on ZUFS from Shachar Sharon of NetApp, a SNIA member company.

The Persistent Memory Summit will have several similar talks focusing on the experience of delivering applications for persistent memory. While obviously of benefit to developers and software management, the presentations will also serve the needs of hardware attendees by highlighting the process that applications will follow to utilize new hardware. Likewise, companies interested in exploring persistent memory in IT infrastructure can benefit from understanding the implementations that are available.

The Summit will also feature some of the entries to the SNIA NVDIMM Programming Challenge announced at the SNIA Storage Developer Conference. Check out the “Show Your Persistent Stuff” blog for all the details. If you haven’t registered to participate, opportunities are still available.

Registration for the Persistent Memory Summit is complimentary and includes a demonstration area, lunch, and reception. Don’t miss this event!


24G SAS Feature Focus: Forward Error Correction

STA Forum

Oct 21, 2019

[Editor’s Note: This is the first in a series of upcoming blog posts discussing 24G SAS features.]

By: Cameron T. Brett (SCSI Trade Association), with contributions from Tim Symons (Microchip) and Alvin Cox (Seagate), October 21, 2019

Serial Attached SCSI (SAS) and the SCSI protocol continue to innovate with one of the storage industry’s longest-running technology roadmaps. Today’s 12Gb/s SAS is moving to 24G, and some 24G ecosystem elements are sampling today. SAS became the industry benchmark for high reliability, availability and scalability (RAS) by delivering per-lane performance, data integrity, flexible architectures and scalability in enterprise servers and storage for decades. SAS is becoming less expensive to deploy with the highest-capacity SSD and HDD storage devices, along with the introduction of Value SAS.

So what’s next for SAS? The SAS-4 specification is in the publication process and products are well into development. 24G SAS brings a significant performance speed bump, nearly doubling the current data rate, but it also brings SSD, QoS, data reliability and signal improvements.

20-bit Forward Error Correction (FEC)

In short, FEC enhances data integrity as electrical signal transmission rates increase (such as from 12Gb/s to 22.5Gb/s). Faster data rates increase the probability of bit errors at the physical layer, so improved error correction is required. Forward error correction enables the signal receiver to correct errors without re-transmitting the data. Without going too far into the technical weeds, Reed-Solomon codes are used to create parity elements, or “symbols,” that are transmitted with the data. SAS uses a very short (150-bit) code length for low latency. The 20 bits of FEC code can fix up to two errors in each transmission.

Data transmission with 20-bit Forward Error Correction (FEC)

A benefit of using FEC is a reduced number of re-transmissions, or retries. Data integrity is improved, and that means SAS can successfully transmit more transactions than other protocols that do not use FEC by reducing the need to retransmit when corrupted data is found.

Where Does FEC Benefit the System?

During transmission of data over a SAS-4 (24G SAS) channel, the signal is impacted by physical impairments, including crosstalk, reflections and poor signal-to-noise ratio, resulting in bit errors at the receiving end of the transmission. If a bit error is uncorrectable, even by FEC, the data will be transmitted again, so the fewer the re-transmissions the better. The FEC implementation in the SAS-4 specification will correct most bit errors, but occasionally one will get through, which translates to one uncorrectable bit error every 12.3 hours. Not too bad, having to do a re-transmission only about twice a day, compared with multiple re-transmissions if FEC is not available.
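
For the curious, the 12.3-hour figure is easy to sanity-check. Assuming a 22.5 Gb/s physical-layer rate and a post-FEC residual bit error rate of 1e-15 (the BER value here is our assumption for the arithmetic, not a number quoted above), the time between uncorrectable bit errors works out as follows.

#include <stdio.h>

int main(void)
{
    double line_rate_bps = 22.5e9;   /* 24G SAS physical-layer rate */
    double residual_ber  = 1e-15;    /* assumed post-FEC error rate */

    /* Expected seconds per uncorrectable bit error. */
    double seconds_per_err = 1.0 / (line_rate_bps * residual_ber);

    printf("~%.1f hours between uncorrectable bit errors\n",
           seconds_per_err / 3600.0);  /* prints ~12.3 hours */
    return 0;
}

At that rate a single lane sees roughly two uncorrectable errors per day, which matches the "twice a day" observation above.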

24G SAS Enhancements: 20-bit Forward Error Correction (FEC)


How Facebook & Microsoft Leverage NVMe™ Cloud Storage

J Metz

Oct 9, 2019

What do Hyperscalers like Facebook and Microsoft have in common? Find out in our next SNIA Networking Storage Forum (NSF) webcast, How Facebook and Microsoft Leverage NVMe Cloud Storage, on November 19, 2019, where you’ll hear how these cloud market leaders are using NVMe SSDs in their architectures. Our expert presenters, Ross Stenfort, Hardware System Engineer at Facebook, and Lee Prewitt, Principal Hardware Program Manager, Azure CSI at Microsoft, will provide a close-up look into their application requirements and challenges, why they chose NVMe flash for storage, and how they are successfully deploying NVMe to fuel their businesses. You’ll learn:
  • IOPs requirements for Hyperscalers
  • Challenges when managing at scale
  • Issues around form factors
  • Need to allow for "rot in place"
  • Remote debugging requirements
  • Security needs
  • Deployment success factors
I hope you will join us for this look at NVMe in the real world. Our experts will be on-hand to answer your questions during and after the webcast. Register today. We look forward to seeing you on November 19th.
