
Opportunity for Persistent Memory is Now

Marty Foltyn

Jan 7, 2019

It’s very rare that there is a significant change in computer architecture, especially one that is almost immediately pervasive across the breadth of a market segment. It’s even rarer when such a fundamental change is supported in a way that lets software developers quickly adapt existing software architecture. Most significant transitions require a ground-up rethink to achieve performance or reliability gains, and the cost-benefit analysis generally pushes adoption of the new technology out over multiple revisions rather than one big jump. In the last decade, the growth of persistent memory has bucked this trend.

The introduction of the solid-state disk (SSD) made an immediate impact on existing software, especially in the server market. Any program that relied on many small read/write cycles to disk realized significant performance gains. In cases such as multi-tiered databases, the software found a “new tier” of storage nearly automatically and started partitioning data to it. In an industry where innovation takes years, this improvement took a matter of months to proliferate across new deployments.

While the SSD is now a standard consideration, there is still unexplored opportunity in solid-state storage. The NVDIMM form factor has existed for quite some time, providing data persistence significantly closer to the processing units in modern servers and workstations. Many developers, however, are not aware that programming models already exist to easily capture simple performance and reliability gains, for both byte and block access. Moreover, new persistent memory innovations on the horizon will increase the density and performance of DIMM form factors. Perhaps it’s time for more software architectures to adapt to this exciting technology. The barriers to innovation are low, and the opportunity is significant.
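As a rough illustration of the byte-access programming model mentioned above, the sketch below memory-maps a file and updates it with ordinary byte-level stores instead of block read/write calls. It uses a regular file as a stand-in for a DAX-mapped persistent memory region; a real deployment would typically map a file on a pmem-aware filesystem and use a library such as PMDK’s libpmem to flush CPU caches to the persistence domain:

```python
import mmap

PATH = "/tmp/pmem_demo.bin"   # stand-in for a file on a DAX-mounted pmem filesystem
SIZE = 4096

# Create and size the backing file (on real hardware this would live on, e.g., /mnt/pmem).
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

# Map the file and update it with ordinary byte-level stores -- the key idea of
# byte access: no read/modify/write of whole blocks through the I/O stack.
with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    mm[0:5] = b"hello"
    mm.flush()   # analogous to flushing stores to the persistence domain
    mm.close()

# After remapping (or a restart), the bytes are still there.
with open(PATH, "rb") as f:
    assert f.read(5) == b"hello"
```

The same mapped-region idea is what lets existing software pick up persistent memory with modest changes: data structures live directly in the mapping rather than being serialized to a block device.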
Throughout 2019, SNIA will sponsor several workshops dedicated to opening up persistent memory programming to the developer community. The first of these is a Persistent Memory Programming Hackathon at the Hyatt Regency Santa Clara, CA on January 23, 2019, the day before the SNIA Persistent Memory Summit. Developers will have the opportunity to work with experienced software architects to learn how to quickly adapt code to use new persistent memory modes in a hackathon format. Learn more and register at this link.

Don’t miss the opportunity to move on a strategic software inflection point ahead of the competition. Consider attending the 2019 SNIA Persistent Memory Summit and exploring the opportunity with persistent memory.


Exceptional Agenda – and a Hackathon – Highlight the 2019 SNIA Persistent Memory Summit

Marty Foltyn

Jan 5, 2019

SNIA’s 7th annual Persistent Memory Summit – January 24, 2019 at the Hyatt Regency Santa Clara, CA – delivers a far-reaching agenda exploring exciting new topics with experienced speakers:
  • Paul Grun of OpenFabrics Alliance and Cray on the Characteristics of Persistent Memory
  • Stephen Bates of Eideticom, Neal Christiansen of Microsoft, and Eric Kaczmarek of Intel on Enabling Persistent Memory through OS and Interpreted Languages
  • Adam Roberts of Western Digital on the Mission Critical Fundamental Architecture for Numerous In-memory Databases
  • Idan Burstein of Mellanox Technologies on Making Remote Memory Persistent
  • Eden Kim of Calypso Systems on Persistent Memory Performance Benchmarking and Comparison
And much more! Full agenda and speaker bios at http://www.snia.org/pm-summit. Registration is complimentary and includes the opportunity to tour demonstrations of persistent memory applications available today from the SNIA Persistent Memory and NVDIMM SIG, SMART Modular, AgigA Tech, and Viking Technology over lunch, at breaks, and during the evening Networking Reception. Additional sponsorship opportunities are available to SNIA and non-SNIA member companies – learn more.

New Companion Event to the Summit – Persistent Memory Programming Hackathon, Wednesday January 23, 2019, 9:00 am – 2:00 pm. Join us for the inaugural PM Programming Hackathon on the day before the Summit – a half-day program designed to give software developers an understanding of the various tiers and modes of Persistent Memory and the existing methods available to access them. Learn more and register at https://www.snia.org/pm-summit/hackathon


Understanding Composable Infrastructure

Alex McDonald

Jan 3, 2019

Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required.

On February 13, 2019, the SNIA Cloud Storage Technologies Initiative will examine what’s happening with composable infrastructure in our live webcast, Why Composable Infrastructure? In this webcast, SNIA experts will discuss:
  • What prompted the development of composable infrastructure?
  • What is composable infrastructure?
  • What are the enabling technologies and potential solutions (not just what’s here, but what’s needed…)?
  • An update on the current status of composable infrastructure standards/products, and where we might be in two to five years
Our goal is to clearly explain the reasoning behind and the benefits of composable infrastructure in an educational, vendor-neutral way. We hope you’ll join us. Our experts will be on hand to answer your questions. Register today to save your spot.


Emerging Memory Questions Answered

Marty Foltyn

Dec 28, 2018

With a topic like Emerging Memory Poised to Explode, no wonder this SNIA Solid State Storage Initiative webcast generated so much interest! Our audience had some great questions, and, as promised, our experts Tom Coughlin and Jim Handy provide the answers in this blog. Read on, and join SNIA at the Persistent Memory Summit January 24, 2019 in Santa Clara, CA. Details and complimentary registration are at www.snia.org/pm-summit.

Q: Can you mention one or two key applications leading the effort to leverage Persistent Memory?

A: Right now the main applications for Persistent Memory are in Storage Area Networks (SANs), where NVDIMM-Ns (Non-Volatile Dual In-line Memory Modules) are being used for journaling. SAP HANA, SQL Server, Apache Ignite, Oracle RDBMS, eXtremeDB, Aerospike, and other in-memory databases are undergoing early deployment with NVDIMM-N and with Intel’s Optane DIMMs in hyperscale datacenters. IBM is using Everspin Magnetoresistive Random-Access Memory (MRAM) chips for higher-speed functions (write cache, data buffer, streams, journaling, and logs) in certain Solid State Drives (SSDs), following a lead taken by Mangstor. Everspin’s STT MRAM DIMM is also seeing some success, but the company’s not disclosing a lot of specifics.

Q: I believe that anyone who can ditch the batteries for NVDIMM support will happily pay a mark-up on 3D XPoint DIMMs should Micron offer them.

A: Perhaps that’s true. I think that Micron, though, is looking for higher-volume applications. Micron is well aware of the size of the NVDIMM-N market, since the company is an important NVDIMM supplier. Everspin is probably also working on this opportunity, since its STT MRAM DIMM is similar, although at a significantly higher price than Dynamic Random Access Memory (DRAM). Volume is the key to more applications for 3D XPoint DIMMs and any other memory technology. It may be that the rise of Artificial Intelligence (AI) applications will help drive the greater use of many of these fast Non-Volatile Memories.

Q: Any comments on HPE’s Memristor?

A: HPE went very silent on the Memristor at about the same time that 3D XPoint memory was introduced. The company explained in 2016 that the first generation of “The Machine” would use DRAM instead of the Memristor. This leads us to suspect that 3D XPoint turned some heads at HPE. One likely explanation is that HPE by itself would have a very difficult time reaching the scale required to bring the Memristor’s cost down to the level necessary to justify its use.

Q: Do you expect NVDIMM-N will co-exist into the future with other storage class memories because of its speed and the essentially unlimited endurance of DRAM?

A: Yes. The NVDIMM-N should continue to appeal to certain applications, especially those that value its technical attributes enough to offset its higher-than-DRAM price.

Q: What are the write/erase endurance limitations of PCM and STT MRAM (vis-à-vis DRAM’s effectively infinite endurance)?

A: Intel and Micron have never publicly disclosed their endurance figures for 3D XPoint, although Jim Handy has backed out numbers in his Memory Guy blog (http://TheMemoryGuy.com/examining-3d-xpoints-1000-times-endurance-benefit/). His calculations indicate an endurance of more than 30K erase/write cycles, but the number could be significantly lower than this since SSD controllers do a good job of reducing the number of writes that the memory chip actually sees. There’s an SSD Guy series on this: http://thessdguy.com/how-controllers-maximize-ssd-life/, also available as a SNIA SSSI TechNote. Everspin’s EMD3D256M STT MRAM specification lists an endurance of 10^10 cycles.

Q: Your thoughts on Nanotube RAM (NRAM)?

A: Although the nanotube memory is very interesting, it is only one member in a sea of contenders for the Persistent Memory crown. It’s very difficult to project the outcome of a device that’s not already in volume production.

Q: Will Micron commercialize 3D XPoint? I do not see them in the market as much as Intel with Optane.

A: Micron needs a clear path to profitability to rationalize entering the 3D XPoint market, whereas Intel can justify losing money on the technology. Learn why in an upcoming post on The Memory Guy blog.

Thanks again to the bearded duo and their moderator, Alex McDonald, SNIA Solid State Storage Initiative Co-Chair! Bookmark the SNIA BrightTALK webcast link for more great webcasts in 2019!
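As a back-of-the-envelope illustration of why those endurance figures matter, the sketch below estimates how long a module would last under continuous worst-case writes. The specific numbers (a 256 GB module written at a sustained 2 GB/s) are hypothetical, chosen only for illustration, and as the answer above notes, real controllers reduce the writes the media actually sees:

```python
def dimm_lifetime_years(capacity_bytes, write_bw_bytes_per_s, endurance_cycles):
    """Crude lifetime estimate: total writable bytes divided by write rate.

    Assumes perfect wear leveling and continuous writes -- a worst case,
    not a prediction for any real product.
    """
    total_writable_bytes = capacity_bytes * endurance_cycles
    seconds = total_writable_bytes / write_bw_bytes_per_s
    return seconds / (365 * 24 * 3600)

# Hypothetical 256 GB module at 2 GB/s sustained, 30K-cycle endurance:
# roughly 0.12 years (about 44 days) of continuous writing.
pcm_years = dimm_lifetime_years(256 * 2**30, 2 * 2**30, 30_000)

# The same module at STT MRAM's quoted 10^10 cycles would outlast any
# realistic deployment by orders of magnitude.
mram_years = dimm_lifetime_years(256 * 2**30, 2 * 2**30, 10**10)
```

The gap between the two results is the practical meaning of the endurance numbers quoted above: 30K cycles demands controller-level write reduction, while 10^10 cycles effectively removes endurance as a design constraint.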


Networking for Hyperconvergence

Alex McDonald

Dec 21, 2018

"Why can't I add a 33rd node?" One of the great advantages of Hyperconverged infrastructures (also known as "HCI") is that, relatively speaking, they are extremely easy to set up and manage. In many ways, they're the "Happy Meals" of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.

In practice, though, many consumers of HCI have found that the "add networking" part isn't quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the "back end" communication, it's possible to severely underestimate or misunderstand the requirements necessary to run a seamless environment. At some point, "just add more nodes" becomes a more difficult proposition.

That's why the SNIA Networking Storage Forum (NSF) is hosting a live webcast "Networking Requirements for Hyperconvergence" on February 5, 2019. At this webcast, we're going to take a look behind the scenes, peek behind the GUI, so to speak. We'll be talking about what goes on back there, and shine the light behind the bezels to see:
  • The impact of metadata on the network
  • What happens as we add additional nodes
  • How to right-size the network for growth
  • Networking best practices to make your HCI work better
  • And more...
Now, not all HCI environments are created equal, so we'll say in advance that your mileage will vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place. Register here to save your spot for February 5th. Our experts will be on hand to answer your questions.

This webcast is the second installment of our Storage Networking series. Our first was "Networking Requirements for Ethernet Scale-Out Storage." It's available on-demand, as are all our educational webcasts. I encourage you to peruse the more than 60 vendor-neutral presentations in the NSF webcast library at your convenience.
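To make the "back end" communication point concrete, here is a hedged back-of-the-envelope sketch (hypothetical numbers, not tied to any particular HCI product) of how replicated writes multiply east-west traffic on the cluster network:

```python
def backend_traffic_gbps(ingest_gbps, replication_factor):
    """Estimate extra inter-node ("east-west") traffic from replicated writes.

    Every byte a client writes must be copied to (replication_factor - 1)
    peer nodes, so the cluster network carries that multiple of the client
    ingest rate -- before metadata chatter, rebuilds, or VM migrations.
    """
    return ingest_gbps * (replication_factor - 1)

# Hypothetical: 10 Gb/s of client writes with 3-way replication generates
# an additional 20 Gb/s of east-west traffic across the cluster.
extra = backend_traffic_gbps(10, 3)
```

This is why "just add more nodes" eventually hits a wall: client-facing bandwidth is only part of the load, and the replication multiple rides on the same links unless the network is sized for it.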


Registration Now Open and Agenda Topics Posted for the 2019 SNIA Persistent Memory Summit

kristin.hauser

Dec 17, 2018

Don't miss your chance to attend SNIA's 7th Annual Persistent Memory Summit, co-located with the SNIA Annual Members' Meeting on January 24, 2019 at a new location – the Hyatt Regency Santa Clara, CA. This innovative one-day event brings together industry leaders, solution providers, and users of technology to understand the ecosystem driving system memory and storage into a single, unified "persistent memory" entity.

Agenda topics include Enabling Persistent Memory through the Operating System and Interpreted Languages; PM Solutions, Interfaces, and Media; and the NVM Programming Model in the Real World. The final agenda will be live later this month, so stay tuned!

Many thanks to SNIA member Intel Corporation and the SNIA Solid State Storage Initiative for underwriting the Summit. New to the Summit in 2019 is an evening networking reception and a new, expanded demonstration area. Gold and Demonstration sponsor opportunities are now available. Complimentary registration is now open – visit www.snia.org/pm-summit to sign up, check out videos of 2018 sessions, and learn how to showcase your PM solutions at the event.
