
Does Your World Include Storage? Don't Miss SDC!

khauser

Aug 18, 2017

Whether storage is already a main focus of your career or may be advancing toward you, you'll definitely want to attend the flagship event for storage developers, and for those involved in storage operations, decision making, and usage: SNIA's 19th annual Storage Developer Conference (SDC), September 11-14, 2017 at the Hyatt Regency Santa Clara, California. The SNIA Technical Council has again put together a wide-ranging technical agenda featuring more than 125 industry experts from 60 companies and industry organizations, including Dell/EMC, Docker, FCIA, Google, Hitachi, HPE, IBM, Intel, Microsoft, NetApp, Oracle, Samsung, SAP, STA, and Toshiba. Over four days, network with fellow architects, developers, integrators, and users, and choose from 100+ sessions, three plugfests, and six Birds-of-a-Feather deep dives on a wide range of cutting-edge technologies. Current General Session speakers are Sage Weil from Red Hat on Building a New Storage Backend for Ceph and Martin Petersen from Oracle on Recent Developments in the Linux I/O Stack. Among the 15+ topic areas featured at the conference are sessions on:
  • Flash and Persistent Memory
  • Big Data, Analytics, and the Internet of Things
  • Storage Resource Management
  • Storage Performance and Workloads
  • Containers
  • Object and Object Drive Storage
  • Cloud Storage
  • Storage Security and Identity Management
  • Data Performance and Capacity Optimization
Network with our sponsors Intel, Cisco, IBM, Kalray, Radian, OpenSDS, Celestica, Chelsio, MemoScale, Newisys, SerNet, and Xilinx, and check out special demonstrations in our "Flash Community" area. If you're a vendor wanting to test product interoperability, grab this chance to participate in one or more of the SDC plugfests underwritten by Microsoft, NetApp, the SNIA Cloud Storage Initiative, and the SNIA Storage Management Initiative (SMI): Cloud Interoperability, SMB3, and an SMI Lab focused on SNIA Swordfish™, open to all with SNIA Swordfish™ implementations. Find all the details here. Plan to attend our Plugfest open house on Monday evening, the welcome reception on Tuesday evening, and a special SNIA 20th anniversary celebration open to SDC attendees and invited guests on Wednesday, September 13. Registration is now open at storagedeveloper.org, where the agenda and speaker list are live. Don't know much about SDC? Watch a conference overview here and listen to SDC podcasts here. See you in Santa Clara!


A Q&A on Containers and Persistent Memory

Chad Thibodeau

Aug 18, 2017

The SNIA Cloud Storage Initiative recently hosted a live webcast, "Containers and Persistent Memory," where my colleagues and I discussed persistent storage for containers, persistent memory for containers, infrastructure software changes for persistent memory-based containers, and what SNIA is doing to advance persistent memory. If you missed the live event, it's now available on-demand. You can also download a PDF of the webcast slides. As promised, we are providing answers to the questions we received during the live event.

Q. How is "Enterprise Server SAN" different from "Traditional Server SAN"?
A. Traditional Server SAN refers to individual servers connected to a dedicated, separate SAN storage solution (e.g., EMC VNX, NetApp FAS, etc.), whereas Enterprise Server SAN refers to direct-attached storage that is aggregated across multiple connected servers to create a "virtual SAN." It is not a separate storage solution; instead, it uses the existing capacity contained within the application servers as a virtualized, shared pool to improve overall efficiency.

Q. Are there any performance studies done with containers running Tier 1/business-critical apps?
A. There have been performance characterizations of Tier 1, business-critical applications such as Oracle, MySQL and others. However, these are vendor specific, and users would have to contact and work with each storage vendor to better understand their specific performance capabilities.

Q. Even though Linux and Microsoft support NVDIMM natively, does the motherboard/BIOS still need to have support?
A. Yes, the motherboard needs a BIOS that recognizes NVDIMMs, and it needs the ADR signal wired from the Intel CPU to the DIMM sockets. The motherboard needs to follow the JEDEC standard for NVDIMMs.

Q. If someone unplugs an NVDIMM-N and moves it to another server, what will happen?
A. If the system crashed due to a power loss, the data in the NVDIMM will be saved. When it is plugged into another NVDIMM-enabled server, the BIOS will check whether there is saved data in the NVDIMM and restore that data to DRAM before the system continues to boot.

Q. Are traditional storage products able to support containerized applications?
A. Yes, assuming that they support container orchestration engines such as Docker Swarm or Kubernetes through a "container volume plugin." However, the extent to which they support containerized applications varies from vendor to vendor, and there are also a number of new storage products that have been developed exclusively to support containerized applications (e.g., Veritas, Portworx, Robin Systems).

Q. How do the storage requirements for containers compare or differ from those of virtual machines?
A. Production storage requirements are actually very similar, almost equivalent, between containerized applications and applications running within virtual machines; the main difference is that, due to the scalability potential of containers, these requirements are often exacerbated. Requirements common to both include data persistence, data recovery, data performance and data security.
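As a concrete illustration of the "container volume plugin" mentioned above, here is a minimal sketch using the Docker SDK for Python. The plugin name and its options are hypothetical placeholders standing in for whatever a storage vendor documents; the point is simply that persistent, vendor-managed storage attaches to a container through a named volume.

```python
# Minimal sketch: attaching persistent storage to a container through a
# volume plugin, using the Docker SDK for Python (docker-py).
# The driver name and options below are hypothetical placeholders; substitute
# the plugin and options documented by your storage vendor.
import docker

client = docker.from_env()

# Ask the (hypothetical) vendor plugin to provision a persistent volume.
volume = client.volumes.create(
    name="mysql-data",
    driver="examplevendor/volume-driver",              # hypothetical plugin name
    driver_opts={"size": "20GiB", "replicas": "2"},    # hypothetical options
)

# Run a containerized database against that volume; the data survives
# container restarts and can be re-attached elsewhere by the plugin.
container = client.containers.run(
    "mysql:5.7",
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    volumes={volume.name: {"bind": "/var/lib/mysql", "mode": "rw"}},
    detach=True,
)
print(container.short_id, "is using volume", volume.name)
```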


Comparing iSCSI, iSER, and NVMe over Fabrics (NVMe-oF): Ecosystem, Interoperability, Performance, and Use Cases

Saqib Jang

Aug 17, 2017

iSCSI is one of the most broadly supported storage protocols, but traditionally has not been associated with the highest performance. Newer protocols like iSER and NVMe over Fabrics promise extreme performance but are still maturing and lack the broad feature and platform support of iSCSI. Storage vendors and customers face interesting tradeoffs and options when evaluating how to achieve the highest block storage performance on Ethernet networks while preserving the major software and hardware investment in iSCSI.

iSCSI

With support from all the major storage vendors, as well as from a host of Tier 2 and Tier 3 storage players, Internet Small Computer System Interface (iSCSI) is the most mature and widely supported storage protocol available. At the server level, Microsoft provides an iSCSI initiator for Windows Server environments, making configuration relatively simple. Almost all other major server operating systems and hypervisors support iSCSI natively, too. Where performance is required, dedicated iSCSI initiator adapters can be used to reduce server or storage CPU load. Per IDC, iSCSI represents a $3.6B projected system revenue TAM in 2017[1].

Beyond storage and operating system support, the iSCSI protocol has benefited from its ease of adoption. The protocol uses standard 1/10/25/40/50/100 Gigabit Ethernet transport and is transmitted using TCP/IP, which can often help simplify the implementation and operational requirements of large environments. The performance advantages of iSCSI are compelling. Storage arrays with 10/25/40/50/100G iSCSI adapters (scalable to 200/400+ Gb) are now available, and iSCSI offload adapters are easily capable of keeping up with new multicore server processors, with many iSCSI hardware initiators able to generate hundreds of thousands of application-level IOPS[2].

For enterprise customers, iSCSI offers three distinct advantages. First, iSCSI represents a SAN protocol with a built-in "second source" in the form of a software-only solution that runs on any Ethernet NIC. Replacing the iSCSI offload adapter with a software solution may result in lower performance, but allows customers to choose a different price/performance ratio that meets their needs. Second, use of iSCSI allows the host and storage systems to run the protocol in hardware or software, thus decoupling the upgrade cycles of the various pieces of server and storage hardware from each other. Third, the iSCSI software initiator is the most widely supported in-box SAN capability among all OS vendors. Traditionally seen as a solution for many small and medium enterprise organizations, iSCSI's support of 25/40/50/100 Gigabit Ethernet transport and the pervasive TCP/IP protocol makes it a natural fit for storage networking in the most demanding of enterprise environments, for private and public cloud communications, and across wide-area networks (WANs).
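To show how little is involved in bringing up the software initiator path described above, here is a minimal sketch that wraps the standard Open-iSCSI iscsiadm commands from Python on a Linux host. The portal address and target IQN are placeholders; hardware-offload initiators are configured through vendor-specific tools instead.

```python
# Minimal sketch: discovering and logging in to an iSCSI target with the
# Open-iSCSI software initiator on Linux. The portal address and target IQN
# are placeholders; run as root on a host with open-iscsi installed.
import subprocess

PORTAL = "192.0.2.10:3260"                           # placeholder array portal
TARGET_IQN = "iqn.2017-08.com.example:array1.lun0"   # placeholder IQN

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Discover targets offered by the portal (SendTargets discovery).
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# 2. Log in to the discovered target; a new /dev/sdX block device appears.
run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"])

# 3. Show active sessions.
print(run(["iscsiadm", "-m", "session"]))
```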
iSER

The iSCSI Extensions for RDMA (iSER) protocol is an iSCSI variant that takes advantage of RDMA fabric hardware technologies to enhance performance. It is effectively a translation layer that translates iSCSI to RDMA transactions for operation over Ethernet RDMA transports such as iWARP and RDMA over Converged Ethernet (RoCE), as well as non-Ethernet transports including InfiniBand and Omni-Path Architecture.

iSER over iWARP or RoCE generally does not support software stacks (except by using soft iWARP or soft RoCE) and optimally requires RDMA-enabled 10/25/40/50/100GbE offload hardware in both the server initiators and the target systems for performance enhancement. iSER end-nodes can only communicate with other iSER end-nodes; interoperating with iSCSI end-nodes requires disabling the iSER extensions and losing the hardware offload support and performance benefits. One of the caveats of iSER is that end-nodes can only interoperate with other end-nodes supporting the same underlying fabric variant; for example, iSER RoCE initiators can only interoperate with iSER RoCE targets and not with iSER iWARP end-nodes. iSER variants also inherit the features of the underlying RDMA transport. Thus, iSER on RoCE requires Ethernet capable of supporting RoCE, which might mean lossless Ethernet or ECN, depending on the RoCE adapters used, while iSER on iWARP, through its use of TCP/IP, can run both within data centers and across metropolitan area networks (MANs) and WANs, wherever standard TCP/IP is supported. Regarding performance, thanks to hardware offload and direct data placement (DDP), iSER provides improved performance efficiencies and lower CPU utilization compared to iSCSI software implementations.

Comparing iSCSI and iSER

Despite the similarities and the origin of its name, iSER is incompatible with iSCSI, and iSER storage systems must fall back to standard iSCSI to interoperate with the very large existing iSCSI installed base. Implementations of the iSER transport also differ; for instance, as noted above, iSER requires a like-for-like approach to RDMA offload adapters on both ends of a link, but iSCSI target-mode offload implementations are fully interoperable with software initiator peers. The net result is that iSCSI provides the option to mix hardware and software initiators while achieving application-level performance comparable to iSER. For the current generation of solid-state drives (SSDs), hardware-offloaded iSCSI and iSER provide about the same level of CPU utilization and throughput[3].

NVMe over Fabrics (NVMe-oF)

Non-Volatile Memory Express (NVMe) is an optimized direct-attach protocol for host communication with flash-based native PCIe devices, such as SSDs. NVMe over Fabrics (NVMe-oF) is a technology specification designed to enable NVMe message-based commands to transfer data between a host computer and a target solid-state storage device or system over a network such as Ethernet (RoCE and iWARP), Fibre Channel, InfiniBand, and Omni-Path. As with iSER, NVMe-oF Ethernet RDMA end-nodes can only interoperate with other NVMe-oF Ethernet end-nodes supporting the same Ethernet RDMA transport, such as iWARP-to-iWARP or RoCE-to-RoCE. In addition, NVMe-oF end-nodes cannot interoperate with iSCSI or iSER end-nodes. Regarding network requirements, NVMe-oF on RoCE requires switches capable of DCB functions (e.g., ETS/PFC or ECN), while NVMe-oF on iWARP can run over any switches supporting TCP/IP, both within data centers and across MANs and WANs, or even over wireless links. While iSCSI is an established market with a broad-based ecosystem enabling high-volume shipments, as of this writing NVMe over Fabrics is largely still in the proof-of-concept phase, as many necessary component standards, such as drivers, are nearing finalization. Nevertheless, there is a large community working to develop NVMe over Fabrics.
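For comparison with the iSCSI example earlier, connecting to an NVMe-oF subsystem follows a similar discover-then-connect pattern using the standard nvme-cli utility. The sketch below uses placeholder addresses and NQN and assumes the RDMA transport; other transports (e.g., -t fc) use the same commands where kernel and driver support exists.

```python
# Minimal sketch: discovering and connecting to an NVMe-oF subsystem with
# nvme-cli on Linux. Address, port, and NQN are placeholders; requires root
# and an RDMA-capable NIC for the rdma transport shown here.
import subprocess

TRADDR = "192.0.2.20"                                   # placeholder target IP
TRSVCID = "4420"                                        # conventional NVMe-oF port
SUBNQN = "nqn.2017-08.com.example:nvme.subsystem1"      # placeholder NQN

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Ask the discovery controller which subsystems it exports.
print(run(["nvme", "discover", "-t", "rdma", "-a", TRADDR, "-s", TRSVCID]))

# 2. Connect to one subsystem; a new /dev/nvmeXnY namespace appears.
run(["nvme", "connect", "-t", "rdma", "-n", SUBNQN, "-a", TRADDR, "-s", TRSVCID])

# 3. List NVMe devices now visible to the host.
print(run(["nvme", "list"]))
```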
As the non-volatile memory-based storage solution space matures, NVMe-oF will become more attractive.

Putting it all together

iSCSI is a well-established and mature storage networking solution, supported by many arrays, initiators, and nearly all operating systems and hypervisors. Performance can be enhanced with hardware-accelerated iSCSI adapters on the initiator or target. The iSCSI ecosystem continues to evolve by adding support for higher speeds up to 100GbE and with growing support for iSER as a way to deliver iSCSI over RDMA transports. At the same time, NVMe-oF presents enterprise end-users with a major challenge: how to preserve the major software and hardware investment in iSCSI while considering other storage protocols. Similarly, enterprise storage system vendors need to continue developing their iSCSI storage product lines (including offering higher-performance 100GbE networking) while evolving to support other storage protocols and infrastructures. Immediate performance challenges can be addressed using hardware-offloaded 100G iSCSI. In the longer term, storage vendors are likely to address these challenges through concurrent support of iSCSI, iSER, and NVMe-oF for evolutionary, non-disruptive deployment of next-generation storage technologies.

[1] IDC WW Quarterly Disk Storage Systems Forecast, June 2015
[2] See, for example, p. 7 (SQL Server 2016 OLTP IOPS) of Evaluation of the Chelsio T580-CR iSCSI Offload Adapter, © 2016, Demartek
[3] See, for example, "iSCSI or iSER", 2015 SNIA Storage Developer Conference (pages 28-29)


Too Proud to Ask Webcast Series Opens Pandora’s Box – Storage Management

J Metz

Aug 11, 2017

Storage can be something of a “black box,” a monolithic entity that is at once mysterious and scary. That’s why we created “The Everything You Wanted To Know About Storage But Were Too Proud to Ask” webcast series. So far, we’ve explored various and sundry aspects of storage, focusing on “the naming of the parts.” Our goal has been to break down some of the components of storage and explain how they fit into the greater whole. On September 28th, we’ll be hosting “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Cyan – Storage Management.” This time, we’re going to open up Pandora’s Box and peer inside the world of storage management, uncovering some of the key technologies that are used to manage devices, storage traffic, and storage architectures. In particular, we’ll be discussing:
  • SNMP – The granddaddy of management protocols
  • SMI-S – The bread-and-butter of vendor-neutral storage management
  • SNIA Swordfish – The new storage management solution gaining widespread momentum
  • Software-Defined Storage – The catch-all term for storage that includes architectures and management
There's so much to say on each of these subjects. In fact, we could do a full webcast on any one of them, but for a quick overview of many of the technologies that affect storage in one place, we think you will find your time has been well spent. As always, we've assembled a great panel of experts to discuss these topics. So I hope you will join us on September 28th, 2017, for our continuation of the "Too Proud To Ask" series with Cyan, the Storage Management Pod. Register here. And if you've missed any of the other "Too Proud To Ask" webcasts, they are all available on-demand. Happy viewing!
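To make the management protocols above a little more concrete: SNIA Swordfish extends the DMTF Redfish RESTful model, so a management client is essentially an HTTPS/JSON client. The sketch below assumes a Swordfish-conformant service with placeholder host and credentials and walks a storage collection; exact resource paths vary by implementation and specification version, so treat it as a pattern rather than a guaranteed API.

```python
# Minimal sketch: walking the storage resources of a Swordfish/Redfish
# service with plain HTTPS + JSON. Host, credentials, and the exact
# collection property are placeholders and vary by implementation.
import requests

BASE = "https://mgmt.example.com"        # placeholder management endpoint
AUTH = ("admin", "password")             # placeholder credentials

def get(path):
    # verify=False only because many lab appliances use self-signed certs
    r = requests.get(BASE + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

root = get("/redfish/v1")                # Redfish/Swordfish service root
# Early Swordfish services expose "StorageServices"; others expose "Storage".
coll = root.get("StorageServices") or root.get("Storage")
if coll:
    for member in get(coll["@odata.id"]).get("Members", []):
        svc = get(member["@odata.id"])
        print(svc.get("Id"), svc.get("Status", {}).get("Health"))
```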


The Alphabet Soup of Storage Networking Acronyms Explained

Chad Hintz

Aug 7, 2017

At our most recent webcast, "Everything You Wanted to Know About Storage But Were Too Proud To Ask: Part Turquoise - Where Does My Data Go?," our panel of experts dove into what really happens when you hit "save" and send your data off. It was an alphabet soup of acronyms as they explained the nuances of and the differences between:
  • Volatile v Non-Volatile v Persistent Memory
  • NVDIMM v RAM v DRAM v SLC v MLC v TLC v NAND v 3D NAND v Flash v SSDs v NVMe
  • NVMe (the protocol)
As promised during the live event, here are answers to all the questions we received.

Q. Is SRAM still used today?
A. SRAM is still in use today as embedded cache (Level 1/2/3) within a CPU, and is very limited in external standalone packaging due to cost and size/capacity.

Q. Does 3D NAND use multiple voltage levels? Or does each layer use just two voltages?
A. 3D NAND is much like planar NAND in operation, supporting all the versions (SLC, MLC, TLC, and, in the future, even QLC). Other challenges exist in going vertical, but they are unrelated to the voltage levels being supported.

Q. How does Symbolic IO work with the NVDIMM-P?
A. SNIA does not comment on individual companies. Please contact Symbolic IO directly.

Q. When do you see NVMe over Fibre Channel becoming mainstream? Just a "guesstimate."
A. At the time of this writing, FC-NVMe (the standardized form of NVMe over Fabrics using Fibre Channel) is in the final ratification phase and is technically stable. By the time you read this it will likely already be completed. The standard itself is already a mainstream form of NVMe-oF, and has been a part of the NVMe-oF standard since the beginning. Market usage for NVMe-oF will ramp up as vendors, products, and ecosystem developments continue to announce innovations. Different transport mechanisms solve different problems, and the uses for Fibre Channel are not 100% overlapped with those for Ethernet. Having said that, it would not be surprising if both FC- and Ethernet-based NVMe-oF grew at a somewhat similar pace for the next couple of years.

Q. How are networked NVMe SSDs addressed?
A. Each NVMe-oF transport layer has an addressing scheme that is used for discovery. NVMe SSDs actually connect to the fabric transport through a port connected with the NVMe controller. A thorough description of how this works can be found in the SNIA ESF webcast "Under the Hood with NVMe over Fabrics." You can also check out the Q&A blog from that webcast.

Q. Does NVMe have any specific connectors, like SATA or SAS do?
A. When looking at the physical drive connector, the industry came up with an edge connector called "U.2" that supports NVMe, SAS and SATA drives. However, the backplane in the host system must be connected correctly.

Q. Other than real-estate savings, what advantage does 3D NAND offer? Speed?
A. 3D NAND gives us added space for the floating gate. When we get down to 20nm and 16nm (the measured width of that floating gate), only a few electrons, yes, actual electrons, separate the states. With 3D NAND we have room to grow the gate, allowing more electrons per level and making things like TLC and beyond a reality.

Don't forget, you can check out the recorded version of the webcast at your convenience and you can download the webcast slides as well if you'd like to follow along. Remember, this webcast was part of a series. I encourage you to register today for our next one, which will be on September 28, 2017 at 10:00 am PT – Part Cyan – Storage Management. And please visit the SNIA ESF website for our full library of ESF webcasts.
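As a small illustration of the addressing answer above: on a Linux host, every NVMe controller, whether local PCIe or fabric-attached, appears under /sys/class/nvme with its transport, address, and subsystem NQN. The sketch below just reads those sysfs attributes; attribute availability can vary slightly between kernel versions, so missing files are simply skipped.

```python
# Minimal sketch: listing NVMe controllers and how they are addressed,
# by reading sysfs on a Linux host. Attribute availability can vary a
# little between kernel versions; missing files are reported as "?".
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    name = os.path.basename(ctrl)
    transport = read(os.path.join(ctrl, "transport"))   # pcie, rdma, fc, ...
    address = read(os.path.join(ctrl, "address"))       # PCI BDF or fabric traddr
    subnqn = read(os.path.join(ctrl, "subsysnqn"))      # subsystem NQN
    print(f"{name}: transport={transport} address={address} nqn={subnqn}")
```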


Q&A – When Compute, Networking and Storage Intersect

J Metz

Jul 18, 2017

In Part Vermillion of our SNIA Ethernet Storage Forum (ESF) "Everything You Wanted To Know About Storage But Were Too Proud To Ask" webcast series, we examined the terms and concepts that are at the heart of where compute, networking and storage intersect. That's why we called it "What if Programming and Networking Had a Storage Baby." If you missed the live webcast, you can watch it on-demand. The discussion from our panel of experts generated a lot of good questions. As promised, here are answers to them all.

Q. With regard to persistent memory, how does one decide if it's better to use load/store or access via I/O?
A. Legacy applications will not change and hence will access the persistent memory the way they were written. If your legacy application needs a lot of memory and you want to use the new persistent memory as just a big and cheap (volatile) memory, then the access will be byte addressable (load/store). If your legacy application uses block storage, then it will use the persistent memory using block addressing. New applications can take advantage of byte addressing and persistency. They can keep all their data structures in memory, and these data structures will also be persistent! This saves applications the need to serialize before storing and to de-serialize when retrieving from storage, and enables many other ways to optimize the software.

Q. Can you go over again, a bit more slowly, how byte-addressable and LBA access change with persistent memory?
A. Persistent memory can be accessed in three different ways (a minimal sketch of the byte-addressable case follows the list below):
  1. Using byte addressing, in which case it behaves like a big (volatile) memory
  2. Using logical block addressing, in which case it behaves like a block storage device
  3. Using the SNIA NVM Programming Model, which enables byte addressing along with persistency. In this case, bytes written into the device can be made persistent with special APIs.
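Below is a minimal sketch of the byte-addressable case (option 1, and the load/store half of option 3), assuming a DAX-capable file system mounted at a hypothetical path such as /mnt/pmem0. Plain mmap plus a flush is shown purely to illustrate load/store access; real applications would use the SNIA NVM Programming Model through a library such as PMDK to obtain proper persistence guarantees.

```python
# Minimal sketch of byte-addressable (load/store) access to persistent
# memory, assuming a DAX-mounted file system at the hypothetical path
# /mnt/pmem0. Plain mmap + flush is for illustration only; production code
# would use the SNIA NVM Programming Model (e.g., via PMDK) for real
# persistence guarantees.
import mmap
import os

PATH = "/mnt/pmem0/example.dat"   # hypothetical file on a DAX mount
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as mm:
    # Ordinary loads/stores: no read()/write() system calls, no block I/O.
    mm[0:13] = b"hello, pmem!\n"
    mm.flush()                    # ask the OS to make the stores durable
os.close(fd)
```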
You can configure and decide which model is best for your application.

Q. Is that like flash?
A. Persistent memory is a technology that is persistent like flash, but has byte addressing. It can be implemented using underlying flash, battery-backed DRAM, Phase Change Memory and more.

Q. You were going to parse out flash vs. NVMe, I think. Also, how will the elements discussed during the session impact these evolving technologies?
A. Flash is a non-volatile memory technology that supports block addressing. PCM is another, newer non-volatile technology that supports byte addressing (which implies that it can also do block addressing by emulation). NVMe describes an interface to access non-volatile memory technology by placing the non-volatile memory on the PCI bus. Storage Class Memory is yet another interface to access non-volatile memory, by placing the non-volatile memory on the memory bus. With this in mind: 1) It is common to see NVMe devices with backing flash devices. They will be block addressable. They may also expose a small byte-addressable memory buffer on the PCI bus (typically DRAM), which may or may not be volatile. 2) It is common to see Storage Class Memory with a backing PCM device, or with DRAM (that can back itself up to flash on power failure). These will be byte addressable.

Q. Regarding the SMB and CIFS protocols, is SMB or CIFS the deprecated one?
A. The name CIFS hasn't been used in a while; it's now all SMB. SMB version 1 is deprecated; see this Microsoft article. Don't use CIFS!

Q. Are there any rules of thumb regarding the efficiency of block vs. file vs. object stores, in terms of storage capacity overhead and network "busyness"?
A. Effectively, as you get closer to the lower-level block storage devices, your storage networking architecture needs to become more deterministic. That is, you begin to care more about the number of hosts connecting to a particular storage target (fan-in ratio) and the ratio of bandwidth the target has compared to the bandwidth of the hosts connecting to it (oversubscription). Highly transactional block storage protocols, such as Fibre Channel, FCoE and lossless iSCSI, will need very low oversubscription ratios (sometimes as low as 4:1, depending on the type of application/workload). Most are somewhat more forgiving, and 16:1 and 20:1 are not uncommon. When you move into file-based systems, that oversubscription can be a lot higher (there is no general rule of thumb here, but the oversubscription can be in the low hundreds:1). Object-based systems are so scaled and distributed that there really are no oversubscription limits at all, because those systems are not highly transactional.

Q. Does an object always have to be replaced in its entirety? How do systems handle updates to large objects?
A. The rule is that you shouldn't take a lock on an object. Classically, the whole object should be replaced. Updating is not straightforward. Traditional "get/release" locking is too expensive in terms of latency over geographic distances, too hard to manage in a distributed environment, is hard to scale, needs recovery in the case of failure, and introduces state to what is basically storage built for stateless operations. Plus, the object may be sharded across multiple physical systems. Some object systems do allow what they call "pessimistic locking" (take a lock for a fixed period of time, say 10 seconds), but it's not a true lock that you obtain and then release. It's more like a window of opportunity and is often called, and acts like, a lease. There are also other techniques, like "optimistic concurrency" (using a unique identifier, try and then check if your identifier was successful) and "last writer wins" (as it says, the last write is the one that the storage system remembers). Many systems do this by snapshotting the object, allowing updates on the copy, and then atomically swapping them. Object systems differ in what they permit. In general, applications need to be aware that they may, very occasionally, not be successful when modifying objects, and to have strategies to deal with it, like retrying or even simply giving up.
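To make the "optimistic concurrency" technique above a bit more concrete, here is a minimal sketch against a generic HTTP object interface that returns an ETag with each object and honors an If-Match condition on writes. The URL is a placeholder, and whether a particular object store supports conditional writes varies by product, so treat this as a pattern rather than an API guarantee.

```python
# Minimal sketch of optimistic concurrency against an HTTP object interface
# that returns an ETag and honors If-Match on writes (support varies by
# object store; the URL below is a placeholder).
import requests

URL = "https://objects.example.com/bucket1/report.json"  # placeholder object

# Read the object and remember its current version identifier.
resp = requests.get(URL)
resp.raise_for_status()
etag = resp.headers["ETag"]
updated = resp.content + b"\n# appended line"

# Try to replace it, but only if nobody else changed it in the meantime.
put = requests.put(URL, data=updated, headers={"If-Match": etag})
if put.status_code == 412:           # Precondition Failed: someone else won
    print("Object changed underneath us; re-read and retry, or give up.")
else:
    put.raise_for_status()
    print("Update applied; new ETag:", put.headers.get("ETag"))
```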
Again, you can check out the recorded version of the webcast at your convenience, and you can download the webcast slides as well if you'd like to follow along. Remember, this webcast was part of a series. I encourage you to register today for our next one, which will be on August 1st at 10:00 am PT – Part Turquoise, "Where Does My Data Go?" And please visit the SNIA ESF website for our full library of ESF webcasts.


Unlock the Power of Persistent Memory in Containers

Chad Thibodeau

Jul 16, 2017

Containers and persistent memory are both very hot topics these days. Containers are making it easier for developers to know that their software will run no matter where it is deployed and no matter what the underlying OS is, as both Linux and Windows are now fully supported. Persistent memory, a revolutionary data storage technology, will boost the performance of next-generation applications and libraries packaged into containers. On July 27th, SNIA is hosting a live webcast, "Containers and Persistent Memory." In this webcast you'll learn:
  • What SNIA is doing to advance persistent memory technologies
  • What the ecosystem enablement efforts are around persistent memory solutions and their relationship to containerized applications
  • How NVDIMMs are paving the way for plug-n-play adoption into container environments for applications demanding extreme performance
  • How next-generation applications (often referred to as cloud-native or web-scale) can take advantage of both NVDIMMs and Containers to achieve both high performance and hyperscale
I hope you will join me, together with my colleagues Arthur Sainio, SNIA NVDIMM SIG Co-chair, and Alex McDonald, Co-chair of SNIA Solid State Storage and SNIA Cloud Storage Initiatives, to find out what application developers, storage administrators and the industry want to see to fully unlock the potential of persistent memory in a containerized environment. I encourage you to register today. And please bring your questions. We’ll be on-hand to answer them on the spot. I hope to see you there.


The Too Proud to Ask Train Makes Another Stop: Where Does My Data Go?

Chad Hintz

Jun 22, 2017

By now, we at the SNIA Ethernet Storage Forum (ESF) hope you are familiar with (perhaps even a loyal fan of) the popular "Everything You Wanted To Know About Storage But Were Too Proud To Ask" webcast series. On August 1st, the "Too Proud to Ask" train will make another stop. In this seventh session, "Everything You Wanted to Know About Storage But Were Too Proud To Ask: Turquoise - Where Does My Data Go?," we will take a look into the mysticism and magic of what happens when you send your data off into the wilderness. Once you click "save," for example, where does it actually go? When we start to dig deeper beyond the application layer, we often don't understand what happens behind the scenes. It's important to understand multiple aspects of the type of storage our data goes to, along with their associated benefits and drawbacks, as well as some of the protocols used to transport it. In this webcast we will explain:
  • Volatile v Non-Volatile v Persistent Memory
  • NVDIMM v RAM v DRAM v SLC v MLC v TLC v NAND v 3D NAND v Flash v SSDs v NVMe
  • NVMe (the protocol)
Many people get nervous when they see that many acronyms, but all too often they come up in conversation, and you're expected to know all of them. Worse, you're expected to know the differences between them, and the consequences of using them. Even worse, you're expected to know what happens when you use the wrong one. We're here to help. It's an ambitious project, but these terms and concepts are at the heart of where compute, networking and storage intersect. Having a good grasp of these concepts ties in with which type of storage networking to use, and how data is actually stored behind the scenes. Register today to join us for this edition of the "Too Proud To Ask" series, as we work towards making you feel more comfortable in the strange, mystical world of storage. And don't let pride get in the way of asking any and all questions on this great topic. We will be there on August 1st to answer them! Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.

