
SNIA Exhibits at OCP Virtual Summit May 12-15, 2020 – SFF Standards featured in Thursday Sessions

Marty Foltyn

May 6, 2020

All SNIA members and colleagues are welcome to register to attend the free online Open Compute Project (OCP) Virtual Summit, May 12-15, 2020. SNIA SFF standards will be represented in presentations on Thursday, May 14 (all times Pacific).

The SNIA booth (link live at 9:00 am May 12) will be open from 9:00 am to 4:00 pm each day of the OCP Summit and will feature Chat with SNIA Experts: scheduled times where SNIA volunteer leadership will answer questions on SNIA technical work, education, standards and adoption, and the vision for 2020 and beyond. The current schedule is below – and is continually being updated with more speakers and topics – so be sure to bookmark this page. And let us know if you want to add a topic to discuss!

Tuesday, May 12:
  • 11:00 am – 12:00 noon – Computational Storage
  • 12:00 noon – 1:00 pm – SSDs and Form Factors
  • 1:00 pm – 2:00 pm – SNIA Technical Activities and Direction
Wednesday, May 13:
  • 11:00 am – 12:00 noon – SNIA Education, Standards, Promotion, Technology Adoption
  • 11:00 am – 12:00 noon – Persistent Memory Standards and Adoption
  • 12:00 noon – 1:00 pm – Computational Storage
  • 1:00 pm – 2:00 pm – SNIA NVMe and NVMe-oF standards activities and direction
Thursday, May 14
  • 12:00 pm – 1:00 pm – SNIA Swordfish™ and Redfish
  • 1:00 pm – 2:00 pm – Computational Storage
Friday, May 15: 
  • 11:00 am – 12:00 noon – Computational Storage
  • 12:00 noon – 1:00 pm – SNIA Education – Standards Promotion and Technical Adoption
  • 1:00 pm – 2:00 pm – SNIA Technical Activities and Direction
Register today to attend the OCP Virtual Summit. Registration is free and open to everyone, not just those who were registered for the in-person Summit. The SNIA exhibit will be found here once the Summit is live. Please note that the virtual summit will be a 3D environment best experienced on a laptop or desktop computer; however, a simplified, mobile-responsive version will also be available for attendees. No additional hardware, software or plugins are required.


Your Questions Answered on Persistent Memory Programming

Jim Fister

May 5, 2020

On April 14, the SNIA Compute Memory and Storage Initiative (CMSI) held a webcast asking the question – Do You Wanna Program Persistent Memory? We had some answers in the affirmative – answering the call of the NVDIMM Programming Challenge. The Challenge utilizes a set of systems SNIA provides for the development of applications that take advantage of persistent memory. These systems support persistent memory types that can utilize the SNIA Persistent Memory Programming Model and that are also supported by the Persistent Memory Development Kit (PMDK) libraries. The NVDIMM Programming Challenge seeks innovative applications and tools that showcase the features persistent memory will enable. Submissions are judged by a panel of SNIA leaders and individual contest sponsors. Judging is scheduled at the convenience of the submitter and judges, and done via conference call. The program or results should be able to be visually demonstrated using remote access to a PM-enabled server.

NVDIMM Programming Challenge participant Steve Heller from Chrysalis Software joined the webcast to discuss the Three Misses Hash Table, which uses persistent memory to store large amounts of data and greatly increases the speed of data access for programs that use it. During the webcast a small number of questions came up that this blog answers, and we've also provided answers on subjects stimulated by our conversation.

Q: What are the rules/conditions to access the SNIA PM hardware test system to get hands-on experience? What kind of PM hardware is there? Windows/Linux?

A: Persistent memory, such as NVDIMM or Intel Optane memory, enables many new capabilities in server systems. The speed of storage in the memory tier is one example, as is the ability to hold and recover data over system or application resets. The programming challenge is seeking innovative applications and tools that showcase the features persistent memory will enable. The specific systems for the different challenges will vary depending on the focus. The current system is built using NVDIMM-N. Users are given their own Linux container with simple examples in a web-based interface, and they can also work directly in the Linux shell if they are comfortable with it.

Q: During the presentation there was a discussion of why it is important to look for "corner cases" when developing programs that use persistent memory instead of regular storage. Would you elaborate on this?

A: As you can see in the chart at the top of the blog post, persistent memory significantly reduces the amount of time needed to access a piece of stored data. As such, the amount of time the program normally takes to process the data becomes much more important. Programs that are used to data retrieval taking a significant amount of time can occasionally absorb the "processing" performance hit that an extended data sort might imply. Simply porting a file system access to persistent memory could result in strange performance bottlenecks, and potentially introduce race conditions or strange bugs in the software. The reward for fixing these issues is significant performance, as demonstrated in the webcast.

Q: Can you please comment on the scalability of your HashMap implementation, both on a single socket and across multiple sockets?

A: The implementation is single-threaded. Multithreading adds a lot of overhead and opportunity for mistakes, and it is easy to saturate the performance that only persistent memory can provide. There is likely no benefit to the hash table in going multi-threaded. It is not impossible – one could, for example, use a hash table per volume. In an earlier version I found that running across multiple sockets was slower, with an 8% to 10% variation in performance. There are also potential cache pollution issues with going multi-threaded. The existing implementation will scale from one to 15 billion records, and we would see the same behavior if we had enough storage. The implementation does not use much RAM if it does not cache the index; it uses only about 100 MB of RAM for test data.

Q: How would you compare your approach to having smarter compilers that are aware of "preferred" addresses to exploit faster memories?

A: The Three Misses implementation invented three new storage management algorithms. I don't believe that compilers can invent new storage algorithms. Compilers are much improved since their beginnings 50+ years ago, when you could not mix integers and floating-point numbers, but they cannot figure out how to minimize accesses. Smart compilers will probably not help solve this specific problem.

The SNIA CMSI is continuing its efforts on persistent memory programming. If you're interested in learning more about persistent memory programming, you can contact us at pmhackathon@snia.org to get updates on existing contests or programming workshops. Additionally, SNIA would be willing to work with you to host physical or virtual programming workshops. Please view the webcast and contact us with any questions.
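
To make the programming model a little more concrete, here is a minimal sketch of the map/store/flush pattern that underlies the SNIA Persistent Memory Programming Model. It is illustrative only and not part of the challenge materials: it uses Python's standard mmap module against an ordinary file, whereas a real persistent memory application would map a file from a DAX-mounted filesystem and typically use the PMDK libraries (such as libpmem) to flush stores to persistence. The file path below is a placeholder.

```python
# Illustrative sketch only: the map/store/flush pattern behind the SNIA
# Persistent Memory Programming Model, using Python's standard mmap module.
# On real persistent memory you would map a file from a DAX-mounted filesystem
# and use the PMDK libraries (e.g. libpmem) instead of an msync-based flush.
import mmap

PATH = "/tmp/pmem_demo.bin"   # hypothetical path; a PM deployment would use a DAX mount
SIZE = 4096

# Create a backing file of the desired size.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), SIZE)   # map the file into the address space
    pm[0:11] = b"hello, pmem"          # ordinary stores, no read()/write() calls
    pm.flush()                         # flush to persistence (msync under the hood)
    pm.close()
```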


Standards Watch: Storage Security Update

Eric Hibbard

May 5, 2020


The world of storage security standards continues to evolve. In fact, it can be hard to keep up with all that’s happening. Here’s a quick recap of SNIA’s involvement and impact on some notable storage security work – past, present and future.

The Storage Security ISO/IEC 27040 standard provides security techniques and detailed technical guidance on how organizations can define an appropriate level of risk mitigation by employing a well-proven and consistent approach to the planning, design, documentation, and implementation of data storage security. SNIA has been a key industry advocate of this standard by providing many of the concepts and best practices dating back to 2006. Recently, the SNIA Storage Security Technical Work Group (TWG) authored a series of white papers that explored a range of topics covered by the ISO/IEC 27040 standard. 

At the recent ISO/IEC JTC 1/SC 27 (Information security, cybersecurity, and privacy protection) meeting, there were several developments that are likely to have impacts on the storage industry and consumers of storage technology in the future. In particular, three projects are worth noting: 

  • The first is ISO/IEC 27050-4 (Information technology — Electronic discovery — Part 4: Technical readiness), which includes guidance on data/ESI preservation and retention that was derived in part from the SNIA Storage Security: Data Protection White Paper; this project progressed to draft international standard (DIS) and is expected to be published in early 2021. 
  • Next, steps were taken to restart the revision of the ISO/IEC 27031 (Information technology — Security techniques — Guidelines for information and communication technology readiness for business continuity) with an initial focus on what constitutes ICT readiness; recent events (i.e., Covid-19) have highlighted the need for this readiness. SNIA has an obvious interest in this area and anticipates offering contributions. 
  • Last, but not least, work has started on the revision of the ISO/IEC 27040 (Information technology — Storage security) standard. The decision to include requirements (i.e., allowing for conformance) has already been made and will likely increase the importance of the standard when it is published. 

SNIA has published several technical white papers associated with the current ISO/IEC 27040 standard, and it is expected that SNIA-identified issues and suggestions will be addressed in the revised standard. For example, concerns raised about the staleness of the media-specific guidance for sanitization have already resulted in IEEE initiating a project authorization request (PAR) for a new standard focused specifically on storage sanitization.

Interested in getting involved in this important work? You can contact the SNIA Storage Security TWG and/or the SNIA Data Protection and Privacy Committee by sending an email to the SNIA Technical Council Managing Director at tcmd@snia.org.


Disaster Recovery in Times of Crisis – What Role Can the Cloud Play?

Mounir Elmously

Apr 30, 2020


While humanity has always hoped not to have to deal with disasters, the last few months have shown that we live in uncertain times, and that the impact of global warming and pandemics can reach all corners of the globe.

Learning to deal with these events means assessing the risks, and building resilience for the future could be as simple as finding an alternative way to perform vital tasks, or having additional resources available to perform those same tasks.

Following advances in networking, processor and storage technologies, and as the industry adopts more standardized components (e.g., Ethernet, x86, Linux), information technology has become increasingly dependent on the cloud. The cloud has advantages in flexibility and cost, especially where IT resources are externally hosted, eliminating the need to build and sustain in-house IT infrastructure and the associated lead time to build it.

The Evolution of IT Disaster Recovery

Since the inception of the computer in the middle of the last century, data protection has been an obsession for IT organizations, and we now find ourselves with data-sensitive applications where timely access to data is a business imperative. Traditional data protection (backup) has many issues and constraints, and in many situations it cannot meet most organizations' recoverability requirements in terms of the speed of IT service resumption and the potential loss of in-flight data at the time of disaster. Additionally, in many disaster scenarios the IT infrastructure might become unavailable or inaccessible, so recovering from backup requires a new set of hardware.

Investing in disaster recovery has always been a hard expense to justify, considering the potentially prohibitive cost of standing up an alternative IT infrastructure that in most cases will sit idle in expectation of a disaster.

Over time, many technologies emerged with innovative approaches for creating recovery tiers and prioritizing those tiers based on the criticality of the business application. Regardless of these efforts, however, disaster recovery has remained a high-cost item that is often the first to be dropped from the budget.

However, with the introduction of the internet followed by eCommerce, almost all organizations changed their business models to provide non-stop services to their clients. This has put additional pressure on IT to ensure undisrupted business even following a major disaster.

While the term "disaster recovery as a service" is relatively new, the concept has been broadly accepted because it offered organizations a low-cost option for some level of guarantee that they could resume business. The challenge, however, is that when a major event occurs, multiple organizations can be impacted, and if they subscribe to the same provider, they end up competing for the same infrastructure resources.

As Infrastructure as a Service (the cloud) has evolved and matured, it is now readily available in "utility-like" models that eliminate these scalability limitations, and organizations can look at extending their IT disaster recovery capabilities with their preferred cloud provider.

There are a number of factors that have accelerated this change:

  • Social media and the high visibility it brings: a simple glitch in an organization's ability to conduct business becomes a public media incident that can severely damage the organization's reputation.
  • Dramatic enhancements in WAN and LAN technology, along with steadily falling WAN service costs, have given organizations a lower cost of entry into off-site disaster recovery.
  • The commoditization of server and storage hardware, coupled with virtualization, has resulted in much lower costs to build an alternate infrastructure, which has led to a proliferation of service organizations, including major household brands.

So, in most situations, even if an organization has an active disaster recovery site, keeping it current as the IT infrastructure undergoes technology refreshes is extremely difficult, and the site is potentially cost-prohibitive to manage, support and keep operational.

In 2002, the first infrastructure as a service (the cloud as we now know it) provider was introduced and the market has expanded rapidly to provide scalable, secure infrastructure services at an extremely low-cost entry point.

The cloud concept has introduced two major values that are pivotal for disaster recovery:

  1. Infinite scalability, which eliminates contention for infrastructure resources between subscribers during a large-scale disaster incident, so organizations can always have access to infrastructure when needed.
  2. On-demand pricing: organizations are charged on an hourly or daily basis only while the cloud infrastructure is in use; once recovery back to the original data center is complete, there is no further billing and no need for operational support beyond the use case.

These values have provided small and medium organizations with a smooth entry into disaster recovery at virtually zero capital expenditure.

It all sounds great; however, there are still issues holding back adoption:

  1. Security: with on-demand cloud services comes multi-tenancy, where a DR-as-a-service client shares the same hardware with other clients. This creates a level of concern that in many cases leads CIOs to keep DR under their own control. The alternative is to use dedicated cloud hardware, which defeats the purpose of "on-demand" and brings DR TCO back to traditional cost-of-ownership levels.
  2. Data locality, privacy legislation and safe harbor of data: many organizations require assurance that their data will remain within a certain geography. Under the cloud concept, data locality can become uncertain depending on the cloud provider.
  3. Legacy infrastructure: despite the ongoing transformation of IT, most organizations are still running many business applications on legacy infrastructure (e.g., z/OS, AIX, Solaris, HP-UX).
  4. Storage-based replication: since its introduction in the late nineties, storage array-based replication has become the de facto standard for data replication and the foundation of most in-house disaster recovery. It is tightly coupled to the underlying infrastructure, which creates vendor lock-in and constitutes a major obstacle to any cloud-based DR, since cloud vendors do not support proprietary hardware.

These obstacles have kept many regulated industries (e.g., financial services, healthcare and utilities) away from DR as a service for now and for the foreseeable future.

So, what will happen in the future? As discussed above, disaster recovery to the cloud opens the door to the efficient disaster recovery plan that is the ultimate goal of every organization, regardless of industry sector.

As all industry sectors undergo technology transformation, they will need to keep their eyes on the cloud as the path of least financial resistance, especially for disaster recovery. However, as the technology matures, every organization should also consider the following:

  1. Consider migrating workloads from legacy infrastructure to hypervisor-based technology. In some situations, this transformation might be extremely challenging and may extend over a few years.
  2. Consider minimizing the dependency on hardware replication and evaluate the option of software-based replication (e.g., hypervisor-based replication, database replication). This will free the organization from hardware vendor lock-in and pave the way to cloud-based disaster recovery.
  3. Carefully consider the regulatory compliance and security aspects of placing sensitive data in the cloud, and deploy encryption techniques for data-in-flight and data-at-rest.


About the SNIA Data Protection & Privacy Committee

The SNIA Data Protection & Privacy Committee (DPPC) exists to further the awareness and adoption of data protection technology, and to provide education, best practices and technology guidance on all matters related to the protection and privacy of data.

Within SNIA, Data Protection is defined as the assurance that data is usable and accessible for authorized purposes only, with acceptable performance and in compliance with applicable requirements. The technology behind data protection will remain a primary focus for the DPPC. However, there is now also a wider context, driven by increasing legislation to keep personal data private. The term data protection also extends into areas of resilience to cyberattacks and threat management. To join the DPPC, log in to SNIA and request to join the DPPC Governing Committee.

Written by Mounir Elmously, Governing Committee Member, SNIA Data Protection & Privacy Committee and Executive, Advisory Services, Ernst & Young, LLP

Why did I join the SNIA DPPC?

With my long history in storage and data protection technologies, along with my current job roles and responsibilities, I can bring my expertise to bear on storage industry technology education and help drive industry awareness. During my days with storage vendors, I did not have the freedom to critique specific storage technologies or products. With SNIA, I enjoy the independence to critique, and to use my knowledge and expertise to help others improve their understanding.


What’s New with SNIA Swordfish™?


If you haven’t caught the new wave in storage management, it’s time to dive in and catch up on the latest developments of the SNIA Swordfish specification.

First, a quick recap.

SNIA Swordfish is the storage extension to the DMTF Redfish® specification, providing customer-centric, easy integration of scalable storage management solutions. This unified, RESTful approach gives IT administrators, DevOps teams and others the ability to manage storage equipment, data services and servers in converged, hyperconverged, hyperscale or cloud infrastructure environments as well as in traditional data centers.

So, what’s new?

SNIA Swordfish v1.1.0b is now a SNIA Technical Position, available for immediate download here. This version of the specification has been updated to include Features and Profiles. There are significant enhancements in volumes, storage pools and consistency groups to support management of devices at all scales, from direct-attach to external storage. We have also moved the class-of-service functionality into a value-added, optional feature set. The 1.1.0b version also includes a new type of schema that enables the Redfish Device Enablement (RDE) over PLDM Specification. Please see the release bundle for full v1.1.0b change details.
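
For readers new to the model, here is a rough sketch of what talking to a Swordfish service looks like from a client's point of view. It is not taken from the specification: the host address and credentials are placeholders, certificate verification is disabled purely for illustration, and the exact URI of the storage collection depends on the service and Swordfish version.

```python
# Minimal sketch: browsing a Swordfish/Redfish service with Python's requests
# library. Host, credentials and certificate handling are placeholders.
import requests

BASE = "https://192.0.2.10"   # hypothetical storage management endpoint
AUTH = ("admin", "password")  # hypothetical credentials; real services often use Redfish sessions

# Fetch the Redfish/Swordfish service root and list the collections it exposes.
root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print("Service root collections:",
      [k for k, v in root.items() if isinstance(v, dict) and "@odata.id" in v])

# Fetch one storage collection; the exact path depends on the service and
# Swordfish version (e.g. /redfish/v1/Storage or /redfish/v1/StorageServices).
collection = requests.get(f"{BASE}/redfish/v1/Storage", auth=AUTH, verify=False).json()
for member in collection.get("Members", []):
    resource = requests.get(f"{BASE}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(resource.get("Id"), resource.get("Status", {}).get("Health"))
```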

Get involved!

If you’re still feeling left behind, we’d love to take you deep-sea fishing. There are several ways you can speed your Swordfish implementation:

  • Join SNIA’s Scalable Storage Management Technical Working Group (SSM TWG) and help shape future revisions of Swordfish. Send us an email at storagemanagement@snia.org.
  • Download the Swordfish User’s Guide here and the Swordfish Practical Guide here.
  • Watch the Swordfish School Videos on our YouTube channel here.
  • Check out the Swordfish Open Source Tools on SNIA’s GitHub here.
  • Your one-stop place for all Swordfish information is http://www.snia.org/swordfish.


Feedback Needed on New Persistent Memory Performance White Paper

Marty Foltyn

Apr 28, 2020


A new SNIA Technical Work draft is now available for public review and comment – the SNIA Persistent Memory Performance Test Specification (PTS) White Paper.

A companion to the SNIA NVM Programming Model, the SNIA PM PTS White Paper (PM PTS WP) focuses on describing the relationship between traditional block IO NVMe SSD based storage and the migration to Persistent Memory block and byte addressable storage.

The PM PTS WP reviews the history of and need for storage performance benchmarking, beginning with hard disk drive corner-case stress tests, the increasing gap between CPU/SW/HW stack performance and storage performance, and the resulting need for faster storage tiers and storage products.

The PM PTS WP discusses the introduction of NAND Flash SSD performance testing that incorporates pre-conditioning and steady state measurement (as described in the SNIA Solid State Storage PTS), the effects of – and need for testing using – Real World Workloads on Datacenter Storage (as described in the SNIA Real World Storage Workload PTS for Datacenter Storage), the development of the NVM Programming model, the introduction of PM storage and the need for a Persistent Memory PTS.

The PM PTS focuses on the characterization, optimization, and test of persistent memory storage architectures – including 3D XPoint, NVDIMM-N/P, DRAM, Phase Change Memory, MRAM, ReRAM, STRAM, and others – using both synthetic and real-world workloads. It includes test settings, metrics, methodologies, benchmarks, and reference options to provide reliable and repeatable test results. Future tests would use the framework established in the first tests.
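
As a toy illustration of what a synthetic workload measurement involves – and emphatically not part of the PM PTS itself – the sketch below times 4 KiB random reads against an ordinary file and reports mean and 99th-percentile latency. A PTS-style test additionally pins down pre-conditioning, steady-state detection and the target device, all of which this sketch omits; the file path is a placeholder.

```python
# Toy synthetic-workload sketch (Unix-only): 4 KiB random-read latency against a
# file, showing the kind of parameters (block size, access pattern, sample count)
# a test specification pins down. Not part of the SNIA PM PTS.
import os
import random
import statistics
import time

PATH = "/tmp/pts_demo.bin"   # hypothetical test file; a PM test would target PM-backed storage
BLOCK = 4096
SAMPLES = 1000

# Create a 64 MiB test file.
with open(PATH, "wb") as f:
    f.truncate(64 * 1024 * 1024)

fd = os.open(PATH, os.O_RDONLY)
size = os.path.getsize(PATH)
latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(0, size - BLOCK, BLOCK)   # block-aligned random offset
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)                         # one 4 KiB read
    latencies.append(time.perf_counter() - t0)
os.close(fd)

print(f"mean {statistics.mean(latencies) * 1e6:.1f} us, "
      f"p99 {sorted(latencies)[int(SAMPLES * 0.99)] * 1e6:.1f} us")
```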

The SNIA PM PTS White Paper targets storage professionals involved with:

  1. Traditional NAND Flash based SSD storage over the PCIe bus;
  2. PM storage utilizing PM-aware drivers that convert block IO access to loads and stores; and
  3. Direct in-memory storage and applications that take full advantage of the speed and persistence of PM storage and technologies.

The PM PTS WP discussion of the differences between byte and block addressable storage is intended to help professionals optimize application and storage technologies and to help storage professionals understand the market and technical roadmap for PM storage.

Eden Kim, chair of the SNIA Solid State Storage TWG and a co-author, explained that SNIA is seeking comment from Cloud Infrastructure, IT, and Data Center professionals looking to balance server and application loads, integrate PM storage for in-memory applications, and understand how response time and latency spikes are being influenced by applications, storage and the SW/HW stack.

The SNIA Solid State Storage Technical Work Group (TWG) has published several papers on performance testing and real-world workloads, and the SNIA PM PTS White Paper includes both synthetic and real-world workload tests. The authors are seeking comment on the PM PTS WP from industry professionals, researchers, academics and other interested parties, and invite anyone interested to participate in the development of the PM PTS.

Use the SNIA Feedback Portal to submit your comments.


Tim Lustig

Apr 27, 2020


The SNIA Networking Storage Forum's recent live webcast "QUIC – Will It Replace TCP/IP?" was a fascinating presentation that was both highly rated and well attended. Lars Eggert, technical director of networking at NetApp and current chair of the IETF working group that is delivering this new Internet protocol, explained the history of the protocol, how it is being adopted today, and what the future of QUIC deployment is likely to be. The session generated numerous questions. Here are answers both to the ones Lars had time to address during the live event and to those we didn't get to.

Q. Is QUIC appropriate/targeted to non-HTTP uses like NFS, SMB, iSCSI, etc.?

A. Originally, when Google kicked off QUIC, the web was the big customer for this protocol. This is still the case at the moment; the entire protocol design is very much driven by carrying web traffic better than TLS over TCP can. However, there's strong interest from a number of organizations in running other applications and workloads on top of QUIC. For example, Microsoft has recently been talking about shipping SMB over QUIC. I fully expect we're going to see other protocols that want to run on top of QUIC in the near future.

Q. Have you mentioned which browsers (or other software) support QUIC?

A. At the moment, Chrome, which supports Google QUIC, although that is quickly turning into IETF QUIC with every new Chrome release. Firefox is implementing IETF QUIC and I think is shipping it as part of their nightly builds. Everybody else is Chrome- or Chromium-based, so Microsoft Edge, Safari, etc. will all get it from Chrome and can then enable it at their leisure.

Q. How robust is QUIC to packet loss?

A. Currently, QUIC uses TCP congestion control algorithms, so it's very comparable to TCP. And like TCP, it doesn't do forward error correction, which was something that Google QUIC did initially.

Q. Can you explain the term "ossification"?

A. Basically, it means that the network makes – often too narrow – assumptions about what "valid" traffic for a given protocol should look like, based on past and current traffic patterns. This limits the evolvability of a protocol, i.e., the network "ossifies" that protocol. For example, TCP has only had a small set of TCP options defined, and various middleboxes in the network therefore dropped TCP packets with options they didn't recognize from the past. Some of these middleboxes might eventually be updated, but enough won't be that TCP options – TCP's main extension mechanism – have become much less useful than envisioned. The situation is worse when trying to redefine the meaning of header bits that were originally specified as reserved. What we have learned from this is that a protocol must carefully limit the amount of plain-text bits it exposes to the network if it wants to retain long-term evolvability, which is a key goal for QUIC.

Q. What does the acronym QUIC stand for?

A. It's actually not an acronym anymore. When Jim Roskind came up with Google QUIC, it originally expanded to "Quick UDP Internet Connections," but everyone has since decided that QUIC is simply the name of the protocol and not an acronym.

Q. Given that QUIC is still based on IP and UDP, wouldn't the middlebox issues remain?

A. No, they wouldn't, at least not to the degree they do for TCP. UDP is a very minimal protocol, and while middleboxes can drop UDP entirely, which would break QUIC, everything else they might do (rewrite IP addresses and port numbers for NAT), QUIC can handle. One caveat is that UDP was traditionally mostly used for DNS, so many middleboxes use shorter binding lifetimes for UDP flows, but QUIC can deal with that as well. Specifically, there are measurements showing that UDP works on about 95% of all paths, and there's some anecdotal evidence that where it doesn't work it's typically because enterprise networks just block UDP completely.

Q. Do you expect push-back from network vendors or governments when they realize that they can no longer do deep packet inspection and modification?

A. Yes, we do, and we've seen it heavily already. So, the question is who can push harder. There's a big group of US banks that showed up in the IETF to complain about TLS 1.3 enabling forward secrecy because, if I recall correctly, a whole bunch of their compliance checks were based around taking traces of TLS 1.1 and 1.2, storing them and then decrypting them later, which TLS 1.3 makes impossible. They were not happy, but it's the right thing to do for the web.

Q. Can you explain where the latency/performance benefit comes from? Is it because UDP replaces TCP and a lightweight implementation is possible?

A. A lot of it comes from a faster handshake. You have these TLS session tickets that let you basically send your "GET" with the first handshake packet to the server and have the server return data within its first packets. That's where a lot of the latency benefits come from. In terms of bulk throughput there's actually not a whole lot of benefit, because we're just using TCP congestion control. So, if you want to push a lot of bytes, performance is going to be more or less the same between QUIC and TCP. After a few hundred KB or so it doesn't really matter anymore what you're using. For very fast paths, e.g., in datacenters, until we see some NIC support for crypto offload and other QUIC operations, QUIC is not going to be able to compete with TCP when it comes to high-speed bulk data. QUIC adds another AES operation in addition to basic TLS, which makes it hard to offload to current-generation NICs. This will change and I think this bottleneck will disappear, but at the moment QUIC is not your protocol if you want to do datacenter bulk data.

Q. Do you have any measurements comparing the energy/battery use of current QUIC implementations to the traditional stack for mobile platforms?

A. Not at the moment. Sorry.

Q. How do you guarantee reliability with QUIC? Wouldn't we have to borrow from TCP here as well?

A. We're borrowing exactly the same concepts that TCP uses for its reliability mechanisms. UDP, on the other hand, is built on the idea of "sending a packet and forgetting about it": a message sent via UDP may or may not be delivered to the recipient; there is no guarantee. QUIC detects and recovers from UDP loss.

Q. I understand and agree on not having the protocol in the kernel, and on the ability to evolve rapidly (QUIC updates roll out with applications). However, what about the security implications of this? It seems like it creates massive holes in the security paradigm.

A. It depends. If you trust your kernel, sure, but I think a lot of applications are actually happy to do this in the app and only trust the kernel with already-encrypted data. So, it is changing the paradigm a little bit. But with TLS, until very recently, it already happened at the application layer. We've only recently seen TLS NIC support.

Q. Your layer diagram shows QUIC taking over some HTTP functions and/or changing the SAP between OSI layer 4 and layer 7. Will this be a problem for having other protocols such as SNMP, FTP, etc. adopt QUIC?

A. Yes, I think that might be an artifact in the diagram. This is something that changed in the working group. In the beginning, when we started standardizing QUIC, we talked about an application and QUIC: the application was the thing on top of HTTP, and QUIC was providing HTTP semantics and a transport protocol. That view has now somewhat evolved. Now when we're talking about an application and QUIC, HTTP is the application and QUIC is the transport protocol for it – and there will be other applications on top of QUIC. So that diagram might have been a little bit stale, or maybe I have not updated it in a while, but the model very much is that QUIC at this time intends to be a general-purpose transport protocol, although with a bunch of features that are inspired by features the web needs but that are hopefully useful for other protocols. So, there should be a relatively clean interface that other applications can layer on top of.

Q. Does QUIC provide a Forward Error Correction (FEC) option?

A. Not at the moment. Google QUIC did initially, but reported mixed results, and therefore the current IETF QUIC does not use FEC. However, there's now a better understanding of a different flavor of FEC that might actually be interesting. We've talked to some people that want to revisit that decision and maybe add FEC back.

Q. How would QUIC apply to non-HTTP protocols? In particular, data-centric protocols like SMB, NFS or iSCSI?

A. If you can run over TCP, you can run over QUIC, because QUIC in a sense degrades into TCP if you only use one stream; with that one stream on one connection you basically have a TCP-like transport. If you can run on top of that, you can run on top of QUIC without changing your application protocol too much. If you want to take full advantage of QUIC, specifically the multiple parallel streams and prioritization and all that, you will need to change your application protocol. If you have an application protocol that can run on top of SCTP, that binding is going to be very similar to a QUIC binding.

Q. Do / will corporate middleboxes block QUIC to preserve inspection abilities?

A. All the things CSOs rely on to protect enterprise networks become harder to use because they can't see the traffic, let alone filter or block it. These changes pose challenges for regulated industries such as financial services, where organizations have to archive all incoming and outgoing communications for compliance purposes.

Q. Do standard HTTP engines/servers like Nginx support QUIC?

A. As of May 2019, Nginx announced that it had started development of QUIC support. Many other servers, such as h2o and LiteSpeed, will also start supporting QUIC.

Q. Do typical QUIC implementations support POSIX socket APIs yet? If not, is there a plan to incorporate POSIX API wrappers over QUIC APIs to make them more POSIX compliant?

A. No QUIC stack that I know of has a POSIX abstraction API. Some have APIs that are somewhat inspired by POSIX, but not to a degree where you could simply link against a QUIC stack. One key reason is that if you want to maximize performance and minimize latencies, the POSIX abstractions actually get in your way and make it more difficult. Applications that want to optimize performance need to tie in very deeply and directly with their transport stacks.

Q. Does QUIC have a better future than Fibre Channel over Ethernet (FCoE) has had till now?

A. Those are two protocols with vastly different scopes of applicability. QUIC's future certainly seems very bright, at least in the web ecosystem – pretty much all players plan on migrating to HTTP/3 on top of QUIC.

Q. How does QUIC affect current hardware deployments?

A. I don't see QUIC necessitating changes here.

Q. How is SNI handled for web hosting of multiple domains on one server?

A. QUIC uses the SNI in exactly the same way as TLS.

Q. How does QUIC perform compared to TCP for a locally scoped IoT network (or, say, an ad hoc network made of mobile devices)?

A. I'm not aware of a comparison of QUIC and TCP/TLS traffic on IoT networks. I have deployed my QUIC stack on two embedded platforms (RIOT-OS and Particle DeviceOS), so it is feasible to deploy QUIC on at least the higher end of embedded boards, but I have not had time to do a full performance analysis. (See https://eggert.org/papers/2020-ndss-quic-iot.pdf for what I measured.)

Q. Do end devices need to be adapted to QUIC? If yes, how?

A. No. If the system allows applications to send and receive UDP traffic, QUIC can be deployed on them.
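
As a small illustration of that point (not from the webcast), the sketch below sends and receives a single UDP datagram with ordinary sockets. A QUIC stack does essentially the same thing, placing encrypted QUIC packets in the datagram payload, which is why no operating system or network changes are needed to deploy it. The address and port are placeholders.

```python
# Tiny sketch: QUIC packets travel inside ordinary UDP datagrams like this one,
# so any platform that can send and receive UDP can carry QUIC traffic.
import socket

HOST, PORT = "127.0.0.1", 4433   # placeholder address; 4433 is a common QUIC test port

# Receiver: bind a UDP socket and wait for one datagram.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind((HOST, PORT))

# Sender: a QUIC stack would place an encrypted QUIC packet in the payload.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"payload bytes (a QUIC packet would go here)", (HOST, PORT))

data, addr = rx.recvfrom(2048)
print(f"received {len(data)} bytes from {addr}")
tx.close()
rx.close()
```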

Q. Is it possible to implement QUIC in an IoT environment, considering the CPU and memory cost of QUIC and its code size?

A. Yes. I have a proof-of-concept of my QUIC stack on two IoT systems, where a simple client app, QUIC and TLS together use about 64 KB of flash and maybe 10-20 KB of RAM. See https://eggert.org/papers/2020-ndss-quic-iot.pdf.

Q. Is the QUIC API exposed to applications (i.e., other than HTTP) message-based (e.g., like UDP) or byte-stream (e.g., like TCP)?

A. There really is no common QUIC API that multiple different stacks would all implement. Each stack defines its own API, which is tailored to the needs of the specific applications it intends to support.

Q. My understanding is that TCP is 'consistent' across all implementations, but I see individual versions from each vendor involved currently (and interop being tested, etc.) – why are individual/custom variants required?

A. All vendors are implementing the current version of IETF QUIC. QUIC makes it very easy to negotiate use of a private or proprietary variant during the standard handshake, and some vendors may eventually use that to migrate away from standard QUIC. We're certainly testing that capability during interop, but I'm not aware of anyone planning on shipping proprietary versions at the moment.

Q. SMB over QUIC comment: I can't speak for Microsoft, of course, but I have been through some of their presentations on SMB over QUIC. One feature of using QUIC is connection stability, particularly over WiFi. The QUIC connection can survive a transfer from one access point to another on different routers, for example.

A. Yes, QUIC uses connection identifiers instead of IP addresses and ports to identify connections, so QUIC connections can survive changes to those, such as when access networks are changed.

Q. So QUIC is basically utilizing UDP in a new way?

A. Not really. QUIC is using UDP to send packets just as any other application would.

Q. To be clearer on my security concerns: I'm thinking of malicious apps/actors doing or hiding data exfiltration inside the new QUIC environment/protocol. Our legacy environment, for all its problems, also provides the ability to inspect and prevent inappropriate data transmission. How do we do this with QUIC?

A. You need to have control of the endpoint and make the QUIC stack export TLS keying material.

Q. UDP + CC + TLS + HTTP = QUIC. What does "CC" stand for?

A. Congestion control.


Encryption 101: Keeping Secrets Secret

Alex McDonald

Apr 20, 2020


Encryption has been used through the ages to protect information, authenticate messages, communicate secretly in the open, and even to check that messages were properly transmitted and received without having been tampered with. Now, it's our first go-to tool for making sure that data simply isn't readable, hearable or viewable by enemy agents, smart surveillance software or other malign actors.

But how does encryption actually work, and how is it managed? How do we ensure security and protection of our data, when all we can keep as secret are the keys to unlock it? How do we protect those keys; i.e., "Who will guard the guards themselves?"

It's a big topic that we're breaking down into three sessions as part of our Storage Networking Security Webcast Series: Encryption 101, Key Management 101, and Applied Cryptography.

Join us on May 20th for the first of these sessions, Storage Networking Security: Encryption 101, where our security experts will cover:

  • A brief history of Encryption
  • Cryptography basics
  • Definition of terms – Entropy, Cipher, Symmetric & Asymmetric Keys, Certificates and Digital signatures, etc. 
  • Introduction to Key Management

I hope you will register today to join us on May 20th. Our experts will be on-hand to answer your questions.
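
As a small taste of what the session will cover, here is an illustrative example (not from the webcast) of symmetric encryption using the widely used Python cryptography package: a single secret key both encrypts and decrypts the data, which is exactly why key management gets its own session in the series.

```python
# Illustrative symmetric encryption with the Python "cryptography" package:
# one secret key both encrypts and decrypts, so protecting the key is everything.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret that must be protected and managed
cipher = Fernet(key)

token = cipher.encrypt(b"the data we want to keep secret")
print(token)                        # ciphertext is unreadable without the key

plaintext = cipher.decrypt(token)
assert plaintext == b"the data we want to keep secret"
```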



Share Your Experiences in Programming PM!

Marty Foltyn

Apr 14, 2020

by Jim Fister, SNIA Director of Persistent Memory Enabling

Last year, the University of California San Diego (UCSD) Non-Volatile Systems Lab (NVSL) teamed with the Storage Networking Industry Association (SNIA) to launch a new conference, Persistent Programming In Real Life (PIRL). While not an effort to set the record for acronyms in a conference announcement, we did consider it a side goal. The PIRL conference was focused on gathering a group of developers and architects for persistent memory to discuss real-world results. We wanted to know what worked and what didn't, what was hard and what was easy, and how we could help more developers move forward.

You don't need another pep talk about how the world has changed and all the things you need to do (though staying home and washing your hands is a pretty good idea right now). But if you'd like a pep talk on sharing your experiences with persistent memory programming, then consider this just what you need. We believe that the spirit of PIRL — discussing the results of persistent memory programming in real life — should continue.

If you're not aware, SNIA has been delivering some very popular webcasts on persistent programming, cloud storage, and a variety of other topics. SNIA has a great new webcast featuring PIRL alumnus Steve Heller, SNIA CMSI co-chair Alex McDonald, and me on the SNIA NVDIMM Programming Challenge and the winning entry. You can find more information and view it on demand at https://www.brighttalk.com/webcast/663/389451.

We would like to highlight more "In Real Life" topics via our SNIA webcast channel. Therefore, SNIA and UCSD NVSL have teamed up to create a submission portal for anyone interested in discussing their real-world persistent memory experiences. You can submit a topic at https://docs.google.com/forms/d/e/1FAIpQLSe_Ypo_sf1xxFcPD1F7se02jOWrdslosUnvwyS0RwcQpWAHiA/viewform and we will evaluate your submission. Accepted submissions will be featured in conjunction with the SNIA channel over the coming months.

As a final note, this year's PIRL conference is currently scheduled for July. Even though most software developers are already used to social isolation and distancing from their peers, our organizing team has kept abreast of all the latest information to make a decision on the feasibility of holding an in-person conference on that date. In our last meeting, we agreed that it would not be prudent to hold the conference on the July date, and we have tentatively rescheduled the in-person conference to October 13-14, 2020. We will announce an exact date and our criteria for moving forward in the coming weeks, so stay tuned!

