
Everyone Wants Their Java to Persist

Jim Fister

May 19, 2020


In this time of lockdown, I’m sure we’re all getting a little off kilter. I mean, it’s one thing to get caught up listening to tunes in your office to avoid going out and alerting your family of the fact that you haven’t changed your shirt in two days. It’s another thing to not know where a clean coffee cup is in the house so you can fill it and face the day starting sometime between 5AM and Noon. Okay, maybe we’re just talking about me, sorry. But you get the point.

Wouldn’t it be great if we had some caffeinated source that was good forever? I mean… persistence of Java? At this point, it’s not just me.

Okay, that’s not what this webinar will be talking about, but it’s close. SNIA member Intel is offering an overview of the ways to utilize persistent memory in the Java environment. In my nearly two years here at SNIA, this has been one of the most-requested topics. Steve Dohrmann and Soji Denloye are two of the brightest minds in enabling persistence, and this is sure to be an insightful presentation.

Persistent memory application capabilities are growing significantly. Since the publication of the persistent memory programming model developed by the SNIA NVM Programming Technical Work Group (TWG), new language support seems to be appearing every day. Don’t miss the opportunity to see the growth of PM programming in such a crucial space as Java.

The presentation is on BrightTALK and will be live on May 27th at 10:00 am Pacific time. You can see the details at this link.

Now I just have to find a clean cup.

This post is also cross-posted at the PIRL Blog. PIRL is a joint effort by SNIA and UCSD’s Non-Volatile Systems Lab to advance the conversation on persistent memory programming. Check out other entries here.


The Power of Data Aggregation during a Pandemic

Glyn Bowden

May 19, 2020


The new coronavirus that has been ravaging countries and sending us all into lockdown is the most observed pandemic we’ve ever experienced. Data about the virus itself, and perhaps more importantly about the nations on which it is having an impact, has been shared from multiple sources. These include academic institutions such as Johns Hopkins University, national governments, and international organisations such as the World Health Organisation. The data has been made available in many formats, from programmatically accessible APIs to downloadable comma-delimited files to prepared data visualisations. We’ve never been more informed about the current status of anything.

Data Aggregation

What this newfound wealth of data has also brought to light is the true power of data aggregation. There are really only a limited number of conclusions that can be drawn from the number of active and resolved cases per nation and region. Over time, this can show us a trend, and it also gives a very real snapshot of where we stand today. However, if we layer on additional data, such as when actions were taken, we can see a clear picture of the impact of each strategy over time. With each nation taking differing approaches based on its own perceived position, mixed with culture and other socio-economic factors, we end up with a good side-by-side comparison of the strategies and their effectiveness. This is helping organisations and governments make decisions going forward, but data scientists globally are urging caution. In fact, the data we are producing today by processing all of these feeds may turn out to be far more valuable for the next pandemic than it will for this one. It will be the analysis that helps create the “new normal.”
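As a toy illustration of that kind of layering (with entirely invented numbers, not real case data), joining a case-count series to the date a measure was taken lets each observation be read relative to the intervention:

```python
from datetime import date

# Entirely hypothetical numbers, for illustration only.
weekly_cases = {
    date(2020, 3, 20): 120,
    date(2020, 3, 27): 340,
    date(2020, 4, 3): 410,
    date(2020, 4, 10): 280,
}

# A second data source, layered on top: when a lockdown was announced.
lockdown_announced = date(2020, 3, 27)

# Annotate each observation with its position relative to the intervention,
# so trends can be compared across nations that acted at different times.
annotated = {
    d: {"cases": n, "days_since_lockdown": (d - lockdown_announced).days}
    for d, n in weekly_cases.items()
}
```

Once every nation's series carries a "days since action" column like this, their curves can be aligned on the intervention date rather than the calendar, which is what makes the side-by-side comparison meaningful.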

So why the suggestions of caution? Surely the more data we have the better? The answer is an age-old problem relating to, well, age. Data fluctuates, particularly medical data, and what we currently have is a small and very recent data set compared to the global population. The challenge at the moment is that data is often being presented to the public in a raw or semi-raw form with little regard for how it might be interpreted.

The Need for Context

Therefore, when we see a reduction in infections within a country, it could be interpreted as an improvement in the situation rather than an expected, simple variation. Data has the most value when it is presented in context. As of this writing on May 15, 2020, the total number of recorded deaths from the novel coronavirus stood at more than 300,000. This is a large number and is bound to increase, exponentially for a time, but it needs to be understood in context. It can be large or small depending on the time frame, the geographic scale, and the demographic composition of the population affected. For example, one statistic that has not been clearly shown in the press covering the numbers is the number expressed as a percentage of the population. This is largely because it does not make such compelling reading: in the early stages, these numbers are thankfully very small. However, the context provided there is very important. If 5,000 people out of a population of 100,000 per square km are sick, that is very different from 5,000 out of a population of 1,000 per square km. It describes the situation in the context of the population that is feeling the impact.
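The normalisation being described can be sketched in a few lines of Python; the figures below are invented purely to show how identical raw counts diverge once population size is taken into account:

```python
def per_100k(cases: int, population: int) -> float:
    """Express a raw case count as a rate per 100,000 people."""
    return cases / population * 100_000

# Invented figures: the same raw count across two very different populations.
regions = {
    "town": {"cases": 5_000, "population": 100_000},
    "city": {"cases": 5_000, "population": 1_000_000},
}

rates = {name: per_100k(r["cases"], r["population"]) for name, r in regions.items()}
# The raw counts are identical, but the per-capita rates differ tenfold.
```

The same raw number of 5,000 cases yields a rate of 5,000 per 100k in the town and only 500 per 100k in the city, which is exactly the context the raw count hides.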

This is information presented not in its raw format but normalised, cleaned, and placed alongside other influencing data. This is what data scientists have been doing and sharing with the community in order to derive valuable insights from the mass of data we have. The other work going on is around the augmentation of that data to provide new contexts and new insights that are proving ever more valuable in how societies react to, and even predict, the impact of Covid-19.

One such effort is the inclusion of age and prosperity data. This leads to an understanding of how activities such as shuttering businesses, which disproportionately impact lower-paid workers, will cause poverty or other socio-economic challenges; how the distribution network will need to adapt to serve the new priorities of food and health items in the absence of luxury items; and how transport networks that provide the vital links for our health professionals and key workers can be maintained in a way that provides frequent links whilst still allowing for social distancing, without restricting transport so much that overcrowding becomes the unintended consequence.

Data Lessons Learned

What all of this work around data has taught eager data scientists and engineers is that there are really two personas when it comes to data science: producers and consumers. There are also multiple levels of consumers: the intended audience, often professionals who bring their own implied context, and passive observers who are exposed to the data through news reporting, internet searches and general discovery.

The data scientists and engineers are the producers; they have a specific view of the data and a firm understanding of how to interpret what they are working with and which bits can be safely disregarded. For them, a scatter plot, a hexbin map or another such visualisation can be processed intuitively and provides immediate understanding. The same can be said for the intended consumers. However, the general public do not have the experience and training required to make such judgements, so the way data is presented to them must inform whilst taking care of all the assumptions and context. The skills the population learns in parsing and interpreting data, along with the fine-tuning of data professionals' skills in presenting information in consumable packages, will coalesce to bring a data literacy never before seen. The same skill set can then be leveraged for presenting social, political, financial and many other verticals of data to a data-savvy populace.

The Tip of the Iceberg

Guidance and principles on how to get started with data assessment and make sense of the numbers are needed. They should be aimed at citizen data enthusiasts, journalists who might need help interpreting rapidly presented data, and anyone consuming the data who would like to understand its context and provenance.

The work getting the focus in the news around Covid-19 data is the tip of a very large iceberg, and likely not the work that will have the most impact over the longer term. The ability to educate millions on the meaning of data presented in context will drive new social conversations far into the future. It will allow us to understand how our societies and economies really work, and what our priorities should be, so that when the next pandemic hits the world, we are ready and informed. The learning we are doing now will be the best defence for the next event, whilst helping us make immediate decisions to inform our reaction to this one.



Scaling Storage in Hybrid Cloud and Multicloud Environments

Pekon Gupta

May 13, 2020


As data growth in enterprises continues to skyrocket, striking a balance between cost and scalability becomes a challenge. Businesses face a key decision on how to deploy their cloud services: on premises, in a hybrid cloud, or in multicloud deployments. So, what are enterprise IT organizations supposed to do, given that 'run anything anywhere' is becoming more important than ever?

Find out on June 11, 2020, when the SNIA Cloud Storage Technologies Initiative will host a live webcast, “Storage Scalability in Hybrid Cloud and Multicloud Environments.” This webcast will help architects and consumers of hybrid cloud and multicloud storage solutions better understand:

  • Trends and benefits of hybrid cloud storage and multicloud deployments
  • The range of technologies and capabilities which will help enterprises, hyperscalers and cloud service providers (CSPs) serve their customers
  • How scalability differs in block vs. file workloads
  • Key factors to keep in mind when considering a 'run anything anywhere' objective

We hope to see you on June 11th. Our expert presenters will all be on hand to answer your questions. Register today.



Business Resiliency in a Kubernetes World

Jim Fister

May 12, 2020


At the 2018 KubeCon keynote, Monzo Bank explained the potential risk of running a single massive Kubernetes cluster. A minor conflict between etcd and Java led to an outage during one of their busiest business days, prompting questions like “If a cluster goes down, can our business keep functioning?” Understanding the business continuity implications of multiple Kubernetes clusters is an important topic and a key area of debate.

It’s an opportunity for the SNIA Cloud Storage Technologies Initiative (CSTI) to host “A Multi-tenant Multi-cluster Kubernetes ‘Datapocalypse’ is Coming,” a live webcast on June 23, 2020, where Kubernetes expert Paul Burt will dive into:

  • The history of multi-cluster Kubernetes
  • How multi-cluster setups could affect data heavy workloads (such as multiple microservices backed by independent data stores)
  • Managing multiple clusters
  • Keeping the business functioning if a cluster goes down
  • How to prepare for the coming “datapocalypse”

Multi-cluster Kubernetes that provides robustness and resiliency is rapidly moving from a “best practice” to a “must-have.” Register today to save your spot on June 23rd to learn more and have your questions answered.

Need a refresher on Kubernetes? Check out the CSTI’s 3-part Kubernetes in the Cloud series to get up to speed.



J Metz

May 12, 2020


There's a lot that goes into effective key management. In order to properly use cryptography to protect information, one has to ensure that the associated cryptographic keys themselves are also protected. Careful attention must be paid to how cryptographic keys are generated, distributed, used, stored, replaced and destroyed in order to ensure that the security of cryptographic implementations is not compromised.

It's the next topic the SNIA Networking Storage Forum is going to cover in our Storage Networking Security Webcast Series. Join us on June 10, 2020 for Key Management 101 where security expert and Dell Technologies distinguished engineer, Judith Furlong, will introduce the fundamentals of cryptographic key management.

Key (see what I did there?) topics will include:

  • Key lifecycles
  • Key generation
  • Key distribution
  • Symmetric vs. asymmetric key management, and
  • Integrated vs. centralized key management models
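To make the lifecycle idea concrete, here is a minimal, illustrative sketch of three of those stages (generation, use, and replacement) using only Python's standard library. It is a toy, not a production key-management system, and is not tied to anything in the webcast itself:

```python
import hashlib
import hmac
import secrets

# Illustrative only: a toy symmetric-key lifecycle, not a production KMS.

def generate_key() -> bytes:
    # Generation: 256 bits from a cryptographically secure source.
    return secrets.token_bytes(32)

def sign(key: bytes, message: bytes) -> bytes:
    # Use: authenticate a message under the current key.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(key, message), tag)

# Lifecycle in miniature: generate, use, then replace (rotate) the key.
key_v1 = generate_key()
tag = sign(key_v1, b"audit record")

key_v2 = generate_key()  # rotation: a fresh key supersedes the old one
assert verify(key_v1, b"audit record", tag)
assert not verify(key_v2, b"audit record", tag)  # old tags fail under the new key
```

Even this toy shows why rotation needs planning: anything authenticated (or encrypted) under the old key must be re-protected or the old key retained for verification, which is precisely the kind of lifecycle question the presentation addresses.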

In addition, Judith will dive into relevant standards, protocols and industry best practices. Register today to save your spot for June 10th. We hope to see you there.



SNIA Exhibits at OCP Virtual Summit May 12-15, 2020 – SFF Standards featured in Thursday Sessions

Marty Foltyn

May 6, 2020


All SNIA members and colleagues are welcome to register to attend the free online Open Compute Project (OCP) Virtual Summit, May 12-15, 2020.

SNIA SFF standards will be represented in presentations on Thursday, May 14 (all times Pacific).

The SNIA booth (link live at 9:00 am May 12) will be open from 9 am to 4 pm each day of the OCP Summit and will feature Chat with SNIA Experts: scheduled times where SNIA volunteer leadership will answer questions on SNIA technical work, SNIA education, standards and adoption, and vision for 2020 and beyond.

The current schedule is below – and is continually being updated with more speakers and topics – so be sure to bookmark this page.

And let us know if you want to add a topic to discuss!

Tuesday, May 12:

  • 10:00 am – 11:00 am – SNIA Education, Standards, and Technology Adoption, Erin Weiner, SNIA Membership Services Manager
  • 11:00 am – 12:00 pm – Uniting Compute, Memory, and Storage, Jenni Dietz, Intel/SNIA CMSI Co-Chair
  • 11:00 am – 12:00 pm – Computational Storage Standards and Direction, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 11:00 am – 12:00 pm – SNIA Swordfish™ and Redfish Standards and Adoption, Don Deel, NetApp, SNIA SMI Chair
  • 12:00 pm – 1:00 pm – SSDs and Form Factors, Cameron Brett, KIOXIA, SNIA SSD SIG Co-Chair
  • 12:00 pm – 1:00 pm – Persistent Memory Standards and Adoption, Jim Fister, SNIA Persistent Memory Enablement Director
  • 1:00 pm – 2:00 pm – SNIA Technical Activities and Direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – SNIA Education, Standards, Promotion, and Technology Adoption, Michael Meleedy, SNIA Business Operations Director

Wednesday, May 13:

  • 11:00 am – 12:00 pm – SNIA Education, Standards, and Technology Adoption, Arnold Jones, SNIA Technical Council Managing Director
  • 11:00 am – 12:00 pm – Persistent Memory Standards and Adoption, Jim Fister, SNIA Persistent Memory Enablement Director
  • 12:00 pm – 1:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 12:00 pm – 1:00 pm – SSDs and Form Factors, Jonmichael Hands, Intel, SNIA SSD SIG Co-Chair
  • 1:00 pm – 2:00 pm – SNIA NVMe and NVMe-oF Standards and Direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – SNIA Swordfish™ and Redfish Standards and Direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair, and Barry Kittner, Intel, SNIA SMI Marketing Chair

Thursday, May 14:

  • 11:00 am – 12:00 pm – Uniting Compute, Memory, and Storage, Jenni Dietz, Intel, SNIA CMSI Co-Chair
  • 11:00 am – 12:00 pm – SNIA Education, Standards, and Technology Adoption, Erin Weiner, SNIA Membership Services Manager
  • 12:00 pm – 1:00 pm – SNIA Swordfish™ and Redfish Standards Activities and Direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair, and Barry Kittner, Intel, SNIA SMI Marketing Chair
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 1:00 pm – 2:00 pm – SNIA NVMe and NVMe-oF Standards Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair

Friday, May 15: 

  • 11:00 am – 12:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 11:00 am – 12:00 pm – SNIA Swordfish™ and Redfish Standards and Direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair
  • 12:00 pm – 1:00 pm – SNIA NVMe and NVMe-oF Standards Activities and Direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – SNIA Education, Standards, and Technology Adoption, Michael Meleedy, SNIA Business Services Director


Register today to attend the OCP Virtual Summit. Registration is free for all attendees and is open for everyone, not just those who were registered for the in-person Summit. The SNIA exhibit will be found here once the Summit is live.
 
Please note, the virtual summit will be a 3D environment that is best experienced on a laptop or desktop computer; however, a simplified mobile-responsive version will also be available for attendees. No additional hardware, software or plugins are required.

