
Why Object Storage is Important

John Kim

Jan 3, 2020

Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day. Object storage adoption is on the rise. That’s why the SNIA Networking Storage Forum (NSF) is hosting “Object Storage: What, How and Why.” This webcast, with experts Chris Evans of Bookend LTD, Rick Vanover of Veeam, and Alex McDonald, Vice Chair of SNIA NSF and NetApp, will explain how object storage works, its benefits, and why it’s important. Like other storage technologies, object storage brings its own set of unique characteristics to the market. Join us on February 19th at 10:00 am PT/1:00 pm ET to learn:
  • What object storage is and what it does
  • How to use object storage
  • Essential characteristics of typical consumption
  • Why object storage is important to data creators and consumers everywhere
Save your place now and register for this webcast on February 19th. We look forward to seeing you.
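For readers new to the model, below is a minimal sketch (illustrative only, not webcast material) of the basic object access pattern the session will explain: whole objects written and read by key over HTTP. It uses libcurl; the endpoint URL, bucket and key are hypothetical, and the authentication headers a real S3-compatible service requires are omitted.

```c
/*
 * Illustrative sketch of the object access pattern: PUT a whole object
 * under a key over HTTP using libcurl. The URL, bucket and key are
 * hypothetical, and the authentication headers a real S3-compatible
 * service requires are omitted. Build with: cc put_object.c -lcurl
 */
#include <curl/curl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *data = "hello, object storage";                          /* the object's bytes */
    const char *url  = "https://objects.example.com/my-bucket/hello.txt"; /* placeholder */

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* The whole object is written under its key; there are no block or
     * file semantics, just the object, its key and (optionally) metadata. */
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(data));

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "PUT failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```

Reading an object back is the same pattern in reverse: a GET on the same key returns the whole object.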


24G SAS Feature Focus: Inherent Scalability and Flexibility

STA Forum

Dec 12, 2019

[Editor’s Note: This is part of a series of blog posts discussing 24G SAS features.]

By: Jeremiah Tussey, Vice President, SCSI Trade Association; Alliances Manager, Product Marketing, Data Center Solutions, Microchip Technology, December 12, 2019

Serial Attached SCSI (SAS) and the SCSI protocol are undergoing an evolution, transitioning to 24G SAS and implementing new features and enhancements in a well-established storage technology. SAS has set the bar in the storage industry with inherent scalability and flexibility in modern data center, server and storage topologies. This tested ability carries forward to the latest update in the SAS technology roadmap: 24G SAS.

Unsurpassed Scalability

SAS technology addresses a very large and continuously growing market, natively supporting many host connections to thousands of target devices in a single topology. It enables optimized capacity storage solutions and provides the interconnect distance that empowers SAS fabric solutions. Today, SAS serves cold cloud data centers, high-performance computing and direct server markets. SAS is effectively the persistent storage interconnect, with flexibility in attached media solutions.

A variety of interconnect solutions support SAS, with copper for short to medium reach and fiber optics for long-reach applications. These cabling options utilize Mini-SAS HD and Quad Small Form-Factor Pluggable (QSFP) connectors, as well as the recent SlimSAS™ and MiniLink definitions added to the latest SAS standard. Additionally, native failover and dual-port features allow multiple domains to be configured and then automatically take over operations in the case of component failures, broken cable connections or domain failures of any form. These inherent features were built into the protocol with high reliability, availability and scalability (RAS) in mind. Overall, they have maintained per-lane performance, data integrity, flexible architectures and native hot-plug support for adding or removing enterprise servers and storage for decades.
 

Scalability in External Storage Architectures

Flexibility for the Future

What makes this support come together is the flexibility that is built into SAS. SAS supports simultaneous SAS and SATA end devices, using the Serial SCSI Protocol (SSP) for SAS connections and the Serial ATA Tunneling Protocol (STP) for SATA connections. Because SAS technology can support two transport protocols simultaneously, SAS has spanned, and will continue to span, all storage types. This flexibility allows high-performance SAS SSDs with MultiLink connections to deliver optimized throughput in demanding applications with SAS infrastructure. Traditional enterprise SAS hard disk drives (HDDs) are still effectively utilized in more practical, medium IO-intensive, cost-sensitive applications. With the rising use of cost-optimized SAS or SATA SSD alternatives, SSDs are increasingly used in these more IO-intensive and cost-sensitive applications, and SAS has effortlessly enabled that media transition as well.

Better yet, SAS enables data centers that are filled with capacity-optimized SAS and SATA HDDs, which still maintain a nearly 10 times cost advantage over NAND flash-based solutions. New standards and technology enhancements have been introduced in HDDs to continue supporting the exponential growth in storage demands that cannot be effectively covered with SSD solutions. SMR and hybrid HDD solutions with dynamic configuration, available only as a SCSI feature, are recent introductions ideal for these capacity-demanding applications. Last but not least, SAS topologies still support backup technologies such as SAS or SATA tape products, allowing access to historical data storage and continued use of the latest backup solutions based on tape, DVD or Blu-ray backup storage arrays, even in new 24G storage deployments.

SAS Innovations in HDD and SSD Technologies
  • SSHD (Solid State Hybrid Drive) – hybridization of performance and capacity
  • Storage Intelligence – SSD optimization, including garbage collection
  • TDMR (Two-Dimensional Magnetic Recording) – faster writes
  • Helium – capacity optimization
  • SMR (Shingled Magnetic Recording) – capacity optimization
  • MultiLink – bandwidth
  • Multiple Actuator – IOPs per TB
  • MAMR (Microwave-Assisted Magnetic Recording) – capacity optimization
  • HAMR (Heat-Assisted Magnetic Recording) – capacity optimization
  • Hybrid SMR – Configurable SMR vs. standard capacity; helps with SKU management
The ecosystem is on track for 24G SAS production readiness for the upcoming deployment of next-generation server platforms. Analyzers and test equipment have been available for some time, with cables and connectors in existing and new form factors ready for 24G SAS. Next-generation SAS controller and expander products are aligned with upcoming PCIe Gen4 platform launches. New 12G SAS and 6G SATA HDD/SSD capabilities that will intersect with the 24G SAS ecosystem include MultiLink SAS™ SSDs and dual-actuator HDDs for increased IOPS and bandwidth, hybrid SMR for flexible and increased capacity, and HAMR/MAMR technologies for increased capacity. SAS continues to evolve through innovation, targeting enhanced performance and features while continuing its inherent scalability and flexibility in modern data center, server and storage topologies.
SAS – Preserving the Past, Creating the Future



Hyperscalers Take on NVMe™ Cloud Storage Questions

J Metz

Dec 2, 2019


Our recent webcast on how hyperscalers Facebook and Microsoft are working together to merge their SSD drive requirements generated a lot of interesting questions. If you missed "How Facebook & Microsoft Leverage NVMe Cloud Storage," you can watch it on-demand. As promised at our live event, here are answers to the questions we received.

Q. How does Facebook or Microsoft see Zoned Name Spaces being used?

A. Zoned Name Spaces are how we will consume QLC NAND broadly. The ability to write to the NAND sequentially in large increments that lay out nicely on the media allows for very little write amplification in the device.
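As a hedged illustration of the zoned model described in this answer (not code discussed in the webcast), the sketch below lists zones and their write pointers on a Linux zoned block device. It assumes a kernel with zoned block device support; the device path is a placeholder and error handling is abbreviated.

```c
/*
 * Minimal sketch: enumerate zones on a Linux zoned block device
 * (for example, a ZNS namespace exposed as a block device).
 * Assumes a kernel with zoned block device support; the device
 * path is a placeholder and error handling is abbreviated.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blkzoned.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";  /* placeholder device */
    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned int nr = 16;  /* ask for the first 16 zones */
    size_t sz = sizeof(struct blk_zone_report) + nr * sizeof(struct blk_zone);
    struct blk_zone_report *rep = calloc(1, sz);
    if (!rep)
        return 1;
    rep->sector = 0;       /* start reporting from the beginning of the device */
    rep->nr_zones = nr;

    if (ioctl(fd, BLKREPORTZONE, rep) < 0) {
        perror("BLKREPORTZONE");
        return 1;
    }

    /*
     * Each zone tracks a write pointer (wp); hosts append sequentially at
     * the write pointer, which is what keeps device-side write amplification
     * low on QLC media.
     */
    for (unsigned int i = 0; i < rep->nr_zones; i++)
        printf("zone %u: start=%llu len=%llu wp=%llu cond=0x%x\n", i,
               (unsigned long long)rep->zones[i].start,
               (unsigned long long)rep->zones[i].len,
               (unsigned long long)rep->zones[i].wp,
               rep->zones[i].cond);

    free(rep);
    close(fd);
    return 0;
}
```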

Q. How high a priority is firmware malware? Are there automated & remote management methods for detection and fixing at scale?

A. Security in the data center is one of the highest priorities. There are tools to monitor and manage the fleet including firmware checking and updating.

Q. If I understood correctly, the need for NVMe arose from the need to communicate at faster speeds with different components in the network. Currently, at which speed is NVMe going to see no more benefit from higher speeds because of the latencies in individual components? Which component is most gating/concerning at this point?

A. In today's SSDs, the NAND latency dominates. This can be mitigated by adding backend channels to the controller and optimization of data placement across the media. There are applications that are direct connect to the CPU where performance scales very well with PCIe lane speeds and do not have to deal with network latencies.

Q. Where does zipline fit? Does Microsoft expect Azure to default to zipline at both ends of the Azure network?

A. Microsoft has donated the RTL for the Zipline compression ASIC to Open Compute so that multiple endpoints can take advantage of "bump in the wire" inline compression.

Q. What other protocols exist that are competing with NVMe? What are the pros and cons for these to be successful?

A. SATA and SAS are the legacy protocols that NVMe was designed to replace. These protocols still have their place in HDD deployments.

Q. Where do you see U.2 form factor for NVMe?

A. Many enterprise solutions use U.2 in their 2U offerings. Hyperscale servers are mostly focused on 1U server form factors, where the compact heights of E1.S and E1.L allow for vertical placement on the front of the server.

Q. Is E1.L form factor too big (32 drives) for failure domain in a single node as a storage target?

A. E1.L allows for very high density storage. The storage application must take into account the possibility of device failure via redundancy (mirroring, erasure coding, etc.) and rapid rebuild. In the future, the ability for the SSD to slowly lose capacity over time will be required.
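To make the redundancy idea concrete, here is a toy sketch of the simplest scheme, a single XOR parity block per stripe. It is only a conceptual stand-in for the mirroring or erasure coding a real storage application would use across E1.L devices.

```c
/*
 * Toy illustration of the redundancy idea mentioned above: one XOR
 * parity block per stripe lets a single failed device be rebuilt.
 * Real systems use mirroring or full erasure codes (e.g. Reed-Solomon);
 * this is only a conceptual sketch.
 */
#include <stdio.h>
#include <string.h>

#define STRIPE 4   /* data blocks per stripe */
#define BLK    8   /* bytes per block (tiny, for illustration) */

static void xor_into(unsigned char *dst, const unsigned char *src)
{
    for (int i = 0; i < BLK; i++)
        dst[i] ^= src[i];
}

int main(void)
{
    unsigned char data[STRIPE][BLK] = { "blk-0..", "blk-1..", "blk-2..", "blk-3.." };
    unsigned char parity[BLK] = { 0 };

    for (int d = 0; d < STRIPE; d++)   /* compute parity across the stripe */
        xor_into(parity, data[d]);

    /* Simulate losing device 2, then rebuild its block from the
     * surviving data blocks plus the parity block. */
    unsigned char rebuilt[BLK];
    memcpy(rebuilt, parity, BLK);
    for (int d = 0; d < STRIPE; d++)
        if (d != 2)
            xor_into(rebuilt, data[d]);

    printf("rebuilt block 2: %.8s\n", (const char *)rebuilt);  /* prints "blk-2.." */
    return 0;
}
```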

Q. What have been the biggest pain points in using NVMe SSDs since inception/adoption, especially since Microsoft and Facebook started using them?

A. As discussed in the live Q&A, in the early days of NVMe the lack of standard drivers for both Windows and Linux hampered adoption. This has since been resolved with standard in-box driver offerings.

Q. Has FB or Microsoft considered allowing drives to lose data if they lose power on an edge server? If the server is rebuilt on a power-down, this can reduce SSD costs.

A. There are certainly interesting use cases where Power Loss Protection is not needed.

Q. Do zoned namespaces make the Denali spec obsolete, or has it been dropped by Microsoft? How does it impact/compete with the open channel initiatives by Facebook?

A. Zoned Name Spaces incorporates probably 75% of the Denali functionality in an NVMe standardized way.

Q. How stable are NVMe PCIe hot plug devices (unmanaged hot plug)?

A. Quite stable.

Q. How do you see Ethernet SSDs impacting cloud storage adoption?

A. Not clear yet if Ethernet is the right connection mechanism for storage disaggregation.  CXL is becoming interesting.

Q. Thoughts on E3? What problems are being solved with E3?

A. E3 is meant more for 2U servers.

Q. ZNS has a lot of QoS implications as we load up so many dies in the E1.L form factor. Given that challenge, how does ZNS address the performance requirements of regular cloud workloads?

A. With QLC, the end to end systems need to be designed to meet the application's requirements. This is not limited to the ZNS device itself, but needs to take into account the entire system.

If you're looking for more resources on any of the topics addressed in this blog, check out the SNIA Educational Library where you'll find over 2,000 vendor-neutral presentations, white papers, videos, technical specifications, webcasts and more.  


SPDK in the NVMe-oF™ Landscape

Tim Lustig

Nov 25, 2019

The Storage Performance Development Kit (SPDK) has gained industry-wide recognition as a framework for building highly performant and efficient storage software with a focus on NVMe™. This includes software drivers and libraries for building NVMe over Fabrics (NVMe-oF) host and target solutions. On January 9, 2020, the SNIA Networking Storage Forum will kick off its 2020 webcast program by diving into this topic with a live webcast, “Where Does SPDK Fit in the NVMe-oF Landscape.” In this presentation, SNIA members and technical leaders from SPDK, Jim Harris and Ben Walker, will provide an overview of the SPDK project, the NVMe-oF use cases that are best suited for SPDK, and insights into how SPDK achieves its storage networking performance and efficiency, discussing:
  • An overview of the SPDK project
  • Key NVMe-oF use cases for SPDK
  • Examples of NVMe-oF use cases not suited for SPDK
  • NVMe-oF target architecture and design
  • NVMe-oF host architecture and design
  • Performance data
I hope you will join us on January 9th by registering today for what is sure to be an insightful discussion about this timely topic.
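Ahead of the webcast, the sketch below models the polled-mode, run-to-completion pattern that SPDK is built around. It is not SPDK code: every name in it (io_qpair, submit_read, poll_completions) is a hypothetical stand-in, backed by a RAM "device" so the example compiles and runs on its own. The point it illustrates is that the submitting thread reaps its own completions by polling, so there are no interrupts or context switches on the data path.

```c
/*
 * Conceptual sketch only; this is NOT SPDK code. It models the
 * polled-mode, run-to-completion pattern SPDK is built around: the
 * application thread submits I/O to a queue pair and then polls for
 * completions instead of blocking on interrupts. The names io_qpair,
 * submit_read and poll_completions are hypothetical, and the "device"
 * is a RAM buffer so the example is self-contained.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512
#define NBLOCKS    8

typedef void (*io_done_cb)(void *ctx, bool ok);

struct io_req {               /* one outstanding request */
    void      *buf;
    uint64_t   lba;
    io_done_cb cb;
    void      *ctx;
    bool       pending;
};

struct io_qpair {             /* toy queue pair: one slot, RAM-backed "media" */
    uint8_t       media[NBLOCKS][BLOCK_SIZE];
    struct io_req slot;
};

static int submit_read(struct io_qpair *qp, void *buf, uint64_t lba,
                       io_done_cb cb, void *ctx)
{
    if (qp->slot.pending || lba >= NBLOCKS)
        return -1;
    qp->slot = (struct io_req){ .buf = buf, .lba = lba,
                                .cb = cb, .ctx = ctx, .pending = true };
    return 0;
}

/* Reap completions: a real driver would check completion queue entries
 * written by the device; here the memcpy stands in for that work. */
static unsigned poll_completions(struct io_qpair *qp)
{
    if (!qp->slot.pending)
        return 0;
    memcpy(qp->slot.buf, qp->media[qp->slot.lba], BLOCK_SIZE);
    qp->slot.pending = false;
    qp->slot.cb(qp->slot.ctx, true);
    return 1;
}

static void on_done(void *ctx, bool ok)
{
    (void)ok;
    *(bool *)ctx = true;
}

int main(void)
{
    static struct io_qpair qp;
    memcpy(qp.media[3], "hello from block 3", 19);

    uint8_t buf[BLOCK_SIZE];
    bool done = false;

    submit_read(&qp, buf, 3, on_done, &done);
    while (!done)             /* busy-poll: no interrupt, no syscall, no scheduler */
        poll_completions(&qp);

    printf("%s\n", (char *)buf);
    return 0;
}
```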


SAS Lives On and Here’s Why

STA Forum

Nov 22, 2019

By: Rohit Gupta, Enterprise Segment Management, Western Digital

In the Zettabyte Age, data is the new lifeline. Data is transforming anything and everything at an unprecedented rate in the globally connected world. Businesses are constantly looking for new ways of collecting, storing, and transforming various forms of data to extract intelligence for better decisions, improve processes and efficiencies, develop technologies and innovative products, and ultimately maximize business profitability. With data creating so much economic value, it is putting pressure on IT requirements to support peak workloads, storage tiering, data mining, and running complex algorithms using on-premises or hybrid cloud environments. Today, industry talk is mostly around NVMe™ SSDs to help meet these stringent workload demands; however, not all drives and workloads are created equal. Over the decades, enterprise OEMs, channel partners, and ecosystem players have continued to support and utilize Serial Attached SCSI (SAS) to address performance, reliability, availability, and data service challenges for traditional enterprise server and storage workloads.

SAS: A reliable, proven interface supporting high-availability storage environments

Originally debuting in 2004, SAS has evolved over the decades, delivering distinguishing and proven enterprise features such as high reliability, fault tolerance (dual-port), speed, and efficiency. As a result, it has become a desirable and popular choice to run traditional, mission-critical enterprise applications such as OLTP and OLAP, as well as hyperconverged infrastructure (HCI) and software-defined storage (SDS) workloads. And it still has traction. Leading industry analyst firm IDC* projects SAS to drive SSD market demand at over 24% CAGR in PB growth through 2022, contributing to a combined 52% CAGR for the data center SSD market as a whole.

“SAS continues to be a valued and trusted interface standard for enterprise storage customers worldwide,” said Jeff Janukowicz, vice president, IDC. “With a robust feature set, backward compatibility, and an evolving roadmap, SAS will remain a vital storage interconnect for demanding mission-critical data center workloads today and into the future.”

Capitalizing on its decades of innovation and SAS expertise, Western Digital is announcing the Ultrastar® DC SS540 SAS SSDs, its 6th generation of SAS SSDs, which provide exceptional performance by delivering up to 470K/240K random read/write IOPS. The DC SS540 offers reliability of 2.5 million hours mean time between failures (MTBF) with 96-layer 3D NAND from Western Digital’s joint venture with Intel. The new SAS SSD family is an ideal drive of choice for all-flash arrays (AFAs), caching tiers, HPC and SDS environments. The Ultrastar DC SS540 leverages the existing SAS platform architecture, reliability, performance leadership, and various security and encryption options, which will lead to faster qualifications and time-to-market for enterprise and private cloud customers. It offers performance, enterprise-grade reliability, dual/single-port, and multiple power options, and it comes in capacities up to 15.36TB with soft SKU options to cover mainstream endurance choices and also reduce SKU management efforts for OEMs and channel customers. The Ultrastar DC SS540 is currently sampling and in qualification with select customers, with mass production scheduled for CYQ1 2020.

Learn More: Ultrastar SAS Series
Forward-Looking Statements

Certain blog and other posts on this website may contain forward-looking statements, including statements relating to expectations for our product portfolio, the market for our products, product development efforts, and the capacities, capabilities and applications of our products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.

Source: *IDC, Worldwide Solid State Drive Forecast, 2019–2023, Doc #US43828819, May 2019



Judging Has Begun – Submit Your Entry for the NVDIMM Programming Challenge!

Marty Foltyn

Nov 19, 2019


We’re 11 months into the Persistent Memory Hackathon program, and over 150 software developers have taken the tutorial and tried their hand at programming to persistent memory systems. AgigA Tech, Intel, SMART Modular, and Supermicro, members of the SNIA Persistent Memory and NVDIMM SIG, have now placed persistent memory systems with NVDIMM-Ns into the SNIA Technology Center as the backbone of the first SNIA NVDIMM Programming Challenge.

Interested in participating? Send an email to PMhackathon@snia.org to get your credentials. And do so quickly, as the first round of review for the SNIA NVDIMM Programming Challenge is now open. Any entrants who have progressed to a point where they would like a review are welcome to contact SNIA at PMhackathon@snia.org to request a time slot. SNIA will be opening review times in December and January as well. Submissions that meet a significant amount of the judging criteria described below, as determined by the panel, will be eligible for a demonstration slot in front of the 400+ attendees at the January 23, 2020 Persistent Memory Summit in Santa Clara, CA.

Your program or results should be able to be visually demonstrated using remote access to a PM-enabled server. Submissions will be judged by a panel of SNIA experts.  Reviews will be scheduled at the convenience of the submitter and judges, and done via conference call.

NVDIMM Programming Challenge Judging Criteria include:

Use of multiple aspects of NVDIMM/PM capabilities, for example:

  1. Use of larger DRAM/NVDIMM memory sizes
  2. Use of the DRAM speed of NVDIMM PMEM for performance
  3. Speed-up of application shut down or restart using PM where appropriate
  4. Recovery from crash/failure
  5. Storage of data across application or system restarts

Demonstrates other innovative aspects for a program or tool, for example:

  1. Uses persistence to enable new features
  2. Appeals across multiple aspects of a system, beyond persistence

Advances the cause of PM in some obvious way:

  1. Encourages the update of systems to broadly support PM
  2. Makes PM an incremental need in IT deployments

Program or results apply to all types of NVDIMM/PM systems, though exact results may vary across memory types.
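For anyone getting oriented before entering, here is a minimal sketch (not a challenge entry) of keeping data across application restarts with PMDK's libpmem, which relates to criterion 5 above. The pool path and size are placeholders, and a DAX-mounted persistent-memory filesystem (or emulated pmem) is assumed.

```c
/*
 * Minimal sketch (not a challenge entry): persist a string with PMDK's
 * libpmem so it survives an application restart, which relates to
 * judging criterion 5 above. Assumes PMDK is installed and that the
 * pool path lives on a DAX-mounted persistent-memory filesystem; both
 * the path and the size are placeholders. Build with: cc demo.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_PATH "/mnt/pmem/hackathon-demo"   /* placeholder DAX-backed file */
#define POOL_SIZE (4 * 1024 * 1024)            /* 4 MiB */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) the file and map it directly into our address space. */
    char *addr = pmem_map_file(POOL_PATH, POOL_SIZE, PMEM_FILE_CREATE,
                               0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    if (addr[0] != '\0') {
        /* Data written by a previous run is still here after a restart. */
        printf("recovered: %s\n", addr);
    } else {
        const char *msg = "state saved before restart";
        strcpy(addr, msg);
        /* Flush CPU caches so the store is durable on the NVDIMM; fall
         * back to msync if the mapping is not real persistent memory. */
        if (is_pmem)
            pmem_persist(addr, strlen(msg) + 1);
        else
            pmem_msync(addr, strlen(msg) + 1);
        printf("stored: %s\n", addr);
    }

    pmem_unmap(addr, mapped_len);
    return 0;
}
```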

Questions? Contact Jim Fister, SNIA Hackathon Program Director, at pmhackathon@snia.org, and happy coding!


24G SAS Feature Focus: Active PHY Transmitter Adjustment

STA Forum

Nov 12, 2019

[Editor’s Note: This is part of a series of upcoming blog posts discussing 24G SAS features.]

By: Rick Kutcipal (SCSI Trade Association), with contributions from Tim Symons (Microchip), November 12, 2019

The first 24G SAS plugfest was held the week of June 24, 2019, and products are expected to begin to trickle out into the marketplace in 2020. Production solutions typically follow 12-18 months after the first plugfest. These storage technologies are tied to OEM server and storage vendors, which release solutions at a regular annual cadence, sometimes tied to next-generation Intel or AMD processors. Servers almost always lead the way, as they push the performance envelope and keep pace with faster processors, faster networking and faster storage.

To address the growing performance requirements of modern data centers, 24G SAS doubles the effective bandwidth of the previous SAS generation, 12Gb/s SAS. To achieve this bandwidth, the physical layer experiences significant electrical demands that technologies such as forward error correction and Active PHY Transmitter Adjustment (APTA) specifically address. These enhancements are required to maintain the rigorous quality and reliability demands of IT professionals managing enterprise and data center operations.

To ensure optimized transceiver operation, APTA automatically adjusts the SAS PHY transmitter coefficients to compensate for changes in environmental conditions that happen over time. This compensation is accomplished continuously with no performance implications. In the SAS-4 standard, APTA defines specific binary primitives that the receiver PHY and the transmitter PHY exchange. These primitives request changes to the transmitter PHY coefficient settings and report transmitter status without requiring that the PHY terminate connections and enter a PHY reset sequence, ensuring continuous operation.

To ensure that no disruption to data transfer occurs and to allow the SAS PHY receiver to adapt to changes in the transmitter PHY settings, the time between each APTA change request is at least 1 ms. Making small, single-step changes allows the SAS PHY receiver to adjust to the change over a long period of active data bits. Small changes give the SAS PHY receiver time to equalize before the SAS PHY receiver algorithm calculates the next change request, if any changes are forthcoming. Small changes over a long time period ensure that the channel does not diverge from an operational range that could cause a link reset sequence. The SAS link is not disrupted by the APTA process and all connections, as well as data transfers, continue uninterrupted, yet optimized, by APTA. APTA states can only be processed when the PHY is in the SP15:SAS_Phy_Ready state. The APTA process might take several seconds to complete; however, making small adjustments avoids SAS connection disruption and enables a PHY transmitter and PHY receiver to return to optimal signal integrity conditions for reliable data transfer.

While performance requirements continually increase within the modern IT infrastructure, the quality and reliability metrics remain unchanged. Technologies such as APTA enable 24G SAS to meet these requirements and ensure the most reliable and robust storage connectivity. Look for future installments of 24G SAS Feature Focus, as the SCSI Trade Association takes a closer look at key enhancements that make your storage faster and more reliable.

24G SAS Enhancements: Active PHY Transmitter Adjustment (APTA)
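The following is a conceptual simulation, not code from the SAS-4 standard: it mimics the behavior described above, with the receiver requesting one small, single-step coefficient change at a time, spaced at least 1 ms apart, until signal quality is back in range, all without a link reset. The quality model and every name in it are hypothetical.

```c
/*
 * Conceptual simulation only; this is not from the SAS-4 standard.
 * It mimics the APTA behavior described above: the receiver requests
 * one small, single-step transmitter coefficient change at a time,
 * waits at least 1 ms between requests so it can re-equalize, and
 * stops once signal quality is back in range, all without a link
 * reset. The "quality" model and every name here are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TARGET_QUALITY  90       /* arbitrary "good enough" eye-quality score */
#define MIN_INTERVAL_NS 1000000  /* at least 1 ms between APTA change requests */

/* Toy model: quality improves as the TX coefficient approaches an optimum
 * that has drifted over time with temperature and voltage. */
static int measured_quality(int tx_coeff, int drifted_optimum)
{
    return 100 - 5 * abs(tx_coeff - drifted_optimum);
}

int main(void)
{
    int tx_coeff = 10;         /* coefficient negotiated at link bring-up */
    int drifted_optimum = 14;  /* where the channel's optimum sits now */
    const struct timespec gap = { .tv_sec = 0, .tv_nsec = MIN_INTERVAL_NS };

    int q = measured_quality(tx_coeff, drifted_optimum);
    while (q < TARGET_QUALITY) {
        /* Receiver asks the transmitter for ONE single-step adjustment. */
        tx_coeff += (tx_coeff < drifted_optimum) ? 1 : -1;
        printf("APTA request: step coefficient to %d (quality was %d)\n",
               tx_coeff, q);

        /* Small steps, spaced out, give the receiver time to equalize over
         * many data bits before deciding whether another request is needed. */
        nanosleep(&gap, NULL);
        q = measured_quality(tx_coeff, drifted_optimum);
    }
    printf("link optimized: coefficient=%d, quality=%d, no reset needed\n",
           tx_coeff, q);
    return 0;
}
```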


A Q&A to Better Understand Storage Security

Steve Vanderlinden

Nov 1, 2019

Truly understanding storage security issues is no small task, but the SNIA Networking Storage Forum (NSF) is taking that task on in our Storage Networking Security Webcast Series. Earlier this month, we hosted the first in this series, “Understanding Storage Security and Threats,” where my SNIA colleagues and I examined the big picture of storage security, relevant terminology and key concepts. If you missed the live event, you can watch it on-demand. Our audience asked some great questions during the live event. Here are answers to them all.

Q. If I just deploy self-encrypting drives, doesn’t that take care of all my security concerns?

A. No, it does not. Self-encrypting drives can protect data if the drive gets misplaced or stolen, but they don’t protect it if the operating system or application that accesses data on those drives is compromised.

Q. What does “zero trust” mean?

A. “Zero trust” is a security model that works on the principle that organizations should not automatically trust anything inside their network (typically inside the firewalls). In fact, they should not automatically trust any group of users, applications, or servers. Instead, all access should be authenticated and verified. What this typically means is granting the least amount of privileges and access needed based on who is asking for access, the context of the request, and the risk associated.

Q. What does white hat vs. black hat mean?

A. In the world of hackers, a “black hat” is a malicious actor that actually wants to steal your data or compromise your system for their own purposes. A “white hat” is an ethical (or reformed) hacker that attempts to compromise your security systems with your permission, in order to verify their security or find vulnerabilities so you can fix them. There are entire companies and industry groups that make looking for security vulnerabilities a full-time job.

Q. Do I need to hire someone to try to hack into my systems to see if they are secure?

A. To achieve the highest levels of information security, it is often helpful to hire a “white hat” hacker to scan your systems and try to break into them. Some organizations are required, by regulation, to do this periodically to verify the security of their systems. This is sometimes referred to as “penetration testing” or “ethical hacking” and can include physical as well as electronic testing of an infrastructure, or even directing suspicious calls and emails to employees to test their security training. All known IT security vulnerabilities are eventually documented and published. You might have your own dedicated security team that regularly tests your operating systems, applications and network for known vulnerabilities and performs penetration testing, or you can hire independent third parties to do this. Some security companies sell tools you can use to test your network and systems for known security vulnerabilities.

Q. Can you go over the difference between authorization and authentication again?

A. Authorization is a mechanism for verifying that a person or application has the authority to perform certain actions or access specific data. Authentication is a mechanism or process for verifying a person is who he or she claims to be. For example, use of passwords, secure tokens/badges, or fingerprint/iris scans upon login (or physical entry) can authenticate who a person is. After login or entry, the use of access control lists, color-coded badges, or permissions tables can determine what that person is authorized to do.

Q. Can you explain what non-repudiation means, and how you can implement it?

A. Non-repudiation is a method or technology that guarantees the accuracy or authenticity of information or an action, preventing it from being repudiated (successfully disputed or challenged). For example, a hash could ensure that a retrieved file is authentic, or a combination of biometric authentication with an audit log could prove that a particular person was the one who logged into a system or accessed a file. (See the short sketch after this Q&A.)

Q. Why would an attacker want to infiltrate data into a data center, as opposed to exfiltrating (stealing) data out of the data center?

A. Usually malicious actors (hackers) want to exfiltrate (remove) valuable data. But sometimes they want to infiltrate (insert) malware into the target’s data center so this malware can carry out other attacks.

Q. What is ransomware, and how does it work?

A. Ransomware typically encrypts, hides or blocks access to an organization’s critical data; then the malicious actor who sent it demands payment or action from the organization in return for sharing the password that will unlock the ransomed data.

Q. Can you suggest some ways to track and report attacking resources?

A. Continuous monitoring tools such as Splunk can be used.

Q. Does “trust nobody” mean don’t trust the root/admin user as well?

A. Trust nobody means there should be no presumption of trust; instead we should authenticate and authorize all users/requests. For example, it could mean changing the default root/admin password, requiring most administrative work to use specific accounts (instead of the root/admin account), and monitoring all users (including root/admin) to detect inappropriate behavior.

Q. How do I determine my greatest vulnerability or the weakest link in my security?

A. Activities such as threat models and security assessments can assist in determining the weakest links.

Q. What does a “trust boundary” mean?

A. A trust boundary is a boundary where program data or execution changes its level of “trust,” for example, Internet vs. intranet.

We are busy planning out the rest of this webcast series. Please follow us on Twitter @SNIANSF for notifications of dates and times for each presentation.
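As referenced in the non-repudiation answer above, here is an illustrative sketch of the hashing step, computing a SHA-256 digest of a file with OpenSSL's EVP API. A complete non-repudiation scheme would pair such a digest with digital signatures and audit logs; this shows only the integrity check.

```c
/*
 * Illustrative sketch of the hashing step mentioned in the
 * non-repudiation answer: compute a SHA-256 digest of a file with
 * OpenSSL's EVP API, then compare it against a previously recorded
 * value to detect tampering. A complete non-repudiation scheme would
 * pair this with digital signatures and audit logs.
 * Build with: cc sha256_file.c -lcrypto
 */
#include <openssl/evp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "rb");
    if (!fp) {
        perror("fopen");
        return 1;
    }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        EVP_DigestUpdate(ctx, buf, n);          /* hash the file in chunks */

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len = 0;
    EVP_DigestFinal_ex(ctx, md, &md_len);

    for (unsigned int i = 0; i < md_len; i++)   /* print the digest as hex */
        printf("%02x", md[i]);
    printf("  %s\n", argv[1]);

    EVP_MD_CTX_free(ctx);
    fclose(fp);
    return 0;
}
```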

