
Two Storage Trails on the 10GbE Convergence Path

SteveAbbott

Aug 8, 2011

As the migration to 10Gb Ethernet moves forward, many data centers are looking to converge network and storage I/O to fully utilize a ten-fold increase in bandwidth. Industry discussions continue regarding the merits of 10GbE iSCSI and FCoE. Some of the key benefits of both protocols were presented by Maziar Tamadon and Jason Blosil in the July 19th iSCSI SIG webcast, Two Storage Trails on the 10Gb Convergence Path. It's a win-win solution, as both technologies offer significant performance improvements and cost savings, and the discussion is sure to continue. Since there wasn't enough time to respond to all of the questions during the webcast, we have consolidated answers to all of them in this blog post from the presentation team. Feel free to comment and provide your input.

Question: How is multipathing changed or affected with FCoE?

One of the benefits of FCoE is that it uses Fibre Channel in the upper layers of the software stack, where multipathing is implemented. As a result, multipathing is the same for Fibre Channel and FCoE.

Question: Is the use of CNAs with FCoE offload getting any traction? Are these economically viable?

The adoption of FCoE has been slower than expected but is gaining momentum. Fibre Channel is typically used for mission-critical applications, so data centers have been cautious about moving to new technologies. FCoE and network convergence provide significant cost savings, so FCoE is economically viable.

Question: If you run the software FCoE solution, would this not prevent boot from SAN?

Boot from SAN is not currently supported when using FCoE with a software initiator and NIC. Today, boot from SAN is only supported using FCoE with a hardware converged network adapter (CNA).

Question: How do you assign priority for FCoE vs. other network traffic? Doesn't it still make sense to have a dedicated network for data-intensive network use?

The Data Center Bridging (DCB) standards that enable FCoE allow priority and bandwidth to be assigned to each priority queue or link. Each link may support one or more data traffic types. Support for this functionality is required between two end points in the fabric, for example between an initiator at the host and the first network connection at the top-of-rack switch. The DCBX standard facilitates negotiation between devices to enable supported DCB capabilities at each end of the wire.

Question: Category 6A uses more power than twin-ax or OM3 cable infrastructures, which in large build-outs is significant.

Category 6A does use more power than twin-ax or OM3 cables. That is one of the trade-offs data centers should consider when evaluating 10GbE network options.

Question: Don't most enterprise storage arrays support both iSCSI and FC/FCoE ports? That seems to make the "either/or" approach to measuring uptake moot.

Many storage arrays today support either the iSCSI or FC storage network protocol. Some arrays support both at the same time, and very few support FCoE. Others support a mixture of file and block storage protocols, often called Unified Storage. But concurrent support for FC/FCoE and iSCSI on the same array is not universal. Regardless, storage administrators will typically favor a specific storage protocol based upon their acquired skill sets and application requirements. This is especially true with block storage protocols, since the underlying hardware is unique (FC, Ethernet, or even InfiniBand). With the introduction of Data Center Bridging and FCoE, storage administrators can deploy a single physical infrastructure to support the variety of application requirements of their organization. Protocol attach rates will likely prove less interesting as more vendors begin to offer solutions supporting full network convergence.

Question: What was the sample size of your poll results; how many people voted?

We had over 60 live viewers of the webcast, and over 50% of them participated in the online questions. So the sample size was about 30+ individuals.

Question: Tape? Isn't tape dead?

Tape as a backup methodology is definitely further down the slope of its life than it was 5 or 10 years ago, but it still has a pulse. Expectations are that disk-based backup, DR, and archive solutions will be common practice in the near future, but many companies still use tape for archival storage. Like most declining technologies, tape will likely have a long tail as companies continue to modify their IT infrastructure and business practices to take advantage of newer methods of data retention.

Question: Do you not think 10 Gbps will fall off after 2015 as the adoption of 40 Gbps to blade enclosures starts to take off in 2012?

10GbE was expected to ramp much faster than we have witnessed. Early applications of 10GbE in storage were introduced as early as 2006, yet we are only now beginning to see broader adoption of 10GbE. The use of LOM and 10GBase-T will accelerate the use of 10GbE. Early server adoption of 40GbE will likely be with blades; however, recognize that rack servers still outsell blades by a pretty large margin. As a result, 10GbE will continue to grow in adoption through 2015 and perhaps 2016. 40GbE will become very useful for reducing port count, especially at bandwidth aggregation points such as inter-switch links. 40Gb ports may also be used to save on port count with the use of fanout cables (4x10Gb). However, server performance must continue to increase in order to drive 40Gb pipes.

Question: Will you be making these slides available for download?

These slides are available for download at www.snia.org/?

Question: What is your impression of how convergence will change data center expertise? That is, who manages the converged network? Your storage experts, your network experts, someone new?

Network convergence will indeed bring multiple teams together across the IT organization: the server, network, and storage teams, to name a few. There is no preset answer, and the outcome will be decided case by case, but ultimately IT organizations will need to figure out how a common, shared resource (the network/fabric) ought to be managed and where the new ownership boundaries need to be drawn.

Question: Will there be, or is there currently, an NDMP equivalent for iSCSI or 10GbE?

There is no equivalent to NDMP for iSCSI. NDMP is a management protocol used to back up server data to network storage devices using NFS or CIFS. SNIA oversees the development of this protocol today.

Question: How does the presenter justify the statement of "no need for specialized" knowledge or tools? Given how iSCSI uses new protocols and concepts not found in a traditional LAN, how can he say that?

While it's true that iSCSI comes with its own concepts and subtleties, the point being made centered on how pervasive and widespread the underlying Ethernet know-how and expertise is.

Question: FC vs. IP storage: what does IDC count if an array has both FC and IP storage; which group does it go in? If a customer buys an array but does not use one of the two protocols, will that show up in the IDC numbers? This info conflicts with SNIA's numbers.

We can't speak to the exact methods used to generate the analyst data; each analyst firm has its own method for collecting and analyzing industry data. The reason for including the data was to discuss the overall industry trends.

Question: I noticed in the high-level overview that FCoE appeared not to be a 'mesh' network. How will this deal with multipathing and/or failover?

The diagrams only showed a single path for FCoE to simplify the discussion on network convergence. In a real-world, best-practices deployment there would be multiple paths with failover. FCoE uses the same multipathing and failover capabilities that are available for Fibre Channel.

Question: Why are you including FCoE in IP-based storage?

The graph should indeed have read "Ethernet storage" rather than "IP storage." This was fixed after the webinar and before the presentation was posted on SNIA's website.
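Returning to the DCB question above: the ETS and PFC pieces of DCB boil down to a small table of per-priority settings that both ends of a link must agree on. Below is a minimal sketch of that data in Python. FCoE conventionally rides priority 3 with PFC enabled; the class names and percentages here are illustrative assumptions, not a recommendation, and real configuration lives in DCB-capable switches and CNAs rather than application code.

```python
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    priority: int       # IEEE 802.1p priority, 0-7
    bandwidth_pct: int  # ETS bandwidth share when the link is congested
    pfc: bool           # lossless via Priority-based Flow Control?

# Hypothetical converged-link allocation, for illustration only.
link_classes = [
    TrafficClass("FCoE",  priority=3, bandwidth_pct=40, pfc=True),
    TrafficClass("iSCSI", priority=4, bandwidth_pct=30, pfc=False),
    TrafficClass("LAN",   priority=0, bandwidth_pct=30, pfc=False),
]

# ETS shares divide up the 10GbE pipe, so they must total 100%.
assert sum(tc.bandwidth_pct for tc in link_classes) == 100
```

DCBX then negotiates settings like these hop by hop, so both ends of each wire agree on which priorities are lossless and how bandwidth is shared.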


Trends in Data Protection

mikedutch

Jul 29, 2011


Data protection hasn’t changed much in a long time. Sure, there are slews of product announcements and incessant declarations of the “next big thing”, but really, how much have market shares changed over the past decade? You’ve got to wonder if new technology is fundamentally improving how data is protected or is simply turning the crank to the next model year. Are customers locked into the incremental changes proffered by traditional backup vendors, or is there a better way?

Not going to take it anymore

The major force driving change in the industry has little to do with technology.  People have started to challenge the notion that they, not the computing system, should be responsible for ensuring the integrity of their data.  If they want a prior version of their data, why can’t the system simply provide it?   In essence, customers want to rely on a computing system that just works.  The Howard Beale anchorman in the movie Network personifies the anxiety that burdens customers explicitly managing backups, recoveries, and disaster recovery.  Now don’t get me wrong; it is critical to minimize risk and manage expectations.   But the focus should be on delivering data protection solutions that can simply be ignored.

Are you just happy to see me?

The personal computer user is prone to ask “how hard can it be to copy data?”  Ignoring the fact that many such users lose data on a regular basis because they have failed to protect their data at all, the IT professional is well aware of the intricacies of application consistency, the constraints of backup windows, the demands of service levels and scale, and the efficiencies demanded by affordability.    You can be sure that application users that have recovered lost or corrupted data are relieved.  Mae West, posing as a backup administrator, might have said “Is that a LUN in your pocket or are you just happy to see me?”

In the beginning

Knowing where the industry has been is a good step in knowing where the industry is going. When the mainframe was young, application developers carried paper tape or punch cards. Magnetic tape was used to store application data as well as a medium to copy it to. Over time, as magnetic disk became affordable for primary data, the economics of magnetic tape remained compelling as a backup medium. Data protection was incorporated into the operating system through backup/recovery facilities, as well as through third-party products.

As microprocessors led computing mainstream, non-mainframe computing systems gained prominence and tape became relegated to secondary storage.  Native, open source, and commercial backup and recovery utilities stored backup and archive copies on tape media and leveraged its portability to implement disaster recovery plans.  Data compression increased the effective capacity of tape media and complemented its power consumption efficiency.

All quiet on the western front

Backup to tape became the dominant methodology for protecting application data due to its affordability and portability.  Tape was used as the backup media for application and server utilities, storage system tools, and backup applications.

[Figure: B2T - Backup Server copies data from primary disk storage to tape media]

Customers like the certainty of knowing where their backup copies are and physical tapes are comforting in this respect.  However, the sequential access nature of the media and indirect visibility into what’s on each tape led to difficulties satisfying recovery time objectives.  Like the soldier who fights battles that seem to have little overall significance, the backup administrator slogs through a routine, hoping the company’s valuable data is really protected.

[Figure: B2D phase 1 - Backup Server copies data to a Virtual Tape Library]

Uncomfortable with problematic recovery from tape, customers have been evolving their practices to a backup to disk model.  Backup to disk and then to tape was one model designed to offset the higher cost of disk media but can increase the uncertainty of what’s on tape.  Another was to use virtual tape libraries to gain the direct access benefits of disk while minimizing changes in their current tape-based backup practices.  Both of these techniques helped improve recovery time but still required the backup administrator to acquire, use, and maintain a separate backup server to copy the data to the backup media.

Snap out of it!

Space-efficient snapshots offered an alternative data protection solution for some file servers. Rather than use separate media to store copies of data, the primary storage system itself would be used to maintain multiple versions of the data by only saving changes to the data.  As long as the storage system was intact, restoration of prior versions was rapid and easy.  Versions could also be replicated between two storage systems to protect the data should one of the file servers become inaccessible.

[Figure: Snapshot - Point-in-time copies on disk storage are replicated to other disks]

This procedure works, is fast, and is space efficient for data on these file servers, but it has challenges in terms of management and scale. Snapshot-based approaches manage versions of snapshots; they lack the ability to manage data protection at the file level. This limitation arises because the customer’s data protection policies may not match the storage system policies. Snapshot-based approaches are also constrained by the scope of each storage system, so scaling to protect all the data in a company (e.g., laptops) in a uniform and centralized (labor-efficient) manner is problematic at best.
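To see why snapshots are space efficient, consider a toy copy-on-write volume. This is a sketch of the general technique, not any particular array's design: a snapshot freezes the block map, and later writes consume new space only for the blocks that actually change.

```python
class COWVolume:
    """Toy copy-on-write volume: snapshots share unmodified blocks."""

    def __init__(self):
        self.blocks = {}     # block number -> data bytes
        self.snapshots = []  # each snapshot is a frozen block map

    def write(self, blockno, data):
        self.blocks[blockno] = data

    def snapshot(self):
        # Copy only the mapping, not the data: unchanged blocks are shared.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def read_snapshot(self, snap_id, blockno):
        return self.snapshots[snap_id].get(blockno)

vol = COWVolume()
vol.write(0, b"v1")
s0 = vol.snapshot()
vol.write(0, b"v2")                     # only the changed block uses new space
assert vol.read_snapshot(s0, 0) == b"v1"  # the old version is still readable
```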

[Figure: CDP - Writes are captured and replicated for protection]

Continuous Data Protection (both “near CDP” solutions, which take frequent snapshots, and “true CDP” solutions, which continuously capture writes) is also being used to eliminate the backup window, thereby ensuring large volumes of data can be protected. However, the expense and maturity of CDP need to be balanced against the value of “keeping everything”.
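For the “true CDP” flavor, the core idea is an append-only journal of timestamped writes that can be replayed to reconstruct any prior point in time. A minimal sketch of the idea, with hypothetical names and no production concerns (retention, consistency groups, replication):

```python
import time

journal = []  # append-only log of (timestamp, block, data)

def cdp_write(block, data):
    """Capture every write as it happens."""
    journal.append((time.time(), block, data))

def restore_as_of(t):
    """Replay the journal up to time t to rebuild the volume image."""
    image = {}
    for ts, block, data in journal:
        if ts > t:
            break  # entries are appended in time order
        image[block] = data
    return image
```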

An offer he can’t refuse

Data deduplication fundamentally changed the affordability of using disk as a backup medium. The effective cost of storing data declined because duplicate data need only be stored once. Coupled with the ability to rapidly access individual objects, the advantages of backing up data to deduplicated storage are overwhelmingly compelling. Originally, whether to deduplicate data at the source or at the target was a decision point, but more recent products offer both approaches, so customers need not compromise on technology. However, simply using deduplicated storage as a backup target does not remove the complexity of configuring and supporting a data protection solution that spans independent software and hardware products. Is it really necessary that additional backup servers be installed to support business growth? Is it too much to ask for a turnkey solution that can address the needs of a large enterprise?
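The economics above come from storing each unique chunk once and keeping lightweight references everywhere else. Here is a minimal sketch using fixed-size chunks and SHA-256 fingerprints; real products typically use variable-size chunking and must also handle collision policy, compression, and garbage collection.

```python
import hashlib

CHUNK = 4096
store = {}  # sha256 digest -> chunk bytes; each unique chunk stored once

def dedup_backup(data: bytes) -> list:
    """Split data into chunks, store new chunks, return a recipe of digests."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # a duplicate chunk costs no new space
        recipe.append(digest)
    return recipe

def restore(recipe: list) -> bytes:
    """Rebuild the original data from its recipe."""
    return b"".join(store[d] for d in recipe)
```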

The stuff that dreams are made of

[Figure: PBBA - Transformation from a Backup Appliance to a Recovery Platform]

Protection storage offers an end-to-end solution, integrating full-function data protection capabilities with deduplicated storage.  The simplicity and efficiency of application-centric data protection combined with the scale and performance of capacity-optimized storage systems stands to fundamentally alter the traditional backup market.  Changed data is copied directly between the source and the target, without intervening backup servers.  Cloud storage may also be used as a cost-effective target.  Leveraging integrated software and hardware for what each does best allows vendors to offer innovations to customers in a manner that lowers their total cost of ownership.  Innovations like automatic configuration, dynamic optimization, and using preferred management interfaces (e.g., virtualization consoles, pod managers) build on the proven practices of the past to integrate data protection into the customer’s information infrastructure.

No one wants to be locked into products because they are too painful to switch out; it’s time that products are “sticky” because they offer compelling solutions. IDC projects that the worldwide purpose-built backup appliance (PBBA) market will grow at a 16.6% compound annual growth rate, from $1.7 billion in 2010 to $3.6 billion by 2015. The industry is rapidly adopting PBBAs to overcome the data protection challenges associated with data growth. Looking forward, storage systems will be expected to incorporate a recovery platform supporting security and compliance obligations, and data protection solutions will become information brokers for what is stored on disk.
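As a quick arithmetic check, the quoted 16.6% rate squares with those dollar figures only if it is compounded annually over the five-year span:

```python
market_2010 = 1.7  # $B, per IDC
cagr = 0.166
market_2015 = market_2010 * (1 + cagr) ** 5  # five years of compounding
print(round(market_2015, 2))  # ~3.66, consistent with IDC's ~$3.6B projection
```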



See SSSI at the Flash Memory Summit

Team_SSSI

Jul 26, 2011

SSSI will be at the Flash Memory Summit in the Santa Clara Convention Center, August 9-11. Come see us in booth 319, where we'll feature an SSD performance demonstration, and experts will be ready to discuss the Solid State Storage Performance Test Specification, including a new spec that will be announced at the show. For booth and conference schedules, see www.flashmemorysummit.com.


PCs: Better Boost from Flash than DRAM!

Jim Handy

Jul 19, 2011

Objective Analysis has just published a new study with a somewhat surprising finding: PCs get a bigger performance improvement by adding a dollar's worth of NAND flash than by adding a dollar's worth of DRAM. This finding is the result of a series of nearly 300 benchmarks in which the company tested PCs with a variety of DRAM and NAND flash sizes running industry-standard benchmarks: PCMark, SYSmark, HDXPRT, and others. In a nutshell, the benchmarks showed that dollar-for-dollar NAND yields a greater performance improvement to a PC than does DRAM.

Once PC users and OEMs discover this phenomenon, there should be a mass migration of PC architectures to systems with paired storage (there's a SNIA webcast on this), perhaps in hybrid HDDs, and this will present difficulties to DRAM makers, whose biggest market is the PC.

Oddly enough, the study shows that the HDD is likely to remain in PCs for a while to come, since well-designed DRAM-flash-HDD configurations perform nearly as fast as DRAM-SSD systems, with prices and capacities similar to those of a conventional DRAM-HDD system. Future PC users are likely to opt for adding NAND flash, rather than DRAM, to their systems when they upgrade.

The report is available for purchase at http://Objective-Analysis.com/Reports.html#DRAM-NAND. Comments and questions are more than welcome.


CSI Quarterly Update Q3 2011

mac

Jul 5, 2011



Upcoming Activities

Get Involved Now!

A limited number of these activities are open to all; join SNIA and the CSI to participate in any of them.

July Cloud Plugfest

The purpose of the Cloud Plugfest is for vendors to bring their implementations of CDMI and OCCI to test, identify, and fix bugs in a collaborative setting with the goal of providing a forum in which companies can develop interoperable products.

The Cloud Plugfest starts on Tuesday, July 12 and runs through Thursday, July 14, 2011, at the SNIA Technology Center in Colorado Springs, CO. The SNIA Cloud Storage Initiative (CSI) is underwriting the costs of the event, so there is no participation fee.

More information

SNIA Cloud Burst Event

There are a multitude of events dedicated to cloud computing, but where can you go to find out specifically about cloud storage? The 2011 SNIA Cloud Burst Summit educates and offers insight into this fast-growing market segment. Come hear from industry luminaries, see live demonstrations, and talk to technology vendors about how to get started with cloud storage.

More information

Cloud Lab Plugfest at SDC

Plugfests have always been an important part of the Storage Developer Conference, and this year will feature the first Cloud Lab Plugfest, held over multiple days to test the interoperability of CDMI, OVF, and OCCI implementations.

To get involved, please contact: arnold@snia.org

Cloud Pavilion at SNW

At every SNW, one of the highlights is the Cloud Pavilion, where attendees can see public and private cloud offerings and discuss solutions. Space is limited, so get involved early to ensure your spot.

To get involved, please contact: lisa.mercurio@snia.org



Team_SSSI

Jun 29, 2011

Apple recently announced Trim support for all SSD-capable Macs. What is Trim? The SSSI Glossary defines the Trim command as “A method by which the host operating system may inform a NAND Flash-based SSS device about which blocks of data are no longer in use and can be erased. Such blocks may then be written without having to erase them first, enhancing SSS device write performance.”

A drive’s internal garbage collection performs a similar task to Trim by erasing blocks that have been previously marked for deletion. However, because of the way that many operating systems work, there will be some blocks that can be repurposed of which only the OS is aware; Trim addresses this issue.

For Trim to be functional, both the SSD and the OS must support it. Most SSDs of recent vintage support Trim, but check the features list to be sure. In addition to the Apple OS, anyone who’s been paying attention knows that Microsoft Windows 7 supports Trim. An increasing number of Linux versions support Trim, as do FreeBSD and OpenSolaris. Wikipedia has a more detailed list.
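To make “informing the device” concrete, here is a hedged sketch of a discard on Linux using the BLKDISCARD ioctl, which tells a block device that a byte range is no longer in use, much as filesystem-level Trim support does. This is Linux-specific, requires root, destroys data in the range, and assumes the device supports discard; in everyday use, tools such as fstrim drive this through the filesystem instead.

```python
import fcntl
import os
import struct

BLKDISCARD = 0x1277  # Linux ioctl _IO(0x12, 119): discard a byte range

def discard_range(device: str, start: int, length: int) -> None:
    """Inform the device that [start, start+length) is no longer in use."""
    fd = os.open(device, os.O_WRONLY)
    try:
        # The ioctl argument is two u64s: offset and length, in bytes.
        fcntl.ioctl(fd, BLKDISCARD, struct.pack("QQ", start, length))
    finally:
        os.close(fd)

# Destructive example: discard the first 1 MiB of a hypothetical device.
# discard_range("/dev/sdX", 0, 1 << 20)
```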


New Resource about SSD Standards is Now Available

Team_SSSI

Jun 17, 2011

The SNIA SSSI site has a new page entitled Solid State Storage Standards Explained.   It provides an overview of standards on drivers, interfaces, connectors, form factors, security, and testing that must be considered when designing or evaluating an SSD.  There are links to each standards site to get further details.  The page will be updated regularly and inputs are welcome.  Questions and comments may be sent to asksssi@snia.org.


Webcast - The Benefits of Solid State in Enterprise Storage Systems

Team_SSSI

Jun 6, 2011

Update: A recording of this webcast is available here.

Presenter: Tom Coughlin, SNIA Solid State Storage Initiative and President, Coughlin Associates

Abstract: This session presents a brief overview of the solid state technologies being integrated into enterprise storage systems today, including technologies, benefits, and price/performance. It describes where they fit into today’s typical enterprise storage architectures, with descriptions of specific use cases. Finally, the session speculates briefly on what the future will bring.

