The Riverbed Blog (testing)

A blog in search of a tagline

Posts Tagged ‘Disaster recovery’

Is Cloud Reality Beginning to Take Hold?

Posted by riverbedtest on September 28, 2011

As I’ve been out talking to people about using the public cloud as a target for data protection, I continue to be surprised by how much pain many organizations regularly go through for backup and recovery, as well as the variety of methods used to protect company data. Of course, the old standby is tape, and even in my days at Data Domain, where the mantra “Tape Sucks” was like a religion, everyone was predicting the rapid demise of that 1928 invention’s role in IT. And yes, tape has lost some of its place in the market for data protection, but it continues to hang around, despite all of the pain that I hear from IT professionals about it.

Why? There are probably as many theories about that as about who shot Kennedy, but I think it is safe to say that tape holds on for a few reasons:

• Disk is still relatively expensive, even if deduped, and still complex to manage
• Some (though not the majority of) regulatory requirements can best be met by tape
• Tape is a known quantity, familiar, “better the devil you know” and all that

So people seem to make do, kludging together a patchwork of solutions to keep ahead of that dreaded backup window, often at the expense of any kind of DR planning. In fact, for most SMBs and SMEs, data protection is only a secondary part of someone’s IT job. So it doesn’t always get attacked with the same vigor and focus as other IT issues. Like I said, people make do.

But that is changing. I’ve been seeing people start to take a look at the potential of doing away with all the cost and hassle of standard data protection solutions and replacing them with the public cloud. I know all about the hype around “The Cloud,” but over the course of this year, the view of the cloud I’ve seen has become more measured, with people asking deeper questions about the implications of using it. For storage in particular, professionals are starting to see that not all storage lends itself equally well to the cloud. The performance implications and management difficulties of moving primary storage to the cloud have tripped up both trial customers and solution providers, and have strengthened the focus for cloud storage on functions such as backup and archiving, which are much better suited to the cloud in terms of performance requirements and storage methodologies. And the majority of people in that camp are looking to jettison the shackles of tape backup and adopt cloud storage.

Mainstream backup solutions are also promoting the extension of data protection to the public cloud. Last week, I wrote about IBM recently releasing a video showing how the Riverbed® Whitewater® cloud storage gateway enables Tivoli Storage Manager users to deploy a drop-in Whitewater appliance and essentially convert all the headaches of managing a backup infrastructure into freed-up capital and hours that can be spent on more pressing IT needs.

I’m sure there will be some data protection problems for which tape remains a compelling solution, at least for the near future. But there’s a reason you don’t find 8-track or cassette players in cars anymore, nor video tapes on the shelves of movie rental outfits. It’s getting harder to find outfits that rent hard copies of movies at all, and even Netflix is separating off its DVD business and applying its golden brand name to the business built on cloud streaming of videos (BTW, Netflix uses Amazon’s Simple Storage Service (S3) for its own business).

Trends are unmistakably toward more and more use of cloud storage. As technologies like Whitewater address the difficulties and/or concerns about using the cloud, this trend can only accelerate. Will tape and disk disappear? No. But if a TSM user can drop a small box in their datacenter and essentially get access to fast, secure, infinitely scalable storage, the rules of the game have undoubtedly changed and cloud storage will command a big seat at the data protection table.  

Posted in Disaster Recovery, Public Cloud, Storage Cloud | 2 Comments »

Optimization Strategies for Disaster Recovery

Posted by bobegilbert on April 19, 2010

Steve Dixon, Managing Director of Riverbed Technology, discusses Optimisation Strategies for Disaster Recovery.

Posted in Disaster Recovery | Leave a Comment »

The Secret Value of WAN Optimization – Optimized Disaster Recovery

Posted by bobegilbert on February 9, 2010

In one of my recent blog posts, I talked about the ABCD's of WAN Optimization, where I spotlighted some of the key IT initiatives where WAN optimization provides high value. While application acceleration, bandwidth optimization, and consolidation are IT initiatives that are becoming more closely aligned with WAN optimization solutions, disaster recovery (DR) is an initiative that is not often thought of as being related. That is largely a matter of market perception; the reality is that WAN optimization can be critical to the success of any DR or business continuity strategy.

Most DR strategies share a common goal: protect data in the event of a disaster. The data needs to be backed up and recoverable within a certain amount of time. The amount of time it takes to recover data after a disaster, or how long you can go without your application or data, is the recovery time objective (RTO). The obvious goal is to recover as quickly as possible, and for some organizations, the longer it takes to recover, the more costly it is to the business. Not only is there a need to recover data quickly, there is also a requirement that the data being recovered is relatively fresh, not data from several hours, days, or even weeks ago. How much data can you afford to lose? If you last performed a backup 18 hours ago and a disaster occurs, your backup contains data that is 18 hours old. The answer to that question is the recovery point objective (RPO).
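
To make the two targets concrete, here is a minimal sketch in Python of how RPO and RTO compliance might be checked. The function names and the four-hour/two-hour objectives are hypothetical, chosen for illustration; they are not part of any Riverbed or TSM product.

```python
from datetime import datetime, timedelta

# Hypothetical objectives, for illustration only.
RPO = timedelta(hours=4)   # maximum acceptable data loss
RTO = timedelta(hours=2)   # maximum acceptable time to restore service

def meets_rpo(last_backup_completed: datetime, disaster_time: datetime) -> bool:
    """Anything written after the last completed backup would be lost."""
    return disaster_time - last_backup_completed <= RPO

def meets_rto(disaster_time: datetime, service_restored: datetime) -> bool:
    """Recovery blew the RTO if this returns False."""
    return service_restored - disaster_time <= RTO

# The example from the post: the last backup finished 18 hours before the
# disaster, so a 4-hour RPO is missed by a wide margin.
disaster = datetime(2010, 2, 9, 12, 0)
print(meets_rpo(disaster - timedelta(hours=18), disaster))  # False
```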

There are a couple of key challenges in achieving a strict RTO and RPO. For one, the data created by organizations is ever growing, and the result is that hundreds of gigabytes to multiple terabytes of data need to be backed up daily. Combine this hefty backup requirement with the fact that the backup has to travel over a wide area network to reach the backup data center. The inefficiencies of the network combined with high latency cripple performance. Big bandwidth is not the solution either. You can invest in an OC-12 622 Mbps link or even a dedicated GigE connection between data centers, and organizations still cannot get near the throughput they need to complete their replication jobs. It is not uncommon to get only 70 Mbps of throughput out of a 622 Mbps link. If replication jobs don't complete in time, then your RTO and RPO goals are not being met.
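
The shortfall is largely a function of latency rather than raw bandwidth: a single TCP stream can move at most roughly one window of data per round trip. A rough back-of-the-envelope sketch follows; the 60 ms round-trip time and 512 KB window are illustrative assumptions, not measurements from any particular network.

```python
# Rough estimate of single-stream TCP throughput, capped by window size
# divided by round-trip time. All numbers here are illustrative only.
def tcp_throughput_mbps(window_bytes: float, rtt_ms: float, link_mbps: float) -> float:
    window_limited = (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000
    return min(window_limited, link_mbps)

# A 622 Mbps OC-12 between data centers with 60 ms of round-trip latency and
# a 512 KB TCP window tops out around 70 Mbps per stream, whatever the pipe size.
print(round(tcp_throughput_mbps(512 * 1024, rtt_ms=60, link_mbps=622)))  # ~70
```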

This is where WAN optimization comes to the rescue. Place a Riverbed Steelhead appliance at each data center, and through a combination of WAN-based data de-duplication and TCP optimization techniques, organizations can dramatically increase throughput. They can meet replication time targets, replicate more often, and, more importantly, recover from a disaster more quickly and with fresher data.
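
As a rough illustration of the de-duplication half of that combination, the sending side can fingerprint chunks of data and only push chunks the far end has not already seen. This is a generic sketch of the idea, not Riverbed's actual Steelhead implementation; the fixed 8 KB chunk size and the sample payload are assumptions made for the example.

```python
import hashlib

CHUNK_SIZE = 8 * 1024          # fixed-size chunks, for simplicity
seen_on_remote = set()         # fingerprints the far end already holds

def replicate(data: bytes) -> int:
    """Return how many bytes actually had to cross the WAN."""
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in seen_on_remote:
            seen_on_remote.add(fingerprint)
            sent += len(chunk)  # only previously unseen chunks are transferred
    return sent

payload = b"".join(b"block %06d " % i for i in range(20_000))
print(replicate(payload))  # first replication pass sends all of the data
print(replicate(payload))  # an unchanged dataset sends 0 bytes on the next pass
```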

As timing would have it, Riverbed today announced a groundbreaking WAN optimization appliance platform for tackling even the most demanding replication jobs. The Steelhead 7050 raises the bar on scalability and performance for the most demanding DR environments. It is also equipped to handle large hub-and-spoke data center to branch office environments when you are deploying a WAN optimization solution for IT consolidation.

The net-net is that WAN optimization, and Riverbed in particular, should be considered for any DR initiative. Below is a recent video covering Riverbed and disaster recovery.

Posted in Disaster Recovery | Leave a Comment »