The Riverbed Blog (testing)

A blog in search of a tagline

Archive for September, 2011

All SharePoint. All the time.

Posted by riverbedtest on September 30, 2011


Recently, a lot of my work has been focusing on Microsoft SharePoint 2010.  I wanted to take a moment to blog about two recent related efforts:

Microsoft SharePoint 2010 interoperability testing in Redmond

Riverbed Steelhead appliances have been optimizing Microsoft applications since the beginning, but we are getting a fresh look at improving our protocol optimization by working directly with the Microsoft Interoperability Team at the Microsoft Partner Solutions Center in Redmond, WA.  Riverbed has a lab there that we use to work with customers, but now it's great to work with Microsoft directly to get both technical teams working together.

A special thanks goes out to HP, who provided the servers and network hardware for our interoperability lab.

Riverbed booth at Microsoft SharePoint Conference 2011

From the lab, I go on the road to the Microsoft SharePoint Conference 2011 in Anaheim, CA.  Our booth will be showcasing Riverbed Steelheads and the Aptimize SharePoint Accelerator.  In and out of the cloud, from document management to complete searchable websites, Microsoft SharePoint 2010 redefines the collaborative environment.  Riverbed Technology enables that capability on the WAN.

Look for the Riverbed Technology booth at the Microsoft SharePoint Conference 2011 from Oct 3-6.  Conference information can be found at http://www.mssharepointconference.com.

Posted in Application Acceleration, Bandwidth Optimization, Web Content Optimization | Leave a Comment »

Cloud Balancing: Don’t Put All Your Apps in One Cloud

Posted by riverbedtest on September 29, 2011

In today’s post, we close our series introducing Zeus Technology and ADCs. Of course, now that Zeus is part of the Riverbed family, we’ll continue to write about ADCs regularly, especially now that we’ve given you a primer on the technology and what it does. Last week we talked about cloud bursting, so let’s close this series with another useful cloud trick that softADCs enable: cloud balancing.

Cloud balancing is the process of routing transactions and network requests across applications in multiple clouds. In plainer terms, it’s the simple “don’t put all your eggs in one basket” approach – or in this case, don’t put all your applications in one cloud.

You might recall, earlier this year, Amazon Web Services was hit with a multi-day service outage on the East Coast after a “misaligned network” brought down several EC2 services in its Northern Virginia data center.  Or perhaps you remember when Microsoft’s Business Productivity Online Services experienced a multi-day email outage, with the culprit being “malformed email traffic on the service.” Cloud balancing acts as an insurance policy against such outages. For instance, Amazon’s EC2 customers could have avoided being impacted by the sweeping outage if they had implemented load balancing across two cloud providers, or with a cloud provider and their own data center. Yet too many organizations transition to the cloud without asking, "What if this provider suffers an outage?"

After all, even if we assume that each cloud provider delivers only 99 percent availability, deploying an application across several cloud instances can significantly decrease the chances that it will ever suffer an outage caused by a cloud provider.

Let me provide some perspective on this: 99% uptime means 1% downtime, which is equivalent to about 3 1/2 days a year.  Each additional independent cloud provider adds another two “nines” of availability (raising the level to 99.99 percent, or about 52 minutes of downtime per year), so three platforms can deliver 99.9999 percent uptime (six nines, or about 31 seconds of downtime per year). At that point, the risk of cloud platform failure is effectively eliminated and the organization can concentrate on managing internal risks.
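The arithmetic behind the “nines” stacks up neatly if we assume provider outages are independent of each other. A minimal Python sketch of the math above:

```python
# A quick sketch of the arithmetic above: n independent providers, each at
# 99% availability, are all down at once only with probability 0.01**n.

def combined_availability(n_providers, per_provider=0.99):
    """Probability that at least one of n independent providers is up."""
    return 1 - (1 - per_provider) ** n_providers

def downtime_minutes_per_year(availability):
    return (1 - availability) * 365 * 24 * 60

for n in (1, 2, 3):
    up = combined_availability(n)
    print(f"{n} provider(s): {up:.6%} uptime, "
          f"~{downtime_minutes_per_year(up):.1f} minutes of downtime/year")
```

One caveat worth stating: the model assumes failures are independent. Providers that share a region or an upstream dependency can fail together, which is exactly why spreading across genuinely distinct providers matters.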

For another perspective on highly available system design, "the nines", and clustering (which is effectively what this model emulates), may I recommend my book, Blueprints for High Availability, highly available at amazon.com and other purveyors of excellent technical books.

Because Zeus’ solutions are softADCs, they are designed to balance application traffic across multiple cloud deployments, reducing customers’ risk and improving the performance and capacity of applications. This allows customers to:

  • Increase the reliability of a cloud-based infrastructure by hedging the risk across multiple availability zones and cloud platforms.
  • Improve the performance of the cloud-based service using geographic traffic distribution and local traffic acceleration.
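As a toy illustration of those two bullets (not Zeus’s actual traffic-management API, which is far richer), a balancer can combine a health check with latency-based routing; all names and numbers here are invented:

```python
# Hypothetical sketch of the balancing policy described above: route each
# request to the fastest *healthy* cloud, so an outage in one provider
# simply shifts traffic to the survivors.

def pick_cloud(clouds):
    """clouds: list of dicts with 'name', 'healthy', and 'latency_ms' keys."""
    healthy = [c for c in clouds if c["healthy"]]
    if not healthy:
        raise RuntimeError("all clouds are down")
    return min(healthy, key=lambda c: c["latency_ms"])["name"]

clouds = [
    {"name": "provider-a-east", "healthy": True,  "latency_ms": 20},
    {"name": "provider-b-west", "healthy": True,  "latency_ms": 80},
]
print(pick_cloud(clouds))        # normally the low-latency cloud wins

clouds[0]["healthy"] = False     # provider A suffers an outage
print(pick_cloud(clouds))        # traffic fails over to provider B
```

The same policy handles both bullets: geographic distribution falls out of the latency preference, and reliability falls out of the health check.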

Many of the customers we’ve talked about here in this series, including Gilt Groupe and STA Travel, have deployed Zeus solutions to replicate application content across multiple public or private clouds and have seen measurable performance benefits as a result. Balancing application traffic across multiple clouds delivers confidence that the application will always be available, no matter what misfortune befalls any individual cloud provider. Because the reality is, you need some safety protocols in place with the cloud. In the wise words of ZDNet’s Ken Hess, “The cloud isn’t perfect, People. Computers make up the cloud. Computers are fallible. Therefore the cloud is fallible.” So, stay prepared and don’t put all your applications in one provider.

Posted in Uncategorized | Leave a Comment »

We’re at Interop Mumbai. Are you?

Posted by bobegilbert on September 28, 2011

Interop Mumbai starts today at the Bombay Exhibition Center in Goregaon, Mumbai, India, and Riverbed is all over it.  We're a Gold Sponsor of the show, our own Naveen Prabhu spoke earlier today on Application Acceleration, and, of course, you can find us in the Interop vendor exhibition in booth #7.

At the booth, we have our Presentation Theater, where you can see a demonstration of WAN Optimization and other Riverbed products, as shown in the photo at left.

If you do stop by, be sure to mention that you read about it on the Riverbed Blog.

And if you can't make it to Mumbai, come see us at Interop New York, which starts Monday.  Riverbed is also a Gold Sponsor of the NY event.  We have five speakers, and our booth with the Presentation Theater. We'll have more details on Monday.

We look forward to seeing you at one or both of these exciting events!

 

Posted in Application Acceleration, Corporate, Events | 1 Comment »

Is Cloud Reality Beginning to Take Hold?

Posted by riverbedtest on September 28, 2011

As I’ve been out talking to people about using the public cloud as a target for data protection, I continue to be surprised by how much pain many organizations regularly go through for backup and recovery, as well as the variety of methods used to protect company data. Of course, the old standby is tape, and even in my days at Data Domain, where the mantra “Tape Sucks” was like a religion, everyone was predicting the rapid demise of that 1928 invention’s role in IT. And yes, tape has lost some of its place in the market for data protection, but it continues to hang around, despite all of the pain that I hear from IT professionals about it.

Why? There are probably as many theories about that as there are about who shot Kennedy, but I think it is safe to say that tape holds on for a couple of reasons:

• Disk is still relatively expensive, even if deduped, and still complex to manage
• Some (though not most) regulatory requirements are best met by tape
• Tape is a known quantity, familiar, “better the devil you know” and all that

So people seem to make do, kludging together a patchwork of solutions to keep ahead of that dreaded backup window, often at the expense of any kind of DR planning. In fact, for most SMBs and SMEs, data protection is only a secondary part of someone’s IT job. So it doesn’t always get attacked with the same vigor and focus as other IT issues. Like I said, people make do.

But that is changing. I’ve been seeing people start to take a look at the potential of doing away with all the cost and hassle of standard data protection solutions and replacing it with the public cloud. I know about all the hype about “The Cloud” but over the course of this year, the view of the cloud I’ve seen has become more measured, with people asking deeper questions about the implications of using the cloud. For storage in particular, professionals are starting to see that not all storage lends itself as easily to the cloud. The performance implications and management difficulties of moving primary storage to the cloud have tripped up both trial customers and solution providers, and have strengthened the focus for cloud storage on functions such as backup and archiving, which are much better suited to the cloud in terms of performance requirements and storage methodologies. And the majority of people in that camp are looking to jettison the shackles of tape backup and adopt cloud storage.

Mainstream backup solutions are also promoting the extension of data protection to the public cloud. Last week, I wrote about IBM recently releasing a video showing how the Riverbed® Whitewater® cloud storage gateway enables Tivoli Storage Manager users to deploy a drop-in Whitewater appliance and essentially convert all the headaches of managing a backup infrastructure into freed-up capital and hours that can be spent on more pressing IT needs.

I’m sure there will be some data protection issues for which tape is a compelling solution, at least for the near future. But there’s a reason you don’t find 8-track or cassette players in cars anymore. It’s also getting more difficult to find outfits that rent hard copies of movies, and even Netflix is separating off its DVD business and applying its golden brand name to its business based on cloud streaming of videos (BTW, Netflix uses Amazon’s Simple Storage Service (S3) for its own business).

Trends are unmistakably toward more and more use of cloud storage. As technologies like Whitewater address the difficulties and/or concerns about using the cloud, this trend can only accelerate. Will tape and disk disappear? No. But if a TSM user can drop a small box in their datacenter and essentially get access to fast, secure, infinitely scalable storage, the rules of the game have undoubtedly changed and cloud storage will command a big seat at the data protection table.  

Posted in Disaster Recovery, Public Cloud, Storage Cloud | 2 Comments »

Riverbed Technical Lead Steve Riley Q&A on Federal Data Protection

Posted by riverbedtest on September 27, 2011

Thanks for tuning in for part three (of five) of the Federal IT initiatives Q&A video series with our illustrious technical leader Steve Riley. As projected by many industry research and analyst firms, data will continue to grow. This is not surprising. And as you may remember, with the Cloud First policy, agencies have a mandate to move data and applications to the cloud. So, for this week's video Q&A, we shift gears, and examine some of the considerations for agencies to protect their data in the cloud.

Steve answers the following:

1. How is data protected in the cloud?

2. What are the technical considerations and strategies for protecting data in the cloud?

3. How does Riverbed, specifically, help protect data in the cloud? Here is a hint – it has something to do with FIPS certification.

Next week, I'm taking a break from posting. But, tune in again October 11 for a Q&A video on teleworking and mobility. It would be appropriate to watch the video on a smart phone or tablet, outside of your workplace.

 

 

Posted in Data Protection, Hybrid Cloud, Private Cloud, Public Cloud | Leave a Comment »

Cloud computing for government, part 2 of 3

Posted by riverbedtest on September 26, 2011

Today we resume with part two of my three-part series on government cloud computing. Be sure to read part one, in case you missed it.


The benefits of the cloud are supposedly self-evident, but how can agencies actually measure the ROI?

Curiously, in General Alexander’s testimony [see the last question in part 1], while he praised the capabilities of cloud security, he questioned some of the promised economic benefits. Many providers publish online calculators that allow you to compare the costs of a cloud deployment to the costs of running on-premise infrastructures. Frequently these fail to account for the personnel costs of installing and maintaining on-premise equipment, so they tend to understate what on-premise infrastructure really costs. However, trying to measure cloud ROI and comparing that to traditional infrastructure ROI ignores the cloud’s most important benefit: elasticity. The cloud allows you to add and remove resources according to demand. Traditional on-premise infrastructures are either under-utilized (and thus waste resources) or over-subscribed (and thus perform poorly). Applications designed to take advantage of the cloud’s elasticity largely eliminate the guesswork associated with predicting demand. A resource availability curve that always matches your demand curve appears a lot like perfect ROI.
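To make the elasticity point concrete, here is a toy comparison (all numbers invented for illustration) between a fleet sized for peak demand and one that tracks demand hour by hour:

```python
# Toy numbers: servers needed during eight hours of a day.
hourly_demand = [30, 25, 20, 40, 90, 100, 85, 50]

fixed_fleet = max(hourly_demand)            # on-premise: size for the peak
fixed_hours = fixed_fleet * len(hourly_demand)
elastic_hours = sum(hourly_demand)          # cloud: pay only for what runs

idle = 1 - elastic_hours / fixed_hours
print(f"fixed: {fixed_hours} server-hours, elastic: {elastic_hours} "
      f"server-hours ({idle:.0%} of the fixed fleet sat idle)")
```

In this made-up example, nearly half the fixed fleet's capacity is wasted off-peak; that gap is the hidden cost that simple price-per-server calculators miss.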

What does Riverbed bring to this space that sets you apart from others?

Over eight years, Riverbed has built a reputation for making wide area networks feel like they’re right next door. As organizations consolidated dispersed branch office resources into fewer large data centers, our technology has helped eliminate the typical problems that arise from computing at a distance. The cloud is a natural next step for us, because in many ways the cloud is similar to a WAN. Users can be situated anywhere and we can apply the same optimization techniques to make applications feel local. With Steelheads of various flavors you can vastly accelerate the movement of data from on-premise to the cloud and back, and also between clouds—even if the providers are different. In our Whitewater appliance we’ve adapted our optimization technology to remove the drudgery from backups, allowing you to point your backup software to a target that in turn compresses, encrypts, and backs up to the cloud—no more tape. For cases where you aren’t able to deploy our flagship symmetric optimization technology, we offer application acceleration that you can add to your cloud-based applications through two recent acquisitions: Zeus and Aptimize. And soon, through our partnership with Akamai, you can accelerate third-party SaaS applications by optimizing the high-latency link between your current location and a point of presence topologically very close to your ultimate destination. Regardless of which cloud providers you choose and what technology they’re built on, we can make any cloud perform better.

Is the cloud necessarily a permanent solution? When does it make sense to use the cloud as a temporary resource?

Using the cloud only as a temporary resource would seem to conflict with the “cloud first” mandate and the notion that the cloud is the new default. It can be tempting to consider the cloud as an extension of an existing data center. Unfortunately, such thinking imposes limits—you’re less free to build applications that incorporate full cloud functionality and you can’t move to a full scale-up/scale-down resource curve. Also, I think this can create a mindset where the cloud becomes that “extra” thing that ends up not being managed well, or at all.

Is moving to the cloud strictly an IT issue? What other stakeholders need to be included in the discussions, and why?

IT organizations that choose, on their own, to move production workloads to the cloud do so at their peril. Capacity planning and disaster recovery require input from the agency’s working units. Data location and portability require consultation with legal and compliance teams. Cloud provider procedures and certifications require review by internal audit groups. Budgetary changes require working with finance folks. Don’t allow cloud projects to become line items in some developer’s monthly expense report!

Agencies will need to develop applications and services for their specific needs. Does the cloud change how they do that?

There are fundamental differences in the way applications should be built to run on clouds. Probably one of the most shocking changes is that servers are now disposable horsepower. Infrastructure is code: when you need compute resources, you simply issue a few API calls and within a matter of minutes those resources are available. Vast amounts of distributed storage are also there, waiting for you to allocate and use. In many cases this storage incorporates automatic replication, so you no longer need to build that into your application. Cloud computing also simplifies the process of updating applications: you clone an existing application, add and test updates, then move users over to the new version. Cloud providers often publish detailed technical guidance for how to develop on their particular platforms.
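The “infrastructure is code” idea above can be sketched as follows. Note that CloudClient and its methods are hypothetical stand-ins for a real provider SDK, not any particular vendor’s API:

```python
# Hypothetical provisioning sketch: compute appears after a few API calls,
# and servers become disposable horsepower you discard when finished.

class CloudClient:
    def __init__(self):
        self._instances = []

    def run_instances(self, image, count):
        """Launch `count` servers from a machine image; return their IDs."""
        ids = [f"i-{len(self._instances) + n:04d}" for n in range(count)]
        self._instances.extend(ids)
        return ids

    def terminate(self, instance_id):
        self._instances.remove(instance_id)

cloud = CloudClient()
web_tier = cloud.run_instances(image="app-server-image", count=3)
print(web_tier)                # three servers, a few seconds of API calls
cloud.terminate(web_tier[0])   # disposable: throw one away when done
```

The cloning-and-cutover update pattern described above follows the same shape: launch a second fleet from an updated image, test it, then shift traffic and terminate the old one.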


Part three will follow two weeks from today.

Posted in Hybrid Cloud, Private Cloud, Public Cloud | 1 Comment »

IBM TSM Users Get Fast Track to the Cloud

Posted by riverbedtest on September 23, 2011

IBM recently highlighted new options for data protection using Tivoli Storage Manager in a company Flash video about the benefits of cloud storage. 

The video describes how to think through a cloud strategy and how Riverbed's Whitewater cloud storage gateway enables TSM users to replace tape and disk backup with cloud storage at significant cost and management savings, all without any changes to their TSM environment.  Whitewater maximizes data transfer performance and secures data both locally and in the cloud while minimizing capacity requirements with deduplication and compression.  Essentially, Whitewater looks and acts like you have the cloud as a backup disk target right in your datacenter.

If you are one of the many TSM users who are tired of struggling with cumbersome tape or expensive disk backup systems, take a look at the video and see if Whitewater and the cloud can help address your data protection headaches.

Posted in Disaster Recovery, Public Cloud, Storage Cloud | Leave a Comment »

Bursting Your Way Through the Cloud

Posted by riverbedtest on September 22, 2011

In this post in our series introducing Zeus Technology and ADCs, we’d like to flesh out the meaning of cloud bursting. After all, with autumn officially upon us this week, the holiday season is right around the corner. And with online holiday shopping expected to rival last year’s billion dollar online sales, many retailers are already thinking about how to prepare their websites for this season’s inevitable traffic boom. Cloud bursting is often the answer for dealing with these seasonal traffic upswings, in particular on Black Friday and Cyber Monday. So in this post, we’ll explain the role that ADCs play in enabling cloud bursting and in eliminating problems with demand overload.

Cloud bursting happens when an organization needs to very quickly scale to meet a sudden spike in demand on its website. This might sound like an enviable predicament to be in, but in reality, if cloud bursting isn’t handled properly it can create significant problems — most obviously, revenue loss from customers unable to access the site.

To illustrate this, let’s look at another high-demand area — job posting sites. Given the surge in jobseekers, SnagAJob.com, America’s largest hourly job website, saw double-digit year-on-year growth of visitors to its site. With such rapid growth, the company learned that its hardware traffic manager couldn’t keep up with demand and failed to meet the company’s goal of 99.9 percent uptime. In fact, like many online retailers, the Richmond, Virginia-based company generates all of its revenues from its website. This meant it was absolutely crucial that SnagAJob.com’s growth didn’t disrupt the availability of its online services.

During this time, SnagAJob.com also opted to move into a virtualized environment that would better accommodate its growth trajectory. To do this, SnagAJob.com replaced its hardware traffic manager with Zeus’ softADC solution. So now, if SnagAJob.com suddenly needs to scale from 20 to 100 servers, it can easily “burst” into the cloud and create new virtual machines in a matter of minutes, ensuring that incoming traffic is routed to servers that are ready to handle the capacity.
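The burst decision itself reduces to simple arithmetic: compare incoming load against what the current fleet can serve, and spin up the difference. A hedged sketch, with capacity figures invented for illustration (not SnagAJob.com's or Zeus's real numbers):

```python
REQUESTS_PER_SERVER = 500   # assumed capacity of one server, requests/sec

def extra_servers_needed(current_servers, requests_per_sec):
    """How many more VMs to spin up to cover the incoming load."""
    needed = -(-requests_per_sec // REQUESTS_PER_SERVER)  # ceiling division
    return max(needed - current_servers, 0)

print(extra_servers_needed(20, 9_000))    # steady state: the fleet suffices
print(extra_servers_needed(20, 50_000))   # spike: burst 80 more VMs
```

With those assumed numbers, a spike to 50,000 requests/sec takes the fleet from 20 servers to 100 — exactly the kind of jump the quote below describes.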

“The greatest benefit Zeus gives us is the amount of availability it enables for our members and customers,” says Matt Reidy, SnagAJob.com’s Director of IT Operations. “If a server or service is unavailable, Zeus knows about it and will divert traffic seamlessly.”

As a pure-software solution, Zeus plays a critical role in SnagAJob.com’s virtualized infrastructure. When its website experiences an influx of traffic, SnagAJob.com “bursts” into its private cloud at a moment’s notice — and at no additional cost to scale up its softADC. Zeus enables SnagAJob.com to absorb and evenly distribute those spikes in traffic, and gives it the flexibility to scale on demand, utilizing more Web servers as needed. Now apply this to the online holiday shopping season, which of course is rife with sudden traffic troughs and crests. Wouldn’t it make sense for an online retailer to use cloud bursting to quickly, seamlessly, and efficiently manage ebbs and flows in its Web traffic?

It’s important to point out that cloud bursting can only occur with 100 percent-software based ADCs. After all, the fluidity and rapid change required for cloud bursting calls for a software solution that can keep up with the cloud’s quick pace. Imagine scaling or distributing applications in the cloud with conventional hardware traffic management solutions.

“If we have 20 servers and need 100, we can easily spin them up,” adds Reidy. “If we still had a physical environment, we couldn’t do that as easily, and it would take days rather than minutes. Speed to change was a big motivation to moving to a virtualized platform, and Zeus is an essential part of that.” 

Last year, demand for e-shopping hit a record high of more than a billion dollars on Cyber Monday — with demand continuing throughout the holiday season. In fact, U.S. online retail sales are expected to grow at a 10 percent compound annual rate from 2010 to 2015, with e-retail sales ultimately expected to reach the soaring heights of $279 billion, according to recent predictions by Forrester, a market analyst group. And as Cyber Monday inches closer, wouldn’t you feel better knowing that your favorite e-retailers are cloud bursting this holiday season?

Posted in Uncategorized | Leave a Comment »

Riverbed asks customers to Rock the Vote

Posted by bobegilbert on September 21, 2011


Since shipping the first flagship Steelhead appliance product in May 2004, Riverbed has continued to innovate in the WAN optimization market with a number of software updates that deliver faster acceleration performance, improved scalability, and simpler management.  Flash forward more than seven years later and the product management and development teams at Riverbed are hard at work on the next generation features and functionality.

One tradition that hasn't changed in the R&D and product management process at Riverbed is that we involve our customers in identifying which features and enhancements we should work on next.  I started this voting tradition several years ago under the name "Rock the Vote". Using platforms such as the Riverbed User Group (RUG), we ask our customers to vote for the features that are important to them.  The process is pretty simple.  We give them a list of features covering a variety of products and product areas and give each customer a total of three votes in each area.  If they want to vote for a feature that is not listed, they can spend a vote by writing in their own feature request.  The result is several hundred votes that help our product management and R&D teams prioritize which features to focus on next.

Delivering innovative software updates full of the features and functionality that customers are asking for has proven to be a successful practice for Riverbed.  I am personally looking forward to seeing what requests we get from our next group of customers.  If you are curious which features won the most votes, just wait for the next software update to come out.

Posted in Uncategorized | Leave a Comment »

Riverbed Technical Lead Steve Riley Q&A on Federal Cloud First Initiative

Posted by riverbedtest on September 20, 2011

For part two of our federal IT initiatives Q&A with Steve Riley, we focus on the Cloud First policy. If you do not work in the federal IT space, the Cloud First policy is the federal government's strategy for cloud computing, part of the greater plan to reform federal IT. The general estimate is that $20 billion of the federal government's $80 billion in IT spending could be used for cloud computing.

In the enterprise, cloud computing is a trend that organizations have discussed and migrated to for many years. In the federal IT space, however, the push for cloud computing was kicked off and championed by Vivek Kundra, the U.S. government's first CIO. And although Kundra left his post last month, former Microsoft executive and Federal Communications Commission managing director Steve VanRoekel has taken over the position and the reins, and plans to use Kundra's grand vision for IT reform as a foundation for even greater changes to federal IT. 

Grab your coffee, tea, or something stronger, and watch the Q&A below, which covers what spurred the Cloud First policy (cost reductions and collaboration among agencies), considerations (safety and security), as well as how Riverbed helps agencies execute on the Cloud First policy. 

 

Stay tuned. Next week, we'll talk about data protection. 

 

Posted in Hybrid Cloud, Private Cloud, Public Cloud | Leave a Comment »