The Riverbed Blog (testing)

A blog in search of a tagline

Archive for the ‘Hybrid Cloud’ Category

Riverbed at Akamai Government Forum; Steve Riley to Present on Hybrid Cloud

Posted by riverbedtest on November 9, 2011

With initiatives, mandates and reforms in place aimed at bringing efficiency to government IT, it should be no surprise that over the last few months you’ve seen a lot of Riverbed at government IT conferences and events. After all, our IT performance solutions help government agencies meet those goals – from enabling data center consolidation, to reducing IT costs, to executing on the Cloud First policy.

On November 16, Riverbed will be at the Akamai Government Forum, taking place at the Grand Hyatt Washington in Washington, D.C. The second annual Akamai Government Forum will focus on the latest solutions for scaling the Internet infrastructure for local, state and federal government agencies. Visit the Riverbed station to see demos and learn about our cloud performance solutions, including Steelhead WAN optimization, Stingray application delivery and Web content optimization, Cascade application-aware network performance management for traffic visibility, and Whitewater cloud storage gateways for data protection.

And, because you can’t get enough of him, Riverbed technical leader, cloud expert and aficionado Steve Riley will deliver the cloud track discussion on hybrid cloud from 3:00 to 4:00 p.m. ET.

In his presentation, Steve will highlight how the performance problems associated with distance computing can be mitigated with optimization techniques designed for multiple layers: application, transport, network and storage.

Here is the teaser:

No longer just the fluff of airplane magazine articles, cloud computing is here to stay. The architectures envisioned for large public cloud providers are revolutionizing on-premises data centers, too. Hybrid clouds – clouds that utilize both public and private resources – allow agencies to spread workloads across multiple locations to satisfy distinct policy, regulatory, security and financial requirements. Hybrid clouds, like their individual counterparts, involve adding distance between users and their data. In most cases, the particular distance at any point in time is unpredictable, which will lead to inconsistent user experiences. Applications deployed in hybrid clouds often move large amounts of data across multiple internal and external providers; long waits for data transfer will affect productivity and availability.
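The teaser’s point about distance has a simple arithmetic core that transport-layer optimization attacks directly: a single TCP connection’s throughput is bounded by its window size divided by the round-trip time, no matter how fat the pipe is. A back-of-the-envelope sketch (the 64 KB window and the RTT figures are illustrative assumptions, not measurements from any deployment):

```python
# Illustrative numbers only: a single TCP connection's throughput is
# capped at (window size) / (round-trip time), regardless of link
# capacity. Adding distance raises the RTT and lowers the ceiling.
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP connection's throughput, in megabits/s."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1_000_000

# A user near the data center: 2 ms round trip.
local = max_tcp_throughput_mbps(window_bytes=65_535, rtt_ms=2)

# The same user reaching a distant cloud region: 80 ms round trip.
distant = max_tcp_throughput_mbps(window_bytes=65_535, rtt_ms=80)

print(f"local ceiling:   {local:.0f} Mbit/s")   # ~262 Mbit/s
print(f"distant ceiling: {distant:.1f} Mbit/s") # ~6.6 Mbit/s
```

With a classic 64 KB window, the same application that hums along next to the data center is capped below 7 Mbit/s at 80 ms of round-trip time, which is why techniques such as window scaling and round-trip reduction matter in hybrid deployments.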

Stop by; learn everything you need to know about optimization, acceleration and performance to meet the government IT mandates; and tell us what you thought of the conference.

Posted in Application Acceleration, Bandwidth Optimization, Data Protection, Events, Hybrid Cloud, Private Cloud, Public Cloud, Visibility, Web Content Optimization

The importance of agility

Posted by riverbedtest on November 3, 2011

An underappreciated aspect of the Steelhead product line is that it comes in a diverse set of form factors and – crucially – those different packages all use the same optimization architecture, and thus interoperate. What does that mean for a customer? Tremendous flexibility to adapt to changes in how data and users are distributed, without causing ripple effects elsewhere in the infrastructure. Let’s consider a simple (and common) example before moving on to the larger implications.

Organizations often have some branch offices that are very small. For the very smallest offices and individual users, it’s usually easy to decide that the right solution is Steelhead Mobile on a laptop or workstation. And once an office reaches 10-12 people, both the technology and the ROI arguments for a Steelhead appliance (physical or virtual) are easy to make. But there’s an area in the middle, around 5-6 users, where there’s enough overlap of capabilities that either approach could work. Add to this that an office may grow or shrink enough that its original configuration needs to be replaced with a different one.

Using the Steelhead family, these choices and changes at the branch can be accommodated with no additional impact on the data center side. For a given workload from a given set of users, it just doesn’t matter whether they’re coming from a Steelhead appliance or Steelhead Mobile.

Now, if you’re only familiar with Riverbed, at this point your reaction is probably something like “so what? Big deal!” But let’s look at just this one scenario with the #2 vendor: their mobile client doesn’t use the same technology as their appliance, so you have to maintain two separate data-center infrastructures to support the branches if you have a mixture of the technologies. And as you migrate a given branch from appliance to mobile or vice-versa, you’re changing the load on the corresponding data-center pieces.

That divided-technology approach means that it’s easy with the #2 vendor to be in a situation where an apparently straightforward change at a branch gets tripped up because it exceeds the capacity of some piece of data center infrastructure. Another layer of complexity comes from the fact that these two technologies have different network characteristics: their appliance uses an autodiscovery mechanism somewhat like the way Steelheads work, while their mobile client needs an explicit connection set up to its data-center counterpart. Their appliance marketing repeatedly insists on the necessity of transparency and the avoidance of tunnels, while the mobile client uses a tunnel-based system – so it’s possible that a particular branch network configuration that works with one of the technologies simply won’t work with the other.

It’s tempting to say that the divided-technology problem of the #2 vendor is just a typical lapse by a very large company, and that smaller competitors would have a better approach. So we look at the #3 vendor in our space, a private company that prides itself on doing only WAN optimization. But they don’t have any mobile client at all! So their theory is that you should just pretend you don’t need WAN optimization when you’re out on the road dealing with networks in coffee shops and hotels – exactly the opposite of most real-world experience. And apparently when your branch is too small to support an appliance or virtual appliance, you should just stop using WAN optimization. (All of a sudden, the #2 vendor looks really good by comparison.)

Before we leave this topic, it’s worth noting that the preceding comparison actually understates the Riverbed advantage. A further advantage comes from the fact that Steelhead Mobile and a Steelhead appliance (physical or virtual) can cooperate via branch warming. In branch warming, Steelhead Mobile and a local Steelhead appliance work together: each time a piece of "optimization vocabulary" is used by the machine running Steelhead Mobile, the mobile client and the appliance coordinate so that both have a copy.  As the mobile client is used in the branch office, their vocabularies will tend to converge.
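To make the idea concrete, branch warming can be pictured as two data-segment dictionaries that exchange new entries. The sketch below is a toy model under that assumption; the class names, the fixed-size chunking, and the synchronization scheme are invented for illustration and are not Riverbed’s actual implementation:

```python
# Toy model of "branch warming": the mobile client and the branch
# appliance each keep a dictionary of previously seen data segments,
# and exchange new entries so both can deduplicate future transfers.
import hashlib

def segments(data: bytes, size: int = 8) -> list:
    """Split a byte stream into fixed-size chunks. (Real WAN optimizers
    use content-defined chunking; fixed-size keeps the sketch simple.)"""
    return [data[i:i + size] for i in range(0, len(data), size)]

class SegmentStore:
    """One endpoint's dictionary of segment-hash -> segment bytes."""
    def __init__(self):
        self.store = {}

    def learn(self, data: bytes) -> list:
        """Record every segment of a transfer; return the hash references."""
        refs = []
        for seg in segments(data):
            digest = hashlib.sha256(seg).hexdigest()
            self.store[digest] = seg
            refs.append(digest)
        return refs

    def warm_from(self, peer: "SegmentStore") -> int:
        """Copy segments the peer has learned; return how many were new."""
        new = {h: s for h, s in peer.store.items() if h not in self.store}
        self.store.update(new)
        return len(new)

mobile = SegmentStore()      # Steelhead Mobile on a laptop
appliance = SegmentStore()   # the branch Steelhead appliance

# The laptop transfers a file while its owner is in the office...
mobile.learn(b"quarterly sales report, region west, draft 3")

# ...and warming runs both ways, so the two vocabularies converge.
appliance.warm_from(mobile)
mobile.warm_from(appliance)
assert mobile.store == appliance.store
```

Once the dictionaries converge, either endpoint can send short hash references instead of raw bytes for any segment the other side already holds, which is what makes the laptop’s next trip out of the office faster.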

Without spending too much time on the details of how it works, let’s talk about where it’s useful. Sometimes there are enough people in an office to justify an appliance, but the nature of the work means that some or all of them have a significant need for mobility – often because they are salespeople, hands-on repair technicians, or field supervisors. They can use Steelhead Mobile when they are on the road, they stop needing a mobile license when they’re in the office, and they take the benefits of their office work (newly learned optimizations) back on the road with them when they leave.

Now let’s talk about the bigger picture of why this matters. After all, your organization may not have small branches or mobile users, so that set of examples might not impress you. But the same general principle of agility through a common architecture is more broadly useful, and almost certainly can make a difference to your organization now or in a future configuration.

A way of getting a handle on this is to list out the different “packages” of Steelhead technology:

  • Physical appliance
  • Virtual appliance
  • Cluster of appliances (physical and/or virtual)
  • Software client
  • Cloud-integrated service
  • “Blade” for HP switch

All of these interoperate with each other – so it’s easy to go “physical to virtual” or vice-versa without disrupting the other end of the connection. Likewise it’s easy to have a set of services grow beyond the capacity of a single appliance, or migrate into (or out of) a cloud service, without prompting a redesign or redeployment of the client side.

Again, a comparison with the #2 vendor is illuminating. A casual examination of their WAN optimization product line would suggest a similar kind of breadth and agility: they have a variety of packages of WAN optimization technology. But it turns out that the commonality is more marketecture than architecture. That is, they use common branding for what are actually three very different classes of products: what we might call “main,” “mobile,” and “express.” The “mobile” products can’t interoperate at all with the “main” or “express” products. The “main” and “express” products can interoperate, but only at the lower level of function supported by the “express” products. So actually trying to use the #2 vendor’s products for Riverbed-like agility can lead to all sorts of unpleasant surprises, as WAN optimization either doesn’t work at all (mobile/main and mobile/express combinations) or works with sharply reduced functionality and performance (main/express combinations).

IT organizations need agility and flexibility to meet changing circumstances and demands.  The Riverbed single common architecture approach for WAN optimization helps ensure that Steelhead technology can help meet that need.

Posted in Application Acceleration, Bandwidth Optimization, Hybrid Cloud, Mobile, Private Cloud, Public Cloud, Site Consolidation

Video: Application Performance in the Cloud

Posted by bobegilbert on November 2, 2011

Bob Gilbert sits down with Zeus Kerravala from the Yankee Group to discuss application performance in the cloud.

Posted in Application Acceleration, Hybrid Cloud, Public Cloud, Storage Cloud

Riverbed Technical Leader Steve Riley Q&A on Distributed Recentralization

Posted by riverbedtest on November 1, 2011

Thank you for tuning in to the Federal IT Q&A series with Steve Riley, our friend for all things cloud. With this episode, we're wrapping up the series with one question and one answer. 

The question: what do the Federal Data Center Consolidation Initiative, the Cloud First policy, data protection, mobility and telework, and desktop virtualization have in common?

The answer: distributed recentralization. In the video below, Steve provides a history of the computing models we’ve experienced (mainframe, client-server and centralized computing) and talks about the direction we’re moving in – distributed recentralization. The trend is toward fewer but larger data centers. And, compared to centralized computing (creation, access and processing happening in one place), with distributed recentralization, access and creation happen in one place while processing and storage happen in another. With fewer data centers, these activities have been occurring at ever greater distances over the past ten years.

This is why adding a layer of intelligence to networks is critical.

Actually, there is one more question. What topics — within the realm of IT performance — would you like to see from us?

Posted in Application Acceleration, Bandwidth Optimization, Data Protection, Hybrid Cloud, Private Cloud, Public Cloud, Storage Cloud, Virtualization

Video: Riverbed is looking for SaaS acceleration beta customers

Posted by bobegilbert on October 27, 2011

Riverbed is looking for customers that are interested in extending their Steelhead WAN optimization to SaaS applications such as Office 365, Salesforce.com, and Google Apps.

This groundbreaking technology is a joint effort between Riverbed and Akamai, and for existing Steelhead customers it requires no new hardware. If you are interested in participating in the SaaS acceleration beta, please email Bob Gilbert: bob@riverbed.com.

Posted in Application Acceleration, Hybrid Cloud, Public Cloud

Stingray Traffic Manager Overview Video

Posted by bobegilbert on October 27, 2011

Owen Garrett, Director of Product Management for Riverbed's Stingray line of asymmetric optimization products, provides an overview of the Stingray Traffic Manager.

Posted in Application Acceleration, Hybrid Cloud, Private Cloud, Public Cloud, Virtualization, Web Content Optimization

Introducing the Stingray Product Family

Posted by bobegilbert on October 25, 2011

Apurva Dave, Riverbed VP of Marketing, provides an introduction to Stingray, Riverbed's new line of asymmetric optimization products.

You can also read the press release here.

Posted in Hybrid Cloud, Virtualization, Web Content Optimization

Riverbed 101 – An introduction video to Riverbed’s products

Posted by bobegilbert on October 19, 2011

Bob Gilbert gives a high-level overview of Riverbed's products.

Posted in Application Acceleration, Bandwidth Optimization, Data Protection, Disaster Recovery, Hybrid Cloud, Mobile, Packet Capture, Private Cloud, Public Cloud, Site Consolidation, Visibility, Web Content Optimization

Cloud computing for government, part 3 of 3

Posted by riverbedtest on October 10, 2011

Today we conclude my three-part series on government cloud computing. Be sure to read part one and part two in case you missed them.


What’s the status of standards for the cloud? What do agencies need to keep in mind as they develop their cloud strategies?

Typically, standards lag innovation. Cloud computing is one of the IT industry’s most rapidly evolving developments—providers add new features and services several times every month, it seems. Although standards around APIs are in their nascent stages, I’m not sure cloud standards are mature enough yet to be a major part of the decision process for choosing a provider. More important, I think, is that a provider offer a graceful way to retrieve and remove your data and processing workloads should you ever decide to move elsewhere. Look for providers who clearly state that your data belongs to you, not to them. Avoid providers who won’t make this assurance.

What does moving to the cloud mean for an agency's IT resources? Will “regular” IT skills suffice, or is something else needed?

Technical challenges aside, the personnel issue is, in my observation, one of the biggest barriers to cloud adoption. No one ever publicly declares that they’re going to resist cloud for fear of losing their job, but I know from experience that such fears exist. IT staff will require new skills. Good IT staff will relish the opportunities—they can gain a better understanding of the agency’s business and provide greater value. Bernard Golden, CEO of Hyperstratus, regularly writes about how cloud computing will fundamentally alter the human element of IT. The entire history of technological advancement has affected every form of work ever devised. There are no more buggy-whip manufacturers in the United States; the good ones figured out how to build automobile starters.

At what point can you call a cloud-based IT project a success?

To call something a success sounds like it has to reach some kind of conclusion—a way to know that a project is finished. Not to sound evasive, but one intriguing aspect of using the cloud for IT projects is that they never truly have to be done. “Done” is a side-effect of old-style waterfall development methodologies, which began with an end state in mind. Agile development methodologies have largely replaced waterfall development, and cloud computing is the ideal platform for agile development. The cloud’s on-demand resource elasticity permits continuous updates and improvements. IT projects become iterative and can easily adapt to meet the ever evolving needs of agency business. “Done” is no longer a requirement; success comes from knowing that new functionality can be envisioned, developed, tested, and deployed quickly without disrupting existing operations.

What are going to be the major drivers in the government cloud space in the next 3-5 years? Is there anything else that could emerge that's not evident now?

I believe finding a champion to replace Vivek Kundra’s passion is absolutely essential. While ongoing financial pressures could conceivably be the primary (or even sole) driver for government compute consolidation, someone who can keep prodding all agencies with a grand vision is still important at this stage. Also, as IT staff members retire, I’d suggest that agencies look for replacements with some experience developing for and managing cloud resources. Such staff will already understand how to adapt their work skills and strategies as cloud computing continues its relentless evolution. As for predicting how the cloud space itself will evolve, well, today’s reality certainly looks different than predictions from three years ago! I’m certain, though, that the explosive growth of data we’ve seen over the past few years will continue apace. All that data has to go somewhere, and the cloud is the best place for it.

What will be your company's strategy for the government cloud space over the next few years?

We’ll continue to strive to make the cloud easier and faster for agencies. We work closely with our Federal customers and partners to ensure we’re building the right products and creating useful guidance. We’ll continue to pursue appropriate certifications and compliance so that agencies can rely on Riverbed’s technology to safely accelerate their move to the cloud.

Posted in Hybrid Cloud, Private Cloud, Public Cloud

Riverbed Technical Lead Steve Riley Q&A on Federal Data Protection

Posted by riverbedtest on September 27, 2011

Thanks for tuning in for part three (of five) of the Federal IT initiatives Q&A video series with our illustrious technical leader Steve Riley. As projected by many industry research and analyst firms, data will continue to grow. This is not surprising. And as you may remember, with the Cloud First policy, agencies have a mandate to move data and applications to the cloud. So, for this week's video Q&A, we shift gears, and examine some of the considerations for agencies to protect their data in the cloud.

Steve answers the following:

1. How is data protected in the cloud?

2. What are the technical considerations and strategies for protecting data in the cloud?

3. How does Riverbed, specifically, help protect data in the cloud? Here is a hint – it has something to do with FIPS certification.

Next week, I'm taking a break from posting. But, tune in again October 11 for a Q&A video on teleworking and mobility. It would be appropriate to watch the video on a smart phone or tablet, outside of your workplace.

Posted in Data Protection, Hybrid Cloud, Private Cloud, Public Cloud