The Riverbed Blog (testing)

A blog in search of a tagline

Archive for the ‘Hybrid Cloud’ Category

Cloud computing for government, part 2 of 3

Posted by riverbedtest on September 26, 2011

Today we resume with part two of my three-part series on government cloud computing. Be sure to read part one, in case you missed it.


The benefits of the cloud are supposedly self-evident, but how can agencies actually measure the ROI?

Curiously, in General Alexander’s testimony [see the last question in part 1], while he praised the capabilities of cloud security, he questioned some of the promised economic benefits. Many providers publish online calculators that let you compare the costs of a cloud deployment to the costs of running on-premise infrastructure. Frequently these fail to account for the personnel costs of installing and maintaining on-premise equipment, so they understate what the on-premise option really costs. More importantly, though, measuring cloud ROI and comparing it to traditional infrastructure ROI ignores the cloud’s most important benefit: elasticity. The cloud allows you to add and remove resources according to demand. Traditional on-premise infrastructures are either under-utilized (and thus waste resources) or over-subscribed (and thus perform poorly). Applications designed to take advantage of the cloud’s elasticity largely eliminate the guesswork associated with predicting demand. A resource availability curve that always matches your demand curve looks a lot like perfect ROI.
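
To make the elasticity argument concrete, here is a minimal back-of-the-envelope sketch in Python; the demand figures and the cost rate are hypothetical illustrations, not numbers from the interview:

    # Hypothetical demand curve: server units needed, sampled every 2 hours over one day.
    demand = [2, 2, 2, 3, 5, 9, 14, 18, 18, 14, 8, 4]

    rate = 1.0                    # assumed cost per server unit per sample period
    fixed_capacity = max(demand)  # on-premise: provisioned for peak, running around the clock

    fixed_cost = fixed_capacity * rate * len(demand)
    elastic_cost = sum(d * rate for d in demand)  # cloud: pay only for what actually runs
    utilization = sum(demand) / (fixed_capacity * len(demand))

    print(f"fixed: {fixed_cost:.0f}  elastic: {elastic_cost:.0f}  "
          f"on-premise utilization: {utilization:.0%}")
    # fixed: 216  elastic: 99  on-premise utilization: 46%

Under these assumed numbers the elastic deployment costs less than half as much, and the difference is precisely the waste that comes from provisioning for peak demand.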

What does Riverbed bring to this space that sets you apart from others?

Over the past eight years, Riverbed has built a reputation for making wide area networks feel like they’re right next door. As organizations have consolidated dispersed branch office resources into fewer large data centers, our technology has helped eliminate the typical problems that arise from computing at a distance. The cloud is a natural next step for us, because in many ways the cloud is similar to a WAN. Users can be situated anywhere and we can apply the same optimization techniques to make applications feel local. With Steelheads of various flavors you can vastly accelerate the movement of data from on-premise to the cloud and back, and also between clouds—even if the providers are different. In our Whitewater appliance we’ve adapted our optimization technology to remove the drudgery from backups, allowing you to point your backup software to a target that in turn compresses, encrypts, and backs up to the cloud—no more tape. For cases where you aren’t able to deploy our flagship symmetric optimization technology, we offer application acceleration that you can add to your cloud-based applications through two recent acquisitions: Zeus and Aptimize. And soon, through our partnership with Akamai, you’ll be able to accelerate third-party SaaS applications by optimizing the high-latency link between your current location and a point of presence topologically very close to your ultimate destination. Regardless of which cloud providers you choose and what technology they’re built on, we can make any cloud perform better.

Is the cloud necessarily a permanent solution? When does it make sense to use the cloud as a temporary resource?

Treating the cloud as only a temporary resource would seem to conflict with the “cloud first” mandate and the notion that the cloud is the new default. It can be tempting to consider the cloud as an extension of an existing data center. Unfortunately, such thinking imposes limits—you’re less free to build applications that incorporate full cloud functionality and you can’t move to a full scale-up/scale-down resource curve. Also, I think this can create a mindset where the cloud becomes that “extra” thing that ends up not being managed well, or at all.

Is moving to the cloud strictly an IT issue? What other stakeholders need to be included in the discussions, and why?

IT organizations that choose, on their own, to move production workloads to the cloud do so at their peril. Capacity planning and disaster recovery require input from the agency’s working units. Data location and portability require consultation with legal and compliance teams. Cloud provider procedures and certifications require review by internal audit groups. Budgetary changes require working with finance folks. Don’t allow cloud projects to become line items in some developer’s monthly expense report!

Agencies will need to develop applications and services for their specific needs. Does the cloud change how they do that?

There are fundamental differences in the way applications should be built to run on clouds. Probably one of the most shocking changes is that servers are now disposable horsepower. Infrastructure is code: when you need compute resources, you simply issue a few API calls and within a matter of minutes those resources are available. Vast amounts of distributed storage are also there, waiting for you to allocate and use. In many cases this storage incorporates automatic replication, so you no longer need to build that into your application. Cloud computing also simplifies the process of updating applications: you clone an existing application, add and test updates, then move users over to the new version. Cloud providers often publish detailed technical guidance for how to develop on their particular platforms.
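
As a rough illustration of the “infrastructure is code” idea, here is a minimal sketch using the AWS SDK for Python (boto3); the image ID and instance type are placeholders, and nothing in the interview ties the point to any particular provider or SDK:

    import boto3  # AWS SDK for Python; any provider's API follows a similar pattern

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A few API calls stand in for an entire hardware procurement cycle.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image ID
        InstanceType="t3.micro",          # placeholder instance size
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}; it should be running within minutes.")

    # Servers as disposable horsepower: when the work is done, discard the server.
    ec2.terminate_instances(InstanceIds=[instance_id])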


Part three will follow two weeks from today.


Posted in Hybrid Cloud, Private Cloud, Public Cloud | 1 Comment »

Riverbed Technical Lead Steve Riley Q&A on Federal Cloud First Initiative

Posted by riverbedtest on September 20, 2011

For part two of our federal IT initiatives Q&A with Steve Riley, we focus on the Cloud First policy. If you do not work in the federal IT space, the Cloud First policy is the federal government's strategy for cloud computing, part of a larger plan to reform federal IT. The general estimate is that $20 billion of the federal government's $80 billion in IT spending could be used for cloud computing.

In the enterprise, cloud computing is a trend that organizations have been discussing and migrating to for years. The push for cloud computing in the federal IT space, however, was kicked off and championed by Vivek Kundra, the U.S. government's first CIO. Although Kundra left his post last month, former Microsoft executive and Federal Communications Commission managing director Steven VanRoekel has taken up the position and the reins, and plans to use Kundra's grand vision for IT reform as a foundation for even greater changes to federal IT.

Grab your coffee, tea, or something stronger, and watch the Q&A below, which covers what spurred the Cloud First policy (cost reductions and collaboration among agencies), key considerations (safety and security), and how Riverbed helps agencies execute on the policy.

 

Stay tuned. Next week, we'll talk about data protection. 

 

Posted in Hybrid Cloud, Private Cloud, Public Cloud | Leave a Comment »

Cloud computing for government, part 1 of 3

Posted by riverbedtest on September 12, 2011

One of Vivek Kundra's most significant contributions in his position as first CIO of the United States was to introduce a "cloud first" policy for government computing projects. Mr. Kundra's replacement, Steven VanRoekel, vows to continue this policy, which will help numerous government agencies streamline their missions and improve citizen services.

Recently I was interviewed as part of a series of technology provider perspectives on government cloud computing. I'd like to share that interview with you, our blog readers. I plan to post the questions and answers in a three-part series, the first of which follows here. As always, we welcome your thoughts and reactions.


Agencies are under a “cloud first” mandate for procuring IT services, so awareness of the cloud should be there. But what's the level of understanding about how agencies can benefit from it?

Cloud providers love to wax rhapsodic about the benefits of utility computing, and you can find plenty of appealing goodness on their marketing web pages. What’s missing, I think, is a way for agencies to translate the generic promises into specific benefits that they can then measure. Of course, this means you already need a fairly good understanding of what you have, what works well, and what doesn’t work well. From this you can then more easily evaluate the benefits of the cloud in general and also compare specific benefits of various providers. Unfortunately, if you don’t have a good idea of what you’re already doing, it’s difficult to truly know whether moving to the cloud will bring positive results.

Is moving to the cloud a “no brainer” for agencies? Should they just go ahead and do it? What process do they need to go through to decide whether they are ready?

Assuming you can accurately translate the promises into measurable benefits, I’d say yes, agencies should adopt cloud computing as the new default deployment model for new projects and for existing projects that are planned to undergo a refresh cycle. I don’t like characterizing it as a “no brainer,” though. To wring maximum value from a cloud deployment requires a fair amount of brains: cloud architecture is fundamentally different from traditional on-premise architecture, and this is reflected in how you develop applications, where you locate data, how you plan for disaster recovery, and how you implement information security controls.

Are there any agency applications or services that should never move to the cloud, or is everything an agency does open to that move? In either case—why?

One way to influence change is to set new defaults. For example, in states where applicants for driver licenses have to opt in to organ donation, only 20% choose to do so—vastly limiting organ availability. Some states have reversed this; drivers are organ donors by default unless they opt out. 80% stick with the default, and all residents of these states benefit from the greater availability of organs. So the “cloud first” mandate, along with the mental shift to cloud as default, requires that an agency obtain an exception if it wishes to deploy a project on premise. If you make the exception process sufficiently painful, you’ll discourage agencies from inventing convenient excuses to continue doing things the old (meaning familiar) way. Clearly there are certain exception criteria that will prevent some workloads from moving to shared infrastructures. But does each one need its own dedicated data center? Could, perhaps, all these workloads share a single private “top secret” cloud? I’d imagine so.

How can agencies decide which flavor of cloud—private, public, or hybrid—is right for them?

It doesn’t make much sense to choose a deployment model from the start and then attempt to force all workloads into that one model. Different workloads can use different models—that’s one of the neat things about cloud and emerging technologies that make it easy to port workloads between clouds. So I’d say that the decision of which deployment model to use for any particular workload is driven by the answer to the previous question and, of course, the following question.

Many potential agency users of the cloud believe it's not yet secure enough for their needs. Are they right?

Perhaps we should let General Keith Alexander, chief of the US Cyber Command, answer that one for us:

“This architecture would seem at first glance to be vulnerable to insider threats—indeed, no system that human beings use can be made immune to abuse—but we are convinced the controls and tools that will be built into the cloud will ensure that people cannot see any data beyond what they need for their jobs and will be swiftly identified if they make unauthorized attempts to access data… The idea is to reduce vulnerabilities inherent in the current architecture and to exploit the advantages of cloud computing and thin-client networks, moving the programs and the data that users need away from the thousands of desktops we now use—up to a centralized configuration that will give us wider availability of applications and data combined with tighter control over accesses and vulnerabilities and more timely mitigation of the latter.”

These are quotes from his testimony to Congress in March 2011. His statements reveal a remarkably keen understanding of where risk to information lies and how to mitigate those risks. If the world’s largest online retail company stores and retrieves its entire product catalog from the public cloud, if Treasury.gov, Recovery.gov, and NASA all use the public cloud, if major pharmaceutical manufacturers use public cloud resources for testing the protein folding sequences of trade-secret chemical compounds, if the world’s largest movie streaming/subscription service runs its whole business—front and back office plus its intellectual property—from the public cloud, then just who are these people who claim “oh, the cloud isn’t secure enough for me”? Cloud providers are under constant pressure to prevent their services from becoming attractive to bad guys and to make it exceptionally difficult for one customer to interfere with another. And they’re constantly striving to obtain ever more stringent certifications. That’s a lot of work, more work than most private or single-purpose data centers have the staff or budget to undertake. Now, having said all that, if your cloud provider refuses to be transparent about how they manage their security, I suggest you take your business elsewhere.


Part 2 will follow two weeks from today, and part 3 will follow two weeks after that.

Posted in Hybrid Cloud, Private Cloud, Public Cloud | Leave a Comment »

Shifting Models & Lumped Circuits

Posted by riverbedtest on August 22, 2011

Today's Guest Blogger is Mark Day, Riverbed's Chief Scientist.

Back when I was studying electrical engineering, I first learned about lumped circuit models. In a lumped model, we considered the wires to be ideal and just focused on the behavior of the connected components (resistors, capacitors, inductors). After we knew what we were doing (more or less) in that simplified world, we learned about transmission-line models, where we modeled the behavior of the wires. And we learned that for certain kinds of real-world problems, like managing an electrical grid, a lumped model would give you hopelessly wrong answers.

A little later, when I was in graduate school for computer science, I read an entertaining rant about ideal wires vs. the reality of building a fast parallel computer. Today that item came back to mind as I was thinking about explaining WAN issues and cloud performance to people whose frame of reference (their model, if you will) might be mostly LANs.

I was happy to find that my memory had mostly served me correctly.  Here’s a relevant excerpt from Danny Hillis’s book The Connection Machine (MIT Press, 1985):

“Fundamental to our old conception of computation was the idealized connection, the wire.  A Connection Machine wire, as we once imagined it, was a marvelous thing. You put in data at one end and simultaneously it appears at any number of useful places throughout the machine.  Wires are cheap, take up little room, and do not dissipate any power.

“Lately, we have become less enamored of wires. As switching components become smaller and less expensive, we begin to notice that most of our costs are in wires, most of our space is filled with wires, and most of our time is spent transmitting from one end of the wire to the other.  We are discovering that it previously appeared as if we could connect a wire to as many places as we wanted, only because we did not yet want to connect to many places.”

I think you can see that some of the same reality-check critique applies to ideas about the performance of cloud computing and distributed computing across WANs, with the network taking the place of the wire.  It sure is simple when the network’s behavior doesn’t matter, but unfortunately that isn’t always true.
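
To put rough numbers on the point that the network can’t always be treated as an ideal wire, here is a small sketch comparing a chatty transaction on a LAN and on a WAN; the round-trip count, latencies, and bandwidths are illustrative assumptions, not measurements:

    # Total time for a transaction = protocol round trips x RTT + raw data transfer time.
    def transfer_time(round_trips, payload_bytes, rtt_s, bandwidth_bps):
        return round_trips * rtt_s + (payload_bytes * 8) / bandwidth_bps

    payload = 10 * 1024 * 1024   # 10 MB of application data
    round_trips = 400            # e.g., a block-oriented protocol chattering back and forth

    lan = transfer_time(round_trips, payload, rtt_s=0.0005, bandwidth_bps=1_000_000_000)
    wan = transfer_time(round_trips, payload, rtt_s=0.080, bandwidth_bps=10_000_000)

    print(f"LAN: {lan:.1f} s   WAN: {wan:.1f} s")
    # LAN: ~0.3 s, WAN: ~40 s. On the WAN the latency term dominates,
    # which is exactly where protocol and data optimization pay off.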

At Riverbed it’s nice to have a variety of technologies that can be brought to bear on those network-related issues. One subtle problem is that because we think about it all the time, it’s easy for us to take for granted the “switch of models” that is sometimes harder for our customers. People sometimes have to shift from assuming that everything “just works” in ideal fashion, to actually thinking about the WAN as an element of the system.

Posted in Hybrid Cloud, Private Cloud, Public Cloud | Leave a Comment »

The Industrial Cloud Revolution

Posted by riverbedtest on August 12, 2011

Does anyone else miss writing those high school history papers? You know, the ones that all start “With the Industrial Revolution came widespread change across the economic, political and social fabric of western civilization…” Blah, blah, blah. Anyone? Anyone?

Not so much, eh? Okay, it wouldn’t be the first time I’ve flown solo in the history-nerd department.

But the reason the Industrial Revolution was such a great opener for many a high-school history paper was that it really did transform the economics of production, which had wide-reaching implications for modern civilization. Production went from small-scale, localized cottage industry to large-scale, concentrated production centers benefiting from economies of scale and scope. Various innovations, from the flying shuttle to the assembly line, were instrumental in creating the efficiencies of industrial revolution factories, but at the end of the day, creating more products more quickly wasn’t worth much unless you had exposure to enough customers to buy them. In other words, you had to get all those products to market.

Enter the steam engine. With a steam-powered railway infrastructure, industrialized manufacturers could get their products to more markets, faster.  Which is a good thing when you just churned out more pairs of pants in a year than everyone in a hundred mile radius could wear in their combined lifetimes.

So, why am I going off about steam engines and the Industrial Revolution? Well, here in IT land, we’re having a bit of an Industrial Revolution redux. Virtualization has enabled IT administrators to consolidate servers and gain economies of scale, and companies like Amazon, Rackspace and AT&T are beginning to offer basic IT services on-demand, passing on even GREATER economies of scale. But all the cheaper, on-demand compute and storage in the world isn’t worth much if you can’t get the product (applications) to market (users).

We need a steam engine for the cloud revolution. Oh, wait! WAN optimization has proven to accelerate network-based applications by reducing bandwidth consumption and the impact of latency. Choo-choo!! Layer on network performance monitoring (who’s keeping track of all these trains?), web content optimization (how are we loading these products on the trains?), and application delivery controllers (what train is going where, when?) and you have yourself the speed and intelligence for a high-performance cloud delivery system.

 All aboard!

Extra credit: Join Amazon Web Services Senior Evangelist Jeff Barr and me on August 17 for a webinar on how to optimize your cloud server deployments. Register here!

Posted in Fun, Hybrid Cloud, Public Cloud, Web Content Optimization | Leave a Comment »

Riverbed’s Optimization Solutions for the Cloud

Posted by riverbedtest on August 11, 2011

Recently I saw a blog post on ReadWriteWeb / Cloud by David Strom in which he described the roles that WAN optimization can play in helping accelerate cloud-based IT services.

This has long been an area of focus at Riverbed; for years we have been helping enterprises address and solve the challenges they face with business applications performing poorly across their private WANs. Riverbed's award-winning Steelhead family of WAN optimization appliances has held a leading position in the global market for the last several years, according to several leading industry analyst firms.

Now, in the era of cloud-based IT services, the performance problems created by the increased distance between users and their data, combined with the lack of QoS and unpredictable Internet performance, are significantly worse than those faced within a structured and well-known corporate IT environment. Thus the need for performance optimization in cloud environments is even greater than in traditional, private corporate IT.

These requirements have prompted Riverbed to develop and offer a whole range of products and technologies to address the vast majority of cloud-based IT applications and environments. In his recent blog post, David mentioned only one Riverbed product in this context, the Steelhead appliance.

In addition, Riverbed also offers the following products to address the acceleration and optimization needs of virtual and cloud environments:

  1. Virtual Steelhead – as the name suggests, a virtual version of the Steelhead product that runs on VMware ESX/ESXi platforms
  2. Cloud Steelhead – Steelhead WAN optimization plus simple portal-based management, on-demand instantiation, easy cloning, and flexible sizing and pricing
  3. Riverbed Whitewater – a single-ended cloud storage gateway that delivers speed, security, cost-efficiency, and ease of use for cloud-based storage services
  4. Steelhead Mobile – PC and Mac client acceleration software, so you can enjoy accelerated cloud IT services from anywhere, over any connectivity medium.

Additionally, with the recent acquisitions of Zeus and Aptimize, Riverbed now also has two new single-ended technologies – an application delivery controller and web content optimization – to help accelerate both public and private cloud-based web content and applications.

So in summary, Riverbed really should be your first port of call for any cloud IT service acceleration & optimization requirements.

Posted in Application Acceleration, Bandwidth Optimization, Hybrid Cloud, Mobile, Private Cloud, Public Cloud, Storage Cloud, Virtualization, Web Content Optimization | Leave a Comment »