The Riverbed Blog

A blog in search of a tagline

Post-Apocalyptic IT Glory

Posted by riverbedtest on March 25, 2011

Today's guest blogger is Matt Berry, a Sales Engineer from Down Under (Australia), which explains the theme of his piece.

Are you mad? Considering changing your first name to Max? Just been on eBay looking for some “V8 Mad Max car Interceptor” muscle?

Well I’m here to tell you that you’re looking in the wrong place. Riverbed can give you the Interceptor without all the ’70s wind resistance. It goes well beyond a blown V8 with fat tires and an exhaust system you could only dream of on your Toyota hatchback. The Riverbed Interceptor can intercept 1,000,000 – yep, 1 million – TCP sessions; can be souped up with 10 Gbit/s of Ethernet connectivity madness; and can pump 12 Gbit/s of throbbing, WAN-accelerated glory into the network!

“What on earth is he talking about?” I hear you ask. Well, check this out:

Put a pair of Riverbed Interceptors between your WAN and data center LAN and you’ve got a highly available, Steelhead-aware, load-balancing solution with full stateful failover! Try doing that in a 1970s Ford Falcon Fastback! If one Interceptor fails (which makes Max really Mad), the other seamlessly intercepts and redirects optimized sessions to the same Steelhead, so the optimized flow is never interrupted.
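For the curious, here’s a minimal Python sketch of the stateful-failover idea: two redirectors share a replicated session table, so whichever one is alive keeps sending an established session to the same Steelhead. All of the class names, the farm layout, and the least-loaded rule are illustrative assumptions, not Riverbed’s implementation.

```python
class SessionTable:
    """Session state replicated between both redirectors (illustrative)."""
    def __init__(self):
        # (client_ip, client_port, server_ip, server_port) -> steelhead entry
        self.flows = {}

    def assign(self, flow, steelhead):
        self.flows[flow] = steelhead

    def lookup(self, flow):
        return self.flows.get(flow)


class Interceptor:
    def __init__(self, name, shared_table, steelheads):
        self.name = name
        self.table = shared_table    # shared state is what makes failover "stateful"
        self.steelheads = steelheads

    def redirect(self, flow):
        steelhead = self.table.lookup(flow)
        if steelhead is None:                              # new session: balance it
            steelhead = min(self.steelheads, key=lambda s: s["load"])
            steelhead["load"] += 1
            self.table.assign(flow, steelhead)
        return steelhead["name"]                           # known session: same box


table = SessionTable()
farm = [{"name": "SH-A", "load": 0}, {"name": "SH-B", "load": 0}]
primary = Interceptor("int-1", table, farm)
standby = Interceptor("int-2", table, farm)

flow = ("10.0.0.5", 51000, "10.1.0.9", 445)
print(primary.redirect(flow))   # assigned by the primary: SH-A
# The primary dies; the standby sees the same table and keeps the flow
# pinned to the same Steelhead, so the optimized session survives:
print(standby.redirect(flow))   # SH-A again
```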

Ever wanted to pause a Steelhead for planned work? “Pause? Huh?” Yes, have the Interceptor stop directing new sessions to a Steelhead so its existing sessions can be gracefully wound down. The Riverbed Interceptor makes this possible: you can plan work on a Steelhead without losing a single packet. “That’s just crazy talk!” I hear you say.
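In load-balancer terms this is connection draining. A toy sketch of the semantics follows; the names are hypothetical and not the Interceptor’s actual interface. A paused Steelhead simply stops being a candidate for new sessions while its existing ones run to completion.

```python
class SteelheadSlot:
    def __init__(self, name):
        self.name = name
        self.paused = False          # "paused": drain, don't accept new work
        self.active_sessions = set()

def place_new_session(slots, session_id):
    # Only unpaused Steelheads are candidates for new sessions.
    candidates = [s for s in slots if not s.paused]
    target = min(candidates, key=lambda s: len(s.active_sessions))
    target.active_sessions.add(session_id)
    return target.name

slots = [SteelheadSlot("SH-A"), SteelheadSlot("SH-B")]
print(place_new_session(slots, "sess-1"))   # SH-A (both empty, first wins)
slots[0].paused = True                      # planned maintenance on SH-A
print(place_new_session(slots, "sess-2"))   # guaranteed to land on SH-B
# SH-A is safe to service once its active_sessions set drains to empty.
```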

“How about the ability to load-balance across Steelheads of any size I like? Maybe I want to start small, with a 2050, then later add a couple of 5050s but still keep that 2050 workhorse in the mix?” Well, unlike Max’s beast of a wagon, the Riverbed Interceptor lets you do just that.

“How about a couple of kick-butt, solid-state 7050 bad boys for my DR traffic, balanced with a humble cluster of 5050s for all my whining users?” Yep! The Interceptor’s load-balancing rules were made for exactly that!
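Here’s a toy illustration of that kind of rule-based balancing over a mixed farm: classify the flow, then pick the least-loaded Steelhead in the matching pool. The model names, the port-based classifier, and the least-loaded rule are all assumptions made for the example, not the Interceptor’s rule syntax.

```python
FARM = {
    "dr":    [{"name": "SH-7050-1", "load": 0}, {"name": "SH-7050-2", "load": 0}],
    "users": [{"name": "SH-5050-1", "load": 0}, {"name": "SH-5050-2", "load": 0},
              {"name": "SH-2050",   "load": 0}],
}

def classify(flow):
    # Hypothetical rule: DR replication runs on TCP port 8000, all else is users.
    return "dr" if flow["dst_port"] == 8000 else "users"

def balance(flow):
    pool = FARM[classify(flow)]
    target = min(pool, key=lambda s: s["load"])   # least loaded in the matching pool
    target["load"] += 1
    return target["name"]

print(balance({"dst_port": 8000}))   # DR traffic -> a 7050
print(balance({"dst_port": 445}))    # user CIFS traffic -> the 5050/2050 pool
```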

Just remember, when you think post-apocalyptic IT glory, think Interceptor!

2 Responses to “Post-Apocalyptic IT Glory”

  1. NoHornHonking said

    Hi Matt,
    Isn’t the Riverbed an x86-based platform? How is it possible to achieve 12Gbps of throughput on an x86 processor? Is this just a marketing number? What traffic type/mix was this tested with?
    Thanks.

  2. Matt asked me to post the following on his behalf:
    Hi NoHornHonking,
    Sorry for the long delay in responding. I’ve asked our Interceptor product team to provide some details here so I could be sure I got my facts right! 😉 Here’s what they’ve said: “Performance is a simple matter of optimizing CPU, I/O, and memory.” What that means is that with today’s hardware it’s really the software that determines peak performance. Nothing makes the propeller-heads we have on staff at Riverbed happier than writing code that squeezes the best performance out of our boxes.
    Other bloggers on this site have done a good job of explaining some of the architectural reasons why we leave our competitors in the dust, so I will stick to answering the question at hand: how did we derive the 12 Gbps number?
    Before every major release, our QA group sets up a performance test with a large number of Steelheads, clients and servers to simulate a customer environment. The 12 Gbps number is derived from traffic measured during those tests.
    In addition, with our Hardware Assist Pass-through feature, we are able to run at line rate for pass-through traffic on our 10G cards.
    From my perspective, it’s also important to remember that the multiple Steelhead appliances clustered behind the Interceptor(s) are actually doing all of the hard work here, i.e. SDR, Layer 7 protocol streamlining, and TCP streamlining. Once an optimized session is in place between a Steelhead in the farm and a remote Steelhead, the Interceptor only has to redirect the server response (normally destined for the client’s IP address) to the appropriate Steelhead appliance in the farm.
    Unlike other redirect solutions, such as router-based WCCP, the Interceptor does not use GRE tunneling for redirection once a session is established (it does temporarily use GRE to pass the SYN/SYN-ACK to the farm Steelhead during session setup); instead it uses destination NAT (DNAT). DNAT is by nature a lightweight operation and, unlike GRE, does not increase the packet size, which lets the Interceptor comfortably sustain the 12 Gbit/s of throughput quoted above (there’s a toy comparison of the two approaches after this comment).
    I hope this helps.
    Regards,
    Matt
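To make the GRE-versus-DNAT point in Matt’s reply concrete, here’s a back-of-the-envelope Python sketch. The header sizes are the standard ones (20-byte IPv4 header, 4-byte base GRE header per RFC 2784); the packet model and the farm IP are simplified assumptions, not Interceptor code.

```python
IPV4_HEADER = 20   # bytes, IPv4 header without options
GRE_HEADER = 4     # bytes, base GRE header (RFC 2784)

def redirect_with_gre(packet_len):
    # GRE encapsulation wraps the original packet in a new IP + GRE header,
    # so every redirected packet grows on the wire.
    return packet_len + IPV4_HEADER + GRE_HEADER

def redirect_with_dnat(packet):
    # DNAT rewrites the destination address in place (plus checksum fixes);
    # the packet length does not change.
    return dict(packet, dst_ip="10.1.0.20")   # hypothetical farm Steelhead IP

print(redirect_with_gre(1460))               # 1484: 24 bytes of overhead per packet
pkt = {"dst_ip": "192.0.2.10", "len": 1460}  # server reply headed for the client
print(redirect_with_dnat(pkt)["len"])        # 1460: same size, just re-addressed
```

That fixed per-packet growth (and the extra encapsulation work) is why the reply singles out DNAT as the lighter-weight redirect path at high throughput.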

