The Riverbed Blog (testing)

A blog in search of a tagline

Riverbed and Thin-Client Traffic

Posted by riverbedtest on April 20, 2009

The latest Riverbed software release, RiOS 5.5, provides a number of significant new features, including RSP, which I discussed in a previous blog.  Another important feature first available in RiOS 5.5 is SDR-M, or memory-based Scalable Data Referencing.  SDR-M stores redundant byte-level data patterns only in the DRAM of the Steelhead appliance.  Prior to RiOS 5.5, Riverbed's SDR mechanisms always stored redundant data patterns on disk-based storage media.  When disk-based SDR was applied to real-time applications such as thin-client traffic, however, the process of writing data to and reading it from the spinning disk media in the Steelhead appliance added latency and jitter.  Because the thin-client interface is particularly sensitive to latency, we usually advised customers not to use SDR on thin-client traffic with RiOS 5.0 and earlier versions of Steelhead software.  That is no longer the case with RiOS 5.5: SDR-M can now be applied to thin-client traffic with no adverse impact.  The following shows the significant data reduction benefits achieved by SDR-M when applied to thin-client traffic:
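To make the idea concrete, here is a minimal sketch of memory-based data deduplication in general — not Riverbed's actual SDR-M algorithm, and the fixed 64-byte chunk size is purely illustrative. Repeated byte patterns are replaced by short references into an in-memory store, so nothing touches disk:

```python
import hashlib

CHUNK = 64  # hypothetical fixed chunk size, for illustration only

def encode(data: bytes, store: dict) -> list:
    """Replace chunks already in the in-memory store with references."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).digest()
        if key in store:
            out.append(("ref", key))    # send a short reference
        else:
            store[key] = chunk
            out.append(("raw", chunk))  # send the literal bytes
    return out

def decode(tokens: list, store: dict) -> bytes:
    """Reassemble the stream on the far side from references and literals."""
    parts = []
    for kind, val in tokens:
        if kind == "ref":
            parts.append(store[val])
        else:
            store[hashlib.sha256(val).digest()] = val
            parts.append(val)
    return b"".join(parts)

# A repetitive screen update deduplicates fully on the second pass.
sender, receiver = {}, {}
frame = b"toolbar pixels " * 16
first = encode(frame, sender)
second = encode(frame, sender)
assert decode(first, receiver) == frame
assert decode(second, receiver) == frame
assert all(kind == "ref" for kind, _ in second)  # all references
```

Because the store lives entirely in DRAM, lookups avoid the disk seeks that add latency and jitter to real-time traffic.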

[Figure: Citrix ICA data reduction comparison]  The diagram illustrates the outcome of a test in which Riverbed's SDR-M technology achieved significantly better data reduction than the native default compression in the Citrix ICA (XenApp server) platform.  Similar results can be obtained for RDP-based network traffic from platforms such as VMware View or Windows Terminal Server.  More significantly, these results are achieved without the added jitter and latency that would have been introduced if the redundant data had to be written to and read from disk-based storage.

Also notable is that the results above were obtained for a single Citrix ICA session; SDR-M is even more effective at eliminating redundant data when it can be applied to multiple ICA sessions optimized through the same Steelhead appliance.  In that case, SDR-M leverages the redundant byte patterns observed in one Citrix user's screen view for other Citrix users viewing the same or similar screens.
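The cross-session effect can be sketched in the same illustrative terms (again, not Riverbed's implementation): a single shared in-memory store means patterns learned from one user's session immediately benefit the next user who views the same content.

```python
import hashlib

def compressed_size(data: bytes, store: dict, chunk=64, ref=8) -> int:
    """Bytes needed to send `data`, given what the shared store holds.
    Chunk and reference sizes are hypothetical, for illustration."""
    size = 0
    for i in range(0, len(data), chunk):
        piece = data[i:i + chunk]
        key = hashlib.sha256(piece).digest()
        if key in store:
            size += ref            # short reference instead of the data
        else:
            store[key] = piece
            size += len(piece)     # first sighting goes over in full
    return size

shared = {}  # one store serves every session through the appliance
screen = b"shared application window contents " * 8
user_a = compressed_size(screen, shared)  # first user pays full cost
user_b = compressed_size(screen, shared)  # second user sends only refs
assert user_b < user_a
```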

Beyond applying SDR-M to thin-client traffic, Riverbed Steelhead appliances have a number of other features that can improve the performance of thin-client applications.  These include QoS enforcement capabilities that allow thin-client traffic to be prioritized over bandwidth-intensive applications such as CIFS and FTP.  Riverbed also offers an enhanced transport called MX-TCP, which addresses TCP slow-start and window-expansion issues in high-packet-loss environments, including WAN links experiencing network congestion.
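The general idea behind that kind of QoS enforcement is strict-priority scheduling; the sketch below illustrates the concept only (the class names are made up, not Steelhead configuration): latency-sensitive thin-client packets are always served before bulk traffic.

```python
from collections import deque

class PriorityScheduler:
    """Toy strict-priority scheduler: thin-client before bulk."""

    def __init__(self):
        self.queues = {"thin-client": deque(), "bulk": deque()}

    def enqueue(self, traffic_class: str, packet: str):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        # Serve the latency-sensitive class first; bulk only when idle.
        for cls in ("thin-client", "bulk"):
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("bulk", "CIFS-1")
sched.enqueue("thin-client", "ICA-1")
sched.enqueue("bulk", "FTP-1")
assert sched.dequeue() == "ICA-1"   # ICA jumps ahead of earlier bulk data
assert sched.dequeue() == "CIFS-1"
```

Real QoS schemes typically add bandwidth guarantees so bulk classes are not starved, but the ordering principle is the same.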

The following Riverbed white paper is available for those who are interested in further detail on Riverbed's capabilities to optimize thin-client traffic:

Download WhitePaper-Riverbed-optimizingthinclients_nc


2 Responses to “Riverbed and Thin-Client Traffic”

  1. Jeff said

I’m replicating SSH data over to our DR site with Double-Take. I have managed to establish optimization with the Riverbed appliances but reduction stays at 0%. Is there anything more I can do?

  2. Josh Tseng said

If you’re getting 0% data reduction, the most likely cause is that the data is encrypted. Since you described this as SSH data, I take it you’re sending the data over an encrypted SSH connection. Please reconfigure Double-Take to send the data in the clear, and then use IPsec or SSL encryption on the Steelhead.
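The effect described in this reply is easy to demonstrate: ciphertext is statistically indistinguishable from random bytes, so redundancy elimination and compression find nothing to work with. In this sketch, `os.urandom` stands in for SSH ciphertext and `zlib` stands in for a generic reduction stage:

```python
import os
import zlib

plaintext = b"database replication block " * 1000
ciphertext_like = os.urandom(len(plaintext))  # proxy for encrypted data

clear_ratio = len(zlib.compress(plaintext)) / len(plaintext)
cipher_ratio = len(zlib.compress(ciphertext_like)) / len(ciphertext_like)

assert clear_ratio < 0.05   # repetitive clear data shrinks dramatically
assert cipher_ratio > 0.95  # random-looking data barely shrinks at all
```

This is why deduplicating or compressing before encrypting (clear transfer plus IPsec/SSL on the appliance, as suggested above) restores the data reduction.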
