UDgateway Technology Highlights

 

TCP ACCELERATION

  • Transparent proxy: The TCP accelerator preserves TCP ports and IP addresses, so that inserting OneAccess WAN optimization remains compatible with existing firewall rules. Integration is therefore straightforward: the system can be deployed in either router mode or bridge mode.

  • Window scaling for high-speed TCP: TCP throughput is limited by the maximum TCP window size, which must be at least the bandwidth-delay product (maximum bandwidth x round-trip delay) to fill the link. OneAccess acceleration supports large TCP windows beyond the standard 64 kB limitation and adapts to round-trip delays of a few seconds.
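As a rough illustration, the window needed to keep a link busy is the bandwidth-delay product; the link speed and delay below are hypothetical examples, not product figures:

```python
def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Window (bytes) needed to keep the pipe full: BDP = bandwidth x RTT."""
    return bandwidth_bps / 8 * rtt_s

# A hypothetical 10 Mbit/s satellite link with a 600 ms round trip:
bdp = required_window_bytes(10e6, 0.6)
print(f"Required window: {bdp / 1024:.0f} kB")  # ~732 kB, far above the 64 kB default
```

This is why window scaling matters on long-delay links: without it, throughput is capped at 64 kB per round trip regardless of the available bandwidth.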

  • Enhanced slow-start: Starts sending TCP segments at a higher rate, so TCP reaches its optimal rate faster and transfer times are reduced.
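The effect can be sketched with a toy model of slow start, in which the congestion window doubles every round trip; the window sizes are hypothetical and the model ignores losses:

```python
def rtts_to_reach(target_segments: int, initial_window: int) -> int:
    """Round trips for slow start (window doubles each RTT) to reach the target."""
    window, rtts = initial_window, 0
    while window < target_segments:
        window *= 2
        rtts += 1
    return rtts

# Hypothetical target: 256 segments in flight to fill the link.
print(rtts_to_reach(256, initial_window=1))   # 8 RTTs with a 1-segment start
print(rtts_to_reach(256, initial_window=16))  # 4 RTTs with an enlarged initial window
```

On a 600 ms satellite round trip, saving four round trips shaves more than two seconds off every short transfer.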

  • Highly reactive congestion control: On Long Fat Networks (LFN) such as satellite links, standard TCP is slow to react to congestion events. A number of mechanisms are implemented to improve standard TCP congestion control:
    • Resilience to losses: Adaptation of the fast recovery, fast retransmit and congestion avoidance algorithms makes the stack robust to losses.
    • Adaptation to dynamic bandwidth.
    • Coupling with QoS: What may be interpreted as congestion is often just the enforcement of QoS policies. Communication between the TCP congestion control and the QoS layer enables the TCP throughput to adapt without the negative impact of a TCP loss.
    • The bottom line is reduced packet loss and improved TCP transfer rates.
  • Optimized return link: TCP acceleration solutions often focus on filling bandwidth in the download direction. However, in highly asymmetrical contexts, the limitation comes from the upstream capacity to acknowledge downstream data. The optimized return link reduces acknowledgements to what is actually needed by both stacks; full acknowledgements are regenerated outside the optimized link.
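A minimal sketch of the idea, assuming a simple queue of cumulative ACK numbers (the regeneration logic on the far side is omitted; the function name is illustrative):

```python
def reduce_acks(queued_acks: list[int]) -> list[int]:
    """Collapse a queue of cumulative ACKs to the single most recent one.

    TCP acknowledgements are cumulative, so on a constrained return link
    only the highest ACK number needs to cross the WAN; the accelerator
    on the far side regenerates per-segment ACKs toward the local stack.
    """
    return [max(queued_acks)] if queued_acks else []

print(reduce_acks([1000, 2000, 3000, 4000]))  # [4000] -- one ACK instead of four
```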

  • Selective Acknowledgement (SACK): SACK improves the efficiency of TCP retransmissions. It allows the receiver to partially acknowledge received data and to signal missing ranges, so that only the missing segments are retransmitted.
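The retransmission selection can be sketched as follows; the function name and the byte-range representation are illustrative, not the product's actual implementation:

```python
def segments_to_retransmit(snd_una, snd_nxt, sacked):
    """Return the byte ranges still missing, given SACKed blocks.

    snd_una..snd_nxt is the outstanding window; `sacked` is a list of
    (start, end) blocks the receiver reported via SACK options.
    Without SACK, everything from snd_una onward would be resent.
    """
    missing, cursor = [], snd_una
    for start, end in sorted(sacked):
        if start > cursor:
            missing.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < snd_nxt:
        missing.append((cursor, snd_nxt))
    return missing

# Receiver got bytes 1000-2000 and 3000-4000; 0-1000 and 2000-3000 were lost.
print(segments_to_retransmit(0, 4000, [(1000, 2000), (3000, 4000)]))
# -> [(0, 1000), (2000, 3000)] -- only the gaps are retransmitted
```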

APPLICATION ACCELERATION

  • CIFS: CIFS performs poorly on high-latency networks due to protocol chattiness. CIFS acceleration anticipates client requests, pre-fetching data and enabling faster browsing through network folders.

  • Transparent HTTP caching: HTTP requests and responses are intercepted and cached. If the same object is requested later, it is served from the local cache at LAN speed.
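A toy model of such a cache, ignoring HTTP cacheability headers and expiry for brevity (class and variable names are illustrative):

```python
class TransparentHttpCache:
    """Minimal sketch of a transparent HTTP object cache."""

    def __init__(self, fetch):
        self.fetch = fetch   # callable that retrieves an object over the WAN
        self.store = {}      # URL -> cached response body

    def get(self, url):
        if url in self.store:
            return self.store[url], "cache (LAN speed)"
        body = self.fetch(url)       # only cache misses cross the WAN
        self.store[url] = body
        return body, "origin (WAN)"

wan_fetches = []
cache = TransparentHttpCache(lambda url: wan_fetches.append(url) or f"<body of {url}>")
cache.get("http://example.com/logo.png")  # first request goes over the WAN
cache.get("http://example.com/logo.png")  # repeat is served locally
print(len(wan_fetches))                   # 1 -- the WAN saw only one fetch
```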

  • HTTP acceleration: HTTP traffic is further optimized: the HTML content is parsed and the URLs it references are pre-fetched.
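The pre-fetch step can be sketched with Python's standard HTML parser; the page content below is made up for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect URLs referenced by a page so they can be fetched ahead of time."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.urls.append(value)

page = '<html><img src="/logo.png"><a href="/news.html">News</a></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.urls)  # ['/logo.png', '/news.html'] -- candidates for pre-fetching
```

By the time the browser requests these objects, the accelerator can already hold them locally.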

  • Packet compression: This compression technique is packet-based and provides gains for all IP packets, including UDP traffic. It partially compensates for the extra overhead of VPN-encrypted traffic.
  • Stream compression: TCP streams are compressed with the deflate (zip, gzip) algorithm. Stream compression is efficient on protocol headers (such as HTTP) as well as on clear-text content such as HTML, XML or source code files. It provides an immediate bandwidth saving, which results in shorter transfer times and an improved user experience.
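For instance, with Python's zlib module (which implements deflate), repetitive clear-text content shrinks substantially; the sample HTML is made up:

```python
import zlib

# Clear-text content such as HTML compresses well with deflate.
html = b"<html><body>" + b"<p>Hello, WAN optimization!</p>" * 50 + b"</body></html>"
compressed = zlib.compress(html, level=6)  # deflate, as used by zip/gzip
ratio = len(compressed) / len(html)
print(f"{len(html)} -> {len(compressed)} bytes ({ratio:.0%} of original)")
assert zlib.decompress(compressed) == html  # lossless round trip
```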

DATA REDUNDANCY ELIMINATION (DRE)

The DRE engine stores all transmitted data at each end of the link, so that next time the same data or file is sent, only a reference will be transmitted over the WAN link. The data can be retrieved after several days or even weeks when the same information is transferred.

In order to perform the compression in real time, data matching is done at the byte level. This can provide very high compression ratios when redundant data is found in the file, even the first time a file is transmitted. Subsequently, when a file has been transmitted through the link and then modified and transmitted again, only a marker indicating the modifications will be sent, greatly reducing the amount of bandwidth consumed.
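A toy sketch of the dictionary mechanism, using fixed 64-byte chunks and SHA-256 digests for simplicity (a real engine matches at byte granularity; all names here are illustrative):

```python
import hashlib

class DreDictionary:
    """Toy data-redundancy-elimination store with fixed-size chunks."""

    CHUNK = 64

    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes, shared across all peers

    def encode(self, data: bytes):
        """Return (wire_items, bytes_sent): references for known chunks, literals otherwise."""
        wire, sent = [], 0
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            digest = hashlib.sha256(chunk).digest()
            if digest in self.chunks:
                wire.append(("ref", digest))
                sent += len(digest)   # only a 32-byte reference crosses the WAN
            else:
                self.chunks[digest] = chunk
                wire.append(("raw", chunk))
                sent += len(chunk)
        return wire, sent

dre = DreDictionary()
payload = b"A" * 6400           # a highly redundant flow
_, first = dre.encode(payload)  # even the first pass deduplicates repeated chunks
_, second = dre.encode(payload) # the repeat transfer sends references only
print(first, second)
```

Note that the first transfer already benefits from intra-file redundancy, matching the behavior described above.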

Such dictionary-based compression is optimized for the following characteristics:

  • WAN to LAN speed: Data reduction can reach 98% and more on highly redundant flows. WAN speed is no longer the bottleneck, and opening a file on a remote server is nearly as fast as if the server were on the LAN.

  • Efficient scalability in number of peers: A single, shared dictionary holds data for all peers. Should the same data be sent to or received from several peers, a single copy of the data chunks is stored. The technology has been designed for high peer scalability, so that performance does not degrade as the number of peers increases.

  • Fully transparent bridge for facilitated deployment: Unlike competing products, DRE control information is sent in-band. No extra TCP session to synchronize dictionary information is required. The product does not need an IP address and is as easy to deploy as an Ethernet bridge.

  • No idle traffic: The in-band DRE signaling generates no extra traffic when no user traffic flows through the OneAccess WAN optimizer. For pay-per-volume links, this characteristic is critical.

  • Content-aware: The DRE technology manages an extensible application-specific framework, which enables more efficient detection of redundant data patterns. Protocols such as HTTP benefit from this feature. Popular business applications such as email, file transfers, CIFS or SMB are proven to be efficiently compressed.