wiki:DataChannelOffload

Version 3 (modified by Antonio Quartulli, 2 years ago)

--

OpenVPN Data Channel Offload (aka OVPN-DCO)

Intro

The expression Data Channel Offload refers to any technique whose goal is to move the processing of data packets from the userspace program to a separate entity.

Given that OpenVPN spends a considerable amount of time passing data packets back and forth between kernel-land and user-land, where decryption and re-routing happen, it was decided to offload the data processing directly to the kernel. As a direct consequence, data packets are no longer required to leave kernelspace, thus boosting the performance of ongoing VPN connections.

Before DCO

The picture below depicts how data packets flow on a generic OpenVPN server:

{image here}

From the application, packets enter the kernel networking stack, but they are then immediately sent back to userspace so that the OpenVPN process can decide where they are headed and encrypt them accordingly. At this point packets are handed back to the kernel networking stack and finally sent out to the Internet.

There are two main disadvantages with this approach:

  1. data packets enter and leave the kernel twice;
  2. all clients' traffic is handled by the single-threaded OpenVPN process.

Both have long been known limiting factors for OpenVPN 2.x performance. In the past, running OpenVPN on a beefy CPU has often been the answer for pushing throughput as high as possible, but due to continuously increasing line rates this approach is simply no longer feasible.
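The per-packet round trip described above can be sketched in a few lines. The following is an illustrative simulation, not OpenVPN source code: the XOR step is a stand-in for the real cipher, and the crossing counter merely makes the two kernel/user boundary transitions per packet visible. It also shows how a single loop serializes all clients' traffic.

```python
# Illustrative simulation of the pre-DCO data path (NOT actual OpenVPN code).
# Each data packet is read from the tun device (kernel -> user), encrypted
# and routed in userspace, then written to a UDP socket (user -> kernel).

def handle_packet(plaintext: bytes, crossings: list) -> bytes:
    crossings.append("kernel->user")  # read() from the tun device
    # Stand-in for the real data-channel cipher (e.g. AES-GCM):
    ciphertext = bytes(b ^ 0x5A for b in plaintext)
    crossings.append("user->kernel")  # sendto() on the UDP socket
    return ciphertext

def run(packets):
    crossings = []
    # Single thread: every client's packets pass through this one loop.
    out = [handle_packet(p, crossings) for p in packets]
    return out, crossings

if __name__ == "__main__":
    out, crossings = run([b"pkt1", b"pkt2", b"pkt3"])
    # Two boundary crossings for every packet handled.
    print(len(crossings))  # 6
```

With DCO, the read/encrypt/send steps happen inside the kernel, so the two crossings per packet disappear from the data path.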

Introducing DCO
