Changes between Initial Version and Version 1 of DataChannelOffload/WhatChanges


Timestamp: 05/04/23 21:24:02
Author: Antonio Quartulli

= Data Channel Offload: what changes

== Before DCO
The picture below depicts how data packets flow on a generic OpenVPN server:

[[Image(no-dco.png)]]

From the application, packets enter the kernel networking stack through ''tun0'', but they are then immediately sent back to userspace, so that the OpenVPN process can decide where they are headed and encrypt them. At this point packets are sent back to the kernel networking stack and finally to the Internet.
On the way back the sequence of events is the same, but in the opposite direction.

There are two main disadvantages with this approach:
1. data packets enter and leave the kernel twice;
1. all client traffic is handled by the single-threaded OpenVPN process.

Both have long been known limiting factors for OpenVPN 2.x performance.
In the past, running OpenVPN on a beefy CPU was often the answer for pushing throughput as high as possible, but with continuously increasing line rates this approach is simply no longer feasible.

== Introducing DCO
To overcome the limitations described above, we have developed ''ovpn-dco'', a Linux kernel module designed to work back-to-back with the OpenVPN userspace software.
When a VPN connection is established, be it on a client, a server or a p2p instance, the userspace process first performs the usual handshake and then passes the data channel parameters to ovpn-dco through its ''Netlink'' interface, so that the module can take over from there. From this point on, data packets are handled entirely in kernelspace and are never sent up to the userspace process.
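As a rough illustration of the setup step (a minimal sketch; exact output and packaging vary by distribution and kernel version), the module can be loaded and checked like any other kernel module before OpenVPN makes use of it:

```shell
# Load the ovpn-dco module (requires root; on many distributions
# OpenVPN 2.6 triggers this automatically when DCO is usable).
modprobe ovpn-dco

# Confirm the module is present in the running kernel.
lsmod | grep ovpn

# Show module metadata (version, license, parameters).
modinfo ovpn-dco
```

If the module is missing, OpenVPN simply falls back to the classic userspace data path.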

The picture below helps to visualize the difference from the basic scenario shown above:

[[Image(dco.png)]]

Context switches are therefore reduced to a minimum and packet processing can take advantage of the kernel concurrency model.
The two main OpenVPN data-path functions, crypto and routing, are now implemented in the kernel using the APIs it provides.
As far as routing is concerned, the system routing table is used directly to determine whether packets have to be re-routed to another peer (i.e. client-to-client mode), without asking the userspace process at all.
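To make the routing point concrete (the subnet and interface name below are hypothetical placeholders, not values mandated by ovpn-dco), the routes the kernel consults are the ordinary ones visible with ''iproute2'':

```shell
# Hypothetical example: list the kernel routes attached to the VPN
# interface (here assumed to be tun0 serving 10.8.0.0/24).
ip route show dev tun0

# A packet from one client addressed to another (say 10.8.0.3) matches
# such a route and is forwarded in-kernel to the corresponding peer,
# without ever reaching the userspace OpenVPN process.
```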

OpenVPN in userspace remains in charge of the control channel, where all the complex and less throughput-critical operations take place. This is considered an advantage, as it allowed keeping the complexity of the ovpn-dco kernel module to a minimum and thus reducing the attack surface. This means that the TLS handshake, data channel key (re-)negotiations and parameter exchanges are still performed in userspace.

It should be noted that embedded devices, such as small routers, will probably benefit considerably from DCO.

The ovpn-dco source code for Linux is currently available at the following repository: https://github.com/OpenVPN/ovpn-dco

Please note that **OpenVPN 2.6 or greater** is required in order to use ovpn-dco.
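As a quick sanity check for this requirement (a sketch only; the exact format of the version banner varies by build), the installed version can be compared against 2.6 with standard tools:

```shell
# Extract the version number from the first line of `openvpn --version`
# (assumed to look like "OpenVPN 2.6.x ...").
ver="$(openvpn --version | head -n1 | awk '{print $2}')"

# sort -V -C succeeds only if the two lines are already in version
# order, i.e. if ver >= 2.6.
if printf '%s\n' 2.6 "$ver" | sort -V -C; then
    echo "OpenVPN $ver is new enough for ovpn-dco"
else
    echo "OpenVPN $ver is too old for ovpn-dco (need >= 2.6)"
fi
```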