Version 14 (modified by Samuli Seppänen, 11 years ago) (diff)



This page outlines the efforts taken to maintain OpenVPN's quality without excessive compromises on development speed.

The next major release (as of September 2011), OpenVPN 2.3, will contain significant low-level changes compared to OpenVPN 2.2.x and 2.1.x. Therefore a lot of work is needed on the QA front to avoid regressions. The plan is to set up a good QA environment for the 2.3a -> 2.3GA development phase. This environment will consist of several different parts:

Static testing

Peer review

Static testing usually refers to static code analysis, which is baked into our development process in the form of the mandatory ACK process every patch has to go through. The ACK process not only improves code quality; it also prevents highly specialized or rarely used features from polluting the codebase.

Automated static testing

OpenVPN's codebase is scanned with Coverity Scan, which detects many classes of potential security vulnerabilities.

Dynamic testing

Dedicated black-box tests

Dynamic black-box testing means exercising an application and verifying that it works as intended. In closed-source software development organized around a waterfall model, there are usually dedicated testers who run scripted or exploratory tests to verify that an application works as intended, typically just before launch. In a complex application such as OpenVPN, testing even a small fraction of the functionality this way would be impractical and very costly. Fortunately, in lean software development methodologies such as Scrum, and especially in community-driven open source development, extensive dedicated testing is in general a waste of time. It is replaced by

  • Constant quality assurance achieved with static whitebox technique (e.g. code reviews)
  • Testing in real environments (by users)

That said, a minimal amount of dedicated dynamic testing (a.k.a. smoke testing) goes into each release to catch the most obvious errors.

Performance tests

To guard against performance regressions, performance tests are in place. For details, see the performance testing wiki page.

Testing in real environments

In OpenVPN (and most other open source projects), the stability of stable releases is ensured by real-life testing by its users during all phases of software development, from development code in Git through to stable releases. There are at least two kinds of barriers to using pre-release code:

  • Psychological barriers
    • Risk avoidance
  • Technical barriers
    • Unfamiliarity with required tools (e.g. Git)
    • Difficulty of deployment, e.g. building software from sources (especially on Windows)

This means that the closer we get to a release, the more people we can expect to be testing the codebase. The figures below are not based on any real data and can only be considered rough estimates:


Use of snapshots helps overcome some of the technical barriers. The only way to overcome the psychological barriers is to speed up the release cycle. This gets new features into wide circulation faster, which in turn gets issues reported more quickly, and also gives more confidence in the integrity of stable releases. On the flip side, more bugs will probably end up in the initial versions of stable releases.

In practice, the goal is to make live testing by users as easy as possible by

  • Providing snapshot packages regularly for common *NIX platforms and Windows
  • Providing apt repositories for Debian 6 and Ubuntu 10.04+
  • Providing rpm repositories for latest Fedora releases
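For the apt case, enabling such a repository would amount to one sources.list entry plus a package install. The repository URL and suite name below are illustrative placeholders, not the project's actual addresses:

```
# /etc/apt/sources.list.d/openvpn-snapshots.list
# URL and suite are placeholders for illustration only.
deb http://repos.example.org/openvpn/debian squeeze main
```

After adding the entry (and the repository's signing key), `apt-get update && apt-get install openvpn` pulls in the snapshot build, which is the low-friction path the bullet points above aim for.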

Continuous integration

The project also has a Buildbot buildmaster, which drives several buildslaves; together these form a continuous integration environment for OpenVPN. Each buildslave runs a different operating system, and every commit to the OpenVPN Git repository triggers a build on each of them. After the build, each buildslave's newly compiled openvpn instance tries to connect to a test server using several different configurations. This ensures that

  • OpenVPN builds properly on a variety of platforms
  • Basic functionality is unaffected by commits
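One of the "different configurations" a buildslave exercises could look like the client config below. This is an illustrative sketch: the server host name and credential file names are placeholders, not the project's actual test infrastructure, though the directives themselves are standard OpenVPN options.

```
# Illustrative post-build connectivity test config (placeholders only).
client
dev tun
proto udp
remote t-client.example.net 1194
ca ca.crt
cert client.crt
key client.key
verb 3
```

Running the freshly built binary against several such configs (e.g. varying proto and dev) verifies that the commit did not break basic tunnel setup on that platform.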

Unit testing

At the moment, there is no coherent set of unit tests to catch regressions. One option would be to use CUnit or a similar unit testing framework to cover the most commonly used and/or critical code paths.