Opened 5 years ago

Closed 4 years ago

Last modified 4 years ago

#461 closed Bug / Defect (fixed)

Change default sndbuf and rcvbuf values

Reported by: ValdikSS Owned by: Gert Döring
Priority: major Milestone: release 2.3.9
Component: Networking Version: OpenVPN 2.3.4 (Community Ed)
Severity: Not set (select this one, unless you're an OpenVPN developer) Keywords:
Cc: JJK, plaisthos

Description

The default value of 64K for sndbuf and rcvbuf can act as a speed limiter, for example over wifi. With the default values I get 25 Mbit/s download and 30 Mbit/s upload over wifi, but with 512K values I get 80/80 Mbit/s (the maximum for my internet connection).

I suggest removing the default values and using the OS defaults for rcvbuf and sndbuf, or increasing the defaults to 256K, for example.

Attachments (1)

remove_default_rcvbuf_sndbuf_values.patch (421 bytes) - added by ValdikSS 5 years ago.


Change History (28)

comment:1 Changed 5 years ago by ValdikSS

The default value, as of Linux kernel 3.16, is 212992 bytes.
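
The kernel defaults can be checked directly. A quick sketch, assuming Linux (the exact value varies by kernel and distro configuration):

```python
# Read Linux's default socket buffer sizes from /proc.
# 212992 is a common value on kernels from the 3.x series onward.
def read_sysctl(name: str) -> int:
    with open("/proc/sys/net/core/" + name) as f:
        return int(f.read().strip())

print(read_sysctl("rmem_default"))  # e.g. 212992
print(read_sysctl("wmem_default"))  # e.g. 212992
```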

comment:2 Changed 5 years ago by Gert Döring

Owner: set to Gert Döring
Status: new → accepted

Good argument. I'll look into it. (Don't we have an option to control that anyway?)

comment:3 in reply to:  2 Changed 5 years ago by ValdikSS

Replying to cron2:

Good argument. I'll look into it. (Don't we have an option to control that anyway?)

I don't think so. From what I read of the source, OpenVPN always sets SO_RCVBUF and SO_SNDBUF on non-Windows OSes (65536 by default). I can't see any reason to forcibly set custom buffer values, especially ones as low as 65536 bytes. Most networking software uses the OS defaults and doesn't set the buffers at all.
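
For illustration, the behaviour described here can be reproduced from userspace. A minimal Python sketch (not OpenVPN's actual C code, which does the equivalent via setsockopt(2)):

```python
import socket

# Create a UDP socket and request a 64 KiB receive buffer, mirroring the
# fixed default that pre-2.3.9 OpenVPN applied on non-Windows platforms.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)

# Reading the value back: Linux typically reports double the requested
# size (it accounts for kernel bookkeeping overhead); other OSes may
# return the value as-is or clamp it to a system maximum.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```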

comment:4 Changed 5 years ago by Gert Döring

I suspect that when this code was written, 64k was considered "lots!" and the default buffers were smaller (I can remember writing code of my own, years ago, where I couldn't actually go higher than 64k because that was the system-enforced maximum...).

Pushing the queue too high isn't without risks, though: it can increase latency if you can't actually send out the data because the link is full. That, on the other hand, is usually impossible to detect with modern network technologies, where everything is "ethernet" with hidden underlying limitations (like a 2 Mbit link presented on a 10 Mbit ethernet port). I'll discuss this and we'll either go for an option or drop it...

comment:5 in reply to:  4 Changed 5 years ago by ValdikSS

Replying to cron2:

Thanks for clarifying this. It would be nice to add some information about buffers to the wiki or somewhere else.

comment:6 Changed 5 years ago by ValdikSS

Some tests over direct gigabit connection with:

cipher none
auth none
tun-mtu 9000
txqueuelen 1000
mssfix 0
% iperf -l 1M -w 2M -c 192.168.90.129
------------------------------------------------------------
Client connecting to 192.168.90.129, TCP port 5001
TCP window size: 4.00 MByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[  3] local 192.168.90.131 port 37963 connected with 192.168.90.129 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   862 MBytes   722 Mbits/sec

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.90.129 port 5001 connected with 192.168.90.131 port 37963
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   862 MBytes   720 Mbits/sec

And with added:

sndbuf 524288
rcvbuf 524288
% iperf -l 1M -w 2M -c 192.168.90.129
------------------------------------------------------------
Client connecting to 192.168.90.129, TCP port 5001
TCP window size: 4.00 MByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[  3] local 192.168.90.131 port 37993 connected with 192.168.90.129 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.02 GBytes   871 Mbits/sec

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.90.129 port 5001 connected with 192.168.90.131 port 37993
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  1.02 GBytes   868 Mbits/sec

comment:7 Changed 5 years ago by Gert Döring

Ah, so we do have options for that :-) - you were quicker than I was in checking the man page, and found sndbuf/rcvbuf. So we "just" need to write understandable text for the man page to explain the options better.

comment:8 in reply to:  7 Changed 5 years ago by ValdikSS

Replying to cron2:

Not only document it: we should either use the system settings by _not_ setting SO_RCVBUF and SO_SNDBUF, or increase OpenVPN's default (64K) to something more suitable for the internet (256K).

comment:9 Changed 5 years ago by Samuli Seppänen

According to James we agreed in Munich to use the operating system's default values. Is anybody willing to provide a patch?

Changed 5 years ago by ValdikSS

comment:10 Changed 5 years ago by ValdikSS

I hope I understood the code correctly and that we can simply remove the code targeted by the patch above.
The patch is untested but should work; I will test it as soon as possible.

Try running the OpenVPN client with "verb 4" in the config. It should print something like:

Socket Buffers: R=[212992->212992] S=[212992->212992]

The values before and after the arrow should match.

By the way, it seems that there are no default buffer values on Windows. Why?

Maybe it's better to increase the txqueue size, too? Linux uses 1000 for almost all interfaces by default, and strongSwan (an IPsec daemon) uses 500. OpenVPN uses 100.
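
The current queue length of any interface is visible in sysfs on Linux. A small sketch (interface names and defaults vary by system; `lo` is used here only because it always exists):

```python
# Read an interface's transmit queue length on Linux.
# OpenVPN's tun interfaces show up as tun0, tun1, ... while running.
def tx_queue_len(iface: str) -> int:
    with open("/sys/class/net/" + iface + "/tx_queue_len") as f:
        return int(f.read().strip())

print(tx_queue_len("lo"))  # e.g. 1000 on recent kernels
```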

comment:11 Changed 5 years ago by ValdikSS

By the way, it seems that there are no default buffer values on Windows. Why?

2004.07.27 -- Version 2.0-beta8
* On Windows, don't setsockopt SO_SNDBUF or SO_RCVBUF by
  default on TCP/UDP socket in light of reports that this
  action can have undesirable global side effects on the
  MTU settings of other adapters.  These parameters can
  still be set, but you need to explicitly specify
  --sndbuf and/or --rcvbuf.

comment:12 Changed 5 years ago by ValdikSS

Tested and it works.

comment:13 Changed 5 years ago by ValdikSS

The problem is a bit wider than I thought.
On Linux, TCP autotuning kicks in when you don't set buffer sizes, so for TCP connections it is clearly more effective not to set SO_RCVBUF and SO_SNDBUF at all. For UDP there is no such algorithm: the buffer sizes come from net.core.rmem_default and net.core.wmem_default, which may not be high enough, especially on high-latency links.
I would still like to remove the default values and document the changed behavior in the manual.
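
The difference is visible in the kernel sysctls. A sketch, assuming Linux: net.ipv4.tcp_rmem is a min/default/max triple that autotuning moves within, while net.core.rmem_default is the single fixed value a UDP socket gets when nothing is set explicitly:

```python
# Compare TCP's autotuning bounds with the fixed default used for UDP.
def sysctl_ints(path: str) -> list:
    with open(path) as f:
        return [int(x) for x in f.read().split()]

tcp_rmem = sysctl_ints("/proc/sys/net/ipv4/tcp_rmem")        # [min, default, max]
udp_default = sysctl_ints("/proc/sys/net/core/rmem_default")[0]

print("TCP autotune range:", tcp_rmem)
print("UDP fixed default :", udp_default)
```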

comment:15 Changed 4 years ago by Gert Döring

Milestone: release 2.3.7

comment:16 Changed 4 years ago by Gert Döring

Cc: JJK added

We want this in 2.3.7.

Janjust, can you share the results of your speed tests done in Munich, with --sndbuf/rcvbuf variations?

comment:17 Changed 4 years ago by JJK

I've rerun my speed tests on two different sets of hardware. Oddly enough, you can no longer specify --cipher none - it fails on an assertion:

Assertion failed at crypto_openssl.c:523

with the default tun-mtu setting of 1500 and with sndbuf + rcvbuf set to 262144, I get the following values on an "empty" gigabit link:

  • blowfish : up/down 205 Mbps
  • aes-256 : up/down 435/464 Mbps

with "auth none" the values jump dramatically for aes-256:

  • aes-256 : up/down 556/819 Mbps

increasing sndbuf/rcvbuf any further decreased performance slightly.

comment:18 in reply to:  17 Changed 4 years ago by Steffan Karger

Replying to janjust:

Oddly enough, you can no longer specify --cipher none - it fails on an assertion:

Assertion failed at crypto_openssl.c:523

What version was that? This should have been fixed in git, see ticket #473. Fix will be included in 2.3.7.

comment:19 Changed 4 years ago by JJK

"stock" 2.3.6 - I almost never run git-based versions of OpenVPN.

comment:20 Changed 4 years ago by Gert Döring

Milestone: release 2.3.7 → release 2.3.8

bouncing to 2.3.8 - not forgotten, but we want to get 2.3.7 out and this needs more time

comment:21 Changed 4 years ago by Gert Döring

Milestone: release 2.3.8 → release 2.3.9

sorry

comment:22 Changed 4 years ago by ValdikSS

Just wanted to show how important buffer sizes are for TCP.

TCP, latency 50ms,
cipher none
auth none
mssfix 0
tun-mtu 6000
txqueuelen 1000

Default buffers:

valdikss@valaptop ~ % iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  3] local 192.168.210.2 port 59308 connected with 192.168.210.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.2 sec  12.2 MBytes  10.1 Mbits/sec
valdikss@valaptop ~ % iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  3] local 192.168.210.2 port 59317 connected with 192.168.210.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  12.1 MBytes  10.1 Mbits/sec

with:

sndbuf 0
rcvbuf 0

on both sides

valdikss@valaptop ~ % iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  3] local 192.168.210.2 port 59367 connected with 192.168.210.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   145 MBytes   122 Mbits/sec
valdikss@valaptop ~ % iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  3] local 192.168.210.2 port 59372 connected with 192.168.210.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   282 MBytes   235 Mbits/sec
valdikss@valaptop ~ % iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  3] local 192.168.210.2 port 59377 connected with 192.168.210.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   283 MBytes   237 Mbits/sec

comment:23 Changed 4 years ago by ValdikSS

…and stumbled upon another strange thing: you can't set buffer sizes greater than 999999 (which is not really that high)

https://github.com/OpenVPN/openvpn/blob/859f6aaac6ef35c54306b6f10d2ec902dd41c89b/src/openvpn/socket.h#L47
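
The effect of the cap is a simple clamp. An illustrative sketch (the constant's value matches the socket.h line linked above; the clamp logic here is a simplification, not the exact OpenVPN source):

```python
# Buffer size requests above the compile-time cap are silently limited.
SOCKET_SND_RCV_BUF_MAX = 1000000  # from src/openvpn/socket.h at that commit

def clamp_buf_size(requested):
    return min(requested, SOCKET_SND_RCV_BUF_MAX)

print(clamp_buf_size(16 * 1024 * 1024))  # a 16 MB request → 1000000
print(clamp_buf_size(4096))              # small requests pass through → 4096
```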

comment:24 Changed 4 years ago by Gert Döring

Cc: plaisthos added

This ticket is really silly - open for 12 months for a two-line change... (which Arne just sent to the list).

ValdikSS: if you change that line to bump the maximum 10x higher and recompile, will it actually make a difference? The current value of 1 Mbyte is quite a bit of data (and if you listen to the bufferbloat people, too much buffer is as harmful as too little).

comment:25 in reply to:  24 Changed 4 years ago by ValdikSS

Replying to cron2:

ValdikSS: if you change that line to bump the maximum 10x higher and recompile, will it actually make a difference? The current value of 1 Mbyte is quite a bit of data (and if you listen to the bufferbloat people, too much buffer is as harmful as too little).

Yes, it will. 1 Mbyte is not that much: over TCP on a link with 50 ms latency it allows you to transfer only about 170 Mbit/s, which is really not that high. A typical buffer size for gigabit links is 16 MB or more. Buffer size doesn't mean much for UDP.

I understand that OpenVPN is rarely used on such fast links, but still.

comment:26 Changed 4 years ago by plaisthos

@ValdikSS: Buffer size depends on the bandwidth-delay product, not on bandwidth alone.

A 16 MB buffer size is what you need to reach gigabit at 120 ms delay, while with 1 ms delay a buffer size of 128k is enough to reach gigabit.
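
These figures can be checked with the bandwidth-delay product; a quick sketch:

```python
# Bandwidth-delay product: the amount of data "in flight" needed to keep
# a link fully utilized at a given round-trip time.
def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> float:
    return bandwidth_bits_per_s * rtt_s / 8

# Gigabit at 120 ms RTT needs roughly a 15 MB buffer (close to the 16 MB above).
print(round(bdp_bytes(1e9, 0.120) / 1e6, 1))  # → 15.0 (MB)

# Gigabit at 1 ms RTT needs only ~125 KB (~128k as stated above).
print(round(bdp_bytes(1e9, 0.001) / 1e3, 1))  # → 125.0 (KB)
```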

Also, testing TCP over TCP is always asking for trouble. And in the example you provide, iperf only uses a 256k buffer size.

The SOCKET_SND_RCV_BUF_MAX limit is stupid and really should go away. Most OSes also have a sysctl or similar that defines the maximum buffer size a program can set.

comment:27 Changed 4 years ago by Gert Döring

Resolution: fixed
Status: accepted → closed

commit f0b64e5dc00f35e3b0fe8c53a316ee74c9cbf15f (master)
commit c72dbb8b470ab7b25fc74e41aed4212db48a9d2f (release/2.3)

Author: Arne Schwabe
Date: Thu Oct 15 16:38:38 2015 +0200

Do not set the buffer size by default but rely on the operation system default.

pushed to master, will be part of 2.3.9

thanks

Last edited 4 years ago by Gert Döring (previous) (diff)