Opened 9 years ago

Last modified 4 years ago

#640 reopened Bug / Defect

Windows 7 wrong snd/rcv buffer size

Reported by: pplars Owned by:
Priority: minor Milestone: release 2.3.14
Component: Generic / unclassified Version: OpenVPN 2.3.8 (Community Ed)
Severity: Not set (select this one, unless you're an OpenVPN developer) Keywords:
Cc: stipa

Description

Although the docs and a first look at the source indicate that the default buffer size should be 64k,
it's 8k in most cases, and sometimes 4k on Windows 7 (Socket Buffers: R=[8192->8192] S=[8192->8192]).
This has a serious impact on speed for servers that are far away.

I work for a VPN provider; this happens on all my machines and, judging from logs that customers send, on all customer machines. It has been happening for at least the last year. I fixed this long ago by pushing sndbuf and rcvbuf 131072 to the clients.
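For reference, the workaround described above uses OpenVPN's real sndbuf/rcvbuf directives pushed from the server config (a sketch; the file name and exact placement depend on your setup):

```
# server.conf: push larger socket buffers to all connecting clients
push "sndbuf 131072"
push "rcvbuf 131072"
```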

I report this as a bug now because I was sending other bug reports and remembered that I wanted to report this issue some time ago :)

Regards
Lars

Change History (22)

comment:1 Changed 9 years ago by ValdikSS

Actually, that's what Windows uses by default, I suppose. OpenVPN doesn't explicitly set rcvbuf and sndbuf on Windows by default, only on other OSes (up to 2.3.8; 2.3.9 doesn't set them by default at all). Do you have any tests which show the impact on speed?

comment:2 Changed 9 years ago by ValdikSS

Also see bug #461

comment:3 Changed 9 years ago by Steffan Karger

Resolution: duplicate
Status: new → closed

This is a duplicate of #461. A fix has been included in the master and release/2.3 branches, and will be part of the soon to be released OpenVPN 2.3.9.

comment:4 Changed 9 years ago by ValdikSS

syzzer, that's not actually a duplicate because OpenVPN never set buffer sizes on Windows.

comment:5 Changed 9 years ago by Steffan Karger

Resolution: duplicate
Status: closed → reopened

Hm, agreed. Not really a duplicate. Though I have my doubts about adding a 'better' default for Windows, since we did that on other platforms and that caused #461. I'll reopen for now and let the networking people decide.

comment:6 Changed 9 years ago by ValdikSS

I'll make tests on this today to understand if it's really an issue. I have lots of high latency servers.

comment:7 Changed 9 years ago by pplars

This is a problem for some users.
I only noticed this issue because we had customers report unexplainable speed limitations.
We are located in Germany, so many people have 50 Mbit/s or faster connections, and
they were limited to below 10 Mbit/s because of that 8k buffer size.

Regards
Lars

comment:8 Changed 9 years ago by Gert Döring

please provide numbers - same client machine, default value of sndbuf/rcvbuf, iperf result, different values, iperf result, etc. - please also keep latency in mind; too much buffering can lead to unacceptable latency when a connection is busy.

measuring network performance is a highly tricky matter, and users will always complain that "things are slow!"...

Since we've been through this in #461, I'm very much inclined to leave it at "we have the option to set the buffer size, but we shouldn't try to outsmart the OS, as it will backfire" (as it did on Linux). But I can be convinced by numbers.

comment:9 Changed 9 years ago by ValdikSS

My ISP links are all high loaded right now, will check in the morning.

comment:10 Changed 9 years ago by ValdikSS

Can confirm this problem. OpenVPN doesn't set buffer sizes, it only reads them, but it really behaves like the buffer sizes are set to 8192, which is very limiting for TCP. Not sure right now why it is set to 8192 by default and why other applications don't suffer from low speeds; probably OpenVPN creates the socket in a non-standard way. Should investigate further.
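To see what defaults the OS actually hands out, you can read the buffer sizes back the same way OpenVPN does (via getsockopt). A minimal Python sketch for checking your platform's defaults (OpenVPN itself does this in C, this script is just for comparison):

```python
import socket

# Create a UDP socket and read back the OS-default buffer sizes --
# these are the numbers OpenVPN logs as "Socket Buffers: R=[...] S=[...]".
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"R=[{rcvbuf}] S=[{sndbuf}]")
s.close()
```

On the affected Windows 7 machines this reports 8192 per the logs above; Linux defaults are much larger.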

comment:11 Changed 9 years ago by ValdikSS

Windows XP: without sndbuf and rcvbuf, over UDP, VPN download speed is 2-3 Mbit/s, upload 24 Mbit/s. With sndbuf 999999 rcvbuf 999999: download 16 Mbit/s, upload 18 Mbit/s.

Windows XP: without sndbuf and rcvbuf, over TCP, VPN download speed is 15 Mbit/s, upload 1.5-3 Mbit/s. With sndbuf 999999 rcvbuf 999999: download 16 Mbit/s, upload 14 Mbit/s.

So on Windows XP you get low download speed with UDP and low upload speed with TCP when buffer sizes are not explicitly set.

The same configuration on Linux transfers 40/40 with both protocols, with and without buffer sizes set. I also get 40/40 outside the VPN on XP.

comment:12 Changed 9 years ago by pplars

I did some tests some time ago before I started to push 128K buffer size to clients.
64k is OK for users that are < 50 ms from the server, but 128k is better for servers far away.

Windows 8 and 10 use 64k as the default; Windows 7 uses 8k. Maybe OpenVPN should set 64k on XP and 7?

Also, shouldn't the bandwidth math be pretty simple? No matter how fast your internet connection is, you can only send $buffersize every $rtt-to-server seconds.
For example, 50 ms RTT to the server with an 8k buffer -> 8 kbyte / 0.05 seconds = 160 kbyte/s max.
The same with a 64k buffer gives 1280 kbyte/s max.
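The back-of-the-envelope calculation above can be sketched in a few lines (hypothetical helper name; it is just the $buffersize / $rtt formula from this comment, which models one buffer's worth of data in flight per round trip):

```python
def max_throughput_kbytes_per_s(bufsize_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling when only one buffer's worth of data can be
    in flight per round trip: bufsize / rtt."""
    return bufsize_bytes / 1024 / rtt_s

# 50 ms RTT, 8k buffer  -> 160 KB/s ceiling
print(max_throughput_kbytes_per_s(8 * 1024, 0.05))    # 160.0
# 50 ms RTT, 64k buffer -> 1280 KB/s ceiling
print(max_throughput_kbytes_per_s(64 * 1024, 0.05))   # 1280.0
```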

comment:13 in reply to:  12 Changed 9 years ago by Gert Döring

Hi,

Replying to pplars:

Also, shouldn't the bandwidth math be pretty simple? No matter how fast your internet connection is, you can only send $buffersize every $rtt-to-server seconds.
For example, 50 ms RTT to the server with an 8k buffer -> 8 kbyte / 0.05 seconds = 160 kbyte/s max.
The same with a 64k buffer gives 1280 kbyte/s max.

This is true for TCP connections and for the TCP window - ACKs need to come in before you can send more data.

For a UDP stream, unless you exceed output interface speed, you basically can send whatever you want, but not more than one bufsize per write()/sendmsg() call. Since nothing is waiting for ACKs and nothing is buffering, RTT does not enter the picture.

The results valdikss is observing actually seem to back that up - send speed for UDP VPNs does not really depend on buffer size (interestingly enough, the speed goes down when the buffer goes up). Download speeds on XP over UDP are affected, though - which hints at "incoming traffic overruns the socket buffer, so we see packet loss and retransmits at the higher-level protocol inside the VPN [or slowdown by the sender, noticing the loss]" - and that improves with larger receive buffers.

comment:14 Changed 9 years ago by pplars

Yep.

But anyway, it should be pretty clear that 8k buffers are too small for today's internet.

comment:15 Changed 9 years ago by ValdikSS

First, socket buffer sizes are not the same as TCP Window buffers

I'm looking into this, and I don't think SO_RCVBUF and the TCP Window are necessarily the same thing.

If you look at http://msdn.microsoft.com/en-us/magazine/cc302334.aspx, section "The Windows NT and Windows 2000 Sockets Architecture", you see that the Windows kernel socket driver Afd.sys sits on top of the transport protocols. It has its own socket SND/RCV buffers, which are what you set with the socket options SO_SNDBUF and SO_RCVBUF or via Afd registry keys. The TCP transport protocol then has its own TCP window buffer, which is the one everyone is familiar with; it is set either in the registry Tcpip parameters or determined automatically, taking into account (among other things) SO_RCVBUF - which I think is where the confusion comes from. http://msdn.microsoft.com/en-us/library/ms819736.aspx

So I believe data is read from the transport layer into the afd.sys socket buffer SO_RCVBUF as needed where it waits to be read out by the application. You would want the SO_RCVBUF to be at least as large as the data you hope to read at once.

However I don't know how the SO_RCVBUF and TCP Window will interplay. Will TCP wait to ACK data until it is read into SO_RCVBUF? That is unclear to me.

http://stackoverflow.com/questions/2655057/how-can-the-so-rcvbuf-be-smaller-than-the-tcp-receive-window-windows-xp

Second: if I use literally any software which doesn't explicitly set buffer sizes on any Windows OS (XP, 7, 8.1, 10) and uses TCP, it utilizes much more of my bandwidth than OpenVPN. After some searching, it seems that there's a problem with non-blocking sockets on Windows XP: https://support.microsoft.com/en-us/kb/823764
OpenVPN uses non-blocking sockets. There could be other problems with non-blocking sockets on newer Windows versions too.

Also,

Applications that perform one blocking or non-blocking send request at a time typically rely on internal send buffering by Winsock to achieve decent throughput. The send buffer limit for a given connection is controlled by the SO_SNDBUF socket option. For the blocking and non-blocking send method, the send buffer limit determines how much data is kept outstanding in TCP. If the ISB value for the connection is larger than the send buffer limit, then the throughput achieved on the connection will not be optimal. In order to achieve better throughput, the applications can set the send buffer limit based on the result of the ISB query as ISB change notifications occur on the connection.

and

Dynamic send buffering for TCP was added on Windows 7 and Windows Server 2008 R2. By default, dynamic send buffering for TCP is enabled unless an application sets the SO_SNDBUF socket option on the stream socket.

Third, there is something wrong with either OpenVPN or the TAP adapter driver. I will install fresh Windows versions tomorrow and test again. My last test ended with only 100/110 Mbit/s with cipher none and auth none on a 1 Gbit/s link directly connected to another PC, and 130/150 Mbit/s with cipher bf-cbc and auth sha1 (!!!), regardless of buffer sizes. I re-tested everything 3 times. Linux gets me 250/250 with bf-cbc and full link speed with none/none.

comment:16 Changed 9 years ago by ValdikSS

Tests with remote server with ping of 22ms.

Windows 7 (UDP)

sndbuf and rcvbuf not set:
4 Mbit/s download / 40 Mbit/s upload

sndbuf and rcvbuf set to 786432 (131072*6):
39.7 Mbit/s download / 40 Mbit/s upload

Windows 7 (TCP)

sndbuf and rcvbuf not set:
40 Mbit/s download / 3.18 Mbit/s upload

sndbuf and rcvbuf not set, tcp-nodelay set:
42.2 Mbit/s download / 3.3 Mbit/s upload

sndbuf and rcvbuf set to 786432 (131072*6):
39.3 Mbit/s download / 45.7 Mbit/s upload

sndbuf and rcvbuf set to 786432 (131072*6), tcp-nodelay set:
35.7 Mbit/s download / 43.5 Mbit/s upload

Windows 8.1 (UDP)

sndbuf and rcvbuf not set:
34.4 Mbit/s download / 48.9 Mbit/s upload

sndbuf and rcvbuf set to 786432 (131072*6):
34.7 Mbit/s download / 48.3 Mbit/s upload

Windows 8.1 (TCP)

sndbuf and rcvbuf not set:
34 Mbit/s download / 45.4 Mbit/s upload

sndbuf and rcvbuf not set, tcp-nodelay set:
31.9 Mbit/s download / 45.4 Mbit/s upload

sndbuf and rcvbuf set to 786432 (131072*6):
34.9 Mbit/s download / 44.4 Mbit/s upload

sndbuf and rcvbuf set to 786432 (131072*6), tcp-nodelay set:
33 Mbit/s download / 48.3 Mbit/s upload

Windows 7's TCP congestion control is rather bad. It drops the window size quickly and raises it slowly, which is why the average speed is not the highest even though the peaks are the same as on 8.1 or 10.

comment:17 Changed 9 years ago by Gert Döring

Cc: stipa added
Milestone: release 2.3.10

Thanks for the impressive tests. So, we can learn that on Win8 and up (and Linux) we do not need to do anything about sndbuf/rcvbuf (31.9 vs. 34.9 sounds like test noise to me), but the recommendation for XP, Vista and Win7 should be "use big buffers".

Given that Lev is working on "which windows version is this?" right now, we might want to introduce setting the defaults (again) #ifdef WIN32 and if (windows <= Win7)...

comment:19 Changed 8 years ago by Samuli Seppänen

Milestone: release 2.3.10 → release 2.3.12

comment:20 Changed 8 years ago by Gert Döring

Milestone: release 2.3.12 → release 2.3.14

So, we know what to do, just nobody has done it yet...

comment:21 Changed 4 years ago by Gert Döring

Do we still care? XP is long end of life, Win7 is also end of life, and with wintun and the more optimized code in 2.4.9 + 2.5.0, we're much faster now anyway...

If yes, for which versions do we want to do something?

Or just improving documentation?

comment:22 Changed 4 years ago by Samuli Seppänen

Seems like "improve documentation" to me. Windows 7-based operating systems no longer have any mainstream support. Even Windows Server 2012r2 ran out on October 9, 2018.

Note: See TracTickets for help on using tickets.