I have two machines with a two-way latency of under 5 ms and about 700 Mbps of bandwidth (up & down).
I am trying to simulate a WAN connection between them with 60 Mbps bandwidth (up & down) and a two-way latency of 100 ms. However, when I set the latency to 50 ms in the connection emulator, the bandwidth also drops to 10 Mbps. The same behavior is observed even if I set the bandwidth to unlimited.
Can you provide some guidance for simulating a high-latency, high-bandwidth environment?
The maximum throughput of a single TCP connection is bounded by the window size divided by the round-trip time. The standard window size on a Windows machine is 2^16 bytes, so with a 50 ms RTT this formula gives 2^16 * 8 / 0.05 = 10,485,760 bits per second. That is exactly what you are observing: with the standard window size and 50 ms latency the maximum throughput is about 10 Mbps. The article How to calculate TCP throughput for long distance links provides more details on the matter.
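Plugging the numbers into that formula (a quick sketch; the helper names are just for illustration, and the second function is the same relation rearranged to find the window needed for a target rate):

```python
# Single-connection TCP throughput is bounded by the window size and
# the round-trip time (RTT):
#
#     throughput (bits/s) = window_bytes * 8 / rtt_seconds

def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-connection TCP throughput."""
    return window_bytes * 8 / rtt_seconds

def required_window_bytes(target_bps: float, rtt_seconds: float) -> float:
    """Window size needed to sustain target_bps over the given RTT."""
    return target_bps * rtt_seconds / 8

# Default (unscaled) Windows window of 2**16 bytes at 50 ms RTT:
print(max_tcp_throughput_bps(2 ** 16, 0.050))  # ~10.5 million bits/s, i.e. ~10 Mbps

# Window needed to sustain the desired 60 Mbps at 100 ms RTT:
print(required_window_bytes(60e6, 0.100))      # ~750,000 bytes
```

Note that the window required for 60 Mbps at 100 ms RTT (about 750 KB) is more than ten times the unscaled 64 KB default, so window scaling must be working end to end for the simulated link to be saturated.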
To achieve higher speeds, you would need to use UDP or adjust the TCP window size, depending on what devices you are using for the test. While recent versions of Windows support so-called TCP window auto-tuning, it may not work if either side or an intermediate router does not support it. This article from Microsoft explains more: TCP Receive Window Auto-Tuning Level feature in Windows.
You may need to employ a protocol analyser to see what window scaling factor actually ends up being negotiated.
Windows 10 is supposed to auto-adjust the TCP window size through the TCP window scaling option. In fact, there doesn't seem to be a way to manually set the TCP window size in Windows 10. Bottom line: the TCP window size should not be an issue for Windows devices, especially on a simple network.
So it is unclear why, when I apply 100 ms of latency in both directions, the throughput drops from 100 MB/s (i.e., 1 Gbps) to around 7 MB/s when copying a big file between two machines on the same network.