I'm a beginner in network programming. I read some resources I could find on the internet, where I came across TCP window scaling. As I understand it, the scaling factor is negotiated when the connection is first established, in the SYN packet. So does this mean that TCP window scaling cannot be set by the code we write for socket programming? Is it the operating system that does this? Say, in a Windows environment, how does this happen, and is there a way for us to manually/dynamically change it?
Window scaling is enabled automatically if you set a socket receive buffer size of more than 64k, via setsockopt().
As the window scaling negotiation happens during the connection handshake, you have to do that before connecting the socket. In the case of sockets accepted by a server via a listening socket, this is obviously impossible, so you have to do the apparently odd operation of setting the socket receive buffer size on the listening socket instead, from where it is inherited by all sockets accepted from it.
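For illustration, here is a minimal sketch (Winsock, error handling omitted, the port number is purely illustrative) of enlarging the receive buffer on the listening socket before listen(), so that every connection accepted from it can offer the window scale option during its handshake:

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET lst = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    /* Anything larger than 64k lets the stack negotiate window scaling
       on connections accepted from this socket. */
    int rcvbuf = 256 * 1024;
    setsockopt(lst, SOL_SOCKET, SO_RCVBUF,
               (const char *)&rcvbuf, sizeof(rcvbuf));

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);      /* illustrative port */

    bind(lst, (struct sockaddr *)&addr, sizeof(addr));
    listen(lst, SOMAXCONN);

    /* Sockets returned by accept() inherit the buffer size set above. */
    SOCKET conn = accept(lst, NULL, NULL);
    (void)conn;
    return 0;
}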
No, I believe this can only be set at a global level. There is a registry setting for this under the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters key.
It's called GlobalMaxTcpWindowSize. See here: http://technet.microsoft.com/en-us/library/cc957546.aspx
If what you actually mean is to change the size of the socket receive and transmit buffers then these can be changed using Winsock. See SO_RCVBUF and SO_SNDBUF.
The window size of TCP packets is managed by the operating system, so you cannot change it dynamically. For a static way to change the window size for the whole system, see Nick's answer.
There is just one very hard way: with WinPcap you can write out every TCP packet yourself. But that is a real pain.
Related
I have a network application reading from two sockets from Port A and Port B. The sender of data to Port A is very quick (flooding data), while the one on Port B is very slow.
If the application is very slow in consuming the data, a 'TCP Zero Window' will show up and whoever sends the data to Port A will be blocked.
Do you know if a 'TCP Zero Window' is something that affects ALL remaining ports and ALL remaining sockets open at that very moment?
Do you know if also the sender of data to Port B might be blocked as well when the TCP buffer is filled?
I am using C/C++ in Linux.
TCP flow control is applied on a per-connection basis. The sliding window size on port A has no effect on port B's window size at all.
When the window size reaches zero the sender uses a periodic timer to keep probing the window size to check when your end is ready again. Allowing the window size to hit zero is bad for throughput but I'm sure you're aware of this already.
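As an illustration, a minimal sketch for Linux (fd_a and fd_b stand for your two already-connected sockets and are just assumed names): polling both descriptors and reading whichever is ready means a zero window on one connection never prevents you from draining the other.

#include <poll.h>
#include <unistd.h>

void drain_both(int fd_a, int fd_b)
{
    char buf[4096];
    struct pollfd fds[2] = {
        { .fd = fd_a, .events = POLLIN },
        { .fd = fd_b, .events = POLLIN },
    };

    for (;;) {
        if (poll(fds, 2, -1) < 0)
            break;                          /* interrupted or error */
        for (int i = 0; i < 2; i++) {
            if (fds[i].revents & POLLIN) {
                ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                if (n <= 0)
                    return;                 /* peer closed or error */
                /* consume the n bytes here; the faster this happens,
                   the less often that connection hits a zero window */
            }
        }
    }
}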
I have a C++ program that uses Boost ASIO to communicate with a network device over a TCP socket. The program is working fine on Linux, but with Windows 7 I'm finding that the communication is not working very well. After some experimentation, I found that there's a 0.5-second delay between command and response when communicating with the device using the ASIO example telnet program, even though the response shows up in Wireshark much more quickly.
I gather that the problem is that the network device is not setting the PSH flag after it completes a chunk of data. See: http://smallvoid.com/article/winnt-tcp-push-flag.html.
I need to somehow set up my app so that it receives data from the TCP socket regardless of whether a packet has arrived with the PSH bit set. I know this must be possible because PuTTY can communicate with my device normally. I'd rather not use a registry key to get the effect, because I want to change the behavior only for this one socket, not the entire system.
What do I need to do to get Windows to ignore the PSH flag for this connection?
You could try specifying the MSG_PUSH_IMMEDIATE flag on the receiving side (https://msdn.microsoft.com/en-us/library/windows/desktop/ms741688(v=vs.85).aspx).
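A minimal sketch of what that might look like with a plain Winsock receive (the flag is documented for Windows 8.1 / Server 2012 R2 and later, so it may not be available on Windows 7, and with Boost ASIO you would have to drop down to the native socket handle; s is assumed to be the connected TCP socket):

#include <winsock2.h>

int recv_without_psh(SOCKET s, char *buf, int len)
{
    WSABUF wsaBuf;
    wsaBuf.len = len;
    wsaBuf.buf = buf;

    DWORD received = 0;
    DWORD flags = MSG_PUSH_IMMEDIATE;   /* complete even if no PSH has arrived */

    if (WSARecv(s, &wsaBuf, 1, &received, &flags, NULL, NULL) != 0)
        return -1;                      /* check WSAGetLastError() */
    return (int)received;               /* bytes now available in buf */
}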
I have a server application written in C++. When a client connects, the server creates a new thread for that client. In that thread there is a BLOCKING read from a socket. Because a client may accidentally disconnect and leave behind a thread still hanging on the read call, there is a thread that checks whether the sockets are still alive by sending "heartbeat messages". The message consists of 1 character and is "ignored" by the client (it is not processed like the other messages). The write looks like this:
write(fd, ";", 1);
It works fine, but is it really necessary to send a random character through the socket? I tried to send an empty message ("" with length 0), but it didn't work. Is there a better way to do this socket checking?
Edit:
I'm using BSD sockets (TCP).
I'm assuming that when you say "socket", you mean a TCP network socket.
If that's true, then the TCP protocol gives you a keepalive option that you would need to ask the OS to use.
I think this StackOverflow answer gets at what you would need to do, assuming a BSDish socket library.
In my experience, using heartbeat messages on TCP (and checking for responses, e.g. NOP/NOP-ACK) is the easiest way to get reliable and timely indication of connectivity at the application layer. The network layer can do some interesting things but getting notification in your application can be tricky.
If you can switch to UDP, you'll have more control and flexibility at the application layer, and probably reduced traffic overall since you can customize the communications, but you'll need to handle reliability, packet ordering, etc. yourself.
You can enable KEEPALIVE on the connection. You may find this link useful: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
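A minimal sketch of what that looks like on Linux (the timing values are arbitrary, and TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific option names):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void enable_keepalive(int fd)
{
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    int idle  = 10;  /* seconds of idle time before the first probe */
    int intvl = 5;   /* seconds between probes */
    int cnt   = 3;   /* unanswered probes before the connection is dropped */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
}

With this in place, the blocking read in the per-client thread should eventually fail once the probes go unanswered, so no application-level heartbeat character is needed.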
Creating a thread for each incoming request is fine if this is only a toy. Most of the time I use poll, i.e. non-blocking I/O, for better performance.
I'm having a problem with one of my C++ applications on Windows 2008x64 (same app runs just fine on Windows 2003x64).
After a crash, or even sometimes after a regular shutdown/restart cycle, it has a problem using the socket on port 82 that it needs in order to receive commands.
Looking at netstat I see the socket is still in listening state more than 10 minutes after the application stopped (the process is definitely not running anymore).
TCP 0.0.0.0:82 LISTENING
I tried setting the socket option to REUSEADDR but as far as I know that only affects re-connecting to a port that's in TIME_WAIT state. Either way this change didn't seem to make any difference.
int doReuse = 1;
setsockopt(listenFd, SOL_SOCKET, SO_REUSEADDR,
(const char *)&doReuse, sizeof(doReuse));
Any ideas what I can do to solve or at least avoid this problem?
EDIT:
I did netstat -an but this is all I am getting:
TCP 0.0.0.0:82 0.0.0.0:0 LISTENING
For netstat -anb I get:
TCP 0.0.0.0:82 0.0.0.0:0 LISTENING
[System]
I'm aware of shutting down gracefully, but even if the app crashes for some reason I still need to be able to restart it. The application in question uses an in-house library that internally uses Windows Sockets API.
EDIT:
Apparently there is no solution for this problem, so for development I will go with a proxy / tool to work around it. Thanks for all the suggestions, much appreciated.
If this is only hurting you at debug time, use tcpview from the sysinternals folks to force the socket closed. I am assuming it works on your platform, but I am not sure.
If you're doing blocking operations on any sockets, do not use an indefinite timeout. In my experience this can cause weird behavior on a multiprocessor machine. I'm not sure which Windows server OS it was, but it was one or two versions before Server 2003.
Instead of an indefinite timeout, use a 30 to 60 second timeout and then just repeat the wait. This goes for overlapped IO and IOCompletion ports as well, if you're using them.
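A minimal sketch of that pattern with select() (sock is assumed to be the socket being waited on; on Windows the first parameter to select() is ignored):

#include <winsock2.h>

void wait_readable(SOCKET sock)
{
    for (;;) {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(sock, &readSet);

        struct timeval tv;
        tv.tv_sec  = 30;      /* bounded 30-second wait instead of forever */
        tv.tv_usec = 0;

        int n = select(0, &readSet, NULL, NULL, &tv);
        if (n != 0)
            break;            /* data ready (n > 0) or error (n < 0) */
        /* n == 0: timed out; check shutdown flags, then wait again */
    }
}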
If this is an app you're shipping for others to use, good luck. Windows can be a pure bastard when using sockets...
"I tried setting the socket option to REUSEADDR but as far as I know that only affects re-connecting to a port that's in TIME_WAIT state."
That's not quite correct. It will let you re-use a port in TIME_WAIT state for any purpose, i.e. listen or connect. But I agree it won't help with this. I'm surprised by the comment about the OS taking 10 minutes to detect the crashed listener. It should clean up all resources as soon as the process ends, other than ports in the TIME_WAIT state.
The first thing to check is that it really is your application listening on that port. Use:
netstat -anb
to figure out which process is listening on that port.
The second thing to check is that you are closing the socket gracefully when your application shuts down. If you're using a high-level socket API that shouldn't be too much of an issue (you are using a socket API, right?).
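For what it's worth, a minimal sketch of an explicit teardown (the names are illustrative): shut down the send side of any connected socket so the peer sees an orderly close, then close it and the listening socket rather than relying on process exit to do it.

#include <winsock2.h>

void shutdown_server(SOCKET clientFd, SOCKET listenFd)
{
    shutdown(clientFd, SD_SEND);   /* send a FIN; no more outgoing data */
    closesocket(clientFd);

    closesocket(listenFd);         /* release the listening port */
    WSACleanup();
}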
Finally, how is your application structured? Is it threaded? Does it launch other processes? How do you know that your application is really shut down?
Run
netstat -ano
This will give you the PID of the process that has the port open. Check that process from the task manager. Make sure "List processes from all users" is checked.
http://hea-www.harvard.edu/~fine/Tech/addrinuse.html is a great resource for "Bind: Address Already in Use" errors.
Some extracts:
TIME_WAIT is the state that typically ties up the port for several minutes after the process has completed. The length of the associated timeout varies on different operating systems, and may be dynamic on some operating systems, however typical values are in the range of one to four minutes.
Strategies for Avoidance
SO_REUSEADDR
This is both the simplest and the most effective option for reducing the "address already in use" error (see the sketch after these extracts).
Client Closes First
TIME_WAIT can be avoided if the remote end initiates the closure. So the server can avoid problems by letting the client close first.
Reduce Timeout
If (for whatever reason) neither of these options works for you, it may also be possible to shorten the timeout associated with TIME_WAIT.
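As a small sketch of the SO_REUSEADDR strategy mentioned above (listenFd and addr are just assumed names): the important detail is that the option must be set before bind(), or it has no effect on the "address already in use" error.

int one = 1;
setsockopt(listenFd, SOL_SOCKET, SO_REUSEADDR,
           (const char *)&one, sizeof(one));
bind(listenFd, (struct sockaddr *)&addr, sizeof(addr));   /* bind only after the option is set */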
After seeing https://superuser.com/a/453827/56937 I discovered that there was a WerFault process that was suspended.
It must have inherited the sockets from the non-existent process because killing it freed up my listening ports.
How to increase the TCP receive window for a specific socket?
I know how to do so for all sockets by setting the registry key TcpWindowSize, but how do I do that for a specific one?
According to MSFT's documents, the way is:
"Calling the Windows Sockets function setsockopt, which sets the receive window on a per-socket basis."
But the setsockopt documentation says this about SO_RCVBUF:
"Specifies the total per-socket buffer space reserved for receives. This is unrelated to SO_MAX_MSG_SIZE and does not necessarily correspond to the size of the TCP receive window."
So is it possible? How?
Thanks.
SO_MAX_MSG_SIZE is for UDP. Here's what MSDN says:
SO_MAX_MSG_SIZE - Returns the maximum outbound message size for message-oriented sockets supported by the protocol. Has no meaning for stream-oriented sockets.
It's also not settable.
For TCP just use SO_(SND|RCV)BUF.
I am fairly sure that SO_RCVBUF is what you want. The first link says that SO_RCVBUF has the highest priority for determining the TCP window size, over and above anything set on the system. The way I read it, the second quote is only saying that the SO_RCVBUF size does not have to match the system receive window size; in other words, it can be a different size that you set.
You need to be careful tuning this and testing the results. Windows Vista and above have a smart adaptive window size auto tuning feature which specifically tunes the window size to work well both on LANs and long fat networks such as 3G and high loss networks. Setting the window size yourself will override this so that windows can no longer tune the window size automatically. This may damage your performance should you ever need to run over a particularly high latency network such as a cellular network.
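If you do decide to set it per socket, a minimal sketch (Winsock; the 1 MB figure is arbitrary) is to set SO_RCVBUF before connect() and read it back to see what the stack actually granted, bearing in mind the auto-tuning caveat above:

#include <winsock2.h>

SOCKET make_socket_with_big_window(void)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    int want = 1024 * 1024;               /* request 1 MB of receive buffer */
    setsockopt(s, SOL_SOCKET, SO_RCVBUF,
               (const char *)&want, sizeof(want));

    int got = 0;
    int len = sizeof(got);
    getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&got, &len);
    /* 'got' is the size the socket will actually use; connect() comes next */

    return s;
}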