I am using Windows sockets for my application (winsock2.h). Since a blocking socket doesn't let me control the connection timeout, I am using a non-blocking one. Right after the send command I am using the shutdown command to flush (I have to). My timeout is 50ms, and what I want to know is: if the data to be sent is very big, is there a risk of sending only a portion of the data, or nothing at all? Thanks in advance...
hSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
u_long iMode = 1;
ioctlsocket(hSocket, FIONBIO, &iMode);                  // put the socket into non-blocking mode
connect(hSocket, (sockaddr*)(&sockAddr), sockAddrSize);
send(hSocket, sendbuf, sendlen, 0);
shutdown(hSocket, SD_BOTH);                             // intended as a "flush"
Sleep(50);                                              // intended as the 50ms timeout
closesocket(hSocket);
Non-blocking TCP socket and flushing right after send?
There is no such thing as flushing a TCP socket.
Since a blocking socket doesn't let me control the connection timeout
False. You can use select() on a blocking socket.
I am using a non-blocking one.
Non sequitur.
Right after the send command I am using the shutdown command to flush (I have to).
You don't have to, and shutdown() doesn't flush anything.
My timeout is 50ms
Why? The time to send data depends on the size of the data. Obviously. It does not make any sense whatsoever to use a fixed timeout for a send.
and what I want to know is: if the data to be sent is very big, is there a risk of sending only a portion of the data, or nothing at all?
In blocking mode, all the data you provided to send() will be sent if possible. In non-blocking mode, the amount of data represented by the return value of send() will be sent, if possible. In either case the connection will be reset if the send fails. Whatever timeout mechanism you superimpose can't possibly change any of that: specifically, closing the socket asynchronously after a timeout will only cause the close to be appended to the data being sent. It will not cause the send to be aborted.
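For contrast, here is a minimal sketch of how a non-blocking send() is normally driven to completion: check the return value, advance by the number of bytes actually accepted, and wait for writability with select() rather than sleeping (identifiers borrowed from the question; error handling abbreviated):

int sent = 0;
while (sent < sendlen) {
    int n = send(hSocket, sendbuf + sent, sendlen - sent, 0);
    if (n != SOCKET_ERROR) {
        sent += n;                               // partial sends are normal
    } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(hSocket, &wfds);
        select(0, NULL, &wfds, NULL, NULL);      // wait until the socket is writable
    } else {
        break;                                   // a real error: inspect WSAGetLastError()
    }
}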
Your code wouldn't pass any code review known to man. There is zero error checking; the sleep is completely pointless; and shutdown before close is redundant. If the sleep is intended to implement a timeout, it doesn't.
I want to be sending data as fast as possible.
You can't. TCP implements flow control. There is exactly nothing you can do about that. You are rate-limited by the receiver.
Also, the two possible cases are: the server waits too long to accept the connection
There is no such case. The client can complete a connection before the server ever calls accept(). If you're trying to implement a connect timeout shorter than the default of about a minute, use select().
or receive.
Nothing you can do about that: see above.
So both connecting and writing should be done within 50ms at most, since timing is very important in my situation.
See above. It doesn't make sense to implement a fixed timeout for operations that take variable time. And 50ms is far too short for a connect timeout. If that's a real issue you should keep the connection open so that the connect delay only happens once: in fact you should keep TCP connections open as long as possible anyway.
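If a connect timeout really is needed, here is a hedged sketch of the select() approach mentioned above (Winsock, identifiers borrowed from the question; a failed non-blocking connect is reported in the except set; error handling abbreviated):

u_long on = 1;
ioctlsocket(hSocket, FIONBIO, &on);
connect(hSocket, (sockaddr*)(&sockAddr), sockAddrSize);   // fails with WSAEWOULDBLOCK: in progress

fd_set wfds, efds;
struct timeval tv = { 5, 0 };                             // e.g. 5 seconds, not 50ms
FD_ZERO(&wfds); FD_SET(hSocket, &wfds);
FD_ZERO(&efds); FD_SET(hSocket, &efds);
if (select(0, NULL, &wfds, &efds, &tv) <= 0 || FD_ISSET(hSocket, &efds)) {
    closesocket(hSocket);                                 // timed out or failed to connect
}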
I have to flush both write and read streams
You can't. There is no operation in TCP that will flush either a read stream or a write stream.
because the server keeps sending me unnecessarily big data and I have a limited internet connection.
Another non sequitur. If the server sends you data, you have to read it, otherwise you will stall the server, and that doesn't have anything to do with flushing your own write stream.
Actually I don't even want a single byte from the server
Bad luck. You have to read it. [If you were on BSD Unix you could shutdown the socket for input, which would cause data from the server to be thrown away, but that doesn't work on Windows: it causes the server to get a connection reset.]
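If you really must discard it, the usual pattern is just to read into a scratch buffer and throw the bytes away; a sketch, assuming a blocking socket (on a non-blocking one you would do the same inside your select()/recv() loop):

char scratch[4096];
int n;
while ((n = recv(hSocket, scratch, sizeof scratch, 0)) > 0)
    ;   // discard; n == 0 means the peer closed, n < 0 is an error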
Thanks to EJP and Martin, I have now created a second thread to check for the timeout. Also, in the code I posted in my question, I added a "counter = 0;" line after the "send" line and removed the shutdown. It works just as I wanted now: it never waits more than 50ms :) Really big thanks!
// Shared with the main thread, which resets it to 0 after every send().
volatile long counter = 0;

unsigned __stdcall SecondThreadFunc( void* pArguments )
{
    while (1)
    {
        counter++;
        if (counter > 49)           // ~50ms with no progress from the main thread
        {
            closesocket(hSocket);   // abort the connection
            counter = 0;
            printf("\rtimeout");
        }
        Sleep(1);
    }
    return 0;
}
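For completeness, the main-thread side of this watchdog scheme (the "counter = 0;" mentioned above) would look roughly like this; note that counter and hSocket are shared between threads without synchronization, so this is a sketch of the idea rather than production code:

send(hSocket, sendbuf, sendlen, 0);
counter = 0;   // tell SecondThreadFunc that the socket made progress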
Related
I am making a program which sends UDP packets to a server at a fixed interval, something like this:
while (!stop) {
Sleep(fixedInterval);
send(sock, pkt, payloadSize, flags);
}
However, the periodicity cannot be guaranteed because send is a blocking call (e.g., when fixedInterval is 20ms and a call to send takes more than 20ms). Do you know how I can turn the send into a non-blocking operation?
You need to use a non-blocking socket. The send/receive functions are the same functions for blocking or non-blocking operations, but you must set the socket itself to non-blocking.
u_long mode = 1; // 1 to enable non-blocking socket
ioctlsocket(sock, FIONBIO, &mode);
Also, be aware that working with non-blocking sockets is quite different. You'll need to make sure you handle WSAEWOULDBLOCK errors as success! :)
So, using non-blocking sockets may help, but still will not guarantee an exact period. You would be better to drive this from a timer, rather than this simple loop, so that any latency from calling send, even in non-blocking mode, will not affect the timing.
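One hedged way to get timer-driven sends on Windows is a waitable timer, which fires every fixedInterval milliseconds regardless of how long send() itself takes (a sketch; error handling omitted):

HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
LARGE_INTEGER due;
due.QuadPart = -10000LL * fixedInterval;          // first expiry, relative, in 100ns units
SetWaitableTimer(timer, &due, fixedInterval,      // then fire every fixedInterval ms
                 NULL, NULL, FALSE);
while (!stop) {
    WaitForSingleObject(timer, INFINITE);         // wakes on every tick
    send(sock, pkt, payloadSize, flags);          // socket set non-blocking as above
}
CloseHandle(timer);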
The ioctlsocket API can do it. You can use it as below. But why don't you use the Winsock I/O models?
u_long ul = 1;
ioctlsocket(hsock, FIONBIO, &ul);
My memory is fuzzy here since it's probably been 15 years since I've used UDP non-blocking.
However, there are some things of which you should be aware.
Send only smallish packets if you're going over a public network. The path MTU can trip you up if either the client or the server is not written to take care of incomplete packets.
Make sure you check that you have sent the number of bytes you think you have to send. It can get weird when you're expecting to see 300 bytes sent and the receiving end only gets 248. Both client side and server side have to be aware of this issue.
See here for some good advice from the Linux folks.
See here for the Unix Socket FAQ for UDP
This is a good, general network programming FAQ and example page.
How about measuring the time send() takes and then sleeping only for the time remaining up to the 20ms?
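As a sketch of that suggestion (Windows, using GetTickCount64(); precision is limited by the scheduler's timer resolution):

while (!stop) {
    ULONGLONG start = GetTickCount64();
    send(sock, pkt, payloadSize, flags);
    ULONGLONG elapsed = GetTickCount64() - start;
    if (elapsed < fixedInterval)
        Sleep((DWORD)(fixedInterval - elapsed));  // sleep only the remainder of the period
}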
Suppose I have a server application - the connection is over TCP, using UNIX sockets.
The connection is asynchronous - in other words, clients' and servers' sockets are non-blocking.
Suppose the following situation: in some conditions, the server may decide to send some data to a connected client and immediately close the connection: using shutdown with SHUT_RDWR.
So, my question is: is it guaranteed that when the client calls recv, it will receive the data sent by the server?
Or, to receive the data, must recv be called before the server's shutdown? If so, what should I do (or, more precisely, how should I do it) to make sure the data is received by the client?
You can control this behavior with "setsockopt(SO_LINGER)":
man setsockopt
SO_LINGER
    Waits to complete the close function if data is present. When this
    option is enabled and there is unsent data present when the close
    function is called, the calling application is blocked during the
    close function until the data is transmitted or the connection has
    timed out. If this option is disabled, the close function returns
    without blocking the caller. This option has meaning only for
    stream sockets.
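A minimal sketch of enabling it (POSIX; sock is the connected socket, and the 10-second timeout is an arbitrary example value):

#include <sys/socket.h>

struct linger lg;
lg.l_onoff  = 1;    /* enable lingering on close */
lg.l_linger = 10;   /* block close() for up to 10 seconds */
setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);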
See also:
man read
Beej's Guide to Network Programming
There's no guarantee you will receive any data, let alone this data, but the data pending when the socket is closed is subject to the same guarantees as all the other data: if it arrives it will arrive in order and undamaged and subject to TCP's best efforts.
NB 'Asynchronous' and 'non-blocking' are two different things, not two terms for the same thing.
Once you have successfully written the data to the socket, it is in the kernel's buffer, where it will stay until it has been sent and acknowledged. Shutdown doesn't cause the buffered data to get lost. Closing the socket doesn't cause the buffered data to get lost. Not even the death of the sending process would cause the buffered data to get lost.
You can observe the size of the buffer with netstat. The SendQ column is how much data the kernel still wants to transmit.
After the client has acknowledged everything, the port disappears from the server. This may happen before the client has read the data, in which case it will be in RecvQ on the client. Basically you have nothing to worry about. After a successful write to a TCP socket, every component is trying as hard as it can to make sure that your data gets to the destination unharmed regardless of what happens to the sending socket and/or process.
Well, maybe one thing to worry about: If the client tries to send anything after the server has done its shutdown, it could get a SIGPIPE and die before it has read all the available data from the socket.
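If that is a concern, the client can ignore the signal so the write fails with EPIPE instead of killing the process (a sketch; on Linux, passing MSG_NOSIGNAL to send() achieves the same thing per call):

#include <signal.h>

signal(SIGPIPE, SIG_IGN);   /* write()/send() to a closed peer now returns -1 with errno == EPIPE */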
I am having issues reading data from a socket. Supposedly, there is a server socket waiting for clients to connect. When I write a client to connect() to the server socket/port, it appears that I am connected. But when I try to read() the data that the server is supposedly writing to the socket, the read() call hangs until the server app is stopped.
Why would a read() call ever hang if the socket is connected? I believe I am never really connected to the socket/port, but I can't prove it, because the connect() call did not return an error. The read() call is not returning an error either; it is just never returning at all.
read() blocks until it receives some data (or an error).
As John & Whirl mentioned, the problem is almost certainly that the server hasn't sent any data for your read() call to return. Another easy thing to overlook when you're starting out with network programming is that the data transferred in a server's write() call is not always symmetrical with a client's read() call. Where the server may write("hello world"), your read() could easily return "hello world", "hello wo", "hel", or even just "h".
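If the protocol needs fixed-size messages, the standard remedy is a small loop that keeps reading until all the bytes have arrived; a sketch, assuming POSIX read():

#include <unistd.h>

/* Read exactly len bytes, tolerating short reads. */
ssize_t read_fully(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            return n;            /* 0: peer closed; -1: error */
        got += (size_t)n;
    }
    return (ssize_t)got;
}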
Unless you explicitly changed your reader's socket to non-blocking mode, a call to read will do exactly what you say until there is data available: It will block forever until some data is actually read.
You can also use netstat (I use it with -f inet) to figure out connections that have been made and see the status of your socket connection.
Your server is probably not writing data to the socket, so your reader just blocks waiting for data to appear on the socket.
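If blocking forever is unacceptable, here is a hedged sketch of waiting with select() before calling read(), so the client can give up after a timeout:

#include <sys/select.h>
#include <unistd.h>

fd_set rfds;
struct timeval tv = { 5, 0 };                       /* wait up to 5 seconds */
FD_ZERO(&rfds);
FD_SET(sock, &rfds);
if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0) {
    char buf[4096];
    ssize_t n = read(sock, buf, sizeof buf);        /* n == 0 means the server closed */
} else {
    /* timed out (0) or error (-1): the server never sent anything */
}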
Well... I use a typical epoll + multithread model to handle massive numbers of sockets; that is, I have a thread called epollWorkThread that uses epoll_wait to handle socket I/O. When there is an EPOLLIN event, recv() does the work, and I use non-blocking mode to allow an immediate return. recv() is indeed in a while(true) loop.
Everything is fine initially (maybe a couple of hours, maybe minutes, or if I'm lucky, days) and I can receive the information. But some time later, recv() insists on returning -1 with errno = 107 (ENOTCONN). The other peer of the transport is written in AS3, which makes sure the socket is connected. So I'm confused by the recv() behaviour. Thank you in advance; any comment is appreciated!
Errno 107 means that the socket is NOT connected (any more).
There are several reasons why this could happen. Assuming you're right and both sides of the connection claim that the socket is still open, an intermediate router/switch may have dropped the connection due to a timeout. The safest way to prevent this is to periodically send a 'health' or 'keep-alive' message, so that the intermediate router/switch continues to treat the connection as live.
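A minimal sketch of the socket-level variant (SO_KEEPALIVE; probe timing is an OS default unless tuned further, e.g. with TCP_KEEPIDLE on Linux), though an application-level heartbeat message gives you more control:

#include <sys/socket.h>

int on = 1;
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);   /* probe idle connections */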