I have client code which gets a response from the server using UDP and recvfrom(). This is working fine when the server is ON, but once I stop the server my client program is hanging; I suspect recvfrom() is waiting for the response from the server.
If the server and client are both installed on the same system, then I get an error from recvfrom() when the server is OFF; but when the server and client are on different systems, the client hangs at recvfrom() because there is no response from the server since it's OFF.
Can someone give me an idea of how to deal with this situation? Maybe a timer signal interrupt could solve the issue. Can anyone throw some light on this?
I am using Visual Studio 2005.
Your call is blocking because there is no data for this socket right now. When the server was on, it sent data fast enough that the recvfrom() call got it and returned quickly. When the server is off, nobody is sending data and recvfrom() waits forever. It does not matter whether the server is on or off; recvfrom() is doing the same thing in both cases, you just don't notice the delay in the first case. (The error you saw with both programs on one machine most likely comes from an ICMP "port unreachable" reply, which the local stack reports on a later recvfrom(); across a network that reply often never arrives, so the call simply blocks.)
You need to use non-blocking sockets. In non-blocking mode, recvfrom() will return an error when there is no data instead of waiting. You can then use the select() call to sleep until either a timeout expires or data arrives.
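A minimal sketch of the select()-with-timeout pattern, assuming sock is an already-created UDP socket (the name recvfrom_timeout and the timeout handling are illustrative, not from the original post). The same pattern works with Winsock in Visual Studio 2005, where select() comes from winsock2.h and ignores its first argument:

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstddef>

    // Wait up to timeout_sec seconds for a datagram, then read it.
    // Returns -2 on timeout, -1 on error, otherwise the byte count.
    ssize_t recvfrom_timeout(int sock, char *buf, size_t len,
                             sockaddr *from, socklen_t *fromlen,
                             int timeout_sec)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);

        timeval tv;
        tv.tv_sec  = timeout_sec;   // e.g. 5 seconds
        tv.tv_usec = 0;

        int ready = select(sock + 1, &readfds, nullptr, nullptr, &tv);
        if (ready == 0) return -2;  // timed out: server likely down
        if (ready < 0)  return -1;  // select() failed, check errno

        // Data is already waiting, so this call will not block.
        return recvfrom(sock, buf, len, 0, from, fromlen);
    }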
I use epoll on a Linux server for multiplexing. I noticed that a request to close the connection shows up in two ways:
the EPOLLHUP event fires
recv returns 0
It's not entirely clear to me what the client should do so that we get EPOLLHUP on the server, and likewise what the client should do so that recv returns 0 on the server.
I just need to close the connection in the right way, but I don't know how.
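For what it's worth, a minimal sketch of a clean close with plain POSIX calls (the function name is illustrative): closing the socket, or calling shutdown(fd, SHUT_WR), sends a FIN, and that FIN is what makes recv() return 0 on the server. EPOLLHUP is generally reported only once both directions are down (for example after a reset), while a peer's FIN alone shows up as the socket becoming readable with recv() returning 0 (or as EPOLLRDHUP if you register for it):

    #include <sys/socket.h>
    #include <unistd.h>

    // Close the client's connected TCP socket cleanly.
    void close_cleanly(int fd)
    {
        // Stop sending: this queues a FIN, after which the server's
        // recv() returns 0 once it has drained any earlier data.
        shutdown(fd, SHUT_WR);

        // Optionally drain whatever the server still sends, until it
        // closes its side too and recv() returns 0 here as well.
        char buf[256];
        while (recv(fd, buf, sizeof buf, 0) > 0) {
            // discard
        }

        close(fd);  // release the descriptor
    }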
I was wondering if there is any way to notify a server when a client-side application is closed. Normally, if I Ctrl+C my client-side terminal, an EOF signal is sent to the server side. The server-side async_read function has a handler that takes a boost::system::error_code ec argument. The handler is called when the server side receives the EOF, which I can happily process and tell the server to start listening again.
However, if I try to cleanly close my client application using socket.shutdown() and socket.close(), nothing happens and the server-side socket remains open.
I was wondering, is there a way to somehow send an error signal to the server-side socket so I could then process it using the error code?
The approaches described in the comments cover 99% of cases. They don't work when the client machine was turned off (not gracefully) or when there are network problems.
To get reliable notification of a disconnected client you need to implement a "ping" feature: send ping packets regularly and check that you receive pong packets back.
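A minimal sketch of such a heartbeat, assuming fd is a connected non-blocking TCP socket and the peer answers every "PING" with a "PONG" (the message names, the 15-second limit, and calling this every few seconds are all illustrative; real code would also need proper message framing on TCP):

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstring>
    #include <ctime>

    // Returns false once the peer should be considered dead.
    // last_pong must be initialized to time(nullptr) by the caller.
    bool check_peer(int fd, time_t &last_pong)
    {
        // A failed send() already signals a broken connection.
        // (MSG_NOSIGNAL is Linux-specific; it suppresses SIGPIPE.)
        if (send(fd, "PING", 4, MSG_NOSIGNAL) == -1)
            return false;

        // Poll for a pong without blocking.
        char buf[4];
        ssize_t n = recv(fd, buf, sizeof buf, MSG_DONTWAIT);
        if (n == 4 && std::memcmp(buf, "PONG", 4) == 0)
            last_pong = std::time(nullptr);

        // No pong for 15 seconds: treat the client as disconnected.
        return std::time(nullptr) - last_pong < 15;
    }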
I have a question regarding non-blocking sockets in TCP connections.
I have implemented two C++ classes, one for the TCP server and one for the client. The server has two socket file descriptors, one for the server itself and one for the client. The client has one socket file descriptor.
My server runs asynchronously and my client runs at a fixed rate. Therefore I would like to have a non-blocking socket for sending data from the client to the server, so that the client can send data at a fixed rate without stalling and the server asynchronously reads all data that has been buffered in the meantime.
So my question is: Does it make a difference, if I set the client socket to non-blocking in the client or the server class? (using fcntl(this->newsockfd_, F_SETFL, fcntl(this->newsockfd_, F_GETFL, 0) | O_NONBLOCK), where this->newsockfd_ is the client's socket file descriptor in both classes)
I tried this in my program and it seemed like setting the client socket to non-blocking in the client class didn't do the trick, but setting it in the server class did. However, I don't understand why this should make a difference.
If your socket is set to non-blocking mode, you will get just that: it will never block. But that does not mean your API calls will succeed.
There are buffers being used behind the scenes, and if they are full (the situation in which a blocking socket would block), you will get the return code EWOULDBLOCK, which means your send has failed. You then basically have to wait for the buffers to empty and try again.
Your idea of sending at a fixed rate regardless of the rate at which the server receives is impossible. You cannot have a client sending at a fixed rate. The whole idea of TCP is that there is constant negotiation between client and server, and the speed depends heavily on network conditions: congestion and the like.
Moving to non-blocking sockets creates some problems of its own. You have to detect that a send failed, check when the socket becomes writable again, store the bytes that you tried to send, and reattempt the send as soon as the socket becomes writable, as in the sketch below.
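A minimal sketch of that detect-store-retry loop on a non-blocking socket (the name send_all is illustrative). It uses select() with no timeout to sleep until the socket is writable, which is the simplest form of "wait for the buffers to empty and try again":

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstddef>
    #include <cerrno>

    // Send all len bytes, retrying whenever the send buffer is full.
    bool send_all(int fd, const char *data, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, data + sent, len - sent, 0);
            if (n > 0) {
                sent += static_cast<size_t>(n);  // partial sends are normal
                continue;
            }
            if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
                // Buffers are full: sleep until the socket is writable.
                fd_set wfds;
                FD_ZERO(&wfds);
                FD_SET(fd, &wfds);
                if (select(fd + 1, nullptr, &wfds, nullptr, nullptr) == -1)
                    return false;
                continue;
            }
            return false;  // a real error
        }
        return true;
    }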
There is a lot of difference, on both client and server, between working with blocking and non-blocking sockets. Non-blocking sockets are in my opinion more difficult to deal with: you very likely need the select() API with a timeout to detect all the possible socket states. With blocking sockets you can just use the socket in a thread, and if the socket blocks, only that thread blocks with it; if your GUI is on a different thread, the GUI stays responsive.
Since your client is only sending data, the non-blocking setting will rarely show any effect there: a send() only fails with EWOULDBLOCK once the send buffer fills up. The calls that routinely block, accept() and recv() (see the excellent beej.us guide on socket programming), are the ones your server is making, which is why you see the change in your server code. If your client also received data, the non-blocking setting would affect it, and you would have to use select() to check whether there is data and read it accordingly.
I am trying to make a simple online game. When I test my game on localhost there is no problem with the server and the client, but when I try to connect my PC to my laptop over the local network, it starts receiving data and then stops a few seconds later.
here is my code:
Server
Client
Your problem is probably that UDP is unreliable and that sockets are blocking by default.
So think about this situation:
Server is blocked in recvfrom waiting for a packet from the client
The client sends a packet, which is dropped and never reaches the server
The client goes on to its own recvfrom call, which blocks.
Now you have a deadlock, as both the server and the client are blocked in recvfrom.
For a simple game like yours you might not need reliability, so it's okay if a packet here or there doesn't arrive. What is important is that you don't block, since that is when the deadlock situation can occur.
There are basically two solutions to this. The first is to make the sockets non-blocking and handle the case where recvfrom doesn't receive anything (see the sketch after the next paragraph). Take care here, though: if your threads don't do any sleeping, they will consume quite a lot of CPU power.
The second solution is to use polling, e.g. select, to find out when you can read from the socket.
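A minimal sketch of the first approach, assuming sock is the game's UDP socket (the buffer size and the 10 ms sleep are illustrative; the sleep is what keeps the loop from spinning at full CPU):

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cerrno>
    #include <fcntl.h>
    #include <unistd.h>

    void receive_loop(int sock)
    {
        // Put the socket in non-blocking mode once, up front.
        fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

        char buf[512];
        for (;;) {
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0, nullptr, nullptr);
            if (n >= 0) {
                // ... handle the packet ...
            } else if (errno == EWOULDBLOCK || errno == EAGAIN) {
                usleep(10 * 1000);  // nothing arrived; sleep ~10 ms
            } else {
                break;              // a real error
            }
            // ... update game state, send your own packets, etc. ...
        }
    }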
I'm running a client and server on the same machine using a loopback address for learning purposes, yet my "server" code seems to fly right past listen() and then hangs on my accept(). Does listen() need to be in an endless loop to keep receiving connections?
How would I determine that a connection has been made or is in the queue, given that listen() returns 0 even when I haven't made a connection yet?
I have an accept() call following, but the code hangs on that spot. I have debug statements right before and after, and it never gets past the accept().
On the other end, my client code seems to connect() just fine (it doesn't throw an error) and appears to write and complete, even though the server code never gets the connection.
The listen() function defines the backlog. You only need to call it once.
Then use accept() to receive an incoming connection. It's best to deal with it promptly and go around again for another accept(), as in the sketch below.
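A minimal sketch of that shape (port 5000 and the backlog of 8 are illustrative):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int run_server()
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr = {};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);

        if (bind(srv, (sockaddr *)&addr, sizeof addr) == -1) return -1;
        if (listen(srv, 8) == -1) return -1;  // called once: sets the backlog

        for (;;) {
            int client = accept(srv, nullptr, nullptr);  // blocks here
            if (client == -1) continue;
            // ... read from / write to client ...
            close(client);  // then go around for the next connection
        }
    }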
Both connect() and accept() should block while waiting for a connection.
connect() #client blocks while waiting for remote server to answer
accept() #server blocks while waiting for a client
listen() should not block at all. It tells the OS to set up a queue for incoming connection requests, so that clients who arrive at the same time can wait their turn. You only need to call it once.
If accept() never completes in the server, then you most likely never have a connection. If the connect() call is completing in your client, then you need to check its return value: if it returns -1, the connection failed, and that is most likely what is happening here. You can still call write() on a socket without a connection, and if you ignore its return value it can look like the write worked, but your message is not going anywhere.
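A minimal sketch of that client-side check (127.0.0.1 and port 5000 are illustrative):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    // Returns a connected socket, or -1 if the connection failed.
    int connect_checked()
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(sock, (sockaddr *)&addr, sizeof addr) == -1) {
            std::perror("connect");  // e.g. "Connection refused"
            close(sock);
            return -1;               // don't write to an unconnected socket
        }
        return sock;                 // connected: safe to send()/recv()
    }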