Set TCP client socket to non-blocking: server vs. client - C++

I have a question regarding non-blocking sockets in TCP connections.
I have implemented two C++ classes, one for the TCP server and one for the client. The server has two socket file descriptors, one for the listening socket and one for the accepted client connection. The client has one socket file descriptor.
My server runs asynchronously and my client runs at a fixed rate. Therefore I would like to have a non-blocking socket for sending data from the client to the server, so that the client can send data at a fixed rate without stalling, and the server asynchronously reads all data that has been buffered in the meantime.
So my question is: Does it make a difference, if I set the client socket to non-blocking in the client or the server class? (using fcntl(this->newsockfd_, F_SETFL, fcntl(this->newsockfd_, F_GETFL, 0) | O_NONBLOCK), where this->newsockfd_ is the client's socket file descriptor in both classes)
I tried this in my program, and it seemed like setting the client socket to non-blocking in the client class didn't do the trick, but setting it in the server class did. However, I don't understand why this should make a difference.
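For reference, here is a minimal sketch of the call being described, with error checking, assuming a POSIX system (set_nonblocking is a hypothetical helper name). One relevant detail: O_NONBLOCK is a property of the descriptor within the process that sets it, so the client's and the server's descriptors for the same connection are configured independently.

#include <fcntl.h>

// Minimal sketch: put a descriptor into non-blocking mode, with error
// checking. set_nonblocking() is a hypothetical helper name.
bool set_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);  // read the current flags first
    if (flags < 0)
        return false;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) >= 0;
}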

If your socket is set to non-blocking mode, you will get just that: it will never block. But that does not mean that your API calls will always succeed.
There are buffers being used behind the scenes, and if they are full (the situation in which a blocking socket would block), the call returns EWOULDBLOCK, which means your send has failed. You then basically have to wait for the buffers to drain and try again.
Your idea of sending at a steady rate regardless of the rate at which the server receives is not achievable with TCP. You cannot have a client sending at a fixed rate: the whole idea of TCP is that there is a constant negotiation between client and server, and the speed depends heavily on network conditions, congestion and the like.
Moving to non-blocking sockets creates some problems of its own. You have to detect that the send failed, store the bytes you tried to send, check whether the socket becomes writable again, and reattempt the send as soon as it does.
There is a lot of difference, on both the client and the server, between working with blocking and non-blocking sockets. Non-blocking sockets are, in my opinion, more difficult to deal with: you need the select() API, very likely with a timeout, to detect all the possible socket states. With blocking sockets, you can just use the socket in its own thread; if the socket blocks, only that thread blocks with it, and if your GUI is on a different thread, the GUI stays responsive.
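To make that concrete, here is a hedged sketch of the send-and-retry pattern described above, assuming a connected, non-blocking TCP socket on a POSIX system; send_all() is a hypothetical helper and the one-second timeout is arbitrary.

#include <cerrno>
#include <sys/select.h>
#include <sys/socket.h>

// Sketch: send as much as possible on a non-blocking socket, and when
// the kernel send buffer fills up, wait with select() until the socket
// becomes writable again, then retry.
bool send_all(int fd, const char* data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = ::send(fd, data + sent, len - sent, 0);
        if (n > 0) {
            sent += static_cast<size_t>(n);
            continue;
        }
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            // Send buffer is full: wait until the socket is writable.
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            struct timeval tv = {1, 0};  // 1-second timeout (illustrative)
            if (::select(fd + 1, nullptr, &wfds, nullptr, &tv) < 0)
                return false;
            continue;  // retry the send
        }
        return false;  // a real error occurred
    }
    return true;
}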

Since your client is only sending data, the non-blocking setting will not affect it. According to the excellent beej.us guide on socket programming, only calls to accept() and recv() are affected by the non-blocking setting. Since only your server is calling these, you are seeing the change in your server code. If your client received data, then the non-blocking setting would affect it, and you would have to use select() to check whether there is data and read from it accordingly.
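A hedged sketch of that select()-then-read pattern, assuming a POSIX system; recv_if_ready() is a hypothetical helper.

#include <sys/select.h>
#include <sys/socket.h>

// Sketch: poll the socket for readability with a zero timeout, and only
// call recv() when data is actually waiting.
// Returns bytes read, 0 if nothing is ready, -1 on error.
ssize_t recv_if_ready(int fd, char* buf, size_t len) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    struct timeval tv = {0, 0};  // zero timeout: a pure poll
    int ready = ::select(fd + 1, &rfds, nullptr, nullptr, &tv);
    if (ready <= 0)
        return ready;  // 0 = nothing to read yet, -1 = error
    return ::recv(fd, buf, len, 0);
}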

Related

UDP server stops receiving data

I am trying to make a simple online game. When I test my game on localhost, there is no problem with the server and the client, but when I connect my PC to my laptop over the local network, it starts receiving data and then stops a few seconds later.
Here is my code: (server and client code omitted)
Your problem is probably that UDP is unreliable and that sockets are blocking by default.
So think about this situation:
Server is blocked in recvfrom waiting for a packet from the client
The client sends a packet, which is dropped and never reaches the server
The client goes on to its own recvfrom call, which blocks.
Now you have a deadlock as both the server and the client are blocked in recvfrom.
For a simple game like yours you might not need reliability, so it's okay if a packet here or there doesn't arrive. But it is important that you don't block, because then the deadlock described above can occur.
There are basically two solutions to this. The first is to make the sockets non-blocking and handle the case where recvfrom doesn't receive anything. Take care here, though: since your threads never sleep, they will consume quite a lot of CPU.
The second solution is to use polling, e.g. select(), to see when you can read from the socket; a sketch follows below.
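As a hedged illustration of the second approach, here is a sketch that waits a bounded time for a datagram instead of blocking forever; recv_with_timeout() is a hypothetical helper and the 100 ms timeout is arbitrary.

#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

// Sketch: wait up to 100 ms for a datagram instead of blocking forever
// in recvfrom(). 'fd' is assumed to be a bound UDP socket.
// Returns bytes read, 0 on timeout, -1 on error.
ssize_t recv_with_timeout(int fd, char* buf, size_t buflen) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    struct timeval tv = {0, 100000};  // 100 ms
    int ready = ::select(fd + 1, &rfds, nullptr, nullptr, &tv);
    if (ready < 0)  return -1;  // select error
    if (ready == 0) return 0;   // timeout: no packet arrived (maybe dropped)
    sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    return ::recvfrom(fd, buf, buflen, 0,
                      reinterpret_cast<sockaddr*>(&from), &fromlen);
}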

How to detect TCP connection loss on the client side in C++

I have a TCP client/server, and I want to detect connection loss on the client side. My client has multiple network interfaces and is connected to the server through one of them at a time. I want to detect connection loss on the client side so that I can reconnect my TCP client to the server through another interface, and if all of them are down, store my data in text files. I googled this and have already seen keepalive, but it's not what I want.
If it matters, my project runs on Linux and the code is in C++.
Try to read from the socket. When the socket closes, the read will fail, giving you simple detection. You can do this in a dedicated detection thread so that your main thread doesn't block.
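A hedged sketch of that detection thread, assuming a POSIX system and C++11; watch_connection() and the shared flag are hypothetical names.

#include <atomic>
#include <sys/socket.h>

// Sketch: a dedicated thread blocks in recv(); when the peer closes the
// connection, recv() returns 0 (orderly shutdown) or -1 (error), and we
// flag the loss so the main thread can fail over to another interface.
// Run as e.g.: std::thread(watch_connection, fd, std::ref(connected)).detach();
void watch_connection(int fd, std::atomic<bool>& connected) {
    char buf[4096];
    for (;;) {
        ssize_t n = ::recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) {           // 0 = peer closed, -1 = error
            connected = false;  // signal the connection loss
            return;
        }
        // n > 0: normal data; hand it off to the application here.
    }
}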
TCP connections are designed to be error correcting and not time critical. This error correction includes network timeouts.
Reads and writes will not fail until the socket is actually closed, which may not happen for a very long time.
The only way for a client to decide if a connection has timed-out is for the client to detect that it hasn't received any messages for a specified time, and manually close the socket.
That's what Keep Alive messages are for.
The best way that I found is to check the send buffer: if the buffer is empty, your TCP client has sent the packet to the TCP server successfully and you can send the next one. To check the buffer you can use SIOCOUTQ; it is very easy to use and shows you how much data is still in your buffer.
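A hedged, Linux-only sketch of the SIOCOUTQ query; bytes_pending() is a hypothetical helper.

#include <linux/sockios.h>  // SIOCOUTQ (Linux-specific)
#include <sys/ioctl.h>

// Sketch: ask the kernel how many bytes are still sitting in the
// socket's send queue (unsent or not yet acknowledged).
int bytes_pending(int fd) {
    int pending = 0;
    if (::ioctl(fd, SIOCOUTQ, &pending) < 0)
        return -1;   // ioctl failed
    return pending;  // 0 means the queue has drained
}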

C++ socket design

I am designing a client server socket program using TCP/IP.
The server listens on a certain port, and the client program makes two connections to the server: one for command and response, and the other for streaming data.
For command and response, I can use normal blocking socket mode to receive the client's command and send the server's response.
For the streaming data, the server waits for the client to send a start-stream command and then begins continuously sending data to that client. The issue is that I need the handler to also listen on this connection for the stop-stream command. Hence, I was thinking of making this connection non-blocking, so that the receive (checking for the stop command) would not block, followed by a non-blocking send.
Is this method of implementing the server and client handler efficient?
Take a look at Boost.Asio's socket management layer. It's very well written.
http://www.boost.org/doc/libs/1_49_0/doc/html/boost_asio/tutorial/tutdaytime1.html
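For flavor, here is a minimal synchronous client in the style of that daytime tutorial, using the io_service-era API from the linked 1.49 docs; the host and service names are placeholders.

#include <boost/asio.hpp>
#include <iostream>

int main() {
    try {
        boost::asio::io_service io;
        boost::asio::ip::tcp::resolver resolver(io);
        boost::asio::ip::tcp::resolver::query query("localhost", "daytime");
        boost::asio::ip::tcp::socket socket(io);
        boost::asio::connect(socket, resolver.resolve(query));
        for (;;) {
            char buf[128];
            boost::system::error_code ec;
            size_t len = socket.read_some(boost::asio::buffer(buf), ec);
            if (ec == boost::asio::error::eof)
                break;  // connection closed cleanly by the peer
            if (ec)
                throw boost::system::system_error(ec);
            std::cout.write(buf, len);
        }
    } catch (std::exception& e) {
        std::cerr << e.what() << "\n";
    }
}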
Yes, it is very efficient.
You can use libraries like libevent.
From an efficiency perspective, a server should always be designed around non-blocking sockets and an event-driven, asynchronous I/O architecture; blocking sockets should be avoided on the server side.
Fortunately, there are a few mature open-source frameworks you can use. Among them, libev is the most lightweight.
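As a hedged sketch of that event-driven style with libev: a single read watcher that accepts connections without ever blocking the event loop; run_event_loop() is a hypothetical wrapper.

#include <ev.h>
#include <sys/socket.h>
#include <unistd.h>

// Called by libev whenever the listening descriptor becomes readable.
static void accept_cb(struct ev_loop* loop, ev_io* w, int revents) {
    (void)loop; (void)revents;
    int client = ::accept(w->fd, nullptr, nullptr);
    if (client >= 0) {
        // Real code would register another ev_io watcher on 'client'
        // and keep it open; this sketch just closes it.
        ::close(client);
    }
}

int run_event_loop(int listen_fd) {
    struct ev_loop* loop = EV_DEFAULT;  // the default event loop
    ev_io accept_watcher;
    ev_io_init(&accept_watcher, accept_cb, listen_fd, EV_READ);
    ev_io_start(loop, &accept_watcher);
    ev_run(loop, 0);  // dispatch events until the loop is stopped
    return 0;
}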

Sockets - send and receive

I'm currently writing a chat server in C++. When a user connects to it, I open a socket and create two threads, one to receive data and one to send it.
Now my question:
Do I have to check if the other thread is currently using the socket, or will the send/recv function just wait until the socket is ready?
Sending and receiving on a TCP socket simultaneously is entirely fine (barring any possible OS bugs).
Socket send and receive are independent. You do not need to worry about interleaving them yourself.
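A hedged minimal sketch of the two-thread arrangement, assuming a connected socket and C++11; no synchronization is needed between the two directions.

#include <string>
#include <sys/socket.h>
#include <thread>

// Sketch: one thread receives while another sends on the same connected
// TCP socket; the two directions are independent.
void run_chat(int fd) {
    std::thread receiver([fd] {
        char buf[1024];
        while (::recv(fd, buf, sizeof(buf), 0) > 0) {
            // handle incoming chat data here
        }
    });
    std::thread sender([fd] {
        std::string msg = "hello\n";  // placeholder outgoing data
        ::send(fd, msg.data(), msg.size(), 0);
    });
    receiver.join();
    sender.join();
}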

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for a period of Y, and then to time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it, is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
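Here is a hedged, Linux-specific sketch of enabling and tuning keepalive along the lines the HOWTO describes; the timer values are illustrative and enable_keepalive() is a hypothetical helper.

#include <netinet/in.h>
#include <netinet/tcp.h>  // TCP_KEEPIDLE etc. (Linux-specific)
#include <sys/socket.h>

// Sketch: enable TCP keepalive on a connected socket and tune its timers.
int enable_keepalive(int fd) {
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    int idle = 60;      // start probing after 60 s of inactivity
    int interval = 10;  // then probe every 10 s
    int count = 5;      // declare the peer dead after 5 failed probes
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) < 0)
        return -1;
    return 0;
}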
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
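A hedged sketch of the client half of such a scheme; the message strings and the maybe_send_heartbeat() helper are placeholders for whatever the application protocol actually defines.

#include <ctime>
#include <sys/socket.h>

// Sketch: if no real query has been sent for 60 s, send an assumed
// "PING" message; the server's assumed "PONG" reply arrives via the
// normal receive path and confirms the connection is still alive.
void maybe_send_heartbeat(int fd, time_t& last_sent) {
    time_t now = time(nullptr);
    if (now - last_sent >= 60) {  // one minute of idleness
        const char ping[] = "PING\n";
        if (::send(fd, ping, sizeof(ping) - 1, 0) > 0)
            last_sent = now;      // this also resets the server's 2-minute timer
    }
}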
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like HTTP and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: the TCP Keepalive HOWTO, or this: SO_SOCKET.