C++ socket design

I am designing a client-server socket program using TCP/IP.
The server listens on a certain port, and the client makes two connections to it: one for command and response, and the other for streaming data.
For the command and response, I can use the normal blocking socket mode to receive the client command and send the server response.
For the streaming data, the server waits for the client to send a start stream command and then begins continuously sending data to that client. The issue is that I need the handler to also listen on this connection for the stop stream command. Hence, I was thinking of making this connection non-blocking, so that the receive would not block, followed by a non-blocking send.
Is this method of implementing the server and client handler efficient?

Take a look at Boost.Asio's socket management layer. It's very well written.
http://www.boost.org/doc/libs/1_49_0/doc/html/boost_asio/tutorial/tutdaytime1.html

Yes, it is very efficient.
You can use libraries like libevent.
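For illustration, here is a minimal sketch of that approach using libevent 2.x, applied to the streaming connection from the question. It is a sketch under assumptions, not a definitive implementation: the socket setup and the parsing of a "stop stream" command are elided, and it assumes a POSIX socket.

```cpp
#include <event2/event.h>
#include <sys/socket.h>

// Fires whenever the streaming socket becomes readable; a real
// handler would recv() and parse the command here.
void on_readable(evutil_socket_t fd, short /*events*/, void* arg) {
    char buf[64];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n <= 0 /* || the command parsed from buf is "stop stream" */)
        event_base_loopbreak(static_cast<event_base*>(arg));
}

void run_event_loop(evutil_socket_t stream_fd) {
    event_base* base = event_base_new();
    event* ev = event_new(base, stream_fd, EV_READ | EV_PERSIST,
                          on_readable, base);
    event_add(ev, nullptr);      // start watching for commands
    event_base_dispatch(base);   // blocks, dispatching callbacks
    event_free(ev);
    event_base_free(base);
}
```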

From an efficiency perspective, a server should always be designed around non-blocking sockets and an event-driven, asynchronous I/O architecture; blocking sockets should be avoided on the server side.
Fortunately, there are a few mature open-source frameworks you can use. Among them, libev is the most lightweight.
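As a hedged sketch of what that looks like applied to the question's streaming connection, here is a plain select()-based loop rather than any particular framework; read_command() and send_next_chunk() are hypothetical helpers, not part of any library:

```cpp
#include <sys/select.h>
#include <sys/socket.h>

enum Command { CMD_NONE, CMD_STOP_STREAM };
Command read_command(int fd);     // assumed helper: recv() and parse one command
bool    send_next_chunk(int fd);  // assumed helper: one non-blocking send()

// Stream data while watching the same connection for a stop command.
void stream_loop(int stream_fd) {
    bool streaming = true;
    while (streaming) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds); FD_SET(stream_fd, &rfds);
        FD_ZERO(&wfds); FD_SET(stream_fd, &wfds);

        // Wait until the socket is readable (a command arrived)
        // or writable (room to send more stream data).
        if (select(stream_fd + 1, &rfds, &wfds, nullptr, nullptr) < 0)
            break;

        if (FD_ISSET(stream_fd, &rfds) &&
            read_command(stream_fd) == CMD_STOP_STREAM)
            streaming = false;

        if (streaming && FD_ISSET(stream_fd, &wfds))
            streaming = send_next_chunk(stream_fd);
    }
}
```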

Related

gRPC polling for incoming packets from multiple sockets at once

I am looking into the possibility of listening on different sockets at once. To handle multiple socket connections at the same time, fd_set can be used on Linux. I have seen that gRPC also supports this functionality with its epoll-based pollset.
https://github.com/grpc/grpc/blob/18df25228cfa1f97fc5cca9176fbaef64c0e4221/doc/epoll-polling-engine.md
I intend to call different services in async mode while providing a service at the same time. Therefore, I was thinking about having a poll-set consisting of client sockets waiting for async responses and of server sockets. This seems to be possible in gRPC, but I haven't been able to find anything in the gRPC API that exposes construction of a poll-set.
So my question is: how do I use this capability of gRPC?
Does gRPC manage this automatically? In that case, how can I wait for incoming messages?
The same CompletionQueue should be used for both the client and the server. To wait for incoming messages, Next() needs to be invoked.
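A minimal sketch of that loop, assuming the gRPC async C++ API with a ServerCompletionQueue obtained from ServerBuilder::AddCompletionQueue (service registration and call setup are omitted here):

```cpp
#include <grpcpp/grpcpp.h>

// Single event loop over one completion queue shared by server-side
// requests and client-side async calls. `tag` identifies which
// pending operation completed; dispatch on it however you like.
void drive(grpc::ServerCompletionQueue* cq) {
    void* tag;
    bool ok;
    while (cq->Next(&tag, &ok)) {  // blocks until any event is ready
        if (!ok) continue;         // operation was cancelled or failed
        // static_cast `tag` back to your handler object and resume it
    }
}
```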

boost::asio::tcp two-way communication on a socket

I am using boost::asio and C++ to create a client and a server that talk over a TCP socket. I need both the client and the server to be able to send and receive data to each other. I am able to make them communicate over a socket where the server continuously sends some data and the client continuously reads from the socket. It works.
Now for communication the other way: for the client to send some data and the server to be able to read it, can I use the same socket, or do I need a separate socket? Is it possible to read and write on the same socket for two applications communicating over TCP?
A boost::asio-based example to illustrate this would be great if available, but I am only able to find examples that cover one-way communication.
Yes. TCP is full duplex. Applications define the protocol of what messages are exchanged between client and server and how. Whether they do so asynchronously or synchronously, TCP doesn't care.
The client-server paradigm in TCP is one where the client initiates the connection and the server listens for incoming connections. Once the connection is established, it is up to a higher-layer protocol, like HTTP, to establish how data is exchanged. As far as TCP is concerned, both client and server may send or receive data any way they choose. TCP is full duplex.
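Since the question asks for an example, here is a hedged boost::asio sketch of a client writing and then reading on the very same socket. It assumes Boost 1.66+ (io_context, make_address; older versions use io_service and address::from_string) and a server at 127.0.0.1:5000; both the address and the messages are illustrative, not from the question:

```cpp
#include <boost/asio.hpp>
#include <iostream>
#include <string>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;
    tcp::socket sock(io);
    sock.connect(tcp::endpoint(
        boost::asio::ip::make_address("127.0.0.1"), 5000));

    // Write on the socket...
    boost::asio::write(sock, boost::asio::buffer(std::string("ping\n")));

    // ...and read the reply on the very same socket. Reads and
    // writes can also run concurrently (async_read/async_write),
    // because TCP is full duplex.
    char reply[128];
    std::size_t n = sock.read_some(boost::asio::buffer(reply));
    std::cout.write(reply, static_cast<std::streamsize>(n));
}
```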

Set TCP client socket to non-blocking: Server vs client

I have a question regarding non-blocking sockets in TCP connections.
I have implemented two C++ classes, one for the TCP server and one for the client. The server has two socket file descriptors, one for the server and one for the client. The client has one socket file descriptor.
My server runs asynchronously and my client runs at a fixed rate. Therefore I would like to have a non-blocking socket for sending data from the client to the server, so that the client can send data at a fixed rate without stalling, while the server asynchronously reads whatever data has been buffered in the meantime.
So my question is: does it make a difference whether I set the client socket to non-blocking in the client class or in the server class? (Using fcntl(this->newsockfd_, F_SETFL, fcntl(this->newsockfd_, F_GETFL, 0) | O_NONBLOCK), where this->newsockfd_ is the client's socket file descriptor in both classes.)
I tried this in my program, and it seemed like setting the client socket to non-blocking in the client class didn't do the trick, but setting it in the server class did. However, I don't understand why this should make a difference.
If your socket is set to non-blocking mode, you will get just that: it will never block. But that does not mean that your API calls will succeed.
There are buffers in use behind the scenes. If they are full, which in blocking mode would mean the socket blocks, you will instead get the return code EWOULDBLOCK, which means that your send has failed. You then basically have to wait for the buffers to empty and try again.
Your idea of sending at an even rate regardless of the rate at which the server receives is impossible. You cannot have a client sending at a fixed rate. The whole idea of TCP is that there is a constant negotiation between client and server, and the speed depends heavily on network conditions: congestion and the like.
Moving to non-blocking sockets creates some problems of its own. You have to detect that the send failed, check whether the socket becomes writable again, store the bytes that you tried to send, and reattempt the send as soon as the socket becomes writable again.
There is a lot of difference, on both client and server, between working with blocking and non-blocking sockets. Non-blocking sockets are, in my opinion, more difficult to deal with: you need the select API, very likely with a timeout, to detect all the possible socket states. With blocking sockets, you can just use one socket per thread; if the socket blocks, only that thread blocks with it. If your GUI is on a different thread, the GUI will remain responsive.
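A hedged sketch of that retry pattern on POSIX, assuming fd has already been set to O_NONBLOCK via fcntl(); a real implementation would likely also cap the select() wait with a timeout:

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstddef>

ssize_t send_all_nonblocking(int fd, const char* buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n > 0) {
            sent += static_cast<size_t>(n);
            continue;
        }
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            // Kernel send buffer is full: wait until the socket
            // becomes writable again, then retry the send.
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            if (select(fd + 1, nullptr, &wfds, nullptr, nullptr) < 0)
                return -1;
            continue;
        }
        return -1;  // real error
    }
    return static_cast<ssize_t>(sent);
}
```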
Since your client is only sending data, the non-blocking setting will not affect it. According to the excellent beej.us guide on socket programming, only calls to accept() and recv() are affected by the non-blocking setting, and since only your server makes these calls, you see the change in your server code. If your client received data, then the non-blocking setting would affect it, and you would have to use select() to check whether there is data and read from it accordingly.

Multithreaded Server Issue

I am writing a server on Linux that is supposed to serve an API.
Initially, I wanted to make it multi-threaded on a single port, meaning that I'd have multiple threads working on the various requests received on that single port.
One of my friends told me that that is not the way it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to the request, and then redirect the requesting client to the new port.
Theoretically, it's very interesting, but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread listening on a port and creating a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port should now wait for the "current" request to complete? If not, how would the communication still be done? Say a browser sends a request: the thread handling it has to first listen on the port, block it, process it, respond, and then unblock it.
By this reasoning, even though I have "multiple threads", I'm only using one single thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP - a client tells the server that it needs a connection, the server sends back the port number and the client creates a data connection to that port.
But all you wanted is a multithreaded server. All you need is one server socket listening for and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function; that socket will be used for communication with the client that has just connected. So now you only have to create a new thread, passing that client socket to the thread function. In your server thread, you then call accept again in order to accept another connection, as in the sketch below.
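A hedged sketch of exactly that pattern with POSIX sockets and std::thread; port 8080 and the empty handler body are illustrative assumptions, and error handling is abbreviated:

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void handle_client(int client_fd) {
    // ... read requests / write responses on client_fd ...
    close(client_fd);
}

int main() {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);  // example port
    bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listen_fd, SOMAXCONN);

    for (;;) {
        int client_fd = accept(listen_fd, nullptr, nullptr);
        if (client_fd < 0) continue;
        // One thread per connection; the main thread immediately
        // goes back to accept(), so requests are served in parallel.
        std::thread(handle_client, client_fd).detach();
    }
}
```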
TCP/IP does the handshake; if you can't think of any reason to do a handshake, then your application does not demand it.
An example of an application-specific handshake could be user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do: the internet these days is more or less used for protocols which use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. In it, one part accepts connections over TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and, if there is none, to try to reconnect every X seconds for Y period of time, and then to time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
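For reference, here is a minimal Linux sketch of what that HOWTO configures; the idle/interval/count values are illustrative, and error handling is omitted:

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void enable_keepalive(int fd) {
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    // Linux-specific tuning: start probing after 60s of idle,
    // probe every 10s, and give up after 5 unanswered probes.
    int idle = 60, interval = 10, count = 5;
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}
```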
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
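A hedged sketch of the client side of that scheme; the message strings and the body of send_query() are illustrative assumptions, not a defined wire format:

```cpp
#include <chrono>
#include <string>

using Clock = std::chrono::steady_clock;

struct HeartbeatClient {
    Clock::time_point last_query = Clock::now();

    void send_query(const std::string& q) {
        // ... write q on the socket, read the server's response ...
        last_query = Clock::now();
    }

    // Call this periodically, e.g. from the client's main loop.
    void tick() {
        if (Clock::now() - last_query > std::chrono::minutes(1))
            send_query("are you there");  // server replies "yes I am"
    }
};
```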
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like HTTP and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: the TCP Keepalive HOWTO, or the SO_KEEPALIVE socket option.