Maximum number of TCP connections - C++

I am doing a TCP client-server simulation. In the simulation, I have created 2 clients and 2 servers, and I have programmed it so that read requests go to server 1 and write requests go to server 2. Thus, the client always renews its socket and makes a new connection to the servers.
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends empty ACK packets.
I expected both clients to be able to send up to millions of requests, but currently both clients are only able to send about 13k requests. Can anyone give me tips or advice?

Nagle's algorithm
Solutions:
Do not use small packets in your application protocol.
Use the TCP_NODELAY socket option on both sides (client and server).
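For the second point, a minimal sketch of what enabling TCP_NODELAY looks like; the helper name is hypothetical, and you would call it on both the client's and the server's sockets right after socket() or accept():

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    // Disable Nagle's algorithm so small packets are sent immediately.
    bool disable_nagle(int sock_fd)
    {
        int flag = 1;
        return setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY,
                          &flag, sizeof(flag)) == 0;
    }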

Sounds like most of the previously created connections are still holding resources (not yet released by the system). From the information you give:
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends empty ACK packets.
it looks like only about 1000+ connections have been released, probably because their 2MSL (TIME_WAIT) timers expired. If this is the case, I suggest you explicitly release a connection before you create a new one.
Copying and pasting your client/server code here would help the analysis.
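If ports stuck in TIME_WAIT really are the culprit, here is a hedged sketch of that suggestion: close each connection explicitly before opening the next one, and, as a more aggressive workaround, use a zero-timeout SO_LINGER so close() sends an RST and the socket skips TIME_WAIT entirely (at the cost of discarding any unsent data):

    #include <sys/socket.h>
    #include <unistd.h>

    void release_connection(int sock_fd, bool skip_time_wait)
    {
        if (skip_time_wait) {
            linger lin{};
            lin.l_onoff  = 1;   // enable linger
            lin.l_linger = 0;   // 0 seconds: close() sends RST immediately
            setsockopt(sock_fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
        }
        close(sock_fd);         // release the descriptor before reconnecting
    }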

Related

TCP push-pull socket server design

I am designing a cross-platform messaging service as a learning exercise. I have programmed socket-based servers before, but always a "client-polls-server" design, like a web server. I want to be able to target mobile platforms, and I read that polling is a battery drain, so I would like to do push notification.
The server will be TCP-based, written in C++. What I'm having trouble getting my head around is how to manage the bi-directional nature of the design. I need a client to be able to send packets to the server as normal, but also listen for packets. How do I mitigate situations like: the client is sending data while the server is trying to send to it, or the client is blocked listening for data but then needs to send something?
For example, consider the following scenario (originally shown as a crude diagram):
So, let's say client A is in the middle of sending a chunk of data (arrow 1). While this is happening, client B sends a message (arrow 2), which causes the server to attempt to send data back to client A (arrow 3), but client A hasn't finished sending arrow 1 yet. What happens in this instance? Should I set up 2 separate ports on each client, one for inbound and one for outbound? Do I need to keep track of the state of each connection?
Or is there a better approach to this altogether?
A TCP socket is inherently bidirectional. To handle both inbound and outbound traffic more or less concurrently you need to use nonblocking sockets.
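A minimal sketch of what that looks like with poll() on an already-connected, nonblocking socket; the function names and the 1-second timeout are illustrative:

    #include <poll.h>
    #include <fcntl.h>
    #include <unistd.h>

    void set_nonblocking(int fd)
    {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    }

    // Service one socket in both directions; returns false when the
    // peer closes the connection.
    bool service_socket(int fd, bool have_data_to_send)
    {
        pollfd pfd{};
        pfd.fd = fd;
        pfd.events = POLLIN;
        if (have_data_to_send)
            pfd.events |= POLLOUT;

        if (poll(&pfd, 1, 1000) <= 0)   // 1 s timeout; 0 means nothing ready
            return true;

        if (pfd.revents & POLLIN) {
            char buf[4096];
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0)
                return false;           // peer closed, or a read error
            // ... handle the inbound message ...
        }
        if (pfd.revents & POLLOUT) {
            // ... write() the next chunk of the outbound message ...
        }
        return true;
    }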
I think the solution is pretty simple. The TCP server should keep a list of connected clients. Since a TCP connection is bidirectional, the push mechanism is then quite simple: write to each relevant client's socket.
Another important thing: as long as your server isn't multithreaded, you can only read from or write to one client at a time.
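A minimal sketch of that push mechanism; the container choice and the broadcast() name are illustrative:

    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    std::vector<int> g_clients;  // sockets of currently connected clients

    // Push one message to every connected client.
    void broadcast(const std::string &msg)
    {
        for (auto it = g_clients.begin(); it != g_clients.end(); ) {
            if (send(*it, msg.data(), msg.size(), 0) < 0) {
                close(*it);              // drop clients we can no longer reach
                it = g_clients.erase(it);
            } else {
                ++it;
            }
        }
    }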

Network transmission time from the TCP/IP stack

Every time a client connects to my server I would like to log the transmission time between the client and the server - is there a way of doing this without having to ping the client? My server is written in C++ and listens on a LAN TCP/IP socket. I was hoping there was something in the TCP/IP stack that I could use whenever a client connects.
Update
Transmission time: the number of milliseconds it took the client to connect to the server over a computer network such as a WAN/LAN. Basically the same result you would get from running ping client1.example.com in a shell.
There is no completely portable way to find the time the client took to set up the connection, since there is no API to query the kernel for this information. But you have some avenues:
If you can capture packets on the interface, you can easily measure the time from the first SYN to the final ACK of the handshake. You should be able to do this on most systems.
On some Unix systems you can enable the SO_DEBUG socket option which might give you this information. In my experience SO_DEBUG is hard to use.
There's nothing explicitly built into the TCP APIs to do what you want, but assuming you have control over both the server and the clients, you could do something like this (see the sketch below):
1. When the client's connect() completes, have the client record the current time (using gettimeofday() or similar).
2. When the server accepts the TCP connection, have it immediately send back some data to the client (it doesn't matter what the data is).
3. When the client receives the data from the server, have the client record the current time again, and subtract from it the time recorded in step (1).
Now you have the TCP round-trip time; divide it by two to get an estimate of the one-way time.
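A minimal sketch of those three steps on the client side, using std::chrono in place of gettimeofday(); the server address and port are made up, the server is assumed to send one byte right after accept(), and error handling is omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <chrono>
    #include <cstdio>

    int main()
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);                          // example port
        inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);   // example server

        connect(fd, (sockaddr *)&addr, sizeof(addr));
        auto start = std::chrono::steady_clock::now();        // step 1

        char byte;
        read(fd, &byte, 1);                                   // step 2: server's data arrives
        auto end = std::chrono::steady_clock::now();          // step 3

        auto rtt_us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
        std::printf("round trip: %lld us, one-way estimate: %lld us\n",
                    (long long)rtt_us, (long long)rtt_us / 2);

        close(fd);
        return 0;
    }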
Of course this method will include in the measurement the vagaries of the TCP protocol (e.g. if a TCP packet gets lost on the way from the server to the client, the packet will need to be retransmitted before the client sees it, and that will increase the measured time). That may be what you want, or if not you could do something similar with UDP packets instead (e.g. have the client send a UDP packet and the server respond, and the client then measures the elapsed time between send() and recv()). Of course with UDP you'll have to deal with firewalls and NAT blocking the packets going back to the client, etc.
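A similarly hedged sketch of the UDP variant: the client sends a single datagram and times the echo. The server is assumed to echo back whatever it receives on this (made-up) port, and as noted above, firewalls or NAT may block the reply:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <chrono>
    #include <cstdio>

    int main()
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in srv{};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(5001);                           // example echo port
        inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);    // example server

        char probe = 'x', echo;
        auto start = std::chrono::steady_clock::now();
        sendto(fd, &probe, 1, 0, (sockaddr *)&srv, sizeof(srv));
        recvfrom(fd, &echo, 1, 0, nullptr, nullptr);          // blocks until the echo arrives
        auto end = std::chrono::steady_clock::now();

        std::printf("udp round trip: %lld us\n", (long long)
            std::chrono::duration_cast<std::chrono::microseconds>(end - start).count());

        close(fd);
        return 0;
    }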

Multiple TCP connections vs single connection

I am designing a Data Distributor (DD; say, generating random numbers) that will serve multiple clients.
A client C first sends the list of numbers it is interested in to the DD over TCP and listens for data on UDP. After some time (a few minutes) the client may renew its subscription list by sending more numbers to the DD.
I can design this in 2 ways.
FIRST:
    void New_Client_Connected_Thread(int sock_fd)
    {
        // Get subscription
        // Add to UDP publisher list
        close(sock_fd);
    }
Every time the client wants to subscribe to a new set of data, it establishes a new TCP connection.
SECOND:
    void New_Client_Connected_Thread(int sock_fd)
    {
        while (true)
        {
            // Wait for new subscription list
            // Get subscription
            // Add to UDP publisher list
        }
    }
Here only 1 TCP connection per client would be required.
However, if the client does not send a new request, the client thread will be waiting unnecessarily for a long time.
Given that my Data Distributor will be serving lots of clients, which of these seems to be the more efficient way?
Libevent, or libev, which provides an event-driven loop, is probably more appropriate for the TCP portion of this.
You can avoid threading altogether and have a single loop for the TCP portion that adds your clients to the publisher list. Libevent is very efficient at managing lots of connections and socket teardowns per second, and is used by things like Tor (The Onion Router).
It seems like the TCP connection in your application is more of a "control plane" connection, so whether to leave the socket open or close it after each exchange will probably depend on how often your clients need to "control" your server. Keep in mind that keeping thousands of TCP connections open permanently consumes kernel resources on the host, but on the other hand, opening and closing connections all the time introduces some latency due to the connection setup time.
See https://github.com/libevent/libevent/blob/master/sample/hello-world.c for an example of a libevent TCP server.
Since you're coding in C++, you will probably be interested in the http://llucax.com.ar/proj/eventxx/ wrapper for libevent.
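To make the shape of this concrete, here is a hedged sketch of a libevent TCP accept loop in the spirit of the hello-world sample linked above; the port is made up, and the subscription handling (the application-specific part) is only marked with comments:

    #include <event2/event.h>
    #include <event2/listener.h>
    #include <event2/bufferevent.h>
    #include <event2/buffer.h>
    #include <netinet/in.h>
    #include <string.h>

    static void read_cb(struct bufferevent *bev, void *ctx)
    {
        // A subscription message arrived; parse it and update the
        // UDP publisher list here (application-specific, not shown).
        struct evbuffer *in = bufferevent_get_input(bev);
        evbuffer_drain(in, evbuffer_get_length(in));
    }

    static void event_cb(struct bufferevent *bev, short events, void *ctx)
    {
        if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
            bufferevent_free(bev);  // client went away; drop the connection
    }

    static void accept_cb(struct evconnlistener *listener, evutil_socket_t fd,
                          struct sockaddr *addr, int socklen, void *ctx)
    {
        struct event_base *base = evconnlistener_get_base(listener);
        struct bufferevent *bev =
            bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
        bufferevent_setcb(bev, read_cb, nullptr, event_cb, nullptr);
        bufferevent_enable(bev, EV_READ);
    }

    int main()
    {
        struct event_base *base = event_base_new();

        sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(9000);            // example control-plane port

        struct evconnlistener *listener = evconnlistener_new_bind(
            base, accept_cb, nullptr,
            LEV_OPT_CLOSE_ON_FREE | LEV_OPT_REUSEABLE, -1,
            (struct sockaddr *)&sin, sizeof(sin));

        event_base_dispatch(base);             // single-threaded event loop
        evconnlistener_free(listener);
        event_base_free(base);
        return 0;
    }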

Using different port numbers on server

I am pretty new to socket programming - so this might be a simple question but I'd really like to clarify.
I have a multiple-client to single-server program: the individual clients send messages to the server, which processes them and then passes them on to the destination, i.e. the server is an intermediary.
On the server side, there is one thread for each client which is meant to 'listen' for messages from the clients (which will be placed in a buffer). At the moment all the clients are sending messages to the same port (as far as I can tell).
I am thinking of setting up another thread on which the server will process the messages before transmitting them on. Does it make sense to use another port on the server to send those messages?
I don't mean this to be a discussion, but I don't know what is common or more logical to do - any advice?
On the client-side, I am planning for it to have one thread for sending messages to the server, and another thread for receiving. Please let me know if any other information is required!
edit
At the moment it is a 1-server-to-multiple-client (tens now, hundreds later) program. I am having problems with the clients receiving messages from my server (I am troubleshooting, so I thought that using the same port might be the problem), but I will try it with the same port again and see. I thought it might be a matter of the receiving port being too busy to send messages as well.
At the moment all the clients are sending messages to the same port (as far as I can tell).
What do you mean 'as far as I can tell'? You must know whether you are opening more than one port at the server.
Does it make sense to use another port on the server to send those messages?
No, it doesn't. If you're using TCP, send the messages back down the same socket. If you're using UDP, you don't need more than one UDP socket, and it simplifies both the client and the application protocol if replies come from the same ip:port the request was sent to.
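A minimal sketch of the TCP case: replies go back down the same accepted socket, so no second port is needed. The processing step is only hinted at with a placeholder reply:

    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    // Runs in the per-client thread; client_fd is the accepted socket.
    void handle_client(int client_fd)
    {
        char buf[4096];
        ssize_t n;
        while ((n = recv(client_fd, buf, sizeof(buf), 0)) > 0) {
            std::string reply = "ok\n";  // stand-in for the real processing
            send(client_fd, reply.data(), reply.size(), 0);  // same socket, both directions
        }
        close(client_fd);
    }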

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time, and then to time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it, is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
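For the TCP case, a minimal sketch of turning keepalive on. The SO_KEEPALIVE option itself is portable; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT tunables are Linux-specific (see the HOWTO linked above), so they are guarded here:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    bool enable_keepalive(int sock_fd)
    {
        int on = 1;
        if (setsockopt(sock_fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) != 0)
            return false;

    #ifdef TCP_KEEPIDLE
        int idle = 60;      // seconds of idleness before the first probe
        int interval = 10;  // seconds between probes
        int count = 5;      // unanswered probes before the connection is dropped
        setsockopt(sock_fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(sock_fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(sock_fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    #endif
        return true;
    }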
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
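A minimal sketch of the client side of that "are you there" scheme; the wire format and function names are made up for illustration, and note_query_sent() should also be called after connecting and after every real query:

    #include <sys/socket.h>
    #include <string.h>
    #include <time.h>

    const time_t kIdleLimit = 60;  // one minute, per the scheme above
    time_t g_last_query = 0;

    void note_query_sent() { g_last_query = time(nullptr); }

    // Call periodically from the client's main loop.
    void maybe_send_heartbeat(int sock_fd)
    {
        if (time(nullptr) - g_last_query >= kIdleLimit) {
            const char ping[] = "ARE_YOU_THERE\n";  // hypothetical message
            send(sock_fd, ping, strlen(ping), 0);
            note_query_sent();  // the heartbeat itself counts as a query
            // The caller should then read and verify the "YES_I_AM" reply.
        }
    }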
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like http and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: TCP Keepalive HOWTO
or this: SO_SOCKET