Network transmission time from the TCP/IP stack - C++

Every time a client connects to my server I would like to log the transmission time between the client and the server - is there a way of doing this without having to ping the client? My server is written in C++ and listens on a LAN TCP/IP socket. I was hoping there was something in the TCP/IP stack that I could use once each client connected.
Update
Transmission time: the number of milliseconds it took the client to connect to the server over a network such as a WAN/LAN. Basically the same result you would get from running ping client1.example.com in a shell.

There is no completely portable way to find the time the client took to set up the connection, since there is no API to query the kernel for this information. But you have some avenues:
If you can capture on the interface you can easily measure the time from the first SYN to the ACK. You should be able to do this on most systems.
On some Unix systems you can enable the SO_DEBUG socket option which might give you this information. In my experience SO_DEBUG is hard to use.

There's nothing explicitly built in to the TCP APIs to do what you want, but assuming you have control over the server and clients, you could do something like:
When the client's connect() completes, have the client record the current time (using gettimeofday() or similar)
When the server accepts the TCP connection, have it immediately send back some data to the client (doesn't matter what the data is)
When the client receives the data from the server, have the client record the current time again, and subtract from it the time recorded in (1)
Now you have the TCP round-trip time; divide it by two to get an estimation of the one-way time.
Of course this method will include in the measurement the vagaries of the TCP protocol (e.g. if a TCP packet gets lost on the way from the server to the client, the packet will need to be retransmitted before the client sees it, and that will increase the measured time). That may be what you want, or if not you could do something similar with UDP packets instead (e.g. have the client send a UDP packet and the server respond, and the client then measures the elapsed time between send() and recv()). Of course with UDP you'll have to deal with firewalls and NAT blocking the packets going back to the client, etc.

Related

How Do I Send Data From One Computer To Another Without A Server (in C++)?

So I want to send an int32 (or any 4-byte value) from one PC to another. The size of the data will always be the same, and I don't need any checking to see if both PCs are online, or any disconnect function. I just want pc1 to send the data: if pc2 is offline, nothing happens, and if it's online, it stores the data somewhere.
Most tutorials I've found use a server-based way of connecting, so there are 3 PCs: 2 clients and 1 server. client1 sends data to the server and the server sends it to client2. But is there a way to send it directly to client2, as if client2 were the server?
There are two common protocols used to send raw data over an IP-based network. They are called TCP and UDP and serve slightly different purposes.
TCP is connection oriented and relies heavily on the server/client model. One host acts as a server and accepts incoming requests on a predefined socket. After the TCP connection is set up, you have a duplex (two-way) stream that you can use to exchange data.
UDP is a packet-oriented protocol. One host (usually called the server) listens for incoming packets and can answer them. No real "connection" is established, though.
You probably want to use UDP. Note that although this protocol does not establish a connection, there still needs to be at least one host waiting for incoming data on a predefined port. This one is usually called the "server". However, the client can also bind its UDP socket to a specific port and thus act as a "client" and a "server" at the same time.
You can set up both hosts to listen and send on/to the same predefined port number and achieve a connectionless, packet-oriented way to exchange data. That way both hosts act as server and client simultaneously.
How you actually implement this depends on your operating system. On Linux (and other POSIX-compatible OSes) you can use standard UDP sockets; on Windows there is an equivalent API. Either way, I suggest you first follow a tutorial on how to program a standard TCP server and client, as most of the operations on the sockets are similar (create the socket, bind it to an address:port, and read/write data from it).

Maximum number of TCP connections

I am doing a TCP client-server simulation. In the simulation I have created 2 clients and 2 servers, and I have programmed it so that read requests go to server 1 and write requests go to server 2. Thus, the client always renews its socket and makes a new connection to the servers.
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends some empty ACK packets.
I expected both clients to be able to send up to millions of requests, but currently both clients are only able to send up to 13k requests. Can anyone give me tips or advice?
Nagle's algorithm
Solutions:
Do not use small packets in your app protocol
Use the socket option TCP_NODELAY on both the client and server side
Sounds like most previously created connections are still holding resources (not yet released by the system). From the information you give,
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends some empty ACK packets.
it looks like about 1000+ connections have been released, probably because their 2MSL (TIME_WAIT) timers expired. If this is the case, I suggest you explicitly close each connection before you create a new one.
Copying and pasting your client/server code would help the analysis.

Concurrent UDP connection limit in C/C++

I wrote a server in C which receives UDP data from clients on port X. I have used an epoll-based non-blocking socket for UDP listening, with a single worker thread. Pseudocode is as follows:
on_data_receive(socket) {
    process();              // takes 2-4 milliseconds
    send_response(socket);
}
But when I send 5000 concurrent requests (using threads), the server misses 5-10% of them: on_data_receive() is never called for 5-10% of the requests. I am testing on a local network, so you can assume there is no packet loss. My question is: why isn't on_data_receive() called for some requests? What is the connection limit for a socket? As the number of concurrent requests increases, the loss ratio also increases.
Note: I used a random sleep of up to 200 milliseconds before sending each request to the server.
There is no 'connection' for UDP. All packets are just sent between peers, and the OS does some magic buffering to avoid packet loss to some degree.
But when too many packets arrive, or if the receiving application is too slow reading the packets, some packets get dropped without notice. This is not an error.
For example, Linux has a UDP receive buffer which is about 128k by default (I think). You can probably change that, but it is unlikely to solve the systematic problem that UDP may expose packet loss.
With UDP there is no congestion control like there is for TCP. The raw artifacts of the underlying transport (Ethernet, local network) are exposed. Your 5000 senders get probably more CPU time in total than your receiver, and so they can send more packets than the receiver can receive. With UDP senders do not get blocked (e.g. in sendto()) when the receiver cannot keep up receiving the packets. With UDP the sender always needs to control and limit the data rate explicitly. There is no back pressure from the network side (*).
(*) Theoretically there is no back pressure in UDP. But on some operating systems (e.g. Linux) you can observe that there is back pressure (at least to some degree) when sending for example over a local Ethernet. The OS blocks in sendto() when the network driver of the physical network interface reports that it is busy (or that its buffer is full). But this back pressure stops working when the local network adapter cannot determine the network 'being busy' for the whole network path. See also "Ethernet flow control (Pause Frames)". Through this the sending side can block the sending application even when the receive-buffer on the receiving side is full. This explains why often there seems to be a UDP back-pressure like a TCP back pressure, although there is nothing in the UDP protocol to support back pressure.

Send same packets to multiple clients

I have to develop software to send the same packets to multiple destinations.
But I must not use a multicast scheme!!! (because my boss is a stupid man)
So, anyway, the problem is this:
I have the same packets and multiple IP addresses (clients), and I cannot use multicast.
How can I do that in the best way?
I must use C++ as the language and Linux as the platform.
So please help me.
Thanx
If your boss said you can't use multicast, maybe he/she has a reason. I guess broadcasting is out of the game too?
If these are the requisites, your only chance is to establish a TCP connection with every remote host you want to send packets to.
EDIT
UDP, conversely, would not provide much benefit over multicasting if your application will run over a LAN whose configuration you are in charge of; that's the reason I specified TCP.
Maybe you have to describe your scenario a little better.
This could be done with either TCP or UDP depending on your reliability requirements. Can you tolerate lost or reordered packets? Are you prepared to handle timeouts and retransmission? If both answers are "yes", pick UDP. Otherwise stay with TCP. Then:
TCP case. Instead of a single multicast UDP socket you would have a number of TCP sockets, one per destination. You will have to figure out the best scheme for connection establishment; regular listening and accepting of connecting clients works as usual. Then you just iterate over the connected sockets and send your data to each one.
UDP case. This can be done with a single UDP socket on the server side. If you know the IPs and ports of the clients (the data receivers), use sendto(2) on the same data for each address/port. The clients would have to be recv(2)-ing at that time. If you don't know your clients upfront, you'd need to devise a scheme for clients to request the data, or just register with the server. That's where recvfrom(2) is useful - it gives you the address of the client.
You have restricted yourself by saying no to multicast. I guess sending packets to multiple clients is just a part of your requirement and unless you throw more light, it will be difficult to provide a complete solution.
Are you expecting two-way communication between the client and the server? In that case choosing multicast may prove complex. Please clarify.
You have to iterate through the clients and send packets one after another. You may want to persist the sessions if you are expecting responses back from the clients.
The choice of UDP or TCP again depends on the nature of the data being sent. With UDP you would need to handle out-of-sequence packets and also implement re-transmission.
You'll have to create a TCP listener on your server, running at a particular port and listening for incoming TCP client connections (sockets).
Every time a client connects, you'll have to cache it in some kind of data structure, like a name/value pair (the name being a unique name for the client and the value being the network stream of that client obtained as a result of the TCP socket).
Then, when you are finally ready to transmit the data, you can either iterate through this collection of name/value pair connections and send the data as a byte array to each client one by one, or spawn one thread per connected client and have it send the data concurrently.
TCP is a comparatively heavyweight protocol (due to its connection-oriented nature), and transmission of large data (like videos/images) can be quite slow.
UDP is definitely the choice for streaming large data packets, but you'll have to trade that off against the delivery guarantee.

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time, and then to time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like http and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
maybe this will help you, TCP Keepalive HOWTO
or this SO_SOCKET