I have code written in C/C++ that looks like this:
while(1)
{
    //Accept
    struct sockaddr_in client_addr;
    int client_fd = this->w_accept(&client_addr);
    char client_ip[64];
    int client_port = ntohs(client_addr.sin_port);
    inet_ntop(AF_INET, &client_addr.sin_addr, client_ip, sizeof(client_ip));

    //Listen first string
    char firststring[512];
    memset(firststring, 0, 512);
    if(this->recvtimeout(client_fd, firststring, sizeof(firststring), u->timeoutlogin) < 0){
        close(client_fd);
    }
    if(strcmp(firststring, "firststr") != 0)
    {
        cout << "Disconnected!" << endl;
        close(client_fd);
        continue;
    }

    //Send OK first string
    send(client_fd, "OK", 2, 0);

    //Listen second string
    char secondstring[512];
    memset(secondstring, 0, 512);
    if(this->recvtimeout(client_fd, secondstring, sizeof(secondstring), u->timeoutlogin) < 0){
        close(client_fd);
    }
    if(strcmp(secondstring, "secondstr") != 0)
    {
        cout << "Disconnected!!!" << endl;
        close(client_fd);
        continue;
    }

    //Send OK second string
    send(client_fd, "OK", 2, 0);
}
}
So, it's DoS-able.
I've written a very simple DoS script in Perl that takes down the server.
#Evildos.pl
use strict;
use Socket;
use IO::Handle;
sub dosfunction
{
    my $host = shift || '192.168.4.21';
    my $port = 1234;
    my $firststr = 'firststr';
    my $secondstr = 'secondstr';
    my $protocol = getprotobyname('tcp');
    $host = inet_aton($host) or die "$host: unknown host";
    socket(SOCK, AF_INET, SOCK_STREAM, $protocol) or die "socket() failed: $!";
    my $dest_addr = sockaddr_in($port, $host);
    connect(SOCK, $dest_addr) or die "connect() failed: $!";
    SOCK->autoflush(1);
    print SOCK $firststr;
    #sleep(1);
    print SOCK $secondstr;
    #sleep(1);
    close SOCK;
}
my $i;
for($i = 0; $i < 30; $i++)
{
    &dosfunction;
}
With a loop of 30 times, the server goes down.
The question is: is there a method, a system, a solution that can avoid this type of attack?
EDIT: recvtimeout
int recvtimeout(int s, char *buf, int len, int timeout)
{
    fd_set fds;
    int n;
    struct timeval tv;

    // set up the file descriptor set
    FD_ZERO(&fds);
    FD_SET(s, &fds);

    // set up the struct timeval for the timeout
    tv.tv_sec = timeout;
    tv.tv_usec = 0;

    // wait until timeout or data received
    n = select(s+1, &fds, NULL, NULL, &tv);
    if (n == 0){
        return -2; // timeout!
    }
    if (n == -1){
        return -1; // error
    }

    // data must be here, so do a normal recv()
    return recv(s, buf, len, 0);
}
I don't think there is any 100% effective software solution to DOS attacks in general; no matter what you do, someone could always throw more packets at your network interface than it can handle.
In this particular case, though, it looks like your program can only handle one connection at a time -- that is, incoming connection #2 won't be processed until connection #1 has completed its transaction (or timed out). So that's an obvious choke point -- all an attacker has to do is connect to your server and then do nothing, and your server is effectively disabled for (however long your timeout period is).
To avoid that you would need to rewrite the server code to handle multiple TCP connections at once. You could do that by switching to non-blocking I/O (by passing the O_NONBLOCK flag to fcntl()) and using select(), poll(), or similar to wait for I/O on multiple sockets at once, by spawning multiple threads or sub-processes to handle incoming connections in parallel, or by using async I/O. (I personally prefer the first solution, but all can work to varying degrees.) In the first approach it is also practical to do things like forcibly closing any existing sockets from a given IP address before accepting a new socket from that address, which means any given attacking computer could tie up at most one socket on your server at a time; that makes it harder to DOS your machine unless the attacker has access to a number of client machines.
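For illustration, here is a minimal sketch of that first approach (not your original code; the function name, buffer sizes, and the FD_SETSIZE cap are assumptions): one select() loop serves the listening socket and every client socket, so a single stalled client cannot monopolize the server.

// Sketch: one select() loop handling many clients at once.
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <unistd.h>

void run_server(int listen_fd)             // listen_fd: already bound and listening
{
    int clients[FD_SETSIZE];
    int nclients = 0;
    fd_set rset;

    fcntl(listen_fd, F_SETFL, O_NONBLOCK);

    for (;;) {
        FD_ZERO(&rset);
        FD_SET(listen_fd, &rset);
        int maxfd = listen_fd;
        for (int i = 0; i < nclients; i++) {
            FD_SET(clients[i], &rset);
            if (clients[i] > maxfd) maxfd = clients[i];
        }

        struct timeval tv = { 1, 0 };       // wake up periodically to expire idle clients
        if (select(maxfd + 1, &rset, NULL, NULL, &tv) < 0)
            continue;

        if (FD_ISSET(listen_fd, &rset) && nclients < FD_SETSIZE) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0) {
                fcntl(fd, F_SETFL, O_NONBLOCK);
                clients[nclients++] = fd;   // also record a per-client timestamp here
            }
        }

        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(clients[i], &rset))
                continue;
            char buf[512];
            ssize_t n = recv(clients[i], buf, sizeof(buf), 0);
            if (n <= 0) {                   // peer closed or error: drop the client
                close(clients[i]);
                clients[i--] = clients[--nclients];
            } else {
                // feed buf[0..n) into this client's protocol state machine
            }
        }
    }
}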
You might read this article for more discussion about handling many TCP connections at the same time.
The main issue with DOS and DDOS attacks is that they play on your weakness: namely the fact that there is a limited memory / number of ports / processing resources that you can use to provide the service. Even if you have infinite scalability (or close) using something like the Amazon farms, you'll probably want to limit it to avoid the bill going through the roof.
At the server level, your main worry should be to avoid a crash, by imposing self-preservation limits. You can for example set a maximum number of connections that you know you can handle and simply refuse any other.
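As an illustration only (the limit and the helper name are made up), the simplest form of such a self-preservation limit is to count active clients and immediately drop anything over the cap:

// Minimal sketch, assuming the caller tracks how many clients are currently active.
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 64    // arbitrary example limit

int accept_or_shed(int listen_fd, int active_clients)
{
    int fd = accept(listen_fd, NULL, NULL);
    if (fd < 0)
        return -1;
    if (active_clients >= MAX_CLIENTS) {
        close(fd);        // at capacity: shed load instead of falling over
        return -1;
    }
    return fd;            // caller now owns fd and increments its active count
}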
A full strategy will include specialized equipment, like firewalls, but there is always a way to get around them and you will have to live with that.
For an example of a nasty attack, read about Slowloris on Wikipedia.
Slowloris tries to keep many connections to the target web server open and hold them open as long as possible. It accomplishes this by opening connections to the target web server and sending a partial request. Periodically, it will send subsequent HTTP headers, adding to—but never completing—the request. Affected servers will keep these connections open, filling their maximum concurrent connection pool, eventually denying additional connection attempts from clients.
There are many variants of DOS attacks, so a specific answer is quite difficult.
Your code leaks a file handle when it succeeds; this will eventually make you run out of fds to allocate, making accept() fail.
close() the socket when you're done with it.
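In the code from the question that means closing client_fd once the exchange is finished (roughly; where the close goes depends on what the connection is used for afterwards):

//Send OK second string
send(client_fd, "OK", 2, 0);
// ... do whatever work the connection was accepted for ...
close(client_fd);   // without this, each successful login leaks one descriptor
continue;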
Also, to directly answer your question, there is no solution for DOS caused by faulty code other than correcting it.
This isn't a cure-all for DOS attacks, but using non-blocking sockets will definitely help for scalability. And if you can scale up, you can mitigate many DOS attacks. This design change includes setting both the listen socket used in accept calls and the client connection sockets to non-blocking.
Then, instead of blocking on a recv(), send(), or accept() call, you block on a poll, epoll, or select call, and then handle that event for that connection as much as you are able to. Use a reasonable timeout (e.g. 30 seconds) so that you can wake up from the polling call to sweep and close any connections that don't seem to be progressing through your protocol chain.
This basically requires every socket to have its own "connection" struct that keeps track of the state of that connection with respect to the protocol you implement. It likely also means keeping a (hash) table of all sockets so they can be mapped to their connection structure instance. It also means "sends" are non-blocking as well. Send and recv can return partial data amounts anyway.
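A minimal sketch of such a per-connection record and the timeout sweep might look like this (the names and field sizes are illustrative, not taken from any particular library):

// Illustrative per-connection state for a poll()/epoll-style server.
#include <stddef.h>
#include <time.h>
#include <unistd.h>

enum conn_state { WAIT_FIRST, WAIT_SECOND, DONE };

struct connection {
    int             fd;
    enum conn_state state;          // where this client is in the protocol
    time_t          last_activity;  // updated on every successful recv()/send()
    char            inbuf[512];     // partial reads accumulate here
    size_t          inlen;
    char            outbuf[512];    // partial writes drain from here
    size_t          outlen;
};

// Periodic sweep: close anything that has not progressed within the timeout.
void sweep_stale(struct connection *conns, int *count, int timeout_sec)
{
    time_t now = time(NULL);
    for (int i = 0; i < *count; i++) {
        if (now - conns[i].last_activity > timeout_sec) {
            close(conns[i].fd);
            conns[i--] = conns[--*count];   // swap-remove from the table
        }
    }
}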
You can look at an example of a non-blocking socket server on my project code here. (Look around line 360 for the start of the main loop in Run method).
An example of setting a socket into non-blocking state:
int SetNonBlocking(int sock)
{
    int result = -1;
    int flags = 0;

    flags = ::fcntl(sock, F_GETFL, 0);
    if (flags != -1)
    {
        flags |= O_NONBLOCK;
        result = fcntl(sock, F_SETFL, flags);
    }
    return result;
}
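Illustrative usage (listen_fd here is assumed to be your already-created listening socket): apply it both to the listener and to every accepted socket.

// Illustrative usage of the helper above.
if (SetNonBlocking(listen_fd) == -1) {
    // handle fcntl failure
}
int client_fd = accept(listen_fd, NULL, NULL);
if (client_fd >= 0 && SetNonBlocking(client_fd) == -1) {
    // handle fcntl failure
}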
I would use the asynchronous connection handling from boost::asio to create multiple connection handlers (it works in both single- and multi-threaded environments). In the single-threaded case, you just need to run boost::asio::io_service::run from time to time in order to make sure communications have time to be processed.
The reason why you want to use asio is because it's very good at handling asynchronous communication logic, so it won't block (as in your case) if a connection gets stuck. You can even arrange how much processing you want to devote to opening new connections while continuing to serve existing ones.
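A rough sketch of what the accept side looks like with Boost.Asio's async_accept (C++11; recent Boost spells the service object io_context, older versions io_service; treat this as an outline rather than drop-in code):

// Sketch: asynchronous accept loop with Boost.Asio, driven by a single thread.
#include <boost/asio.hpp>
#include <functional>
#include <memory>

using boost::asio::ip::tcp;

int main()
{
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 1234));

    std::function<void()> do_accept = [&]() {
        auto sock = std::make_shared<tcp::socket>(io);
        acceptor.async_accept(*sock, [&, sock](const boost::system::error_code& ec) {
            if (!ec) {
                // chain async_read/async_write operations on *sock here
            }
            do_accept();              // immediately go back to accepting
        });
    };

    do_accept();
    io.run();                         // one thread drives all connections
}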
Related
I have a system that can start multiple instances.
Every instance has a client and a server.
They are connected over socket/TCP
Every instance is started by starting a client.
The client starts (checks if an IP is available; if not, increases the IP by 1 and checks again ...).
The client then starts the server with the free IP and connects to it (for legacy reasons it has to be like this).
Instance numbers 2, 3, 4, 5 work without issues.
...
Instance number 6 -> fails when checking if the first IP in the range is available.
To check if an IP is already in use, I do not close the socket on the server side, so that it can accept the additional connection.
On the client-side, I check if I can connect to the server-side with the following code:
bool CheckIPInUse(char *ip)
{
    bool ret = false;
    int port = 12345;
    int sock;
    struct sockaddr_in serv_addr;

    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(port);

    // **non blocking** because I want the check to be fast.
    sock = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
    inet_pton(AF_INET, ip, &serv_addr.sin_addr);

    int ret_conn = connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    if (ret_conn == 0){
        fprintf(stdout, "connected");
        ret = true;
    }
    else if (ret_conn < 0 && (errno != EINPROGRESS)){
        fprintf(stdout, "failed to connect");
    }
    else
    {
        int check_if_connected = 10;
        while (check_if_connected--)
        {
            socklen_t len = sizeof(serv_addr);
            int ret_getpeer = getpeername(sock, (struct sockaddr *)&serv_addr, &len);
            if (ret_getpeer == 0)
            {
                fprintf(stdout, "connected");
                ret = true;
                break;
            }
            usleep(100000);
        }
    }
    close(sock);
    return ret;
}
This works for the first 5 instances.
The 6th instance fails to connect to the first IP in the range and tries to start the server with an IP which is already in use (it is always the 6th).
Is there any better way to check programmatically if IP/Port is already busy?
Any ideas on what to check for the failure in instance number 6?
The only way to check if an ip/port on a server is available is to bind() to it. If it worked, it was available (but not any more).
Any approach that involves a test connect()ion first, to see if it fails, or anything along the lines of poking somewhere in /proc to see which IPs and ports are in use -- nothing along these lines will ever be 100% foolproof. That's because even if you reach the conclusion that the port is available, it may no longer be by the time you get around to try to bind() to it.
Now, you can take, as a starting position, that a particular IP and/or port range is reserved for your application's use, and you only wish to arbitrate IP/port allocation between different instances of your application. In that case you can do pretty much whatever you want; you're not limited to attempting to actually start instances of your application and hoping for the best. One simplistic approach is to use lock files in /var/tmp to represent each possible IP/port combination, and have your application try, in turn, to acquire a lock on the corresponding lock file first; once the lock file is acquired, the corresponding IP/port can be established at your leisure, but the lock file must remain locked until the IP/port is no longer in use.
But in terms of attempting to check if a socket port is available, or not, the only way to do it is to bind() it, because that, by definition, is what it does. You could attempt to implement a multi-layered approach, like trying to connect() first, and then attempt to bind() it, and if the bind() fails, then keep looking for a free port. But that's creating extra complexity, without much of a benefit.
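A sketch of that bind()-based check (the helper name is made up): if bind() succeeds you own the endpoint, so keep the returned descriptor instead of closing it and binding again later, which would reintroduce the race.

// Returns a bound, listening fd if ip:port was free, or -1 if it is in use.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int claim_endpoint(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 16) < 0) {
        close(fd);     // EADDRINUSE etc.: someone else owns it
        return -1;
    }
    return fd;         // keep this fd; it is your claim on the endpoint
}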
Did you check that the server did not meet its maximum backlog length ?
You may be getting "connection refused" if the server you are trying to connect to has more pending connections than the defined backlog.
So if multiple clients are testing at the same time, one of them may encounter this.
The most probable cause of your problem is that your client's connect() succeeds because of the server's listen queue: the kernel completes the connection even though no server instance has accept()ed it yet. The best way to avoid this problem is to close the socket on which you call accept(2) once all the instances are in use, and reopen it again when any of the server instances finishes.
The listen queue makes the kernel accept (send the SYN/ACK segment for) connections on the not-yet-accept()ed socket, which makes connection establishment quicker for the next server instances when many such connections are entering the system. All those connections are handled through the accept(2) socket, so the best way to accept exactly five such connections is to close the accept socket as soon as the last connection has been established (this will not avoid the problem if a connection happens to enter the server in the time between one accept(2) and the closing of the socket, but such a connection can then be closed explicitly).
In my opinion, you should have a master server process that forks new processes to handle the different connections and closes the accept socket as soon as it reaches full capacity. Once one of the servers attending the connections closes one of them, it should reopen the accept socket and accept a new connection.
IMHO, the most robust way of implementing such a system is to let the extra connections in but not attend to them, so the connection remains open in case a new client happens to arrive, and the server can close it if it is not attended within a timeout interval. Having a sixth client already connected, but waiting for the server to say hello, leaves you in a state in which that client can start talking to the server as soon as the last service ends.
After closing the client socket on the server side and exiting the application, the socket stays open for some time.
I can see it via netstat
Every 0.1s: netstat -tuplna | grep 6676
tcp 0 0 127.0.0.1:6676 127.0.0.1:36065 TIME_WAIT -
I use log4cxx logging with the telnet appender. log4cxx uses APR sockets.
Socket::close() method looks like that:
void Socket::close() {
    if (socket != 0) {
        apr_status_t status = apr_socket_close(socket);
        if (status != APR_SUCCESS) {
            throw SocketException(status);
        }
        socket = 0;
    }
}
And it completes successfully. But after the program has finished I can still see the open socket via netstat, and if the program starts again log4cxx is unable to open port 6676, because it is busy.
I tried to modify log4cxx to shut down the socket before closing it:
void Socket::close() {
    if (socket != 0) {
        apr_status_t shutdown_status = apr_socket_shutdown(socket, APR_SHUTDOWN_READWRITE);
        printf("Socket::close shutdown_status %d\n", shutdown_status);
        if (shutdown_status != APR_SUCCESS) {
            printf("Socket::close WTF %d\n", shutdown_status != APR_SUCCESS);
            throw SocketException(shutdown_status);
        }

        apr_status_t close_status = apr_socket_close(socket);
        printf("Socket::close close_status %d\n", close_status);
        if (close_status != APR_SUCCESS) {
            printf("Socket::close WTF %d\n", close_status != APR_SUCCESS);
            throw SocketException(close_status);
        }

        socket = 0;
    }
}
But it didn't help; the behaviour is still reproducible.
This is not a bug. TIME_WAIT (and CLOSE_WAIT) exist by design, for safety. You may, however, adjust the wait time. In any case, from the server's perspective the socket is closed and it no longer counts against your ulimit; it has little visible impact unless you are doing a stress test.
As noted by Calvin this isn't a bug, it's a feature. Time Wait is a socket state that says, this socket isn't in use any more but nevertheless can't be reused quite yet.
Imagine you have a socket open and some client is sending data. The data may be backed up in the network or be in-flight when the server closes its socket.
Now imagine you start the service again or start some new service. The packets on the wire aren't aware that it's a new service, and the service can't know the packets were destined for a service that's gone. The new service may try to parse the packets and fail because they're in some odd format, or the client may get an unrelated error back and keep trying to send, maybe because the sequence numbers don't match and the receiving host will get some odd error. With TIME_WAIT the client will get notified that the socket is closed and the server won't potentially get odd data. A win-win. The time it waits should be sufficient for all in-transit data to be flushed from the system.
Take a look at this post for some additional info: Socket options SO_REUSEADDR and SO_REUSEPORT, how do they differ? Do they mean the same across all major operating systems?
TIME_WAIT is a socket state that allows all in-flight packets that could remain from the connection to arrive or die before the connection parameters (source address, source port, destination address, destination port) can be reused again. The kernel simply sets a timer to wait for this time to elapse before allowing you to reuse that socket again. But you cannot shorten it (even if you could, you had better not do it), because you have no way to know whether there are still packets travelling, nor to accelerate or kill them. The only possibility you have is to wait for a socket bound to that port to time out and pass from the TIME_WAIT state to the CLOSED state.
If you were allowed to reuse the connection (I think there's an option, or something that can be done in the Linux kernel) and you received an old connection's packet, you could get a connection reset caused by that packet. This could lead to more problems in the new connection. These issues are avoided by making you wait for all traffic belonging to the old connection to die or reach its destination before you use that socket again.
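For the original symptom (the restarted listener cannot re-bind port 6676 while old connections sit in TIME_WAIT), the usual remedy is not to shorten TIME_WAIT but to set SO_REUSEADDR on the listening socket before bind(). A plain BSD-sockets sketch is below; whether log4cxx/APR exposes the equivalent option on its telnet appender socket is a separate question.

// Sketch: a listener that can re-bind its port even while old connections are in TIME_WAIT.
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>

int make_listener(int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));   // the key line

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}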
I am currently working on a project written in C++ involving UDP real time connection. I receive UDP packets from a control computer containing commands to start/stop an infinite while loop that reads data from an IMU and sends that data to the control computer.
My problem is the following: First I implemented an exit condition from the loop using recvfrom() and read(), but the control computer sends a UDP packet every second, which was delaying the whole loop and made sending the data in the desired time interval of 5ms impossible.
I tried to fix this problem by using fcntl(fd, F_SETFL, O_NONBLOCK); and using only read(), which actually works fine, but I am unsure whether this is a wise idea or not, since I am not checking for errors anymore. Is there any elegant way to solve this problem? I thought about using Pthreads or something like that; however, I have never worked with threads or parallel programming, so I would have to spend some time learning that.
I appreciate any advice on that problem you could give me.
Here is a code example:
//include
...
int main() {
    RNet cmd;               // RNet: struct that contains all the information of the UDP header and the command
    RNet* pCmd = &cmd;
    ssize_t b;
    int fd2;
    struct sockaddr_in snd; // sender is control computer
    socklen_t length;

    // further declaration of variables, connecting to socket, etc...
    ...

    fcntl(fd2, F_SETFL, O_NONBLOCK);

    while (1)
    {
        // read messages from control computer
        if ((b = read(fd2, pCmd, 19)) > 0) {
            memcpy(&cmd, pCmd, b);
        }

        // transmission
        while (cmd.CLout.MotionCommand == 1) // MotionCommand: 1 - send messages; 0 - do nothing
        {
            if(time_elapsed >= 5) // elapsed time in ms
            {
                // update sensor values
                ...
                //sendto ()
                ...
                // update control time, timestamp, etc.
                ...
            }
            if (recvfrom(fd2, pCmd, (int)sizeof(pCmd), 0, (struct sockaddr*) &snd, &length) < 0) {
                perror("error receiving data");
                return 0;
            }
            // checking Control Model Command
            if ((b = read(fd2, pCmd, 19)) > 0) {
                memcpy(&cmd, pCmd, b);
            }
        }
    }
}
I really like the "blocking calls on multiple threads" design. It enables you to have distinct independent tasks, and you don't have to worry about how each task can disturb another. It can have some drawbacks but it is usually a good fit for many needs.
To do that, just use pthread_create to create a new thread for each task (you may keep the main thread for one task). In your case, you should have a thread to receive commands, and another one to send your data. You also need for the receiving thread to notify the sending thread of the commands. To do that, you can use some synchronization tool, like a mutex.
Overall, you should have your receiving thread blocking on recvfrom, and the sending thread waiting for a signal from the mutex (wait for the mutex to be freed, technically). When the receiving thread receive a start command, it signals the mutex and go back to recvfrom (optionally you can set a variable to provide more information to the other thread).
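A minimal sketch of that layout (using a mutex plus a condition variable for the notification; the command decoding and the shared flag are illustrative):

// Sketch: receiver thread updates the shared command, sender thread reacts to it.
#include <pthread.h>
#include <stdbool.h>
#include <sys/types.h>
#include <sys/socket.h>

static pthread_mutex_t cmd_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cmd_cond = PTHREAD_COND_INITIALIZER;
static bool motion_on = false;

void *receiver_thread(void *arg)
{
    int fd = *(int *)arg;
    char cmd[64];
    for (;;) {
        ssize_t n = recvfrom(fd, cmd, sizeof(cmd), 0, NULL, NULL);  // blocking is fine here
        if (n <= 0)
            continue;
        pthread_mutex_lock(&cmd_lock);
        motion_on = (cmd[0] == '1');          // illustrative: decode the real command here
        pthread_cond_signal(&cmd_cond);
        pthread_mutex_unlock(&cmd_lock);
    }
    return NULL;
}

void *sender_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&cmd_lock);
        while (!motion_on)
            pthread_cond_wait(&cmd_cond, &cmd_lock);   // sleep until a start command arrives
        pthread_mutex_unlock(&cmd_lock);

        // read the IMU, sendto() the control computer, wait ~5 ms, then re-check motion_on
    }
    return NULL;
}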
As a comment, remember that UDP is 1-to-many, so your code here will react to any packet sent to you (even from some random or malicious host). You may want to filter on the remote sockaddr after recvfrom, or use connect + recv. It depends on what you want.
My problem is that I have a thread that is in a recv() call. The remote host suddenly terminates (without a close() socket call) and the recv() call continues to block. This is obviously not good because when I am joining the threads to close the process (locally) this thread will never exit because it is waiting on a recv that will never come.
So my question is what method do people generally consider to be the best way to deal with this issue? There are some additional things of note that should be known before answering:
There is no way for me to ensure that the remote host closes the socket prior to exit.
This solution cannot use external libraries (such as boost). It must use standard libraries/features of C++/C (preferably not C++0x specific).
I know this has likely been asked in the past, but I'd like to get someone's take on how to correct this issue properly (without doing something super hacky, which I would have done in the past).
Thanks!
Assuming you want to continue to use blocking sockets, you can use the SO_RCVTIMEO socket option:
SO_RCVTIMEO and SO_SNDTIMEO
Specify the receiving or sending timeouts until reporting an error. The parameter is a struct timeval. If an input or output function blocks for this period of time, and data has been sent or received, the return value of that function will be the amount of data transferred; if no data has been transferred and the timeout has been reached then -1 is returned with errno set to EAGAIN or EWOULDBLOCK just as if the socket was specified to be nonblocking. If the timeout is set to zero (the default) then the operation will never timeout.
So, before you begin receiving:
struct timeval timeout = { timo_sec, timo_usec };
int r = setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
assert(r == 0); /* or something more user friendly */
If you are willing to use non-blocking I/O, then you can use poll(), select(), epoll(), kqueue(), or whatever the appropriate event dispatching mechanism is for your system. The reason you need to use non-blocking I/O is that you need to allow the system call to recv() to return to notify you that there is no data in the socket's input queue. The example to use is a little bit more involved:
for (;;) {
    ssize_t bytes = recv(s, buf, sizeof(buf), MSG_DONTWAIT);
    if (bytes > 0) { /* ... */ continue; }
    if (bytes < 0) {
        if (errno == EWOULDBLOCK) {
            struct pollfd p = { s, POLLIN, 0 };
            int r = poll(&p, 1, timo_msec);
            if (r == 1) continue;
            if (r == 0) {
                /* ...handle timeout */
                /* either continue or break, depending on policy */
            }
        }
        /* ...handle errors */
        break;
    }
    /* connection is closed */
    break;
}
You can use TCP keep-alive probes to detect if the remote host is still reachable. When keep-alive is enabled, the OS will send probes if the connection has been idle for too long; if the remote host doesn't respond to the probes, then the connection is closed.
On Linux, you can enable keep-alive probes by setting the SO_KEEPALIVE socket option, and you can configure the parameters of the keep-alive with the TCP_KEEPCNT, TCP_KEEPIDLE, and TCP_KEEPINTVL socket options. See tcp(7) and socket(7) for more info on those.
Windows also uses the SO_KEEPALIVE socket option for enabling keep-alive probes, but for configuring the keep-alive parameters, use the SIO_KEEPALIVE_VALS ioctl.
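On Linux, enabling and tuning the probes might look like this (the idle/interval/count values are just examples):

// Sketch: enable keep-alive probes on an existing TCP socket (Linux).
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int enable_keepalive(int fd)
{
    int on = 1, idle = 30, interval = 5, count = 3;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    // Start probing after 30 s of idle time, probe every 5 s, give up after 3 failed probes.
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    return 0;
}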
You could use select()
From http://linux.die.net/man/2/select
int select(int nfds, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds, struct timeval *timeout);
select() blocks until the first event (read ready, write ready, or exception) on one or more file descriptors or a timeout occurs.
sockopts and select are probably the ideal choices. An additional option that you should consider as a backup is to send your process a signal (for example using the alarm() call). This should force any syscall in progress to exit and set errno to EINTR.
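A sketch of the signal-based backup (the helper is hypothetical): install a no-op SIGALRM handler without SA_RESTART, so a blocked recv() is interrupted and returns -1 with errno set to EINTR.

// Sketch: interrupt a blocking recv() with SIGALRM so it returns EINTR.
#include <signal.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

static void on_alarm(int sig) { (void)sig; }   // no-op: its only job is to interrupt recv()

ssize_t recv_with_alarm(int fd, void *buf, size_t len, unsigned seconds)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;          // note: no SA_RESTART, so recv() is not restarted
    sigaction(SIGALRM, &sa, NULL);

    alarm(seconds);                    // deliver SIGALRM after `seconds`
    ssize_t n = recv(fd, buf, len, 0);
    int saved_errno = errno;
    alarm(0);                          // cancel any pending alarm
    errno = saved_errno;

    if (n < 0 && errno == EINTR)
        return -2;                     // treat as a timeout
    return n;
}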
I have a program that maintains a list of "streaming" sockets. These sockets are configured to be non-blocking sockets.
Currently, I have used a list to store these streaming sockets. I have some data that I need to send to all these streaming sockets hence I used the iterator to loop through this list of streaming sockets and calling the send_TCP_NB function below:
The issue is that my own program buffer, which stores the data before it goes to this send_TCP_NB function, slowly decreases in free size, indicating that sending is slower than the rate at which data is put into the program buffer. Data is put into the program buffer at about 1000 items per second, and each item is quite small, about 100 bytes.
Hence, I am not sure if my send_TCP_NB function is working efficiently or correctly.
int send_TCP_NB(int cs, char data[], int data_length) {
    bool sent = false;
    FD_ZERO(&write_flags);    // initialize the writer socket set
    FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer
    int status;
    int err;
    struct timeval waitd;     // set the time limit for waiting
    waitd.tv_sec = 0;
    waitd.tv_usec = 1000;

    err = select(cs+1, NULL, &write_flags, NULL, &waitd);
    if(err==0)
    {
        // time limit expired
        printf("Time limit expired!\n");
        return 0; // send failed
    }
    else
    {
        while(!sent)
        {
            if(FD_ISSET(cs, &write_flags))
            {
                FD_CLR(cs, &write_flags);
                status = send(cs, data, data_length, 0);
                sent = true;
            }
        }

        int nError = WSAGetLastError();
        if(nError != WSAEWOULDBLOCK && nError != 0)
        {
            printf("Error sending non blocking data\n");
            return 0;
        }
        else
        {
            if(nError == WSAEWOULDBLOCK)
            {
                printf("%d\n", nError);
            }
            return 1;
        }
    }
}
One thing that would help is if you thought out exactly what this function is supposed to do. What it actually does is probably not what you wanted, and has some bad features.
The major features of what it does that I've noticed are:
Modify some global state
Wait (up to 1 millisecond) for the write buffer to have some empty space
Abort if the buffer is still full
Send 1 or more bytes on the socket (ignoring how much was sent)
If there was an error (including the send deciding it would have blocked despite the earlier check), obtain its value. Otherwise, obtain a random error value
Possibly print something to screen, depending on the value obtained
Return 0 or 1, depending on the error value.
Comments on these points:
Why is write_flags global?
Did you really intend to block in this function?
This is probably fine
Surely you care how much of the data was sent?
I do not see anything in the documentation that suggests that this will be zero if send succeeds
If you cleared up what the actual intent of this function was, it would probably be much easier to ensure that this function actually fulfills that intent.
That said
I have some data that I need to send to all these streaming sockets
What precisely is your need?
If your need is that the data must be sent before proceeding, then using a non-blocking write is inappropriate*, since you're going to have to wait until you can write the data anyways.
If your need is that the data must be sent sometime in the future, then your solution is missing a very critical piece: you need to create a buffer for each socket which holds the data that needs to be sent, and then you periodically need to invoke a function that checks the sockets to try writing whatever it can. If you spawn a new thread for this latter purpose, this is the sort of thing select is very useful for, since you can make that new thread block until it is able to write something. However, if you don't spawn a new thread and just periodically invoke a function from the main thread to check, then you don't need to bother. (just write what you can to everything, even if it's zero bytes)
*: At least, it is a very premature optimization. There are some edge cases where you could get slightly more performance by using the non-blocking writes intelligently, but if you don't understand what those edge cases are and how the non-blocking writes would help, then guessing at it is unlikely to get good results.
EDIT: as another answer implied, this is something the operating system is good at anyways. Rather than try to write your own code to manage this, if you find your socket buffers filling up, then make the system buffers larger. And if they're still filling up, you should really give serious thought to the idea that your program needs to block anyways, so that it stops sending data faster than the other end can handle it. i.e. just use ordinary blocking sends for all of your data.
Some general advice:
Keep in mind you are multiplying data. So if you get 1 MB/s in, you output N MB/s with N clients. Are you sure your network card can take it? It gets worse with smaller packets, as you get more overall overhead. You may want to consider broadcasting.
You are using non blocking sockets, but you block while they are not free. If you want to be non blocking, better discard the packet immediately if the socket is not ready.
What would be better is to "select" more than one socket at once. Do everything that you are doing but for all the sockets that are available. You'll write to each "ready" socket, then repeat again while there are sockets that are not ready. This way, you'll proceed with the sockets that are available first, and then with some chance, the busy sockets will become themselves available.
The while (!sent) loop is useless and probably buggy. Since you are checking only one socket, FD_ISSET will always be true. It is wrong to check FD_ISSET again after an FD_CLR.
Keep in mind that your OS has some internal buffers for the sockets and that there are way to extend them (not easy on Linux, though, to get large values you need to do some config as root).
There are some socket libraries that will probably work better than what you can implement in a reasonable time (boost::asio and zmq for the ones I know).
If you need to implement it yourself, (i.e. because for instance zmq has its own packet format), consider using a threadpool library.
EDIT:
Sleeping 1 millisecond is probably a bad idea. Your thread will probably get descheduled and it will take much more than that before you get some CPU time again.
This is just a horrible way to do things. The select serves no purpose but to waste time. If the send is non-blocking, it can mangle data on a partial send. If it's blocking, you still waste arbitrarily much time waiting for one receiver.
You need to pick a sensible I/O strategy. Here is one: Set all sockets non-blocking. When you need to send data to a socket, just call write. If all the data writes, lovely. If not, save the portion of data that wasn't sent for later and add the socket to your write set. When you have nothing else to do, call select. If you get a hit on any socket in your write set, write as many bytes as you can from what you saved. If you write all of them, remove that socket from the write set.
(If you need to write data to a socket that's already in your write set, just add the data to the saved data to be sent. You may need to close the connection if too much data gets buffered.)
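A sketch of that per-socket pending buffer (POSIX errno shown here; on Windows check WSAGetLastError() for WSAEWOULDBLOCK instead; the struct and sizes are illustrative):

// Sketch: queue unsent bytes per socket and flush them when select() says writable.
#include <stddef.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

struct out_queue {
    char   buf[65536];
    size_t len;             // bytes still waiting to be sent
};

// Try to send; stash whatever didn't fit so the select() loop can finish it later.
int send_or_queue(int fd, struct out_queue *q, const char *data, size_t n)
{
    if (q->len == 0) {                         // nothing queued: try to send right away
        ssize_t sent = send(fd, data, n, 0);
        if (sent < 0 && errno != EWOULDBLOCK && errno != EAGAIN)
            return -1;                         // real error: caller should close fd
        if (sent < 0)
            sent = 0;
        data += sent;
        n -= (size_t)sent;
    }
    if (n > sizeof(q->buf) - q->len)
        return -1;                             // too much buffered: give up on this client
    memcpy(q->buf + q->len, data, n);          // remember the remainder for later
    q->len += n;
    return 0;                                  // if q->len > 0, add fd to the write set
}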
A better idea might be to use a library that already does all these things. Boost::asio is a good one.
You are calling select() before calling send(). Do it the other way around. Call select() only if send() reports WSAEWOULDBLOCK, eg:
int send_TCP_NB(int cs, char data[], int data_length)
{
    int status;
    int err;
    struct timeval waitd;
    char *data_ptr = data;

    while (data_length > 0)
    {
        status = send(cs, data_ptr, data_length, 0);
        if (status > 0)
        {
            data_ptr += status;
            data_length -= status;
            continue;
        }

        err = WSAGetLastError();
        if (err != WSAEWOULDBLOCK)
        {
            printf("Error sending non blocking data\n");
            return 0; // send failed
        }

        FD_ZERO(&write_flags);
        FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer
        waitd.tv_sec = 0;
        waitd.tv_usec = 1000;

        status = select(cs+1, NULL, &write_flags, NULL, &waitd);
        if (status > 0)
            continue;

        if (status == 0)
            printf("Time limit expired!\n");
        else
            printf("Error waiting for time limit!\n");

        return 0; // send failed
    }
    return 1;
}