I am learning to build a client-server communication model over sockets using Winsock (C++) at school.
I am facing the problem of how to make a server serve many clients with limited resources. My task is to handle this with multithreading, and specifically to combine two facilities of C++ and Winsock: select() and _beginthreadex() (under limited assumptions about resources).
If I use only select(), my server can serve at most 1024 clients, and if I use only _beginthreadex(), it is limited by thread resources.
But I need to serve more than 4096 clients using just select() and _beginthreadex() together (I cannot use only one of them, for the reasons above).
This is the structure of my program using select():
SOCKET client[FD_SETSIZE], connSock;
fd_set readfds, initfds; //use initfds to initiate readfds at the begining of every loop step
sockaddr_in clientAddr;
int nEvents, clientAddrLen;
bool ret;
for (int i = 0; i < FD_SETSIZE; i++)
client[i] = 0; // 0 indicates available entry
FD_ZERO(&initfds);
//Step 5: Communicate with clients
while (1) {
FD_SET(listenSock, &initfds);
for (int i = 0; i < FD_SETSIZE; ++i) {
if (client[i] > 0) {
FD_SET(client[i], &initfds);
}
}
readfds = initfds; /* structure assignment */
nEvents = select(0, &readfds, 0, 0, 0);
if (nEvents < 0) {
printf("\nError! Cannot poll sockets: %d", WSAGetLastError());
break;
}
//new client connection
if (FD_ISSET(listenSock, &readfds)) {
clientAddrLen = sizeof(clientAddr);
if ((connSock = accept(listenSock, (sockaddr *)&clientAddr, &clientAddrLen)) == INVALID_SOCKET) { // SOCKET is unsigned, so "< 0" can never be true
printf("\nError! Cannot accept new connection: %d", WSAGetLastError());
break;
}
else {
printf("You got a connection from %s\n", inet_ntoa(clientAddr.sin_addr)); /* prints client's IP */
cout << "\tconnSocket: " << (int)connSock << endl;
int i;
for (i = 0; i < FD_SETSIZE; i++)
if (client[i] == 0) {
client[i] = connSock;
FD_SET(client[i], &initfds);
break;
}
if (i == FD_SETSIZE) {
printf("\nToo many clients.");
closesocket(connSock);
}
if (--nEvents == 0)
continue; //no more event
}
}
//receive data from clients
for (int i = 0; i < FD_SETSIZE; i++) {
if (client[i] == 0)
continue;
if (FD_ISSET(client[i], &readfds)) {
ret = business(client[i]); // the function handle logic of my server
if (ret == false ) {
FD_CLR(client[i], &initfds);
closesocket(client[i]);
client[i] = 0;
}
if (--nEvents <= 0)
break; //all reported events handled; don't count sockets that weren't ready
}
}
}
//Step 5: Close socket
closesocket(listenSock);
With the program above, my server cannot serve more than 4096 clients. Can somebody suggest a solution to my problem? Thank you very much!
One thread can handle 4096 clients, as you show, because an fd_set is limited to 4096 entries (the FD_SETSIZE in your build). However, you can have multiple threads, and multiple fd_sets.
Each thread would have its own fd_set to read from and reply to, handling up to 4K clients. When client 4097 arrives, you create a new thread to handle a new set of sockets. You'll need synchronisation around accepting and connecting, but apart from that, the thread handling the first 4K clients can run independently of the thread handling the next 4K.
A full example is out of scope. :-)
Related
I reworked the server into two detached threads around a poll() loop: one for receiving and one for sending data.
Everything works fine until I set the client's sending frequency to 16 ms (< ~200 ms). In that state one thread always wins the race and answers only one client, with ~1 µs ping. What do I need to do to send and receive data with poll() over two (or one) UDP sockets?
Part of server's code:
struct pollfd pollStruct[2];
// int timeout_sec = 1;
pollStruct[0].fd = getReceiver()->getSocketDesc();
pollStruct[0].events = POLLIN;
pollStruct[0].revents = 0;
pollStruct[1].fd = getDispatcher()->getSocketDesc();
pollStruct[1].events = POLLOUT;
pollStruct[1].revents = 0;
while(true) {
if (poll(pollStruct, 2, 0 /* (-1 / timeout) */) > 0) {
if (pollStruct[0].revents & POLLIN) {
recvfrom(getReceiver()->getSocketDesc(), buffer, getBufferLength(), 0, (struct sockaddr *) &currentClient, getReceiver()->getLength());
// add message to client's own thread-safe buffer
}
if (pollStruct[1].revents & POLLOUT) {
// get message from thread-safe general buffer after processing
// dequeue one
if (!getBuffer().isEmpty()) {
auto message = getBuffer().dequeue();
if (message != nullptr) {
sendto(getDispatcher()->getSocketDesc(), "hi", 2, 0, message->_addr, *getDispatcher()->getLength());
}
}
}
}
}
I don't see this sort of question asked much, which is odd given the benefits of single-threaded server applications. How would I implement a timeout system in my code when the server is in a non-blocking state?
Currently I'm using this method.
while(true)
{
FD_ZERO(&readfds);
FD_SET(server_socket, &readfds);
for (std::size_t i = 0; i < cur_sockets.size(); i++)
{
uint32_t sd = cur_sockets.at(i).socket;
if(sd > 0)
FD_SET(sd, &readfds);
if(sd > max_sd){
max_sd = sd;
}
}
int activity = select( max_sd + 1 , &readfds, NULL , NULL, NULL);
if(activity < 0)
{
continue;
}
if (FD_ISSET(server_socket, &readfds))
{
struct sockaddr_in cli_addr;
int newsockfd = accept((int)server_socket,
(struct sockaddr *) &cli_addr,
&clientlength);
if(newsockfd < 0) { // accept returns -1 on failure; an unsigned type would hide that
continue;
}
//Ensure we can even accept the client...
if (num_clients >= op_max_clients) {
close(newsockfd);
continue;
}
fcntl(newsockfd, F_SETFL, O_NONBLOCK);
/* DISABLE TIMEOUT EXCEPTION FROM SIGPIPE */
#ifdef __APPLE__
int set = 1;
setsockopt(newsockfd, SOL_SOCKET, SO_NOSIGPIPE, (void *) &set, sizeof(int));
#elif defined(__linux__)
signal(SIGPIPE, SIG_IGN);
#endif
/* ONCE WE ACCEPTED THE CONNECTION ADD CLIENT TO */
num_clients++;
client_con newCon;
newCon.socket = newsockfd;
time_t ltime;
time(&ltime);
newCon.last_message = (uint64_t) ltime;
cur_sockets.push_back(newCon);
}
handle_clients();
}
As you can tell, I've added a Unix timestamp to the client when it successfully connects. I was thinking of adding another thread that wakes every second and checks whether any client hasn't sent anything for the maximum duration, but I'm afraid of bottlenecks caused by the second thread constantly locking when dealing with large numbers of connections.
Thank you,
Ajm.
The last argument of select is the timeout for the select call, and the return code of select tells you whether it returned because a socket was ready or because of a timeout.
To implement your own timeout handling for all sockets, you could keep a timestamp for each socket and update it on any socket operation. Then, before calling select, compute the remaining timeout for each socket and use the minimal value as the timeout of the select call. This is just the basic idea; it can be implemented more efficiently so that you don't recompute all timeouts before every select call. But I consider a separate thread overkill.
I seem to have an issue with increasing latency in packet transmission on my TCP server. This server has to use TCP, since UDP is blocked by firewalls (this is client-server-client communication). I'm also aware that sending a struct of floating-point values the way I am is extremely non-portable; however, this system will run Windows client to Windows server to Windows client for the foreseeable future.
The issue is this: the client begins receiving data properly from the other client, but there is a delay that gets exponentially worse (by about 3 minutes in, packets are nearly 30 seconds behind, though correct when they DO arrive). I researched it and found an answer on a Microsoft page explaining that this is due to full send buffers, but their syntax for setsockopt doesn't match the documented examples, so perhaps I'm wrong.
Anyway, any advice would be appreciated:
The relevant part of the server:
(When accept() is called:)
int buff_size = 2048000;
int nodel = 1;
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&buff_size, sizeof(int));
setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&buff_size, sizeof(int));
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&nodel, sizeof(nodel));
The message redirect loop:
if (gp->curr_pilot < sz && gp->users[gp->curr_pilot].pilot == TRUE) {
char* pbuf = new char[1024];
int recvd = recv(gp->users[gp->curr_pilot].sockfd_data, pbuf, 1024, NULL);
if (recvd > 0) {
for (int i = 0; i < sz; i++) {
if (i != gp->curr_pilot && gp->users[i].unioned == TRUE)
send(gp->users[i].sockfd_data, pbuf, recvd, NULL);
}
}
delete[] pbuf;
}
The client (master is set when it's sending, and it does get set properly by my code):
(data is my struct of doubles that gets written by the client, cdata is a copy of it that gets written into the client).
while (kill_dataproc == FALSE) {
if (master == TRUE) {
char* buff = new char[1024];
int packet_signer = 1192;
memcpy_s(buff, intsz, &packet_signer, intsz);
memcpy_s((void*)(buff + intsz), sz, data, sz);
send(server_sock, buff, buffsize, NULL);
delete[] buff;
}
else {
char* buffer = new char[1024];
int recvd = recv(server_sock, buffer, 1024, MSG_PEEK);
if (recvd > 0) {
int newpacketsigner = 0;
memcpy_s(&newpacketsigner, intsz, buffer, intsz);
if (newpacketsigner == 1192) {
if (recvd >= buffsize) {
char* nbuf = new char[buffsize];
int recvd2 = recv(server_sock, nbuf, buffsize, NULL);
int err = WSAGetLastError();
memcpy_s(&newpacketsigner, intsz, nbuf, intsz);
memcpy_s(cdata, sz, (void*)(nbuf + intsz), sz);
//do things w/ the struct
delete[] nbuf;
}
}
else
recv(server_sock, buffer, 1024, NULL);
}
delete[] buffer;
}
Sleep(10);
}
Identical setsockopt calls are made for the client's sockets, and all of the sockets, server and client, are non-blocking.
You're assuming that your reads fill the buffer. They are only obliged to transfer at least one byte, so you need to loop.
As a result, you have unread data backing up and stalling the sender.
NB Those receive buffers are greater than 64K and so may be inoperative unless they are set before the socket is connected. In the server's case you need to set the receive buffer size on the listening socket; accepted sockets will inherit it. If you don't do it this way, window scaling won't be in effect, so a window > 64K cannot be advertised (unless the platform has window scaling on by default).
I have written simple client/server applications to test the characteristics of non-blocking sockets; here is some brief information about the server and client:
//On linux The server thread will send
//a file to the client using non-blocking socket
void *SendFileThread(void *param){
CFile* theFile = (CFile*) param;
int sockfd = theFile->GetSocket();
set_non_blocking(sockfd);
set_sock_sndbuf(sockfd, 1024 * 64); //set the send buffer to 64K
//get the total packets count of target file
int PacketCount = theFile->GetFilePacketsCount();
int currPacket = 0;
while (currPacket < PacketCount){
char buffer[512];
int len = 0;
//get packet data by packet no.
GetPacketData(currPacket, buffer, len);
//send_non_blocking_sock_data will loop and send
//data into buffer of sockfd until there is error
int ret = send_non_blocking_sock_data(sockfd, buffer, len);
if (ret < 0 && errno == EAGAIN){
continue;
} else if (ret < 0 || ret == 0 ){
break;
} else {
currPacket++;
}
......
}
}
//On windows, the client thread will do something like below
//to receive the file data sent by the server via block socket
void *RecvFileThread(void *param){
int sockfd = (int) param; //blocking socket
set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K
while (1){
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
fd_set rds;
FD_ZERO(&rds);
FD_SET(sockfd, &rds);
//actually, the first parameter of select() is
//ignored on windows, though on linux this parameter
//should be (maximum socket value + 1)
int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout );
if (ret == 0){
// log that timer expires
CLogger::log("RecvFileThread---Calling select() timeouts\n");
} else if (ret) {
//log the number of data it received
int ret = 0;
char buffer[1024 * 256];
int len = recv(sockfd, buffer, sizeof(buffer), 0);
// handle error
process_tcp_data(buffer, len);
} else {
//handle and break;
break;
}
}
}
What surprised me is that the server thread fails frequently because the socket buffer is full; e.g., to send a 14 MB file it reports 50000 failures with errno = EAGAIN. However, via logging I observed tens of timeouts during the transfer. The flow is like this:
on the Nth loop, select() succeeds and reads 256K of data.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and reads 256K of data.
Why are there timeouts interleaved during the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading a file of 14M to the server only takes 8 seconds
2. Using the same file with 1), the server takes nearly 30 seconds to send all data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are why #2 takes much more time than #1, and I wonder why there are so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from @Duck, @ebrob, @EJP, @ja_mesa. I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: I found that the server thread sends data much faster than the client thread receives it. I am still confused about why timeouts happen in the client thread.
Consider this more of a long comment than an answer but as several people have noted the network is orders of magnitude slower than your processor. The point of non-blocking i/o is that the difference is so great that you can actually use it to do real work rather than blocking. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much is chopped up for posting but in the server you don't account for (ret == 0) i.e. normal shutdown by the peer.
The select in the client is wrong. Again, not sure if that was sloppy editing or not but if not then the number of parameters are wrong but, more concerning, the first parameter - i.e. should be the highest file descriptor for select to look at plus one - is zero. Depending on the implementation of select I wonder if that is in fact just turning select into a fancy sleep statement.
You should be calling recv() first and then call select() only if recv() tells you to do so. Don't call select() first, that is a waste of processing. recv() knows if data is immediately available or if it has to wait for data to arrive:
void *RecvFileThread(void *param){
int sockfd = (int) param; //blocking socket
set_sock_rcvbuf(sockfd, 1024 * 256); //set the send buffer to 256
char buffer[1024 * 256];
while (1){
int len = recv(sockfd, buffer, sizeof(buffer), 0);
if (len == -1) {
if (WSAGetLastError() != WSAEWOULDBLOCK) {
//handle error
break;
}
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
fd_set rds;
FD_ZERO(&rds);
FD_SET(sockfd, &rds);
//actually, the first parameter of select() is
//ignored on windows, though on linux this parameter
//should be (maximum socket value + 1)
int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
if (ret == -1) {
// handle error
break;
}
if (ret == 0) {
// log that timer expires
break;
}
// socket is readable so try read again
continue;
}
if (len == 0) {
// handle graceful disconnect
break;
}
//log the number of data it received
process_tcp_data(buffer, len);
}
}
Do something similar on the sending side as well. Call send() first, and then call select() waiting for writability only if send() tells you to do so.
I am trying to use non-blocking TCP sockets. The problem is that they are still blocking. The code is below -
server code -
struct sockaddr name;
char buf[80];
void set_nonblock(int socket) {
int flags;
flags = fcntl(socket,F_GETFL,0);
assert(flags != -1);
fcntl(socket, F_SETFL, flags | O_NONBLOCK);
}
int main(int agrc, char** argv) {
int sock, new_sd, adrlen; //sock is this socket, new_sd is connection socket
name.sa_family = AF_UNIX;
strcpy(name.sa_data, "127.0.0.1");
adrlen = strlen(name.sa_data) + sizeof(name.sa_family);
//make socket
sock = socket(AF_UNIX, SOCK_STREAM, 0);
if (sock < 0) {
printf("\nSocket error %m", errno);
exit(1);
}
//unlink and bind
unlink("127.0.0.1");
if(bind (sock, &name, adrlen) < 0)
printf("\nBind error %m", errno);
//listen
if(listen(sock, 5) < 0)
printf("\nListen error %m", errno);
//accept
new_sd = accept(sock, &name, (socklen_t*)&adrlen);
if( new_sd < 0) {
cout<<"\nserver accept failure "<<errno;
exit(1);
}
//set nonblock
set_nonblock(new_sd);
char* in = new char[80](); // zero-initialized so the quit check below is well-defined
std::string out = "Got it";
int numSent;
int numRead;
while( !(in[0] == 'q' && in[1] == 'u' && in[2] == 'i' && in[3] == 't') ) {
//clear in buffer
for(int i=0;i<80;i++)
in[i] = ' ';
cin>>out;
cin.get();
//if we typed something, send it
if(strlen(out.c_str()) > 0) {
numSent = send(new_sd, out.c_str(), strlen(out.c_str()), 0);
cout<<"\n"<<numSent<<" bytes sent";
}
numRead = recv(new_sd, in, 80, 0);
if(numRead > 0)
cout<<"\nData read from client - "<<in;
} //end while
cout<<"\nExiting normally\n";
return 0;
}
client code -
struct sockaddr name;
void set_nonblock(int socket) {
int flags;
flags = fcntl(socket,F_GETFL,0);
assert(flags != -1);
fcntl(socket, F_SETFL, flags | O_NONBLOCK);
}
int main(int agrc, char** argv) {
int sock, new_sd, adrlen;
sock = socket(AF_UNIX, SOCK_STREAM, 0);
if (sock < 0) {
printf("\nserver socket failure %m", errno);
exit(1);
}
//stuff for server socket
name.sa_family = AF_UNIX;
strcpy(name.sa_data, "127.0.0.1");
adrlen = strlen(name.sa_data) + sizeof(name.sa_family);
if(connect(sock, &name, adrlen) < 0) {
printf("\nclient connection failure %m", errno);
exit(1);
}
cout<<"\nSuccessful connection\n";
//set nonblock
set_nonblock(sock);
std::string out;
char* in = new char[80];
int numRead;
int numSent;
while(out.compare("quit")) {
//clear in
for(int i=0;i<80;i++)
in[i] = '\0';
numRead = recv(sock, in, 80, 0);
if(numRead > 0)
cout<<"\nData read from server - "<<in;
cout<<"\n";
out.clear();
cin>>out;
cin.get();
//if we typed something, send it
if(strlen(out.c_str())) {
numSent = send(sock, out.c_str(), strlen(out.c_str()), 0);
cout<<"\n"<<numSent<<" bytes sent";
}
} //end while
cout<<"\nExiting normally\n";
return 0;
}
Whenever I run it, the server still waits for me to send something before it will read and output what the client has sent. I want either the server or the client to be able to send a message as soon as I type it, and have the other side read and output it right away. I thought non-blocking sockets were the answer, but maybe I am just doing something wrong?
Also, I was using a file instead of my 127.0.0.1 address as the sockaddr's data. If that is not how it should properly be used, feel free to say so (it worked previously with a file, so I just kept it like that).
Any help is appreciated.
General approach for a TCP server where you want to handle many connections at the same time:
make listening socket non-blocking
add it to select(2) or poll(2) read event set
enter select(2)/poll(2) loop
on wakeup check if it's the listening socket, then
accept(2)
check for failure (the client might've dropped the connection attempt by now)
make newly created client socket non-blocking, add it to the polling event set
else, if it's one of the client sockets
consume input, process it
watch out for EAGAIN error code - it's not really an error, but indication that there's no input now
if read zero bytes - client closed connection, close(2) client socket, remove it from event set
re-init event set (omitting this is a common error with select(2))
repeat the loop
Client side is a little simpler since you only have one socket. Advanced applications like web browsers that handle many connections often do non-blocking connect(2) though.
Whenever I run it, the server still waits for me to send something before it will read and output what the client has sent.
Well, that is how you wrote it. You block on IO from stdin, and then and only then do you send/receive.
cin>>out;
cin.get();
Also, you are using a local socket (AF_UNIX) which creates a special file in your filesystem for interprocess communication - this is a different mechanism than IP, and is definitely not TCP as you indicate in your question. I suppose you could name the file 127.0.0.1, but that really doesn't make sense and implies confusion on your part, because that is an IP loopback address. You'll want to use AF_INET for IP.
For an excellent starter guide on unix networking, I'd recommend http://beej.us/guide/bgnet/
If you want the display of received messages to be independent of your cin statements, either fork() off a separate process to handle your network IO, or use a separate thread.
You might be interested in select(). In my opinion non-blocking sockets are usually a hack, and proper usage of select() or poll() is generally much better design and more flexible (and more portable). try
man select_tut
for more information.
I think you have to set non-blocking mode sooner (i.e. get the socket, then set it non-blocking right away).
Also check that the fcntl call to set it actually worked.
If you want non-blocking i/o, you want to use select. You can set it up with stdin as one of the descriptors it listens on, along with the client sockets (just add file descriptor 0, which is stdin, to the fd_set).
http://beej.us/guide/bgnet/output/html/multipage/advanced.html
I would recommend reading through what beej has to say about select. It looks a little intimidating but is really useful and simple to use if you take a little time to wrap your head around it.