I'm currently managing a server that can serve at most MAX_CLIENTS clients concurrently.
This is the code I've written so far:
//create and bind listen_socket_
struct pollfd poll_fds_[MAX_CLIENTS];
for (auto& poll_fd: poll_fds_)
{
poll_fd.fd = -1;
}
listen(listen_socket_, MAX_CLIENTS);
poll_fds_[0].fd = listen_socket_;
poll_fds_[0].events = POLLIN;
while (enabled)
{
const int result = poll(poll_fds_, MAX_CLIENTS, DEFAULT_TIMEOUT);
if (result == 0)
{
continue;
}
else if (result < 0)
{
// throw error
}
else
{
for (auto& poll_fd: poll_fds_)
{
if (poll_fd.revents == 0)
{
continue;
}
else if (poll_fd.revents != POLLIN)
{
// throw error
}
else if (poll_fd.fd == listen_socket_)
{
int new_socket = accept(listen_socket_, nullptr, nullptr);
if (new_socket < 0)
{
// throw error
}
else
{
// find a free slot for the new client
for (auto& free_slot: poll_fds_)
{
if (free_slot.fd == -1)
{
free_slot.fd = new_socket;
free_slot.events = POLLIN;
break;
}
}
}
}
else
{
// serve connection
}
}
}
}
Everything is working great, and when a client closes the socket on its side, everything gets handled well.
The problem I'm facing is that when a client connects and sends a request but does not close the socket on its side afterwards, I do not detect it, and that socket stays "busy".
Is there any way to detect that nothing has been received on a socket after a certain time? That way I could free the connection on the server side, leaving room for new clients.
Thanks in advance.
You could close the client connection when the client has not sent any data for a specific time.
For each client, you need to store the time when the last data was received.
Periodically, for example when poll() returns because the timeout expired, check this time for all clients. When it is too long ago, you can shutdown(SHUT_WR) and close() the connection. You need to decide what "too long ago" means.
If a client does not have any data to send but wants to leave the connection open, it could send a "ping" message periodically. The server could reply with a "pong" message. These are just small messages with no actual data. It depends on your client/server protocol whether you can implement this.
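For illustration, here is a minimal sketch of the timestamp approach against the poll() loop above. It assumes a last_activity array with one entry per poll_fds_ slot (refreshed on accept and on every received message) and a hypothetical IDLE_TIMEOUT_SECONDS constant; neither exists in the original code.
#include <ctime>        // time()
#include <poll.h>       // struct pollfd
#include <sys/socket.h> // shutdown()
#include <unistd.h>     // close()

// Close every client slot whose last activity is older than the limit.
void drop_idle_clients(struct pollfd* poll_fds, time_t* last_activity)
{
    const time_t now = time(nullptr);
    for (int i = 1; i < MAX_CLIENTS; ++i) // slot 0 is the listening socket, skip it
    {
        if (poll_fds[i].fd == -1)
        {
            continue;
        }
        if (now - last_activity[i] > IDLE_TIMEOUT_SECONDS)
        {
            shutdown(poll_fds[i].fd, SHUT_WR); // tell the client we are done
            close(poll_fds[i].fd);
            poll_fds[i].fd = -1; // free the slot for a new client
        }
    }
}
Calling drop_idle_clients(poll_fds_, last_activity) in the result == 0 branch covers the periodic check; the ping/pong variant needs no extra server logic beyond counting an incoming ping as activity.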
Related
I'm doing my computer network homework. My teacher asked me to write a proxy server (it supports HTTP/1.0 and HTTP/1.1).
In my source code, I create a socket manager to manage connections. Whenever there is a new connection, I create a client socket and pass it into a thread (call it thread1); the socket manager continues listening. In that thread I do things such as getting the IP, port, and method, and then create a new socket to connect to the web server.
When connected successfully, I send the request to the web server and create a thread (thread2) to relay data from the server through the proxy to the client. Thread2 runs inside thread1; thread1 receives requests from the client and forwards them through the proxy to the server. I know that Chrome uses persistent, non-pipelined HTTP. But when I check the order of sent requests and received responses, it looks pipelined: some requests are sent before the client has received the previous response. Can you explain that?
To solve it, I decided not to create thread2. Instead, I use a while loop in thread1: the client receives until the server has completely sent its response (only one response), then sends a new request without closing the connection. But I don't know how to tell when the server has completely sent its data. I thought it is done when the length of the received data == 0, but the server hasn't closed the connection.
Can you give me a solution?
Thanks.
//thread1: call thread2 and send request to Proxy -> Server
CWinThread *p = AfxBeginThread((AFX_THREADPROC)proxyToServer, (LPVOID)P);
//up stream, Client -> Proxy -> Server
while (P->isClientClose == FALSE && P->isServerClose == FALSE){
memset(DataTemp, 0, 10001);
Length = recv(*Client, DataTemp, 10000, 0);
if (Length <= 0){
break;
}
Length = send(*Server, DataTemp, Length, 0);
if (Length <= 0){
break;
}
}
if (P->isClientClose == FALSE){
P->isClientClose = TRUE;
closesocket(*Client);
}
if (P->isServerClose == FALSE){
P->isServerClose = TRUE;
closesocket(*Server);
}
//thread2, send data from Server -> Proxy -> Client
DWORD WINAPI proxyToServer(LPVOID* pParam){
Param *P = (Param*)pParam;
char *Data = new char[10001];
while (P->isClientClose == FALSE && P->isServerClose == FALSE){
memset(Data, 0, 10001);
int length = recv(*(P->SERVER), Data, 10000, 0);
if (length <= 0)
break;
length = send(*(P->CLIENT), Data, length, 0);
if (length <= 0)
break;
}
if (P->isClientClose == FALSE){
P->isClientClose = TRUE;
closesocket(*(P->CLIENT));
}
if (P->isServerClose == FALSE){
P->isServerClose = TRUE;
closesocket(*(P->SERVER));
}
delete[] Data;
return 1;
}
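On knowing when the server has completely sent its response: you cannot rely on recv() returning 0, because on a persistent connection the server deliberately does not close. You have to use HTTP's own framing instead: read up to the end of the headers, then read exactly Content-Length body bytes (a chunked Transfer-Encoding response needs extra handling). A rough sketch, where parseContentLength() is a hypothetical helper that extracts the Content-Length header value:
#include <string>
#include <winsock2.h>

size_t parseContentLength(const std::string& headers); // hypothetical helper

// Read exactly one HTTP response from a persistent connection.
std::string readOneResponse(SOCKET server)
{
    std::string response;
    char chunk[4096];
    size_t headerEnd;

    // 1. Read until the blank line ("\r\n\r\n") that terminates the headers.
    while ((headerEnd = response.find("\r\n\r\n")) == std::string::npos)
    {
        int n = recv(server, chunk, sizeof(chunk), 0);
        if (n <= 0)
            return response; // connection closed or error
        response.append(chunk, n);
    }

    // 2. Content-Length says how many body bytes follow the headers.
    const size_t bodyStart = headerEnd + 4;
    const size_t contentLength = parseContentLength(response);
    while (response.size() - bodyStart < contentLength)
    {
        int n = recv(server, chunk, sizeof(chunk), 0);
        if (n <= 0)
            break;
        response.append(chunk, n);
    }
    return response;
}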
I am trying to send a GET request to a Node.js server from a C++ client.
Node.js server:
const server = http.createServer((request, response) => {
console.log(request.url);
response.end("received");
})
and here is my C++ client:
while(getline(cin, random_input)) {
int s_len;
input = "GET / HTTP/1.1\r\n\r\n";
s_len = send(sock, input.c_str(), input.size(), 0);
if( s_len < 0)
{
perror("Send failed : ");
return false;
}
cout<<socket_c.receive(1024);
}
string tcp_client::receive(int size=512)
{
char buffer[size];
string reply;
int r_len; // received len
//Receive a reply from the server
r_len = recv(sock, buffer, sizeof(buffer), 0);
if( r_len < 0)
{
puts("recv failed");
}
if(buffer[r_len-1] == '\n') {
buffer[r_len-1] = '\0';
} else {
buffer[r_len] = '\0';
}
reply = buffer;
return reply;
}
so the C++ client can send a GET request each time something is typed in the terminal.
It works fine if I type something right after the connection has been established. However, if I wait 15-30 seconds after establishing the connection and then type something in the client program, the server doesn't receive anything, even though the number of bytes sent (s_len) is correct.
May I know what goes wrong?
A few errors I spotted:
The send return value is not checked correctly: the condition input.size() == s_len must be true, otherwise only part of the request was sent.
The recv return value is not checked for EOF: an r_len of 0 is treated as valid data instead of as a disconnect. This may be why you do not see server replies: the server may have disconnected without you noticing.
Setting the keepAliveTimeout of the Node.js server to 0 could solve the problem.
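A sketch of receive() with both checks added, assuming the rest of tcp_client stays as posted (the read size is reduced by one so there is always room for the terminating '\0'):
string tcp_client::receive(int size)
{
    char buffer[size]; // VLA as in the original (non-standard C++, but kept)
    int r_len = recv(sock, buffer, size - 1, 0); // leave room for '\0'
    if (r_len < 0)
    {
        puts("recv failed");
        return "";
    }
    if (r_len == 0) // EOF: the server closed the connection
    {
        puts("server disconnected");
        return "";
    }
    if (buffer[r_len - 1] == '\n')
        buffer[r_len - 1] = '\0';
    else
        buffer[r_len] = '\0';
    return string(buffer);
}
On the send side, the call succeeded in full only if s_len == (int)input.size(); anything less means part of the request is still unsent.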
I have been trying to figure out this problem for over a month now, and I have nowhere else to turn.
I have a server that listens to many multicast channels (about 100). Each socket runs in its own thread. Then I have a client listener (single-threaded) that handles all incoming connections, disconnects, and client messaging within the same server. The idea is that a client connects, requests data from the multicast channels, and I relay the data back to the client over the connection, which stays open. The client can request either UDP or TCP as the protocol for the data relay. At one point this was working beautifully for a couple of weeks. We did some code and kernel changes, and now we can't figure out what's gone wrong.
The server will run for hours and have hundreds of clients connected throughout the day. But at some point, randomly, the server will just stop. And by stop, I mean: all UDP sockets stop receiving/handling data (tcpdump shows data still coming to the box), and the client_listener thread stops receiving client packets. BUT!!! the main client_listener socket can still receive new connections and disconnects. On a new connection, the main socket is able to send a "Connection Established" packet back to the client, but when the client responds, the select never returns.
I can post code if someone would like. If anyone has any suggestions on where to look, or if this sounds like something you've seen, please let me know.
If you have any questions, please ask.
Thank you.
I would like to share my TCP Server code:
This runs in a single thread. It works fine for hours, and then I will only receive "New Connections" and "Disconnects". NO CLIENT PACKETS WILL COME IN.
int opt = 1;
int addrlen;
int sd;
int max_sd;
int valread;
int activity;
int new_socket;
char buffer[MAX_BUFFER_SIZE];
int client_socket[m_max_clients];
struct sockaddr_in address;
fd_set readfds;
for(int i = 0; i<m_max_clients; i++)
{
client_socket[i]=0;
}
if((m_master_socket = socket(AF_INET,SOCK_STREAM,0))<0)
LOG(FATAL)<<"Unable to create master socket";
if(setsockopt(m_master_socket,SOL_SOCKET,SO_REUSEADDR,(char*)&opt,sizeof(opt))<0)
LOG(FATAL)<<"Unable to set master socket";
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(m_listenPort);
if(bind(m_master_socket,(struct sockaddr*)& address, sizeof(address))!=0)
LOG(FATAL)<<"Unable to bind master socket";
if(listen(m_master_socket,SOMAXCONN)!=0)
LOG(FATAL)<<"listen() failed with err";
addrlen = sizeof(address);
LOG(INFO)<<"Waiting for connections......";
while(true)
{
FD_ZERO(&readfds);
FD_SET(m_master_socket, &readfds);
max_sd = m_master_socket;
for(int i = 0; i<m_max_clients; i++)
{
sd = client_socket[i];
if(sd > 0)
FD_SET(sd, &readfds);
if(sd>max_sd)
max_sd = sd;
}
activity = select(max_sd+1,&readfds,NULL,NULL,NULL);
if((activity<0)&&(errno!=EINTR))
{
// int err = errno;
// LOG(ERROR)<<"SELECT ERROR:"<<activity<<" "<<err;
continue;
}
if(FD_ISSET(m_master_socket, &readfds))
{
if((new_socket = accept(m_master_socket,(struct sockaddr*)&address, (socklen_t*)&addrlen))<0)
LOG(FATAL)<<"ERROR:ACCEPT FAILED!";
LOG(INFO)<<"New Connection, socket fd is (" << new_socket << ") client_addr:" << inet_ntoa(address.sin_addr) << " Port:" << ntohs(address.sin_port);
for(int i =0;i<m_max_clients;i++)
{
if(client_socket[i]==0)
{
//try to set the socket to non blocking, tcp nagle and keep alive
if ( !SetSocketBlockingEnabled(new_socket, false) )
LOG(INFO)<<"UNABLE TO SET NON-BLOCK: ("<<new_socket<<")" ;
if ( !SetSocketNoDelay(new_socket,false) )
LOG(INFO)<<"UNABLE TO SET DELAY: ("<<new_socket<<")" ;
// if ( !SetSocketKeepAlive(new_socket,true) )
// LOG(INFO)<<"UNABLE TO SET KeepAlive: ("<<new_socket<<")" ;
ClientConnection* con = new ClientConnection(m_mocSrv, m_udpPortGenerator, inet_ntoa(address.sin_addr), ntohs(address.sin_port), new_socket);
if(con->login())
{
client_socket[i] = new_socket;
m_clientConnectionSocketMap[new_socket] = con;
LOG(INFO)<<"Client Connection Logon Complete";
}
else
delete con;
break;
}
}//for
}
else
{
try{
for(int i = 0; i<m_max_clients; i++)
{
sd = client_socket[i];
if(sd > 0 && FD_ISSET(sd,&readfds)) // skip empty slots; FD_ISSET(0, ...) would test stdin
{
if ( (valread = recv(sd, buffer, sizeof(buffer),MSG_DONTWAIT|MSG_NOSIGNAL)) <= 0 )
{
//remove from the fd listening set
LOG(INFO)<<"RESET CLIENT_SOCKET:("<<sd<<")";
client_socket[i]=0;
handleDisconnect(sd,true);
}
else
{
std::map<int, ClientConnection*>::iterator client_connection_socket_iter = m_clientConnectionSocketMap.find(sd);
if(client_connection_socket_iter != m_clientConnectionSocketMap.end())
{
client_connection_socket_iter->second->handle_message(buffer, valread);
if(client_connection_socket_iter->second->m_logoff)
{
LOG(INFO)<<"SOCKET LOGGED OFF:"<<sd;
client_socket[i]=0;
handleDisconnect(sd,true);
}
}
else
{
LOG(ERROR)<<"UNABLE TO FIND SOCKET DESCRIPTOR:"<<sd;
}
}
}
}
}catch(...)
{
LOG(ERROR)<<"EXCEPTION CATCH!!!";
}
}
}
From the information given I would state the following:
Do not use a thread for each connection. Since you're on Linux, use epoll edge-triggered multiplexing; most newer web frameworks use this technology. For more background, look up the C10K problem. (A minimal sketch follows below.)
By eliminating threads from the equation you eliminate the possibility of a deadlock and reduce the complexity of debugging and of worrying about thread-safe variables.
Ensure each connection is completely closed when finished.
Ensure that no new firewall rules have popped up in iptables since the upgrade.
Check any firewalls on the network to see if they are restricting certain types of activity (is your server on a new IP since the upgrade?).
In short, I would put my money on a thread deadlock and/or starvation. I've personally run experiments comparing a multithreaded server against a single-threaded epoll server. The results were night and day: for I/O, epoll blows away the multithreaded implementation and makes the code simpler to write, debug, and maintain.
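As a rough illustration of the epoll suggestion (not a drop-in replacement for the server above): listen_fd is assumed to be a bound, listening, non-blocking socket, and accept4() is Linux-specific.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cerrno>

// Single-threaded, edge-triggered epoll loop handling accepts and reads.
void event_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = {}, events[64];
    ev.events = EPOLLIN; // the listening socket can stay level-triggered
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    while (true)
    {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i)
        {
            int fd = events[i].data.fd;
            if (fd == listen_fd)
            {
                // drain the accept queue; new sockets start non-blocking
                int client;
                while ((client = accept4(listen_fd, nullptr, nullptr, SOCK_NONBLOCK)) >= 0)
                {
                    struct epoll_event cev = {};
                    cev.events = EPOLLIN | EPOLLET; // edge-triggered reads
                    cev.data.fd = client;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                }
            }
            else
            {
                // with EPOLLET the socket must be read until EAGAIN
                char buf[4096];
                ssize_t r;
                while ((r = recv(fd, buf, sizeof(buf), 0)) > 0)
                {
                    // handle_message(fd, buf, r); // your per-client logic
                }
                if (r == 0 || (r < 0 && errno != EAGAIN && errno != EWOULDBLOCK))
                {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, nullptr); // disconnect or error
                    close(fd);
                }
            }
        }
    }
}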
I have to create a basic p2p connection with C++ sockets, which means each user has a server for listening for connections and a client for connecting, right?
For now I'm trying to create a master client which has a dedicated server and is a client too.
This means creating the server and client in the same program, and I have used fork(), which creates a child process for the server while the parent is the client. fork() works fine, and I'm using select() to check sockets for readable data; I have modeled the server on http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
Now when I run the program, the master client is able to connect to its own dedicated server, but the messages don't always get received by the server. Sometimes it receives them, sometimes it doesn't. Any idea why?
Also, when a second client connects to the master client (it doesn't have its own server for now), the server shows that it gets a new connection, but when I write a message and send it, the server doesn't receive anything from the second client, and it receives messages from the master client only sometimes.
EDIT: Added cout.flush
EDIT: I think forking the process causes some delay when a client and server run in the same program.
UPDATE: Added the new server code which causes a delay by one message (in response to the comments)
Here's the code.
SERVER CODE
while (1) {
unsigned int s;
readsocks = socks;
if (select(maxsock + 1, &readsocks, NULL, NULL, NULL) == -1) {
perror("select");
return ;
}
for (s = 0; s <= maxsock; s++) {
if (FD_ISSET(s, &readsocks)) {
//printf("socket %d was ready\n", s);
if (s == sock) {
/* New connection */
cout<<"\n New Connection";
cout.flush();
int newsock;
struct sockaddr_in their_addr;
socklen_t size = sizeof(their_addr);
newsock = accept(sock, (struct sockaddr*)&their_addr, &size);
if (newsock == -1) {
perror("accept");
}
else {
printf("Got a connection from %s on port %d\n",
inet_ntoa(their_addr.sin_addr), htons(their_addr.sin_port));
FD_SET(newsock, &socks);
if (newsock > maxsock) {
maxsock = newsock;
}
}
}
else {
/* Handle read or disconnection */
handle(s, &socks);
}
}
}
}
void handle(int newsock, fd_set *set)
{
char buf[256];
bzero(buf, 256);
/* read at most 255 bytes so the buffer stays NUL-terminated */
if(read(newsock, buf, 255)<=0){
/* EOF or error: remove the socket from the set and close it */
cout<<"\n No data";
FD_CLR(newsock, set);
close(newsock);
cout.flush();
}
else {
string temp(buf);
cout<<"\n Server: "<<temp;
cout.flush();
}
}
I have the following C++ code on Linux:
if (epoll_wait(hEvent,&netEvents,1,0))
{
// check FIRST for disconnection to avoid send() to a closed socket (halts on centos on my server!)
if ((netEvents.events & EPOLLERR)||(netEvents.events & EPOLLHUP)||(netEvents.events & EPOLLRDHUP)) {
save_log("> client terminated connection");
goto connection_ended; // ---[ if its a CLOSE event .. close :)
}
if (netEvents.events & EPOLLOUT) // ---[ if socket is available for write
{
if (send_len) {
result = send(s,buffer,send_len,MSG_NOSIGNAL);
save_slogf("1112:send (s=%d,len=%d,ret=%d,errno=%d,epoll=%d,events=%d)",s,send_len,result,errno,hEvent,netEvents.events);
if (result > 0) {
send_len = 0;
current_stage = CL_STAGE_USE_LINK_BRIDGE;
if (close_after_send_response) {
save_log("> destination machine closed connection");
close_after_send_response = false;
goto connection_ended;
}
} else {
if (errno == EAGAIN) return;
else if (errno == EWOULDBLOCK) return;
else {
save_log("> unexpected error on socket, terminating");
connection_ended:
close_client();
reset();
return;
}
}
}
}
}
}
hEvent: epoll created to listen to EPOLLIN,EPOLLOUT,EPOLLERR,EPOLLHUP,EPOLLRDHUP
s: NON-BLOCKING (!!!) socket created from an accept on a nonblocking listening socket
Basically this code is attempting to send a packet back to a user connected to the server. It usually works OK, but on random occasions (perhaps when some weird network event happens) the program hangs indefinitely on the "result = send(s,buffer,send_len,MSG_NOSIGNAL)" line.
I have no idea what the cause may be. I have tried to monitor the socket operations and nothing gave me a hint of a clue as to why it happens. I have to assume this is either a kernel bug or something very weird, because I have the same program written under Windows and it works perfectly there.
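One detail worth checking, because it would produce a blocking send() exactly like this: on Linux, a socket returned by accept() does not inherit file status flags such as O_NONBLOCK from the listening socket (the accept(2) man page states this explicitly), so s may actually be blocking even though the listening socket is not. A minimal sketch of setting the flag explicitly, with listen_fd standing in for your listening socket:
#include <fcntl.h>
#include <sys/socket.h>

// accept() on Linux does not copy O_NONBLOCK from the listening socket,
// so mark every accepted socket non-blocking explicitly.
int s = accept(listen_fd, nullptr, nullptr);
if (s >= 0)
{
    int flags = fcntl(s, F_GETFL, 0);
    fcntl(s, F_SETFL, flags | O_NONBLOCK);
    // or, in one step on Linux: accept4(listen_fd, nullptr, nullptr, SOCK_NONBLOCK);
}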