Multiple threads writing to the same socket causing issues - C++

I have written a client/server application where the server spawns multiple threads depending on the request from the client.
These threads are expected to send some data (a string) to the client.
The problem is that the data gets overwritten on the client side. How do I tackle this issue?
I have already read some other threads on similar issues but was unable to find an exact solution.
Here is my client code to receive data:
while(1)
{
    char buff[MAX_BUFF];
    int bytes_read = read(sd, buff, MAX_BUFF - 1);  // leave room for the terminator
    if(bytes_read == 0)
    {
        break;
    }
    else if(bytes_read > 0)
    {
        buff[bytes_read] = '\0';  // read() does not null-terminate the buffer
        if(buff[bytes_read-1] == '$')
        {
            buff[bytes_read-1] = '\0';
            cout << buff;
        }
        else
        {
            cout << buff;
        }
    }
}
Server thread code:
void send_data(int sd, char *data)
{
    write(sd, data, strlen(data));  // NOTE: the return value should be checked (see the answer below)
    cout << data;
}

void *calcWordCount(void *arg)
{
    tdata *tmp = (tdata *)arg;
    string line = tmp->line;
    string s = tmp->arg;
    int sd = tmp->sd_c;
    int line_no = tmp->line_no;
    string::size_type startpos = 0;  // was int: comparing an int against std::string::npos is unreliable
    int finds = 0;
    while ((startpos = line.find(s, startpos)) != std::string::npos)
    {
        ++finds;
        startpos += 1;
        pthread_mutex_lock(&myMux);
        tcount++;
        pthread_mutex_unlock(&myMux);
    }
    pthread_mutex_lock(&mapMux);
    wcount[s] += finds;
    pthread_mutex_unlock(&mapMux);
    char buff[MAX_BUFF];
    // One bounded call replaces the original sprintf chain, whose last call
    // passed strlen("\n") as an argument to a format string with no conversion.
    snprintf(buff, sizeof(buff), "%s occurred %d times on line %d\n",
             s.c_str(), finds, line_no);
    send_data(sd, buff);
    delete tmp;
    return NULL;  // the function is declared to return void*
}

On the server side, make sure the shared resource (the socket, along with its associated internal buffer) is protected against concurrent access.
Define and implement an application-level protocol so that the client can distinguish what the different threads sent.
As an additional note: one cannot rely on read()/write() transferring as many bytes as those two functions were told to read/write. It is essential to check their return values to learn how many bytes were actually read/written, and to loop around them until all the data intended to be read/written has been transferred.
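As a minimal sketch of both points, assuming a process-wide pthread_mutex_t sockMux (which is not in the original code): each thread locks the socket for the duration of one complete, delimited message and loops until write() has accepted every byte:

#include <pthread.h>
#include <string.h>
#include <unistd.h>

pthread_mutex_t sockMux = PTHREAD_MUTEX_INITIALIZER;  // hypothetical: guards the shared socket

void send_all(int sd, const char *data)
{
    size_t len = strlen(data);
    size_t sent = 0;
    pthread_mutex_lock(&sockMux);   // no other thread may interleave its bytes now
    while (sent < len)
    {
        ssize_t n = write(sd, data + sent, len - sent);
        if (n <= 0)                 // error handling elided for brevity
            break;
        sent += (size_t)n;
    }
    pthread_mutex_unlock(&sockMux);
}

If every message ends with a delimiter (the '$' the client already checks for, or the '\n' the server appends), the client can split the stream back into whole messages even though several threads share one socket.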

You should guard your socket with a mutex.
When a thread uses the socket, it should lock the mutex first so that other threads are blocked until it is done.
I can't help you more without the server code, because the problem is probably in the server.

Related

Passing data to another thread in a C++ winsock app

So I have this winsock application (a server, able to accept multiple clients), where in the main thread I set up the socket and create another thread in which I listen for clients (the listen_for_clients function).
I also constantly receive data from a device in the main thread, which I then concatenate onto char-array buffers held in Client objects (the BroadcastSample function). Currently I create a thread for each connected client (the ProcessClient function), in which I initialize a Client object and push it into a global vector of clients; I then send data to this client through the socket whenever the buffer in the corresponding Client object exceeds 4000 characters.
Is there a way I can send data from the main thread to the separate client threads so I don't have to use structs/classes (and also to send a green light when I want the already-accumulated data sent)? And if I'm going to keep a global container of objects, what is a good way to remove a disconnected client's object from it without crashing the program because another thread is using the same container?
struct Client
{
    int buffer_len;
    char current_buffer[5000];
    SOCKET s;
};

std::vector<Client*> clientBuffers;

DWORD WINAPI listen_for_clients(LPVOID Param)
{
    SOCKET client;
    sockaddr_in from;
    int fromlen = sizeof(from);
    char buf[100];
    while(true)
    {
        client = accept(ListenSocket, (struct sockaddr*)&from, &fromlen);
        if(client != INVALID_SOCKET)
        {
            printf("Client connected\n");
            unsigned dwThreadId;
            HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ProcessClient, (void*)client, 0, &dwThreadId);
        }
    }
    closesocket(ListenSocket);
    WSACleanup();
    ExitThread(0);
}

unsigned __stdcall ProcessClient(void *data)
{
    SOCKET ClientSocket = (SOCKET)data;
    Client *a = new Client();
    a->current_buffer[0] = '\0';
    a->buffer_len = 0;
    a->s = ClientSocket;
    clientBuffers.push_back(a);
    char szBuffer[255];
    while(true)
    {
        if(a->buffer_len > 4000)
        {
            send(ClientSocket, a->current_buffer, sizeof(a->current_buffer), 0);
            memset(a->current_buffer, 0, 5000);
            a->buffer_len = 0;
            a->current_buffer[0] = '\0';
        }
    }
    exit(1);
}

// function below is called only in the main thread, about every 100 ms
void BroadcastSample(Sample s)
{
    for(std::vector<Client*>::iterator it = clientBuffers.begin(); it != clientBuffers.end(); it++)
    {
        strcat((*it)->current_buffer, s.to_string);
        (*it)->buffer_len += strlen(s.to_string);
    }
}
This link has some Microsoft documentation on MS-style mutexes (mutices?).
This other link has some general info on mutexes.
Mutexes are the general mechanism for protecting data that is accessed by multiple threads. There are data structures with built-in thread safety, but in my experience they usually have caveats that you'll eventually miss. That's just my two cents.
Also, for the record, you shouldn't use strcat, but rather strncat. And if one of your client-servicing threads accesses one of those buffers after strcat has overwritten the old '\0' but before it appends the new one, you'll get a buffer overread (a read past the end of the allocated buffer).
Mutexes will also solve your current busy-waiting problem. I'm not currently near a Windows compiler, or I'd try to help more.
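A minimal sketch of the idea using Win32 primitives (a CRITICAL_SECTION plus a CONDITION_VARIABLE, available since Vista); the names cbLock, cbReady, AppendSample, and WaitAndSend are assumptions for illustration, not part of the original code:

#include <windows.h>
#include <string.h>

CRITICAL_SECTION   cbLock;   // guards clientBuffers and every Client's buffer
CONDITION_VARIABLE cbReady;  // signaled when a buffer may be ready to send
// Call InitializeCriticalSection(&cbLock) and InitializeConditionVariable(&cbReady) once at startup.

// Main thread: append under the lock, then wake the client threads.
void AppendSample(Client *c, const char *text, size_t textLen)
{
    EnterCriticalSection(&cbLock);
    if (c->buffer_len + textLen < sizeof(c->current_buffer))
    {
        memcpy(c->current_buffer + c->buffer_len, text, textLen + 1);  // copies the '\0' too
        c->buffer_len += (int)textLen;
    }
    LeaveCriticalSection(&cbLock);
    WakeAllConditionVariable(&cbReady);
}

// Client thread: sleep instead of busy-waiting, send once enough has accumulated.
void WaitAndSend(Client *c)
{
    EnterCriticalSection(&cbLock);
    while (c->buffer_len <= 4000)
        SleepConditionVariableCS(&cbReady, &cbLock, INFINITE);
    send(c->s, c->current_buffer, c->buffer_len, 0);  // sending under the lock keeps the sketch short
    c->buffer_len = 0;
    c->current_buffer[0] = '\0';
    LeaveCriticalSection(&cbLock);
}

Removing a disconnected client is then safe as long as the erase from clientBuffers also happens between EnterCriticalSection and LeaveCriticalSection, so no other thread can be iterating the vector at that moment.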

Reading on a serial port returns what I just wrote

I just started a project where I've been struggling for days with serial ports. I wrote a static library that handles all the serial routines and provides an interface with readLine() and writeLine() functions.
Everything works flawlessly on the write and read side (which are threaded, by the way), except that if the slave does not answer after it gets the data, the data is sent back to me and I read it.
I open my fd with O_NDELAY and configure my read system call as non-blocking with fcntl.
Here are the two threaded loops, which work perfectly apart from that:
void *Serial_Port::readLoop(void *param)
{
    Serial_Port *sp = static_cast<Serial_Port*>(param);
    std::string *line = NULL;
    char buffer[128];
    while (1)
    {
        line = new std::string();
        while ((line->find("\r\n")) == std::string::npos)
        {
            usleep(100);
            bzero(buffer, 128);
            pthread_mutex_lock(sp->getRLock());
            if (read(sp->getDescriptor(), buffer, 127) > 0)
                *line += buffer;
            pthread_mutex_unlock(sp->getRLock());
        }
        pthread_mutex_lock(sp->getRLock());
        sp->getRStack()->push(line->substr(0, line->find("\r\n")));
        pthread_mutex_unlock(sp->getRLock());
        delete line;
    }
    return param;
}

void *Serial_Port::writeLoop(void *param)
{
    Serial_Port *sp = static_cast<Serial_Port*>(param);
    std::string *line;
    while (1)
    {
        line = NULL;
        pthread_mutex_lock(sp->getWLock());
        if (!sp->getWStack()->empty())
        {
            line = new std::string(sp->getWStack()->front());
            sp->getWStack()->pop();
        }
        pthread_mutex_unlock(sp->getWLock());
        if (line != NULL)
        {
            pthread_mutex_lock(sp->getWLock());
            write(sp->getDescriptor(), line->c_str(), line->length());
            // fsync(sp->getDescriptor());
            pthread_mutex_unlock(sp->getWLock());
            delete line;  // was leaked in the original
        }
        usleep(100);
    }
    return param;
}
I tried to flush the file descriptor, but I can't manage to receive any data after doing that. How can I get rid of that duplicate, needless data?
Thanks.
After multiple tests and behavior analysis, I discovered it was the "Pulsar3" (the device I was using on the serial line) that kept echoing back what I sent as an "acknowledge". Nice to know!
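For anyone hitting the same symptom when the device is not the culprit: the tty driver itself can echo input back. A minimal sketch (assuming fd is the already-open descriptor; the function name is illustrative) that disables local echo and canonical processing with termios:

#include <termios.h>

// Put an open serial fd into no-echo, non-canonical mode.
int make_raw_no_echo(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) < 0)
        return -1;
    tio.c_lflag &= ~(ECHO | ECHOE | ECHONL | ICANON);  // no echo, no line buffering
    return tcsetattr(fd, TCSANOW, &tio);
}

Here the echo really came from the Pulsar3 itself, so the fix belongs on the device side, but ruling out driver echo first can save a lot of head-scratching.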

Windows C++ Intermittent Socket Disconnect

I've got a server that uses a two-thread system to manage between 100 and 200 concurrent connections. It uses TCP sockets, as packet delivery guarantees are important (it's a communication system where missed remote API calls could FUBAR a client).
I've implemented a custom protocol layer to separate incoming bytes into packets and dispatch them properly (the library is included below). I realize the issues with using MSG_PEEK, but to my knowledge it is the only approach that fulfills the needs of the library implementation. I am open to suggestions, especially if this could be part of the problem.
Basically, the problem is that, at random, the server will drop the client's socket due to a lack of incoming packets for more than 20 seconds, despite the client successfully sending a keepalive packet every 4. I can verify that the server itself didn't go offline and that the connections of the users experiencing the problem (including myself) were stable.
The library for sending/receiving is here:
short ncsocket::send(wstring command, wstring data) {
    wstringstream ss;
    int datalen = ((int)command.length() * 2) + ((int)data.length() * 2) + 12;
    ss << zero_pad_int(datalen) << L"|" << command << L"|" << data;
    int tosend = datalen;
    short __rc = 0;
    do {
        int res = ::send(this->sock, (const char*)ss.str().c_str(), datalen, NULL);
        if (res != SOCKET_ERROR)
            tosend -= res;
        else
            return FALSE;
        __rc++;
        Sleep(10);
    } while (tosend != 0 && __rc < 10);
    if (tosend == 0)
        return TRUE;
    return FALSE;
}

short ncsocket::recv(netcommand& nc) {
    vector<wchar_t> buffer(BUFFER_SIZE);
    int recvd = ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, MSG_PEEK);
    if (recvd > 0) {
        if (recvd > 8) {
            wchar_t* lenstr = new wchar_t[5];  // one extra element so _wtoi sees a terminator
            memcpy(lenstr, buffer.data(), 8);
            lenstr[4] = L'\0';
            int fulllen = _wtoi(lenstr);
            delete[] lenstr;                   // was 'delete'; new[] requires delete[]
            if (fulllen > 0) {
                if (recvd >= fulllen) {
                    buffer.resize(fulllen / 2);
                    recvd = ::recv(this->sock, (char*)buffer.data(), fulllen, NULL);
                    if (recvd >= fulllen) {
                        buffer.resize(buffer.size() + 2);
                        buffer.push_back((char)L'\0');
                        vector<wstring> data = parsewstring(L"|", buffer.data(), 2);
                        if (data.size() == 3) {
                            nc.command = data[1];
                            nc.payload = data[2];
                            return TRUE;
                        }
                        else
                            return FALSE;
                    }
                    else
                        return FALSE;
                }
                else
                    return FALSE;
            }
            else {
                ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, NULL);
                return FALSE;
            }
        }
        else
            return FALSE;
    }
    else
        return FALSE;
}
This is the code for determining if too much time has passed:
if ((int)difftime(time(0), regusrs[i].last_recvd) > SERVER_TIMEOUT) {
    regusrs[i].sock.end();
    regusrs[i].is_valid = FALSE;
    send_to_all(L"removeuser", regusrs[i].server_user_id);
    wstringstream log_entry;
    log_entry << regusrs[i].firstname << L" " << regusrs[i].lastname << L" (suid:" << regusrs[i].server_user_id << L",p:" << regusrs[i].parent << L",pid:" << regusrs[i].parentid << L") was disconnected due to idle";
    write_to_log_file(server_log, log_entry.str());
}
The "regusrs[i]" is using the currently iterated member of a vector I use to story socket descriptors and user information. The 'is_valid' check is there to tell if the associated user is an actual user - this is done to prevent the system from having to deallocate the member of the vector - it just returns it to the pool of available slots. No thread access/out-of-range issues that way.
Anyway, I started to wonder if it was the server itself was the problem. I'm testing on another server currently, but I wanted to see if another set of eyes could stop something out of place or cue me in on a concept with sockets and extended keepalives that I'm not aware of.
Thanks in advance!
I think I see what you're doing with MSG_PEEK, where you wait until it looks like you have enough data to read a full packet. However, I would be suspicious of this. (It's hard to determine the dynamic behaviour of your system just by looking at this small part of the source and not the whole thing.)
To avoid use of MSG_PEEK, follow these two principles:
When you get a notification that data is ready (I assume you're using select), then read all the waiting data from recv(). You may use more than one recv() call, so you can handle the incoming data in pieces.
If you read only a partial packet (length or payload), then save it somewhere for the next time you get a read notification. Put the packets and payloads back together yourself, don't leave them in the socket buffer.
As an aside, the use of new/memcpy/wtoi/delete is woefully inefficient. You don't need to allocate memory at all, you can use a local variable. And then you don't even need the memcpy at all, just a cast.
I presume you already assume that your packets can be no longer than 999 bytes in length.
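To make the two principles concrete, here is a minimal sketch (the names PacketReader, pending_, and on_readable are assumptions, not part of the asker's library): keep a per-connection byte buffer, append whatever recv() returns, and only consume a packet once the whole thing has arrived. It also shows the local-variable-plus-cast trick from the aside, with no new/memcpy/delete on the heap:

#include <winsock2.h>
#include <string>
#include <vector>

class PacketReader {
public:
    // Call when select() reports the socket readable; appends complete
    // packets to 'packets' and keeps any partial one for next time.
    bool on_readable(SOCKET s, std::vector<std::wstring>& packets) {
        char chunk[4096];
        int n = ::recv(s, chunk, sizeof(chunk), 0);   // plain recv, no MSG_PEEK
        if (n <= 0)
            return false;                             // closed or error
        pending_.append(chunk, n);                    // stash everything we got
        for (;;) {
            if (pending_.size() < 8)                  // not even a full length prefix yet
                break;
            // Length prefix: 4 wide chars (8 bytes). A local array with a
            // terminator is enough; no heap allocation needed.
            wchar_t lenstr[5];
            memcpy(lenstr, pending_.data(), 8);
            lenstr[4] = L'\0';
            int fulllen = _wtoi(lenstr);
            if (fulllen <= 0)
                return false;                         // corrupt stream; drop the connection
            if ((int)pending_.size() < fulllen)
                break;                                // payload still in flight; wait for more
            const wchar_t* p = (const wchar_t*)pending_.data();
            packets.push_back(std::wstring(p, fulllen / 2));
            pending_.erase(0, fulllen);               // consume exactly one packet
        }
        return true;
    }
private:
    std::string pending_;  // raw bytes carried over between read notifications
};

With this in place there is no dependence on how the TCP stack chunks the stream, which is usually what makes MSG_PEEK-based designs stall on keepalives.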

Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
We've got a relatively simple application that takes data from a source (DB or file) and streams it over TCP to connected clients as new data comes in. It's a relatively low number of clients; I would say at most 10 clients per server. So we have the following rough design:
client: connects to the server and blocks on read (with the timeout set higher than the server's heartbeat message frequency).
server: one listening thread that accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is detached (we use boost::thread, so we just call its .detach() function). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers using a single Perl script, calling "fork" for each server process.
The problem(s):
At seemingly random times, the client will shut down with a "connection terminated (SUCCESFUL)" message, indicating that the remote server shut down the socket on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything. It just crashes.
Now, to compound the problem, we have multiple instances of the server app being started by a startup script, running against different files and different ports. When ONE of the servers crashes like this, ALL the servers crash out.
Both the server and client use the same "Connection" library created in-house. It's mostly a C++ wrapper for the C socket calls.
Here's some rough code for the write and read functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if(rec == 0)
        status = CONNECTION_CLOSED;
    else if(rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}

int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if(epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if(bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like it NOT to crash the server even if the client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while(server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if(s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n", connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if(dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while(running)
{
    memset(buf, '\0', BUFFERSIZE * sizeof(char));
    unsigned int readBytes = 0;
    while((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf, readBytes);
    if(socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.
You've probably got everything backwards... when the server crashes, the OS closes all of its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also any scheduled tasks that all servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close, not fwrite, not printf).
Check return values. Check for negative return value from write on a socket, but check all return values.
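A minimal sketch of such a handler (the log path and function names are assumptions); it sticks to async-signal-safe calls (open, write, close only), as advised above:

#include <signal.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static void crash_handler(int signo)
{
    // Only async-signal-safe functions here: no printf, no malloc, no C++ streams.
    int fd = open("/tmp/server_crash.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd >= 0)
    {
        const char msg[] = "caught fatal signal\n";
        write(fd, msg, sizeof(msg) - 1);
        close(fd);
    }
    signal(signo, SIG_DFL);  // restore the default action and re-raise so a core dump is produced
    raise(signo);
}

void install_handlers()  // call once at startup
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = crash_handler;
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGABRT, &sa, NULL);
    signal(SIGPIPE, SIG_IGN);  // writes to dead sockets then fail with EPIPE instead of killing the process
}

SIGPIPE in particular matters here: a write to a socket whose peer has gone away raises SIGPIPE, and its default action terminates the process, which looks exactly like "client crash kills server".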
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was trying to do in reality; safe to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.

Chat server design of the "main" loop

I am writing a small TCP chat server, but I am encountering some problems I can't figure out how to solve "elegantly".
Below is the code for my main loop. It:
1. Initiates a vector with the basic event, which is flagged when a new TCP connection is made.
2. Gets this connection and pushes its socket back into a vector, too. Then it creates a CSingleConnection object from the socket.
2.1. Gets the event from the CSingleConnection, which is flagged when the connection receives data.
3. When data is received, the wait completes and returns the index of the handle in the array; with all those other vectors it seems I can determine which connection is sending.
But as everybody can see, this methodology is really poor. I can't figure out how to do all this better - getting the connection socket, creating a single connection, and so on.
Any suggestions, improvements, etc.?
void CServer::MainLoop()
{
    DWORD dwResult = 0;
    bool bMainLoop = true;
    std::vector<std::string> vecData;
    std::vector<HANDLE> vecEvents;  // Contains the handles to wait on
    std::vector<SOCKET> vecSocks;   // Contains the sockets
    enum
    {
        ACCEPTOR = 0,  // First element: sequence is mandatory
        EVENTSIZE      // Keep as the last element!
    };
    // Initiate the vector with the basic handles
    vecEvents.clear();
    GetBasicEvents(vecEvents);
    while(bMainLoop)
    {
        // Wait for ANY event handle; the original passed TRUE (wait for ALL),
        // which contradicts using the return value as an index
        dwResult = WaitForMultipleObjects(vecEvents.size(), &vecEvents[0], FALSE, INFINITE);
        // New connection(s) made
        if(dwResult == (int)ACCEPTOR)
        {
            // Get the sockets for the new connections
            m_pAcceptor->GetOutData(vecSocks);
            // Create new connections
            for(unsigned int i = 0; i < vecSocks.size(); i++)
            {
                // Add a new connection
                CClientConnection Conn(vecSocks[i]);
                m_vecConnections.push_back(Conn);
                // Add event
                vecEvents.push_back(Conn.GetOutEvent());
            }
        }
        // Data from one of the connections
        if(dwResult >= (int)EVENTSIZE)
        {
            Inc::MSG Msg;
            // Connection index: events 0..EVENTSIZE-1 are the basic ones
            unsigned int idx = dwResult - EVENTSIZE;
            // Get received string data
            m_vecConnections[idx].GetOutData(vecData);
            // Handle the data
            for(unsigned int i = 0; i < vecData.size(); i++)
            {
                // Convert data into message
                if(Inc::StringToMessage(vecData[i], Msg) != Inc::SOK)
                    continue;
                // Add the socket to the sender information
                Msg.Sender.sock = vecSocks[idx];  // assumes vecSocks parallels m_vecConnections
                // Evaluate and delegate data and task
                EvaluateMessage(Msg);
            }
        }
    }
}
Do not re-invent the wheel - use Boost.Asio. It is well optimized, utilizing kernel-specific features of different operating systems, and is designed in a way that keeps client code architecture simple. There are a lot of examples and plenty of documentation, so you just cannot get it wrong.
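As a rough sketch of what the same main loop looks like in Boost.Asio (assuming Boost 1.66 or newer for io_context; the class names Session and ChatServer are illustrative, not from any particular example):

#include <array>
#include <memory>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class Session : public std::enable_shared_from_this<Session>
{
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { read(); }
private:
    void read()
    {
        auto self = shared_from_this();  // keep the session alive while the read is pending
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](boost::system::error_code ec, std::size_t n)
            {
                if (ec)
                    return;  // connection gone; the Session is freed automatically
                // ...convert data_[0..n) to a message and call EvaluateMessage here...
                read();      // re-arm: one pending read per connection
            });
    }
    tcp::socket socket_;
    std::array<char, 1024> data_;
};

class ChatServer
{
public:
    ChatServer(boost::asio::io_context& io, unsigned short port)
        : acceptor_(io, tcp::endpoint(tcp::v4(), port)) { accept(); }
private:
    void accept()
    {
        acceptor_.async_accept(
            [this](boost::system::error_code ec, tcp::socket socket)
            {
                if (!ec)
                    std::make_shared<Session>(std::move(socket))->start();
                accept();  // keep accepting further clients
            });
    }
    tcp::acceptor acceptor_;
};

int main()
{
    boost::asio::io_context io;
    ChatServer server(io, 5555);
    io.run();  // replaces the WaitForMultipleObjects loop
}

Each connection owns its own read chain, so there is no hand-maintained array of events or sockets, and io_context::run() replaces the manual dispatch on WaitForMultipleObjects return values.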