Server application crashing as soon as a client disconnects - C++

So I just programmed a simple multithreaded client-server application using Winsock2 and TCP.
Here is a quick summary of how it works:
The server's main thread sits in an endless loop, accepting clients and adding them to the server's vector, which holds every connected client, like this:
(only including the parts relevant to my question)
std::vector<Client*> clients;
while (true){
    clients.push_back(&Client(accept(serverSocket, NULL, NULL), this));
}
When a new client connects, we create a new Client object with the new client's socket and the server itself as parameters.
My idea was then to give every client its own thread, so every client can send data at the same time.
std::thread tickThread;

Client::Client(SOCKET socket, Server* server) :
    isConnected(true),
    socket(socket),
    server(server)
{
    tickThread = std::thread(&Client::tick, this);
}
The client's thread then checks whether the client sent something and passes it on to the server. It also checks whether the client is still connected.
void Client::tick(){
    while (isConnected){
        errorHandler = recv(socket, receivedData, 255, 0);
        if (errorHandler == SOCKET_ERROR){
            disconnect();
        }
        else {
            // send received data to server
        }
    }
}
If the client disconnected, it tells the server to remove it from the connected-clients vector and sets the "isConnected" bool to false so the thread can exit its function.
void Client::disconnect(){
    isConnected = false;
    server->removeClient(this);
}
This is how it's supposed to work; however, as soon as a client disconnects, the server crashes with the error:
R6010 - abort() has been called
All debugging shows me is this as my error:
switch (_CrtDbgReportW(_CRT_ERROR, NULL, 0, NULL, L"%s", error_text)){
    case 1: _CrtDbgBreak(); msgshown = 1; break;
    case 0: msgshown = 1; break;
}
So yeah, I don't really know what's causing this crash, but I suspect it might be related to the thread calling a member function of a Client that is effectively being deleted as it is removed from the server's client vector.
And if this turns out to be the problem, could you give me ideas for a better way of implementing every client having its own thread?
Edit: I fixed the vector error, but the crash still happens as soon as a client disconnects.

The error is in this block of code:
while (true){
    clients.push_back(&Client(accept(serverSocket, NULL, NULL), this));
}
Client(accept(serverSocket, NULL, NULL), this) is an expression which generates a temporary Client object that is destroyed when the statement finishes executing. However, you take the address of that temporary object and add it to your vector.
If you want to create Client objects and store pointers to them, you will need to allocate memory for them. I would recommend using std::unique_ptr to manage them so that your vector claims ownership of their memory and automatically frees them if they are removed from the vector or the vector itself is destroyed. Then your code becomes:
std::vector<std::unique_ptr<Client>> clients;
while (true){
    clients.push_back(std::make_unique<Client>(accept(serverSocket, NULL, NULL), this));
}
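One more thing worth checking once the ownership issue is fixed: destroying a std::thread that is still joinable calls std::terminate(), which shows up as exactly the "R6010 - abort() has been called" message. Assuming tickThread is (or becomes) a member of Client, a minimal sketch of a destructor that avoids this could look like:
// Sketch only -- assumes <thread> is included, tickThread is a member of Client,
// and isConnected is visible to the tick thread (ideally a std::atomic<bool>).
Client::~Client()
{
    isConnected = false;                 // ask the tick loop to stop
    closesocket(socket);                 // unblocks a recv() still waiting in tick()

    if (tickThread.joinable())
    {
        if (tickThread.get_id() == std::this_thread::get_id())
            tickThread.detach();         // destructor ran on the tick thread itself
        else
            tickThread.join();           // wait for the tick thread to finish
    }
}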

In this piece of code:
clients.push_back(&Client(accept(serverSocket, NULL, NULL), this));
You are pushing the address of a temporary object into the container. When push_back() is done, the temporary object is destroyed, so that address is no longer valid. I wonder what kind of compiler allows you to do this.


How do I (gracefully) terminate a gSOAP server?

I have a gSOAP server generated from a WSDL file plus a Qt GUI. The generated code works perfectly fine, except for one thing: the process stays alive after the GUI exits. (I'm deploying on Windows, so I have no signaling.)
I need my GUI to stay responsive (naturally), so I moved the server-proxy object into a QObject-based class, moved that class to another QThread, and start it with an external signal. The server now runs on the event loop of its parent QObject and works fine.
The only problem is that I have no clue how to terminate the server on exit. I tried tweaking the generated server code (is that really a good idea, by the way?):
int MySweetService::run(int port)
{
    if (!soap_valid_socket(this->soap->master) && !soap_valid_socket(this->bind(NULL, port, 100)))
        return this->soap->error;
    for (;;) // =====> Maybe here I can put my while(module_is_running_atomic_bool) ?
    {
        if (!soap_valid_socket(this->accept()))
        {
            if (this->soap->errnum == 0) // timeout?
                this->soap->error = SOAP_OK;
            break;
        }
        if (this->serve())
            break;
        this->destroy();
    }
    return this->soap->error;
}
Calling soap_done(&soap) from another thread terminates the blocking call to accept() and, in turn, your "serving" thread. It works for me on Windows but it doesn't on Linux - it looks like gSOAP has some multithreading issue there. You also need a boolean flag so the "serving" thread knows that you shut it down deliberately and that it's not just a gSOAP error.
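For the loop itself, the comment in the question is roughly the right idea. A minimal sketch (the std::atomic<bool> member m_running is an assumption, not part of the generated code):
// Sketch: cooperative shutdown. The GUI thread sets m_running to false and then
// calls soap_done(&soap) to break the blocking accept(); m_running tells this
// loop that the shutdown was intentional.
int MySweetService::run(int port)
{
    if (!soap_valid_socket(this->soap->master) && !soap_valid_socket(this->bind(NULL, port, 100)))
        return this->soap->error;
    while (m_running)                      // was: for (;;)
    {
        if (!soap_valid_socket(this->accept()))
        {
            if (this->soap->errnum == 0)   // timeout or socket closed from outside
                this->soap->error = SOAP_OK;
            break;
        }
        if (this->serve())
            break;
        this->destroy();
    }
    return this->soap->error;
}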

Writing to QTcpSocket does not always emit readyRead signal on opposite QTcpSocket

I have been stuck on this for the past 5 days, I have no idea how to proceed.
Overview:
I have a client UI which interacts with a data handler library, and the data handler library utilizes a network manager library, which is where my problem lies.
More Info
Firstly, Qt provides a basic example of the interaction between a QTcpServer (Fortune Server) and a QTcpSocket (Fortune Client).
I implemented this code in an extremely basic example of my own, which works like a charm and has no issues.
My own adaptation of the fortune client and server, for the record (basic)
Quick Explanation:
The server application runs, you click Start Server; then on the client side you enter text in the field, click Connect to Server, and the text is displayed. Easy!
Problem:
Implementing the code above in my network manager library does not fire QTcpSocket::readyRead() in the server application above.
The client connects to the server, where QTcpServer::newConnection() is fired as expected; straight after that the client writes to the socket, but readyRead() on the server socket does not fire, even though it does in the given example.
Note:
The same port and IP address are used in the example server-client application and in my current application, and the server is also running.
Further Information:
I copied the code over directly from the client above. Only two things were changed/modified:
the string that is sent to the server
the return types of the method
This was copied into my network manager's ::write() method. When my application runs, an instance of QMainWindow is passed via the data handler class, which creates an instance of my network manager class; the network manager inherits QObject and uses the Q_OBJECT macro.
Code Examples:
//client_UI class (snippet):
data_mananger *dman = new data_mananger(this); // this -> QMainWindow
ReturnObject r = dman->NET_AuthenticateUser_GetToken(Query);

//data_manager library (snippet)
data_mananger::data_mananger(QObject *_parent) :
    parent(_parent)
{}

ReturnObject data_mananger::NET_AuthenticateUser_GetToken(QString Query){
    //Query like "AUTH;U=xyz#a;P=1234"
    //convert query string to char
    QByteArray ba = Query.toLatin1();
    //send query and get QList return
    ReturnCode rCode = networkManager.write(ba);
    //...
}
//netman library (snippet)
//.h
class NETMANSHARED_EXPORT netman : public QObject
{
    Q_OBJECT
public:
    netman();
    netman(QObject *_parent);
    //...
private:
    QTcpSocket *tcp_con;
    //...
};

//cpp
netman::netman(QObject *_parent) :
    parent(_parent)
{
    tcp_con = new QTcpSocket(parent);
}
    // ... (start of this function was cut off in the post)
    return;
}

// ... (another truncated fragment)
    serverIP.setAddress(serverInfo.addresses().first().toIPv4Address());
}
ReturnCode netman::write(QByteArray message, int portNumber){
    tcp_con->connectToHost(QHostAddress("127.0.0.1"), 5000);
    if (!tcp_con->waitForConnected())
    {
        qDebug(log_lib_netman_err) << "Unable to connect to server";
        return ReturnCode::FailedConnecting;
    }
    if (!tcp_con->isValid()) {
        qDebug(log_lib_netman_err) << "tcp socket invalid";
        return ReturnCode::SocketError;
    }
    if (!tcp_con->isOpen()) {
        qDebug(log_lib_netman_err) << "tcp socket not open";
        return ReturnCode::SocketError;
    }
    // QByteArray block(message);
    QByteArray block;
    QDataStream out(&block, QIODevice::WriteOnly);
    out.setVersion(QDataStream::Qt_4_0);
    out << QString("Hello world");
    if (!tcp_con->write(block)){
        qDebug(log_lib_netman_err) << "Unable to send data to server";
        return ReturnCode::WriteFailed;
    }
    else{
        qDebug(log_lib_netman_info) << "Data block sent";
        return ReturnCode::SentSuccess;
    }
}
Conclusion:
The core code of the client side has been fully implemented, yet I cannot see why this error occurs.
I would very much appreciate help/advice!
Add a tcp_con->flush() statement to the end of your write function.
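In other words, the tail end of netman::write() would look roughly like this (a sketch based on the code in the question):
// Sketch of the end of netman::write():
if (tcp_con->write(block) == -1) {
    qDebug(log_lib_netman_err) << "Unable to send data to server";
    return ReturnCode::WriteFailed;
}
tcp_con->flush();   // push the buffered bytes to the OS socket right away
qDebug(log_lib_netman_info) << "Data block sent";
return ReturnCode::SentSuccess;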
Why/how this works
You weren't getting a readyRead signal in your receiver because the written data was being buffered into the socket but not actually transmitted 'over the wire'. The flush() call causes the buffer to be transmitted. From the docs:
This function writes as much as possible from the internal write
buffer to the underlying network socket, without blocking. If any data
was written, this function returns true; otherwise false is returned.
How are you supposed to know
In my case a lot of experience/frustration with serial ports and flushing. It's the equivalent of "have you rebooted it?" in the socket debugging toolbox.
If everything else is working fine, you may not have to flush, but it's kind of application specific and depends on the lifetime of the socket, the TCP window size, socket option settings, and various other factors. That said, I always flush because I like having complete control over my sockets, and I want to make sure data is transmitted when I want it to be. I don't think it's a hack, but in some cases it could be indicative of some other problem. Again, application specific.
Why might the buffer not be flushing itself?
I'm pretty sure no flush is needed in the fortune server example because it calls disconnectFromHost() at the end of the sendFortune() function, and from the Qt documentation:
Attempts to close the socket. If there is pending data waiting to be
written, QAbstractSocket will enter ClosingState and wait until all
data has been written.
The socket would disconnect if it were destroyed as well, but from what I can see of your code you aren't doing that either, and the buffer isn't full, so probably nothing is actually stimulating the buffer to flush itself.
Other causes can be:
flow control isn't returned to the event loop (blocking calls, etc.), so the buffer flush is never performed.
transmit occurs inside a loop which looks like it will exit (e.g. while(dataToTransmit)), but the condition in fact never becomes false, which leaves the event loop blocked.
Nagle's algorithm: the buffer may be waiting for more data before it flushes itself, to keep network throughput high. You can disable this by setting QAbstractSocket::LowDelayOption (see the one-liner below), but it may adversely affect your throughput... it's normally used for latency-sensitive applications.
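The Nagle tweak, if you ever need it, is a one-liner on the connected socket (sketch):
// Trades some throughput for latency; only do this if buffering is really the problem.
tcp_con->setSocketOption(QAbstractSocket::LowDelayOption, 1);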

Can I use QTcpSocket again for another connection after deleteLater?

I'm trying to use QTcpServer in a way that keeps a connection with one and only one client at a time, until that client disconnects. So, I keep the client in a member pointer of my class.
The problem arises here: in the examples I see on the internet, deleteLater() is called after disconnected(). Fine, but I want to use this member pointer again for another connection. Remember that the server keeps one and only one client at a time. So, what happens if the socket object is deleted after another connection has been assigned to it?
What I mean is:
class TcpServer : public QTcpServer {
    ...
private:
    QTcpSocket* client;
};

void TcpServer::connected() {
    client = this->nextPendingConnection();
    this->pauseAccepting();
    connect(client, SIGNAL(disconnected()), this, SLOT(clientDisconnected()));
}

void TcpServer::clientDisconnected() {
    client->deleteLater();
    this->resumeAccepting();
}
The scenario is this:
A client connects. So, client = nextPendingConnection();
The server pauses listening; it does not accept new connections.
The client disconnects. client needs to be released, so client->deleteLater() is called.
The server resumes listening.
A new connection comes in. So, I need to do client = nextPendingConnection(); again.
But was the previous client object deleted? Maybe? Maybe not? What if the event loop tries to delete client after I have assigned the new connection to it in step 5?
So, how do I keep one and only one client while deleting the previously disconnected ones?
Would it be safe if I do this?
void TcpServer::clientDisconnected()
{
    QTcpSocket* ptr = client;
    ptr->deleteLater();
    ...
}
I will cite the Qt documentation about it:
The object will be deleted when control returns to the event loop.
So deleteLater() is a delayed delete. The object is to be regarded as deleted as soon as the call to deleteLater() is made.
Your nextPendingConnection() call will create another object that needs to be deleted some time later.
However, in your case you only allow one pending connection, as you said, and disallow accepting until the client gets disconnected. In this case it should be safe; in other cases you could overwrite your client pointer and lose control over it (memory leak).
Even in your case, I would prefer this solution:
void TcpServer::clientDisconnected()
{
    if (qobject_cast<QAbstractSocket*>(sender())) {
        sender()->deleteLater();
    }
    ...
}
This would also be safe if more than one connection is allowed in future changes of your application.
As I understand it, nextPendingConnection() will return a pointer to a new QTcpSocket object, so you have nothing to worry about.
deleteLater() will schedule only your old object for deletion. QTcpSocket* client contains just a pointer to a QTcpSocket object; when you call deleteLater(), Qt will delete only the object that client pointed to at the time of the call.
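Putting both answers together, a sketch of the whole cycle with the single reused member pointer might look like this:
// Sketch: `client` is only a pointer; deleteLater() schedules the OLD socket
// object for deletion, so reassigning the pointer afterwards is safe.
void TcpServer::clientDisconnected()
{
    if (qobject_cast<QAbstractSocket*>(sender()))
        sender()->deleteLater();              // old QTcpSocket dies once the event loop runs
    client = nullptr;                         // the member may now be reused
    this->resumeAccepting();
}

void TcpServer::connected()
{
    client = this->nextPendingConnection();   // a brand-new QTcpSocket object
    this->pauseAccepting();
    connect(client, SIGNAL(disconnected()), this, SLOT(clientDisconnected()));
}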

Indy10 TCP Server Freezing

I'm using Indy with C++ Builder XE3. It's a great system, but I have some problems. IdTCPServer works really well, but when it has some connections and I want to stop the server, my application freezes. Let me describe what I do step by step:
1) Start the application (and the server listening)
2) Wait for new connections (or simulate them, no difference)
3) When we have 10-15 connections, try to stop the server listening.
4) When the code reaches IdTCPServer1->Active = false, the application freezes.
I made a little video; maybe it explains the situation much better: http://www.youtube.com/watch?v=BNgTxYbLx8g
And here is my code:
OnConnect:
EnterCriticalSection(&CritLock);
++ActiveConnections;
SetActiveConnections(ActiveConnections);
LeaveCriticalSection(&CritLock);
OnDisconnect:
EnterCriticalSection(&CritLock);
--ActiveConnections;
SetActiveConnections(ActiveConnections);
LeaveCriticalSection(&CritLock);
StopServer code:
void TForm1::StopServer()
{
    TList *list = IdTCPServer1->Contexts->LockList();
    try
    {
        for(int i = 0; i < list->Count; ++i)
        {
            TIdContext *AContext = reinterpret_cast<TIdContext*>(list->Items[i]);
            try
            {
                if (AContext->Connection->Connected())
                {
                    AContext->Connection->IOHandler->InputBuffer->Clear();
                    AContext->Connection->IOHandler->WriteBufferCancel();
                    AContext->Connection->IOHandler->WriteBufferClear();
                    AContext->Connection->IOHandler->WriteBufferClose();
                    AContext->Connection->IOHandler->CloseGracefully();
                    AContext->Connection->Disconnect();
                }
            }
            catch (const Exception &e)
            {
            }
        }
    }
    __finally
    {
        IdTCPServer1->Contexts->UnlockList();
    }
    IdTCPServer1->Contexts->Clear();
    //IdTCPServer1->StopListening();
    IdTCPServer1->Active = false;
}
Thanks for any advice!
You need to get rid of all your StopServer() code except for the very last line. When TIdTCPServer is deactivated, it performs all necessary cleanups for you. DO NOT DO IT YOURSELF (especially since you are doing it wrong anyway).
void TForm1::StopServer()
{
    IdTCPServer1->Active = false;
}
Now, with just that code, if your app is still freezing, that means you are deadlocking the main thread. That happens if you call StopServer() in the context of the main thread and one of two things is happening in your server code:
one of your TIdTCPServer event handlers performs a synchronized operation to the main thread (either via TIdSync or TThread::Synchronize()).
one of your TIdTCPServer event handlers swallows Indy exceptions and does not allow TIdTCPServer to terminate one or more client threads correctly when needed.
Internally, the TIdTCPServer::Active property setter closes all active sockets and waits for their respective threads to fully terminate, blocking the calling thread until the setter exits. If you are deactivating the server in the main thread and one of the server threads performs a sync that the main thread cannot process, or otherwise does not terminate correctly when it should, that blocks the server deactivation from finishing and thus deadlocks the main thread.
So make sure that:
you are not performing sync operations to the main thread while the server is being deactivated by the main thread. If you must sync, then deactivate the server in a worker thread instead so the main thread is no longer blocked.
your event handlers are not swallowing any Indy EIdException-derived exceptions in try/catch blocks. If you catch such an exception, re-throw it when you are finished with it (see the sketch below). Let TIdTCPServer handle Indy exceptions so it can perform internal cleanups as needed.
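For the exception point, the pattern inside any TIdTCPServer event handler would be roughly (sketch):
// Sketch for an OnExecute/OnConnect handler: let Indy's own exceptions propagate
// so TIdTCPServer can clean up the client thread, and only handle your own errors.
try
{
    // ... per-client work ...
}
catch (const EIdException &)
{
    throw;  // re-throw Indy exceptions so the server can terminate the context properly
}
catch (const Exception &e)
{
    // log/handle application-level errors here
}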
Lastly, on a side note, you do not need to keep track of connections manually. TIdTCPServer already does that for you in the Contexts property. If you need to know how many clients are currently connected at any moment, simply LockList() the Contexts list, read its Count property (or do anything else you need to do with the clients), and then UnlockList() it. For example:
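// Sketch: ask TIdTCPServer itself how many clients are connected right now
// (same component names as in the question; the helper name is made up).
int TForm1::CountConnectedClients()
{
    TList *list = IdTCPServer1->Contexts->LockList();
    int count = 0;
    try
    {
        count = list->Count;   // one entry per connected client context
    }
    __finally
    {
        IdTCPServer1->Contexts->UnlockList();
    }
    return count;
}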

C++ MFC server app with sockets crashes and I cannot find the fault, help!

My program has one dialog and two sockets. Both sockets are derived from CAsyncSocket; one is for listening, the other is for receiving data from the client. My program crashes when a client tries to connect to the server application and the server needs to initialize the receiving socket.
This is my MFC dialog class.
class CFileTransferServerDlg : public CDialog
{
    ...
    ListeningSocket ListenSock;
    ReceivingSocket* RecvSock;
    void OnAccept(); // called when ListenSock gets a connection attempt
    ...
};
This is my derived socket class for receiving data; it calls the parent dialog's method when an event is signaled.
class ReceivingSocket : public CAsyncSocket
{
    ...
    CFileTransferServerDlg* m_pDlg; // for accessing the parent dialog's controls
    virtual void OnReceive(int nErrorCode);
    ...
};

ReceivingSocket::ReceivingSocket()
{
}
This is the dialog's function that handles an incoming connection attempt when the listening socket gets the event notification. This is where the crash happens.
void CFileTransferServerDlg::OnAccept()
{
    RecvSock = new ReceivingSocket; /* CRASH */
}

OR

void CFileTransferServerDlg::OnAccept()
{
    ReceivingSocket* tmpSock = new ReceivingSocket;
    tmpSock->SetParentDlg(this);
    CString message;
    if( ListenSock.Accept(*tmpSock) ) /* CRASH */
    {
        message.LoadStringW(IDS_CLIENT_CONNECTED);
        m_txtStatus.SetWindowTextW(message);
        RecvSock = tmpSock;
    }
}
My program crashes when I try to create a socket for receiving the file sent from the client application. OnAccept runs when the listening socket signals an incoming connection attempt, but my application then crashes. What could be wrong?
Error in debug mode:
Unhandled exception at 0x009c30e1 in FileTransferServer.exe: 0xC0000005: Access violation reading location 0xccccce58.
UPDATE:
I edited the code a little and found that inside sockcore.cpp, where Accept is defined, the program fails on this line of code:
ASSERT(rConnectedSocket.m_hSocket == INVALID_SOCKET);
I don't understand how that can happen. The ReceivingSocket class is somehow not getting constructed right. I derive it from CAsyncSocket, leave the constructor empty, and no matter where I create it, on the stack or on the heap, it always crashes.
Here is the complete project, both client and server; if anyone can take a look at it I would be really grateful. I apologize for the comments; they are in Croatian.
Visual Studio project
I've looked into your code. The issue seems to be that you never call ListeningSocket::SetParentDlg(CFileTransferServerDlg* parent). Since you also do not initialize the m_pDlg pointer in the ListeningSocket constructor, it has a random value, and the program may crash here and there when you access this pointer. (I also had a crash, but at a slightly different location than the one you pointed out.)
I've changed it this way:
In ListeningSocket.h I changed the constructor:
ListeningSocket(CFileTransferServerDlg* parent);
Also in ListeningSocket.cpp:
ListeningSocket::ListeningSocket(CFileTransferServerDlg* parent)
    : m_pDlg(parent)
{
}
The constructor of CFileTransferServerDlg changed this way:
CFileTransferServerDlg::CFileTransferServerDlg(CWnd* pParent /*=NULL*/)
    : CDialog(CFileTransferServerDlg::IDD, pParent),
      ListenSock(this)
{
    m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME);
}
The crash disappeared.
Other ways are possible of course.
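For example, if you would rather keep the default ListeningSocket constructor, a sketch of an alternative (using the SetParentDlg() setter mentioned above and the standard OnInitDialog override) could be:
// Sketch: initialize m_pDlg to NULL so it is never random, and wire the
// back-pointer once the dialog exists.
ListeningSocket::ListeningSocket()
    : m_pDlg(NULL)
{
}

BOOL CFileTransferServerDlg::OnInitDialog()
{
    CDialog::OnInitDialog();
    ListenSock.SetParentDlg(this);   // set the parent before the socket is used
    // ... existing initialization ...
    return TRUE;
}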
Really nice little programs, by the way :) I'll delete them now of course, since I probably can't afford the license fees :)
Maybe check to see if you've inherited ReceivingSocket correctly?
Check this out.