I have some devices attached to the same network. All of them run a TCP server. My PC is also connected to the same network, and I need to collect data from the other devices, so I'm about to write an app in the Qt framework which does that. I will exchange small packets with the devices, so I thought I'd make a generic object for devices which has a QTcpSocket member, and use signals and slots for receiving data. I have another class which I use to connect to devices. It inherits QObject and QRunnable. The QRunnable's run() method implements the connecting procedure and looks like this:
QTcpSocket socket;
socket.connectToHost(this->hostAddress, this->portNumber);
if (socket.waitForConnected())
{
    emit Connected(this->deviceId, socket.socketDescriptor());
}
else
{
    emit Error(this->deviceId);
}
This function is run in a separate thread using QThreadPool, to avoid blocking the caller while the connection is established:
Connector* connector = new Connector(hostAddress, port, id);
connect(connector, &Connector::Connected, this, &CommunicationLayer::Connected);
connect(connector, &Connector::Error, this, &CommunicationLayer::Error);
QThreadPool::globalInstance()->start(connector);
And when the Connected signal is fired, I instantiate a device object for that specific id:
this->devices.push_back(new Device(id, socketDescriptor, this));
connect(this->devices.back(), &Device::DataReceived, this, &CommunicationLayer::DataReceived);
The socket's descriptor is passed as an argument, and when the Device object is instantiated, I call QTcpSocket::setSocketDescriptor on the socket inside the device object with that argument.
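For reference, the Device side of that handoff might look roughly like this (a sketch; the constructor signature, the socket member, and the onReadyRead slot are assumed names, not the actual code):

Device::Device(int id, qintptr socketDescriptor, QObject *parent)
    : QObject(parent), id(id), socket(new QTcpSocket(this))
{
    // Adopt the descriptor; from now on this socket owns it and will
    // close it when the Device is destroyed.
    socket->setSocketDescriptor(socketDescriptor);
    connect(socket, &QTcpSocket::readyRead, this, &Device::onReadyRead);
}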
My problem is that sometimes I get strange messages:
QSocketNotifier: Invalid socket 7 and type 'Read', disabling...
QSocketNotifier: Invalid socket 7 and type 'Write', disabling...
Sometimes I don't get anything and it just works, sometimes I get both, sometimes just one of them. I am a bit clueless.
Edit:
I think I found the problem: because I'm declaring the socket in the run() function as an automatic (stack) variable, it goes out of scope when the function returns, and its destructor closes the underlying descriptor, so the descriptor I passed along becomes invalid. And because run() executes in a separate thread, sometimes my device is constructed before the function returns, and in that case the socket descriptor is still valid. I made the socket object in the run() function a pointer, and it works now, but I don't know whether it causes a memory leak. Any ideas? (If I use a smart pointer to manage the socket's lifetime, I get the same result as before with the automatic variable.)
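One leak-free alternative (a sketch under assumptions: a Connected signal changed to carry a QTcpSocket*, and a targetThread member; neither is in the original code) is to hand over the whole socket object instead of the bare descriptor, so the descriptor never outlives the socket that owns it:

void Connector::run()
{
    // Heap-allocated with no parent so it can be moved to another thread.
    QTcpSocket *socket = new QTcpSocket;
    socket->connectToHost(hostAddress, portNumber);
    if (socket->waitForConnected())
    {
        // A QObject may only be pushed to another thread from the thread
        // it currently lives in, which is exactly where run() executes.
        socket->moveToThread(targetThread); // e.g. qApp->thread()
        emit Connected(deviceId, socket);   // receiver takes ownership
    }
    else
    {
        delete socket;
        emit Error(deviceId);
    }
}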
Related
I am writing a simple synchronous asio server.
The workflow is the following: in an endless loop, accept connections and create a thread for each connection. I know this is not optimal, but async is too hard for me.
Here's my ugly code:
std::vector<asio::io_service*> ioVec;
std::vector<std::thread*> thVec;
std::vector<CWorker> workerVec;
std::vector<tcp::acceptor*> accVec;
while (true) {
    ioVec.emplace_back(new asio::io_service());
    accVec.emplace_back(new tcp::acceptor(*ioVec.back(), tcp::endpoint(tcp::v4(), 3228)));
    tcp::socket* socket = new tcp::socket(*ioVec.back());
    accVec.back()->accept(*socket);
    workerVec.push_back(CWorker());
    thVec.emplace_back(new std::thread(&CWorker::run, &workerVec.back(), socket));
}
The problem: the first connection is correctly accepted, a thread is created, and everything is good. The breakpoint on the "accept()" line is triggered correctly. But when I make a second connection (it does not matter whether the first is disconnected or not), telnet connects, but the breakpoint on the line after "accept" is not triggered, and the connection does not respond to anything.
All this vector stuff is my attempt to debug by creating a separate acceptor and io_service for each connection; it didn't help. Could anyone point out where the error is?
P.S. Visual Studio 2013
The general pattern for an asio-based listener is:
// This only happens once!
create an asio_service
create a socket into which a new connection will be accepted
call asio_service->async_accept, passing
    the accept socket and
    a handler (function object) [see below]
start new threads (if desired; you can use the main thread if it
    has nothing else to do)

Each thread should:
call asio_service->run [or any of the variations -- run_one, poll, etc.]

Unless the main thread called asio_service->run(), it ends up here
"immediately". It should do something to pass the time (like read
from the console or...). If it doesn't have anything to do, it probably
should have called run() to make itself available in asio's thread pool.

In the handler function:
do something with the socket that is now connected
create a new socket for the next accept
call asio_service->async_accept, passing
    the new accept socket and
    the same handler
Notice in particular that each accept call only accepts one connection, and you should not have more than one accept at a time listening on the same port, so you need to call async_accept again in the handler from the previous call.
Boost ASIO has some very good tutorial examples like this one
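To make the pattern concrete, here is a minimal sketch (my own names; a single io_service whose handler re-arms the accept; it assumes an older Boost where acceptor.get_io_service() is still available):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>

using boost::asio::ip::tcp;

void start_accept(tcp::acceptor &acceptor)
{
    // One socket per pending accept; the shared_ptr keeps it alive
    // until the handler has run.
    auto socket = std::make_shared<tcp::socket>(acceptor.get_io_service());
    acceptor.async_accept(*socket,
        [&acceptor, socket](const boost::system::error_code &ec)
        {
            if (!ec)
            {
                // Hand the connected socket off to the per-connection work.
                std::cout << "accepted " << socket->remote_endpoint() << "\n";
            }
            start_accept(acceptor); // re-arm: each accept takes one connection
        });
}

int main()
{
    boost::asio::io_service service;
    tcp::acceptor acceptor(service, tcp::endpoint(tcp::v4(), 3228));
    start_accept(acceptor);
    service.run(); // the calling thread joins asio's pool until work runs out
}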
I want to use Overlapped I/O with Completion Routine to handle client connections.
In my UI thread I want to use WSASend(), but in order for the system to call my callback function to inform me that data has been sent, the UI thread must be in an alertable wait state, and this will freeze my UI!
How should I fix this problem?
I agree with #DavidHeffernan - the UI thread should be doing UI things. The IO thread surely needs a binding and port (server), or peer address and port (client). The socket from ConnectEx or AcceptEx is surely better loaded in the IO thread, but a Socket class with an (at this time undefined) socket member could surely be created in the UI thread and signaled into the IO thread for handling. Whether buffers form part of your Socket class, or a separate Buffer class, is a design consideration.
One implementation, (that I have used successfully):
Design/define an 'Inter Thread Comms', (ITC'), message class. This has a 'command' enum member that can tell other threads to do stuff, together with any other useful stuff that might be required in such a message
Derive a 'Socket' class from ITC. This has string members for the IP/port, the socket handle and anything else that may be required.
Derive a 'Buffer' class from ITC. This has a 'BoundSocket' member, buffer-space and an 'OVERLAPPED' struct.
Comms with the IO thread is fairly easy. Since it has to wait on something alertably, it can wait on a semaphore that manages a 'Commands' ConcurrentQueue.
If your UI wishes to instruct the IO thread to, say, connect to a server, it creates a Socket instance (new), loads the IP and Port members from UI elements, sets the Command enum to 'Connect', pushes the socket onto the Commands queue and signals the semaphore (ReleaseSemaphore).
The alertable wait in the IO thread then returns with WAIT_OBJECT_0 (it needs to ignore returns with WAIT_IO_COMPLETION), and so it knows that a command has been queued. It pops it from the Commands queue and acts upon the command enum (maybe switching on it) to perform the required action/s. For connect, this would involve an overlapped 'ConnectEx' call to queue up a connect request and set up the connect completion handler.
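In code, that wait loop might look roughly like this (a sketch; commandSemaphore and the queue handling are assumed names, not part of the answer above):

for (;;)
{
    // Alertable wait: queued completion routines (APCs) run during it.
    DWORD rc = WaitForSingleObjectEx(commandSemaphore, INFINITE, TRUE);
    if (rc == WAIT_OBJECT_0)
    {
        // A command was queued: pop it and switch on its enum, e.g.
        // issue an overlapped ConnectEx for a 'Connect' command.
    }
    else if (rc == WAIT_IO_COMPLETION)
    {
        // A completion routine ran during the wait; loop and wait again.
    }
}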
The connect completion handler, when called, checks for a successful connect and, if so, could new up a Buffer, load it, issue a WSARecv with it for the server to send stuff, and store the returned Socket object in a container. If it failed, it could load the Socket with a suitable error message and PostMessage it back to the UI thread to inform the user of the failure.
See - it's not that difficult and does not need 10000 lines of code:)
The only thing I don't know how to do immediately is getting the 'this' for the socket object back from the OVERLAPPED struct that is returned in the completion routine. On 32-bit systems, I shoved the Buffer 'this' into the hEvent field of the OVERLAPPED struct in the Buffer instance and cast it back in the completion routine. The Buffer instance has a Socket reference, so the job was done. On 64-bit systems, hEvent does not have enough room to store the 48/64-bit Buffer 'this' pointer and (apparently) this requires an extended OVERLAPPED struct:( Not sure how that is done - maybe you will find out:)
[edit] #BenVoigt has advice on the 32/64 bit 'getting the Socket context 'this' back in the completion routine' issue - it's easier than I thought:):
https://stackoverflow.com/a/28660537/758133
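The gist of the linked answer, sketched here with an assumed class layout: embed the OVERLAPPED inside the Buffer and recover the enclosing object with CONTAINING_RECORD, which works identically on 32- and 64-bit builds:

#include <winsock2.h>
#include <windows.h>

struct Buffer
{
    WSAOVERLAPPED overlapped; // need not be the first member
    char          data[4096];
    // ... Socket reference, lengths, etc.
};

void CALLBACK OnSendComplete(DWORD error, DWORD transferred,
                             LPWSAOVERLAPPED lpOverlapped, DWORD flags)
{
    // Recover the Buffer containing this OVERLAPPED; no pointer is
    // squeezed into hEvent, so pointer width no longer matters.
    Buffer *buffer = CONTAINING_RECORD(lpOverlapped, Buffer, overlapped);
    // ... use buffer->data, its Socket reference, etc.
}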
My program uses a NetworkOutput object which can be used to write data to a remote server. The semantics are that if the object is currently connected (because there is a remote server), the data is actually sent over the socket. Otherwise, it's silently discarded. Some code sketch:
class NetworkOutput
{
public:
    /* Constructs a NetworkOutput object; this constructor should not block, but it
     * should start attempting to connect to the given host/port in the background.
     *
     * In case the connection gets closed for some reason, the object should immediately
     * try reconnecting.
     */
    NetworkOutput( const std::string &hostName, unsigned short port );

    /* Tells whether there is a remote client connected to this NetworkOutput object.
     * Clients can use this function to determine whether they need to bother serializing
     * any data at all before calling the write() function below.
     */
    bool isConnected() const;

    /* Writes data to the remote client, if any. In case this object is not connected
     * yet, the function should return immediately. Otherwise it should block until
     * all data has been written.
     *
     * This function must be thread-safe.
     */
    void write( const std::vector<char> &data );
};
Right now, I have this implemented using nonblocking sockets. In the NetworkOutput constructor, I'm creating a TCP socket as well as an internal helper window. I then do a WSAAsyncSelect call on the socket. This makes the socket nonblocking, and it causes a magic window message (which I registered myself) to be sent to the internal helper window whenever an interesting event (such as 'connection established' or 'connection closed') happens on the socket. Finally, I start a connection attempt using WSAConnect. This returns immediately, and the window procedure of my internal helper window gets notified as soon as the connection succeeds. In case the connection is closed (because the remote client went away), the message procedure is called and I attempt to reconnect.
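For reference, that setup boils down to something like this (a sketch; helperWindow, WM_SOCKET_EVENT and addr are assumed names):

sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
// Registers helperWindow for notifications and makes the socket nonblocking.
WSAAsyncSelect(sock, helperWindow, WM_SOCKET_EVENT,
               FD_CONNECT | FD_WRITE | FD_CLOSE);
// Returns immediately with WSAEWOULDBLOCK; FD_CONNECT arrives at
// helperWindow once the connection attempt completes.
WSAConnect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr),
           NULL, NULL, NULL, NULL);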
This system allows me to attach and detach a remote client at will. It works quite well, but unfortunately it requires that I have a message loop running. Without the message loop, the notifications triggered by the WSAAsyncSelect call don't seem to arrive at my helper window.
Is there any way to implement a class as described above without requiring a message loop? I was toying around with using blocking sockets in a helper thread, but I couldn't come up with anything reasonable yet. I also considered using a UDP socket, so that I don't even need to connect at all, but I'd like to know whether there is a remote client listening, so that in case there is no remote client, the clients of the NetworkOutput class don't need to do any serialization work on complex objects before they can call write().
You can use WSAEventSelect instead of WSAAsyncSelect; it takes the handle of a WSAEVENT instead of a window and message ID. You can then use WSAWaitForMultipleEvents to wait for the event to be signalled.
Instead of a WSAEVENT you can also use normal Win32 events created with CreateEvent, and the normal synchronisation functions such as WaitForMultipleObjects.
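A sketch of what the helper-thread version could look like (error handling omitted; an outline under assumptions, not drop-in code):

SOCKET   sock  = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
WSAEVENT event = WSACreateEvent();
// Register for the events of interest; this also makes the socket nonblocking.
WSAEventSelect(sock, event, FD_CONNECT | FD_WRITE | FD_CLOSE);

// In the helper thread: block until something happens on the socket.
DWORD rc = WSAWaitForMultipleEvents(1, &event, FALSE, WSA_INFINITE, FALSE);
if (rc == WSA_WAIT_EVENT_0)
{
    WSANETWORKEVENTS events;
    WSAEnumNetworkEvents(sock, event, &events); // also resets the event
    if (events.lNetworkEvents & FD_CONNECT) { /* connected; check error bits */ }
    if (events.lNetworkEvents & FD_CLOSE)   { /* peer went away: reconnect */ }
}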
You are looking for the select function:
http://support.sas.com/documentation/onlinedoc/sasc/doc750/html/lr2/select.htm
Basically you specify a set of sockets you want to monitor.
When called, select() deschedules the thread (thus allowing other threads to work while you do a non-busy wait). Your thread is woken up either after a time limit (usually infinite), by a signal (raised manually or by the system), or when there is input that needs to be handled on any of the sockets.
When your thread wakes up, it is usually best to let another thread handle the work, so what usually happens is that you create a work object for each socket that has data waiting to be read and add these to a queue where a set of worker threads then start handling the input. Once this is done you call select() again to wait for more input.
Note: you don't have to do this; it can all be done in a single thread.
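A minimal sketch of that wait (POSIX-style select(); one socket and no worker queue, names are mine):

#include <sys/select.h>

bool wait_readable(int sock)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);
    // NULL timeout: block until the socket is readable or a signal arrives.
    int rc = select(sock + 1, &readfds, NULL, NULL, NULL);
    return rc > 0 && FD_ISSET(sock, &readfds);
}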
I have a server application with the following structure:
There is one object, call it Server, that in an endless loop listens for and accepts connections.
I have a class derived from CAsyncSocket that overrides the OnReceive event; call it ProxySocket.
I also have a thread pool with threads created up front.
When a connection is received, the server object accepts it on a new ProxySocket object.
When data arrives at the ProxySocket, it creates a command object and gives it to the thread pool. In this command object I pass the socket handle of the ProxySocket. When the command object is created, I create a new socket in the worker thread and attach the handle to it.
My issue is this:
When the command ends, the socket isn't closed; I just detach the handle and set the CSocket handle to INVALID_SOCKET, as planned. But after that, my original ProxySocket object no longer receives notifications that new data has arrived. How can I solve this?
I don't think you can use CAsyncSocket objects (or their descendants) in a thread-pool scenario. CAsyncSocket is implemented on top of WSAAsyncSelect, which tells Winsock to send notifications to a window handle.
Because windows have thread affinity, you can never "move" the CAsyncSocket handling to a different thread.
I am designing a game server with scripting capabilities. The general design goes like this:
Client connects to Server,
Server initializes Client,
Server sends Client to EventManager (separate thread, uses libevent),
EventManager receives a receive event from the Client socket,
Client manages what it received via callbacks.
The last part is the trickiest one for me right now.
Currently my design allows a class which inherits Client to register callbacks for specific received events. These callbacks are managed in a list, and the receive buffer goes through a parsing pass each time something is received. If the buffer is valid, the callback is called, and it acts upon what is in the buffer. One thing to note is that the callbacks can go down into the scripting engine, at which point nothing is certain about what can happen.
Each time a callback finishes, the current receive buffer has to be reset, etc. Callbacks currently have no capability of returning a value because, as stated before, anything can happen.
What happens is that when somewhere in the callback something says this->disconnect(), I want to immediately disconnect the Client, remove it from the EventManager, and lastly remove it from the Server, where it should also finally get destructed and free its memory. However, some code is still running in the Client after the callback finishes, so I can't free the memory yet.
What should I change in the design? Should I have some timed event in the Server which checks which Clients are free to destroy? Would that create additional overhead I don't need? Would it still be okay after the callback finishes to run minimal code on the stack (return -1;) or not?
I have no idea what to do, but I am open for complete design revamps.
Thanks in advance.
You can use a reference counted pointer like boost::shared_ptr<> to simplify memory management. If the manager's client list uses shared_ptrs and the code that calls the callbacks creates a local copy of the shared_ptr the callback is called on, the object will stay alive until it is removed from the manager and the callback function is complete:
class EventManager {
    std::vector< boost::shared_ptr<Client> > clients;

    void handle_event(Event &event) {
        // The local |handler| pointer keeps the object alive until the end
        // of this function, even if it removes itself from |clients|
        boost::shared_ptr<Client> handler = ...;
        handler->process(event);
    }
};

class Client {
    void process(Event &event) {
        manager->disconnect(this);
        // the caller still holds a reference, so the object lives on
    }
};
The Client object will automatically be deleted once the last shared_ptr to it goes out of scope, but not before. So creating a local copy of the shared_ptr before a function call makes sure the object is not deleted unexpectedly.
You should consider having an object like "Session" which tracks a particular message flow from start to finish (for one client).
This object should also take care of the current state: primarily the buffers and processing.
Each event which triggers a callback MUST update the state of the corresponding session.
Libevent can report any scheduled event's outcome to you: success, failure, or timeout. Each of these outcomes should be reflected in your logic.
In general, when working with events, consider your processing logic to be an automaton with a state.
http://en.wikipedia.org/wiki/Reactor_pattern may be a good resource for your task.
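As a sketch (every name and state below is an illustrative assumption), such a per-session automaton could look like:

#include <vector>

enum class SessionState { ReadingHeader, ReadingBody, Closed };

struct Session
{
    SessionState      state = SessionState::ReadingHeader;
    std::vector<char> buffer; // the current receive buffer for this client

    // Every libevent callback outcome for this client funnels through here.
    void on_event(bool success)
    {
        if (!success) { state = SessionState::Closed; return; }
        switch (state)
        {
        case SessionState::ReadingHeader: /* parse header, advance state */ break;
        case SessionState::ReadingBody:   /* parse body, dispatch, reset */ break;
        case SessionState::Closed:        /* ignore stragglers */          break;
        }
    }
};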
Let the Client::disconnect() function send an event to the EventManager (or Server) class. This means that you need some sort of event handling in EventManager (or Server), an event loop for instance.
My general idea is that Client::disconnect() does not disconnect the Client immediately; the actual disconnect happens only after the callback has finished executing. The call itself just posts an event to the EventManager (or Server) class.
One could argue that the Client::disconnect() method is on the wrong class. Maybe it should be Server::disconnect( Client *c ). That would be more in line with the idea that the Server 'owns' the Clients and it is the Server which disconnects them (and then updates some internal bookkeeping).
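A minimal sketch of that deferred disconnect (class and member names are mine, not from the question):

#include <queue>

class Client;

class Server
{
    std::queue<Client*> pendingDisconnects;

public:
    // Called from inside a callback: only queues the request.
    void disconnect(Client *c) { pendingDisconnects.push(c); }

    void run_once()
    {
        dispatch_events(); // may call disconnect() re-entrantly from callbacks

        // No callback is on the stack here, so destruction is safe.
        while (!pendingDisconnects.empty())
        {
            Client *c = pendingDisconnects.front();
            pendingDisconnects.pop();
            remove_and_delete(c); // unregister from EventManager, then delete
        }
    }

private:
    void dispatch_events();          // assumed: runs the libevent callbacks
    void remove_and_delete(Client*); // assumed: bookkeeping + delete
};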