I'm writing a multi-threaded application in which a main thread distributes tasks to worker threads. For each task, a worker thread creates a connection using a global OCCI environment. When a worker thread completes its task, it closes the connection (I'm sure no exception is thrown during termination).
My problem is that after a while (sometimes 5 minutes, sometimes 5 hours) the threads cannot get a connection from the environment, and they get blocked there.
What can be the problem?
I guess I didn't identify the problem correctly. I thought the threads got blocked, but actually they didn't; they simply exited there unexpectedly :). Problem solved.
Have you considered using a thread pool? Then you don't need to close the connection every time the work is done.
Setting up and closing the database connection is also quite expensive, I think.
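A minimal sketch of that idea in generic C++11 (class and member names are illustrative, and the OCCI calls appear only in comments): each pool thread would open its connection once when it starts, reuse it for every task it pulls off the queue, and close it only at shutdown.

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Each worker opens its database connection once (see comments), serves many
// tasks with it, and closes it only when the pool shuts down.
class WorkerPool
{
public:
    explicit WorkerPool(std::size_t threadCount)
    {
        for (std::size_t i = 0; i < threadCount; ++i)
            threads_.emplace_back([this] { workerLoop(); });
    }

    ~WorkerPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        condition_.notify_all();
        for (auto &t : threads_)
            t.join();
    }

    void submit(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        condition_.notify_one();
    }

private:
    void workerLoop()
    {
        // e.g. Connection *conn = env->createConnection(user, pass, connectString);
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                condition_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty())
                    break;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task(); // the task reuses this thread's long-lived connection
        }
        // e.g. env->terminateConnection(conn);
    }

    std::vector<std::thread> threads_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable condition_;
    bool done_ = false;
};

OCCI also provides its own connection pooling (Environment::createConnectionPool), which achieves the same reuse if you would rather not manage the connections yourself.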
I am creating an application that communicates with a server through an API's functions, based on an existing code base written in C++/Qt 5.6 and Boost. The code is written so that any communication with the server is done by the API's functions running in a worker object. The worker object runs in a QThread, to which it is moved using moveToThread().
My problem is that I need to be able to stop the thread immediately and disconnect when the network connection drops. However, the thread blocks when it sends data to the server. If I try to stop the thread through quit() or wait(), the request still goes through to the server, which is undesirable. The API doesn't offer any way to cancel ongoing requests either.
My current solution is to terminate the thread and destroy the worker object it owns when the network connection drops. I then create a new QThread and a new worker object when reconnecting to the server.
I know that terminate() or any kind of immediate thread termination should be avoided like the plague, but it seems to work, I think.
The worker object that runs in the QThread uses std::shared_ptr for its members, created through std::make_shared.
Are there still chances of memory leaks/corruption?
Apart from this, because I create my QThread in a method, I receive a warning from Qt:
QObject: cannot create children for a parent that is in a different thread
Despite this warning, my code still runs, but I have doubts. Is it safe to ignore this warning? What are the risks and consequences of ignoring it?
Is it safer to litter the server-connecting code with Qt's interruption checkpoints (or rewrite it using Boost's boost::this_thread::interruption_point()) instead of calling terminate()? i.e.
sendData();
if (QThread::currentThread()->isInterruptionRequested())
{
    return;
}
sendData();
if (QThread::currentThread()->isInterruptionRequested())
{
    return;
}
...
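For context, the controlling side of that pattern would look roughly like the following sketch (stopWorker and its argument are hypothetical names; requestInterruption() is what makes isInterruptionRequested() return true in the worker):

#include <QThread>

// Hypothetical helper called when the network connection drops.
// 'workerThread' is assumed to be the QThread the worker object was moved to.
void stopWorker(QThread *workerThread)
{
    workerThread->requestInterruption(); // worker sees isInterruptionRequested() == true
    workerThread->quit();                // end the thread's event loop
    workerThread->wait();                // block until the thread has finished
}

Note that wait() still blocks until the current sendData() call returns, so this only helps between calls, not in the middle of one.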
Advice much appreciated thanks.
I have a single-threaded asynchronous TCP server written using Boost.Asio. Each incoming request goes through several processing steps (synchronous and asynchronous) and finally sends back the response using an async write.
For small loads with 10 concurrent requests it works decently. However, when I test with a parallelism of 100, things start to degrade: response latency keeps increasing as time progresses. So I want to try some multi-threaded processing for handling requests.
I am looking for a decent example / help on creating and running multiple threads for asynchronous reading/writing to clients. I have the following doubts:
Should I use a single IOS object and call its run method in all of the threads of the thread pool, or should I use a separate IOS per thread?
If I use a single IOS, is there a possibility that part of the TCP data goes to one thread while another part goes to another thread, and so on? Is this understanding correct?
Is there any other better way?
Thanks for any help and pointers here.
Without seeing your code I can only guess what goes wrong. Most probably you're running long actions inside async completion handlers. The completion handlers should be fast - get the data, hand it off for further processing, done.
As a first priority, I would go fully asynchronous and run all processing in a thread pool. You can find an example here, where a new thread is started for every new client, which you can replace with a thread pool.
Use a single io_service. A single io_service can handle a lot of parallelism, provided you don't delay it inside completion handlers. This simplifies the implementation because you don't have to worry about completion handlers running in parallel, which will happen if you run multiple IOS in multiple threads.
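The overall shape of that suggestion might look like the sketch below (all names are illustrative): the io_service that owns the sockets runs in a single thread, completion handlers stay short, and the heavy processing is posted to a second io_service driven by a pool of worker threads; the result is posted back so the async_write still happens on the network thread.

#include <boost/asio.hpp>
#include <algorithm>
#include <thread>
#include <vector>

int main()
{
    boost::asio::io_service net_ios;   // owns the sockets, run by one thread
    boost::asio::io_service work_ios;  // acts as the processing thread pool
    boost::asio::io_service::work keep_alive(work_ios); // keeps work_ios.run() from returning

    std::vector<std::thread> workers;
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([&work_ios] { work_ios.run(); });

    // Inside a read handler you would do something like:
    //   work_ios.post([&net_ios, request] {
    //       Response response = process(request);        // the slow part
    //       net_ios.post([response] { /* start the async_write here */ });
    //   });

    net_ios.run();                     // single-threaded network loop

    work_ios.stop();
    for (auto &t : workers)
        t.join();
}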
Q1: Should I use a single IOS object and call its run method in all of the threads of the thread pool, or should I use a separate IOS per thread?
You can do either; see the Boost.Asio examples:
HTTP Server 2 - IOS per thread
HTTP Server 3 - single IOS with thread pool
Q2: If I use a single IOS, is there a possibility that part of the TCP data goes to one thread while another part goes to another thread, and so on? Is this understanding correct?
Yes, there is a race condition, but Boost.Asio supports strands to avoid it.
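A rough sketch of how a strand keeps one connection's handlers serialized while several threads call io_service::run() (class and member names are illustrative):

#include <boost/asio.hpp>
#include <array>
#include <memory>

class Connection : public std::enable_shared_from_this<Connection>
{
public:
    explicit Connection(boost::asio::io_service &ios)
        : socket_(ios), strand_(ios) {}

    boost::asio::ip::tcp::socket &socket() { return socket_; }

    void startRead()
    {
        auto self = shared_from_this();
        socket_.async_read_some(
            boost::asio::buffer(buffer_),
            // Handlers wrapped by the same strand never run concurrently,
            // even though multiple threads are running the io_service.
            strand_.wrap([self](const boost::system::error_code &ec, std::size_t bytes)
            {
                if (!ec)
                {
                    // process 'bytes' bytes of buffer_ here, then keep reading
                    self->startRead();
                }
            }));
    }

private:
    boost::asio::ip::tcp::socket socket_;
    boost::asio::io_service::strand strand_;
    std::array<char, 4096> buffer_;
};

Without the strand, two completion handlers for the same socket could run on two pool threads at once and race on the connection's state.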
Q3: Is there any other better way?
I haven't found a better way myself; if you do, please tell me or post it here, thank you.
BTW, as @rustyx said, your program is blocked in synchronous calls; switching to fully asynchronous calls will help.
The following is a programming problem I am trying to solve...
I am creating a (C++11) server application that manages operations being performed on other servers. Thus far, I have designed it to do the following:
start a thread for each remote server (with operations on it that are being managed) that polls the server to make sure it's still there and working.
start a thread for each operation that it is managing that polls the remote server to get info about the current state of the operation.
Where I am running into issues is this: the calls to the remote server to verify that it is there/working, and the calls to get the status of the operations being managed on it, are "waited" (blocking) operations. If something stops working (either a remote server stops responding, or I am not able to get the state of an operation running on it), I WAS attempting to just "kill" the thread that monitors that server or operation, and flag it (the server or operation) as dead/non-responsive. Unfortunately, there does not seem to be any clean way to kill a thread.
Note that I was trying to "kill" threads because many of the calls that I am performing are "waited" operations, and I cannot have the application hang because it is waiting for completion of an operation (that in some cases may never complete). I have done some research on ways to "kill" an active thread (stop it from a manager thread). One of the only C++ thread management libraries I could find that even supports a method to "kill" a thread is the "ZThread" library, but using its kill method doesn't seem to work consistently.
I need to find a way to start executing a segment of code (i.e. a thread) that will be performing some waited operation, and kill execution of that code when needed (not wait for the operation to complete).
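To make the requirement concrete, the closest I can get with standard C++11 is cooperative cancellation between polls, roughly like the sketch below (Monitor and pollRemoteServer are placeholder names); it still cannot interrupt a waited call that has already started, which is exactly the gap I am trying to close.

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Placeholder monitor: polls a remote server until stop() is called. The
// stop request is honored between polls; a poll that is already blocking
// cannot be interrupted this way.
class Monitor
{
public:
    ~Monitor() { stop(); }

    void start()
    {
        worker_ = std::thread([this] { run(); });
    }

    void stop()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopped_ = true;
        }
        condition_.notify_all();
        if (worker_.joinable())
            worker_.join();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while (!stopped_)
        {
            lock.unlock();
            pollRemoteServer();   // stand-in for the real waited call
            lock.lock();
            // Sleep up to 5 seconds between polls, but wake immediately on stop().
            condition_.wait_for(lock, std::chrono::seconds(5),
                                [this] { return stopped_; });
        }
    }

    void pollRemoteServer() {}    // the real call may block for a long time

    std::thread worker_;
    std::mutex mutex_;
    std::condition_variable condition_;
    bool stopped_ = false;
};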
Suggestions???
I have an application that uses pthreads and predates C++11. We have several worker threads assigned to various purposes, and tasks get distributed in a producer-consumer way through a shared circular pool of task data. POSIX semaphores are used for inter-thread synchronization, both in wait/notify mode and as mutex locks for shared data to ensure mutual exclusion.
Recently, I noticed a strange problem with large volumes of data: the program seems to hang after receiving signal 1. Signal 1 is SIGHUP, i.e. hang-up; this signal is usually used to report that the user's terminal has been disconnected, perhaps because a network or telephone connection was broken.
Can this be caused by the parent terminal timing out? If so, can nohup help?
This occurs only with large volumes of data (I didn't notice it with smaller volumes), and the application is run from the command line in a Solaris terminal (telnet session).
Thoughts welcome.
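For reference, if the dropped telnet session turns out to be the trigger, the two usual mitigations are running the program under nohup or having the process ignore SIGHUP itself; a minimal sketch of the latter (an illustration, not a diagnosis):

#include <csignal>

int main()
{
    // Ignore SIGHUP so that losing the controlling terminal (e.g. the telnet
    // session dropping) no longer affects the process.
    std::signal(SIGHUP, SIG_IGN);

    // ... rest of the application ...
    return 0;
}

The shell equivalent is simply nohup ./myapp & (where ./myapp stands for your binary).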
I'm designing a networking framework which uses WSAEventSelect for asynchronous operations. I spawn one thread for every 64 sockets because of the limit of 64 events per thread, and everything works as expected except for one thing:
Threads keep getting spawned uncontrollably by Winsock during connect and disconnect, threads that won't go away.
With the current design of the framework, two threads should be running when only a few sockets are active. And as expected, two threads are running in total. However, when I connect with a few sockets (1-5 sockets), an additional 3 threads are spawned which persist until I close the application. Also, when I lose the connection on any of the sockets, 2 more threads are spawned (also persisting until closure). That's 7 threads in total, 5 of which I have no idea what they are there for.
If they were required by Winsock for connecting or whatever and then went away afterwards, that would be fine. But it bothers me that they persist until I close my application.
Is there anyone who could shed some light on this? Possibly a solution to avoid these threads or force them to close when no connections are active?
(Application is written in C++ with Win32 and Winsock 2.2)
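For reference, the per-thread wait loop behind the "one thread per 64 sockets" design looks roughly like this (the function and array names are illustrative; each thread owns up to WSA_MAXIMUM_WAIT_EVENTS sockets registered with WSAEventSelect):

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

void NetworkThreadLoop(SOCKET *sockets, WSAEVENT *events, DWORD count)
{
    for (;;)
    {
        DWORD result = WSAWaitForMultipleEvents(count, events, FALSE,
                                                WSA_INFINITE, FALSE);
        if (result == WSA_WAIT_FAILED)
            break;

        DWORD i = result - WSA_WAIT_EVENT_0;

        WSANETWORKEVENTS netEvents = {};
        if (WSAEnumNetworkEvents(sockets[i], events[i], &netEvents) != 0)
            continue;

        if (netEvents.lNetworkEvents & FD_READ)  { /* recv() and dispatch */ }
        if (netEvents.lNetworkEvents & FD_CLOSE) { /* clean up this socket */ }
    }
}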
Information from Process Explorer:
Expected threads:
MyApp.exe!WinMainCRTStartup
MyApp.exe!Netfw::NetworkThread::ThreadProc
Unexpected threads:
ntdll.dll!RtlpUnWaitCriticalSection+0x2dc
mswsock.dll+0x7426
ntdll.dll!RtlGetCurrentPeb+0x155
ntdll.dll!RtlGetCurrentPeb+0x155
ntdll.dll!RtlGetCurrentPeb+0x155
All of the unexpected threads have call stacks with calls to functions such as ntkrnlpa.exe!IoSetCompletionRoutineEx+0x46e which probably means it is a part of the notification mechanism.
Download the Sysinternals tool Process Explorer. Install the appropriate Debugging Tools for Windows. In Process Explorer, set Options -> Symbols path to:
SRV*C:\Websymbols*http://msdl.microsoft.com/download/symbols
Where C:\Websymbols is just a place to store the symbol cache (I'd create a new empty directory for it.)
Now, you can inspect your program with process explorer. Double click the process, go to the threads tab, and it will show you where the threads started, how busy they are, and what their current callstack is.
That usually gives you a very good idea of what the threads are. If they're Winsock internal threads, I wouldn't worry about them, even if there are hundreds.
One direction to look in (just a guess): If these are TCP connections, these may be background threads to handle internal TCP-related timers. I don't know why they would use one thread per connection, but something has to do the background work there.