I am writing a ThreadPool Class in C++ using Boost ASIO. The following is the code that I have written so far:
The ThreadPool Class
using namespace std;
using namespace boost;

class ThreadPoolClass {
private:
    /* The limit to the maximum number of threads to be
     * instantiated within this pool
     */
    int maxThreads;
    /* Group of threads in the Pool */
    thread_group threadPool;
    asio::io_service asyncIOService;
    void _Init()
    {
        maxThreads = 0;
    }
public:
    ThreadPoolClass();
    ThreadPoolClass(int maxNumThreads);
    ThreadPoolClass(const ThreadPoolClass& orig);
    void CreateThreadPool();
    void RunTask(JobClass* aJob);
    virtual ~ThreadPoolClass();
};

ThreadPoolClass::ThreadPoolClass() {
    _Init();
}

ThreadPoolClass::ThreadPoolClass(int maxNumThreads) {
    _Init();
    maxThreads = maxNumThreads;
}

void ThreadPoolClass::CreateThreadPool() {
    asio::io_service::work work(asyncIOService);
    for (int i = 0; i < maxThreads; i++) {
        cout << "Pushed" << endl;
        threadPool.create_thread(bind(&asio::io_service::run, &asyncIOService));
    }
}

void ThreadPoolClass::RunTask(JobClass* aJob) {
    cout << "RunTask" << endl;
    asyncIOService.post(bind(&JobClass::Run, aJob));
}

ThreadPoolClass::ThreadPoolClass(const ThreadPoolClass& orig) {
}

ThreadPoolClass::~ThreadPoolClass() {
    cout << "Kill ye all" << endl;
    asyncIOService.stop();
    threadPool.join_all();
}
The Job Class
using namespace std;

class JobClass {
private:
    int a;
    int b;
    int c;
public:
    JobClass() {
        // Empty constructor
    }
    JobClass(int val) {
        a = val;
        b = val - 1;
        c = val + 1;
    }
    void Run()
    {
        cout << "a: " << a << endl;
        cout << "b: " << b << endl;
        cout << "c: " << c << endl;
    }
};
Main
using namespace std;

int main(int argc, char** argv) {
    ThreadPoolClass ccThrPool(20);
    ccThrPool.CreateThreadPool();
    JobClass ccJob(10);
    cout << "Starting..." << endl;
    while (1)
    {
        ccThrPool.RunTask(&ccJob);
    }
    return 0;
}
So, basically I am creating 20 threads, but for now I am posting only one (the same) task to be run by the io_service, just to keep things simple and get to the root cause. The following is the output when I run this program in GDB:
Pushed
[New Thread 0xb7cd2b40 (LWP 15809)]
Pushed
[New Thread 0xb74d1b40 (LWP 15810)]
Pushed
[New Thread 0xb68ffb40 (LWP 15811)]
Pushed
[New Thread 0xb60feb40 (LWP 15812)]
Pushed
[New Thread 0xb56fdb40 (LWP 15813)]
Pushed
[New Thread 0xb4efcb40 (LWP 15814)]
Pushed
[New Thread 0xb44ffb40 (LWP 15815)]
Pushed
[New Thread 0xb3affb40 (LWP 15816)]
Pushed
[New Thread 0xb30ffb40 (LWP 15817)]
Pushed
[New Thread 0xb28feb40 (LWP 15818)]
Pushed
[New Thread 0xb20fdb40 (LWP 15819)]
Pushed
[New Thread 0xb18fcb40 (LWP 15820)]
Pushed
[New Thread 0xb10fbb40 (LWP 15821)]
Pushed
[New Thread 0xb08fab40 (LWP 15822)]
Pushed
[New Thread 0xb00f9b40 (LWP 15823)]
Pushed
[New Thread 0xaf8f8b40 (LWP 15824)]
Pushed
[New Thread 0xaf0f7b40 (LWP 15825)]
Pushed
[New Thread 0xae8f6b40 (LWP 15826)]
Pushed
[New Thread 0xae0f5b40 (LWP 15827)]
Pushed
[New Thread 0xad8f4b40 (LWP 15828)]
Starting...
RunTask
Kill ye all
[Thread 0xb4efcb40 (LWP 15814) exited]
[Thread 0xb30ffb40 (LWP 15817) exited]
[Thread 0xaf8f8b40 (LWP 15824) exited]
[Thread 0xae8f6b40 (LWP 15826) exited]
[Thread 0xae0f5b40 (LWP 15827) exited]
[Thread 0xaf0f7b40 (LWP 15825) exited]
[Thread 0xb56fdb40 (LWP 15813) exited]
[Thread 0xb18fcb40 (LWP 15820) exited]
[Thread 0xb10fbb40 (LWP 15821) exited]
[Thread 0xb20fdb40 (LWP 15819) exited]
[Thread 0xad8f4b40 (LWP 15828) exited]
[Thread 0xb3affb40 (LWP 15816) exited]
[Thread 0xb7cd2b40 (LWP 15809) exited]
[Thread 0xb60feb40 (LWP 15812) exited]
[Thread 0xb08fab40 (LWP 15822) exited]
[Thread 0xb68ffb40 (LWP 15811) exited]
[Thread 0xb74d1b40 (LWP 15810) exited]
[Thread 0xb28feb40 (LWP 15818) exited]
[Thread 0xb00f9b40 (LWP 15823) exited]
[Thread 0xb44ffb40 (LWP 15815) exited]
[Inferior 1 (process 15808) exited normally]
I have two questions:
Why are my threads exiting even though I am posting tasks in a while loop?
Why is the output from JobClass, i.e. the values of the variables a, b and c, not getting printed?
I think this happens because you create the work object in the CreateThreadPool method, and it is automatically destroyed when it goes out of scope. At that point the io_service has no active work and does not process your tasks.
Try making work an instance variable of your ThreadPool class rather than a local variable in the method:
class ThreadPoolClass {
private:
    thread_group threadPool;
    asio::io_service asyncIOService;
    std::auto_ptr<asio::io_service::work> work_;
public:
};

ThreadPoolClass::ThreadPoolClass(int maxNumThreads) {
    _Init();
    maxThreads = maxNumThreads;
}

void ThreadPoolClass::CreateThreadPool() {
    work_.reset(new asio::io_service::work(asyncIOService));
    for (int i = 0; i < maxThreads; i++) {
        cout << "Pushed" << endl;
        threadPool.create_thread(bind(&asio::io_service::run, &asyncIOService));
    }
}
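With the work object as a member, the pool can also shut down cleanly. A minimal sketch of a matching destructor (assuming the work_ member above): resetting work_ first lets io_service::run() return once the queued handlers have finished, instead of abandoning them via stop().

ThreadPoolClass::~ThreadPoolClass() {
    work_.reset();            // no more work: run() returns once the queue drains
    threadPool.join_all();    // wait for the pool threads to finish
    // asyncIOService.stop(); // only if you really want to abandon pending handlers
}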
OK, I'll be the first to admit I don't know Boost, and more specifically boost::asio, from a hole in the ground, but I know a lot about thread pools and work crews.
The threads are supposed to sleep until notified of new work, but if they are not configured to do so they will likely just finish their thread proc and exit. A tell-tale sign that this is the case: start up a pool, sleep for a reasonable amount of time before posting any work, and if the pool threads all terminate, they're not properly waiting. A quick perusal of the Boost docs yielded this, and it may be related to your problem.
On that note, is it possible that the destructor of your pool from the main() entry point is, in fact, prematurely killing your work crew? I see the join_all, but that stop() gives me the willies. If it does what its name implies, that would explain a lot. According to the description of that stop() call from the docs:
To effect a shutdown, the application will then need to call the
io_service object's stop() member function. This will cause the
io_service run() call to return as soon as possible, abandoning
unfinished operations and without permitting ready handlers to be
dispatched.
That immediate shutdown and abandonment seems suspiciously similar to your current situation.
Again, I don't know boost::asio from Adam, but were I on this I would check the startup configuration for the Boost thread objects. They likely require configuration for how to start, how to wait, etc. There must be numerous samples of using boost::asio on the web covering the very thing you're describing here, namely a work-crew paradigm. I see boost::asio a TON on SO, so there are likely many related or near-related questions as well.
Please feel free to downgrade this if it isn't useful, and I apologize if that is the case.
Related
Suppose I call a program with OMP_NUM_THREADS=16.
The first function calls #pragma omp parallel for num_threads(16).
The second function calls #pragma omp parallel for num_threads(2).
The third function calls #pragma omp parallel for num_threads(16).
Debugging with gdb shows me that on the second call 14 threads exit. And on the third call, 14 new threads are spawned.
Is it possible to prevent 14 threads from exiting on the second call? Thank you.
The proof listings are below.
$ cat a.cpp
#include <omp.h>
void func(int thr) {
    int count = 0;
    #pragma omp parallel for num_threads(thr)
    for(int i = 0; i < 10000000; ++i) {
        count += i;
    }
}
int main() {
    func(16);
    func(2);
    func(16);
    return 0;
}
$ g++ -o a a.cpp -fopenmp -g
$ ldd a
...
libgomp.so.1 => ... gcc-9.3.0/lib64/libgomp.so.1
...
$ OMP_NUM_THREADS=16 gdb a
...
Breakpoint 1, main () at a.cpp:13
13 func(16);
(gdb) n
[New Thread 0xffffbe24f160 (LWP 27216)]
[New Thread 0xffffbda3f160 (LWP 27217)]
[New Thread 0xffffbd22f160 (LWP 27218)]
[New Thread 0xffffbca1f160 (LWP 27219)]
[New Thread 0xffffbc20f160 (LWP 27220)]
[New Thread 0xffffbb9ff160 (LWP 27221)]
[New Thread 0xffffbb1ef160 (LWP 27222)]
[New Thread 0xffffba9df160 (LWP 27223)]
[New Thread 0xffffba1cf160 (LWP 27224)]
[New Thread 0xffffb99bf160 (LWP 27225)]
[New Thread 0xffffb91af160 (LWP 27226)]
[New Thread 0xffffb899f160 (LWP 27227)]
[New Thread 0xffffb818f160 (LWP 27228)]
[New Thread 0xffffb797f160 (LWP 27229)]
[New Thread 0xffffb716f160 (LWP 27230)]
15 func(2);
(gdb)
[Thread 0xffffba9df160 (LWP 27223) exited]
[Thread 0xffffb716f160 (LWP 27230) exited]
[Thread 0xffffbca1f160 (LWP 27219) exited]
[Thread 0xffffb797f160 (LWP 27229) exited]
[Thread 0xffffb818f160 (LWP 27228) exited]
[Thread 0xffffbd22f160 (LWP 27218) exited]
[Thread 0xffffb899f160 (LWP 27227) exited]
[Thread 0xffffbda3f160 (LWP 27217) exited]
[Thread 0xffffbb1ef160 (LWP 27222) exited]
[Thread 0xffffb91af160 (LWP 27226) exited]
[Thread 0xffffba1cf160 (LWP 27224) exited]
[Thread 0xffffb99bf160 (LWP 27225) exited]
[Thread 0xffffbb9ff160 (LWP 27221) exited]
[Thread 0xffffbc20f160 (LWP 27220) exited]
17 func(16);
(gdb)
[New Thread 0xffffbb9ff160 (LWP 27231)]
[New Thread 0xffffbc20f160 (LWP 27232)]
[New Thread 0xffffb99bf160 (LWP 27233)]
[New Thread 0xffffba1cf160 (LWP 27234)]
[New Thread 0xffffbda3f160 (LWP 27235)]
[New Thread 0xffffbd22f160 (LWP 27236)]
[New Thread 0xffffbca1f160 (LWP 27237)]
[New Thread 0xffffbb1ef160 (LWP 27238)]
[New Thread 0xffffba9df160 (LWP 27239)]
[New Thread 0xffffb91af160 (LWP 27240)]
[New Thread 0xffffb899f160 (LWP 27241)]
[New Thread 0xffffb818f160 (LWP 27242)]
[New Thread 0xffffb797f160 (LWP 27243)]
[New Thread 0xffffb716f160 (LWP 27244)]
19 return 0;
The simple answer is that it isn't possible with GCC to force the runtime to keep the threads around. From a cursory reading of the libgomp source code, there are no ICVs, portable or vendor-specific, that prevent the termination of excess idle threads between consecutive regions. (Someone correct me if I'm wrong.)
If you really need to rely on the unportable assumption that the OpenMP runtime keeps persistent threads across regions with varying team sizes in between, then use Clang or Intel C++ instead of GCC. Clang's (actually LLVM's) OpenMP runtime is based on the open-source version of Intel's, and both behave the way you want. Again, this is not portable and the behaviour may change in future versions. It is instead advisable not to write your code in such a way that its performance depends on the particularities of the OpenMP implementation. For example, if the loop takes several orders of magnitude more time than the creation of a thread team (which is on the order of tens of microseconds on modern systems), it won't really matter whether the runtime uses persistent threads or not.
If OpenMP overhead is really a problem, e.g., if the work done in the loop is not enough to amortise the overhead, a portable solution is to lift the parallel region and then either re-implement the for worksharing construct like in #dreamcrash's answer or (ab)use OpenMP's loop scheduling by setting a chunk size that will only result in the desired number of threads working on the problem:
#include <omp.h>

void func(int thr) {
    static int count;
    const int N = 10000000;
    int rem = N % thr;
    int chunk_size = N / thr;
    #pragma omp single
    count = 0;
    #pragma omp for schedule(static,chunk_size) reduction(+:count)
    for(int i = 0; i < N-rem; ++i) {
        count += i;
    }
    if (rem > 0) {
        #pragma omp for schedule(static,1) reduction(+:count)
        for(int i = N-rem; i < N; ++i) {
            count += i;
        }
    }
    #pragma omp barrier
}

int main() {
    int nthreads = 16;  // the maximum of all values of thr used below (16, 2, ...)
    #pragma omp parallel num_threads(nthreads)
    {
        func(16);
        func(2);
        func(16);
    }
    return 0;
}
You need chunks of exactly equal sizes in all threads. The second loop is there to take care of the case when thr does not divide the number of iterations. Also, one cannot simply sum across private variables, hence count has to be shared, e.g., by making it static. This is ugly and drags along a bunch of synchronisation necessities that may have overhead comparable with spawning new threads and make the entire exercise pointless.
One approach would be to create a single parallel region, filter the threads that will execute the for, and manually distribute the loop iterations per thread. For simplicity's sake, I will assume a parallel for schedule(static, 1):
#include <omp.h>

void func(int total_threads) {
    int count = 0;
    int thread_id = omp_get_thread_num();
    if (thread_id < total_threads)
    {
        for(int i = thread_id; i < 10000000; i += total_threads) {
            count += i;
        }
    }
    #pragma omp barrier  // every thread must reach the barrier, so it stays outside the if
}

int main() {
    ...
    #pragma omp parallel num_threads(max_threads_to_be_used)
    {
        func(16);
        func(2);
        func(16);
    }
    return 0;
}
Bear in mind that there is a race condition on count += i; that would have to be fixed. In the original code, you could easily fix it by using the reduction clause, namely #pragma omp parallel for num_threads(thr) reduction(+:count). In the code with the manual for, you could solve it as follows:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int func(int total_threads) {
    int count = 0;
    int thread_id = omp_get_thread_num();
    if (thread_id < total_threads)
    {
        for(int i = thread_id; i < 10000000; i += total_threads)
            count += i;
    }
    return count;
}

int main() {
    int max_threads_to_be_used = // the max that you want;
    int* count_array = (int*) malloc(max_threads_to_be_used * sizeof(int));
    #pragma omp parallel num_threads(max_threads_to_be_used)
    {
        int count = func(16);
        count += func(2);
        count += func(16);
        count_array[omp_get_thread_num()] = count;
    }
    int count = 0;
    for(int i = 0; i < max_threads_to_be_used; i++)
        count += count_array[i];
    printf("Count = %d\n", count);
    free(count_array);
    return 0;
}
I would say that most of the time one will use the same number of threads in each parallel region, so this type of pattern should not be a common issue.
I am trying to trace/fix a segmentation fault in my program. My program works fine when perform() has only one iteration over "protos", but not with two. With two, I get a segmentation fault after the first iteration. I am pretty sure that the way I iterate over and delete elements of my map in write_blacklist() is correct, but GDB still reports that as the error. I thought it might be because the map is empty, but I added checks to avoid that and it still throws a segmentation fault.
write_blacklist() should simply iterate safely and delete the map elements that meet the conditions.
(gdb) run
Starting program: /root/BruteBlock/a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
openssh
vsftpd
115.239.198.235 already in blacklist, skipping...
121.14.7.244 already in blacklist, skipping...
42.7.26.88 already in blacklist, skipping...
143.137.151.22 already in blacklist, skipping...
58.87.67.58 already in blacklist, skipping...
60.173.82.156 already in blacklist, skipping...
[New Thread 0x7ffff2d34700 (LWP 2087)]
[Thread 0x7ffff2d34700 (LWP 2087) exited]
[New Thread 0x7ffff2d34700 (LWP 2088)]
[Thread 0x7ffff2d34700 (LWP 2088) exited]
[New Thread 0x7ffff2d34700 (LWP 2089)]
[Thread 0x7ffff2d34700 (LWP 2089) exited]
Detaching after fork from child process 2090.
115.239.198.235 already in iptables, skipping...
121.14.7.244 already in iptables, skipping...
42.7.26.88 already in iptables, skipping...
143.137.151.22 already in iptables, skipping...
58.87.67.58 already in iptables, skipping...
60.173.82.156 already in iptables, skipping...
[New Thread 0x7ffff2d34700 (LWP 2091)]
[Thread 0x7ffff2d34700 (LWP 2091) exited]
Program received signal SIGSEGV, Segmentation fault.
0x000000000040c1e6 in std::__detail::_Hash_node<std::pair<std::string const, int>, true>::_M_next (this=0x0)
at /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/hashtable_policy.h:285
285 { return static_cast<_Hash_node*>(this->_M_nxt); }
Missing separate debuginfos, use: debuginfo-install cyrus-sasl-lib-2.1.26-21.el7.x86_64 glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libcurl-7.29.0-42.el7_4.1.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64 libidn-1.28-4.el7.x86_64 libselinux-2.5-11.el7.x86_64 libssh2-1.4.3-10.el7_2.1.x86_64 libstdc++-4.8.5-16.el7_4.1.x86_64 nspr-4.13.1-1.0.el7_3.x86_64 nss-3.28.4-15.el7_4.x86_64 nss-pem-1.0.3-4.el7.x86_64 nss-softokn-3.28.3-8.el7_4.x86_64 nss-softokn-freebl-3.28.3-8.el7_4.x86_64 nss-sysinit-3.28.4-15.el7_4.x86_64 nss-util-3.28.4-3.el7.x86_64 openldap-2.4.44-5.el7.x86_64 openssl-libs-1.0.2k-8.el7.x86_64 pcre-8.32-17.el7.x86_64 sqlite-3.7.17-8.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0 0x000000000040c1e6 in std::__detail::_Hash_node<std::pair<std::string const, int>, true>::_M_next (this=0x0)
at /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/hashtable_policy.h:285
#1 0x000000000040a829 in std::__detail::_Node_iterator_base<std::pair<std::string const, int>, true>::_M_incr (
this=0x7fffffffde20) at /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/hashtable_policy.h:314
#2 0x0000000000409612 in std::__detail::_Node_iterator<std::pair<std::string const, int>, false, true>::operator++ (
this=0x7fffffffde20) at /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/hashtable_policy.h:369
Python Exception <class 'gdb.error'> There is no member or method named _M_bbegin.:
#3 0x000000000040597e in BruteBlock::write_blacklist (this=0x7fffffffe130, ips=std::unordered_map with 0 elements,
output_file="/etc/blacklist.lst") at BruteBlock.cpp:68
#4 0x0000000000406552 in BruteBlock::perform (this=0x7fffffffe130) at BruteBlock.cpp:188
#5 0x0000000000404e8c in main () at main.cpp:18
main.cpp:
while (true) {
18: b.perform();
sleep(b.get_interval());
}
BruteBlock.cpp::perform():
void BruteBlock::perform() {
    // Hopefully this will become more elegant!
    for (auto i : protos) {
        std::unordered_map<std::string, int> r(retr_fails(i.logfile, i.expr));
        if (r.empty()) {
        } else {
            write_blacklist(r, blacklist_);
188:        block(blacklist_);
        }
    }
}
BruteBlock.cpp::write_blacklist():
void BruteBlock::write_blacklist(std::unordered_map<std::string, int> &ips, const std::string &output_file) {
    std::ifstream is(output_file.c_str());
    if (!is) throw std::runtime_error("Error opening blacklist");
    if (ips.empty()) return;
    // ignore duplicates
    std::string buf;
    while (std::getline(is, buf)) {
        if (ips.find(buf) != ips.end()) {
            ips.erase(buf);
            std::cout << buf << " already in blacklist, skipping..." << '\n';
        }
    }
    // delete the IPs that don't meet the criteria
    auto a = ips.begin();
    while (a != ips.end()) {
        if (a->second < max_attempts_) {
            a = ips.erase(a);
        } else {
            if (a->second > max_attempts_) {
                if (check_reports(a->first) < max_reports_) {
                    a = ips.erase(a);
                }
            }
68:         ++a;
        }
    }
    // write the remaining IPs to the blacklist
    std::ofstream os(output_file.c_str(), std::ios_base::app);
    if (!os) throw std::invalid_argument("Error opening blacklist file");
    for (auto f : ips) {
        if ((f.second > max_attempts_) && (check_reports(f.first) > max_reports_)) {
            os << f.first << '\n';
            std::cout << f.first << " had " << f.second << " failed attempts and " << check_reports(f.first)
                      << " abuse reports, adding to blacklist...\n";
        }
    }
}
In your last loop, three lines above the line you've labeled 68:, you have a = ips.erase(a);. If you erase the last node in the map, a will point to ips.end() after that erase. When you then attempt to increment a on line 68, you get the segmentation fault, since you cannot increment the end iterator.
The solution is to not increment a on an iteration where you erase.
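A sketch of the corrected loop, keeping the original criteria: erase() already returns the iterator to the next element, so only advance a when nothing was erased.

auto a = ips.begin();
while (a != ips.end()) {
    if (a->second < max_attempts_) {
        a = ips.erase(a);                    // erase() returns the next iterator
    } else if (a->second > max_attempts_ &&
               check_reports(a->first) < max_reports_) {
        a = ips.erase(a);                    // likewise: do not increment afterwards
    } else {
        ++a;                                 // only advance when nothing was erased
    }
}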
I wrote a multithreaded TCP server in C++, using boost::mutex::scoped_lock for synchronization.
After connecting to the server, the client freezes.
In gdb I see many threads sitting in pthread_kill after a call to boost::mutex::lock:
(gdb) info thread
277 Thread 808779c00 (LWP 245289330/xgps) 0x0000000802579d5c in poll () at poll.S:3
276 Thread 808779800 (LWP 245289329/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
275 Thread 808779400 (LWP 245289328/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
.....
246 Thread 808c92800 (LWP 245289296/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
245 Thread 808643800 (LWP 245289295/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
244 Thread 808643400 (LWP 245289294/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
243 Thread 806c8f400 (LWP 245289292/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
242 Thread 808643000 (LWP 245286262/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
241 Thread 808c92400 (LWP 245289288/xgps) 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
[Switching to thread 205 (Thread 80863a000 (LWP 245289251/xgps))]#0 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
(gdb) where
#0 0x00000008019799bc in pthread_kill () from /lib/libthr.so.3
#1 0x0000000801973cfc in pthread_getschedparam () from /lib/libthr.so.3
#2 0x00000008019782fc in pthread_mutex_getprioceiling () from /lib/libthr.so.3
#3 0x000000080197838b in pthread_mutex_lock () from /lib/libthr.so.3
#4 0x0000000000442b2e in boost::mutex::lock (this=0x803835f10) at mutex.hpp:62
#5 0x0000000000442c36 in boost::unique_lock<boost::mutex>::lock (this=0x7fffe7334270) at lock_types.hpp:346
#6 0x0000000000442c7c in unique_lock (this=0x7fffe7334270, m_=#0x803835f10) at lock_types.hpp:124
#7 0x0000000000466e31 in XDevice::getDeviceIMEI (this=0x803835e20) at /home/xgps_app/device.cpp:639
#8 0x000000000049071f in XDevicePool::get_device (this=0x7fffffffd9c0, device_imei=868683024674230) at /home/xgps_app/pool_devices.cpp:351
Code at line device.cpp:639
IMEI
XDevice::getDeviceIMEI()
{
    try {
        boost::mutex::scoped_lock lock(cn_mutex);
        return device_imei;
    }
    catch (std::exception &e)
    {
        cout << " ERROR in getDeviceIMEI " << e.what() << "\n";
    }
    return 0;
}
Code in pool_device
XDevicePtr
XDevicePool::get_device(IMEI device_imei)
{
    XDevicePtr device;
    unsigned int i = 0;
    while(i < this->devices.size())
    {
        device = devices[i];
        if (device->getDeviceIMEI() == device_imei) {
            LOG4CPLUS_DEBUG(logger, "XDevicePool::get_device found!");
            return device;
        }
        i++;
    }
    device.reset();
    return device;
}

XDevicePtr
XDevicePool::get_device_mt(IMEI device_imei)
{
    try
    {
        boost::mutex::scoped_lock lock(pool_mutex);
    }
    catch (std::exception & e)
    {
        LOG4CPLUS_ERROR(logger, "XDevicePool::get_device error! " << e.what());
    }
    // boost::mutex::scoped_lock lock(pool_mutex);
    return get_device(device_imei);
}
Why are threads terminating after the call to mutex lock?
I don't think a deadlock is the reason for this behavior.
Please help!
You have multiple locks.
Whenever you have multiple locks that can be required simultaneously, you need to obtain them in a fixed order to avoid deadlocking.
It seems likely that you have such a deadlock occurring. See Boost Thread's free function boost::lock (http://www.boost.org/doc/libs/1_63_0/doc/html/thread/synchronization.html#thread.synchronization.lock_functions.lock_multiple) for help acquiring multiple locks in a reliable order.
You will also want to know about std::defer_lock.
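A minimal sketch of that idea, assuming the two mutexes involved are pool_mutex and cn_mutex from the posted code (the function name here is just illustrative): construct the locks deferred, then let boost::lock acquire both with its deadlock-avoidance algorithm.

#include <boost/thread.hpp>

boost::mutex pool_mutex;   // from XDevicePool
boost::mutex cn_mutex;     // from XDevice

void do_work_on_pool_and_device() {
    // Construct the locks without locking, then acquire both in a deadlock-free order.
    boost::unique_lock<boost::mutex> pool_lock(pool_mutex, boost::defer_lock);
    boost::unique_lock<boost::mutex> device_lock(cn_mutex, boost::defer_lock);
    boost::lock(pool_lock, device_lock);
    // ... safely touch both the pool and the device here ...
}   // both locks are released automatically on scope exit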
Other than this, there might be interference from fork in multi-threaded programs. I think it's beyond scope to explain here, unless you are indeed using fork in your process.
tl;dr pthread_kill is likely a red herring.
Why after call to mutex lock thread terminating?
It doesn't. Your threads have not been terminated (as evidenced by them still appearing on info thread).
You seem to assume that pthread_kill kills the current thread. In fact, what pthread_kill does is send a signal to another thread. And even the sending is optional (if sig=0).
See the man page for further details.
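To illustrate that last point, a small sketch: with sig set to 0, pthread_kill performs only error checking, so it is sometimes used to test whether a thread ID still refers to a live thread.

#include <pthread.h>
#include <signal.h>

/* Returns 1 if tid still refers to a running thread, 0 otherwise.
 * With sig == 0 no signal is delivered; only validity is checked
 * (ESRCH would mean no such thread). */
int thread_is_alive(pthread_t tid) {
    return pthread_kill(tid, 0) == 0;
}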
I am trying to implement a thread pool using the ACE Semaphore library. It does not provide an API like sem_getvalue, which POSIX semaphores have. I need to debug some flow which is not behaving as expected. Can I examine the semaphore in GDB? I am using CentOS.
I initialized two semaphores using the constructor, providing counts of 0 and 10. I declared them as static in the class and initialized them in the cpp file as:
DP_Semaphore ThreadPool::availableThreads(10);
DP_Semaphore ThreadPool::availableWork(0);
But when I print the semaphores in GDB using the print command, I get output similar to the following:
(gdb) p this->availableWork
$7 = {
sema = {
semaphore_ = {
sema_ = 0x6fe5a0,
name_ = 0x0
},
removed_ = false
}
}
(gdb) p this->availableThreads
$8 = {
sema = {
semaphore_ = {
sema_ = 0x6fe570,
name_ = 0x0
},
removed_ = false
}
}
Is there a tool which can help me here, or should I switch to POSIX threads and rewrite all my code?
EDIT: As requested by #timrau the output of call this->availableWork->dump()
(gdb) p this->availableWork.dump()
[Switching to Thread 0x2aaaae97e940 (LWP 28609)]
The program stopped in another thread while making a function call from GDB.
Evaluation of the expression containing the function
(DP_Semaphore::dump()) will be abandoned.
When the function is done executing, GDB will silently stop.
(gdb) call this->availableWork.dump()
[Switching to Thread 0x2aaaaf37f940 (LWP 28612)]
The program stopped in another thread while making a function call from GDB.
Evaluation of the expression containing the function
(DP_Semaphore::dump()) will be abandoned.
When the function is done executing, GDB will silently stop.
(gdb) info threads
[New Thread 0x2aaaafd80940 (LWP 28613)]
6 Thread 0x2aaaafd80940 (LWP 28613) 0x00002aaaac10a61e in __lll_lock_wait_private ()
from /lib64/libpthread.so.0
* 5 Thread 0x2aaaaf37f940 (LWP 28612) ThreadPool::fetchWork (this=0x78fef0, worker=0x2aaaaf37f038)
at ../../CallManager/src/DP_CallControlTask.cpp:1043
4 Thread 0x2aaaae97e940 (LWP 28609) DP_Semaphore::dump (this=0x6e1460) at ../../Common/src/DP_Semaphore.cpp:21
2 Thread 0x2aaaad57c940 (LWP 28607) 0x00002aaaabe01ff3 in __find_specmb () from /lib64/libc.so.6
1 Thread 0x2aaaacb7b070 (LWP 28604) 0x00002aaaac1027c0 in __nptl_create_event () from /lib64/libpthread.so.0
(gdb)
sema.semaphore_.sema_ in your code looks like a pointer. Try to find its type in the ACE headers, then cast to that type and print:
(gdb) p *(sem_t *)0x6fe570
Update: try casting the address within the structure you posted to sem_t *. If you are on Linux, ACE should be using POSIX semaphores, so the type sem_t must be visible to gdb.
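If gdb inspection stays awkward, another option is to read the count programmatically. A sketch only, assuming DP_Semaphore ultimately wraps a POSIX sem_t and could expose it through a hypothetical accessor such as native_handle():

#include <semaphore.h>

// Returns the current semaphore count, or -1 on error.
// 'sem' would be whatever the hypothetical DP_Semaphore::native_handle() returns.
int semaphore_value(sem_t *sem) {
    int value = 0;
    if (sem_getvalue(sem, &value) != 0)
        return -1;
    return value;
}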
Class LocalT has a member of another class that implements a read-write mutex. The mutex is initialized in the constructor and uses pthread_rwlock_rdlock(&aMutex); for read locking. So it seems the mutex class itself is fine. But the program crashes when some LocalT object locks its mutex member for reading.
CSerialize.cpp:2054 is the line MUTEX.lock_reading();
Thread 6 (Thread 0x80d4e00 (runnable)):
#0 0x4864f11d in pthread_mutex_lock () from /lib/libpthread.so.2
#1 0x4864b558 in pthread_rwlock_init () from /lib/libpthread.so.2
#2 0x4864b659 in pthread_rwlock_rdlock () from /lib/libpthread.so.2
#3 0x0807ae14 in LocalT::serialize (this=0x80d4e00, outbin=#0x7574736b)
at CSerialize.cpp:2054
The other two running threads:
1) at a socket accept();
2) the next runnable thread is at a popen() call; it seems to be executing or reading from the pipe. But I don't know what __error() is:
Thread 1 (Thread 0x8614800 (LWP 100343)):
#0 0x4865b8f9 in __error () from /lib/libpthread.so.2
#1 0x4865a15a in pthread_testcancel () from /lib/libpthread.so.2
#2 0x486425bf in read () from /lib/libpthread.so.2
#3 0x08056340 in UT::execute_popen (command=#0x4865e6bc,
ptr_output=0xbf2f7d30) at Utils.cpp:75
3) all other threads are sleeping.
I have no idea why it crashed. Can anybody suggest something?
==EDIT==
And here is one system(?) thread (I did not create it, but the program always has one extra thread). It is always:
Thread 8 (Thread 0x80d4a00 (LWP 100051)):
#0 0x4865a79b in pthread_testcancel () from /lib/libpthread.so.2
#1 0x48652412 in pthread_mutexattr_init () from /lib/libpthread.so.2
#2 0x489fd450 in ?? ()
==EDIT2 - bt as requested==
(gdb) bt
#0 0x4865a7db in pthread_testcancel () from /lib/libpthread.so.2
#1 0x48652412 in pthread_mutexattr_init () from /lib/libpthread.so.2
#2 0x489fd450 in ?? ()
Strangely... why the ?? ()?
==EDIT3 - when loading core==
Program terminated with signal 11, Segmentation fault.
[skiped]
#0 0x4865a7db in pthread_testcancel () from /lib/libpthread.so.2
[New Thread 0x8614800 (LWP 100343)]
[New Thread 0x8614600 (sleeping)]
[New Thread 0x8614400 (sleeping)]
[New Thread 0x8614200 (sleeping)]
[New Thread 0x8614000 (sleeping)]
[New Thread 0x80d4e00 (runnable)]
[New Thread 0x80d4c00 (sleeping)]
[New Thread 0x80d4a00 (LWP 100051)]
[New Thread 0x80d4000 (runnable)]
[New LWP 100802]
(gdb) info thread
* 10 LWP 100802 0x4865a7db in pthread_testcancel () from /lib/libpthread.so.2
9 Thread 0x80d4000 (runnable) 0x486d7bd3 in accept () from /lib/libc.so.6 -- MAIN() THREAD
8 Thread 0x80d4a00 (LWP 100051) 0x4865a79b in pthread_testcancel ()
from /lib/libpthread.so.2 ( UNIDENTIFIED THREAD system()? )
7 Thread 0x80d4c00 (sleeping) 0x48651cb6 in pthread_mutexattr_init ()
from /lib/libpthread.so.2 (SIGNAL PROCESSOR THREAD)
6 Thread 0x80d4e00 (runnable) 0x4864f11d in pthread_mutex_lock ()
from /lib/libpthread.so.2 (MAINTENANCE THREAD)
5 Thread 0x8614000 (sleeping) 0x48651cb6 in pthread_mutexattr_init ()
from /lib/libpthread.so.2 (other mutex cond_wait - worker 1)
4 Thread 0x8614200 (sleeping) 0x48651cb6 in pthread_mutexattr_init ()
from /lib/libpthread.so.2 (other mutex cond_wait - worker 2 )
3 Thread 0x8614400 (sleeping) 0x48651cb6 in pthread_mutexattr_init ()
from /lib/libpthread.so.2 (other mutex cond_wait - worker 3 )
2 Thread 0x8614600 (sleeping) 0x48651cb6 in pthread_mutexattr_init ()
from /lib/libpthread.so.2 (other mutex cond_wait - worker 4)
1 Thread 0x8614800 (LWP 100343) 0x4865b8f9 in __error ()
from /lib/libpthread.so.2 ( popen() thread see below)
I created: 1 maintenance thread (serializing), 1 popen() thread, 4 workers, 1 main, 1 signal thread = 8 threads....
The thread that you are referring to as a system thread is actually your program's main thread.
Secondly, with the information shared so far, it looks like you are acquiring the mutex but never releasing it. That leads to an unstable state (some parameters having wrong values), which leads to a crash. I am sure you are also observing an intermittent hang.
Could you share the backtrace from when it crashes?
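If the missing release is indeed the problem, a common fix is a small RAII guard so every read lock is paired with an unlock, even on early returns or exceptions. A sketch with hypothetical names, not the poster's actual MUTEX class:

#include <pthread.h>

// RAII read-lock guard: acquires in the constructor, always releases in the destructor.
class ReadGuard {
public:
    explicit ReadGuard(pthread_rwlock_t &l) : lock_(l) {
        pthread_rwlock_rdlock(&lock_);
    }
    ~ReadGuard() {
        pthread_rwlock_unlock(&lock_);
    }
private:
    ReadGuard(const ReadGuard&);            // non-copyable
    ReadGuard& operator=(const ReadGuard&);
    pthread_rwlock_t &lock_;
};

// Usage inside LocalT::serialize(), for example:
//   ReadGuard guard(aMutex);   // released automatically when guard goes out of scope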