Why does cuda-gdb launch multiple threads? - c++

When I start my program in cuda-gdb, I get an output like:
[New Thread 0x7fffef8ea700 (LWP 8003)]
[New Thread 0x7fffe35b2700 (LWP 8010)]
[New Thread 0x7fffe2db1700 (LWP 8011)]
[New Thread 0x7fffe25b0700 (LWP 8012)]
I do not understand why these multiple threads are launched at the beginning. I have not launched my program in multi-threaded mode. I am using MPI, but I start only one process. So where are these threads coming from?
This does not affect my debugging process in any way. It's just that I don't understand what it means.

The threads you see are created by the CUDA runtime/driver library (libcuda.so), and aren't directly related to cuda-gdb itself. If you run the same code under plain gdb, you will see the same messages.
If you want to see what these threads are doing or where they come from, compile your code with the -g flag, set a breakpoint in your code (e.g., immediately before a CUDA kernel launch), run it, and then run the following command in the gdb console:
thread apply all backtrace
This command has the same effect as gdb's backtrace, except that it shows the backtrace for every thread created by your program.
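If you don't have a kernel handy, even a trivial program that only initializes the CUDA context should be enough to make these extra host threads appear (a minimal sketch; the file name and build line are only suggestions):
// minimal.cu -- build with: nvcc -g -G minimal.cu -o minimal
#include <cstdio>
#include <cuda_runtime.h>
int main() {
    // The first CUDA runtime call creates the context; in my experience the
    // extra host threads from libcuda.so are spawned around this point.
    cudaError_t err = cudaFree(0);
    std::printf("context init: %s\n", cudaGetErrorString(err));
    // A convenient breakpoint location: stop here, then run
    // "thread apply all backtrace" to see where each thread is sitting.
    return 0;
}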
In my case, I get the following messages after starting my program:
[New Thread 0x7fffeffb3700 (LWP 7141)]
[New Thread 0x7fffef731700 (LWP 7142)]
[New Thread 0x7fffeef30700 (LWP 7143)]
When I run the command mentioned above in my gdb console, I see the following output:
(gdb) thread apply all backtrace
Thread 4 (Thread 0x7fffeef30700 (LWP 7143)):
#0 pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
#1 0x00007ffff63c19b7 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007ffff6386bb7 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007ffff63c0f48 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007ffff79bf064 in start_thread (arg=0x7fffeef30700) at pthread_create.c:309
#5 0x00007ffff6cce62d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
Thread 3 (Thread 0x7fffef731700 (LWP 7142)):
#0 0x00007ffff6cc5aed in poll () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007ffff63bf6a3 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007ffff642261e in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007ffff63c0f48 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007ffff79bf064 in start_thread (arg=0x7fffef731700) at pthread_create.c:309
#5 0x00007ffff6cce62d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
Thread 2 (Thread 0x7fffeffb3700 (LWP 7141)):
#0 0x00007ffff6ccfa9f in accept4 (fd=13, addr=..., addr_len=0x7fffeffb2e18, flags=-1) at ../sysdeps/unix/sysv/linux/accept4.c:45
#1 0x00007ffff63c0556 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007ffff63b404d in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007ffff63c0f48 in ?? () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007ffff79bf064 in start_thread (arg=0x7fffeffb3700) at pthread_create.c:309
#5 0x00007ffff6cce62d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
Thread 1 (Thread 0x7ffff7fc0740 (LWP 7136)):
#0 main () at cuda_heap.cu:66
As you can verify, the threads created at startup match the earlier messages in both thread address and LWP (Light Weight Process) ID, and all of them originate in libcuda.so.1.
In cuda-gdb, you're able to see some more detailed information:
(cuda-gdb) thread apply all bt
Thread 4 (Thread 0x7fffeef30700 (LWP 10019)):
#0 0x00007ffff79c33f8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007ffff63c19b7 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007ffff6386bb7 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007ffff63c0f48 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007ffff79bf064 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007ffff6cce62d in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 3 (Thread 0x7fffef731700 (LWP 10018)):
#0 0x00007ffff6cc5aed in poll () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff63bf6a3 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007ffff642261e in cuVDPAUCtxCreate () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007ffff63c0f48 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007ffff79bf064 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007ffff6cce62d in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 2 (Thread 0x7fffeffb3700 (LWP 10017)):
#0 0x00007ffff6ccfa9f in accept4 () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff63c0556 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#2 0x00007ffff63b404d in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#3 0x00007ffff63c0f48 in cudbgApiDetach () from /usr/lib/x86_64-linux-gnu/libcuda.so.1
#4 0x00007ffff79bf064 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007ffff6cce62d in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 1 (Thread 0x7ffff7fc0740 (LWP 10007)):
#0 main () at cuda_heap.cu:66

I don't know exactly what they are, but I think the CUDA runtime needs to create multiple threads to catch errors/exceptions such as memory violations or bank conflicts.

Related

C++ multi-threaded program crashed with BUS error

My project crashed after 5 days of testing. When I analyzed the dump file, it showed a BUS error.
Here is the backtrace I got from the core dump:
Program terminated with signal SIGBUS, Bus error.
#0 0x0000000000000531 in ?? ()
[Current thread is 1 (LWP 902)]
(gdb) bt
#0 0x0000000000000531 in ?? ()
#1 0x000000000041a294 in CUtilsTimer::forgetTimer() ()
#2 0x0000000000415160 in CEMPLinkMonitor::monitor_ethernet_link_status() ()
#3 0x0000000000413fc8 in CEMPTransport::recvEMPData(Emp_Packet*) ()
#4 0x000000000041313c in CEMPRxTransport::run() ()
#5 0x00000000004190a8 in CUtilsThread::runLoop(void*) ()
#6 0x0000007fac289fb8 in ?? () from /lib/libpthread.so.0
#7 0x0000007fa74bdc98 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)
I tried to find the root cause but was not able to get any clue. Please help.

localtime() is invoking __lll_lock_wait_private(), which is making the thread deadlock

I have seen many questions in which __lll_lock_wait_private() ends up in a deadlock, but their call stacks are different: their code was calling malloc(), free(), or fork().
In my case, I have a Log class which is trying to print a log message, and that log call is making my thread deadlock.
See the call stack below:
#0 0x000000fff4c47b9c in __lll_lock_wait_private () from /lib64/libc.so.6
#1 0x000000fff4bf0364 in __tz_convert () from /lib64/libc.so.6
#2 0x000000fff4bee2c0 in localtime () from /lib64/libc.so.6
#3 0x000000fff5167188 in getTimeStr() () from /usr/sbin/sajet/sharedobj/liblibLite.so
#4 0x000000fff516756c in LogClass::logBegin() () from /usr/sbin/sajet/sharedobj/liblibLite.so
#5 0x000000fff5318c90 in DaemonCtrlServer::strtDaemon(daemonInfo&) () from /usr/sbin/sajet/sharedobj/libDaemonCtlServer.so
#6 0x000000fff531abc0 in DaemonCtrlServer::restrtDiedDaemon(daemonInfo&) () from /usr/sbin/sajet/sharedobj/libDaemonCtlServer.so
#7 0x000000fff531ae64 in DaemonCtrlServer::handleChildDeath() () from /usr/sbin/sajet/sharedobj/libDaemonCtlServer.so
#8 0x00000000100080ac in sj_initd::DaemonDeathHandler() ()
#9 0x000000001000b8f8 in sj_initd::SignalHandler(int) ()
#10 0x00000000100080e8 in sj_initd_SigHandler::sj_initdSigHandler(int) ()
#11 <signal handler called>
#12 0x000000fff4bedc00 in __offtime () from /lib64/libc.so.6
#13 0x000000fff4bf02a8 in __tz_convert () from /lib64/libc.so.6
#14 0x000000fff4bee2c0 in localtime () from /lib64/libc.so.6
#15 0x000000fff5167188 in getTimeStr() () from /usr/sbin/sajet/sharedobj/liblibLite.so
#16 0x000000fff516756c in LogClass::logBegin() () from /usr/sbin/sajet/sharedobj/liblibLite.so
#17 0x000000fff52e868c in ConnectionOS::ProcessReadEvent() () from /usr/sbin/sajet/sharedobj/libconnV2.so
#18 0x000000fff52ef354 in ConnectionOSManager::ProcessConns(fd_set*, fd_set*) () from /usr/sbin/sajet/sharedobj/libconnV2.so
#19 0x000000fff52f0a0c in SocketsManager::ProcessFds(bool) () from /usr/sbin/sajet/sharedobj/libconnV2.so
#20 0x000000fff52c51b4 in EventReactorBase::IO (this=0x19a65e80) at EventReactorBase.cpp:361
#21 0x000000fff52c457c in EventReactorBase::React (this=0x19a65e80) at EventReactorBase.cpp:419
#22 0x000000fff52c10cc in Task::Run (this=0x19a65e30) at Task.cpp:222
#23 0x000000fff52c1218 in startTask (t=0x19a65e30) at Task.cpp:152
#24 0x000000001000a9c4 in TaskManager::Start() ()
#25 0x0000000010007538 in main ()
sj_initd is a daemon which monitors the liveness of other daemons in the system. When a daemon dies (which closes its connection with sj_initd), sj_initd tries to restart that daemon. strtDaemon() then tries to print a log message, which calls getTimeStr(), which internally ends up in __lll_lock_wait_private().
Edit
Since localtime() is not thread-safe, I tried localtime_r(), but it also led the thread into a deadlock. According to its description, localtime_r() is a thread-safe function. What am I doing wrong here?
The program blocks inside a signal handler, in __tz_convert() called from localtime(). The signal handler itself interrupted another __tz_convert() call made from localtime():
#0 0x000000fff4c47b9c in __lll_lock_wait_private () from /lib64/libc.so.6
#1 0x000000fff4bf0364 in __tz_convert () from /lib64/libc.so.6
#2 0x000000fff4bee2c0 in localtime () from /lib64/libc.so.6
...
#11 <signal handler called>
...
#13 0x000000fff4bf02a8 in __tz_convert () from /lib64/libc.so.6
#14 0x000000fff4bee2c0 in localtime () from /lib64/libc.so.6
...
#25 0x0000000010007538 in main ()
localtime() is not reentrant.
A signal handler may only call a very specific set of functions. Scroll down here: https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_03
localtime() is not within this set.
Signal handlers should be short and simple.
You could set up a dedicated thread that does the formatting and logging, and feed it from the signal handler (for example through a pipe) with all the information needed to format and log. The functions required to do that are within the async-signal-safe set linked above.
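Here is a minimal sketch of that pattern, assuming a POSIX/Linux system (the names g_log_pipe, child_death_handler and logger_thread are made up for illustration): the handler does nothing but a single write(), which is async-signal-safe, and an ordinary thread does the localtime_r()/strftime()/fprintf() work:
#include <cstdio>
#include <pthread.h>
#include <signal.h>
#include <time.h>
#include <unistd.h>
static int g_log_pipe[2];                 // [0] = read end, [1] = write end
// Async-signal-safe handler: only a single write() of a tiny message.
extern "C" void child_death_handler(int sig) {
    unsigned char msg = static_cast<unsigned char>(sig);
    (void)write(g_log_pipe[1], &msg, 1);  // write() is on the safe list
}
// Ordinary thread: free to call localtime_r(), strftime(), fprintf(), ...
static void* logger_thread(void*) {
    unsigned char msg;
    while (read(g_log_pipe[0], &msg, 1) == 1) {
        time_t now = time(nullptr);
        struct tm tm_buf;
        char stamp[32] = "?";
        if (localtime_r(&now, &tm_buf))
            strftime(stamp, sizeof stamp, "%F %T", &tm_buf);
        std::fprintf(stderr, "[%s] got signal %d\n", stamp, msg);
    }
    return nullptr;
}
int main() {                              // build with: g++ -pthread sketch.cpp
    if (pipe(g_log_pipe) != 0) return 1;
    pthread_t tid;
    pthread_create(&tid, nullptr, logger_thread, nullptr);
    struct sigaction sa = {};
    sa.sa_handler = child_death_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGCHLD, &sa, nullptr);
    for (;;) pause();                     // demo: just wait for signals
}
The key point is that everything that is not async-signal-safe (localtime_r, strftime, fprintf, and whatever locking they do internally) runs in an ordinary thread, never in the handler.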
#4 0x000000fff516756c in LogClass::logBegin() () from /usr/sbin/sajet/sharedobj/liblibLite.so
#16 0x000000fff516756c in LogClass::logBegin() () from /usr/sbin/sajet/sharedobj/liblibLite.so
Notice that LogClass::logBegin appears in your call stack twice?
You have two core problems.
You are calling localtime from a signal handler. That is not allowed.
You are looking at the wrong thread's stack. This thread got stuck in a deadlock caused by other threads and then was interrupted, getting stuck in the deadlock again. It's a victim of the deadlock (twice!). The perpetrator is another thread.
If you're going to log from a signal handler, you need very tight control over all the code that runs in the handler, and you need to pass the "heavy lifting" off to another thread that can safely call non-reentrant functions.

Make gdb show thread names on 'apply all' operations

I'm debugging an app with many threads, so I've named them using prctl. This works great with gdb's info threads command, but it would be nice if thread apply all operations showed the names as well. Is there any way to coerce gdb to do this?
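For reference, the naming itself looks roughly like this (a sketch; only the prctl/pthread_setname_np calls are real APIs, the surrounding code is illustrative):
// Naming the calling thread on Linux. The kernel keeps at most 15
// characters plus the terminator, which is why "poll_uart_thread"
// shows up truncated as "poll_uart_threa" in the gdb output below.
#include <pthread.h>
#include <sys/prctl.h>
void* poll_uart_thread(void* /*unused*/) {
    prctl(PR_SET_NAME, "poll_uart_thread", 0, 0, 0);
    // glibc alternative (rejects names longer than 15 characters):
    // pthread_setname_np(pthread_self(), "poll_uart_threa");
    /* ... poll the UART ... */
    return nullptr;
}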
(gdb) info threads
Id Target Id Frame
...
3 Thread 0x7ffff6ffe700 (LWP 30048) "poll_uart_threa" 0x00007ffff78eb823 in select ()
at ../sysdeps/unix/syscall-template.S:82
2 Thread 0x7ffff77ff700 (LWP 30047) "signal hander" do_sigwait (set=<optimized out>,
sig=0x7ffff77feed8)
at ../nptl/sysdeps/unix/sysv/linux/../../../../../sysdeps/unix/sysv/linux/sigwait.c:65
* 1 Thread 0x7ffff7fcc700 (LWP 30046) "simulator" __lll_lock_wait ()
at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:132
Pointer, PID {well, thread ID, but LWP threads == processes, ish}, and name
(gdb) thread apply all bt
...
Thread 3 (Thread 0x7ffff6ffe700 (LWP 30048)):
#0 0x00007ffff78eb823 in select () at ../sysdeps/unix/syscall-template.S:82
#1 0x0000000000403bb3 in poll_uart_thread (unused=0x0) at uart.c:96
#2 0x00007ffff7bc4e9a in start_thread (arg=0x7ffff6ffe700) at pthread_create.c:308
#3 0x00007ffff78f24bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#4 0x0000000000000000 in ?? ()
Thread 2 (Thread 0x7ffff77ff700 (LWP 30047)):
<call stack>
#2 0x0000000000417a89 in sig_thread (arg=0x7fffffffbb60) at simulator.c:879
#3 0x00007ffff7bc4e9a in start_thread (arg=0x7ffff77ff700) at pthread_create.c:308
#4 0x00007ffff78f24bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#5 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7ffff7fcc700 (LWP 30046)):
<call stack>
#9 0x00000000004182e3 in simulator (flash_file=0x7fffffffe0e4 "../programs/blink.bin")
at simulator.c:1005
#10 0x0000000000401f14 in main (argc=3, argv=0x7fffffffdd48) at cli.c:167
While I can find the name by hunting the call stack, it'd be nice / convenient / etc if it would print in the summary line, which here only has PID and pointer.
There's no easy way, you have to patch GDB. It's a simple patch, you can find it here.
it'd be nice / convenient / etc if it would print in the summary line, which here only has PID and pointer.
Please file an enhancement request in GDB Bugzilla.
If you are using GDB with embedded python, you might be able to script "thread apply" to do what you want, but it really ought to do the right thing already.

Very strange Segmentation fault analysis in GDB

I have an application (a server) written in C++ that crashes after a few hours, seemingly at random.
The worst part is that when I try to debug any of the core files using gdb, I see this result:
gdb --core=core.668 --symbols=selectserver
GNU gdb 6.8
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...
Core was generated by `./selectserver'.
Program terminated with signal 11, Segmentation fault.
[New process 672]
[New process 671]
[New process 670]
[New process 669]
[New process 668]
#0 0xb7866896 in ?? ()
(gdb) info threads
5 process 668 0xffffe410 in __kernel_vsyscall ()
4 process 669 0xffffe410 in __kernel_vsyscall ()
3 process 670 0xffffe410 in __kernel_vsyscall ()
2 process 671 0xffffe410 in __kernel_vsyscall ()
* 1 process 672 0xb7866896 in ?? ()
(gdb) bt
#0 0xb7866896 in ?? ()
#1 0x082da4b0 in ?? ()
#2 0xb79e4252 in ?? ()
#3 0xa2ba9014 in ?? ()
#4 0x0825e14c in ?? ()
#5 0x082da4b0 in ?? ()
#6 0xb56175e8 in ?? ()
#7 0x00000080 in ?? ()
#8 0xb5fe723f in ?? ()
#9 0xa2ba9014 in ?? ()
#10 0xa2ba9008 in ?? ()
#11 0xb7a32ff4 in ?? ()
#12 0x00000000 in ?? ()
(gdb) thread 2
[Switching to thread 2 (process 671)]#0 0xffffe410 in __kernel_vsyscall ()
(gdb) bt
#0 0xffffe410 in __kernel_vsyscall ()
#1 0xb7889486 in ?? ()
#2 0x00000000 in ?? ()
(gdb) thread 3
[Switching to thread 3 (process 670)]#0 0xffffe410 in __kernel_vsyscall ()
(gdb) bt
#0 0xffffe410 in __kernel_vsyscall ()
#1 0xb7889486 in ?? ()
#2 0x00000000 in ?? ()
(gdb) thread 4
[Switching to thread 4 (process 669)]#0 0xffffe410 in __kernel_vsyscall ()
(gdb) bt
#0 0xffffe410 in __kernel_vsyscall ()
#1 0xb7889486 in ?? ()
#2 0x00000000 in ?? ()
(gdb) thread 5
[Switching to thread 5 (process 668)]#0 0xffffe410 in __kernel_vsyscall ()
(gdb) bt
#0 0xffffe410 in __kernel_vsyscall ()
#1 0xb78b7de1 in ?? ()
#2 0x00000032 in ?? ()
#3 0xbf849ae8 in ?? ()
#4 0xbf8499e8 in ?? ()
#5 0x00000000 in ?? ()
(gdb) quit
I don't know what is going on, or why the addresses on the stack (apart from __kernel_vsyscall) are so weird and don't map to any symbol.
What do I need to do to find the problem and debug this memory dump?
Thanks for your help!
You need to compile the program with debugging symbols or get a separate file with debugging symbols. Pass the -g flag to gcc to enable these.
If you want to see what all of the functions are, even the ones inside libraries (for instance, standard library functions), you also need to get a version of the library with debugging symbols.
Starting GDB as gdb --core=core.668 selectserver (passing the executable itself rather than --symbols=selectserver) fixed the problem.

Determining the correct thread to debug in GDB

I've run into some problems debugging a multi-threaded process using GDB. The process splinters off into several (8 or 9) threads, and I am trying to determine what the contents of certain variables are when the constructor of a class called XML_File_Data is called. However, I've run into a problem: after I apply the function breakpoint to all threads and it's apparent that one thread's breakpoint is being hit (the program temporarily halts execution), I'm not able to determine which thread hit the breakpoint. The command
(gdb) thread apply all where
is giving me shockingly useless information in the form:
#0 0x004ab410 in __kernel_vsyscall ()
#1 0x05268996 in nanosleep () from /lib/libc.so.6
#2 0x052a215c in usleep () from /lib/libc.so.6
#3 0x082ee313 in frame_clock_frame_end (clock=0xb4bfd2f8)
at frame_clock.c:143
#4 0x003a349a in ?? ()
#5 0x00b5cfde in thread_proxy ()
from /cets_development_libraries/install/lib/libboost_thread-gcc41-mt-1_38.so.1.38.0
#6 0x02c1f5ab in start_thread () from /lib/libpthread.so.0
#7 0x052a8cfe in clone () from /lib/libc.so.6
Of the 9 threads, 7 or so give me almost exactly that output, and the information about the last 2 isn't really much more helpful (functions far down the call stack have recognizable names, but the most recent frames, #0-#4, aren't recognizable).
This is what I have so far:
$ gdb
(gdb) attach <processid>
(gdb) thread apply all break 'XML_File_Data::XML_File_Data()'
and (after the breakpoint is hit)
(gdb) thread apply all where
Could any experienced debuggers offer me some hints on what I am doing wrong or what is normally done in this situation?
Cheers,
Charlie
EDIT: Fortunately, I was able to find out that the cause of the ??'s was optimized code being run through the debugger, in addition to not running the debugger in the directory of the executable file. Still not much success with the debugging though.
I find myself doing this all the time:
> t a a f
Short for:
> thread apply all frame
Of course, other variants are possible:
> t a a bt 3
Which prints the innermost 3 frames of each thread's stack. (You can also use a negative number to get the outermost N frames instead.)
You can use the thread or info threads command to find out the current thread number after the breakpoint is hit:
(gdb) thread
[Current thread is 1 (Thread 0xb790d6c0 (LWP 2519))]
(gdb)
(gdb) info threads
17 Thread 0xb789cb90 (LWP 2536) 0xb7fc6402 in __kernel_vsyscall ()
16 Thread 0xb769bb90 (LWP 2537) 0xb7fc6402 in __kernel_vsyscall ()
15 Thread 0xb749ab90 (LWP 2543) 0xb7fc6402 in __kernel_vsyscall ()
14 Thread 0xb7282b90 (LWP 2544) 0xb7fc6402 in __kernel_vsyscall ()
13 Thread 0xb5827b90 (LWP 2707) 0xb7fc6402 in __kernel_vsyscall ()
12 Thread 0xb5626b90 (LWP 2708) 0xb7fc6402 in __kernel_vsyscall ()
11 Thread 0xb5425b90 (LWP 2709) 0xb7fc6402 in __kernel_vsyscall ()
10 Thread 0xb5161b90 (LWP 2713) 0xb7fc6402 in __kernel_vsyscall ()
9 Thread 0xb4ef9b90 (LWP 2715) 0xb7fc6402 in __kernel_vsyscall ()
8 Thread 0xb4af7b90 (LWP 2717) 0xb7fc6402 in __kernel_vsyscall ()
7 Thread 0xb46ffb90 (LWP 2718) 0xb7fc6402 in __kernel_vsyscall ()
6 Thread 0xb44feb90 (LWP 2726) 0xb7fc6402 in __kernel_vsyscall ()
5 Thread 0xb42fdb90 (LWP 2847) 0xb7fc6402 in __kernel_vsyscall ()
4 Thread 0xb40fcb90 (LWP 2848) 0xb7fc6402 in __kernel_vsyscall ()
3 Thread 0xb3efbb90 (LWP 2849) 0xb7fc6402 in __kernel_vsyscall ()
2 Thread 0xb3cfab90 (LWP 2850) 0xb7fc6402 in __kernel_vsyscall ()
* 1 Thread 0xb790d6c0 (LWP 2519) 0xb7fc6402 in __kernel_vsyscall ()
(gdb)
An asterisk (*) to the left of the gdb thread number indicates the current thread. See here.