I am coding in C++ on Linux. I have installed a Ctrl-C (SIGINT) handler so that I can clean up all resources on exit. However, I have a problem when I run under gdb: Ctrl-C is intercepted by gdb as its stop command. How do I deliver Ctrl-C to my program so that I can test my resource cleanup code?
At gdb's command prompt:
signal SIGINT
You can tell GDB to pass the signal through to your program and not stop:
(gdb) handle SIGINT pass nostop
I know we can use the handle command in gdb to make it pass the SIGINT signal to the program. Is there a way to do the same while using the debugger in VS Code?
In the Debug Console of VS Code, write:
-exec handle SIGINT pass
-exec handle SIGINT nostop
Open another terminal.
ps -eaf | grep <Proc-Name>   # find the PID
kill -s SIGINT PID_OF_PROCESS
Go back to VS Code.
Now you can see and use the stack trace, etc.
I want to debug a C program with gdb which is invoked by a shell script. This shell script does a lot of work and sets many environment variables.
The shell script itself is invoked via boost::process::launch from a C++ program.
I can change the C++ program, the shell script, and the C program itself, but I can't change the architecture of this flow.
Is there any way I can use gdb to debug the program?
If there is no solution, is there a way to dump all environment settings before launching the shell script, so that I can launch the same script with those settings and debug it later? I would prefer a portable, long-term solution.
Two easy options:
Attach gdb after the program has started with gdb -p <pid of process> if you don't need to stop it at a specific point.
Insert a raise(SIGSTOP); in the C program where you want it to stop. Once the process is stopped, attach gdb as in option 1, set any breakpoints you need, and then send the process a SIGCONT signal (kill -CONT <pid of process>) to cause it to continue.
I am running GDB in "mi interpreter" mode and I am using user-defined hooks to detect events such as stop, quit, etc.
Whenever the event occurs the hook will print some information which are redirected to a different log file.
Another application will read the contents from this log file and process it.
I have written a hook to detect GDB exit, as illustrated:
define hook-quit
set logging file D:\log\task.log
set logging on
print "GDB end detected"
set logging off
end
In GDB's console mode, the above hook executes successfully when GDB exits.
However, in GDB's "mi interpreter" mode, the hook fails to execute.
Is there an alternative hook (or any other method) for detecting GDB exit in "mi interpreter" mode?
Tested Environment:
Windows 7
Toolchain: arm-none-eabi (command: arm-none-eabi-gdb.exe --interpreter=mi D:\test.elf)
A couple of ways to do this come to mind.
One is to use Python to write an "at exit" hook that prints to the log. Python exit hooks should be run during gdb's exit.
Another is to do the writing at a different layer: either in whatever is invoking gdb -i=mi, or by writing a wrapper script that invokes gdb and then writes to the log when gdb exits.
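A sketch of the first option, using Python's standard atexit module from inside gdb (this assumes a Python-enabled gdb build; LOG_PATH and the message are illustrative):

```python
# Exit hook for gdb's Python interpreter: log a line when gdb shuts down.
import atexit

LOG_PATH = "task.log"  # hypothetical log path

def log_gdb_exit():
    with open(LOG_PATH, "a") as f:
        f.write("GDB end detected\n")

atexit.register(log_gdb_exit)
```

Loaded via gdb's python command or source'd from a .py file, the handler should fire when gdb exits regardless of interpreter mode, unlike hook-quit.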
I seem to have some kind of multithreading bug in my code that makes it crash once every 30 runs of its test suite. The test suite is non-interactive. I want to run my test suite in gdb, and have gdb exit normally if the program exits normally, or break (and show a debugging prompt) if it crashes. This way I can let the test suite run repeatedly, go grab a cup of coffee, come back, and be presented with a nice debugging prompt. How can I do this with gdb?
This is a little hacky but you could do:
gdb -ex='set confirm on' -ex=run -ex=quit --args ./a.out
If a.out terminates normally, it will just drop you out of GDB. But if you crash, the program will still be active, so GDB will typically prompt if you really want to quit with an active inferior:
Program received signal SIGABRT, Aborted.
0x00007ffff72dad05 in raise (sig=...) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
64 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
in ../nptl/sysdeps/unix/sysv/linux/raise.c
A debugging session is active.
Inferior 1 [process 15126] will be killed.
Quit anyway? (y or n)
Like I said, not pretty, but it works, as long as you haven't toggled off the prompt to quit with an active process. There is probably a way to use gdb's quit command too: it takes a numeric argument which is the exit code for the debugging session. So maybe you can use --eval-command="quit stuff", where stuff is some GDB expression that reflects whether the inferior is running or not.
This program can be used to test it out:
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main() {
    if (time(NULL) % 2) {
        raise(SIGINT);
    }
    puts("no crash");
    return EXIT_SUCCESS;
}
You can also trigger a backtrace when the program crashes and let gdb exit with the return code of the child process:
gdb -return-child-result -ex run -ex "thread apply all bt" -ex "quit" --args myProgram -myProgramArg
The easiest way is to use the Python API offered by gdb:
def exit_handler(event):
    gdb.execute("quit")

gdb.events.exited.connect(exit_handler)
You can even do it with one line:
(gdb) python gdb.events.exited.connect(lambda x : gdb.execute("quit"))
You can also examine the return code to ensure it's the "normal" code you expected with event.exit_code.
You can use it in conjunction with --eval-command or --command as mentioned by @acm to register the event handler from the command line, or with a .gdbinit file.
Create a file named .gdbinit and it will be used when gdb is launched.
run
quit
Run with no options:
gdb --args prog arg1...
You are telling gdb to run and then quit; if an error occurs, gdb stops processing the file and leaves you at the prompt.
Make it dump core when it crashes. If you're on Linux, read the core(5) man page and also the ulimit builtin if you're running bash.
This way when it crashes you'll find a nice corefile that you can feed to gdb:
$ ulimit -c unlimited
$ ... run program ..., gopher coffee (or reddit ;)
$ gdb progname corefile
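Once gdb has loaded the core file, the usual post-mortem commands apply, for example:

```gdb
(gdb) bt              # backtrace at the point of the crash
(gdb) bt full         # same, including local variables in each frame
(gdb) frame 1         # select a frame to inspect
(gdb) info registers  # register contents at the time of the crash
```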
If you put the following lines in your ~/.gdbinit file, gdb will exit when your program exits with a status code of 0.
python
def exit_handler(event):
    if event.exit_code == 0:
        gdb.execute("quit")
gdb.events.exited.connect(exit_handler)
end
The above is a refinement of Kevin's answer.
Are you not getting a core file when it crashes? Start gdb like this 'gdb -c core' and do a stack traceback.
More likely you will want to be using Valgrind.
I'm trying to debug a server I wrote with gdb as it segfaults under very specific and rare conditions.
Is there any way I can make gdb run in the background (via quiet or batch mode?), follow children (as my server is a daemon and detaches from the main PID) and automatically dump the core and the backtrace (to a designated file) once the program crashes?
Assuming you have appropriate permissions, you can have gdb attach to any process. You can do it on the command line with:
gdb /path/to/binary <pid>
or from within gdb with the attach command:
attach <pid>
So, once your daemon has started, you can use either of these techniques to attach to the final PID your daemon is running as. Attaching gdb stops the process which you are tracing so you will need to issue a "continue" to restart it.
I don't know a direct way to get gdb to run arbitrary commands when the program crashes. Here is one workaround I can think of:
1. Create and register a signal handler for SIGSEGV.
2. Tell gdb not to stop on that signal (handle SIGSEGV nostop).
3. Set a breakpoint at the first line of your signal handler.
4. Assign commands to the breakpoint from step 3.
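A sketch of steps 2-4 as gdb commands, assuming the handler from step 1 is named segv_handler (the handler name and log path are illustrative):

```gdb
handle SIGSEGV nostop noprint pass
break segv_handler
commands
  set logging file crash.log
  set logging on
  backtrace full
  set logging off
  continue
end
```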
Why not just run the process interactively in a persistent screen session? Why must it be a daemon when debugging? Or just run gdb in the screen session and attach it to the running process (e.g. gdb /path/to/binary -p PID_of_binary) after it forks.
First, I'd set up your shell/environment to give you a core dump. In bash:
ulimit -c unlimited
Once you have the core dump, you can use gdb to examine the stack trace:
gdb /path/to/app /path/to/core/file
I'm not really a gdb expert but two things come to mind
Tracepoints which might give you the necessary information as your program runs or
Use gdb's remote debugging facility to debug your program while it's running as a daemon.
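For the second route, gdbserver can attach to the already-running daemon so that gdb drives it from another terminal (the port number, binary name, and path below are placeholders):

```
# on the machine running the daemon
gdbserver --attach :2345 <pid_of_daemon>

# in another terminal (or on another host)
gdb /path/to/my_server
(gdb) target remote localhost:2345
```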
The answers to "How to generate a stacktrace when my gcc C++ app crashes" should do what you want (assuming you can make changes to your code).
You might want to take a look at how Samba facilitates debugging; it has a configurable "panic action" that can suspend the application, notify the developer, spawn gdb, etc., and is run as part of its signal handler. See lib/util/fault.c in the Samba source tree.
My practice: comment out the daemon() call, rebuild the binary, then run it under gdb.