Which signal numbers does GDB use?

I've been trying to put together a kind of 'remote gdb agent', but I can't seem to find the right signal numbers for the stop packets. Where/how can I find the signal numbers gdb actually uses? At least gdb-multiarch from the Debian Jessie repo behaves unexpectedly.
Signal 31 is shown as SIG37, real-time event 37 (I expected SIGUSR2),
and if I send signal 10, gdb shows "Can't send signals to this remote system. SIGURG not sent." and sends a 'c' packet (I expected SIGBUS).
With remote and serial debug output enabled, I can see that signals 31 and 10 are actually received by gdb.
[edit]
By experimenting I worked out the first 30 signals. Here are the first ones
(an asterisk means debugging cannot continue after that signal):
2 SIGINT
4 SIGILL
5 SIGTRAP
6 SIGABRT *
7 SIGEMT
8 SIGFPE
9 SIGKILL
10 SIGURG *
11 SIGSTOP
12 SIGTSTP
13 SIGCONT *
14 SIGCHLD *
15 SIGTTIN *
16 SIGTTOU
17 SIGIO *
18 SIGXCPU *
[edit2]
[r $][T][1][0][#][b][5]Packet received: T10
...
Can't send signals to this remote system. SIGURG not sent.
Sending packet: $c#63...[\x00][\x00][\x00][\x00][\x00][
r +]Ack

The gdb remote serial protocol uses its own signal numbering, independent of the target OS, so your remote agent has to translate between the two. See the first paragraph of the protocol documentation for details; the numbers themselves are defined in include/gdb/signals.def in the GDB sources. Also note that the signal field of a stop packet such as T10 is two hexadecimal digits: T10 means signal 0x10 = 16, which is GDB_SIGNAL_URG, and T31 means 0x31 = 49, one of GDB's real-time events. That matches exactly what you observed.
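For illustration, here is a minimal C++ sketch of the translation a remote agent might do. The GDB-side numbers in the comments are taken from signals.def (verify them against your GDB version), the host side assumes Linux, and the '$'/'#'/checksum framing follows the remote serial protocol:

#include <csignal>
#include <cstdio>
#include <string>

// Map a host (Linux) signal number to GDB's protocol number.
// GDB's values come from include/gdb/signals.def; only a few
// common ones are shown here.
static int host_to_gdb_signal(int sig) {
    switch (sig) {
    case SIGINT:  return 2;   // GDB_SIGNAL_INT
    case SIGILL:  return 4;   // GDB_SIGNAL_ILL
    case SIGTRAP: return 5;   // GDB_SIGNAL_TRAP
    case SIGABRT: return 6;   // GDB_SIGNAL_ABRT
    case SIGFPE:  return 8;   // GDB_SIGNAL_FPE
    case SIGBUS:  return 10;  // GDB_SIGNAL_BUS (Linux SIGBUS is 7)
    case SIGSEGV: return 11;  // GDB_SIGNAL_SEGV
    case SIGUSR1: return 30;  // GDB_SIGNAL_USR1
    case SIGUSR2: return 31;  // GDB_SIGNAL_USR2
    default:      return 0;   // GDB_SIGNAL_0; extend the table as needed
    }
}

// Build a stop-reply packet. The signal field is TWO HEX DIGITS,
// so "T10" is signal 0x10 = 16 (SIGURG to gdb), not decimal 10.
static std::string make_stop_reply(int host_sig) {
    char body[8];
    std::snprintf(body, sizeof body, "T%02x", host_to_gdb_signal(host_sig));
    unsigned char sum = 0;                    // checksum is the mod-256 sum
    for (const char *p = body; *p; ++p) sum += (unsigned char)*p;
    char packet[16];
    std::snprintf(packet, sizeof packet, "$%s#%02x", body, sum);
    return packet;
}

int main() {
    // Prints "$T05#b9": 'T'+'0'+'5' = 0x54+0x30+0x35 = 0xb9.
    std::printf("%s\n", make_stop_reply(SIGTRAP).c_str());
}

As a cross-check, the checksum b5 on the T10 packet in the trace above is exactly 'T'+'1'+'0' mod 256.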

arm-none-eabi-gdb on stm32: warning: unrecognized item "timeout" in "qSupported" response

I'm using the command-line to do my stm32 development. CubeIDE and Atom are too heavyweight for the specs of my machine.
I compile an elf and bin file with debug support and upload the bin to the stm32. It is a simple LED blink program, and it works.
I start stlink-server, and it reports port 7184. In another terminal I type:
$ arm-none-eabi-gdb
file app.elf
target remote localhost:7184
I do not get a response for about 30 seconds, then arm-none-eabi-gdb reports:
Ignoring packet error, continuing...
warning: unrecognized item "timeout" in "qSupported" response
Ignoring packet error, continuing...
Remote replied unexpectedly to 'vMustReplyEmpty': timeout
stlink-server reports:
Error: recv returned 0. Remote side has closed gracefully. Good.
But not good!
So, what do I do? I can't seem to halt the stm32, set breakpoints, run, etc.
I'm running a mish-mash of stlink-server, arm-none-eabi-gcc, and arm-none-eabi-gdb from various sources, which might not be helping.
I'm using a Chinese ST-LINK v2, which I hear might not have all the pins needed for debugging wired up, so I may have to bridge some pins. It uploads the bin OK, though.
Update 1 OK, perhaps a little progress (??)
I start st-util, which reports:
2020-07-06T14:50:03 INFO common.c: F1xx Medium-density: 20 KiB SRAM, 64 KiB flash in at least 1 KiB pages.
2020-07-06T14:50:03 INFO gdb-server.c: Listening at *:4242...
So then in a separate console I type:
$ arm-none-eabi-gdb
(gdb) target remote localhost:4242
(gdb) file app.elf
(gdb) load app.elf
You can't do that when your target is `exec'
Oh. Also:
(gdb) r
Don't know how to run. Try "help target".
So I think I'm getting closer. It appears that I can set breakpoints, and maybe I've just run the commands in the wrong order.
I think maybe I have to do:
exec app.elf
but that doesn't seem to respect the breakpoints.
Hmmm.
Update 2 The saga continues.
This seems better:
$ arm-none-eabi-gdb
(gdb) target remote localhost:4242
(gdb) file app.elf
(gdb) b 26
(gdb) continue
That seems to respect breakpoints; but the debugger reports:
Continuing.
Note: automatically using hardware breakpoints for read-only addresses.
Program received signal SIGTRAP, Trace/breakpoint trap.
0x0800000c in _reset ()
(gdb) print i
No symbol "i" in current context
Hmmm. It seems that the program is no longer in main() but in a signal trap, and hence i is not in the current context (even though I defined it in main()).
So reaching a breakpoint basically causes the machine to reset? That kinda defeats the point of debugging, so I think I must be doing something wrong and there's a better way of doing it?
Update 3
I switched to the Arduino IDE and uploaded a sketch. Using the procedure above, I didn't get the signal trap problem. I was able to debug my program, set breakpoints, and inspect variables. Nice. The Arduino is obviously incorporating some "secret sauce" that I had not added to my own non-Arduino code.
So, it mostly works. Mostly.
Try resetting the MCU before loading:
target remote localhost:4242
file app.elf
monitor reset halt
load app.elf
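Once that sequence works interactively, the same commands can be passed on gdb's command line so every session comes up ready to debug. A sketch using the file name and port from the question; note that monitor commands are server-specific (OpenOCD accepts reset halt, st-util's monitor commands may differ):

$ arm-none-eabi-gdb app.elf \
      -ex "target remote localhost:4242" \
      -ex "monitor reset halt" \
      -ex "load"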

GDB - establish communication between gdb and an OCD daemon

I am writing an OCD daemon for an architecture that is not yet supported by the existing ones. For now I am trying to establish remote communication between GDB and My_OCD_Daemon, and this is where the problems start. Right after I request a connection to my daemon with "target remote tcp:IP:PORT", gdb starts sending a bunch of requests; here are a few of them:
Sending packet: $Hg0#df...Ack
Packet received:
Sending packet: $qxtn#cb...Ack
Packet received: XOCD
...
Sending packet: $qxtocdversion#99...Ack
Packet received: 6000
Sending packet: $p2b0#34...Ack
Reply contains invalid hex digit 79
Fetching next packet
...
For most of them it is enough to reply just '+', which acknowledges successful reception. However, there are commands like $p2b0#34 that expect a sane value back.
So, is there a way to skip this never-ending chain of requests from GDB and make it wait for user input?
What should such an init/handshake procedure look like?
Thanks.
Okay, so it looks like we cannot "bypass" or "skip" this initial stage of gdb. It is used to configure the gdb session and must be handled with care: feeding gdb bogus values will result in odd behaviour during the debugging session. For queries the stub does not implement, the correct reply is the empty packet ($#00), which tells gdb that the feature is unsupported.
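As a rough illustration, here is a C++ sketch of such a receive loop, assuming an already connected TCP socket fd; handle_packet() is a hypothetical dispatcher for whatever commands your daemon actually implements, and everything else is acknowledged and answered with the empty packet:

#include <cstdio>
#include <string>
#include <unistd.h>   // read(), write()

// Send one RSP packet: $<payload>#<two-hex-digit mod-256 checksum>.
static void send_packet(int fd, const std::string &payload) {
    unsigned char sum = 0;
    for (char c : payload) sum += (unsigned char)c;
    char trailer[4];
    std::snprintf(trailer, sizeof trailer, "#%02x", sum);
    std::string pkt = "$" + payload + trailer;
    write(fd, pkt.data(), pkt.size());
}

// Hypothetical dispatcher: return a reply for the packets you
// implement; return "" for everything else, since the empty packet
// is how a stub tells gdb a request is unsupported.
static std::string handle_packet(const std::string &cmd) {
    if (!cmd.empty() && cmd[0] == 'p')  // 'p': read one register (e.g. p2b0)
        return "00000000";              // register value as hex (stub data)
    return "";                          // unsupported
}

// Minimal receive loop: collect $...#xx, ack with '+', then reply.
void serve(int fd) {
    std::string body;
    char c;
    while (read(fd, &c, 1) == 1) {
        if (c == '$') body.clear();              // start of a packet
        else if (c == '#') {
            char csum[2];                        // two checksum digits
            if (read(fd, csum, 2) != 2) return;
            write(fd, "+", 1);                   // ack (checksum check omitted)
            send_packet(fd, handle_packet(body));
        } else if (c != '+') body.push_back(c);  // ignore gdb's own acks
    }
}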

Use GDB to Debug SIGTERM

I have searched several questions on Stack Overflow about debugging SIGTERM, but have not found the information I needed. Perhaps I am still new to this issue.
My program terminated with SIGTERM without a core dump, and I do not know how to track this down. My question is: what is the general way of debugging this in GDB?
Thanks.
Although SIGTERM can be sent by the kernel in a few cases, it's almost always sent by another user process. If you run your program under gdb, it will be stopped when it receives a SIGTERM. You can then get some info about the signal by looking at the $_siginfo structure:
(gdb) print $_siginfo._sifields._kill
$2 = {si_pid = 3926, si_uid = 1001}
This is on Linux. It means that pid 3926 sent the signal, and the userid who sent it is 1001.
My program terminated with SIGTERM without a core dump
It is expected that no core dump is produced when someone sends your program a SIGTERM.
and I do not know how to track this down.
You need to figure out where that SIGTERM is coming from. Someone is sending it to your program, and the key question is who.
Usually SIGTERM is sent when you type kill <pid> in some other terminal. (Typing Control-C in the terminal in which you started the program sends SIGINT, not SIGTERM.)
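If running under gdb is not an option, the same information can be captured inside the program. Here is a minimal POSIX C++ sketch (Linux assumed); the siginfo_t fields are the same ones gdb exposes as $_siginfo._sifields._kill:

#include <csignal>
#include <cstdio>
#include <unistd.h>

// SA_SIGINFO handler: report who sent us SIGTERM.
static void on_sigterm(int, siginfo_t *info, void *) {
    // Only async-signal-safe calls belong in a real handler;
    // this is just a sketch.
    char msg[64];
    int n = std::snprintf(msg, sizeof msg, "SIGTERM from pid %ld uid %ld\n",
                          (long)info->si_pid, (long)info->si_uid);
    write(STDERR_FILENO, msg, n);
}

int main() {
    struct sigaction sa = {};
    sa.sa_sigaction = on_sigterm;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGTERM, &sa, nullptr);
    pause();   // wait here until a signal arrives
}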

How to send signals to a program started in debug mode in KDevelop

I want to analyse a program that I've written in KDevelop.
I compile the program and start it with
Right Click on the CMake Project -> Debug as... -> Native Application
Now the program runs in KDevelop and I can see the output on the console embedded into KDevelop.
My program stops running when I press Ctrl+C (SIGTERM). I can do that when running the program in a console outside KDevelop.
How can I send the signal "SIGTERM" to the embedded console inside KDevelop?
As a workaround I can start htop, select the program and send a SIGTERM from there, which works fine although it would be nicer to have all the functionality in KDevelop itself.
One possible solution is:
Right Click on the CMake Project -> Debug as... -> Native Application.
Change to the "gdb"-Tab inside KDevelop.
Hit the "Pause"-Icon on the right corner to enable the input field of the "gdb"-Tab
Type signal <Signal>, e.g. signal SIGTERM
The program continues and it catches the signal sent.
Use the kill command to send a signal to a process. kill -l should provide you with a list of signals and their IDs.
For example, on FreeBSD, the SIGTERM signal is #15 as shown by this output:
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGEMT 8) SIGFPE 9) SIGKILL 10) SIGBUS
11) SIGSEGV 12) SIGSYS 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGURG 17) SIGSTOP 18) SIGTSTP 19) SIGCONT 20) SIGCHLD
21) SIGTTIN 22) SIGTTOU 23) SIGIO 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGINFO 30) SIGUSR1
31) SIGUSR2
So to send a SIGTERM to my process, I look up the process ID and then send it a kill command like so:
kill -15 <process ID>
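The same thing can be done from code when needed; a minimal POSIX C++ sketch, where the pid is a placeholder for the process ID you looked up:

#include <signal.h>     // kill(), SIGTERM
#include <sys/types.h>  // pid_t
#include <cstdio>

int main() {
    pid_t pid = 12345;                 // placeholder: the process ID you looked up
    if (kill(pid, SIGTERM) == -1) {    // same effect as `kill -15 12345`
        std::perror("kill");
        return 1;
    }
}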
You can send SIGINT from inside KDevelop:
Run -> Interrupt
However, you can't send any other signal that way.
If you think that would be a useful feature, create a wish request on bugs.kde.org, perhaps even with a patch attached :D

Socket recv call freezes thread for approx. 5 seconds

I have a client-server architecture implemented in C++ with blocking sockets under Windows 7. Everything runs well up to a certain level of load. If a couple of clients (e.g. > 4) are receiving or sending megabytes of data, the communication with one client sometimes freezes for approximately 5 seconds. All the other clients work as expected in that case.
The buffer size is 8192 bytes and logging on the server side reads as follows:
TimeStamp (s.ms) - received bytes
…
1299514524.618 - 8192
1299514524.618 - 8192
1299514524.618 - 0004
1299514529.641 - 8192
1299514529.641 - 3744
1299514529.641 - 1460
1299514529.641 - 1460
1299514529.641 - 8192
…
It seems that only 4 bytes could be read within those 5 seconds. Furthermore, I found that the freeze is always around 5 seconds: never 4 or less, never 6 or more...
Any ideas?
This is a Windows bug.
KB 2020447 - Socket communication using the loopback address will intermittently encounter a five second delay
A hotfix is available:
KB 2861819 - Data transfer stops for five seconds in a Windows Socket-based application in Windows 7 and Windows Server 2008 R2
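To confirm you are hitting that bug rather than something else, it helps to timestamp every recv() the way the log above does. A minimal Winsock sketch (assumes an already connected SOCKET s; link against ws2_32; error handling omitted):

#include <winsock2.h>
#include <chrono>
#include <cstdio>

// Log size and inter-arrival gap of every recv() so a ~5000 ms
// stall like the one in the question stands out immediately.
void recv_loop(SOCKET s) {
    char buf[8192];   // same buffer size as in the question
    auto last = std::chrono::steady_clock::now();
    for (;;) {
        int n = recv(s, buf, sizeof buf, 0);
        if (n <= 0) break;   // connection closed or error
        auto now = std::chrono::steady_clock::now();
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(now - last).count();
        std::printf("+%5lld ms - %4d bytes\n", ms, n);
        last = now;
    }
}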
I have had this problem in situations of high load: the last chunk of TCP data sometimes arrived before the second-to-last one, and the default stack configuration did not compensate for the reordering; the disorder produced receive behaviour similar to what you describe.
The solution we adopted was to distribute the load across more servers.