Is there a way to disable hardware watchpoints in GDB/MI?
It's possible using the console interface with the following command:
set can-use-hw-watchpoints 0
Anything that can be done from the CLI can also be done via MI.
One simple, generic way is to just send the CLI command. MI will understand this. So, your MI client can just send set can-use-hw-watchpoints 0.
However, for the specific case of set, you can use the MI -gdb-set command.
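For illustration, the exchange for both approaches might look roughly like this in a raw MI session (a sketch; the &"…" line is the console-stream echo of the CLI command, and exact output can vary by GDB version):

```
-gdb-set can-use-hw-watchpoints 0
^done
(gdb)
set can-use-hw-watchpoints 0
&"set can-use-hw-watchpoints 0\n"
^done
(gdb)
```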
Related
I am trying to write a practice script to debug a multi-core system in Trace32. As per the docs, I started the script as follows
SYStem.CPU CortexA55
SYStem.CONFIG CoreNumber 3
Core.Number 3
I can choose to single step in a single core by using the data list window buttons for that core. But I am not sure how to do that using the commands. In their docs they mention
Where the target is configured for Symmetric Multi-Processing (SMP),
a single instance of TRACE32 PowerView is used to control all cores.
In this case many PRACTICE commands accept a /CORE option to indicate that the command should be run with reference to a
particular core or cores.
But I tried to execute these commands
Go.Backover
Step
Print Register(pc)
but none of them seems to have this /CORE option.
All commands used without the option /CORE apply to the currently selected logical core. You can identify the currently selected core by checking the status bar: to the left of the system state (probably showing "stopped"), you'll see a digit indicating the active logical core.
You can change the active logical core with the command CORE.select. Note that after any Step or Go command the active logical core can change, because when the SoC enters a running state and then re-enters the halt state, the active logical core is the core which caused the SoC to halt.
So in your case the following should be safe.
CORE.select 0
Step.Over
CORE.select 0
Step
CORE.select 0
ECHO Register(PC)
Instead of CORE.select 0 you can of course also use CORE.select 1 or CORE.select 2.
I think there is no command called Go.Backover, so I am not sure if you are referring to Step.BackOver, Step.Over, Step.Back, or Go.Back.
By the way:
You have used SYStem.CPU CortexA55. That is not wrong, but probably not optimal: selecting a base core like CortexA55 is intended for self-made SoCs and usually requires a lot of additional SYStem.CONFIG commands to tell the debugger the location of the CoreSight components. If you have not created your own SoC, try to find the exact chip name in the window SYStem.CPU.
I want to get RX/TX statistics like bytes or packets sent/received for DPDK enabled interfaces. Similar to data present in the /proc/net/dev file. How can I get this?
I tried the command
./dpdk-procinfo -- --stats
But I get the following error.
The command that I use for the primary application:
./tas --ip-addr=10.0.0.1/24 --shm-len=1073741824 --dpdk-extra="-w 01:00.1" --fp-cores-max=4
I get the following output on ldd
[EDIT] Based on a debug session with Ashwin, it has been found that the primary application is compiled with DPDK 19.11 while procinfo is run with DPDK 17.11.4. Running with the right version for primary and secondary is working with l2fwd. The application still has CFLAGS and LDFLAGS cleanup to be done; suggested the same.
Solution: always run dpdk-procinfo with the same version as primary.
Please go through http://doc.dpdk.org/api/rte__ethdev_8h.html. There are APIs rte_eth_stats_get and rte_eth_xstats_get which do the job for you. These can be invoked in both the primary and secondary DPDK applications.
But if you are looking for a ready-made solution, please take a look at the dpdk-procinfo application. The binary for the target is present in the target folder under app, while the source code is present in dpdk-root/app/procinfo.
A quick way to test the same is by referring to https://doc.dpdk.org/guides-18.08/tools/proc_info.html. The sample command lines are ./dpdk-procinfo -- --stats and ./dpdk-procinfo -- --xstats.
[EDIT]
As per the comment, if the primary is run with whitelisted PCIe devices, please pass the same whitelist to dpdk-procinfo.
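If you go the API route instead, a minimal sketch of a per-port stats helper might look like the following. This is only an illustration: it assumes a DPDK build environment (rte_ethdev.h is not standalone), that rte_eal_init() has already completed in the calling primary or secondary process, and print_port_stats is a hypothetical name:

```c
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>  /* requires DPDK headers; not standalone */

/* Print per-port RX/TX counters, roughly what /proc/net/dev shows.
 * Assumes rte_eal_init() has already run in this process. */
static void print_port_stats(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0) {
        fprintf(stderr, "stats unavailable for port %u\n", port_id);
        return;
    }
    printf("port %u: RX %" PRIu64 " pkts / %" PRIu64 " bytes, "
           "TX %" PRIu64 " pkts / %" PRIu64 " bytes\n",
           port_id, stats.ipackets, stats.ibytes,
           stats.opackets, stats.obytes);
}
```

rte_eth_xstats_get works the same way but returns the extended (driver-specific) counters, which is what --xstats prints.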
I'm wondering if there is any possibility to run Scapy's sniff(...) without root privileges.
It is used in an application where certain packets are captured. But I don't want to run the whole application with root permissions or change anything in Scapy itself.
Thanks in advance!
EDIT:
For testing I use the following code:
from scapy.all import *

def arp_monitor_callback(pkt):
    if ARP in pkt and pkt[ARP].op in (1, 2):  # who-has or is-at
        return pkt.sprintf("%ARP.hwsrc% %ARP.psrc%")

sniff(prn=arp_monitor_callback, filter="arp", store=0)
I'm only able to run it using sudo.
I tried to set capabilities with sudo setcap 'cap_net_admin=+eip' test.py, but it doesn't show any effect. Even the all capability doesn't help.
You need to set capabilities on the binaries running your script, i.e. python and tcpdump, if you want to be able to just execute your script as ./test.py:
setcap cap_net_raw=eip /usr/bin/pythonX.X
setcap cap_net_raw=eip /usr/bin/tcpdump
Where X.X is the python version you use to run the script.
(note that path could be different on your system)
Please note that this allows anyone to open raw sockets on your system.
Although the solution provided by @Jeff is technically correct, it has the drawback of allowing anyone on the system to open raw sockets, because the file capabilities are set directly on binaries in /usr/bin.
Another way of achieving the desired outcome (a script running with just CAP_NET_RAW) is to use ambient capabilities. This can be done by leveraging a small helper binary that sets up ambient capabilities and exec()'s into the Python interpreter. For a reference, please see this gist.
Using the reference implementation, and assuming that proper file capabilities are assigned to ./ambient:
$ sudo setcap 'cap_net_raw=p' ambient
your script would be launched as:
$ ./ambient -c '13' /usr/bin/python ./test.py
Please note that:
13 is the integer value of CAP_NET_RAW as per capability.h
ambient capabilities are available since kernel 4.3
you can use pscap to verify if the process was launched with desired capabilities in its effective set
Why does this method work?
Ambient capabilities are preserved across exec() calls (hence passed to all subsequently created subprocesses) and raised in their effective set, e.g. for the Python interpreter invoked by the helper binary or tcpdump invoked by the Python script. This is of course a simplification; for a full description of transitions between capability sets, see capabilities(7).
I am currently developing a distributed program in C++ on Linux which is executed on more than 20 nodes simultaneously. One of the most challenging issues I have found is how to debug it.
I heard that it is possible to manage multiple remote sessions from a single gdb session (e.g. in the master node I create the gdb session and on every other node I launch the program using gdbserver). Is that possible? If so, can you give an example? Do you know any other way to do it?
Thanks
You can try to do it like this:
First, start the nodes with gdbserver on the remote hosts. It is even possible to start it without a program to debug, if you use the --multi flag. When the server is in multi mode, you can control it from your local session; in particular, you can make it start the program you want to debug.
Then, start multiple inferiors in your gdb session:
gdb> add-inferior -copies <number of servers>
Switch them to a remote target and connect them to the remote servers:
gdb> inferior 1
gdb> target extended-remote host:port // use extended to switch gdbserver to multi mode
// start a program if gdbserver was started in multi mode
gdb> inferior 2
...
Now you have them all attached to one gdb session. The problem is that, AFAIK, it is not much better than starting multiple gdb's from different console tabs. On the other hand, you can write some scripts or automated tests this way. See the gdb tutorial: server and inferiors.
I don't believe there is one, simple, answer to debugging "many remote applications". Yes, you can attach to a process on another machine, and step through it in GDB. But it's quite awkward to debug a large number of interdependent processes, especially when the problem is complicated.
I believe a good set of logging capabilities in the code, supplemented with additional logs for specific debugging as needed, is more likely to give you a good/fast result.
Another option might be to run the processes on one machine, rather than on multiple machines. Perhaps even use threads within one process, to simulate the behaviour of multiple machines, simplifying the debugging process. Of course, this doesn't prevent bugs that appear ONLY when you run 20 processes on 20 different machines. But the basic idea is to reduce the number of those bugs to a minimum, and debug most things in a "simpler environment".
Aggressive use of defensive programming paradigms, such as liberal use of assert, is clearly a good idea (perhaps with a macro to turn it off for the production runs). But make sure that you don't just leave error paths completely unchecked: it is MUCH harder to detect that the reason something crashes is a failed memory allocation than to track down where that NULL pointer came from, some 20 function calls away from the failed allocation.
I have a C++ program that locks Windows when a specific event occurs. Locking Windows is done using the LockWorkStation() function. What I want to do is unlock Windows when some other event occurs. For the sake of argument, let's assume Windows is locked; I need Windows to get unlocked after 2 minutes.
Does this need any modification to MSGINA.dll, or is there a simple function similar to LockWorkStation()?
Knowing that I have the username and password saved somewhere (currently on a USB stick that works as a key).
Any guidance, advice, or procedure to the direction to achieve the task is highly appreciated.
Regards
There is no supported mechanism to unlock the workstation. You would have to write a custom GINA module and then communicate with it somehow.
With the standard GINA, the closest you can get is autologon (e.g. using the Autologon tool from Sysinternals). However, autologon only kicks in after a machine reboot or after user logoff, so the user session would be lost.