What is the lldb equivalent of gdb's rwatch?

I'm in a situation where read watchpoints would be very handy. Looking at lldb's help, I could find the watchpoint command, but it only seems to support write watchpoints (which are admittedly a lot more useful in general, but won't do it in my case).
I know that gdb has a rwatch command that sets read watchpoints. Is there any equivalent with lldb?

watchpoint set variable and watchpoint set expression both take a -w / --watch argument specifying either write (the default), read, or read_write, e.g.
(lldb) wa s v -w read myvar
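Spelled out in full (myvar and my_ptr are placeholders here), that is:

(lldb) watchpoint set variable --watch read myvar
(lldb) watchpoint set expression --watch read_write -- my_ptr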

Related

How to set break points on all destructors in AIX? (c++)

I have found that gdb's rbreak can do this with a regular expression.
Is there an alternative way in AIX dbx?
I have read IBM's documentation, and it seems that dbx cannot do this.
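For reference, the gdb side of this is a one-liner; a regular expression like the following should match every destructor:

(gdb) rbreak ::~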

how to find a search term in source code

I'm looking for a way to search for a given term in a project's C/C++ code, while ignoring any occurrences in comments and strings.
As the code base is rather large, I am searching for a way to automatically identify the lines of code matching my search term, as they need manual inspection.
If possible I'd like to perform the search on my linux system.
Background
The code base in question is a real-time signal-processing engine with a large number of 3rd-party plugins. Plugins are implemented in a variety of languages (mostly C, but also C++ and others; currently I only care about those two); no standards have been enforced.
Our code base currently uses the built-in type float for floating-point numbers, and we would like to replace that with a typedef that would allow us to use doubles.
We would like to find all occurrences of float in the actual code (ignoring legit uses in comments and printouts).
What complicates things further is that there are some (albeit few) legit uses of float in the code payload, so we are really looking for a way to identify all places that require manual inspection, rather than run some automatic search-and-replace.
The code also contains C-style casts to (float), so relying on compiler warnings to identify type mismatches is often not an option.
The code base consists of more than 3000 (C and C++) files accumulating about 750000 lines of code.
The code is cross-platform (Linux, OSX, W32 being the main targets, but also FreeBSD and similar) and is compiled with the various native compilers (gcc/g++, clang/clang++, Visual Studio, ...).
So far...
So far, I'm using something ugly like:
grep "\bfloat\b" | sed -e 's|//.*||' -e 's|"[^"]*"||g' | grep "\bfloat\b"
but I'm thinking that there must be some better way to search only payload code.
IMHO there is a good answer to a similar question at "Unix & Linux":
grep works on pure text and does not know anything about the underlying syntax of your C program. Therefore, in order not to search inside comments you have several options:
Strip C comments before the search; you can do this using gcc -fpreprocessed -dD -E yourfile.c For details, please see Remove comments from C/C++ code.
Write/use some hacky half-working scripts like you have already found (e.g. they work by skipping lines starting with // or /*) in order to handle the details of all possible C/C++ comments (again, see the previous link for some scary test cases). Then you still may have false positives, but you do not have to preprocess anything.
Use more advanced tools for doing "semantic search" in the code. I have found "coccigrep": http://home.regit.org/software/coccigrep/ This kind of tool allows searching for specific language statements (e.g. an update of a structure with a given name), and such tools certainly drop the comments.
https://unix.stackexchange.com/a/33136/158220
Although it doesn't completely cover your "not in strings" requirement.
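For example, combining the first suggestion with the original grep (yourfile.c is a placeholder; note that the reported line numbers then refer to the comment-stripped output, not the original file):

gcc -fpreprocessed -dD -E -P yourfile.c | grep -n '\bfloat\b'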
It might practically depend upon the size of your code base, and perhaps also on the editor you are usually using. I suggest using GNU Emacs (if possible on Linux, with a recent GCC compiler...).
For a small to medium sized code base (e.g. less than 300 KLOC), I would suggest using the grep mode of Emacs. Then (assuming you have bound the next-error Emacs function to some key, perhaps with (global-set-key [f10] 'next-error) in your ~/.emacs...) you can quickly scan every occurrence of float (even inside strings or comments, but you'll skip such occurrences very quickly...). In a few hours you'll be done with a medium sized source code base (and that is quicker than learning how to use a new tool).
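One possible grep-mode invocation for this case (the --include patterns are an assumption based on the file types mentioned in the question):

M-x grep RET
grep -rnH --include='*.c' --include='*.cpp' '\bfloat\b' . RET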
For a large sized code (millions of lines), it might be worthwhile to customize some static analysis tool or compiler. You could use GCC MELT to customize your GCC compiler on Linux. Its findgimple mode could be inspirational, and perhaps even useful (you probably want to find all Gimple assignments targeting a float)
BTW, you probably don't want to replace all occurrences of the float type with double (probably suitably typedef-ed...), but only most of them, because very probably you are using some external (or standard) functions requiring a float.
The CADNA tool might also be useful to help you estimate the precision of results (and so help you decide when using double is sensible).
Using semantical tools like GCC MELT, CADNA, Coccinelle, Frama-C (or perhaps Fluctuat, or Coccigrep mentioned in g0hl1n's answer) would give more precise or relevant results, at the expense of having to spend more time (perhaps days!) in learning and customizing the tool.
The robust way to do this should be with cscope (http://cscope.sourceforge.net/) in line-oriented mode, using the "find this C symbol" option. I haven't used that across a variety of C standards though, so if it doesn't work for you, or if you can't get cscope, then do this:
find . -type f -print |
while IFS= read -r file
do
    sed 's/a/aA/g; s/__/aB/g; s/#/aC/g' "$file" |
    gcc -P -E - |
    sed 's/aC/#/g; s/aB/__/g; s/aA/a/g' |
    awk -v file="$file" -v OFS=': ' '/\<float\>/{print file, $0}'
done
The first sed makes the encoding reversible (every a becomes aA) and then replaces all hash (#) and __ symbols with unique identifier strings (aC and aB), so that the preprocessor doesn't do any expansion of #include, etc., and we can restore them after preprocessing.
The gcc preprocesses the input to strip out comments.
The second sed reverses those substitutions, restoring the hash signs, underscores, and original a characters.
The awk actually searches for float within word-boundaries and if found prints the file name plus the line it was found on. This uses GNU awk for word-boundaries \< and \>.
The 2nd sed's job COULD be done as part of the awk command but I like the symmetry of the 2 seds.
Unlike if you use cscope, this sed/gcc/sed/awk approach will NOT avoid finding false matches within strings but hopefully there's very few of those and you can weed them out while post-processing manually anyway.
It will not work for file names that contain newlines; if you have those, you can put the loop body in a script and execute it as find ... -print0 | xargs -0 script.
Modify the gcc command line by adding whatever C or C++ version you are using, e.g. -ansi.

find position of a list term in gdb

I'm wondering if it is possible, within a debugging session, to tell gdb to go over all the elements of an std::vector and print out the indexes of those that satisfy a certain condition. In my case I have a vector, and I would like to know which of the elements are negative.
I am well aware that this can be accomplished using conditional breakpoints, but for that I would have to rerun the program and place the breakpoint at the position where the vector is initialized, which is less convenient.
There is no way of doing what you are asking for with plain gdb: the debugger does not have a language in which you can run arbitrary queries. That being said, the debugger (at least in non-ancient versions) does have support for loading Python scripts that interact with your data.
You can define scripts at the gdb command line, or else you can define them in files and load them from the command line.
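As a minimal sketch of that second approach (this assumes a GDB built with Python support, and it pokes at libstdc++'s internal std::vector layout, so it is library- and version-dependent):

# negvec.py: print the indexes of the negative elements of a std::vector
import gdb

class NegIdx(gdb.Command):
    """neg_idx VEC: print indexes of negative elements of std::vector VEC."""
    def __init__(self):
        super(NegIdx, self).__init__("neg_idx", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        v = gdb.parse_and_eval(arg)
        start = v['_M_impl']['_M_start']    # libstdc++ internals
        finish = v['_M_impl']['_M_finish']
        for i in range(int(finish - start)):
            if float(start[i]) < 0:
                gdb.write("%d: %s\n" % (i, start[i]))

NegIdx()

which you would use as:

(gdb) source negvec.py
(gdb) neg_idx myvec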

Debugging: Tracing (and diff-ing) the function call tree of two versions of the same program

I'm working on the rewriting of some code in a C++ command-line program. I changed the low-level data structure that it uses, and the new version passes all the tests (quite a lot) without any problem, and I get the correct output from both the new and the old version... Still, when given certain input, they give different behaviour.

Getting to the point: this being a somewhat big project, I don't have a clue about how to track down where the execution flow diverges, so... is there a way to trace the function call tree (possibly excluding std calls) along with, I don't know, the line number in the source file and the source file name?

Maybe some gcc or macro kung fu?

I would need a Linux solution since that's where the program runs.
Still, when given certain input, they give different behaviour

I would expand the logging in your old and new versions in order to better understand how your algorithms behave for that input. When things become clearer you can, for example, use gdb if you still need it.
Update
OK, logging is fine by me, but you do not want to add it.
Another method is tracing. Actually I have used it only on Solaris, but I see that it also exists on Linux. I have not used it on Linux, so it is just an idea that you can test.
You can use SystemTap
User-Space Probing
SystemTap initially focused on kernel-space probing. However, there are many instances where user-space probing can help diagnose a problem. SystemTap 0.6 added support to allow probing user-space processes. SystemTap includes support for probing the entry into and return from a function in user-space processes, probing predefined markers in user-space code, and monitoring user-process events.
I cannot guarantee that it will work, but why not give it a try?
There is even an example in the docs:

If you want to see how the xmalloc function is being called by the command ls, you could use the user-space backtrace functions to provide that information.
stap -d /bin/ls --ldd \
-e 'probe process("ls").function("xmalloc") {print_ustack(ubacktrace())}' \
-c "ls /"

Automatically find compiler options for fastest exe on given machine?

Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable?
Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent.
Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command?
In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.
Update: Most people suggest to just try different flags and select the best ones empirically. For that method, I'd have a follow-up question: Is there a utility that lists all the compiler flags that are possible for the machine I'm running on (e.g. one that tests whether SSE instructions are available, etc.)?
I don't think you can do this at configure-time, but there is at least one program which attempts to optimize gcc option flags given a particular executable and machine. See http://www.coyotegulch.com/products/acovea/ for example.
You might be able to use this with some knowledge of your target machine(s) to find a good set of options for your code.
Um - yes. This is possible. Look into profile-guided optimization.
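For g++, a minimal profile-guided optimization cycle looks roughly like this (myprog.cpp and typical-input.dat are placeholders):

g++ -O3 -fprofile-generate myprog.cpp -o myprog
./myprog typical-input.dat    # run a representative workload to collect the profile
g++ -O3 -fprofile-use myprog.cpp -o myprog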
Some compilers provide a -fast option to automatically select the most aggressive optimization for the given compilation host, e.g. http://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler
Unfortunately, g++ does not provide a similar flag.
As a follow-up to your next question: for g++ you can use the -mtune option together with -O3, which will give you reasonably fast defaults. The challenge then is to find the processor type of your compilation host. You may want to look at the autoconf macro archive to see whether somebody has written the necessary tests. Otherwise, assuming Linux, you have to parse /proc/cpuinfo to get the processor type.
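A rough sketch of the /proc/cpuinfo route (the SIMD flag list shown is just an illustrative subset):

# print the CPU model, then check which SIMD instruction sets are advertised
grep -m1 'model name' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -Ex 'sse2|ssse3|sse4_1|sse4_2|avx'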
After some googling, I found this script: gcccpuopt.
On one of my machines (32bit), it outputs:
-march=pentium4 -mfpmath=sse
On another machine (64bit) it outputs:
$ ./gcccpuopt
Warning: The optimum *32 bit* architecture is reported
-m32 -march=core2 -mfpmath=sse
So, it's not perfect, but might be helpful.
See also -mcpu=native/-mtune=native gcc options.
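If you mainly want to know what your machine supports, gcc can also report what -march=native expands to:

gcc -Q --help=target -march=native | grep -i enabled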
Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable?
No.
You could compile your program with a large assortment of compiler options, then benchmark each and every version, then select the one that is "fastest," but that's hardly reliable and probably not useful for your program.
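If you do want to try that empirical route anyway, a crude sketch (app.cpp and input.dat are placeholders, and the flag list is just an example):

for flags in '-O2' '-O3' '-O3 -ffast-math' '-O3 -march=native'; do
    g++ $flags app.cpp -o app_test || continue
    echo "== $flags"
    time ./app_test < input.dat > /dev/null
done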
This is a solution that works for me, but it does take a little while to set up. In "Python Scripting for Computational Science" by Hans Petter Langtangen (an excellent book in my opinion), an example is given of using a short python script to do numerical experiments to determine the best compiler options for your C/Fortran/... program. This is described in Chapter 1.1.11 on "Nested Heterogeneous Data Structures".
Source code for the examples from the book is freely available at http://folk.uio.no/hpl/scripting/index.html (I'm not sure of the license, so will not reproduce any code here), and in particular you can find code for a similar numerical test in TCSE3-3rd-examples.tar.gz in the file src/app/wavesim2D/F77/compile.py, which you could use as a base for writing a script appropriate for a particular system/language (C++ in your case).
Optimizing your app is mainly your job, not the compiler's.
Here's an example of what I'm talking about.
Once you've done that, IF your app is compute-bound, with hotspots in your code (not in library code) THEN the compiler optimizations for speed will make some difference, so you can try different flag combinations.