How to debug a program instrumented by LLVM - llvm

I have instrumented a target program with an LLVM pass. The instrumented code calls a function implemented in a library, and I have the source code of this library. Now I want to debug the program and step through what the instrumented function does. What should I do?

Related

When compiling a Rust library with C++ extensions in debug mode, is the C++ code compiled with debug flags too?

I have a Rust project with a lot of C++ under the hood, built the usual way: I compile and link the C++ files with cc::Build::new() and generate individual bindings to a C API with bindgen::Builder::default().
I'm trying to understand the source of a performance degradation when I build the project with a profile that extends release but has debug = true. Two questions:
Is this profile causing the C++ library to be compiled with debug flags, and if so, what level? I would assume default?
If I wanted to use split-debuginfo (haven't yet figured out what the right way to do this is), AND if the answer to 1. is "no", how would I go about ensuring that the executable with the debug info does have debug flags for the C++ library, while the release executable does not?
Neither cc::Build::new() nor bindgen::Builder::default() reads your Cargo profile directly; they compile and generate everything the same way no matter which optimization level (or which profile, debug vs. release) you build in.
To build with debug info in the debug profile and without it in release, check the relevant environment variables that Cargo sets for build scripts (OPT_LEVEL and DEBUG specifically) and pass the corresponding flags to cc.
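A minimal build.rs sketch along those lines. The helper name cxx_flags is mine, not part of the cc crate, and the actual cc::Build calls are shown as comments so the flag-selection logic stands alone; DEBUG and OPT_LEVEL are the variable names Cargo documents for build scripts:

```rust
use std::env;

// Decide which extra C++ flags to pass, based on the environment
// variables Cargo sets for build scripts:
//   DEBUG:     "true" or "false", from the profile's `debug` setting
//   OPT_LEVEL: "0".."3", "s", or "z"
fn cxx_flags(debug: &str, opt_level: &str) -> Vec<String> {
    let mut flags = vec![format!("-O{}", opt_level)];
    if debug == "true" {
        flags.push("-g".to_string());
    }
    flags
}

fn main() {
    // In a real build.rs these are always set by Cargo; the defaults
    // here are only so the sketch runs standalone.
    let debug = env::var("DEBUG").unwrap_or_else(|_| "false".into());
    let opt = env::var("OPT_LEVEL").unwrap_or_else(|_| "0".into());
    for flag in cxx_flags(&debug, &opt) {
        // With the cc crate this would be: build.flag(&flag);
        println!("cc flag: {}", flag);
    }
}
```

This keeps the release executable free of -g while the debug-enabled profile gets it, since Cargo re-runs the build script per profile.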

Coverage run with GCC does not produce data

I have a Fortran program I would like to profile with GNU coverage. So I compiled the program with GCC 11.2 with these coverage options:
-fprofile-arcs -ftest-coverage
I also add flags to prevent the compiler from inlining the code:
-fno-inline -fno-inline-small-functions -fno-default-inline
I turned off LTO and added -lgcov to the linker flags. This setup worked well for a sample program I tested, but when I tried it on the program I'm actually interested in, it generated no *.gcno files at all. Execution, however, finished cleanly (exit code 0) and produced correct results.
My question is: how can I find where the problem is? Without an error message, I don't know where to start. It is a rather large program (~10 MB of source code); can that be a problem? It also depends heavily on MKL; can the external library be the problem? Once, when I accidentally mixed the compile-time and runtime environments, it complained about the version of libgcov.so, so something is working after all. Or do you have any other suggestions for coverage profiling?

How do I compile my OCaml code into a standalone bytecode executable?

I want to compile my OCaml project into an executable that can be run on other computers that don't have OCaml installed. Using ocamlbuild, when I compile a ".native" file it works fine on other machines, but if I compile a ".byte" file it fails with a "Cannot exec ocamlrun" message when I try to run the executable.
Since the bytecode version of my program is significantly smaller in terms of file size, I would prefer to distribute it instead of the native code. Is there a way to bundle ocamlrun into the executable when I compile it?
You need to link in custom mode. From the ocamlc user manual:
-custom
Link in "custom runtime" mode. In the default linking mode, the linker produces bytecode that is intended to be executed with the shared runtime system, ocamlrun. In the custom runtime mode, the linker produces an output file that contains both the runtime system and the bytecode for the program. The resulting file is larger, but it can be executed directly, even if the ocamlrun command is not installed. Moreover, the "custom runtime" mode enables static linking of OCaml code with user-defined C functions, as described in the chapter on interfacing C with OCaml.
Unix: Never use the strip command on executables produced by ocamlc -custom; this would remove the bytecode part of the executable.
If you're using oasis, all you need is to add a Custom: true field to your executable section. Similarly, for ocamlbuild, add -tag custom or put custom in your _tags file.

LLVM for parsing math expressions

I have some trouble wrapping my head around what LLVM actually does...
Am I right to assume that it could be used to parse mathematical expressions at runtime in a C++ program?
Right now, at runtime, I take the math expressions, build a C program out of them, compile it on the fly with a system call to gcc, then dynamically load the .so produced by gcc and extract my eval function...
I'd like to replace this workflow with something simpler, and maybe even faster...
Can LLVM help me out? Any resources out there to get me started?
You're describing using LLVM as a JIT compiler, which is absolutely possible. If you generate LLVM IR (in memory) and hand it off to the library, it will generate machine code for you (also in memory). You can then run that code however you like.
If you want to generate LLVM IR from C code, you can also link clang as a library.
There is a PDF linked from a related answer with some examples of how to use LLVM as a JIT.

binary generation from LLVM

How does one generate executable binaries from the C++ side of LLVM?
I'm currently writing a toy compiler, and I'm not quite sure how to do the final step of creating an executable from the IR.
The only solution I currently see is to write out the bitcode and then call llc via system or the like. Is there a way to do this from the C++ interface instead?
This seems like it would be a common question, but I can't find anything on it.
LLVM does not ship the linker necessary to perform this task. It can only write out assembly and then invoke the system linker to deal with it. You can look at the source code of llvm-ld to see how it's done.