What is the difference between CallInst, InvokeInst and CallSite in LLVM, and how are they used? - llvm

I am reading the LLVM Programmer's Manual but I am confused by the terms CallInst, InvokeInst and CallSite and their usage. Can somebody explain them in detail, with examples of their usage and function in LLVM?
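The distinction is easiest to see in the IR itself. CallInst is the C++ class representing a plain `call` instruction; InvokeInst represents `invoke`, a call with explicit normal and unwind successor blocks; CallSite was a lightweight wrapper in the C++ API that let a pass handle either kind of call site uniformly (in newer LLVM versions it has been superseded by the CallBase class). A minimal hand-written IR sketch, not taken from the manual:

```llvm
; A CallInst corresponds to a plain `call`: control falls through
; to the next instruction after the callee returns.
define i32 @caller() {
entry:
  %r = call i32 @callee(i32 7)
  ret i32 %r
}

; An InvokeInst corresponds to `invoke`: the call is a terminator with
; two successors, one for normal return and one for exception unwinding.
define i32 @caller_eh() personality ptr @__gxx_personality_v0 {
entry:
  %r = invoke i32 @callee(i32 7)
          to label %normal unwind label %lpad
normal:
  ret i32 %r
lpad:
  %lp = landingpad { ptr, i32 } cleanup
  resume { ptr, i32 } %lp
}

declare i32 @callee(i32)
declare i32 @__gxx_personality_v0(...)
```

A pass that only cares about "some function gets called here with these arguments" would use the wrapper (CallSite, or CallBase in current LLVM) so the same code handles both `call` and `invoke`.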

Related

ASM: Optimization of Pass-by-value vs Pass-by-reference in leading-edge compilers?

(This question is about compiler optimization assembly, not about the use of pointers in code.)
I'm trying to determine if I can take advantage of optimizations built into Clang or GCC in regards to whether the variable is passed by value or reference.
For some background, I'm working on a transliterator from my own language to C++ that does some level of preprocessing of the code. It occurred to me that it should be possible to determine at compile time whether a variable is more efficiently passed around by reference or by value. It would mean automatically writing different versions of functions, etc., but that is all doable.
I suspected that the compilers do this already to some extent. The question is: to what extent?
I've noticed by playing with Godbolt's Compiler Explorer that on higher optimization levels the resulting ASM is nothing like the code. I'm beginning to realize that many best practices and optimizations that are typically done in code no longer matter because the compilers are so damn efficient. Does this expand to pass-by-reference vs. pass-by-value?
And the big question: What if I just made everything pass-by-value with no thought for memory-efficiency or copy speed, and just let the compiler decide... would it correctly optimize?
If so I could design my language so the coder only needs to consider whether a write is done to the parent variable or results in a copy, and the whole world of reference and pointers can be delegated to the compiler. It's a completely different way of thinking.
My initial testing on Compiler Explorer with Clang (widberg) shows a difference in the ASM begins at optimization level -O2 (same ASM before that). You can see the code and ASM here:
Small Struct on Godbolt
Larger Struct on Godbolt
I've been trying to work it out, but I don't know enough assembly to figure out if I've found what I'm looking for.
Is it passing by value in the small struct version and by reference in the larger version?

When should I use "volatile" for LLVM IR?

In which scenarios should I care about it/place it for LLVM?
I have read the following doc but need more detailed examples, if anyone could provide them. What does volatile mean exactly in LLVM?
Citation: Volatile Memory Accesses, where "volatile" is defined in LLVM.
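A minimal IR sketch of where `volatile` matters (hand-written, not from the cited doc): marking loads and stores volatile tells LLVM it may not delete, duplicate, or reorder them relative to other volatile operations, which is what you want when the address refers to something like a memory-mapped device register.

```llvm
define i32 @read_mmio(ptr %reg) {
entry:
  ; Without `volatile`, the optimizer could fold these two identical
  ; loads into one. With it, both reads are preserved, which matters
  ; if reading %reg has side effects (e.g. a hardware status register).
  %a = load volatile i32, ptr %reg
  %b = load volatile i32, ptr %reg
  %sum = add i32 %a, %b
  ret i32 %sum
}
```

Note that volatile in LLVM is about preserving the operations themselves; it does not by itself provide atomicity or cross-thread ordering.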

Best way to visualize CFG of a broken LLVM function

I need to visualize the CFG of an LLVM Function, which I have in a .ll file. There is the opt tool, which has the --view-cfg option. However, the problem is that the function is broken: the definition of a register does not dominate all its uses. I need to view the CFG to investigate why this is the case. Problem: opt does not take wrong LLVM functions, so I cannot view the CFG with it.
So, what is the best way to visualize the CFG of a broken LLVM function?
Problem: opt does not take wrong LLVM functions, so I cannot view the CFG with it.
That's not actually the case. The verifier is turned on by default, yes, but if the function in question is syntactically correct, then you can just turn it off:
$ opt -disable-verify -view-cfg foo.ll
You can even try to compile it with llc, run it with lli, etc., this way.

Role of compiler for Virtual functions in C++

Recently I attended an interview. The interviewer asked me to explain virtual function mechanism in C++. I explained using VPTR and VTABLE. I explained in detail how VPTR and VTABLE are used to achieve run time polymorphism.
I explained how the compiler introduces hidden code to fetch the VPTR from the object, get the function address from the VTABLE, and resolve the call. But he was not satisfied with the answer. He asked for details of the hidden code: what exactly does a compiler do? And if a compiler is doing everything for you, then what is the use of a developer?
I searched for details of the compiler's role in implementing virtual functions, and of the hidden code, but I am still not clear about it.
Please, any help or pointers?
If a compiler is doing everything for you then what is the use of developer?
Developers are there to specify their intentions, and compilers are there to transform those intentions into executables. As time passes, computers get faster and compilers smarter, so there is no need to express a developer's intentions in assembly code; they can be expressed in Erlang, F#, Prolog, whatever.
In other words, it's interesting to know details of the code produced by C++ compilers, but it is not the core of C++ development.
Finally, to answer the quoted question:
Compilers are not doing everything yet. Unfortunately.
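As for the "hidden code" itself: it can be sketched by hand. The following is only an illustration of the usual mechanism, not what any particular compiler emits; real vtable layouts are ABI-specific, and every name here (VTable, vptr, call_speak, etc.) is made up for the sketch.

```cpp
#include <cassert>
#include <cstring>

struct Animal;

// The "vtable": one function-pointer slot per virtual function.
struct VTable {
    const char* (*speak)(Animal*);
};

// The "class" as the compiler sees it: a hidden vptr plus the fields.
struct Animal {
    const VTable* vptr;   // hidden pointer the compiler adds
    // ... data members would follow
};

// The overriders, as free functions taking the object pointer.
const char* dog_speak(Animal*) { return "Woof"; }
const char* cat_speak(Animal*) { return "Meow"; }

// One vtable instance per class, filled with that class's overriders.
const VTable dog_vtable{ dog_speak };
const VTable cat_vtable{ cat_speak };

// A virtual call `a->speak()` is compiled into roughly this:
// fetch the vptr, index the vtable, make an indirect call.
const char* call_speak(Animal* a) {
    return a->vptr->speak(a);
}
```

Constructing `Animal dog{ &dog_vtable };` and calling `call_speak(&dog)` then dispatches to `dog_speak` at run time, which is the run-time polymorphism the compiler's hidden code provides.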

Learning to read GCC assembler output

I'm considering picking up some very rudimentary understanding of assembly. My current goal is simple: VERY BASIC understanding of GCC assembler output when compiling C/C++ with the -S switch for x86/x86-64.
Just enough to do simple things such as looking at a single function and verifying whether GCC optimizes away things I expect to disappear.
Does anyone have/know of a truly concise introduction to assembly, relevant to GCC and specifically for the purpose of reading, and a list of the most important instructions anyone casually reading assembly should know?
You should use GCC's -fverbose-asm option. It makes the compiler output additional information (in the form of comments) that make it easier to understand the assembly code's relationship to the original C/C++ code.
If you're using gcc or clang, the -masm=intel argument tells the compiler to generate assembly with Intel syntax rather than AT&T syntax, and the --save-temps argument tells the compiler to save temporary files (preprocessed source, assembly output, unlinked object file) in the directory GCC is called from.
Getting a superficial understanding of x86 assembly should be easy with all the resources out there. Here's one such resource: http://www.cs.virginia.edu/~evans/cs216/guides/x86.html .
You can also just use a disassembler (objdump, for example) and gdb to see what a compiled program is doing.
I usually hunt down the processor documentation when faced with a new device, and then just look up the opcodes as I encounter ones I don't know.
On Intel, thankfully the opcodes are somewhat sensible. PowerPC not so much in my opinion. MIPS was my favorite. For MIPS I borrowed my neighbor's little reference book, and for PPC I had some IBM documentation in a PDF that was handy to search through. (And for Intel, mostly I guess and then watch the registers to make sure I'm guessing right! heh)
Basically, the assembly itself is easy. It basically does three things: move data between memory and registers, operate on data in registers, and change the program counter. Mapping between your language of choice and the assembly will require some study (e.g. learning how to recognize a virtual function call), and for this an "integrated" source and disassembly view (like you can get in Visual Studio) is very useful.
"casually reading assembly" lol (nicely)
I would start by following along in gdb at run time; you get a better feel for what's happening. But then maybe that's just me. It will disassemble a function for you (disass func), and then you can single-step through it.
If you are doing this solely to check the optimizations - do not worry.
a) the compiler does a good job
b) you won't be able to understand what it is doing anyway (nobody can)
Unlike higher-level languages, there's really not much (if any) difference between being able to read assembly and being able to write it. Instructions have a one-to-one relationship with CPU opcodes -- there's no complexity to skip over while still retaining an understanding of what the line of code does. (It's not like a higher-level language where you can see a line that says "print $var" and not need to know or care about how it goes about outputting it to screen.)
If you still want to learn assembly, try the book Assembly Language Step-by-Step: Programming with Linux, by Jeff Duntemann.
I'm sure there are introductory books and web sites out there, but a pretty efficient way of learning it is actually to get the Intel references and then try to do simple stuff (like integer math and Boolean logic) in your favorite high-level language and then look what the resulting binary code is.