C++ Native to Intermediate - c++

Is it theoretically and/or practically possible to compile native C++ to some sort of intermediate language which will then be compiled at run time?
Along the same lines, is "portable" the term used to denote this?

LLVM, which is a compiler infrastructure, parses C++ code and transforms it into an intermediate language called LLVM IR (IR stands for Intermediate Representation), which looks like a high-level assembly language. It is a machine-independent language. Generating IR is one phase. In the next phase, the IR passes through various optimizers (called passes). The third phase then emits machine code (i.e., machine-dependent code).
It is a modular design; the output of one phase becomes the input of the next. You could save the IR to disk, so that the remaining phases can resume later, maybe on an entirely different machine!
So could you generate IR and then do the rest at run time? I've not done that myself, but LLVM seems really promising.
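A minimal sketch of that workflow, assuming a recent Clang/LLVM toolchain is installed (the file names are made up):

    // hello.cpp -- ordinary C++ source
    #include <cstdio>

    int main() {
        std::puts("hello from LLVM IR");
        return 0;
    }

    // Emit machine-independent textual IR instead of an object file:
    //   clang++ -S -emit-llvm hello.cpp -o hello.ll
    // Later -- in principle on another machine with a compatible data
    // layout -- JIT-compile and run the IR directly:
    //   lli hello.ll
    // ...or finish the pipeline ahead of time and emit native assembly:
    //   llc hello.ll -o hello.s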
Here is the documentation of LLVM IR:
LLVM Language Reference Manual
This topic on Stack Overflow seems interesting, as it says:
LLVM advantages:
JIT - you can compile and run your code dynamically.
And these articles are good read:
The Design of LLVM (on drdobbs.com)
Create a working compiler with the LLVM framework, Part 1

How many ASTs does an LLVM program generate?

I am reading LLVM's compiler-writing guide:
https://llvm.org/docs/tutorial/LangImpl02.html
In that guide, they are using a simple language called "Kaleidoscope" as an example. Before reading that guide, I was under the impression that a single AST is generated for every program (I assume that the program is written in a single file and hence no linking is necessary). But it seems that LLVM creates a separate AST for every line (or, to be more precise, for every construct). Hence, for a single program, LLVM can create hundreds of separate ASTs. Is this interpretation correct?
First of all, note that this chapter doesn't really have much to do with LLVM. It's just explaining how to write a parser and an AST for the language. It does not use any code from the LLVM library¹ and wouldn't look any different in a project that did not use LLVM at all². The LLVM-specific part only comes later, when you translate the AST to LLVM IR. So if anything, it's not that LLVM generates "multiple ASTs"; it's that the code from the tutorial generates "multiple ASTs".
So is it accurate to say that the code generates multiple ASTs? Kind of - it all depends on what exactly you mean by that.
Like any tree, an AST consists of multiple subtrees. Each subtree is itself a valid tree. So you could say that every non-trivial tree is in fact a collection of multiple trees, and this would apply to the AST in the tutorial as well.
However, it's important to note that all of the subtrees are part of the larger tree. It is not true that the code creates multiple trees that aren't connected to each other, if that's what you were thinking.
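To make the "one tree made of subtrees" point concrete, here is a stripped-down sketch of expression nodes in the spirit of the Kaleidoscope tutorial (the names are illustrative, not the tutorial's exact code):

    #include <memory>
    #include <utility>

    // Base class for all expression nodes.
    struct ExprAST {
        virtual ~ExprAST() = default;
    };

    // Leaf node: a numeric literal such as 2 or 3.
    struct NumberExprAST : ExprAST {
        double Val;
        explicit NumberExprAST(double V) : Val(V) {}
    };

    // Interior node: a binary operator whose operands are themselves trees.
    struct BinaryExprAST : ExprAST {
        char Op;
        std::unique_ptr<ExprAST> LHS, RHS;
        BinaryExprAST(char Op, std::unique_ptr<ExprAST> L,
                      std::unique_ptr<ExprAST> R)
            : Op(Op), LHS(std::move(L)), RHS(std::move(R)) {}
    };

    // "1 + 2 * 3" parses into ONE tree: a '+' node whose right child is
    // itself a '*' subtree -- many ExprAST objects, one connected AST.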
¹ Other than llvm::make_unique, but that could just as well be replaced with std::make_unique if your compiler supports C++14 or your own implementation if it doesn't.
² On a similar note, it is also perfectly possible to write an LLVM-based compiler by generating the LLVM IR directly in the parser and not creating any ASTs at all. Whether and how you generate your ASTs is entirely independent of LLVM.

How does the C++ compiler know which CPU architecture is being used

With reference to: http://www.cplusplus.com/articles/2v07M4Gy/
During the compilation phase,
This phase translates the program into low-level assembly code. The compiler takes the preprocessed file (without any directives) and generates an object file containing assembly-level code. The object file created is in binary form; each entry in it describes one low-level machine instruction.
Now, if I am correct, different CPU architectures work with different assembly languages/syntaxes.
My question is: how does the compiler know which assembly-language syntax the source code has to be translated into? In other words, how does the C++ compiler know which CPU architecture is in the machine it is working on?
Is there some mapping used by the assembler with respect to the CPU architecture for generating assembly code for different CPU architectures?
N.B.: I am a beginner!
Each compiler needs to be "ported" to the given system. For each system supported, a "compiler port" needs to be programmed by someone who knows the system in depth.
WARNING: This is extremely simplified.
In short, there are three main parts to a compiler:
"Front-end" : This part reads the language (in this case c++) and converts it to a sort of pseudo-code specific to the compiler. (An Abstract Syntactic Tree, or AST)
"Optimizer/Middle-end" : This part takes the AST and make a non-architecture-dependant optimized one.
"Back-end" : This part takes the AST, and converts it to binary executable code, specific to the architecture you want to compile your language on.
When you download a C++ compiler for your platform, you in fact download, for example, the C++ front-end with the linux-amd64 back-end.
This architecture is extremely helpful, because it allows porting the compiler to another architecture without rewriting the whole parsing/optimizing machinery. It also allows someone to create another optimizer, or even another front-end supporting a whole different language; as long as it outputs a correct AST, it will be compatible with every single back-end ever written for this compiler.
Simply put, the knowledge of the target system is coded into the compiler.
So you might have a C compiler that generates SPARC binaries, and a C compiler that generates VAX binaries. They both accept the same input language (as defined in the C standard), but produce different programs from it.
Often we just refer to "the compiler", meaning the one that will generate binaries for our current environment.
In modern times, the distinction has become less obvious with compiler collections such as GCC. Now the "different compilers" are often the same compiler program, just set up with different configurations (these are the "target description files").
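You can watch those parts work separately with an ordinary toolchain. A minimal sketch using Clang (the file name is made up):

    // square.cpp
    int square(int x) { return x * x; }

    // Front-end only: parse and type-check, produce no output.
    //   clang++ -fsyntax-only square.cpp
    // Stop after the middle-end: emit the machine-independent IR.
    //   clang++ -S -emit-llvm square.cpp -o square.ll
    // Run the back-end too: emit assembly for the host architecture.
    //   clang++ -S square.cpp -o square.s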
Just to complete the answers given here:
The target architecture is indeed coded into the specific compiler instance you're using. This is important also for a process called "cross-compiling" - the process of compiling, on a certain system, an executable that will run on another system/architecture.
Consider working on an embedded system-on-chip that uses a completely different instruction set than your own - you're working on an x86-64 Linux system, but need to compile a mobile app running on an ARM microprocessor, or some other architecture.
It would be unreasonable to compile your code on the target system, which might be so limited in CPU and memory that it can't feasibly run a compiler - and so you can use a GCC (or any other compiler) port for that target architecture on your favorite system.
It's also quite critical to remember that the entire toolchain often has to be compatible with the target system, for instance when shared libraries such as libc come into play - the target OS could be a different release of Linux with different versions of common functions - in which case it's common to use toolchains that contain all the necessary libraries, and to use something like chroot or mock to compile in the "target environment" from within your system.
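Clang makes the "target is just configuration" point very visible, since it ships many back-ends in a single binary. A sketch (producing a linked executable would additionally require the target's sysroot and libraries):

    // main.cpp -- the same source, two instruction sets
    int main() { return 0; }

    // Object code for a 64-bit ARM Linux target:
    //   clang++ --target=aarch64-linux-gnu -c main.cpp -o main_arm64.o
    // Object code for a 64-bit x86 Linux target:
    //   clang++ --target=x86_64-linux-gnu -c main.cpp -o main_x86_64.o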

What type of assembly do C++ compilers use?

So as I understand it, C++ code is comprised of assembly code, and when I compile a program it is read as its assembly equivalent and then run by the compiler. I also understand that assembly syntax and features change from model to model of processor. If this is so, how do compilers manage to compile programs without being littered with bugs? I mean, it can't be possible for a compiler to hold every assembly language variant ever created, can it?
I think you're confusing assembly code with machine code. It's not the same. Machine code is what the CPU executes - a byte stream of instructions and data. Assembly is a human readable representation of machine code.
It's indeed true that all C++ code is compiled into machine code, eventually. Yes, the instruction set changes between CPUs and CPU versions. Compilers have the notion of a "target architecture" - when you compile, you have the option of specifying one. If you don't, the architecture of the current machine is usually assumed. Yes, compiler vendors have to expend effort to support every flavor of CPU that they intend to support. Fortunately, there are not that many. Besides, in the C++ compilation process, code generation is not even the most complex step, so the majority of a compiler's own code is not CPU-specific.
Some compilers work via assembly - rather than generating machine code, they generate assembly and feed that to an assembler for the final stage of compilation. With that kind of design, your compiler normally assumes a certain flavor of assembler to be present on the system - typically GNU assembler (as).
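To see the distinction concretely, here is one function in both representations (illustrative x86-64 output; exact bytes and mnemonics vary with compiler and flags):

    // return42.cpp
    int return42() { return 42; }

    // Assembly: human-readable text, roughly what "g++ -S return42.cpp" prints:
    //   movl $42, %eax
    //   ret
    // Machine code: the bytes that actually land in the object file ("g++ -c"):
    //   b8 2a 00 00 00   c3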
I think you've misunderstood the meaning of "assembly code".
C++ code does not "consist" of assembly code; it consists of, well, C++ code.
A compiler translates this C++ code, ultimately into executable machine code that can be run on a computer (usually under the direction of an operating system).
Assembly code is a human-readable symbolic representation of machine code. Typically a line of assembly code corresponds to a single CPU instruction of machine code. Assembly is a much lower-level language than C++ (or even C).
Some C++ compilers generate assembly code as an intermediate step; the assembly code is then translated into executable machine code. Other C++ compilers skip that step and generate machine code directly (though they may have an option to produce a human-readable assembly listing).
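With GCC you can even drive that intermediate step by hand (a sketch; normally gcc does this invisibly through a temporary file):

    // twice.cpp
    int twice(int x) { return 2 * x; }

    // 1. Compiler proper: C++ source -> assembly text.
    //   g++ -S twice.cpp -o twice.s
    // 2. Assembler: assembly text -> machine code in an object file.
    //   as twice.s -o twice.o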
Typically each compiler accepts input in a single high-level language (C, C++, etc.) and generates output for one CPU (x86, ARM, MIPS, etc). Compilers are commonly designed in phases, so that the portion that processes the high-level input language can be combined with the portion that generates machine-specific code. gcc is designed this way. There are front ends that process a number of different input languages, and code generators that generate code for different CPUs. Thus if you already have an Ada front end and a MIPS back end, it's not too difficult to join them together to create an Ada compiler that generates MIPS machine code.
As for how compilers manage to do this without being "littered with bugs" - well, it's just a lot of work.

Pin-like tool for compile-time injection of instrumentation code

As you might know, Pin is a dynamic binary instrumentation tool. By using Pin, for example, I can instrument every load and store in my application. I was wondering if there is a similar tool which injects code at compile time (using a higher level of information, without requiring us to write an LLVM pass), rather than at run time like Pin. I am especially interested in such a tool for LLVM.
You could write LLVM passes of your own and apply them to your code to "instrument" it at compile time. These work on LLVM IR and produce LLVM IR, so for some tasks this will be a very natural thing to do, and for other tasks it might be cumbersome or difficult (because of the differences between LLVM IR and the source language). It depends.
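As a sketch of what such a pass can look like (written against LLVM's newer pass-manager API; headers and registration details vary between LLVM versions), here is a function pass that merely counts the loads and stores it sees - real instrumentation would insert calls at those points instead:

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/PassManager.h"
    #include "llvm/Support/raw_ostream.h"

    // Reports how many load/store instructions each function contains.
    struct CountMemOpsPass : llvm::PassInfoMixin<CountMemOpsPass> {
        llvm::PreservedAnalyses run(llvm::Function &F,
                                    llvm::FunctionAnalysisManager &) {
            unsigned Loads = 0, Stores = 0;
            for (llvm::BasicBlock &BB : F)
                for (llvm::Instruction &I : BB) {
                    if (llvm::isa<llvm::LoadInst>(I))  ++Loads;
                    if (llvm::isa<llvm::StoreInst>(I)) ++Stores;
                }
            llvm::errs() << F.getName() << ": " << Loads << " loads, "
                         << Stores << " stores\n";
            // The pass only reads the IR, so all analyses stay valid.
            return llvm::PreservedAnalyses::all();
        }
    };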

LLVM: what is it and how can I use it for cross-platform compilation

I was reading here and there about LLVM, which can be used to ease the pain of cross-platform compilation in C++. I was trying to read the documents, but I didn't understand how to use it for real-life development problems. Can someone please explain to me in simple words how I can use it?
The key concept of LLVM is a low-level "intermediate" representation (IR) of your program.
This IR is at about the level of assembler code, but it contains more information to facilitate optimization.
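For a feel of that level, here is a trivial C++ function next to (roughly) the textual IR Clang emits for it; the exact names in real output differ:

    // add.cpp
    int add(int a, int b) { return a + b; }

    // clang++ -S -emit-llvm add.cpp produces IR along these lines:
    //   define i32 @_Z3addii(i32 %a, i32 %b) {
    //     %sum = add nsw i32 %a, %b
    //     ret i32 %sum
    //   }
    // Typed values, named functions, explicit control flow: assembler-like,
    // yet still machine-independent.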
The power of LLVM comes from its ability to defer compilation of this intermediate representation to a specific target machine until just before the code needs to run. A just-in-time (JIT) compilation approach can be used for an application to produce the code it needs just before it needs it.
In many cases, you have more information at the time the program is running than you do back at head office, so the program can be optimized much more aggressively.
To get started, you could compile a C++ program to a single intermediate representation, then compile it to multiple platforms from that IR.
You can also try the Kaleidoscope demo, which walks you through creating a new language without having to write a full compiler back-end; you just emit the IR.
In performance-critical applications, the application can essentially write its own code that it needs to run, just before it needs to run it.
Why don't you go to the LLVM website and check out all the documentation there? They explain in great detail what LLVM is and how to use it. For example, they have a Getting Started page.
LLVM is, as its name says, a low-level virtual machine, which has a code generator. If you want to compile to it, you can use either the GCC front end or Clang, the C/C++ compiler for LLVM, which is still a work in progress.
It's important to note that a bunch of information about the target comes from the system header files that you use when compiling. LLVM does not defer resolving things like "size of pointer" or "byte layout", so if you compile with 64-bit headers for a little-endian platform, you cannot use that LLVM IR to target a 32-bit big-endian platform later.
There is a good chapter in a book explaining everything nicely here: www.aosabook.org/en/llvm.html