Intermediate code from C++

I want to compile a C++ program to an intermediate code. Then, I want to compile the intermediate code for the current processor with all of its resources.
The first step is to compile the C++ program with optimizations (-O2), run the linker and do most of the compilation procedure. This step must be independent of operating system and architecture.
The second step is to compile the result of the first step, without the original source code, for the operating system and processor of the current computer, with optimizations and special instructions of the processor (-march=native). The second step should be fast and with minimal software requirements.
Can I do it? How to do it?
Edit:
I want to do this because I want to distribute a platform-independent program that can use all of the processor's resources, without shipping the original source code, instead of distributing a separate build for each platform and operating system. It would be good if the second step were fast and easy.
Processors of the same architecture may have different features. x86 processors may have SSE1, SSE2 or other extensions, and they can be 32- or 64-bit. If I compile for a generic x86, the build will lack the SSE optimizations. After many years, processors will have new features, and the program will need to be compiled for those new processors.
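To make that feature spread concrete, here is a small sketch using the __builtin_cpu_init / __builtin_cpu_supports built-ins of GCC and Clang (just an illustration of how much the same "x86" can differ at run time, not the mechanism I am asking for):

#include <cstdio>

int main() {
    __builtin_cpu_init();  // populate the compiler's CPU feature data
    std::printf("sse2:   %d\n", __builtin_cpu_supports("sse2") != 0);
    std::printf("sse4.2: %d\n", __builtin_cpu_supports("sse4.2") != 0);
    std::printf("avx2:   %d\n", __builtin_cpu_supports("avx2") != 0);
    return 0;
}

A generic x86 build can only assume the lowest common denominator of such a report, which is exactly why a second, machine-local compilation step with -march=native is attractive.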

Just a suggestion - google clang and LLVM.

How much do you know about compilers? You seem to treat "-O2" as some magical flag.
For instance, register assignment is a typical optimization. You definitely need to know how many registers are available. There's no point in assigning foo to register 16, only to discover in phase 2 that you're targeting an x86.
And those architecture-dependent optimizations can be quite complex. Inlining depends critically on call cost, and that in turn depends on architecture.

Once you get to "processor-specific" optimizations, things get really tricky. It's tough for a platform-specific compiler to be truly "generic" in its generation of object or "intermediate" code at an appropriate level: unless it emits something like IL (intermediate language) code, such as C# IL or Java bytecode, a given compiler has no clear notion of "where to stop", since optimizations occur all over the place, at different levels of the compilation, once target-platform knowledge exists.
Another thought: What about compiling to "preprocessed" source code, typically with a "*.i" extension, and then compile in a distributed manner on different architectures?
For example, most (all) the C and C++ compilers support something like:
cc /P MyFile.cpp
gcc -E MyFile.cpp
...each generates MyFile.i, which is the preprocessed file. Now that the file has included ALL the headers and other #defines, you can compile that *.i file to the target object file (or executable) after distributing it to other systems. (You might need to get clever if your preprocessor macros are specific to the target platform, but it should be quite straightforward with your build system, which should generate the command line to do this pre-processing.)
This is the approach used by distcc to preprocess the file locally, so remote "build farms" need not have any headers or other packages installed. (You are guaranteed to get the same build product, no matter how the machines in the build farm are configured.)
Thus, it would similarly have the effect of centralizing the "configuration/pre-processing" for a single machine, but provide cross-compiling, platform-specific compiling, or build-farm support in a distributed manner.
FYI -- I really like the distcc concept, but the last update for that particular project was in 2008. So, I'd be interested in other similar tools/products if you find them. (In the mean time, I'm writing a similar tool.)

Related

C++ Versions, do they auto-detect the version of an exe?

Okay, so I know that there are multiple C++ versions. I don't really know much about the differences between them, but my question is:
Let's say I made a C++ application in C++11 and sent it off to another computer. Would it come up with errors from other versions of C++, or will it automatically detect the version and run with it? Or am I getting this wrong and is it defined at compile time? Someone please tell me, because I have yet to find a single answer to my question on Google.
It depends if you copy the source code to the other machine, and compile it there, or if you compile it on your machine and send the resulting binary to the other computer.
C++ is translated by the compiler to machine code, which runs directly on the processor. Any computer with a compatible processor will understand the machine code, but there is more than that. The program needs to interface with a filesystem, graphic adapters, and so on. This part is typically handled by the operating system, in different ways of course. Even if some of this is abstracted by C++ libraries, the calls to the operating system are different, and specific to it.
A compiled binary for ubuntu will not run on windows, for example, even if both computers have the same processor and hardware.
If you copy the source code to the other machine, and compile it there (or use a cross-compiler), your program should compile and run fine, if you don't use OS-specific features.
The C++ version does matter for compilation, you need a C++11 capable compiler of course if you have C++11 source code, but once the program is compiled, it does not matter any more.
C++ is compiled to machine code, which is then runnable on any computer having that architecture e.g. i386 or x64 (putting processor features like SSE etc. aside).
For Java, to give a counterexample, it is different. There the code is compiled to a bytecode format that is machine independent. This bytecode format is read and understood by the Java Virtual Machine (JVM). The JVM then has to be available for your architecture, and the correct version has to be installed.
Or am I getting this wrong and is it defined at compile time?
This is precisely the idea: The code is compiled, and after that the language version is almost irrelevant. The only possible pitfall would be if a newer C++ version would include a breaking change to the standard C++ library (the library, not the language itself!). However, since the vast majority of that library is template code, it's compiled along with your own code anyway. It's basically baked into your .exe file along with your own code, so it's just as portable as yours. Also, both the C and C++ designers take great care not to break old code; so you can expect even those parts that are provided by the system itself (the standard C library) not to break anything.
So, even though there are things that could break in theory, pure C++ code should run fine on all machines that understand the same .exe format as the machine it was compiled on.

C++ systems not using "source files"

In The C++ Programming Language (4th edition), §15.1, Stroustrup states:
A file is the traditional unit of storage (in a file system) and the traditional unit of compilation. There are systems that do not store, compile, and present C++ programs to the programmer as sets of files.
Sadly, he doesn't give further information. Do you know any example of such systems?
EDIT:
I mean: do you know of any actual free, commercial, open-source, or other C++ implementation that doesn't deal with files as we are accustomed to?
And I was wondering: why do such systems exist? What's the point? What are the advantages of such a design philosophy? What are the drawbacks?
IIRC, in the 1980s IBM Visual Age C++ stored the program source code (or perhaps a faithful representation of its AST) in some proprietary database. (It is rumored that header files also sat in some database at that time.)
And current C++ compilers are often able to get the source code from a generated file, or even from some pipe. For instance, on my Linux machine I could have a program mygenerator generating some C++ code on its stdout and invoke the GCC compiler as:
mygenerator | g++ -x c++ /dev/stdin -Wall -O -o myprogram
However, today, most compilers generally compile source files and header files from some file system. Notice that an optimizing compiler spends much more time in the compilation proper than in disk I/O, and you could use some tmpfs file system, so file read and write time is negligible when compiling C++ code (even parsing is often quicker than optimization and code generation).
So I know of no C++ compiler used in 2015 that compiles and optimizes source code outside of source files.
Actually, generating C++ code is often a good idea (I'm doing it in MELT, which enables you to customize GCC), but usually you tweak your build procedure (e.g. your Makefile) to generate then compile some temporary C++ files. With current computers and operating systems and compilers (e.g. Linux & GCC) you could even generate some temporary C++ file, fork a compilation of it into a shared object plugin, and dlopen(3) it.
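For instance, a minimal sketch of that generate-compile-dlopen(3) cycle on Linux (the /tmp file names and the generated answer function are invented for illustration; link the host program with -ldl on older glibc):

#include <dlfcn.h>     // dlopen, dlsym, dlclose
#include <cstdlib>     // std::system
#include <fstream>
#include <iostream>

int main() {
    // Emit some temporary C++ code (contents invented for the example).
    std::ofstream("/tmp/gen.cc") << "extern \"C\" int answer() { return 42; }\n";
    // Compile it into a shared-object plugin.
    if (std::system("g++ -shared -fPIC -O2 /tmp/gen.cc -o /tmp/gen.so") != 0)
        return 1;
    // dlopen(3) the plugin and look up the generated function.
    void *handle = dlopen("/tmp/gen.so", RTLD_NOW);
    if (!handle) { std::cerr << dlerror() << '\n'; return 1; }
    auto answer = reinterpret_cast<int (*)()>(dlsym(handle, "answer"));
    std::cout << answer() << '\n';   // prints 42
    dlclose(handle);
}

In a real build you would of course let your Makefile drive the compilation instead of std::system.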
A possible reason to store the source code in something better than a file -e.g. some database- would be to make an incremental compiler, which would recompile only one function if it was the only modification from the previous compilation. In practice, this is difficult to implement in existing compilers (it has been discussed, and sort-of prototyped, within the GCC community, but nothing stable came out of this). But C++ or C is not the best language for such an approach (Common Lisp is much better, and SBCL is able to compile and optimize in memory and incrementally), in particular because of its preprocessor.
BTW, tinycc is able to compile C code sitting inside some const char* string in memory, but the performance of the generated machine code is bad (since tcc does not do any kind of serious optimizations, that current processors need so much).
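For illustration, a rough sketch of that libtcc in-memory route (assuming the classic tcc 0.9.2x API; the signature of the relocation call changed in later releases, so check your libtcc.h; link with -ltcc):

#include <libtcc.h>
#include <cstdio>

int main() {
    const char *src = "int add(int a, int b) { return a + b; }";
    TCCState *s = tcc_new();
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);             // generate code in memory
    if (tcc_compile_string(s, src) == -1) return 1;        // compile the C string
    if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0) return 1;  // let tcc allocate executable memory
    auto add = (int (*)(int, int))tcc_get_symbol(s, "add");
    std::printf("%d\n", add(2, 3));                        // prints 5
    tcc_delete(s);
}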
Notice also that with link time optimizations (e.g. compile and link with g++ -flto -O2) compilers are keeping some form of the AST (actually the Gimple representation of GCC) in object files.
C++ source code can be stored in a database in various ways.

Is it possible to compile a C/C++ source code that executes in all Linux distributions without recompilation?

Is it possible to compile a C/C++ source code that executes in all Linux distributions without recompilation?
If the answer is yes, can I use any external (non-standard C/C++) libraries?
I want to distribute my binary application instead of distributing the source code.
No, you can't compile an executable that executes in all Linux distributions. However, you can compile an executable that works on most distributions that people will tend to care about.
Compile 32-bit. Compile for the minimum CPU level you're willing to support.
Build your own version of glibc. Use the --enable-kernel option to set the minimum kernel version you're willing to support.
Compile all other libraries you plan to use yourself. Use the headers from your glibc build and your chosen CPU/compiler flags.
Link statically.
For anything you couldn't link to statically (for example, if you need access to the system's default name resolution or you need PAM), you have to design your own helper process and API. Release the source to the helper process and let them (or your installer) compile it.
Test thoroughly on all the platforms you need to support.
You may need to tweak some libraries if they call functions that cannot work with this mechanism. That includes dlopen, gethostbyname, iconv_open, and so on. (These kinds of functions fundamentally rely on dynamic linking. See step 5 above. You will get a warning when you link for these.)
Also, time zones tend to break if you're not careful because your code may not understand the system's zone format or zone file locations. (You will get no warning for these. It just won't work.)
Most people who do this are building with the minimum supported CPU being a Pentium 4 and the minimum supported kernel version being 2.6.0.
There are two things that differ among installations: architecture and libraries.
Having one binary for different architectures is not directly possible; there was an attempt to have binaries for multiple architectures in one file (FatELF), but it is not widely used and unlikely to gain momentum. So at the very least you have to distribute separate binaries for ia32, amd64, arm, ... (most if not all amd64 distros ship a kernel compiled with support for running ia32 code, though).
Distributions contain different versions of libraries. As long as the API does not change, you can link to that library and be fine. Some libs ensure binary backwards-compatibility within a major version number (so a GTK 2.2 app will run fine with the GTK 2.30 lib, but not necessarily vice versa). If you want to be sure, you have to link statically with all the libs you use, except the most basic ones (probably only libc6, which is binary-compatible across distros AFAIK). This can increase the size of the binary, and it is one of the reasons why e.g. Acrobat Reader is a relatively big download, although the app itself is not especially rich functionality-wise.
There was a transitional period for the C++ ABI, which changed between gcc 2.9 and 3 (IIRC), but the old ABI would really only be found on ancient installations. This should no longer be an issue for you, and if you link statically, it is irrelevant anyway.
Generally no.
There are several barriers.
Different architectures
While a 32-bit binary will run on an x86_64 system, it won't work vice versa. Plus, there are a lot of ARM systems.
Kernel ABI
Kernel ABI changes very slowly, but it does change, therefore you can't really support all possible versions. Note that in some places kernel 2.2 is still in use.
What you can do is create a statically linked binary. Such a binary will include all the libraries your app depends on, and it will work on all systems with the same architecture and a reasonably similar kernel version.

Which x86 C++ compilers are multithreaded themselves?

Nowadays almost every user has 2 or 4 cores on the desktop (and on a good number of notebooks). Power users have 6-12 cores with AMD or Core i7 processors.
Which x86/x86_64 C/C++ compilers can use several threads to do the compilation?
There are already 'make -j N'-like solutions, but sometimes (for -fwhole-program or -ipo) there is one last big and slow step, which runs sequentially.
Can any of these do it: GCC, Intel C++ Compiler, Borland C++ compiler, Open64, LLVM/GCC, LLVM/Clang, Sun compiler, MSVC, OpenWatcom, Pathscale, PGI, TenDRA, Digital Mars?
Is there an upper limit on the number of threads for those compilers that are multithreaded?
Thanks!
Gcc has -flto=n or -flto=jobserver to make the linking step (which with LTO does optimization and code generation) parallel. According to the documentation, these have been available since version 4.6, although I am not sure how good it was in those early versions.
Some build systems can compile independent modules in parallel, but the compilers themselves are still single-threaded. I'm not sure there is anything to gain by making the compiler multi-threaded. The most time-consuming compilation phase is processing all the #include dependencies and they have to be processed sequentially because of potential dependencies between the various headers. The other compilation phases are heavily dependent on the output of previous phases so there's little benefit to be gained from parallelism there.
Newer Visual Studio versions can compile distinct translation units in parallel. It helps if your project uses many implementation files (such as .c, .cc, .cpp).
MSDN Page
It is not really possible to multi-process the link stage. There may be some degree of multi-threading possible, but it is unlikely to give much of a performance boost. As such, many build systems will simply fire off a separate process for separate files. Once they are all compiled, it will, as you note, perform a long single-threaded link. Alas, as I say, there is precious little you can do about this :(
Multithreaded compilation is not really useful as build systems (Make, Ninja) will start multiple compilation units at once.
And as Ferrucio stated, concurrent compilation is really difficult to implement.
Multithreaded linking, though, can be useful (concurrent .o/.a reading and symbol resolution), as this will most likely be the last build step.
The GNU gold linker can be multithreaded, with the LLVM ThinLTO implementation:
https://clang.llvm.org/docs/ThinLTO.html
Go 1.9 compiler claims to have:
Parallel Compilation
The Go compiler now supports compiling a package's functions in parallel, taking advantage of multiple cores. This is in addition to the go command's existing support for parallel compilation of separate packages.
but of course, it compiles Go, not C++
I can't name any C++ compiler doing likewise, even in October 2017. But I guess that the multi-threaded Go compiler shows that multi-threaded C or C++ compilers are (in principle) possible. But there are few of them, and making a new one is a huge amount of work, and you'd practically need to start such an effort from scratch.
For Visual C++, I am not aware whether it does any parallel compilation (I don't think so). For versions later than Visual Studio 2005 (i.e. Visual C++ 8), the projects within a solution are built in parallel as far as is allowed by the solution dependency graph.

Same binary code on Windows and Linux (x86)

I want to compile a bunch of C++ files into raw machine code and then run it with a platform-dependent starter written in C. Something like
fread(buffer, 1, len, file);
a = ((int (*)(int))buffer)(b);
How can I tell g++ to output raw code?
Will function calls work? How can I make it work?
I think the calling conventions of Linux and Windows differ. Is this a problem? How can I solve it?
EDIT: I know that PE and ELF prevent the DIRECT starting of the executable. But that's what I have the starter for.
There is one (relatively) simple way of achieving some of this, and that's called "position independent code". See your compiler documentation for this.
Meaning you can compile some sources into a binary which will execute no matter where in the address space you place it. If you have such a piece of x86 binary code in a file and mmap() it (or the Windows equivalent) it is possible to invoke it from both Linux and Windows.
Limitations already mentioned are of course still present - namely, the binary code must restrict itself to a calling convention that's identical on, or at least representable on, both platforms (for 32-bit x86, that'd be passing args on the stack and returning values in EAX), and of course the code must be fully self-contained - no DLL function calls, since resolving these is system-dependent, and no system calls either.
I.e.:
You need position-independent code
You must create self-contained code without any external dependencies
You must extract the machine code from the object file.
Then mmap() that file, initialize a function pointer, and (*myblob)(someArgs) may do.
If you're using gcc, the -ffreestanding -nostdinc -fPIC options should give you most of what you want regarding the first two, then use objdump to extract the binary blob from the ELF object file afterwards.
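Purely as an illustration of that last mmap-and-call step, here is a sketch under the assumptions above (blob.bin is an invented name for the extracted code, which must be position-independent, self-contained, and follow the agreed calling convention):

#include <sys/mman.h>
#include <cstdio>

int main() {
    // Read the extracted machine-code blob (file name invented for the example).
    std::FILE *f = std::fopen("blob.bin", "rb");
    if (!f) return 1;
    std::fseek(f, 0, SEEK_END);
    long len = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    // Map anonymous writable memory, copy the code in, then flip it to executable;
    // this keeps DEP/W^X happy, since writable+executable mappings are often refused.
    void *mem = mmap(nullptr, (size_t)len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;
    std::fread(mem, 1, (size_t)len, f);
    std::fclose(f);
    if (mprotect(mem, (size_t)len, PROT_READ | PROT_EXEC) != 0) return 1;
    // Call the blob through a function pointer with the agreed signature.
    auto myblob = reinterpret_cast<int (*)(int)>(mem);
    std::printf("%d\n", myblob(7));
    munmap(mem, (size_t)len);
}

The Windows starter would do the same thing with CreateFile/VirtualAlloc/VirtualProtect instead of the POSIX calls.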
Theoretically, some of this is achievable. However there are so many gotchas along the way that it's not really a practical solution for anything.
System call formats are totally incompatible
DEP will prevent data executing as code
Memory layouts are different
You need to effectively dynamically 'relink' the code before you can run it.
.. and so forth...
The same executable cannot be run on both Windows and Linux.
You write your code platform-independently (STL, Boost & Qt can help with this), then compile it with g++ on Linux to output a Linux binary, and similarly with a compiler on the Windows platform.
Why don't you take a look at Wine? It's for running Windows executables on Linux. Another solution for that is using Java or .NET bytecode.
You can run .NET executables on Linux (requires the Mono runtime).
Also have a look at Agner's objconv (disassembling, converting PE executable to ELF etc.)
http://www.agner.org/optimize/#objconv
Someone actually figured this out. It’s called αcτµαlly pδrταblε εxεcµταblε (APE) and you use the Cosmopolitan C library. The gist is that there’s a way to cause Windows PE executable headers to be ignored and treated as a shell script. Same goes for MacOS allowing you to define a single executable. Additionally, they also figured out how to smuggle ZIP into it so that it can incrementally compress the various sections of the file / decompress on run.
https://justine.lol/ape.html
https://github.com/jart/cosmopolitan
Example of a single identical Lua binary running on Linux and Windows:
https://ahgamut.github.io/2021/02/27/ape-cosmo/
Doing such a thing would be rather complicated. It isn't just a matter of the CPU instructions being issued; the compiler has dependencies on many libraries that will be linked into the code. Those libraries will have to match at run time or it won't work.
For example, the STL library is a series of templates and library functions. The compiler will inline some constructs and call the library for others. It'd have to be the exact same library to work.
Now, in theory you could avoid using any library and just write in fundamentals, but even there the compiler may make assumptions about how they work, what type of data alignment is involved, calling convention, etc.
Don't get me wrong, it can work. Look at the WINE project and other native drivers from Windows being used on Linux. I'm just saying it isn't something you can quickly and easily do.
Far better would be to recompile on each platform.
That is achievable only if you have WINE available on your Linux system. Otherwise, the difference in the executable file format will prevent you from running Windows code on Linux.