C++ systems not using "source files"

In The C++ Programming Language (4th edition), §15.1, Stroustrup states:
A file is the traditional unit of storage (in a file system) and the traditional unit of compilation. There are systems that do not store, compile, and present C++ programs to the programmer as sets of files.
Sadly, he doesn't give further information. Do you know any example of such systems?
EDIT:
I mean: do you know of any actual C++ implementation, whether free, commercial, or open source, that doesn't deal with files as we are accustomed to?
And I was wondering: why do such systems exist? What's the point? What are the advantages of such a design philosophy? What are the drawbacks?

IIRC, in the 1990s IBM VisualAge C++ stored the program source code (or perhaps a faithful representation of its AST) in some proprietary database. (It is rumored that header files also sat in some database at that time.)
And current C++ compilers are often able to get the source code from a generated file, or even from a pipe. For instance, on my Linux machine I could have a program mygenerator generating some C++ code on its stdout and invoke the GCC compiler as:
mygenerator | g++ -x c++ /dev/stdin -Wall -O -o myprogram
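For illustration, mygenerator could be as trivial as a program printing a complete C++ program (the name mygenerator is just the placeholder used in the command above):

    // mygenerator.cc -- a toy generator emitting a C++ program on stdout
    #include <iostream>
    int main() {
        std::cout << "#include <iostream>\n"
                     "int main() { std::cout << \"generated!\\n\"; }\n";
    }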
However, today most compilers compile source files and header files from some file system. Notice that an optimizing compiler spends much more time in compilation proper than in disk I/O, and you could use some tmpfs file system, so file read & write time is negligible when compiling C++ code (even parsing is often quicker than optimization & code generation).
So I know of no C++ compiler used in 2015 which compiles and optimizes source code outside of source files.
Actually, generating C++ code is often a good idea (I'm doing it in MELT, which enables you to customize GCC), but usually you tweak your build procedure (e.g. your Makefile) to generate then compile some temporary C++ files. With current computers and operating systems and compilers (e.g. Linux & GCC) you could even generate some temporary C++ file, fork a compilation of it into a shared object plugin, and dlopen(3) it.
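A minimal sketch of that generate/compile/dlopen cycle (my assumptions: a POSIX system with g++ on the PATH; the /tmp file names are placeholders, and error handling is abbreviated; build the host itself with g++ -std=c++11 host.cc -ldl):

    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <dlfcn.h>

    int main() {
        // 1. Generate a temporary C++ file.
        std::ofstream("/tmp/gen.cc") <<
            "extern \"C\" int answer() { return 42; }\n";

        // 2. Compile it into a shared object plugin.
        if (std::system("g++ -O2 -fPIC -shared /tmp/gen.cc -o /tmp/gen.so") != 0)
            return 1;

        // 3. dlopen(3) the plugin and look up the generated symbol.
        void* handle = dlopen("/tmp/gen.so", RTLD_NOW);
        if (!handle) { std::cerr << dlerror() << '\n'; return 1; }
        auto answer = reinterpret_cast<int(*)()>(dlsym(handle, "answer"));
        std::cout << answer() << '\n';   // prints 42
        dlclose(handle);
    }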
A possible reason to store the source code in something better than a file (e.g. some database) would be to make an incremental compiler, which would recompile only one function if it was the only modification since the previous compilation. In practice, this is difficult to implement in existing compilers (it has been discussed, and sort-of prototyped, within the GCC community, but nothing stable came out of it). But C++ or C is not the best language for such an approach (Common Lisp is much better, and SBCL is able to compile and optimize in memory and incrementally), in particular because of the preprocessor.
BTW, tinycc is able to compile C code sitting inside some const char* string in memory, but the performance of the generated machine code is bad (since tcc does not do the kind of serious optimizations that current processors need so much).
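With libtcc, that in-memory compilation looks roughly like this (a sketch only; note that the signature of tcc_relocate has changed across tcc versions, so check the libtcc.h shipped with yours):

    #include <libtcc.h>
    #include <stdio.h>

    const char *src = "int square(int x) { return x * x; }";

    int main(void) {
        TCCState *s = tcc_new();
        tcc_set_output_type(s, TCC_OUTPUT_MEMORY);      /* compile into memory, no file */
        if (tcc_compile_string(s, src) == -1) return 1;
        tcc_relocate(s, TCC_RELOCATE_AUTO);             /* older API; newer tcc takes only the state */
        int (*square)(int) = (int (*)(int))tcc_get_symbol(s, "square");
        printf("%d\n", square(7));                      /* prints 49 */
        tcc_delete(s);
        return 0;
    }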
Notice also that with link-time optimization (e.g. compile and link with g++ -flto -O2) compilers keep some form of the AST (actually the GIMPLE representation, for GCC) in object files.
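A sketch of such an LTO build with GCC (a.cc and b.cc are placeholder translation units):

    g++ -flto -O2 -c a.cc
    g++ -flto -O2 -c b.cc
    g++ -flto -O2 a.o b.o -o app    # the GIMPLE stored in a.o/b.o enables cross-file optimization at link time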

C++ source code can be stored in a database in various ways, e.g. as plain text records keyed by unit name, or as some serialized parsed representation.


C++ Versions, do they auto-detect the version of an exe?

Okay, so I know that there are multiple C++ versions, and I don't really know much about the differences between them, but my question is:
Let's say I made a C++ application in C++11 and sent it off to another computer. Would it come up with errors from other versions of C++, or will it automatically detect the version and run with it? Or am I getting this wrong, and is it defined at compile time? Someone please tell me, because I have yet to find a single answer to my question on Google.
It depends on whether you copy the source code to the other machine and compile it there, or compile it on your machine and send the resulting binary to the other computer.
C++ is translated by the compiler to machine code, which runs directly on the processor. Any computer with a compatible processor will understand the machine code, but there is more than that. The program needs to interface with a filesystem, graphic adapters, and so on. This part is typically handled by the operating system, in different ways of course. Even if some of this is abstracted by C++ libraries, the calls to the operating system are different, and specific to it.
A compiled binary for Ubuntu will not run on Windows, for example, even if both computers have the same processor and hardware.
If you copy the source code to the other machine, and compile it there (or use a cross-compiler), your program should compile and run fine, if you don't use OS-specific features.
The C++ version does matter for compilation; you need a C++11-capable compiler, of course, if you have C++11 source code. But once the program is compiled, it does not matter any more.
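With GCC, for instance, the language version is chosen at compile time by a flag (main.cc is a placeholder file name):

    g++ -std=c++11 main.cc -o app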
C++ is compiled to machine code, which is then runnable on any computer having that architecture e.g. i386 or x64 (putting processor features like SSE etc. aside).
For Java, as a counterexample, it is different. There the code is compiled to a bytecode format that is machine independent. This bytecode format is read and understood by the Java Virtual Machine (JVM). The JVM then has to be available for your architecture, and the correct version has to be installed.
Or am I getting this wrong and is it defined at compile time?
This is precisely the idea: the code is compiled, and after that the language version is almost irrelevant. The only possible pitfall would be a newer C++ version including a breaking change to the standard C++ library (the library, not the language itself!). However, since the vast majority of that library is template code, it is compiled along with your own code anyway; it's basically baked into your .exe file, so it's just as portable as your code. Also, both the C and C++ designers take great care not to break old code, so you can expect even those parts that are provided by the system itself (such as the standard C library) not to break anything.
So, even though there are things that could break in theory, pure C++ code should run fine on all machines that understand the same .exe format as the machine it was compiled on.

Is the preprocessor, assembler and linker a part of the compiler?

So I've been taught, as many of us have, that the compiler is a program that translates your human-readable code into machine-readable code. The more you look into it, however, you learn that the "compilation process" is actually broken up into 4 different parts: the preprocessor, compiler, assembler, and linker. I think not understanding where all these parts fit into place has confused me a bit.
Are all the steps described in a typical compilation process part of the compiler program?
Or are things like the assembler and linker separate programs built into IDEs along with compilers to generate code?
Does it depend on the compiler or programming language?
If separate, is the compiler responsible for just the assembly code creation as well as optimizing the assembly code?
Are all the steps described in a typical compilation process part of the compiler program?
All the steps are required by the translation process. The process includes preprocessing, compilation, assembly (machine code generation), and producing an executable (e.g. linking).
A translator program (a.k.a. compiler) does not need to put all the steps into one executable.
For example, a program may be composed of more than one translation unit, so they can be compiled all at once, then the pieces can be linked together. Often separating compilation from linking is beneficial.
Or are things like the assembler and linker separate programs built into IDE's along with compilers to generate code?
Some IDEs, like Eclipse, do not have built-in compilers or linkers. The Eclipse IDE is designed to work with various compilers and linkers, and needs to be configured as to what tools it will use when building a program.
Does it depend on the compiler or programming language?
IDEs are usually independent of compilers and languages. The NetBeans IDE can be used with Java or C++, for example (similarly with Eclipse).
Some IDEs may have features that work better with one language than another, such as keyword highlighting.
If separate, is the compiler responsible for just the assembly code creation as well as optimizing the assembly code?
Assembly language creation is not a required part of the process.
Typically, compilers have an option you can supply in order to print an assembly language listing.
Some compilers emit executable code without going through the generation of assembly language.
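With GCC, for example, that listing option is -S (hello.cc is a placeholder file name):

    g++ -S hello.cc -o hello.s    # stop after compilation proper; hello.s is the assembly listing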
The meaning of the term “compiler” depends on the context.
For the beginner, the compiler is the tool you use to create an executable program from your source code.
Delving a little deeper, one learns that with practical toolchains there is at least a division into compiler and linker.
And while the above two views have been based solely on tool usage, when one learns more about C++ one appreciates the division into preprocessing and compilation "proper": a preprocessor, a compiler, and a linker, where the preprocessor produces text, the compiler produces object code, and the linker produces executables or libraries.
Delving even deeper into things, one may start to differentiate between different internal phases of the compiler (in the trio above). Some compilers utilize an assembler, some generate code directly from an abstract syntax tree, and some compilers go as far as using a whole C compiler at the end, just translating the language-X source code to C source code. E.g. Eiffel compilers used to do this, and probably still do. And C++ started out that way, as a front end to a C compiler.
And especially with the idea of just translating to C, one may call that part the real compiler, with the C compiler at the end as just one of the tools invoked by the compiler proper.
So, it depends very much on the context.

Difference between code object and executable file

I'm a C++ beginner and I'm studying the basics of the language. There is a topic in my book about the compiler, and my problem is that I cannot understand what the text wants to say:
C++ is a compiled language so you need to translate the source code in a file that the computer can execute. This file is generated by the compiler and is called the object code ( .obj ), but a program like the "hello world" program is composed by a part that we wrote and a part of the C++ library. The linker links these two parts of a program and produces an executable file ( .exe ).
Why does my book tell that the file that is executed by the computer is the one with the obj suffix (the object code) and then say that it is the one with the exe suffix?
Object files are source compiled into binary machine language, but they contain unresolved external references (such as printf, for instance). They may need to be linked against other object files, third-party libraries, and almost always against the C/C++ runtime library.
In Unix, object and executable files share the same container format (COFF historically; ELF on modern systems). The main difference is that object files have unresolved external references, while executable (a.out) files don't.
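You can see those unresolved references yourself with nm; on a typical Linux system the output looks roughly like this (hello.o is a placeholder object file):

    $ nm -u hello.o          # -u: list only undefined (unresolved) symbols
                     U printf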
The C++ specification is a technical document in English. For C++11 have a look inside n3337 (or spend a lot of money to buy the paperback ISO standard). In theory you don't need a computer to run a C++ program (you could use a bunch of human slaves, but that would be unethical, inefficient, and unreliable).
You could have a C++ implementation which is an interpreter, not a compiler (e.g. Ch by SoftIntegration).
If you install Linux on your laptop (which I recommend doing to every student), you have several free-software C++ compilers available, in particular GCC and Clang/LLVM (through the g++ and clang commands respectively). Source files are suffixed .cc, .cxx, .cpp, or even .C (I prefer .cc), and you could ask the compiler to handle a file with some other suffix as a C++ source file (but that is not conventional). Then both object files (suffixed .o) and executables share the same ELF format. Conventionally, executables don't have any suffix; e.g. g++ itself is a binary executable which does little more than start other processes such as cc1plus (the compiler proper), as (the assembler), and ld (the linker).
In all cases I strongly recommend:
to enable all warnings and debug info during compilation (e.g. use g++ -Wall -g ....)
to improve your source code until you get no warnings
to learn how to use the debugger (gdb)
to be able to build your program on the command line
to use a version control system like git
to use a good editor like emacs, gedit, geany, or gvim
once you are writing programs in several source files, learn how to use a builder like make (a minimal sketch follows this list)
to learn C++11 (or even perhaps C++14) rather than older C++ standards
to also learn other programming languages (OCaml, Scheme, Haskell, Prolog, Scala, ....) since they would improve your thinking and your way of coding in C++
to study the source code of several free software programs coded in C++
to read the documentation of every function that you are using, e.g. on cppreference or in man pages (for Linux)
to understand what undefined behavior is (the fact that your program sometimes works does not make it correct).
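As a minimal sketch of a Makefile (assuming GNU make and g++, and the hello.cc example used below; note that recipe lines must be indented with a real tab):

    # Build `hello` from hello.cc with warnings and debug info
    CXX = g++
    CXXFLAGS = -Wall -g

    hello: hello.cc
    	$(CXX) $(CXXFLAGS) hello.cc -o hello

    clean:
    	rm -f hello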
Concretely, on Linux you could edit your Hello World program (file hello.cc) with gedit or emacs (with a command like gedit hello.cc), compile it with g++ -Wall -g hello.cc -o hello, debug it using gdb ./hello, and repeat (don't forget to use git commands for version control).
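For reference, such a hello.cc is the classic:

    // hello.cc
    #include <iostream>

    int main() {
        std::cout << "Hello, world!" << std::endl;
        return 0;
    }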
Sometimes it makes sense to generate some C++ code, e.g. by some shell, Python, or awk script (or even by your own program coded in C++ which generates C++ code!).
Also, understand that an IDE is not a compiler (but runs the compiler for you).
The basic steps for creating an application from a C or C++ source file are as follows:
(1) the source files are created (by a person, or generated by a program);
(2) the source files are compiled (which is really two steps, preprocessing and compilation proper) into object code;
(3) the object files created by the C/C++ compiler are linked to create the .exe.
So you have these steps of transforming one version of the computer program, the source files, to another, the executable. The C++ source is compiled to produce the object files. The object files are then linked to produce the executable file.
In most cases there are several different programs involved in the compile and link process with C and C++. Each program takes in certain files and creates new files.
The C/C++ preprocessor takes in source code files and generates source code files.
The C/C++ compiler takes in source code files and generates object code files.
The linker takes in object code files and libraries and generates executable files.
See What is the difference between - 1) Preprocessor,linker, 2)Header file,library? Is my understanding correct?
Most compiler installations have a driver program that runs these various applications for you. So if you are using GCC, the gcc program will first run the C++ preprocessor, then the C++ compiler, and then the linker. However, you can modify what gcc does with command-line options: tell it to only run the preprocessor, or to only compile the source files but not link them, or to only link the object code files.
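For instance, with GCC the individual stages can be driven one at a time (file names here are placeholders):

    g++ -E hello.cc -o hello.ii    # run only the preprocessor (.ii = preprocessed C++)
    g++ -S hello.ii -o hello.s     # compile to an assembly listing
    g++ -c hello.s -o hello.o      # assemble into object code
    g++ hello.o -o hello           # link into an executable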
A brief history of computer languages and programming
The languages used for programming computers along with the various software development tools have evolved over the years.
The first computers were programmed with numbers entered by switches on a console.
Then people started developing languages and software that could be used to create software more easily and quickly. The first major development was assembly language, where each line of source was converted by a computer program (an assembler) into a machine code instruction. Along with this came the development of linkers (which link pieces of machine code together into larger pieces). Assemblers were improved by adding a macro or preprocessor facility, somewhat like the C/C++ preprocessor though designed for assembly language.
Then people created programming languages that looked more like human languages than assembler (FORTRAN, COBOL, and ALGOL, for instance). These languages were easier to read, and a single line of source might be converted into several machine instructions, so it was more productive to write computer programs in these languages than in assembler.
The C programming language was a later refinement, using lessons learned from the early programming languages such as FORTRAN, and it reused software development tools that already existed, such as linkers. Still later, C++ was invented, starting off as a refinement of C that introduced object-oriented facilities. In fact the first C++ compiler was really a translator, which translated C++ source code to C source code that was then compiled with a C compiler. However, modern C++ is compiled straight to machine code in order to provide the full functionality of the C++ standard, with templates, lambdas, and all the other things in C++11 and later.
Linkers and loaders
When you run a program you run the executable file. The executable file contains several kinds of information. The first is the machine instructions that are the result of compiling the C++ source code. The other is information that the loader uses in order to know how to load the executable into memory.
In the old days, long, long ago, all libraries and object files were linked together into an executable file, the executable file was loaded by the loader, and the loader was pretty simple.
Then people invented shared libraries and dynamic link libraries, and this required both the linker and the loader to become more complex.
The linker became more complex because it had to be able to recognize the difference between using a shared library and a static library and be able to generate an executable file that not only contains the linked object code but also information for the loader about any dynamic libraries.
The loader became more complex because not only does it have to load the executable file into memory so that it can start running, it must also find any shared libraries or dynamic link libraries that are needed and load those too. And the loader also has to do a certain amount of linking of these additional components, the shared libraries, so it does a lot more than it used to do.
See also
Difference between shared objects (.so), static libraries (.a), and DLL's (.so)?
What is an application binary interface (ABI)?
How to make a SIMPLE C++ Makefile
Object code (within an object file): Output from a compiler intended as input for a linker (for the linker to produce executable code).
Executable: A program ready to be run (executed) on a computer

What is a Delphi DCU file?

I believe it stands for "Delphi Compiled Unit". Am I correct in assuming it contains object code, and therefore corresponds to an ".o" file compiled from a C/C++ source code file?
I believe .dcu generally means "Delphi Compiled Unit" as opposed to a .pas file which is simply "Pascal source code".
A .dcu file is the file that the DCC compiler produces after compiling the .pas files (.dfm files are converted to binary resources, then directly processed by the linker).
It's analogous to the .o and .obj files that other compilers produce, but contains more information on the symbols (you can therefore reverse-engineer the interface section of a unit from it, omitting comments and compiler directives).
A .dcu file is technically not a "cache" file, although your builds will run faster if you don't delete them, since unchanged units don't need to be recompiled. A .dcu file is tied to the compiler version that generated it. In that sense it is less portable than .o or .obj files (though they have their share of compatibility problems too).
Here's some history in case it adds anything.
Compilers have traditionally translated source code languages into some intermediate form. Interpreters don't do that; they just interpret the language directly and run the application right away. BASIC is the classic example of an interpreted language. The "command line" in DOS and Windows has a language that can be written in files called "batch files" with a .bat extension, but typing things on the command line executes them directly. In *nix environments, there are a bunch of different command-line interpreters (CLIs), such as sh, csh, bash, ksh, and so on. You can create batch files for all of them; these are usually referred to as "scripting languages". And there are a lot of other languages now that are both interpreted and compiled.
Anyway, Java and .NET, for example, compile into an intermediate "byte-code" representation.
Pascal was originally implemented as a single-pass compiler, and Turbo Pascal (originating from PolyPascal), with different editions for CP/M, CP/M-86, and DOS, directly generated a binary executable (COM) file that ran under those operating systems.
Pascal was originally designed as a small, efficient language intended to encourage good programming practices using structured programming and data structuring; Turbo Pascal 1 was originally designed as an IDE with a built-in, very fast compiler, an affordable competitor in the DOS and CP/M market against the long edit/compile/link cycles of that time. Turbo Pascal and Pascal had similar limitations as any programming environment back then: memory and disk space were measured in kilobytes, processor speeds in megahertz.
Compiling directly to an executable binary prevented you from linking in separately compiled units and libraries.
Before Turbo Pascal, there was the UCSD p-System operating system (supporting many languages, including Pascal; the UCSD Pascal compiler already extended the Pascal language with units back then), which compiled into a pseudo-machine byte-code format (called p-code) that allowed linking multiple units together. It was slow, though.
Meanwhile, C evolved in VAX and Unix environments, and it compiled into .o files, which meant "object code" as opposed to "source code". Note: this is totally unrelated to anything we call "objects" today.
Turbo Pascal up to and including version 3 directly generated .com binary output files (although you could use overlay files), and as of version 4 supported separating code into units that were first compiled into .tpu files before being linked into the final executable binary. The Turbo C compiler generated .obj (object code) files rather than byte-codes, and Delphi 2 introduced .obj file generation in order to co-operate with C++ Builder.
Object files use relative addressing within each unit and require what are called "fix-ups" (or relocation) later on to make them run. Fix-ups point to symbolic labels that are expected to exist in other object files or libraries.
There are two kinds of "fix-ups": one is done statically by a tool called a "linker". The linker takes a bunch of object files and seams them together into something analogous to a patchwork quilt. It then "fixes-up" all of the relative references by plugging-in pointers to all of the externally-defined labels.
The second kind of fix-up is done dynamically when the program is loaded to run. It is handled by something called the "loader", but you never see it. When you type a command on the command line, the loader is called to load an EXE file into memory and fix up the remaining links based on where the file is loaded, and then control is transferred to the entry point of the application.
So .dcu files originated as .tpu files when Borland introduced units in Turbo Pascal, then changed extension with the introduction of Delphi. They are very different from .obj files, though you can link to .obj files from Turbo Pascal and Delphi.
Delphi also hid the linker entirely, so you just do a compile and a run. All of the linker settings are still there, however, in one of Delphi's options panes.
In addition to David Schwartz's answer, there is one case where a dcu actually is quite different from the typical obj files generated in other languages: generic type definitions. If a generic type is defined in a Delphi unit, the compiler compiles this code into a syntax-tree representation rather than into machine code. This syntax-tree representation is then stored in the dcu file. When the generic type is used and instantiated in another unit, the compiler uses this representation and "merges" it with the syntax tree of the unit using the generic type. You could think of this as somewhat analogous to method inlining. This, by the way, is also the reason why a unit that makes heavy use of generics takes much longer to compile, even though the generic types are "linked in" from a dcu file.
A Delphi Compiled Unit contains object code and pre-compiled headers, and is therefore somewhat comparable to both an .obj file and a .pch / .gch file.
The 'interface' section of a Delphi source file corresponds to the header, and the 'implementation' section creates the object code.
Pre-compiled header files may significantly reduce compilation and link time. The DCU header section provides link information to other referenced units, that does not have to be re-discovered.
In the Delphi / Turbo Pascal environment, pre-compiled headers support strict type checking, which would have required source-code referencing if an Object file format like .coff or .obj had been used. (In C++, name mangling provides a similar but less complete function).

Intermediate code from C++

I want to compile a C++ program to an intermediate code. Then, I want to compile the intermediate code for the current processor with all of its resources.
The first step is to compile the C++ program with optimizations (-O2), run the linker and do most of the compilation procedure. This step must be independent of operating system and architecture.
The second step is to compile the result of the first step, without the original source code, for the operating system and processor of the current computer, with optimizations and special instructions of the processor (-march=native). The second step should be fast and with minimal software requirements.
Can I do it? How to do it?
Edit:
I want to do this because I want to distribute a platform-independent program that can use all the resources of the processor, without the original source code, instead of distributing a compilation for each platform and operating system. It would be good if the second step were fast and easy.
Processors of the same architecture may have different features. x86 processors may have SSE1, SSE2, or others, and they can be 32- or 64-bit. If I compile for a generic x86, the program will lack SSE optimizations. After many years, processors will have new features, and the program will need to be compiled for new processors.
Just a suggestion: google Clang and LLVM.
How much do you know about compilers? You seem to treat "-O2" as some magical flag.
For instance, register assignment is a typical optimization. You definitely need to know how many registers are available. There is no point in assigning foo to register 16, only to discover in phase 2 that you're targeting an x86.
And those architecture-dependent optimizations can be quite complex. Inlining depends critically on call cost, and that in turn depends on architecture.
Once you get to "processor-specific" optimizations, things get really tricky. It's really tough for a platform-specific compiler to be truly "generic" in its generation of object or "intermediate" code at an appropriate "level": unless it's something like IL (intermediate language) code (such as C# IL code or Java bytecode), it's really tough for a given compiler to know "where to stop", since optimizations occur all over the place, at different levels of the compilation, when target-platform knowledge exists.
Another thought: What about compiling to "preprocessed" source code, typically with a "*.i" extension, and then compile in a distributed manner on different architectures?
For example, most (if not all) C and C++ compilers support something like:
cl /P MyFile.cpp
gcc -E MyFile.cpp -o MyFile.i
...each generates MyFile.i, which is the preprocessed file. Now that the file has ALL the headers and other #defines included and expanded, you can compile that *.i file to the target object file (or executable) after distributing it to other systems. (You might need to get clever if your preprocessor macros are specific to the target platform, but it should be quite straightforward with your build system, which should generate the command line to do this preprocessing.)
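The distributed half of the idea is then just compiling the preprocessed file on the remote machine (a sketch; note that for C++, GCC conventionally expects the .ii suffix for preprocessed source):

    g++ -c MyFile.ii -o MyFile.o    # no headers needed; the file is already fully preprocessed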
This is the approach used by distcc to preprocess the file locally, so remote "build farms" need not have any headers or other packages installed. (You are guaranteed to get the same build product, no matter how the machines in the build farm are configured.)
Thus, it would similarly have the effect of centralizing the "configuration/pre-processing" onto a single machine, while providing cross-compiling, platform-specific compiling, or build-farm support in a distributed manner.
FYI, I really like the distcc concept, but the last update for that particular project was in 2008. So I'd be interested in other similar tools/products if you find them. (In the meantime, I'm writing a similar tool.)