How toolchain is related to OS and platform architecture [closed] - c++

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Can someone explain how a toolchain depends on the OS and the platform architecture? For instance, if I want to compile code for an ARM architecture, should I look at the platform architecture or at the OS that platform is running, and then adapt the toolchain accordingly?

Most compilers translate source code to assembly language. The code they produce will most likely depend on various calls into the operating system (e.g. to allocate dynamic memory), and the resulting binary has a header describing properties of the file, such as the location of the code and data sections (e.g. ELF, PE). An assembler then turns this assembly into object files, which are linked using the linker for that platform. All these tools produce code for a specific architecture and OS.
This does not mean that the compiler and linker cannot run on another type of system. The process of compiling code for a system other than the one the compiler runs on is called cross-compiling. Even though it is less common than compiling for the same platform the compiler runs on, it is still widely used. A few examples are compiling OS kernels, which of course cannot rely on another OS, and compiling native code for Android (the Android NDK contains a cross-compiler).

Related

C++: Static analysis tools that will warn of missing headers [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I am compiling a large project for several platforms using GCC and Clang. The issue I have is that I do all of the bug fixing and testing on one platform (Ubuntu 18.04), and even run static tools like cppcheck and clang-tidy to find bugs. As part of the bug fixing, I even try to compile with several compilers on Ubuntu to make sure that the code is ready to ship.
However, several times I have run across the problem where a developer on another system can't compile the update due to a simple missing include.
A recent example is where we introduced some new functionality which was heavily tested with GCC and Clang on Ubuntu. Then a dev on macOS got some compiler errors which turned out to be due to a missing #include <array> in one file and a missing #include <sstream> in another. I mean, when you look at the offending files, they were indeed using arrays and stringstreams, so I get it. But I am just surprised that the static tools didn't catch those errors.
So how do I solve this problem? They definitely are programming errors, not compiler bugs since it was obvious that I should have included the files.
You are looking for include-what-you-use. From their docs:
"Include what you use" means this: for every symbol (type, function, variable, or macro) that you use in foo.cc, either foo.cc or foo.h should #include a .h file that exports the declaration of that symbol.
Compiling it yourself isn't trivial, as the inner workings of this tool are tightly coupled to LLVM internals. But you might be lucky enough to find a pre-built package in your distro. Still, once you get it running, it's not a silver bullet: the problem it tries to solve is hard, and there may be false positives.

Do I need to type -std=c++17 (or whichever standard I want to use) every time I need to compile a C++ program? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I have Windows 10, use VS Code to write my C++ code, and use CMD to compile my programs. I don't really know which standard the compiler on my PC (MinGW, gcc version 6.3.0) uses by default, but I want to ensure that it uses a recent one like C++14 or C++17. Unfortunately, I need to type -std=c++17 every time I compile my program with that standard. How do I set the desired standard as the default?
Unfortunately, I need to type in -std=c++17 every time I need to compile
This is why build scripts exist. There are many arguments that you want to pass to your compiler at some point:
Source files (you may have multiple translation units = .cpp files)
Include directories
Libraries to link
Optimization level
Warnings
C++ standard
Symbol defines
...and many more compiler flags...
In bigger projects you may also have multiple build targets (and thus compiler invocations) and you don't want to do all that by hand every time either.
As a simple solution, you could write a .bat script that invokes the compiler with the right arguments for you. However, there are tools that do a way better job at this, such as make (generally only found in the Linux world) or MSBuild (part of Visual Studio). Then there are also tools that generate these scripts for you, such as CMake and several IDEs with their own project configuration files.
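As a rough sketch of what such a script saves you from typing, here is a minimal Makefile that bakes the flags in (the file names and the -Iinclude path are made up for the example):

```make
# Minimal sketch; main.cpp / util.cpp and -Iinclude are hypothetical.
CXX      = g++
CXXFLAGS = -std=c++17 -O2 -Wall -Wextra -Iinclude

app: main.o util.o
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $<
```

After this, typing make is enough; the standard, warnings, and optimization level are applied consistently on every build.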
I just want to ensure that it uses the latest one like C++14 or 17. Unfortunately, I need to type in -std=c++17 every time
-std=c++17 is exactly how you ensure that you're using the C++ version you want (in this case C++17).

C++ dynamic libraries - link symbols at runtime on OS X [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 4 years ago.
I'm writing a plugin based emulation system. The way this works is that the main system sets up an ImGui instance and the plugins use ImGui to draw windows to the screen. I'm using a static build of ImGui which is embedded in the host program and linked to at run time; on Linux, this works fine, because the plugin .so files don't need to link against ImGui at compile time, only at run time. On OS X I get errors about "Undefined symbols for architecture x86_64" when trying to link the .dylibs.
Is there a way to tell OS X to leave the linking for run-time also?
Found the answer elsewhere - I need to add the -undefined dynamic_lookup flag on OS X.
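For comparison, a sketch of the two link invocations (plugin.cpp is a placeholder name; the macOS line assumes the Apple toolchain):

```shell
# Linux: unresolved symbols in a shared object are deferred to load time by default
g++ -shared -fPIC -o plugin.so plugin.cpp

# macOS: the linker resolves everything up front unless told otherwise
g++ -dynamiclib -o plugin.dylib plugin.cpp -undefined dynamic_lookup
```

With -undefined dynamic_lookup, the macOS linker behaves like the Linux default and leaves the ImGui symbols to be resolved against the host program at load time.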

How to detect ABI at compile time? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
Is there a way to detect ABI in C/C++ at compile time? I know there are macros for OS and CPU architecture. Are there similar macros (or some other way) for ABI?
The notion of an ABI is not known to standard specifications like C11 or C++14; it is an implementation detail.
On Linux, you could use feature_test_macros(7).
You could consider improving your build procedure (e.g. your Makefile, etc.). You might run some shell script that detects features (like the configure scripts generated by autoconf do). Notice that some C or C++ code (e.g. header files) might be generated at build time (for example by bison, moc, rpcgen, swig, ...), perhaps by your own utilities or scripts. Use a good enough build automation tool: with some care, GNU make and ninja can deal with generated C++ or C code and manage both its generation and its dependencies.
Don't confuse compilation with build; the compilation commands running a compiler are just parts of the build process.
Some platforms accept several ABIs. E.g. my Linux/Debian/Sid/x86-64 desktop with a Linux 4.13 kernel can run x86 32 bits ELF executable, x86-64 64 bits ELF, probably some old a.out format from the 1980s, and also x32 ABI. With binfmt_misc I can add even more ABIs. See x86 psABI for a list of several ABI documentations.
BTW, the current trend is to try writing portable code. Perhaps using frameworks like Qt or POCO or Glib (and many others) could hide the ABI details to your application.
In some cases, libffi might be helpful too.
In general, once you know your OS and your architecture, you can, most of the time, practically deduce the ABI.
If you really want to know your ABI, a possible Linux-specific way might be to run file(1) on the current executable. I don't recommend doing this, but you could try (using proc(5) to find the executable):
/// return a heap-allocated string describing the ABI of the current executable
//// I don't recommend using this
#define _GNU_SOURCE /* for getline and popen on glibc */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
const char *getmyabi(void) {
    char mycmdname[80];
    int sz = snprintf(mycmdname, sizeof(mycmdname),
                      "/usr/bin/file -L /proc/%d/exe",
                      (int)getpid());
    assert(sz < (int)sizeof(mycmdname));
    FILE *f = popen(mycmdname, "r");
    if (!f) {
        perror(mycmdname);
        exit(EXIT_FAILURE);
    }
    char *restr = NULL;
    size_t siz = 0;
    if (getline(&restr, &siz, f) < 0) {
        perror("getline");
        exit(EXIT_FAILURE);
    }
    if (pclose(f)) {
        perror("pclose");
        exit(EXIT_FAILURE);
    }
    return restr;
} // end of getmyabi
/// the code above is untested
You could get a string like:
"ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked,"
" interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,"
"BuildID[sha1]=deca50aa4d3df4d57bacc464aa1e8790449ebf8e, stripped"
Then you need to parse that output. You might also want to parse the output of ldd(1) or objdump(1) on your executable, of your ELF interpreter ld-linux(8), etc. (or use some ELF-parsing library for that).
I don't know how useful that getmyabi function is, or what precise output file gives in all the weird cases of various ABIs. I leave you to test it (be sure to compile your test program with all the ABIs installed on your system, so gcc -m32, gcc -m64, gcc -mx32, etc.); if possible, test it on some non-x86 Linux system.
If you just need to get your ABI at build time, consider compiling some hello-world executable, then running file (and ldd) on it. Have appropriate build rules (e.g. Makefile rules) do that and parse the output of those file and ldd commands.
(I am surprised by your question: what kind of application needs to know the ABI? Most software that needs it consists of compilers. A strong dependency on a precise ABI might be a symptom of undefined behavior.)
Perhaps the hints given here might apply to your case (just a blind guess).
If you are writing some compiler, consider generating some C code in it then use some existing C compiler on that generated C code, or use a good JIT compilation library like LIBGCCJIT or LLVM. They would take care of the ABI specific aspects (and more importantly of low-level optimizations and code generation).
If you are writing a compiler alone and don't want to use external tools, you should in practice restrict yourself to one or a few ABIs and platforms. Life is short.
PS. I am not at all sure that "ABI" has one precise meaning. It is more a specification document than a defined feature of some system (or some executable). As I understand it, ABI specifications have evolved over time (they were probably not exactly the same 15 years ago).

How C++ libraries work? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
As far as I know, the Windows operating system uses assembly language to interact with the hardware.
Having done that, they could use C, C++, or any other language for the rest of the work.
As far as I know, the C++ header files actually call the Windows API for their implementation.
So where are the header files located? Are they installed by compilers, or do they come with the operating system?
What keyword or code do the header files use to interact with the suitable API (for example, std::cout on Windows calls a function in a DLL file, and on Linux something else)?
For example, is iostream.h different on Linux from Windows?
And how do they find the suitable libraries?
And my last question: how do libraries interact with assembly code (so that the assembly code interacts with the hardware)?
TIA.
The following passage isn't meant to be any sort of complete description of how libraries, the compilation process, or system-call invocation work, but rather a bird's-eye view of what the OP asked; it therefore lacks several details and steps which the OP will have to study in depth himself.
By "C++ library" I assume you're referring to the C++ standard library (although the considerations here are valid for any other library as well).
The C++ standard library isn't necessarily present on any operating system by default; it usually ships with a compiler installation or with a separate package. That doesn't mean you can't execute compiled C++ routines; it means that in order to compile your programs you need the headers and the library, along with a compiler which supports them.
The C++ standard library is usually compiled platform-specific, and you can't just copy the headers and lib files from one operating system to another (you'll end up in tears).
Every time you import the declarations from a header file with something like
#include <iostream>
you're making your program aware of the multitude of data structures, functions, and classes provided by the standard library. You can use them as you want, as long as you provide for linking the .lib file (in a Windows environment) where the code of those routines is usually defined (in Visual Studio this is usually referred to as the Runtime Library, with the /MT and /MD options).
Once you've linked your executable against those .lib files, you have a compiled executable which, opened in a disassembler, might contain something like the following (for a simple hello world; the snippet is from elsewhere and not from a Windows environment):
mov edx,len ;message length
mov ecx,msg ;message to write
mov ebx,1 ;file descriptor (stdout)
mov eax,4 ;system call number (sys_write)
int 0x80 ;call kernel
Thus, eventually, every C++ function or routine provided by the standard library either implements an algorithm itself and/or calls some operating-system-specific routine through a system call. There are several design and implementation differences between the various operating systems (even in where the system-call boundaries lie), in addition to many layers of security checking (not to mention ring3/ring0 switches), so I won't spend more words on that here.
You may try to install the Windows SDK and check the %PROGRAMFILES%\Microsoft SDKs\Windows directory.