I have just installed MinGW and in the bin folder I can see 7 .exe files that compile my program:
c++.exe
g++.exe
mingw32-c++.exe
mingw32-g++.exe
gcc.exe
mingw32-gcc.exe
mingw32-gcc-4.4.1.exe
My small program (testprog.cpp) compiles correctly with each of them; the a.exe file is generated in the bin folder and it runs correctly.
What's the difference between them and which one should I use?
Also, what can I do to change the name of the output file from a.exe to testprog.exe automatically upon each successful compile?
These follow gcc naming conventions.
c++.exe is a traditional name for the system C++ compiler.
g++.exe and gcc.exe are the names of the GCC compilers that compile for the "current system".
The mingw32-* versions are the names of the compilers that cross-compile to the "mingw" target. In this case that is the same as the system target.
And mingw32-gcc-4.4.1.exe is "gcc for the mingw target, version 4.4.1".
You should typically compile C code with a "gcc" variant, and c++ code with a "g++" variant.
Use -o filename to specify the output filename; the default is a.exe.
It's quite possible that they are all the same: either exact copies or symbolic links to one another. Try using the --version flag on each to see what you've got. On my MinGW installation here, each of those binaries differs (checked with diff), but they all output the same version information (except the first bit, which is the filename):
gcc.exe (GCC) 3.4.5 (mingw-vista special r3)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Use the -o flag to change the output file name:
gcc -o testprog.exe testprog.cpp
On Unix they'd mostly be symbolic links. The only major difference is between the 'cc' and '++' ones. You should notice a difference between those two if you use any part of the C++ standard library: the '++' versions link to that library automatically. The 'cc' ones are C compilers and so don't, though you can use them as C++ compilers by adding -lstdc++ or the like.
While searching the web for instructions on making some DLLs, I came across a C++ compilation process that, from what I saw, used g++ to compile C++ the same way you would use gcc.
I used "g++.exe"
Then in my IDE, VSCode, I also had to change the "IntelliSense mode" to "windows-gcc-x64" to get rid of the warning
I'm upgrading a project from gcc 4.3 to 12.1 (a major jump) on CentOS 7 and 6, both 64-bit. As part of that, I'm having trouble with what I suspect is an indirect dependency on a very old version of libgmp.
The submodule I'm compiling does not require any external libraries, however while running make I get the error:
gcc_12.1.0_path/cc1plus: error while loading shared libraries: libgmp.so.3: cannot open shared object file: No such file or directory
A Google search does not turn up anything useful; the issues that come up are different from my case, and my LD_LIBRARY_PATH variables and library paths are set correctly. libgmp v3 is not available on my machine, and ideally I do not wish to install it; instead I want to figure out where the dependency is and upgrade it to use a more recent version of libgmp.
Even more confusing: libgmp v3 is not supported on CentOS 7 (or by gcc 12). I'm not sure what part of compilation depends on this library, or how that part compiled successfully with gcc 4 on CentOS 6.
The command for compilation is

g++_path -g -Wno-deprecated -D_DEBUG -fPIC -m64 -DLIN64BIT -I multiple_include_paths -o object_path -c source_file   (no -L inclusions)
I wish to know a few things:

1. In this case, what information should I provide to seek help?
2. Is there any tool for second-level or deeper (indirect) dependency analysis? Would a C++ package manager help with such cases?
3. Is it possible to use libgmp v3 built on another platform and copied into my lib path to resolve this? (I do not have immediate access to such a machine or library, hence cannot check immediately.)
4. Are there any suspects based on just the message and the command?
I'm doing a tutorial right now that doesn't make it too clear how to do this.
I have two files: int2.cpp, which has my main, and int1.cpp, which has a function (int2 calls int1). I know they will work, but how would one type it into the command line? The tutorial says g++ int2.cpp int1.cpp -o int2.cpp, but I get "g++ is an illegal command".
I'm using DOSBox 0.74.
Sorry, but I compile things with tcc, and it says -o isn't a command-line option.
TurboC++ is an obsolete compiler (for an obsolete variant of C++); don't use it.
TinyC (e.g. the tcc command) is a compiler for C, not for C++.
C and C++ are different languages, and you want to learn at least C++11 (since older standards of C++ are obsolete and very different, it is not worth learning them in 2017).
So get and use a free software C++11 compiler like GCC or Clang. BTW, both are easily available on most Linux distributions (which I recommend you use).
Of course you'll want to compile with warnings and debug information, so use
g++ -Wall -Wextra -g with GCC and clang++ -Wall -Wextra -g with Clang.
BTW, you probably want to compile several translation units into a single executable binary. This often involves a linking step (i.e. running the linker on several object files, also using the g++ command). Consider learning to use some build automation tool like GNU make (which has a lot of builtin rules to help in doing that).
"g++ is an illegal command" means the compiler is not on your PATH. The syntax is (almost) right; you likely just don't have gcc installed.
g++ int2.cpp int1.cpp -o int2.exe
I have compiled gcc 5.3.0 from source on an AWS AMI Linux instance to learn more about the entire compilation toolchain. I have searched a number of threads for several hours and have not found the right combination to understand exactly what is going on.
Looking at ./configure --help, I set the flag --includedir=/home/mybin/include and compiled the programs with no errors, using all the flags under "Fine tuning of the installation directories".
When I compile a program with g++ -v test.cc, I see that by default the compiler looks in
#include "..." search starts here:
#include <...> search starts here:
/home/mybin/lib/gcc/x86_64-unknown-linux-gnu/5.3.0/include
/usr/local/include
/home/mybin/lib/gcc/x86_64-unknown-linux-gnu/5.3.0/include-fixed
/usr/include
End of search list.
for headers such as map and iostream.
Q1: Why doesn't the -v output show the --includedir directory in the search list? I note that it does, however, look there for header files.
Q2: I note that when make install ran, it did not copy the files from the compile tmp directory /home/tmp/gcc-5.3.0/libstdc++-v3/include/std/.... to the --includedir. Is there a flag I have missed to get it to put these files in that directory?
Q3: Also using the --help output I have set CPPFLAGS="-I/home/anotherBin" to test if it will scan this dir for other include files. However it does not seem to work.
So I tried each of the following with no success; what is the correct flag to set?

LDFLAGS="-L/home/anotherBin"
    linker flags, e.g. -L<lib dir> if you have libraries in a nonstandard directory <lib dir>

LIBS="-l/home/anotherBin"
    libraries to pass to the linker, e.g. -l<library>
You have (understandably) misunderstood the function of the --includedir parameter of
./configure. This is a standard parameter of GNU autotools ./configure scripts,
not just GCC's. The same is true for all the other ./configure options under
the heading Fine tuning of the installation directories. They are boilerplate.
The --includedir parameter specifies a non-default directory in which to install the
header files that contain the APIs of a library or libraries that are installed
by the script. Thus e.g. if you made an autotools package for a library
libfoobar that you had written, and I decided to install the package with
./configure --includedir=/usr/local/include/foobar --libdir=/usr/local/lib/foobar
then when I compiled and linked a program using libfoobar:
main.c
#include <foobar.h>
int main(void)
{
foo();
bar();
return 0;
}
I would have to do it like:
gcc -I/usr/local/include/foobar -c -o main.o main.c
gcc -o prog main.o -L/usr/local/lib/foobar -lfoobar
GCC is not a library that you link with your programs. You don't
#include any such thing as "the GCC API" in your source code. --includedir
is not relevant.
In the case of your question Q3 you are also confused about the function of the ./configure variable,
CPPFLAGS. This variable and its fellows (CFLAGS, CXXFLAGS, LDFLAGS, etc.) affects
the behaviour of the compiler you already have when it is
building your new GCC. They have no effect on the behaviour of the new compiler that you build.
In the typical installation with C and C++ (leaving aside the other supported languages),
GCC comprises various cooperating tools for compiling, assembling and linking programs in
those languages plus implementations of the Standard libraries of
those languages. The installed locations of the tools, the binaries of the
Standard libraries and the header files of the Standard libraries are all primarily
controlled by the (specified or default) --prefix configure option, whereby a standard relationship
is maintained between all of these installed locations.
Do not depend on ./configure help to install GCC. Start at the GCC Wiki,
Installing GCC. Read that page carefully and
follow the links as you find necessary, including Installing GCC: Configuration.
I refer you especially to these words on that page:
Options specification
Use options to override several configure time options for GCC. A list of supported options follows;
‘configure --help’ may list other options, but those not listed below may not work and should not normally be used.
(My emphasis)
I have code on my computer that uses PETSc, which depends on MPI. On my computer it works well. I put it on a cluster and exported the paths of gcc, PETSc and OpenMPI (although I was using MPICH on my computer, I hope OpenMPI will also work) to LD_LIBRARY_PATH and PATH. I also changed the paths in the makefile. PETSc, gcc and OpenMPI were all available on the cluster, so I did not configure anything. When I ran make, the compiler gave the error:
fatal error: mpi.h: No such file or directory
I know I did not give complete information, but I can tell more if needed. How can I make PETSc know where mpi.h is?
Typically, you should use mpicc (or mpicxx for C++) to compile instead of gcc (or g++ for C++). These commands are thin wrappers around gcc and g++ that add the appropriate -I/path/to/mpi/includes and -L/path/to/mpi/libs automatically; they should be included with your OpenMPI install. In the absence of that, simply add -I/path/to/mpi/includes to your compile command for the appropriate files. This tells the compiler where to look for the appropriate header files.
To answer the question: to prevent a C/C++ editor from showing errors as you type the "special code", just use:
#include </usr/include/mpi/mpi.h>
which seems to be a link -- but doing that turns off the errors in Netbeans editor so I can code without distraction.
Note: I'm using Ubuntu 18.04 Desktop as both the editing machine and the test-run machine, but I compile manually using mpicc as noted previously.
sudo mpicc c_pi.c -o c_pi
and then...
mpiexec ./c_pi
hth
I'm trying to analyze why a (quite large) program segfaults. If the program crashes it writes a core dump to /tmp which I try to analyze using gdb. However, gdb gives me the following error:
Reading symbols from /home/user/Executable...Dwarf Error:
wrong version in compilation unit header (is 4, should be 2)
[in module /home/user/Executable]
I've searched a bit and found a thread on Stack Overflow where the author assumed this was the result of compiling parts of the code (specifically a library they were using) with a different -g flag.
I've checked the version of the compilation unit on my executable (C++) and a library (C) I'm using in my program via
readelf --debug-dump=info Executable | grep -A 2 'Compilation Unit #'
And apparently the executable has version 4 everywhere, while the library has version 2. I'm wondering whether it is possible to fix this, and how? I'm also quite curious how this problem arose in the first place (toying around with the debug level via the -g flag didn't help at all).
TIA
The set of inputs that produce a single object file (.o) is called a compilation unit; for more info, see wikipedia. For convenience, "compilation unit" is often abbreviated as "CU."
When compiling a CU with debug information, each CU has a debug information section that begins with a CU header; this header contains a version number. This debugging information is in a format called DWARF.
Over time the DWARF standard has evolved. For each major release, the version number has changed. This ensures that when a DWARF producer (e.g., a compiler) creates debug information, the DWARF consumer (e.g., a debugger) knows what to expect.
When gdb complains about the version of a CU, it is really complaining about the version number that is in the DWARF CU header.
To avoid this problem, as you have discovered, you have to make sure that your entire software development toolchain (compiler, linker, debugger) is able to "speak" the same DWARF version. Your solution of compiling the latest version of gdb is correct.
From the GCC 4.8 release notes:
Before GCC 4.8 the default version used was DWARF2. To make GCC 4.8 generate an older DWARF version use -g together with -gdwarf-2 or -gdwarf-3
In my case, adding
-gdwarf-2 -gstrict-dwarf
made the old debugger work again. I agree that using a newer GDB version is the best solution in most cases, though.
I've compiled the library with g++ (instead of gcc), which resulted in the desired compilation unit version. However, this still resulted in the DWARF error thrown by gdb, so I compiled the latest version of gdb on the machine, and finally it works now.