Clang not generating debug info on -g flag - c++

When using clang v8.0.0 on Windows (from the LLVM prebuilt binaries) with -g or -gline-tables-only, the generated debug info is not picked up by the gdb or lldb debuggers.
With the -g flag the output file grows in size (which is to be expected), yet neither gdb nor lldb picks up the source.
When the same code is compiled with gcc (with the -g flag), the source files are detected by the debugger.
I have tried running the same command (clang -g <codefile>) on macOS High Sierra (clang -v says it is Apple LLVM version 10.0.0 (clang-1000.10.44.4)), where the source files are picked up by lldb. So I guess the issue is specific to my Windows instance or to the LLVM build for Windows.
P.S. Output of clang -v on Windows:
clang version 8.0.0 (tags/RELEASE_800/final)
Target: x86_64-pc-windows-msvc
Thread model: posix
InstalledDir: C:\Program Files\LLVM\bin

On Windows, Clang is not self-sufficient (at least not the official binaries). You need to have either GCC or MSVC installed for it to function.
As Target: x86_64-pc-windows-msvc indicates, by default your Clang is operating in some kind of MSVC-compatible mode. From what I gathered, it means using the standard library and other libraries provided by your MSVC installation, and presumably generating debug info in some MSVC-specific format.
Add --target=x86_64-w64-windows-gnu to build in GCC-compatible mode. (If you're building for 32 bits rather than 64, replace x86_64 with i686). This will make Clang use headers & libraries provided by your GCC installation, and debug info should be generated in a GCC-compatible way. I'm able to debug resulting binaries with MSYS2's GDB (and that's also where my GCC installation comes from).
If you only have GCC installed and not MSVC, you still must use this flag.
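For example (a minimal sketch; the file name is illustrative), a GNU-compatible build that GDB can then load might look like:
clang++ -g --target=x86_64-w64-windows-gnu main.cpp -o main.exe
gdb main.exe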
How do I know this is the right --target? This is what MSYS2's Clang uses, and I assume they know what they're doing. If you don't want to type this flag every time, you can replace the official Clang with MSYS2's one, but I'm not sure if it's the best idea.
(I think they used to provide some patches to increase compatibility with MinGW, but now the official binaries work equally well, except for the need to specify the target. Also, last time I checked their binary distribution was several GB larger, due to their inability to get dynamic linking to work. Also some of the versions they provided were prone to crashing. All those problems come from them building their Clang with MinGW, which Clang doesn't seem to support very well out of the box. In their defence, they're actively maintaining their distribution, and I think they even ship libc++ for Windows, which the official distribution doesn't do.)

Related

force clang to use llvm instead of gcc in linux

How can I make clang use LLVM as its backend to compile C++ files, without using GCC as the backend?
I am pretty sure clang is using gcc because
$ clang++ --version
clang version 6.0.1 (tags/RELEASE_601/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/bin
it uses gnu as target instead of llvm. My llvm-config output:
$ llvm-config --version --targets-built
6.0.1
X86
I built both clang and llvm from source using standard options for my build target (X86).
EDIT: I think it is using gcc as the backend because this code produces an error in an online IDE but works on my machine with both clang++ and g++. The code relies on the fact that GCC ships an implementation of policy-based data structures, which are not part of the standard.
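The code in question isn't shown, but a minimal example of the kind of libstdc++-specific code that behaves this way (purely illustrative, my assumption of what is meant) would be:
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <functional>
using namespace __gnu_pbds;
int main() {
    // order-statistics tree: a GNU libstdc++ extension, not part of standard C++
    tree<int, null_type, std::less<int>, rb_tree_tag,
         tree_order_statistics_node_update> t;
    t.insert(1);
    t.insert(3);
    return (int)t.order_of_key(3); // number of elements less than 3, i.e. 1
}
Such code compiles with clang++ on Linux because clang uses the GNU libstdc++ headers installed on the system, not because it uses GCC as a backend.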
The problem is in the interpretation of the data. The target that clang refers to has to do with the platform for which you are generating code.
x86_64: a 64-bit processor compatible with Intel/AMD
unknown: the vendor field of the triple; no specific vendor (such as pc or apple) is specified here
linux: you are using a Linux kernel/operating system
gnu: the object structure should follow the GNU standards; I believe this directly maps onto ELF
This will be different if you use BSD or windows as OS, or when your processor is an ARM, Intel 32 bit, Spark ...
The only moment you should be worrying about the target is when you are cross compiling. In other words, if the computer on which you are running the compiler has different requirements for the executable structure than the machine on which you will be running it.
PS: Clang always uses LLVM for its IR. Ignoring the deprecated Clang+C2, it always uses the LLVM optimizer and code generator.
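A quick way to see that LLVM really is doing the work (file name illustrative):
$ clang++ -S -emit-llvm test.cpp -o test.ll
$ head test.ll    # human-readable LLVM IR emitted by clang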

Glibc vs GCC vs binutils compatibility

Is there a sort of official documentation about version compatibility between binutils, glibc and GCC? I found this matrix for binutils vs GCC version compatibility. It would be good to have something like this for GCC vs glibc as well.
The point I'm asking this for is that I need to know if I can build, say, cross GCC 4.9.2 with "embedded" glibc 2.2.4 to be able to support quite old targets like CentOS 5.
Thank you.
It's extremely unlikely you'll be able to build such an old version of glibc with such a new version of gcc. glibc documents the minimum required versions of binutils & gcc in its INSTALL file.
glibc-2.23 states:
Recommended Tools for Compilation
GCC 4.7 or newer
GNU 'binutils' 2.22 or later
Typically if you want to go newer than those, glibc will generally work with the version of gcc that was in development at the time of the release. E.g. glibc-2.23 was released 18 Feb 2016 and gcc-6 was under development at that time, so glibc-2.23 will work with gcc-4.7 through gcc-6.
So find the version of gcc you want, then find its release date, then look at the glibc releases from around the same time.
All that said, using an old version of glibc is a terrible idea. It will be full of known security vulnerabilities (including remotely exploitable ones). The latest glibc-2.23 release, for example, fixed CVE-2015-7547, which affects any application doing DNS name resolution and affects versions starting with glibc-2.9. Remember: this is not the only bug lurking.
When building a cross-compiler there are at least two, and sometimes three, platform types to consider:
Platform A is used to BUILD a cross compiler HOSTED on Platform B which TARGETS binaries for embedded Platform C. I used the words BUILD, HOSTED, and TARGETS intentionally, as those are the options passed to configure when building a cross-GCC.
BUILD PLATFORM: Platform of machine which will create the cross-GCC
HOST PLATFORM: Platform of machine which will use the cross-GCC to create binaries
TARGET PLATFORM: Platform of machine which will run the binaries created by the cross-GCC
Consider the following (Canadian Cross Config, BUILD != HOST platform):
A 32-bit x86 Windows PC running the mingw32 toolchain will be used to compile a cross-GCC. This cross-GCC will be used on 64-bit x86 Linux computers. The binaries created by the cross-GCC should run on a 32-bit PowerPC single-board-computer running LynxOS 178 RtOS (Realtime Operating System).
In the above scenario, our platforms are as follows:
BUILD: i686-w32-mingw32
HOST: x86_64-linux-gnu
TARGET: powerpc-lynx-lynxos178
However, this is not the typical configuration. Most often BUILD PLATFORM and HOST PLATFORM are the same.
A more typical scenario (Regular Cross Config, BUILD == HOST platform):
A 64-bit x86 Linux server will be used to compile a cross-GCC. This cross-GCC will also be used on 64-bit x86 Linux computers. The binaries created by the cross-GCC should run on a 32-bit PowerPC single-board-computer running LynxOS 178 RtOS (Realtime Operating System).
In the above scenario, our platforms are as follows:
BUILD: x86_64-linux-gnu
HOST: x86_64-linux-gnu
TARGET: powerpc-lynx-lynxos178
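As a sketch (the prefix and source paths are illustrative, and real builds need more options), these triplets map directly onto the configure flags mentioned above:
../gcc-4.9.2/configure \
    --build=x86_64-linux-gnu \
    --host=x86_64-linux-gnu \
    --target=powerpc-lynx-lynxos178 \
    --prefix=/opt/cross/powerpc-lynx-lynxos178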
When building the cross-GCC (assuming we are building a Regular Cross Config, where BUILD == HOST platform), native versions of GNU Binutils, GCC, glibc, and libstdc++ (among other libraries) will be required to actually compile the cross-GCC. It is less about specific versions of each component, and more about whether each component supports the specific language features required to compile GCC-4.9.2. (Note: just because GCC-4.9.2 implements language feature X does not mean that language feature X must be supported by the version of GCC used to compile GCC-4.9.2. In the same way, just because glibc-X.X.X implements library feature Y does not mean that the version of GCC used to compile glibc-X.X.X must have been linked against a glibc that implements feature Y.)
In your case, you should simply build your cross-GCC 4.9.2 (or, if you are not cross compiling, i.e. you are compiling for CentOS 5 on Linux, build a native GCC 4.9.2), and then when you link your executable for CentOS 5, explicitly link glibc v2.2.4 using -l:libc.so.2.2.4. You will also probably need to pass -std=c99 or -std=gnu99 when you compile, as I highly doubt glibc 2.2.4 supports the C 2011 standard.

To exclude /usr/include/c++/4.3/ in compiling code with intel compiler

I am working on a cluster which has an older version of the Intel compiler (11) and gcc (4.3).
I have installed a newer trial version of Intel Composer XE (with the 14.0 compiler). I have also installed gcc 4.9. Both the newer gcc and Intel compilers are in my home directory (non-root).
I use C++11 in my code, so obviously I use the -std=c++11 flag when compiling. I provide -L and -I flags in my makefile to include Intel's includes and libraries.
When I try to compile my code with icpc, the compiler looks in the /usr/include/c++/4.3/.... path.
I tried to remove the path by setting C_INCLUDE_PATH and CPLUS_INCLUDE_PATH to
/home/peter/intel/composerxe/include.
But it still looks in the /usr/include/c++ path. Because of this, the old files in /usr/include/c++/4.3/tr1_impl/ ... are included and not the latest ones.
How do I stop the Intel compiler from looking in these paths and make it see the new ones? And if I use gcc 4.9 instead of the Intel compiler, what changes do I need to make?
I tried adding the -nostdinc flag when compiling, but no luck. It gives this error:
catastrophic error: cannot open source file "iostream"
because the first included header is iostream.
I figured out that the Intel compiler shares headers with the GCC already installed on the system and uses the /usr/include or /usr/local/include paths.
In my case, the operating system on our cluster is SUSE Linux with gcc 4.3 as the default GNU compiler.
Now, the cluster management software is set up such that two compilers (Intel and gcc) cannot be loaded simultaneously (module intel version 11 and module gcc version 4.5).
After running gcc --print-search-dirs I found out that the Intel compiler inherits some common headers (iostream, etc.) from the existing gcc compiler (which is 4.3 in this case) and hence includes the old tr1_impl files and headers, because of which I get errors compiling C++11 code.
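A quick way to see exactly which C++ include directories g++ searches (a hedged example; I have not verified the equivalent icpc invocation):
echo | g++ -E -x c++ - -v 2>&1 | sed -n '/search starts here/,/End of search list/p'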
I did install gcc 4.9 in my home directory, but Intel 14 has compatibility issues including headers from gcc 4.9, which again produced weird compiler errors.
While using gcc 4.5 (via module load gcc-4.5) at least got me to compile the code, I think the only solution to this problem would be to use gcc 4.7.2 and provide its include path, since that was the first release with C++11 support (not the C++0x working draft), implementing a lot of C++11 features except concurrency.
One needs to know that Intel compilers do depend upon an existing gcc compiler provided by the OS or installed by the user/root. Without that, they are pretty much of no use.
Cheers!

Unable to cross-compile to SPARC using clang

So here's the situation: I need to be able to compile binaries from a Linux machine (on Ubuntu, for what it's worth) which are able to run on a SPARC server. The program I'm trying to compile is very simple:
#include <stdio.h>
#include <stdlib.h>

int main() {
    printf("Testing the SPARC program...");
    return EXIT_SUCCESS;
}
I've tried a number of different compile lines to get it to work, but unfortunately nothing appears to be working.
I tried the traditional:
clang -target sparc blah.c -o blahsparc
But this doesn't work, with a bunch of assembler failures:
/tmp/blah-519e77.s: Assembler messages:
/tmp/blah-519e77.s:7: Error: unknown pseudo-op: '.register'
/tmp/blah-519e77.s:8: Error: unknown pseudo-op: '.register'
/tmp/blah-519e77.s:9: Error: unknown pseudo-op: '.register'
/tmp/blah-519e77.s:10: Error: unknown pseudo-op: '.register'
/tmp/blah-519e77.s:11: Error: no such instruction: 'save %sp,-240,%sp'
/tmp/blah-519e77.s:12: Error: no such instruction: 'st %g0, [%fp+2043]'
...
clang: error: assembler (via gcc) command failed with exit code 1 (use -v to see invocation)
I've tried this also:
clang -cc1 -triple "sparc-unknown-Linux" blah.c -o blahsparc
which complains about the missing headers, so instead of using -cc1, I use -Xclang:
clang -Xclang -triple -Xclang "sparc-unknown-Linux" blah.c -o blahsparc
however, this also fails due to "error: unknown target CPU 'x86-64'".
I'm not sure where to proceed with this. I've tried using crosstool-ng as well with very little success.
As of the 3.4.2 release (June 2014), llvm is missing code necessary to be able to generate object files for sparc targets. Older releases (1.x & 2.x) had support for it, but llvm's framework for emitting object files was less mature back then. When the current framework was rolled out it looks like they didn't migrate all platforms.
The documentation seems to imply that a combination of llvm/gcc is known to work, but I think that table was tabulated based on a much earlier version of llvm when they had a less mature framework for emitting object files.
Support for emitting object files was added to their SVN trunk in revision r198533 (this thread discusses the commit), but as you can see in the 3.4.2 final release, files & changes added in r198533 aren't present.
As an aside, clang currently isn't functional on sparc solaris (not sure about sparc in general). The parser seems to have trouble parsing templates; I get coredumps & the like. I ran across a thread a week or so ago discussing alignment problems in sparc/solaris clang, and this may be one of the reasons clang isn't yet usable on this platform.
If you need a cross compiler for Sparc that runs on an Ubuntu machine, the simplest way I know of is to use Buildroot. Here's a small tutorial on how to obtain a cross compiler and test the generated executables on a Sparc emulator.
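Once Buildroot has produced the toolchain, using it is roughly like this (a sketch; the exact compiler name and paths depend on your Buildroot configuration):
./output/host/usr/bin/sparc-linux-gcc blah.c -o blahsparc
qemu-sparc ./blahsparc    # quick smoke test under QEMU user-mode emulation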
LLVM 3.6.2 has some support for sparc now... I was able to build llvm 3.6.2 and clang 3.6.2-r100 on my T2000. I haven't gotten C++ support working but I have built moderately complex C applications like htop.
I did compile LLVM using gcc 5.2; however, a lower version should work as well, although I'd suggest at least gcc 4.9 and no lower than gcc 4.7.
The LLVM emerge on Gentoo crashed during the compile, but I was able to resume it by moving to the portage directory with the llvm ebuilds and restarting the build manually:
cd /usr/portage/*/llvm/
ebuild llvm-3.6.2.ebuild merge
I had to override some of the default compiler settings:
CC="clang -target sparc-unknown-linux-gnu"
CXX="clang++ -target sparc-unknown-linux-gnu"
CFLAGS="-O2 -pipe"
CXXFLAGS="${CFLAGS}"
I don't know if you will be able to use this to build from an x86 machine... though clang is supposed to be able to do that. But worst case you might be able to get this going in a qemu-system-sparc64 VM or on some real hardware that you can find cheap on eBay (T5xxx hardware is coming down in price and blades are dirt cheap).
I recently updated to clang 3.8 (which is as yet unreleased) and I was able to compile a c++ application by passing -lstdc++ in addition to the options above. I believe this is the same behavior as gcc when invoked as gcc rather than g++.
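In other words, a C++ build with that clang looked roughly like this (file name illustrative):
clang++ -target sparc-unknown-linux-gnu blah.cpp -o blahsparc -lstdc++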

How to compile C++ programs in Code::Blocks for 32-bit computers with the dual-target MinGW compiler [duplicate]

I've downloaded MinGW with mingw-get-inst, and now I've noticed that it cannot compile for x64.
So is there any 32-bit binary version of the MinGW compiler that can both compile for 32-bit Windows and also for 64-bit Windows?
I don't want a 64-bit version that can generate 32-bit code, since I want the compiler to also run on 32-bit Windows, and I'm only looking for precompiled binaries here, not source files, since I've spent countless hours compiling GCC and failing, and I've given up for a while. :(
AFAIK mingw targets either 32-bit Windows or 64-bit Windows, but not both, so you would need two installs. And the latter is still considered beta.
What you want is either mingw-w64-bin_i686-mingw or mingw-w64-bin_i686-cygwin if you want to compile for 64-bit Windows. For Win32, just use what you get with mingw-get-inst.
See http://sourceforge.net/apps/trac/mingw-w64/wiki/download%20filename%20structure for an explanation of file names.
I realize this is an old question. However, it is linked from the many times the question has been repeated.
I have found, after lots of research, that by now, years later, both compilers are commonly installed by default when installing MinGW from your repository (e.g. via synaptic).
You can check and verify by running Linux's locate command:
$ locate -r "mingw32.*[cg]++$"
On my Ubuntu (13.10) install I have by default the following compilers to choose from... found by issuing the locate command.
/usr/bin/amd64-mingw32msvc-c++
/usr/bin/amd64-mingw32msvc-g++
/usr/bin/i586-mingw32msvc-c++
/usr/bin/i586-mingw32msvc-g++
/usr/bin/i686-w64-mingw32-c++
/usr/bin/i686-w64-mingw32-g++
/usr/bin/x86_64-w64-mingw32-c++
/usr/bin/x86_64-w64-mingw32-g++
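Any of these can be used directly to cross-compile a Windows binary from Linux; for example (file names illustrative):
i686-w64-mingw32-g++ hello.cpp -o hello32.exe      # targets 32-bit Windows
x86_64-w64-mingw32-g++ hello.cpp -o hello64.exe    # targets 64-bit Windows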
Finally, the least you'd have to do on many systems is run:
$ sudo apt-get install gcc-mingw32
I hope the many links to this page can spare a lot of programmers some search time.
For your situation, you can download the multilib (including lib32 and lib64) version of MinGW-w64:
Multilib Toolchains (Targetting Win32 and Win64)
By default it compiles for 64-bit. You can add the -m32 flag to compile a 32-bit program.
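For example (a sketch, assuming the multilib toolchain's g++ is on your PATH):
g++ main.cpp -o main64.exe        # 64-bit by default
g++ -m32 main.cpp -o main32.exe   # 32-bit via the multilib libraries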
But sadly, no gdb is provided; you have to add it manually.
According to mingw-w64's todo list, the multilib version of gcc is done, but a multilib gdb is still in progress, so you may be able to use it in the future.
Support of multilib build in configure and in gcc. Parts are already present in gcc's 4.5 version by using target triplet -w64-mingw32.
gdb -- Native support is present, but some features like multi-arch support (debugging 32-bit and 64-bit by one gdb) are still missing features.
mingw-64-todo-list