I am attempting to set up a small build cluster at home using distcc. There are two x64 systems and one i686 system. All systems are running Ubuntu 10.10 and are up to date. The system that initiates the build is x64. Distcc works fine between the two x64 systems, but all build tasks sent to the i686 system fail.
So far:
I have installed the multilib package for g++ on that system. I am able to cross-compile to x64 locally using g++ -m64.
I changed the link in /usr/lib/distcc/g++ to point to a script that explicitly sets the -m64 parameter.
Any suggestions?
Attempting this one again after more research:
GCC has a page describing the i386 and x86-64 options. The -m64 flag tells GCC to generate 64-bit code, but you'll also want to specify the CPU type with -march= (for example -march=k8 or another 64-bit-capable CPU) so that the correct instruction set is used.
Since distcc sends the GCC command-line flags out to the remote machines, try adding these flags to the distcc command running locally and skip the remote script for setting flags.
Test the architecture flags on your local x64 machine with plain g++ first, without distcc; once they produce the right binaries there, the same flags should work through distcc.
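A sketch of that client-side invocation, assuming a Makefile that honours CC/CXX (the host names and the -march value are illustrative assumptions, not taken from the question):

```shell
# Hosts and -march value are illustrative; adjust for your cluster and CPUs.
# Passing the flags here means distcc forwards them verbatim to every host.
export DISTCC_HOSTS="localhost x64box i686box"
make -j6 CC="distcc gcc -m64 -march=k8" CXX="distcc g++ -m64 -march=k8"
```

With this approach the i686 host receives an explicit -m64, so no wrapper script is needed on the remote side.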
What compiler flags are used to build standard C/C++ libraries like glibc and libstdc++?
Are they the same across distributions like Debian, Fedora, Arch Linux, etc.?
On Debian machines there is dpkg-buildflags, but I'm not sure whether its flags are overridden for critical runtime libraries.
The one common factor is -O2, with -O2 -g (which enables gdb debugging symbols) being the most common combination. -O2 is the optimization level maintainers expect packages to be compiled at.
If you look at debian/rules in Debian source packages (or -debian.tar.* tarballs), you'll usually find HOST_CFLAGS and HOST_CXXFLAGS describing the C and C++ compiler flags used when the source package is built using Debian tools.
This is documented in the dpkg-buildflags(1) man page, for example: "The default value set by the vendor includes -g and the default optimization level (-O2 usually, or -O0 if the DEB_BUILD_OPTIONS environment variable defines noopt)."
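On a Debian or Ubuntu system you can query the vendor defaults directly (output varies by release, so none is shown here):

```shell
# Print the default C and C++ compiler flags Debian's build tooling would use:
dpkg-buildflags --get CFLAGS
dpkg-buildflags --get CXXFLAGS
```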
Outside Debian, Gentoo recommends -O2 -pipe plus a -march= for the current architecture. (The -pipe option tells GCC to use pipes instead of temporary files during the build, which is usually faster but uses more RAM. It has no effect on the compiled binaries.)
Embedded systems like OpenWRT often use -Os instead, to generate smaller binaries.
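On Gentoo these recommendations typically end up in /etc/portage/make.conf; a minimal sketch, assuming you want to target the build machine's own CPU (otherwise substitute a specific -march value):

```shell
# /etc/portage/make.conf -- -march=native targets the build machine's own CPU.
COMMON_FLAGS="-O2 -pipe -march=native"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
```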
When using Clang 8.0.0 on Windows (from the LLVM prebuilt binaries) with -g or -gline-tables-only, the source map tables are not picked up by the gdb or lldb debuggers.
With the -g flag the file grows in size (which is to be expected), yet neither gdb nor lldb picks up the source.
When compiled with gcc (with the -g flag), the source files are detected by the debugger.
I have tried running the same command (clang -g <codefile>) on macOS High Sierra (clang -v says it is Apple LLVM version 10.0.0 (clang-1000/10.44.4)), where the source files are picked up by lldb. So I guessed the problem is specific to my Windows instance or the LLVM build for Windows.
P.S. output of clang -v on windows:
clang version 8.0.0 (tags/RELEASE_800/final)
Target: x86_64-pc-windows-msvc
Thread model: posix
InstalledDir: C:\Program Files\LLVM\bin
On Windows, Clang is not self-sufficient (at least not the official binaries). You need to have either GCC or MSVC installed for it to function.
As Target: x86_64-pc-windows-msvc indicates, by default your Clang is operating in some kind of MSVC-compatible mode. From what I gathered, it means using the standard library and other libraries provided by your MSVC installation, and presumably generating debug info in some MSVC-specific format.
Add --target=x86_64-w64-windows-gnu to build in GCC-compatible mode. (If you're building for 32 bits rather than 64, replace x86_64 with i686). This will make Clang use headers & libraries provided by your GCC installation, and debug info should be generated in a GCC-compatible way. I'm able to debug resulting binaries with MSYS2's GDB (and that's also where my GCC installation comes from).
If you only have GCC installed and not MSVC, you still must use this flag.
How do I know this is the right --target? This is what MSYS2's Clang uses, and I assume they know what they're doing. If you don't want to type this flag every time, you can replace the official Clang with MSYS2's one, but I'm not sure if it's the best idea.
(I think they used to provide some patches to increase compatibility with MinGW, but now the official binaries work equally well, except for the need to specify the target. Also, last time I checked their binary distribution was several GB larger, due to their inability to get dynamic linking to work. Also some of the versions they provided were prone to crashing. All those problems come from them building their Clang with MinGW, which Clang doesn't seem to support very well out of the box. In their defence, they're actively maintaining their distribution, and I think they even ship libc++ for Windows, which the official distribution doesn't do.)
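A minimal invocation sketch, assuming an MSYS2/MinGW-w64 GCC and GDB are on PATH (the file names are illustrative):

```shell
# Build in GCC-compatible mode so debug info is emitted in a GDB-friendly form:
clang --target=x86_64-w64-windows-gnu -g main.c -o main.exe
# Then debug with MSYS2's GDB:
gdb main.exe
```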
I am attempting to port an application to an ARM processor and have run into a roadblock. I cannot change the source code, and it uses a feature that is not available in the C++ runtime on the ARM host. I get this message on the ARM host:
/usr/lib/arm-linux-gnueabihf/libstdc++.so.6: version 'CXXABI_1.3.8' not found (required by MyDaemon);
I ran strings libstdc++.so.6 | grep CXXABI and got a list whose last element is CXXABI_1.3.6.
Can I simply replace the toolchain/runtime on the arm machine or do I have to worry about other programs that link to it and will not run any longer?
g++ --version gives (Debian 4.6.3-14) 4.6.3
so maybe I can use a 4.9 toolchain and runtime?
The issue with that approach is that when the different libstdc++ is loaded, it reports:
GLIBC_2.17 not found
The problem is that the environment on the machine where the application is compiled differs from the environment where it is run, and I would like to know what to read to be able to solve this problem.
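To see exactly which symbol versions the binary needs versus what the target's runtime provides, one approach is the following (the library path is illustrative; adjust for your system):

```shell
# Versions the binary requires (look at the "Version needs" section):
readelf -V MyDaemon | grep -E 'CXXABI|GLIBC'
# Versions the target's libstdc++ actually provides:
strings /usr/lib/arm-linux-gnueabihf/libstdc++.so.6 | grep CXXABI
```

If a required version (e.g. CXXABI_1.3.8 or GLIBC_2.17) is missing from the second listing, the runtime on the target is too old for the binary as built.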
Is it correct that, from Pepper 18 onwards, I don't need the SCons build system in order to compile, but can instead use gcc (the nacl versions) and Makefiles?
Also, is it correct that the generated .nexe files will run regardless of the platform the web server runs on, not just the platform they were compiled on? For example, the native code module is developed and compiled under Mac OS, generating a 32-bit and a 64-bit .nexe file. The web server I will load this module on runs on Linux; will it still execute the modules in both 32-bit and 64-bit versions?
Build system for Native Client
No version of the Native Client SDK mandates a particular build system; it has been possible at any time to use SCons, GNU Make, CMake, or even just shell scripts. Put differently, the compilers and tools (which are based on gcc and the GNU toolchain) are independent of the build system the developer decides to use.
However, up to and including the Pepper version 17 of the Native Client SDK, the examples in the SDK came with build files for SCons, and SCons was included in the SDK. From Pepper 18 and onwards this is no longer the case. Instead the build files that are provided for the examples are Makefiles intended for GNU Make.
Also see the release notes for the Pepper 18 version of the SDK.
Cross-compiling
The tools provided in the SDK currently support the 32-bit x86 and 64-bit x86 architectures. The platform of the web server is not important because the Native Client module runs on the client (that is, in the browser). This means there are two systems to consider: the user's system and the developer's system.
On the user's system, when Chrome encounters a Native Client module in a page, it fetches the executable (.nexe file) that's appropriate for the browser on that client. Hence, if a user on 64-bit Windows visits the page, the 64-bit binary will be fetched; if the user is on a 32-bit Mac, the 32-bit binary is fetched. There are exceptions, which I'll treat separately below. Chrome determines the names of the 32-bit and 64-bit .nexes from the manifest file. See the Native Client SDK site (www.GoNaCl.com) for a description and an example of a manifest file.
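A manifest along these lines (file names here are illustrative, not from the SDK docs) tells Chrome which .nexe to fetch for each architecture:

```json
{
  "program": {
    "x86-32": { "url": "hello_world_x86_32.nexe" },
    "x86-64": { "url": "hello_world_x86_64.nexe" }
  }
}
```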
The developer can, and should, produce both 32-bit and 64-bit executables regardless of the operating system and architecture used for development. Running 'make' in the examples/ directory of Pepper 18 and looking at the commands issued is a convenient way of seeing how to do this. E.g., part of the 'make hello_world_glibc' output reads something like:
i686-nacl-gcc -o hello_world_x86_32.nexe hello_world.c -m32 -O0 -g -pthread -O0 -g -Wno-long-long -Wall -lppapi
and
i686-nacl-gcc -o hello_world_x86_64.nexe hello_world.c -m64 -O0 -g -pthread -O0 -g -Wno-long-long -Wall -lppapi
The first line produces the 32-bit .nexe; the second produces the 64-bit .nexe. The important flag is -m32/-m64, which selects the architecture. Always build both, so that clients on both 32-bit and 64-bit machines can use the app.
Longer term, only one deployment format will be needed, and ARM will be added as a directly supported architecture. See the Portable Native Client project for details.
Here is the specific matching of browser and client architecture to 32/64 bit:
Mac OS (32-bit and 64-bit) -> 32-bit .nexe (Chrome is 32-bit)
Windows (32-bit) -> 32-bit .nexe
Windows (64-bit) -> 64-bit .nexe (Chrome is 32-bit, but starts a 64-bit broker process)
Linux (32-bit) -> 32-bit .nexe
Linux (64-bit) -> 32-bit Chrome fetches 32-bit .nexe; 64-bit Chrome fetches 64-bit .nexe
So as a general rule Chrome fetches the .nexe that matches its own bitness, except on 64-bit Windows, where Chrome fetches the 64-bit .nexe despite being 32-bit itself.
I have built GCC 4.7 on my x86 32-bit linux system. When I try to cross-compile with the -m64 flag I get the following:
sorry, unimplemented: 64-bit mode not compiled in
while the compiler provided by default by my Linux distribution can cross-compile with -m64.
What do I have to pass to ./configure to enable 64-bit mode in GCC? These are the options I used to build GCC 4.7:
$ /usr/local/bin/g++ -v
Using built-in specs.
COLLECT_GCC=/usr/local/bin/g++
COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/i686-pc-linux-gnu/4.7.0/lto-wrapper
Target: i686-pc-linux-gnu
Configured with: ./configure --enable-multiarch --with-cloog=/usr/local/ --with-mpfr=/usr/local/ --with-ppl=/usr/local/ --verbose --enable-languages=c,c++
Thread model: posix
gcc version 4.7.0 20120113 (experimental) (GCC)
EDIT:
--enable-multilib and --enable-targets=i686-pc-linux-gnu,x86_64-pc-linux-gnu
do not change the situation. The compiler still complains about 64 bit mode not compiled in:
$ g++ -v
Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/i686-pc-linux-gnu/4.7.0/lto-wrapper
Target: i686-pc-linux-gnu
Configured with: ./configure --enable-multiarch --with-cloog=/usr/local/ --with-mpfr=/usr/local/ --with-ppl=/usr/local/ --verbose --enable-languages=c,c++ --enable-multilib --enable-targets=i686-pc-linux-gnu,x86_64-pc-linux-gnu
Thread model: posix
gcc version 4.7.0 20120113 (experimental) (GCC)
$ g++ -m64 c.cpp
c.cpp:1:0: sorry, unimplemented: 64-bit mode not compiled in
This typically means that you're using the wrong (old) compiler.
The new compilers support both -m32 and -m64. You have to put the path to the new compilers (in the gcc/MinGW subdirectory of Rtools) on PATH before any old compilers in Rtools.
Try updating your compiler's binary and library paths to the 64-bit versions; other resources such as lib folders should change accordingly.
You will need both binutils and gcc configured with:
--enable-multilib
and probably:
--enable-targets=i686-pc-linux-gnu,x86_64-pc-linux-gnu
to support multilib (the -m64 and/or -m32 options). You'll also need two versions of stuff like glibc to be able to link and run the resulting binaries.
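A configure sketch based on the options above (the prefix and library paths are illustrative; on a 32-bit host you also need a 64-bit glibc and a multilib-enabled binutils built the same way):

```shell
# Rebuild GCC with multilib so both -m32 and -m64 are available:
./configure --enable-multilib \
            --enable-targets=i686-pc-linux-gnu,x86_64-pc-linux-gnu \
            --enable-languages=c,c++
make && make install
```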
Just resolved this issue.
In the environment variables, remove the entries pointing to any outdated C++ package.
In my case, I was working in Anaconda on Windows 64-bit. In Anaconda, I ran 'conda install mingw libpython'.
MinGW provides the C++ compiler. But I had earlier installed Cygwin's MinGW for C++ compilation, which hadn't been updated; this was the source of the conflict.
I resolved the issue by simply removing the PATH environment-variable entries corresponding to those C++ packages.
I have tried suggestions from almost all the forums, and this solution works.
Please let me know in case anyone needs help. :)
I had the same problem on Windows. Despite installing Code::Blocks 20.03, I couldn't compile my code as 64-bit.
I solved it by setting the compiler to x86_64-w64-mingw32-g++.exe instead of g++.exe. It was in the same bin directory as g++.exe.
In Code::Blocks, go to the Settings menu:
Settings -> Compiler...
Select the "Toolchain executables" tab, and from there on it is obvious.
Had the same issues. My solution:
Update everything (R, Rstudio, R packages) and close Rstudio.
Uninstall Rtools and install the latest version.
Add only 2 entries under Environment Variables/System variables/Path:
- C:\Rtools\bin
- C:\Rtools\mingw_64\bin (!not the 32-bit version)
Path entries have to be in this order and above %SystemRoot%\System32
I did NOT install in the strongly recommended default location on C:
After that open Rstudio and re-install Rcpp via console:
install.packages("Rcpp")
Test if it's working with:
Rcpp::evalCpp("2+2")
After that just switch to the Terminal in Rstudio, go into the cmdstan source folder and type 'make build'.
--- CmdStan v2.19.1 built ---
Done!
Details:
> sessionInfo()
R version 3.6.0 (2019-04-26)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17763)
Matrix products: default
locale:
[1] LC_COLLATE=Slovenian_Slovenia.1250 LC_CTYPE=Slovenian_Slovenia.1250 LC_MONETARY=Slovenian_Slovenia.1250 LC_NUMERIC=C
[5] LC_TIME=Slovenian_Slovenia.1250
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_3.6.0 tools_3.6.0