Choosing compiler options based on the operating system in boost-build - c++

Currently I can build my program with Boost.Build on different platforms by setting the toolset and parameters on the command line. For example:
Linux:
b2
MacOS:
b2 toolset=clang cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++"
Is there a way to create a rule in the Jamroot file to decide which compiler to use based on the operating system? I am looking for something along these lines:
import os ;
if [ os.on-macos ] {
    using clang : : : <cxxflags>"-stdlib=libc++" <linkflags>"-stdlib=libc++" ;
}
On Linux it automatically decides to use gcc, but on the Mac, if I don't specify the clang toolset, it tries (without success) to compile with gcc.
Just for reference, here is my current Jamroot (any suggestions also appreciated):
# Project requirements (note, if running on a Mac you have to build foghorn with clang with libc++)
project myproject
    : requirements <cxxflags>-std=c++11 <linkflags>-std=c++11 ;

# Build binaries in src
lib boost_program_options ;
exe app
    : src/main.cpp src/utils src/tools boost_program_options
    ;

How about using the Jamroot? I have the following in mine. It selects between two GCC versions on Linux, depending on what's in an environment variable, and chooses vacpp on AIX.
import os ;
import modules ;

if [ os.name ] = LINUX
{
    switch [ modules.peek : ODSHOME ]
    {
        case *gcc-4* : using gcc : 4.4 : g++-4.4 ;
        case *gcc-3.3* : using gcc : 3.3 : g++-3.3 ;
        case * : error Only gcc v4 and gcc v3.3 supported. ;
    }
}
else if [ os.name ] = AIX
{
    using vacpp ;
}
else
{
    error Only Linux and AIX supported at present. ;
}
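Adapting the same pattern to the macOS case in the question, a sketch might look like this (assuming os.name reports MACOSX on a Mac, and using the four-argument form of using to pass flags):
import os ;

if [ os.name ] = MACOSX
{
    # declare clang with libc++, mirroring the command line from the question
    using clang : : : <cxxflags>-stdlib=libc++ <linkflags>-stdlib=libc++ ;
}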

After a long time I have found out that there is really no way (apart from very hacky ones) to do this. Boost.Build deliberately leaves the choice of toolset to the user.
The user has several ways to specify the toolset:
on the command line, with toolset=gcc for example (as in the b2 invocations above)
in the user configuration, by setting it in user-config.jam for all projects compiled by that user
in the site configuration, by setting it in site-config.jam for all users
The user-config.jam can be in the user's $HOME or in the Boost.Build path.
The site-config.jam should be in the /etc directory, but it could also be in the two locations above.
In summary, set up your site-config or user-config for a pleasant experience, and write a nice README file for users trying to compile your program.
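For example, a minimal user-config.jam on a Mac could declare clang once for everything that user builds (a sketch; the flags mirror the command line above):
# ~/user-config.jam
# declare clang as an available toolset, with libc++ flags baked in
using clang : : : <cxxflags>-stdlib=libc++ <linkflags>-stdlib=libc++ ;
After that, b2 toolset=clang picks it up and the flags no longer need to be repeated on the command line.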
Hope this helps someone else.

Related

How to compile a Linux LTO-enabled library with Premake 5?

SO wisdom, I'm turning to you. I'm trying to build a 64-bit static lib using LTO with Makefiles and Premake 5 on Ubuntu 16.04 LTS.
Here's the premake script I'm using:
-- premake5.lua
workspace "TestApp"
    location "TestApp" -- The directory of generated files - .sln, etc.
    configurations { "Debug", "Shipping" }
    platforms { "Linux_Static", "Linux_DLL" }
    targetdir "TestApp/Build/%{cfg.platform}/%{cfg.buildcfg}"
    objdir "TestApp/Build/"
    language "C++"
    architecture "x86_64"
    system "linux"

    filter "platforms:*Static"
        kind "StaticLib"
    filter "platforms:*DLL"
        kind "SharedLib"
    filter "kind:SharedLib"
        defines { "TEST_USE_DLL", "TEST_DLL_EXPORT" }

    -- Configuration filters
    configuration "*"
        flags { "ExtraWarnings", "C++14", "MultiProcessorCompile", "ShadowedVariables", "UndefinedIdentifiers" }
    configuration { "Debug" }
        symbols "On"
        defines { "TEST_DEBUG" }
        optimize "Debug"
    configuration "Shipping"
        defines { "TEST_SHIPPING" }
        optimize "Full"
        flags { "LinkTimeOptimization" }
        -- step 1
        --buildoptions "--plugin=$$(gcc --print-file-name=liblto_plugin.so)"
        -- step 2
        --toolset "clang"
        -- step 3
        --premake.tools.gcc.ar = "gcc-ar"

-- Projects
project "TestCore"
    location "TestApp/Core"
    files { "TestApp/Core/*.h", "TestApp/Core/*.cpp" }
    includedirs { "TestApp/" }

project "UnitTests"
    location "TestApp/Tests"
    kind "ConsoleApp"
    links { "TestCore" }
    objdir "TestApp/Tests/Build/"
    files { "TestApp/Tests/UnitTests/*.cpp", "TestApp/ThirdParty/Catch/*" }
    includedirs { "TestApp/ThirdParty/Catch", "TestApp/" }
    removedefines { "TEST_DLL_EXPORT" }

    filter { "platforms:*DLL", "system:linux" }
        runpathdirs { "Build/%{cfg.platform}/%{cfg.buildcfg}" }
"Shipping" is the faulty configuration. I also bundled the whole test project in a zip for you to try to reproduce the issue.
The errors I have when compiling the TestCore library are first plugin needed to handle lto object, then plugin /usr/lib/gcc/x86_64-linux-gnu/5/liblto_plugin.so is not licensed under a GPL-compatible license.
What can we do about it ? If you have any knowledge to make it work with GCC, please help.
What you would do to reproduce the GCC errors after extracting the zip:
cd testBreaking
premake5 gmake
cd TestApp
make config=shipping_linux_static TestCore (gives the "plugin needed to handle lto object" error)
Uncomment line 37 of premake5.lua to get the "not licensed under a GPL-compatible license" error.
Uncomment line 43 to use gcc-ar instead of ar; notice it doesn't work either.
Using the gcc option -fuse-linker-plugin doesn't help.
Some more system info:
Ubuntu 16.04 LTS
gcc 5.4, make 4.1, ar 2.26.1
premake 5.0.0-alpha11
I got it working with Clang. Using the clang toolset for the Shipping configuration (with LLVM 3.9), the library seems to compile fine. But then I got another error:
error adding symbols: Archive has no index; run ranlib to add one
I managed to work around this issue by calling ranlib Build/Linux_Static/Shipping/libTestCore.a --plugin /usr/lib/llvm-3.9/lib/LLVMgold.so, then running make again.
So it painfully works using Clang.
I read that I could create a specific premake toolset for this kind of thing, because it's recommended to replace all the GNU binutils with their gcc- counterparts (e.g. gcc-ar instead of ar), but having rapidly tinkered with premake.tools.gcc.ar = "gcc-ar" with no result, I'm not so sure it would help.
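For reference, the full Clang sequence that eventually worked, gathered in one place (paths assume LLVM 3.9 as above, with toolset "clang" enabled for the Shipping configuration):
premake5 gmake
cd TestApp
make config=shipping_linux_static TestCore
# linking against the archive fails with "Archive has no index", so index it manually:
ranlib Build/Linux_Static/Shipping/libTestCore.a --plugin /usr/lib/llvm-3.9/lib/LLVMgold.so
make config=shipping_linux_static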

Configuring DUB to use 64-bit compiler

How do I configure DUB to compile my application as a 64-bit executable? Here's my dub.json:
{
    "name": "dvulkanbase",
    "targetType": "executable",
    "description": "Vulkan boilerplate",
    "authors": ["Myself"],
    "homepage": "http://something",
    "license": "MIT"
}
I tried adding this line to dub.json:
"dflags-dmd": ["-m64"]
but then dub build printed:
## Warning for package dvulkanbase ##
The following compiler flags have been specified in the package description
file. They are handled by DUB and direct use in packages is discouraged.
Alternatively, you can set the DFLAGS environment variable to pass custom flags
to the compiler, or use one of the suggestions below:
-m64: Use --arch=x86/--arch=x86_64/--arch=x86_mscoff to specify the target architecture
Performing "debug" build using dmd for x86.
So I tried replacing the line with:
"dflags-dmd": ["--arch=x86_64"]
but got this error:
Error: unrecognized switch '--arch=x86_64'
I'm on Windows 10, have DMD 2.074.0 and Visual Studio 2015 and 2017 installed.
I am pretty sure (correct me if I am wrong) that you did not configure DMD properly for the 64-bit environment.
Have a look at http://dlang.org/dmd-windows.html#environment. The key information there is that you need to set the LINKCMD64 variable correctly, for example: set LINKCMD64=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64\link.exe
Then you instruct the DMD compiler (with the -m64 option) to compile the D code and use Microsoft's linker to generate a 64-bit executable.
Finally, you will need to modify your JSON or SDL DUB file to contain the proper environment settings. (Have a look at https://code.dlang.org/package-format?lang=json#target-types)
If you do not specify the environment in the DUB file, you will have to provide it explicitly in your dub build, for example: dub build --arch=x86_64
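Put together, a sketch of both steps from a Windows command prompt (the Visual Studio path is just an example; adjust it to your install):
rem point DMD at the 64-bit Microsoft linker
set LINKCMD64=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64\link.exe
rem ask DUB for a 64-bit build explicitly
dub build --arch=x86_64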

How to compile boost library for Nintendo DS Lite (arm none eabi g++ compiler)?

I want to compile the C++ Boost libraries for the NDS (on a Windows machine). I followed this tutorial: https://patater.com/boost-on-the-nintendo-ds/
This is my project-config.jam:
import option ;
using gcc : 6.3.0 : arm-none-eabi-g++.exe ;
option.set keep-going : false ;
But when I run bjam, it hangs forever (actually not forever, but it has been running for more than two hours). Also, nothing is written to my output directory. How do I compile Boost for the NDS?
EDIT:
To provide some more details, this is what I did:
I downloaded Boost
I ran bootstrap.bat
I added C:\devkitPro\devkitARM\arm-none-eabi\bin and C:\devkitPro\devkitARM\bin to PATH
I changed using msvc ; to using gcc : 6.3.0 : arm-none-eabi-g++.exe ;
I ran this command in the boost directory: b2 --toolset=gcc-6.3.0 --prefix=C:\devkitPro\boost threading=single link=static install

Conflict Protobuf version when using Opencv and Tensorflow c++

I am currently trying to use TensorFlow's shared library in a non-Bazel project, so I created a .so file from TensorFlow using Bazel.
But when I launch a C++ program that uses both OpenCV and TensorFlow, I get the following error:
[libprotobuf FATAL external/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
Abandon (core dumped)
Can you help me?
Thank you
You should rebuild TensorFlow with a linker script to avoid making third-party symbols global in the shared library that Bazel creates. This is how the Android Java/JNI library for TensorFlow is able to coexist with the pre-installed protobuf library on the device (look at the build rules in tensorflow/contrib/android for a working example).
Here's a BUILD file that I adapted from the Android library to do this:
package(default_visibility = ["//visibility:public"])

licenses(["notice"])  # Apache 2.0

exports_files(["LICENSE"])

load(
    "//tensorflow:tensorflow.bzl",
    "tf_copts",
    "if_android",
)

exports_files([
    "version_script.lds",
])

# Build the native .so.
# bazel build //tensorflow/contrib/android_ndk:libtensorflow_cc_inference.so \
#   --crosstool_top=//external:android/crosstool \
#   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
#   --cpu=armeabi-v7a
LINKER_SCRIPT = "//tensorflow/contrib/android:version_script.lds"

cc_binary(
    name = "libtensorflow_cc_inference.so",
    srcs = [],
    copts = tf_copts() + [
        "-ffunction-sections",
        "-fdata-sections",
    ],
    linkopts = if_android([
        "-landroid",
        "-latomic",
        "-ldl",
        "-llog",
        "-lm",
        "-z defs",
        "-s",
        "-Wl,--gc-sections",
        "-Wl,--version-script",  # This line must be directly followed by LINKER_SCRIPT.
        LINKER_SCRIPT,
    ]),
    linkshared = 1,
    linkstatic = 1,
    tags = [
        "manual",
        "notap",
    ],
    deps = [
        "//tensorflow/core:android_tensorflow_lib",
        LINKER_SCRIPT,
    ],
)
And the contents of version_script.lds:
{
  global:
    extern "C++" {
      tensorflow::*;
    };
  local:
    *;
};
This will make everything in the tensorflow namespace global and available through the library, while hiding the rest and preventing it from conflicting with protobuf.
(wasted a ton of time on this so I hope it helps!)
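To verify the visibility filtering took effect, listing the dynamic symbols of the resulting library is a quick check (a sketch; the file name matches the BUILD file above):
# defined dynamic symbols, demangled; ideally only tensorflow:: names show up
nm -D --defined-only libtensorflow_cc_inference.so | c++filt | less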
The error indicates that the program was compiled using headers (.h files) from protobuf 2.6.1. These headers are typically found in /usr/include/google/protobuf or /usr/local/include/google/protobuf, though they could be in other places depending on your OS and how the program is being built. You need to update these headers to version 3.1.0 and recompile the program.
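A quick way to check which protobuf version your build environment resolves (a sketch; assumes your protobuf install shipped a pkg-config file):
pkg-config --modversion protobuf   # version of the development files the build would use
protoc --version                   # version of the installed protobuf compiler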
This is indeed a pretty serious problem! I get the error below, similar to yours:
$./ceres_single_test
[libprotobuf FATAL google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
Aborted
My workaround:
cd /usr/lib/x86_64-linux-gnu
sudo mkdir BACKUP
sudo mv libmirprotobuf.so* ./BACKUP/
Now, the executable under test works, cool. What is not cool, however, is that things like gedit no longer work without running from a shell that has the BACKUP path added to LD_LIBRARY_PATH :-(
Hopefully there's a better fix out there?
The error complains that the Protocol Buffer runtime library is not compatible with the installed version. The error comes from the GTK3 library: GTK3 uses Protocol Buffers 2.6.1, so if you build OpenCV with GTK3 support, you get this error. The easiest way to fix this is to use Qt instead of GTK3.
If you use the CMake GUI to configure OpenCV, just select Qt support instead of GTK3. You can install Qt using the following command:
sudo apt install qtbase5-dev
Rebuild libprotobuf with -Dprotobuf_BUILD_SHARED_LIBS=ON, then run make install to replace the older version.
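A sketch of that rebuild, assuming a protobuf source tree with the CMake build files (the CMakeLists.txt location varies between protobuf releases):
cd protobuf/cmake
cmake . -Dprotobuf_BUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE=Release
make -j4
sudo make install   # overwrites the older version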

How to compile CodeBlocks MingW in Windows to Ubuntu or Centos

Is there a way to compile with MinGW in Code::Blocks on Windows so the binaries can be used on Ubuntu or CentOS distros?
I've tried compiling with the GNU GCC option and got output files with the .o extension under the obj/Release/ folder.
When I run I get this error under my Vagrant Ubuntu machine:
-bash: ./main.o: cannot execute binary file
How can I compile it so it runs on my Linux machines?
The technical term for what you're trying to accomplish is cross-compilation. For that, you need to build a specific cross-compiler from the GCC sources. If you still want to keep MinGW, there is a page explaining the steps needed to create an ARM cross-compiler: http://www.mingw.org/wiki/HostedCrossCompilerHOWTO (you'll have to modify the target).
List of targets supported by GCC:
armv5te-android-gcc armv5te-linux-rvct armv5te-linux-gcc
armv5te-none-rvct
armv6-darwin-gcc armv6-linux-rvct armv6-linux-gcc
armv6-none-rvct
armv7-android-gcc armv7-darwin-gcc armv7-linux-rvct
armv7-linux-gcc armv7-none-rvct
mips32-linux-gcc
ppc32-darwin8-gcc ppc32-darwin9-gcc ppc32-linux-gcc
ppc64-darwin8-gcc ppc64-darwin9-gcc ppc64-linux-gcc
sparc-solaris-gcc
x86-android-gcc x86-darwin8-gcc x86-darwin8-icc
x86-darwin9-gcc x86-darwin9-icc x86-darwin10-gcc
x86-darwin11-gcc x86-darwin12-gcc x86-linux-gcc
x86-linux-icc x86-os2-gcc x86-solaris-gcc
x86-win32-gcc x86-win32-vs7 x86-win32-vs8
x86-win32-vs9
x86_64-darwin9-gcc x86_64-darwin10-gcc x86_64-darwin11-gcc
x86_64-darwin12-gcc x86_64-linux-gcc x86_64-linux-icc
x86_64-solaris-gcc x86_64-win64-gcc x86_64-win64-vs8
x86_64-win64-vs9
universal-darwin8-gcc universal-darwin9-gcc universal-darwin10-gcc
universal-darwin11-gcc universal-darwin12-gcc
generic-gnu
There is only one big caveat: since Windows is not POSIX compliant, I don't think you can use signals or pthreads.
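Once the cross-compiler is built, invoking it is just like regular g++, only with the target triplet as a prefix (a sketch; the x86_64-linux-gnu name is an example and depends on the target you configured):
x86_64-linux-gnu-g++ main.cpp -o main
# 'main' is now a Linux ELF executable: it runs on the Ubuntu/CentOS box, not on Windows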
Finally, brace yourself, because building a cross-compiler is a tedious task (lots of obscure bugs). That's why professional devs pay $$$ for "plug'n'play" solutions.
EDIT: this MXE project can be useful to you.