I am trying to convert a Makefile build into Bazel, and I need to reproduce the following condition to specify CPU-capability defines for C code compilation:
HAVE_AVX2 := $(shell grep avx2 /proc/cpuinfo)
ifdef HAVE_AVX2
$(info Checking for AVX support... AVX and AVX2)
CFLAGS += -DRTE_MACHINE_CPUFLAG_AVX -DRTE_MACHINE_CPUFLAG_AVX2
else
HAVE_AVX := $(shell grep avx /proc/cpuinfo)
ifdef HAVE_AVX
$(info Checking for AVX support... AVX)
CFLAGS += -DRTE_MACHINE_CPUFLAG_AVX
else
$(info Checking for AVX support... no)
endif
endif
Is it possible to implement such a conditional in Bazel? From what I have found, cc_library has defines and copts where I could use a select function, but I cannot understand what kind of condition I can use inside of select.
Take a look at https://docs.bazel.build/versions/master/be/general.html#config_setting.
Generally, you do something like
config_setting(
name = "avx2",
values = {
"define": "avx2=yes"
}
)
and then you can select on the :avx2 condition:
cc_library(...
copts = select({":avx2":[...], ...})
and run bazel with
bazel build --define avx2=yes ...
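Putting the pieces together, a fuller sketch might look like the following (the target name, source file, and the exact copts are placeholders; --define avx2=yes is just the convention chosen above):

```python
config_setting(
    name = "avx2",
    values = {"define": "avx2=yes"},
)

cc_library(
    name = "solver",  # hypothetical target
    srcs = ["solver.c"],
    copts = select({
        ":avx2": ["-mavx2"],
        "//conditions:default": [],
    }),
    defines = select({
        ":avx2": [
            "RTE_MACHINE_CPUFLAG_AVX",
            "RTE_MACHINE_CPUFLAG_AVX2",
        ],
        "//conditions:default": [],
    }),
)
```

The "//conditions:default" branch mirrors the else branch of the Makefile. Note that Bazel cannot grep /proc/cpuinfo at analysis time the way $(shell ...) does, so the autodetection has to move to the command line (or to a wrapper script that decides whether to pass --define avx2=yes).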
I am trying to enable and disable compile flags for gcov in C++ on Linux. I do not want to have the gcov flags set at all times. I only want them set when I am testing the software. The environment variable I am checking is called TESTENABLED.
In my configure.ac file I have the following line:
AM_CONDITIONAL([ENABLEGCOV],[test x$TESTENABLED = xtrue])
In my Makefile.am file I have the following lines:
if ENABLEGCOV
AM_CXXFLAGS = -Wall -fPIC -fprofile-arcs -ftest-coverage
else
AM_CXXFLAGS = -Wall
endif
However, when I build my program, I notice that AM_CXXFLAGS is not being set correctly, so none of my gcov .gcno/.gcda files are generated. Does anyone see what I am doing wrong?
Is your environment variable actually set to true, or perhaps to some other truthy value (e.g. 1)?
In any case, the usual way would be to add a flag to configure that turns on a certain feature. The following configure.ac snippet adds an --enable-gcov flag to configure; it will also print out whether gcov has been enabled:
AC_ARG_ENABLE(gcov,[AS_HELP_STRING([--enable-gcov], [enable coverage test])])
AC_MSG_CHECKING([whether to enable gcov])
AS_IF([test "x${enable_gcov}" = "xyes" ], AC_MSG_RESULT([yes]), AC_MSG_RESULT([no]))
AM_CONDITIONAL([ENABLEGCOV],[test "x${enable_gcov}" = "xyes"])
I also find the Makefile.am easier to read if flags are simply appended to AM_CXXFLAGS when a certain condition is met:
AM_CXXFLAGS = -Wall -fPIC
if ENABLEGCOV
AM_CXXFLAGS += -fprofile-arcs -ftest-coverage
endif
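With this approach the feature is toggled at configure time instead of through an environment variable. A typical workflow, assuming a standard autotools layout, would be:

```shell
autoreconf --install       # regenerate configure after editing configure.ac
./configure --enable-gcov  # the ENABLEGCOV conditional becomes true
make                       # builds with -fprofile-arcs -ftest-coverage

./configure                # plain build: ENABLEGCOV is false
make
```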
Say I have a file structure like:
win32/ Something.cpp
linux/ Something.cpp
Something.h
main.cpp
How would I be able to implement:
UNAME = $(shell uname -s)
ifeq ($(UNAME), Linux)
OS = linux
else
OS = win32
endif
all:
g++ ?????
I've been at a loss for around an hour now. I've never attempted cross-platform makefiles, so I usually just let the IDE handle it, but now I need to create one because the IDE isn't really cut out for cross-platform projects.
PS: Something.h is just a class definition/prototype, and the linux/ and win32/ source files each provide a method for that class. The problem is that I get a compilation error saying the class is already declared if both exist without some form of build target or whatever.
Thanks.
Multi-platform Makefiles are an advanced topic. You might want to take a look at SCons. Anyway, here is how you could make your case work:
Build your list of sources depending on the $(UNAME):
UNAME = $(shell uname -s)
TARGET = myprogram
SRC_COMMON = main.cpp
ifeq ($(UNAME), Linux)
SRC_OS = $(wildcard linux/*.cpp)
else
SRC_OS = $(wildcard win32/*.cpp)
endif
SRC = $(SRC_COMMON) $(SRC_OS)
OBJ = $(SRC:.cpp=.o)
$(TARGET): $(OBJ)
	$(CXX) -o $@ $^
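The reason this avoids the "already declared" error from the question is that only one platform's Something.cpp ends up in $(SRC). A minimal sketch of the class side (names taken from the question; the single-file #ifdef is for illustration only — in the real layout the two definitions live in separate linux/ and win32/ source files):

```cpp
#include <string>

// Something.h: the class is declared exactly once.
class Something {
public:
    std::string platformName() const;  // defined once per platform
};

// In the real project this definition lives in linux/Something.cpp
// (or win32/Something.cpp); only one of the two files is compiled,
// so the method has exactly one definition at link time.
#ifdef _WIN32
std::string Something::platformName() const { return "win32"; }
#else
std::string Something::platformName() const { return "linux"; }
#endif
```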
I have two solvers for an application, one in C and the other in CUDA. The Makefile detects whether nvcc is available and automatically switches to the CUDA solver; otherwise, it should use the C solver.
I wanted to include the header of the CUDA solver only if nvcc is detected, so I did this in the main.cpp file:
#if (NVCC_TEST == nvcc)
#include "utilCUDA.h"
#endif
NVCC_TEST is declared in the Makefile like this:
NVCC_RESULT := $(shell which nvcc 2> NULL)
NVCC_TEST := $(notdir $(NVCC_RESULT))
The problem is that the main file includes utilCUDA.h even when nvcc is not available in the system. Any ideas?
Variables defined in the make environment don't automatically show up at the source code level when compiling. Referring to this question/answer, in the section of your Makefile where you set make variables for CUDA vs. non-CUDA usage, do the following:
NVCC_RESULT := $(shell which nvcc 2> NULL)
NVCC_TEST := $(notdir $(NVCC_RESULT))
ifeq ($(NVCC_TEST),nvcc)
CC := nvcc
CCFLAGS := -DUSE_CUDA
else
CC := g++
CCFLAGS := -DNO_CUDA
endif
Then wherever in your Makefile you specify the compile command, add $(CCFLAGS) to the compile command line. Any source code compiled by that command will see the define.
Then in your source code, you can do:
#ifdef USE_CUDA
#include "utilCUDA.h"
#endif
#ifdef NO_CUDA
// whatever else you want to do.
#endif
By the way, if you attempt to compile a .cpp file with nvcc, you're probably not going to get the results you expect. If you need to do this, use the nvcc -x cu option when you specify your compile command.
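For completeness, the compile rule itself might look something like this (main.cpp and the app target are placeholders):

```make
app: main.o
	$(CC) -o app main.o

# Any source compiled with $(CCFLAGS) sees USE_CUDA or NO_CUDA.
# If $(CC) is nvcc and the file is .cpp, add -x cu as noted above.
main.o: main.cpp
	$(CC) $(CCFLAGS) -c main.cpp -o main.o
```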
I am attempting to use version 1.3 of the Palabos CFD library (which can be found at http://www.palabos.org/ ) in version 13.12 of Code::Blocks, using the project file that was provided with the Palabos distribution. Things went fine until I attempted to activate Palabos's parallel processing features. The advice given in the Palabos documentation is mostly directed toward people running the program from the command line on Unix-based systems. As such, the advice they give regarding how to activate the built-in parallelization features is: "edit the Makefile, set MPI_PARALLEL=true, and recompile". However, Code::Blocks does not use makefiles by default. I have tried using the included makefile (check the download in the same folder as the "permeability" example), but it gives an error about there being no rule for release (even after I have changed settings so that it should work on Windows). What should I do?
Makefile contents:
palabosRoot = C:\Users\estone\Documents\summers\summer 2014\Soil Research\palabos v1.3r0
# Name of source files in current directory to compile and link with Palabos
projectFiles = permeability.cpp
# Set optimization flags on/off
optimize = true
# Set debug mode and debug flags on/off
debug = false
# Set profiling flags on/off
profile = false
# Set MPI-parallel mode on/off (parallelism in cluster-like environment)
MPIparallel = true
# Set SMP-parallel mode on/off (shared-memory parallelism)
SMPparallel = false
# Decide whether to include calls to the POSIX API. On non-POSIX systems,
# including Windows, this flag must be false, unless a POSIX environment is
# emulated (such as with Cygwin).
usePOSIX = false
# Path to external libraries (other than Palabos)
libraryPaths =
# Path to include directories (other than Palabos)
includePaths =
# Dynamic and static libraries (other than Palabos)
libraries =
# Compiler to use without MPI parallelism
serialCXX = g++
# Compiler to use with MPI parallelism
parallelCXX = mpicxx
# General compiler flags (e.g. -Wall to turn on all warnings on g++)
compileFlags = -Wall -Wnon-virtual-dtor
# General linker flags (don't put library includes into this flag)
linkFlags =
# Compiler flags to use when optimization mode is on
optimFlags = -O3
# Compiler flags to use when debug mode is on
debugFlags = -g
# Compiler flags to use when profile mode is on
profileFlags = -pg
##########################################################################
# All code below this line is just about forwarding the options
# to SConstruct. It is recommended not to modify anything there.
##########################################################################
SCons = $(palabosRoot)/scons/scons.py -j 2 -f $(palabosRoot)/SConstruct
SConsArgs = palabosRoot=$(palabosRoot) \
projectFiles="$(projectFiles)" \
optimize=$(optimize) \
debug=$(debug) \
profile=$(profile) \
MPIparallel=$(MPIparallel) \
SMPparallel=$(SMPparallel) \
usePOSIX=$(usePOSIX) \
serialCXX=$(serialCXX) \
parallelCXX=$(parallelCXX) \
compileFlags="$(compileFlags)" \
linkFlags="$(linkFlags)" \
optimFlags="$(optimFlags)" \
debugFlags="$(debugFlags)" \
profileFlags="$(profileFlags)" \
libraryPaths="$(libraryPaths)" \
includePaths="$(includePaths)" \
libraries="$(libraries)"
compile:
python $(SCons) $(SConsArgs)
clean:
python $(SCons) -c $(SConsArgs)
/bin/rm -vf `find $(palabosRoot) -name '*~'`
I'm trying to compile a C++ project manually (I'm a newbie).
I have a Makefile.am that contains:
ifneq ($(WANT_JANSSON),)
JANSSON_INCLUDES= -I$(top_srcdir)/compat/jansson
else
JANSSON_INCLUDES=
endif
EXTRA_DIST = example-cfg.json nomacro.pl
SUBDIRS = compat
INCLUDES = $(PTHREAD_FLAGS) -fno-strict-aliasing $(JANSSON_INCLUDES)
bin_PROGRAMS = minerd
dist_man_MANS = minerd.1
minerd_SOURCES = elist.h miner.h compat.h \
cpu-miner.c util.c \
sha2.c scrypt.c
ifneq ($(ARCH_x86),)
minerd_SOURCES += sha2-x86.S scrypt-x86.S
endif
ifneq ($(ARCH_x86_64),)
minerd_SOURCES += sha2-x64.S scrypt-x64.S
endif
ifneq ($(ARCH_ARM),)
minerd_SOURCES += sha2-arm.S scrypt-arm.S
endif
minerd_LDFLAGS = $(PTHREAD_FLAGS)
minerd_LDADD = @LIBCURL@ @JANSSON_LIBS@ @PTHREAD_LIBS@ @WS2_LIBS@
minerd_CPPFLAGS = @LIBCURL_CPPFLAGS@
I've opened the Windows Command Prompt and typed:
mingw32-make -f Makefile.am
The output:
mingw32-make: *** No targets. Stop.
I don't know why this error shows up.
You're confused about the tools available.
A Makefile.am is not a makefile. It's an automake file. Automake is a tool that will take in a description of a makefile (the Makefile.am file) and a set of configuration options (typically, but not necessarily, generated by the auto-configuration tool autoconf), and generate a makefile named Makefile.
Then you can run make with that Makefile in order to build your system.
These tools (autoconf and automake) are very important to portable software because so many of the UNIX-like systems have slight differences. These tools help you abstract away all those differences and even support new systems which you've never used before.
On Windows this is not so important, since most Windows systems are pretty similar and backward-compatible. So I doubt most people who are targeting Windows-only environments bother with autoconf and automake.
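If you do want to build this project from its Makefile.am, the usual sequence (assuming the project also ships a configure.ac; on Windows this would typically run inside an MSYS2 or Cygwin shell) is:

```shell
autoreconf --install   # Makefile.am + configure.ac -> configure, Makefile.in
./configure            # Makefile.in -> Makefile
make                   # build with the generated Makefile
```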