Background: I create many small utilities for some very specialised data processing. Often, I am the only user. I don't even think about multi-threaded programming, because the run-time performance is by far sufficient for my use cases. The critical resource is my programming time. So I want to avoid any extra effort required for multi-threaded programming.
However, it seems there is a risk that my source code gets executed in a multi-threaded context when I reuse it in the future.
According to the CppCoreGuidelines:
Be careful: there are many examples where code that was “known” to
never run in a multi-threaded program was run as part of a
multi-threaded program. Often years later. Typically, such programs
lead to a painful effort to remove data races. Therefore, code that is
never intended to run in a multi-threaded environment should be
clearly labeled as such and ideally come with compile or run-time
enforcement mechanisms to catch those usage bugs early.
Most of the suggestions in the same source would actually get me started with multi-threaded programming. The one suggestion I prefer to follow says:
Refuse to build and/or run in a multi-threaded environment.
So my question is, how do I do that? E.g., is there an include file, #pragma, or similar to ensure a single-threaded build/execution of everything within a source file?
With g++/gcc, compiling and linking multithreaded code requires the -pthread compiler and linker option. This option defines the _REENTRANT macro, which you can inspect at compile time:
$ c="g++ -E -dD -xc++ /dev/null"
$ diff <($c) <($c -pthread)
389a390
> #define _REENTRANT 1
Contrary to popular belief, using the -lpthread linker option alone is both unnecessary and insufficient to correctly build a multi-threaded program.
Microsoft Visual Studio sets the _MT macro for a multi-threaded build, IIRC.
The Boost library does the following:
// Turn on threading support if the compiler thinks that it's in
// multithreaded mode. We put this here because there are only a
// limited number of macros that identify this (if there's any missing
// from here then add to the appropriate compiler section):
//
#if (defined(__MT__) || defined(_MT) || defined(_REENTRANT) \
|| defined(_PTHREADS) || defined(__APPLE__) || defined(__DragonFly__)) \
&& !defined(BOOST_HAS_THREADS)
# define BOOST_HAS_THREADS
#endif
So that you can #include <boost/config.hpp> and then inspect the value of BOOST_HAS_THREADS macro.
The following causes a compilation error if it is built in multi-threaded mode:
#if defined(BOOST_HAS_THREADS) || defined(_REENTRANT) || defined(_MT)
#error This code is single-threaded only.
#endif
Does anyone know of a fix for an MSVC compiler bug/annoyance where SIMD Extension settings get "stuck" on AVX?
The context of this question is coding up SIMD CPU dispatchers, closely following Agner's well-known dispatch_example2.cpp project. I've been going back and forth in three different MSVC projects and have dead-ended with this issue in two of them, after which one of those two "fixed itself" somehow.
The question is pretty simple: To compile the dispatchers I need to compile 4 times with
/arch:AVX512 /DINSTRSET=10
/arch:AVX2 /DINSTRSET=8
/arch:AVX /DINSTRSET=7
/arch:SSE2 /D__SSE4_2__
While I'm doing this I'm watching the value of INSTRSET and this code:
#if defined ( __AVX512VL__ ) && defined ( __AVX512BW__ ) && defined ( __AVX512DQ__ )
#define AVX512_FLAG 1
#else
#define AVX512_FLAG 2
#endif
#if defined ( __AVX2__ )
#define AVX2_FLAG 1
#else
#define AVX2_FLAG 2
#endif
#if defined ( __AVX__ )
#define AVX_FLAG 1
#else
#define AVX_FLAG 2
#endif
The behavior is like this: For the three AVX compiles everything is exactly as expected. When the problem is not happening, the SSE2 compile shows as expected (AVX512_FLAG, AVX2_FLAG, AVX_FLAG == 2) and the final code runs fine.
When the problem is happening, for the /arch:SSE2 /D__SSE4_2__ compile the code above shows AVX512_FLAG == 2 but AVX2_FLAG == AVX_FLAG == 1 and INSTRSET == 8, and the compiler thinks the AVX2 instructions are enabled - the project compiles, but crashes on an SSE4.2 machine.
If I try /arch:SSE2 /DINSTRSET=6 then I get INSTRSET == 6 for the compile, but the code above still shows AVX2_FLAG == 1 and AVX_FLAG == 1, and the final project still crashes on an SSE4.2 machine.
The crashes happen even if I don't run any vector code - anything that calls into the dispatcher crashes immediately, even if all vector code is short-circuited.
FYI, trying /DINSTRSET=6 is just an act of desperation - I've never gotten anything to work with SSE4.2 without using /D__SSE4_2__
Does anyone know how to fix this problem that is completely halting my progress? Tried "Clean Solution" already.
If you want a single binary which works on SSE-only computers but can leverage AVX when available, you need to do the following.
At the project level, set “Enable enhanced instruction set: Not set” if you’re building for Win64, or “SSE2” if you’re building for Win32.
Set “Enable enhanced instruction set: AVX” or AVX2 only on the *.cpp files which contain AVX version of your functions.
Make sure to never call these AVX functions unless both CPU and OS (see GetEnabledXStateFeature WinAPI) actually have the support.
Practically speaking, instead of compiling the same source file multiple times with different settings, compile 4 different source files. They can contain the same code; C++ has the #include preprocessor directive for that. If you have a single implementation dispatched with these macros, move that implementation into an *.inl or *.hpp file, and include that file from 4 different *.cpp files, one per CPU target.
I figured this out (it's simple and boring). For the incremental object files I'm compiling 3 .obj files from the same .cpp (the one with the vector code). When the MSVC SIMD settings are changed in the project-level Properties, they may or may not get inherited by the per-file Properties of that .cpp. This is where the project gets "stuck" on AVX (sometimes, not always). You just need to check the .cpp file's Properties and make sure they are correct.
BTW I'm using VS 2019, /std:c++17 and the context above is the 32-bit build.
I'm writing a C++ library and I would like to make my API throw exceptions for invalid parameters, but rely on asserts instead when the code is compiled with -fno-exceptions.
Is there a way to detect at compile-time if I'm allowed to use exception handling?
Note that I'm writing a header-only library, so I don't have a configure phase and I don't have access to the build system to simply define a macro on the command line (and I don't want to add that burden to the user).
Since the Standard doesn't have any concept of "-fno-exceptions", of course the solution could be compiler-dependent. In this case I'm interested in solutions that work with both g++ and clang++, other compilers are not important for this project.
Thank you very much
GCC and Clang define the __EXCEPTIONS macro when exceptions are enabled, and do not define it when exceptions are disabled via -fno-exceptions.
Example:
#include <cstdio>
int main() {
#ifdef __EXCEPTIONS
puts("Exceptions are enabled");
#else
puts("Exceptions are disabled");
#endif
}
How do I find out at “compile time” (e.g. using the preprocessor) if the compiler supports a particular language feature? The concrete example I am thinking of is the NEWUNIT specifier of Fortran 2008. I would like to do something like this:
#ifdef HAVE_NEWUNIT
! go ahead and use that
#else
! activate workaround
#endif
I am tagging this question fortran because that is what I am concerned with at the moment, though I suspect a general answer may have to go beyond Fortran (autoconf??). Of course, the solution should work with as many compilers as possible, but mostly, I care about gfortran and ifort (both of which have introduced NEWUNIT semi-recently).
Clarification:
I am looking for an automatic way to handle such a situation. (Which might include more than just a Fortran source file -- Makefiles …)
I do not care very much if it is “standard” as long as it would work on most Unix-type systems.
If you are going to go down the route of using autotools, specifically autoconf, the following is how I'd code it all up.
Create the configure.ac:
dnl -*- Autoconf -*-
dnl Process this file with autoconf to produce a configure script.
dnl
AC_PREREQ(2.61)
AC_INIT([test], [0.0.1], [me@example.com])
# Define our M4 macro directory
AC_CONFIG_MACRO_DIR([m4])
# Put our generated config header in the source directory
AC_CONFIG_HEADERS([src/config.h])
# Make sure we have a fortran compiler
AC_PROG_FC([ifort xlf pgfortran gfortran])
AC_LANG([Fortran])
AC_FC_FREEFORM
# Check for newunit option to open()
AX_F08_NEWUNIT
AC_OUTPUT
Create the ax_f08_newunit.m4 macro in the m4/ sub-directory, since that is where configure.ac says the macros live:
AC_DEFUN([AX_F08_NEWUNIT], [
AC_REQUIRE([AC_PROG_FC])
AC_LANG_PUSH([Fortran])
AC_MSG_CHECKING([for NEWUNIT support])
AC_COMPILE_IFELSE([
program conftest
integer :: i
open(newunit=i,file='test')
end program conftest],
[ax_f08_newunit=yes], [ax_f08_newunit=no])
AC_LANG_POP([Fortran])
if test "x$ax_f08_newunit" = "xyes"; then
AC_MSG_RESULT([yes])
AC_DEFINE([HAVE_NEWUNIT], [1], [NEWUNIT support])
else
AC_MSG_RESULT([no])
fi
])
Then follow all the usual autotools routines:
aclocal -I m4
autoheader
autoconf
Then you can run ./configure, this is my output:
checking for Fortran compiler default output file name... a.out
checking whether the Fortran compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU Fortran compiler... no
checking whether ifort accepts -g... yes
checking for Fortran flag needed to allow free-form source... -FR
checking for NEWUNIT support... yes
configure: creating ./config.status
config.status: creating src/config.h
Finally in your src/config.h file you should have:
/* src/config.h. Generated from config.h.in by configure. */
/* src/config.h.in. Generated from configure.ac by autoheader. */
/* NEWUNIT support */
#define HAVE_NEWUNIT 1
/* Define to the address where bug reports for this package should be sent. */
#define PACKAGE_BUGREPORT "me@example.com"
/* Define to the full name of this package. */
#define PACKAGE_NAME "test"
/* Define to the full name and version of this package. */
#define PACKAGE_STRING "test 0.0.1"
/* Define to the one symbol short name of this package. */
#define PACKAGE_TARNAME "test"
/* Define to the version of this package. */
#define PACKAGE_VERSION "0.0.1"
Of course your source code has to be run through a preprocessor now too. For example src/test.F90:
program test
#include "config.h"
integer :: i
#ifdef HAVE_NEWUNIT
open(newunit=i,file='data')
#else
i = 7
open(unit=i,file='data')
#endif
end program test
When compiling the test program, ifort understands that the capital F means the file needs to be preprocessed.
(Standard) Fortran doesn't really have the facility to do what you want. Most (?) of us (Fortran programmers) would test the availability of a new feature by compiling a code which used it and studying the compiler messages. We might, also, consult compiler documentation.
Fortran preprocessors, which are not defined by the Fortran standard, are, in my experience, usually C preprocessors which have been taught enough about Fortran to be almost usable; think of teaching a dog to walk on two legs, it's never going to be truly bipedal and it's perfectly good at quadrupedal locomotion, so what's the point?
Leaving my own prejudices aside for a moment, there is no standard way to check, at compile-time, the availability of a new feature other than by trying to compile a code which uses it.
There was a Technical Report on Conditional Compilation for Fortran back in the 90s (I think) but the extensions to the language it proposed have not found their way into more recent editions of Fortran.
In response to OP's clarification:
One approach would be to follow the lead of a lot of Linux software and to use tools such as automake and autoconf to first determine the capabilities of the (in this case) Fortran compiler, and later to either write (perhaps using a macro-processor) or to select the necessary fragments of code and thereby to construct a compilable program. In other words, configure, make and install.
Since it is far from trivial to determine at compile time which unit numbers might be in use when a new one is needed at run time, I suppose you'd have to fall back on the old-fashioned approach: select new unit numbers from a pre-defined range which you know (or, if the code is sufficiently large and complex, just hope) is not already in use.
I have been using something like this:
#include <stdio.h>

int main(int argc, char *argv[])
{
#ifdef DEBUG
    printf("RUNNING DEBUG BUILD\n");
#else
    printf("Running... this is a release build.\n");
#endif
    ...
However, this requires me to compile with -DDEBUG for the debug build. Does GCC give me some way to determine when I am compiling with debug symbols (the -g flag), such as defining its own preprocessor macro that I can check for?
The answer is no. Usually these macros (DEBUG, NDEBUG, _DEBUG) are set by the IDE/make system depending on which configuration (debug/release) is active. I think these answers can be of help:
C #define macro for debug printing
Where does the -DNDEBUG normally come from?
_DEBUG vs NDEBUG
I think the answer that I was looking for was essentially what Adam posted as a comment, which is:
The compiler's job does not include preprocessing, and in fact the compiler will choke on any preprocessor switches not handled by the preprocessor that make their way into code.
So, because branching the code has to go through the preprocessor, by the time the compiler proper receives any code it is already one or the other (debug code or release code), and it's impossible for me to do what my question asks at that stage (after the preprocessor).
So it is a direct consequence of the preprocessor being designed as a separate pass that the code is fed through.
Is there a standardized (e.g. implemented by all major compilers) #define that will allow me to distinguish between debug and release builds?
I believe
#ifdef NDEBUG
// nondebug
#else
// debug code
#endif
is the most portable.
No compiler knows whether you are compiling debug or release, so this isn't automatic. However, this macro is used by assert.h in the C runtime, so it's quite common. Visual Studio sets it, and I'm sure most other IDEs do as well.
Since there is no standard definition of debug or release, there isn't a way to do this. I can think of at least four different things that could be meant, and they can all be changed independently. Only two can be tested from within the code.
Compiler optimization level
Debugging symbols included in binary (these can even be removed at a later date)
assert() enabled (NDEBUG not defined)
logging turned off
Edit: I misread the question and waffled off on a different tangent!!! Apologies...
The macro NDEBUG is used on Linux as well as on Windows.
If the binary is already built and you need to determine whether it is a release or debug build, you can take a hexadecimal dump; if you see loads of symbols in it, that would be debugging information. For example, under Linux, use the strings utility (a version for Windows by SysInternals is available on TechNet). Release builds of binary executables would not contain the strings representing those symbols:
strings some_binary
Hope this helps,
Best regards,
Tom.
The best I could come up with is
#ifndef DEBUG
#ifdef NDEBUG
// Production builds should define NDEBUG
#define DEBUG 0
#else
#define DEBUG 1
#endif
#endif
Note that NDEBUG itself is best left untouched: defining it to false still counts as "defined", which would disable assert() in any assert.h included afterwards.
Then you can wrap your debug code in if(DEBUG) { ... }.