I noticed a strange behavior when I was working with a constexpr function.
I reduced the code to a simplified example.
Two functions are called from two different translation units (modules A and B).
#include <iostream>
int mod_a();
int mod_b();
int main()
{
    std::cout << "mod_a(): " << mod_a() << "\n";
    std::cout << "mod_b(): " << mod_b() << "\n";
    std::cout << std::endl;
    return 0;
}
The modules look similar. This is mod_a.cpp:
constexpr int X = 3;
constexpr int Y = 4;
#include "common.h"
int mod_a()
{
    return get_product();
}
Only some internal constants differ. This is mod_b.cpp:
constexpr int X = 6;
constexpr int Y = 7;
#include "common.h"
int mod_b()
{
    return get_product();
}
Both modules use a common constexpr function which is defined in "common.h":
/* static */ constexpr int get_product()
{
    return X * Y;
}
I was very surprised that both functions return 12. Since the #include directive should perform only textual source inclusion, I assumed there would be no interaction between the two modules.
When I also defined get_product to be static, the behavior was as expected:
mod_a() returned 12,
mod_b() returned 42.
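For reference, here is a minimal sketch of the static variant of common.h that produced those results (everything else unchanged): static gives get_product internal linkage, so each translation unit gets its own copy that sees its own X and Y.

/* common.h, static variant: one private copy per translation unit */
static constexpr int get_product()
{
    return X * Y;
}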
I also watched Jason Turner's episode 312 of C++ Weekly: Stop Using 'constexpr' (And Use This Instead!) at https://www.youtube.com/watch?v=4pKtPWcl1Go.
The advice to generally use static constexpr is a good hint.
But I still wonder if the behavior which I noticed without the static keyword is well defined. Or is it UB? Or is it a compiler bug?
Instead of the constexpr function I also tried a C-style macro #define get_product() (X*Y), which also gave the expected results (12 and 42).
Take care
michaeL
This program is ill-formed: X and Y have internal linkage since they are const variables at namespace scope. This means that both definitions of constexpr int get_product() (which is implicitly inline) violate the one definition rule:
There can be more than one definition in a program of each of the following: [...], inline function, [...], as long as all the following is true:
[...]
name lookup from within each definition finds the same entities (after overload-resolution), except that
constants with internal or no linkage may refer to different objects as long as they are not odr-used and have the same values in every definition
And obviously these constants have different values.
What's happening is both mod_a and mod_b are calling get_product at runtime. get_product is implicitly inline, so one of the definitions is chosen and the other is discarded. What gcc seems to do is take the first definition found:
$ g++ mod_a.cpp mod_b.cpp main.cpp && ./a.out
mod_a(): 12
mod_b(): 12
$ g++ mod_b.cpp mod_a.cpp main.cpp && ./a.out
mod_a(): 42
mod_b(): 42
$ g++ -c mod_a.cpp
$ g++ -c mod_b.cpp
$ g++ mod_a.o mod_b.o main.cpp && ./a.out
mod_a(): 12
mod_b(): 12
$ g++ mod_b.o mod_a.o main.cpp && ./a.out
mod_a(): 42
mod_b(): 42
It's as if get_product isn't constexpr, since it is getting called at runtime.
But if you were to enable optimisations (or force get_product() to be called at compile time, like with constexpr int result = get_product(); return result;), the results are as you would "expect":
$ g++ -O1 mod_a.cpp mod_b.cpp main.cpp && ./a.out
mod_a(): 12
mod_b(): 42
(Though this is still UB, and the correct fix is to make the functions static)
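For illustration, here is a sketch of mod_a.cpp with the call forced into a constant expression, as mentioned above (the same caveat applies: the program is still ill-formed; this only changes what the compiler happens to emit):

int mod_a()
{
    // Forces get_product() to be evaluated at compile time,
    // using this translation unit's X and Y.
    constexpr int result = get_product();
    return result;
}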
This code violates the One Definition Rule (language lawyers please correct me if I'm wrong).
If I compile the code separately, I get the behavior that you expect:
g++ -O1 -c main.cpp
g++ -O1 -c mod_a.cpp
g++ -O1 -c mod_b.cpp
g++ *.o
./a.out
> mod_a(): 12
> mod_b(): 42
If I compile all at once or activate link-time optimization, the UB becomes apparent.
g++ -O1 *.cpp
./a.out
> mod_a(): 12
> mod_b(): 12
How to fix this
You are on the right track with declaring them static. A more C++-esque approach is an anonymous namespace. You should also declare the constants static or put them in a namespace, not just the function.
mod_a.cpp:
namespace {
    constexpr int X = 3;
    constexpr int Y = 4;
}
#include "common.h"
int mod_a()
{
    return get_product();
}
common.h:
namespace {
    constexpr int get_product()
    {
        return X * Y;
    }
} /* namespace anonymous */
Even better, in my opinion: include common.h within an opened namespace. That makes the connection between the declarations more apparent and would allow you to have multiple public get_product functions, one per namespace. Something like this:
mod_a.cpp:
namespace {
    constexpr int X = 3;
    constexpr int Y = 4;
    #include "common.h"
} /* namespace anonymous */
int mod_a()
{
    return get_product();
}
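Under this variant, common.h itself would not need its own namespace wrapper, since each including file already supplies one; a minimal sketch:

/* common.h for this variant: the including .cpp provides the enclosing namespace */
constexpr int get_product()
{
    return X * Y;
}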
Related
I am hoping to receive some input on an issue I encountered while attempting to learn C++20 modules.
In short, I would like a namespace containing const and/or constexpr variables to be implemented within a module, and to import that module into any applicable implementation files.
It works fine with non-const / non-constexpr variables, but this isn't ideal; I'd like to stick with const and/or constexpr depending on the data types within the namespace.
Please see example below:
// test_module.cpp
module;
#include <string_view>
export module test_module;
export namespace some_useful_ns {
    /* produces a compile error when attempting to compile main.cpp */
    constexpr std::string_view str_view{"a constexpr string view"};
    const char* const chr{"a const char* const"};
    /* compiles just fine, but not ideal; prefer const / constexpr
    std::string_view str_view{"a non constexpr str view"};
    const char* chr{"a const char*"};
    */
}
// main.cpp
#include <iostream>
import test_module;
int main() {
    std::cout << some_useful_ns::str_view << std::endl;
    std::cout << some_useful_ns::chr << std::endl;
    return 0;
}
Compile the two files like below:
g++ -c --std=c++2a -fmodules-ts test_module.cpp
g++ -c --std=c++2a -fmodules-ts main.cpp
g++ main.o test_module.o
Before linking the .o files, I receive the following error when compiling main.cpp like above:
main.cpp: In function ‘int main()’:
main.cpp:6:38: error: ‘str_view’ is not a member of ‘some_useful_ns’
6 | std::cout << some_useful_ns::str_view << std::endl;
| ^~~~~~~~
main.cpp:7:38: error: ‘chr’ is not a member of ‘some_useful_ns’
7 | std::cout << some_useful_ns::chr << std::endl;
|
I find this strange, since it works fine when I use non-const / non-constexpr variables like the lines commented out in test_module.cpp.
Also, the const / constexpr versions work just as expected when using a traditional implementation without modules.
Does anyone have a clue as to why I am unable to get this to work with const / constexpr?
In case it helps, I am using gcc (GCC) 11.2.1 20220127
Thanks in advance.
This seems like a compiler bug.
Normally, const-qualified variables have internal linkage, and names with internal linkage cannot be exported. However... there's a specific carveout for such variables which are exported. That is, exported const-qualified variables don't have internal linkage.
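As a possible workaround (untested against this GCC version, just a sketch), marking the variables inline gives them external linkage even though they are const, which may sidestep the compiler's mishandling of the exported const names:

export module test_module;
export namespace some_useful_ns {
    // inline const/constexpr variables are not given internal linkage,
    // so they do not depend on the exported-const carveout at all
    inline constexpr std::string_view str_view{"a constexpr string view"};
    inline const char* const chr{"a const char* const"};
}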
In the following code, I create a Builder template and provide a default implementation that returns 0. I then specialize the template for int to return the value 37.
When I compile with -O0, the code prints 37, which is the expected result. But when I compile using -O3, the code prints 0.
The platform is Ubuntu 20.04, with GCC 9.3.0
Can anyone help me understand this behavior?
builder.h
class Builder {
public:
    template<typename C>
    static C build() {
        return 0;
    }
};
builder.cc
#include "builder.h"
template<>
int Builder::build<int>() {
    return 37;
}
main.cc
#include "builder.h"
#include <iostream>
int main() {
    std::cout << Builder::build<int>() << '\n';
}
makefile
CXX_FLAG = -O0 -g

all:
	g++ $(CXX_FLAG) builder.cc -c -o builder.o
	g++ $(CXX_FLAG) main.cc builder.o -o main

clean:
	rm *.o
	rm main
You should add a forward declaration for build<int>() to builder.h, like so:
template<>
int Builder::build<int>();
Otherwise, while compiling main.cc, the compiler sees only the generic template, and is allowed to inline an instance of the generic build() function. If you add the forward declaration, the compiler knows you provided a specialization elsewhere, and will not inline it.
With -O3, the compiler tries to inline, with -O0 it will not inline anything, hence the difference.
Your code actually violates the "One Definition Rule": it will create two definitions for Builder::build<int>(), but they are not the same. The standard says the result is undefined, but no diagnostics are required. That is a bit unfortunate in this case, as it would have been helpful if a warning or error message was produced.
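Putting it together, builder.h would look roughly like this (a sketch of the fix described above):

class Builder {
public:
    template<typename C>
    static C build() {
        return 0;
    }
};

// Declare the specialization so every translation unit knows it exists;
// the compiler will then call it instead of inlining the generic template.
template<>
int Builder::build<int>();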
When compiling this snippet of code (example code)
#include <iostream>
#include <cstdlib>
#include <cstdint>
#include <chrono>

#ifndef USE_DEFINE
class ClassA
{
public:
    inline static constexpr uint8_t A = 0;
};
#else
#define A 0
#endif

int main()
{
    srand(std::chrono::system_clock::now().time_since_epoch().count());
    uint8_t a = rand() % 255;
#ifndef USE_DEFINE
    if(a < ClassA::A)
#else
    if(a < A)
#endif
    {
        std::cout << "Shouldn't even compile using `-Wall -Wextra -Werror -Wfatal-errors -Wpedantic` flags." << std::endl;
        return -1;
    }
    return 0;
}
with GCC 7.5 and the following flags
g++ -std=gnu++17 -Wall -Wextra -Wpedantic -Wfatal-errors -Werror -o test main.cc
I get the following warning/error (due to -Werror):
main.cc: In function ‘int main()’:
main.cc:40:7: error: comparison is always false due to limited range of data type [-Werror=type-limits]
40 | if(a < ClassA::A)
This seems OK to me, because I'm trying to check whether a uint8_t is less than 0. The same happens when USE_DEFINE is defined. Neither expression is constant, because the value of a changes, so the compiler can check whether the type limits allow this kind of comparison to evaluate as true AND false.
Doing the same with a newer compiler (GCC 10.1.0 / Clang 10.0.1), I no longer get this warning when using the inline static constexpr uint8_t A = 0 of ClassA. When I use USE_DEFINE, the same happens.
Is this behaviour of GCC/Clang erroneous, or is it some side effect caused by attributes of const/constexpr?
The GCC-Manual says (for GCC 6.4/7.5 as well as 10):
-Wtype-limits
Warn if a comparison is always true or always false due to the limited range of the data type, but do not warn for constant expressions. For example, warn if an unsigned variable is compared against zero with < or >=. This warning is also enabled by -Wextra.
As far as I can see, if(a < ClassA::A) isn't a constant expression, even though I'm using the constexpr keyword. The same happens when using const instead of constexpr. What leads to this behaviour?
I run an up-to-date Debian testing (with kernel 4.19).
The helpers are not found on my system (but they exist in the header; Qt jumps to them).
#include "bpf/bpf.h"
int main (){
int r = bpf_create_map(BPF_MAP_TYPE_ARRAY,1,1,1,0);
return 0;
}
Compilation results in
undefined reference to `bpf_create_map(bpf_map_type, int, int, int, unsigned int)'
compiled with
g++ -c -pipe -g -std=gnu++1z -Wall -W -fPIC -DQT_QML_DEBUG -I. -I../../Qt/5.13.0/gcc_64/mkspecs/linux-g++ -o main.o main.cpp
g++ -lbpf -o server main.o
Same result with
g++ main.cpp -lbpf -o out
I have libbpf-dev installed as well, and I have the associated libraries (.a and .so).
What is wrong?
Update
Even the following code won't work:
#include <linux/bpf.h>

int main()
{
    //int r = bpf_create_map(BPF_MAP_TYPE_ARRAY, 1, 1, 1, 0);
    bpf_attr attr = {};
    attr.map_type = BPF_MAP_TYPE_ARRAY;
    attr.key_size = 1;
    attr.value_size = 1;
    attr.max_entries = 1;
    bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
    return 0;
}
results in
error: 'bpf' was not declared in this scope
Update2:
BTW, the key size is mandated to be 4, not 1; but that is a side point, unrelated to my problem here.
This is a name-mangling issue caused by compiling the C header as C++; you probably want:
extern "C" {
#include "bpf/bpf.h"
}
int main()...
Regarding your second error (error: 'bpf' was not declared in this scope), this is not directly related to libbpf, this is because there is no function simply called bpf() to actually perform the syscall. Instead you have to use the syscall number. For example, libbpf defines the following:
static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
                          unsigned int size)
{
    return syscall(__NR_bpf, cmd, attr, size);
}
... and uses sys_bpf() after that, the same way you try to call bpf() in your sample.
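Applied to your second snippet, a direct-syscall version might look roughly like this (a sketch, assuming <sys/syscall.h> and <unistd.h> provide __NR_bpf and syscall on your system):

#include <linux/bpf.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstring>

int main()
{
    union bpf_attr attr;
    std::memset(&attr, 0, sizeof(attr));   /* the kernel expects unused fields to be zero */
    attr.map_type = BPF_MAP_TYPE_ARRAY;
    attr.key_size = 4;                     /* array maps require a 4-byte key */
    attr.value_size = 1;
    attr.max_entries = 1;
    /* invoke the bpf() syscall directly, as libbpf's sys_bpf() does */
    int fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
    return fd < 0 ? 1 : 0;
}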
For the record, “BPF helpers” often designates BPF functions that you call from within a BPF program, which is not the case here. Hence some confusion in the comments, I believe.
Is this gcc being overly nice and doing what the dev thinks it will do, or is clang being overly fussy about something? Am I missing some subtle rule in the standard where clang is actually correct in complaining about this?
Or should I use the second bit of code, which is basically how offsetof works?
[adrian#localhost ~]$ g++ -Wall -pedantic -ansi a.cc
[adrian#localhost ~]$ a.out
50
[adrian#localhost ~]$ cat a.cc
#include <iostream>
struct Foo
{
    char name[50];
};

int main(int argc, char *argv[])
{
    std::cout << sizeof(Foo::name) << std::endl;
    return 0;
}
[adrian#localhost ~]$ clang++ a.cc
a.cc:10:29: error: invalid use of non-static data member 'name'
std::cout << sizeof(Foo::name) << std::endl;
~~~~~^~~~
1 error generated.
[adrian#localhost ~]$ g++ -Wall -pedantic -ansi b.cc
[adrian#localhost ~]$ a.out
50
[adrian#localhost ~]$ cat b.cc
#include <iostream>
struct Foo
{
    char name[50];
};

int main(int argc, char *argv[])
{
    std::cout << sizeof(static_cast<Foo*>(0)->name) << std::endl;
    return 0;
}
[adrian#localhost ~]$ clang++ b.cc
[adrian#localhost ~]$ a.out
50
I found adding -std=c++11 stops it complaining. GCC is fine
with it in either version.
Modern GCC versions allow this even in -std=c++98 mode. However, older versions, like my GCC 3.3.6, do complain and refuse to compile.
So now I wonder which part of C++98 I am violating with this code.
Wikipedia explicitly states that such a feature was added in C++11, and refers to N2253, which says that the syntax was not initially considered invalid by the C++98 standard, but was then intentionally clarified to disallow it (I have no idea how non-static member fields are any different from other variables with regard to their data type). Some time later they decided to make this syntax valid, but not until C++11.
The very same document mentions an ugly workaround, which can also be seen throughout the web:
sizeof(((Class*) 0)->Field)
It looks like simply using 0, NULL or nullptr may trigger compiler warnings for possible dereference of a null pointer (despite the fact that sizeof never evaluates its argument), so an arbitrary non-zero value might be used instead, although it will look like a counter-intuitive “magic constant”. Therefore, in my C++ graceful degradation layer I use:
#if __cplusplus >= 201103L
#define CXX_MODERN 2011
#else
#define CXX_LEGACY 1998
#endif
#ifdef CXX_MODERN
#define CXX_FEATURE_SIZEOF_NONSTATIC
#define CxxSizeOf(TYPE, FIELD) (sizeof TYPE::FIELD)
#else
// Use of `nullptr` may trigger warnings.
#define CxxSizeOf(TYPE, FIELD) (sizeof (reinterpret_cast<const TYPE*>(1234)->FIELD))
#endif
Usage examples:
// On block level:
class SomeHeader {
public:
    uint16_t Flags;
    static CxxConstExpr size_t FixedSize =
#ifdef CXX_FEATURE_SIZEOF_NONSTATIC
        (sizeof Flags)
#else
        sizeof(uint16_t)
#endif
        ;
}; // end class SomeHeader

// Inside a function:
void Foo(void) {
    size_t nSize = CxxSizeOf(SomeHeader, Flags);
} // end function Foo(void)
By the way, note the syntax difference between sizeof(Type) and sizeof Expression: they are formally not the same, even though sizeof(Expression) works, since a parenthesized expression is itself a valid expression. So, the most correct and portable form would be sizeof(decltype(Expression)), but unfortunately it was made available only in C++11; some compilers have provided typeof(Expression) for a long time, but this never was a standard extension.
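For instance, all of the following forms yield 50 for the earlier Foo example (a small sketch, assuming C++11 or later):

#include <iostream>

struct Foo
{
    char name[50];
};

int main()
{
    std::cout << sizeof(Foo::name) << '\n';                          // C++11: non-static member in an unevaluated context
    std::cout << sizeof(decltype(Foo::name)) << '\n';                // sizeof(Type) via decltype
    std::cout << sizeof(static_cast<Foo*>(nullptr)->name) << '\n';   // pre-C++11-style workaround (see warning caveat above)
    return 0;
}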