Suppose you have this:
#include <iostream>

struct Foo {
Foo(unsigned int x) : x(x) {}
unsigned int x;
};
int main() {
Foo f = Foo(-1); // how to get a compiler error here?
std::cout << f.x << std::endl;
}
Is it possible to prevent the implicit conversion?
The only way I could think of is to explicitly provide a constructor that takes an int and generates some kind of runtime error if the int is negative, but it would be nicer if I could get a compiler error for this.
I am almost sure that there is a duplicate, but the closest I could find is this question, which rather asks why the implicit conversion is allowed.
I am interested in both C++11 and pre-C++11 solutions, preferably one that works in both.
Uniform initialization prevents narrowing.
Here is an example that (as requested) fails to compile:
#include <iostream>

struct Foo {
explicit Foo(unsigned int x) : x(x) {}
unsigned int x;
};
int main() {
Foo f = Foo{-1};
std::cout << f.x << std::endl;
}
Simply get used to using the uniform initialization (Foo{-1} instead of Foo(-1)) wherever possible.
EDIT
As an alternative, as requested by the OP in the comments, a solution that also works with C++98 is to declare the constructors taking an int (long int, and so on) as private.
There is actually no need to define them.
Please note that = delete would also be a good solution, as suggested in another answer, but that one too is available only since C++11.
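A minimal sketch of that pre-C++11 approach (the extra overloads are only declared, never defined; which signed types you block is up to you):
struct Foo {
Foo(unsigned int x) : x(x) {}
unsigned int x;
private:
Foo(int);  // declared, never defined
Foo(long); // likewise for long int, and so on
};
int main() {
Foo f(1u);    // fine
// Foo g(-1); // error: Foo::Foo(int) is private
}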
EDIT 2
I'd like to add one more solution, even though it's only valid since C++11.
The idea is based on the suggestion of Voo (see the comments of Brian's response for further details), and uses SFINAE on the constructor's arguments.
Here is a minimal, working example:
#include<type_traits>
struct S {
template<class T, typename = typename std::enable_if<std::is_unsigned<T>::value>::type>
S(T t) { }
};
int main() {
S s1{42u};
// S s2{42}; // this doesn't work
// S s3{-1}; // this doesn't work
}
You can force a compile error by deleting the undesired overload.
Foo(int x) = delete;
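In context, a minimal sketch could look like this (note that a plain Foo(42) would then be rejected as well, which may or may not be what you want):
struct Foo {
Foo(unsigned int x) : x(x) {}
Foo(int x) = delete; // chosen for any int argument, so Foo(-1) no longer compiles
unsigned int x;
};
int main() {
Foo f(42u);   // fine: unsigned argument
// Foo g(-1); // error: use of deleted function 'Foo::Foo(int)'
}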
If you want to be warned on every occurrence of such code, and you're using GCC, use the -Wsign-conversion option.
foo.cc: In function ‘int main()’:
foo.cc:8:19: warning: negative integer implicitly converted to unsigned type [-Wsign-conversion]
Foo f = Foo(-1); // how to get a compiler error here?
^
If you want an error, use -Werror=sign-conversion.
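For example, with the question's code saved as foo.cc, a build line along these lines should turn the conversion into a hard error:
g++ -Werror=sign-conversion foo.cc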
Related
Sometimes for algebraic types it is convenient to have a constructor that takes a literal value 0 to denote the neutral element, or 1 to denote the multiplicative identity element, even if the underlying type is not an integer.
The problem is that it is not obvious how to convince the compiler to accept only 0 or 1 without accepting any other integer.
Is there a way to do this in C++14 or beyond, for example combining literals, constexpr or static_assert?
Let me illustrate with a free function (although the idea is to use the technique for a constructor that takes a single argument; constructors cannot take explicit template arguments either).
A function that accepts zero only could be written in this way:
constexpr void f_zero(int zero){assert(zero==0); ...}
The problem is that this could only fail at runtime. I could write f_zero(2) or even f_zero(2.2) and the program would still compile.
The second case is easy to remove, for example by using enable_if:
template<class Int, typename = std::enable_if_t<std::is_same<Int, int>{}> >
constexpr void g_zero(Int zero){assert(zero==0);}
This still has the problem that I can pass any integer (and it only fails in debug mode).
In pre-C++11 one could use this trick to accept only a literal zero:
struct zero_tag_{};
using zero_t = zero_tag_***;
constexpr void h_zero(zero_t zero){assert(zero==nullptr);}
This actually got one 99% of the way there, except for very ugly error messages, because basically (modulo Machiavellian use) the only argument accepted would be h_zero(0).
This state of affairs is illustrated here: https://godbolt.org/z/wSD9ri .
I saw this technique being used in the Boost.Units library.
1) Can one do better now using new features of C++?
The reason I ask is because with the literal 1 the above technique fails completely.
2) Is there an equivalent trick that can be applied to the literal 1 case? (ideally as a separate function).
I could imagine that one can invent a non-standard long long literal _c that creates an instance of std::integral_constant<int, 0> or std::integral_constant<int, 1> and then make the function take these types. However, the resulting syntax will be worse for the 0 case. Perhaps there is something simpler.
f(0_c);
f(1_c);
EDIT: I should have mentioned that, since f(0) and f(1) are potentially completely separate functions, they should ideally call different functions (or overloads).
In C++20 you can use the consteval keyword to force compile time evaluation. With that you could create a struct, which has a consteval constructor and use that as an argument to a function. Like this:
struct S
{
private:
int x;
public:
S() = delete;
consteval S(int _x)
: x(_x)
{
if (x != 0 && x != 1)
{
// this will trigger a compile error,
// because the allocation is never deleted
// static_assert(_x == 0 || _x == 1); didn't work...
new int{0};
}
}
int get_x() const noexcept
{
return x;
}
};
void func(S s)
{
// use s.get_x() to decide control flow
}
int main()
{
func(0); // this works
func(1); // this also works
func(2); // this is a compile error
}
Here's a godbolt example as well.
Edit:
Apparently clang 10 does not give an error, as seen here, but clang (trunk) on godbolt does.
You can get this by passing the 0 or 1 as a template argument like so:
#include <type_traits>

template <int value, typename = std::enable_if_t<value == 0 || value == 1>>
void f() {
// Do something with value
}
The function would then be called like f<0>(). I don't believe the same thing can be done for constructors (because you can't explicitly specify template arguments for constructors), but you could make the constructor(s) private and have static wrapper functions, which can be given template arguments, perform the check:
class A {
private:
A(int value) { ... }
public:
template <int value, typename = std::enable_if_t<value == 0 || value == 1>>
static A make_A() {
return A(value);
}
};
Objects of type A would be created with A::make_A<0>().
Well... you have tagged C++17, so you can use if constexpr.
So you can define a user-defined literal such that 0_x is a std::integral_constant<int, 0> value, 1_x is a std::integral_constant<int, 1> value, and 2_x (or any other value) gives a compilation error.
For example:
#include <type_traits>
#include <utility>

template <char ... Chs>
auto operator "" _x()
{
using t0 = std::integer_sequence<char, '0'>;
using t1 = std::integer_sequence<char, '1'>;
using tx = std::integer_sequence<char, Chs...>;
if constexpr ( std::is_same_v<t0, tx> )
return std::integral_constant<int, 0>{};
else if constexpr ( std::is_same_v<t1, tx> )
return std::integral_constant<int, 1>{};
}
int main ()
{
auto x0 = 0_x;
auto x1 = 1_x;
//auto x2 = 2_x; // compilation error
static_assert( std::is_same_v<decltype(x0),
std::integral_constant<int, 0>> );
static_assert( std::is_same_v<decltype(x1),
std::integral_constant<int, 1>> );
}
Now your f() function can be
template <int X, std::enable_if_t<(X == 0) || (X == 1), bool> = true>
void f (std::integral_constant<int, X> const &)
{
// do something with X
}
and you can call it as follows
f(0_x);
f(1_x);
For the case of Ada, you can define a subtype, a new type, or a derived type that is constrained to the Integer values 0 and 1.
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
procedure two_value is
-- You can use any one of the following 3 declarations. Just comment out other two.
--subtype zero_or_one is Integer range 0 .. 1; -- subtype of Integer.
--type zero_or_one is range 0 .. 1; -- new type.
type zero_or_one is new Integer range 0 .. 1; -- derived type from Integer.
function get_val (val_1 : in zero_or_one) return Integer;
function get_val (val_1 : in zero_or_one) return Integer is
begin
if (val_1 = 0) then
return 0;
else
return 1;
end if;
end get_val;
begin
Put_Line("Demonstrate the use of only two values");
Put_Line(Integer'Image(get_val(0)));
Put_Line(Integer'Image(get_val(1)));
Put_Line(Integer'Image(get_val(2)));
end two_value;
Upon compiling, you get the following warning messages, although the compilation succeeds:
>gnatmake two_value.adb
gcc -c two_value.adb
two_value.adb:29:40: warning: value not in range of type "zero_or_one" defined at line 8
two_value.adb:29:40: warning: "Constraint_Error" will be raised at run time
gnatbind -x two_value.ali
gnatlink two_value.ali
And executing it gives the runtime error as specified by the compiler
>two_value.exe
Demonstrate the use of only two values
0
1
raised CONSTRAINT_ERROR : two_value.adb:29 range check failed
So, basically, you can constrain the values by defining new types, derived types, or subtypes. You don't need to include code to check the range; based on your data type, the compiler will automatically warn you.
This isn't a modern solution, but adding on to Zach Peltzer's solution, you can keep your syntax if you use macros...
#include <type_traits>

template <int value, typename = std::enable_if_t<value == 0 || value == 1>>
constexpr int f_impl() {
// Do something with value
return 1;
}
#define f(x) f_impl<x>()
int main() {
f(0); //ok
f(1); //ok
f(2); //compile time error
}
Though, for the constructor problem, you could just make the class templated instead of trying to have a templated constructor:
template<int value, typename = std::enable_if_t<value == 0 || value == 1>>
class A {
public:
A() {
//do stuff
}
};
int main() {
A<0> a0;
auto a1 = A<1>();
// auto a2 = A<2>(); //fails!
}
The best solution to accept literal 0 that I've found to date is to use std::nullptr_t as the function's input:
#include <cstddef>   // std::nullptr_t
typedef double real; // assuming 'real' is a floating-point alias in the author's code
struct math_object
{
real x,y,z;
math_object(std::nullptr_t) : x(0), y(0), z(0) {}
};
This has conversion advantages over some of the other solutions. For example, it allows syntax such as void MyFunc(const math_object &obj = 0); I've been using this for years and haven't found any trouble. However, I do not have a similar solution for literal 1. For that, I created a construct::id structure that has a global IDENTITY variable.
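The construct::id structure itself is not shown, but the general shape of such an identity tag might look like the following sketch (the names here are made up for illustration, not the author's actual code):
#include <cstddef> // std::nullptr_t
struct identity_tag {};           // empty tag type standing in for "the literal 1"
const identity_tag IDENTITY = {}; // a named constant to pass around
struct math_object {
double x, y, z;
math_object(std::nullptr_t) : x(0), y(0), z(0) {} // a literal 0 (and only 0) binds here
math_object(identity_tag) : x(1), y(0), z(0) {}   // the IDENTITY constant binds here
};
int main() {
math_object zero = 0;       // ok: the literal 0 converts to std::nullptr_t
math_object one = IDENTITY; // ok: picks the identity overload
// math_object two = 2;     // error: no viable conversion from int
}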
There's a basic problem: how can you have the compiler do that check for a parameter and, at the same time, be efficient? Well, what do you need exactly?
That is included in strongly typed languages like Pascal or Ada. Enumerated types have only a couple of values, and the types are normally checked during development; otherwise, the checks are eliminated at runtime by some compiler option, because everything just goes well.
A function interface is a contract. It is a contract between a seller (the writer of the function) and a buyer (the user of that function). There's even an arbiter, which is the programming language, that can act if someone tries to cheat the contract. But in the end, the program is run on a machine that is open to arbitrariness, like modifying the set of enumerated values and putting in their place a completely different (and not permitted) value.
The problem also comes with separate compilation. Separate compilation has its drawbacks, as it must face a compilation without rechecking and retesting all the previous compilations you have made. Once a compilation is finished, everything you have put in the code is there. If you want the code to be efficient, then the tests are superfluous, because caller and implementer both honor the contract; but if you want to catch a liar, then you have to include the test code. So, is it better to do it once for all cases, or is it better to let the programmer decide when we do and do not want to catch a liar?
The problem with C (and by legacy with C++) is that they were inspired by very good programmers, who didn't make mistakes and who had to run their software on big and slow machines. They decided to make both languages (the second was for interoperability purposes) weakly typed... and so they are. Have you tried to program in Ada? Or Modula-2? You'll see that, over time, the strong typing thing is more academic than anything else, and in the end what you want, as a professional, is the freedom to say: now I want to be safe (and include test code), and now I know what I'm doing (so please be as efficient as you can).
Conclusion
The conclusion is that you are free to select the language, to select the compiler, and to relax the rules. Compilers give you the possibility to do that. You have to cope with it, or invent your own programming language (something that nowadays happens almost every week).
This is the answer to my question, based on IlCapitano's answer, using a wrapper class.
The wrapper class can be made private and used only for construction.
class Matrix {
struct ZeroOROne {
/*implicit*/ consteval ZeroOROne(int n) : val_{n} {
if (n != 0 and n != 1) {throw;} // throw can produce an error at compile time
}
int val_;
};
public:
constexpr Matrix(ZeroOROne _0_or_1) {
if(_0_or_1.val_ == 0) { // this cannot be if constexpr, but that is ok
a00 = a01 = a10 = a11 = 0.0;
new int{0}; // an allocation is ok here
} else {
a00 = a11 = 1.0;
a10 = a01 = 0.0;
new int{0}; // an allocation is ok here
}
}
double a00; double a01;
double a10; double a11;
};
In this way, only Matrix A(0) or Matrix A(1) are allowed.
(It also works with constant variables, not only literals, but that is ok.)
int main() {
// ZeroOROne(0);
// ZeroOROne(1);
// ZeroOROne(2); // compilation error
Matrix A(0);
Matrix B(1);
// Matrix C(2); // compilation error
int const d = 0; // this works because the compiler can "see" the 0.
Matrix D(d);
constexpr int e = 0;
Matrix E(e);
// int f = 0;
// Matrix F(f); // compile error
return B.a00;
}
Here it is shown that the "runtime" if in the constructor is not a problem and can be elided by the compiler: https://godbolt.org/z/hd6TWY6qW
The solution needs C++20, and it works in recent versions of GCC and clang.
EDIT: thanks to the answers, I was able to solve all the issues with my code. I post the solution here: it might be useful to somebody in the future. In particular, the suggestion of using a proxy class proved very useful! The example doesn't consider all the cases, but it should be trivial to add another type to the variant!
I am writing a C++ (C++11, Linux) custom class that sort of behaves like an unordered map {key, value}. I would like to overload the [] operator so that I can use the class with the same syntax as an unordered map: object[key] would return value.
The problem is that I need object[key] to return a variant type. I can store internally value as a string or struct but, when I retrieve it by using object[key], I need the returned value to be an int, float or string depending on some internal condition determined at runtime.
This is why I was thinking about using the boost::variant library ... but I am open to any other suggestion. The only restrictions are that the test class (in the example) has to be compiled as a shared library (.so) and that the code must be C++11 compatible (I mean compilable by GNU g++ 4.8.5).
I wrote a simple example to show what kind of behavior I would like. The example is not meant to mean anything; it is just to illustrate the kind of error that I am getting. The real class that I am writing has a different structure, but the usage of boost::variant and the operator[] overload is the same.
test.cpp
#include <boost/variant.hpp>
typedef boost::variant<int, float> test_t;
class Test
{
int i ;
float f;
void set(int randomint, test_t tmp){
if ( randomint == 0 ) i = boost::get<int>(tmp);
else f = boost::get<float>(tmp);
}
test_t get(int randomint){
if ( randomint == 0 ) return i;
else return f;
}
struct IntOrFloat {
int randomint;
Test *proxy;
explicit operator int () const
{ return boost::get<int>(proxy->get(randomint)); }
void operator= (int tmp)
{ proxy->set(randomint, tmp); }
explicit operator float () const
{ return boost::get<float>(proxy->get(randomint)); }
void operator= (float tmp)
{ proxy->set(randomint, tmp); }
};
public:
IntOrFloat operator [](int randomint)
{ return IntOrFloat{randomint, this}; }
const IntOrFloat operator [](int randomint) const
{ return IntOrFloat{randomint, (Test *) this}; }
};
main.cpp
#include <iostream>
#include <boost/variant.hpp>
#include "test.cpp"
#define INTEGER 0
#define FLOAT 1
int main (void) {
Test test;
int i = 3;
float f = 3.14;
test[INTEGER] = i;
test[FLOAT] = f;
int x = (int) test[INTEGER];
float y = (float) test[FLOAT];
std::cout << x << std::endl;
std::cout << y << std::endl;
return 0;
}
To compile and run
g++ -fPIC -std=c++11 -shared -rdynamic -o test.so test.cpp
g++ -std=c++11 -o test main.cpp -Lpath/to/the/test.so -l:test.so
LD_LIBRARY_PATH="path/to/the/test.so" ./test
In C++, overload resolution does not happen on the return type, so given
int foo() { return 0; }
float foo() { return 0.f; }
there is no sanctioned way for the compiler to differentiate
int x = foo();
float f = foo();
There is a trick using conversion operator overloads:
#include <iostream>
struct IntOrFloat {
operator int () const {
std::cout << "returning int\n";
return 0;
}
operator float () const {
std::cout << "returning float\n";
return 0.f;
}
};
IntOrFloat foo() { return IntOrFloat(); }
int main () {
int x = foo();
float f = foo();
}
You can force more verbosity by making the conversion explicit:
explicit operator int () const ...
explicit operator float () const ...
int x = static_cast<int>(foo());
int x = float(foo()); // old-style-cast
This proxy (or other conversion operator tricks) is as far as you'll get in simulating return type overload resolution.
The idea once arose while searching for a solution to support <euclidean vector> * <euclidean vector> syntax, i.e. an operator* which means either dot product or vector product, depending on the type of the variable the product is assigned to.
In the end, it was not really practical and did not contribute positively to readability. The more verbose forms dot(vec, vec) and cross(vec, vec) were superior for several reasons, among which:
principle of least surprise: the computer graphics community is used to the terms "dot" and "cross"
less cryptic error messages: because this proxy technique is not idiomatic in C++, people are not used to the kind of error messages this temporal indirection yields
temporal and/or spatial locality: you are essentially returning a closure with code in it, which can be executed many times at many places. This can be doubly bad, as it does not (or, actually, does) work well with auto&-style declarations:
int main () {
const auto &f = foo();
const int g = f;
const int h = f;
std::cout << (int)f << "\n";
}
This prints something multiple times, which does not exactly go hand in hand with the principle of least surprise. Of course, this becomes less severe if your proxy basically just forwards readily available values. But the error messages won't become any better!
Note you can also incorporate template conversion operator overloads and wild metaprogramming. While worth a fun experiment, this is not something I'd love to put into a production code base, for maintainability and readability will decrease even further.
What remains? Infinite possibilities; but some of the most feasible:
Variant datatypes
Tuple datatypes (look into std::tuple, which comes with conversion operators in case of distinct member types)
Different idioms (e.g. named methods instead of operator method)
Different algorithms
Different data structures
Different design patterns
When you use return i, what's happening underneath the hood is the creation of a temporary of type test_t that encapsulates that int value. This works fine in the function test::test_variant because the return type is test_t. This cannot work in the function test::operator[] because the return type is test_t&. The language prohibits creating a modifiable (l-value) reference to a temporary.
One way to make this work is to add a data member of type test_t to your class, with your test function operator[] setting this member and returning it rather than returning a temporary. Your real class will most likely do something different.
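A minimal sketch of that suggestion, assuming the earlier revision where operator[] returned a test_t& (the member name last_ is made up here):
#include <boost/variant.hpp>
typedef boost::variant<int, float> test_t;
class Test
{
int i = 0;
float f = 0.f;
test_t last_; // backing storage for the returned reference
public:
test_t& operator [](int randomint)
{
if (randomint == 0) last_ = i; // copy the value into the member...
else last_ = f;
return last_; // ...and return a reference to the member, not to a temporary
}
};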
I'm looking for the answer to the following question: is may_alias suitable as an attribute for a pointer to an object of some class Foo, or must it be used at class level only?
Consider the following code (it is based on a more complex real-world example):
#include <iostream>
using namespace std;
#define alias_hack __attribute__((__may_alias__))
template <typename T>
class Foo
{
private:
/*alias_hack*/ char Data[sizeof (T)];
public:
/*alias_hack*/ T& GetT()
{
return *((/*alias_hack*/ T*)Data);
}
};
struct Bar
{
int Baz;
Bar(int baz)
: Baz(baz)
{}
} /*alias_hack*/; // <- uncommenting this line apparently solves the problem, but does so on class level (rather than pointer level)
// uncommenting previous alias_hack's doesn't help
int main()
{
Foo<Bar> foo;
foo.GetT().Baz = 42;
cout << foo.GetT().Baz << endl;
}
Is there any way to tell gcc that single pointer may_alias some another?
BTW, please note that gcc's detection mechanism for this kind of problem is imperfect, so it is very easy to just make the warning go away without actually solving the problem.
Consider the following snippet of code:
#include <iostream>
using namespace std;
int main()
{
long i = 42;
long* iptr = &i;
//(*(short*)&i) = 3; // with warning
//(*(short*)iptr) = 3; // without warning
cout << i << endl;
}
Uncomment one of the lines to see the difference in compiler output.
Simple answer - sorry, no.
__attribute__ gives instructions to the compiler. Objects exist in the memory of the executing program. Hence nothing in the __attribute__ list can relate to run-time execution.
Dimitar is correct: may_alias is a type attribute. It can only apply to a type, not to an instance of the type. What you'd like is what gcc calls a "variable attribute". It would not be easy to disable optimizations for one specific pointer: what would the compiler do if you call a function with this pointer? The function is potentially already compiled and will behave based on the type passed to the function, not based on the address stored in the pointer (you should see now why this is a type attribute).
Now, depending on your code, something like this might work:
#define define_may_alias_type(X) class X ## _may_alias : public X { } __attribute__ ((may_alias));
You'd just pass your pointer as Foo_may_alias * (instead of Foo *) when it might alias. That's hacky though
Regarding your question about the warning: it's because -Wall defaults to -Wstrict-aliasing=3, which is not 100% accurate. Actually, -Wstrict-aliasing is never 100% accurate, but depending on the level you'll get more or fewer false negatives (and false positives). If you pass -Wstrict-aliasing=1 to gcc, you'll see a warning for both lines.
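For instance, with the snippet above saved as main.cpp and one of the lines uncommented, something like this should show the warning at the stricter level:
g++ -O2 -Wstrict-aliasing=1 -c main.cpp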
Here's the code. Is it possible to make last line work?
#include<iostream>
using namespace std;
template <int X, int Y>
class Matrix
{
int matrix[X][Y];
int x,y;
public:
Matrix() : x(X), y(Y) {}
void print() { cout << "x: " << x << " y: " << y << endl; }
};
template < int a, int b, int c>
Matrix<a,c> Multiply (Matrix<a,b>, Matrix<b,c>)
{
Matrix<a,c> tmp;
return tmp;
}
int main()
{
Matrix<2,3> One;
One.print();
Matrix<3,5> Two;
(Multiply(One,Two)).print(); // this works perfect
Matrix Three=Multiply(One,Two); // !! THIS DOESNT WORK
return 0;
}
In C++11 you can use auto to do that:
auto Three=Multiply(One,Two);
In current C++ you cannot do this.
One way to avoid having to spell out the type's name is to move the code dealing with Three into a function template:
template< int a, int b, int c >
void do_something_with_it(const Matrix<a,b>& One, const Matrix<b,c>& Two)
{
Matrix<a,c> Three = Multiply(One,Two);
// ...
}
int main()
{
Matrix<2,3> One;
One.print();
Matrix<3,5> Two;
do_something_with_it(One,Two);
return 0;
}
Edit: A few more notes to your code.
Be careful with using namespace std;, as it can lead to very nasty surprises.
Unless you plan to have matrices with negative dimensions, using unsigned int or, even more appropriate, std::size_t would be better for the template arguments.
You shouldn't pass matrices by copy. Pass by const reference instead.
Multiply() could be spelled operator*, which would allow Matrix<2,5> Three = One * Two;
print should probably take the stream to print to as a std::ostream& parameter. And I'd prefer it to be a free function instead of a member function. I would contemplate overloading operator<< instead of naming it print. (A sketch of these last two points follows.)
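Built on the question's Matrix and Multiply (and printing only the dimensions, as the original print() did), it might look like this:
template <int a, int b, int c>
Matrix<a,c> operator* (const Matrix<a,b>& lhs, const Matrix<b,c>& rhs)
{
return Multiply(lhs, rhs);
}
template <int X, int Y>
std::ostream& operator<< (std::ostream& os, const Matrix<X,Y>&)
{
return os << "x: " << X << " y: " << Y;
}
// usage:
// Matrix<2,3> One;
// Matrix<3,5> Two;
// std::cout << One * Two << std::endl;
// Matrix<2,5> Three = One * Two;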
This won't be possible in C++03, but C++0x offers auto.
auto Three=Multiply(One,Two);
No, when using a class template, you have to specify all template arguments explicitly.
If your compiler supports it, you can use auto from C++0x instead:
auto Three=Multiply(One,Two);
In g++, you can enable C++0x support using the -std=c++0x flag.
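For example, an invocation might look like this (file name assumed):
g++ -std=c++0x -o matrix main.cpp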
Templates are resolved at compile time and are used to implement static polymorphism. This means you should know everything about your objects at the moment your code is being compiled.
Hence the compiler fails here, because it would be too hard for it to know that Three should have dimensions (2,5) (at least under the currently common standard).
If this is a just-to-know question, then OK, but in real code you should obviously use constructors to initialize the matrix (and set its dimensions).
I know that sizeof is a compile-time calculation, but this seems odd to me: the compiler can take either a type name or an expression (from which it deduces the type). But how do you identify the type of a member within a class? It seems the only way is to pass an expression, which seems pretty clunky.
struct X { int x; };
int main() {
// return sizeof(X::x); // doesn't work
return sizeof(X().x); // works, but requires X to be default-constructible
}
An alternate method works without needing a default constructor:
return sizeof(((X *)0)->x);
You can wrap this in a macro so it reads better:
#define member_sizeof(T,F) sizeof(((T *)0)->F)
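A quick usage sketch with the question's struct X:
int main() {
return member_sizeof(X, x); // same value as sizeof(int)
}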
Here is a solution without the nasty null pointer dereferencing ;)
#include <iostream>

struct X { int x; };
template<class T> T make(); // note it's only a declaration
int main()
{
std::cout << sizeof(make<X>().x) << std::endl;
}
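For what it's worth, C++11 provides essentially the same trick as std::declval, so (as a small sketch) the sizeof can also be written this way; the operand of sizeof is unevaluated, so the missing definition is fine:
#include <iostream>
#include <utility> // std::declval
struct X { int x; };
int main()
{
std::cout << sizeof(std::declval<X>().x) << std::endl;
}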
What about offsetof? Have a look here. Also have a look here, which combines both sizeof and offsetof into a macro.
Hope this helps.