Why is boost::checked_delete "intentionally complex"? - c++

So I was looking through some boost source code and came across this:
(from <boost/checked_delete.hpp>)
template<class T> inline void checked_delete(T * x)
{
    // intentionally complex - simplification causes regressions
    typedef char type_must_be_complete[ sizeof(T)? 1: -1 ];
    (void) sizeof(type_must_be_complete);
    delete x;
}
Anyone happen to know why it is implemented in this way? Wouldn't sizeof(T) (for example) already be enough?

Someone asked the same question earlier. This post, written by Peter Dimov (one of the authors of boost/checked_delete.hpp), pretty much speaks for itself:
What's the result of applying sizeof to an incomplete type?
A compile-time error, unless the compiler chooses to return 0 as a
nonstandard extension.
Why is sizeof called twice?
The second sizeof is a workaround for a Metrowerks CodeWarrior bug in which the first typedef is never instantiated unless it is used.
Why is the result of sizeof cast to void? What exactly does that line do?
Silences a compiler warning.

This is just a guess, but there may be compilers out there that merely emit a warning when you write sizeof(incomplete_type) and evaluate it to 0. The array declaration guards against that case: sizeof(T) ? 1 : -1 then yields -1, and an array of size -1 is ill-formed, so the compile still fails.
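For illustration, here is a minimal sketch of the failure mode checked_delete guards against (the Widget class is hypothetical); the checked_delete line is expected to fail to compile:
#include <boost/checked_delete.hpp>

class Widget; // forward declaration only: Widget is an incomplete type here

void destroy(Widget* w)
{
    // delete w;              // may compile with just a warning, yet it is
                              // undefined behaviour if Widget has a
                              // non-trivial destructor
    boost::checked_delete(w); // compile error: sizeof(Widget) is invalid
                              // for an incomplete type
}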

Related

Warnings on too-small array sizes to functions in C/C++

Are there any tools (preferably on Linux) that can warn when an argument is defined as a smaller array than the prototype specifies?
e.g.:
void somefunc(float arg[10]); /* normally this would be defined in a header */

void my_func(void)
{
    float arg[2];
    somefunc(arg); /* <-- this could be a warning */
}
I realize this isn't invalid code, but it could catch some common mistakes if it were possible to warn about it (I ran into one of these bugs recently).
Some tools (the clang static analyzer, for example) will warn if the function is in the same file and writes a value outside the array bounds, but I was wondering if anything will warn, from the prototype alone, when the argument is smaller.
I've used cppcheck, clang, smatch, splint, gcc's -Wextra... but none of them complain about this.
The value in the prototype has no meaning to the compiler and is ignored! The function declared above is equivalent to
void somefunc(float* arg);
and
void somefunc(float arg[]);
When using C++ you can deal with the size restriction at compile-time using references. If you really mean to have an array of 10 floats, you can pass it by reference which will enforce that the size is correct:
void somefunc(float (&arg)[10]);
However, this will prevent bigger arrays from being passed. You can play with a template forwarding function if you want to pass bigger arrays:
#include <type_traits>

void somefunc_intern(float* arg);

template <int Size>
typename std::enable_if<(10 <= Size)>::type
somefunc(float (&arg)[Size]) {
    somefunc_intern(arg);
}
Of course, this won't generate a warning but an error if a too small array is passed.
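For example, given the definitions above (and assuming somefunc_intern is defined elsewhere), a too-small array is rejected at compile time:
int main()
{
    float big[20];
    float small[5];
    somefunc(big);      // OK: 10 <= 20, forwards to somefunc_intern
    // somefunc(small); // error: 10 <= 5 is false, enable_if removes the overload
}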
In the C language, the array bound in the float arg[10] parameter is merely stylistic: it is a hint to the programmer, not to the compiler. Since C has weak type checking, you can pass any kind of float pointer or array to the function. One may argue that a programmer who doesn't read the function documentation before passing parameters is asking for trouble, but there is of course always the potential for accidental bugs.
Good compilers will warn about this. If you have a bad compiler, which does not warn, you should indeed consider using an external static analysis tool; they are notoriously picky about suspicious type conversions. Lint comes in a Linux version; I haven't used it, but it is known as an affordable alternative to the big and complex tools.
Theoretically, you could write code that causes the compiler to produce more warnings, but it obfuscates the program. I wouldn't recommend it; it would look like this:
void somefunc(float (*arr_ptr)[10])
{
    float* arg = *arr_ptr;
    ...
}

int main()
{
    float ten[10];
    float two[2];

    somefunc(ten);     // warning: float* passed where float (*)[10] is expected
    somefunc(&ten);    // ok: the types match exactly
    somefunc(two);     // warning
    somefunc(&two);    // warning: float (*)[2] passed

    float (*ten_ptr)[10] = &ten;
    float (*two_ptr)[2] = &two;
    somefunc(ten_ptr); // ok
    somefunc(two_ptr); // warning
}
Since asking this question, cppcheck has added this feature in response to my suggestion (thanks guys!).
Commit:
https://github.com/danmar/cppcheck/commit/7f6a10599bee61de0c7ee90054808de00b3ae92d
Issue:
http://sourceforge.net/apps/trac/cppcheck/ticket/4262
At the time of writing this isn't yet in a release, but I assume it will be in the next release.
A perfectly ordinary C++ compiler will give you a compile error if you use std::array<T, N> instead of C arrays.
So just do that?
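A rough sketch of that approach (the function and variable names are made up):
#include <array>

void somefunc(const std::array<float, 10>& arg) { /* ... */ }

int main()
{
    std::array<float, 10> ten{};
    std::array<float, 2>  two{};
    somefunc(ten);    // ok
    // somefunc(two); // error: std::array<float, 2> is a different type
}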

What's the purpose of dummy addition in this "number of elements" macro?

Visual C++ 10 ships with a stdlib.h that, among other things, contains this gem:
template <typename _CountofType, size_t _SizeOfArray>
char (*__countof_helper(UNALIGNED _CountofType (&_Array)[_SizeOfArray]))[_SizeOfArray];
#define _countof(_Array) (sizeof(*__countof_helper(_Array)) + 0)
which uses a clever template trick to deduce the array size and prevent pointers from being passed to _countof.
What's the purpose of + 0 in the macro definition? What problem does it solve?
Quoting STL (Stephan T. Lavavej):
I made this change; I don't usually hack the CRT, but this one was
trivial. The + 0 silences a spurious "warning C6260: sizeof * sizeof
is usually wrong. Did you intend to use a character count or a byte
count?" from /analyze when someone writes _countof(arr) * sizeof(T).
What's the purpose of + 0 in the macro definition? What problem does
it solve?
I don't feel it solves any problem. It might be used to silence some warning as mentioned in another answer.
On an important note, the following is another way of finding the array size at compile time (personally I find it more readable):
template<unsigned int SIZE>
struct __Array { char a[SIZE]; };

template<typename T, unsigned int SIZE>
__Array<SIZE> __countof_helper(const T (&)[SIZE]);

#define _countof(_Array) (sizeof(__countof_helper(_Array)))
[P.S.: Consider this as a comment]
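Either variant can be exercised like this (a small sketch; the static_assert form requires C++11):
int arr[7];
static_assert(_countof(arr) == 7, "size is deduced at compile time");

int* p = arr;
// _countof(p); // error: a pointer does not bind to T (&)[SIZE]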

Unary + operator on an int&

I have the following statement and it compiles:
static unsigned char CMD[5] = {0x10,0x03,0x04,0x05,0x06};

int Class::functionA(int *buflen)
{
    ...
    int length = sizeof(CMD); + *buflen; // compiler should cry! why not?
    ...
}
Why I get no compiler error?
+ *buflen;
is a valid application of the unary + operator to an int&; it's basically a no-op. It's the same as if you wrote this:
int i = 5;
+i; // noop
Because it isn't wrong, just a statement with no effect.
If you compile (gcc/g++) with the flag -Wall, you'll see a warning, as shown below.
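For instance, compiling the snippet from the question with GCC prints something along these lines (the exact wording varies by version):
$ g++ -Wall test.cpp
test.cpp:6: warning: statement has no effect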
I guess from this question's title that you think there can only be one command/statement per line?
As you noticed, this is false. C++ and C are free-form languages (which means that you can arrange the symbols in any way you see fit). The semicolon is just a statement terminator.
You may write foo();bar(); or
foo();
bar();
Both (and more) arrangements are totally fine. By the way, that's a feature, not a bug. Some languages (Python, early Fortran) don't have that property.
As others have correctly pointed out, your specific statement is a no-op, a statement without any effect. Some compilers might warn you about that - but no compiler will warn you about multiple statements on one line.

MSVC++ erroring on a divide by 0 that will never happen! fix?

const int bob = 0;
if(bob)
{
    int fred = 6/bob;
}
you will get an error on the line where the divide is done:
"error C2124: divide or mod by zero"
which is lame, because it is just as certain that the 'if' check will fail as it is that the divide would result in a div by 0. Quite frankly, I see no reason for the compiler to even evaluate anything inside the 'if', except to assure brace integrity.
Anyway, obviously that example isn't my problem; my problem comes when doing complicated template stuff to try to do as much at compile time as possible, where in some cases arguments may be 0.
Is there any way to fix this error? Or disable it? Or any better workaround than this:
currently the only work around I can think of (which I've done before when I encountered the same problem with recursive enum access) is to use template specialization to do the 'if'.
Oh yeah, I'm using Visual Studio Professional 2005 SP1 with the vista/win7 fix.
I suppose your compiler tries to evaluate the expression at compile time since bob is declared const, so that the initial value of fred can be determined up front. Maybe you can prevent this by declaring bob non-const or by using the volatile keyword.
Can you provide more detail on what you're trying to do with templates? Perhaps you can use a specialised template for 0 that does nothing like in the good old Factorial example and avoid the error altogether.
template <int N>
struct Blah
{
    enum { value = 6 / N };
};

template <>
struct Blah<0>
{
    enum { value = 0 };
};
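With the specialization in place, instantiating with zero no longer triggers the division (a brief sketch):
int main()
{
    int a = Blah<3>::value; // 2, computed at compile time
    int b = Blah<0>::value; // 0, the specialization avoids 6 / 0
    return a + b;
}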
The problem - and the compiler has no choice in this - is that bob is an Integral Constant Expression (ICE), as is 6. Therefore 6/bob is also an ICE, and must be evaluated at compile time.
There's a very simple solution: inline int FredFromBob(int bob) { return 6/bob; } - a function call expression is never an ICE, even if the function is trivial and declared inline.
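That is, with the inline function the original snippet compiles cleanly:
inline int FredFromBob(int bob) { return 6 / bob; }

int main()
{
    const int bob = 0;
    if (bob)
    {
        int fred = FredFromBob(bob); // fine: a function call is never an ICE,
                                     // so nothing is divided at compile time
        (void) fred;
    }
}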

What does static_assert do, and what would you use it for?

Could you give an example where static_assert(...) (C++11) would solve the problem at hand elegantly?
I am familiar with run-time assert(...). When should I prefer static_assert(...) over regular assert(...)?
Also, in boost there is something called BOOST_STATIC_ASSERT, is it the same as static_assert(...)?
Static assert is used to make assertions at compile time. When a static assertion fails, the program simply doesn't compile. This is useful in different situations; for example, if you implement some functionality in code that critically depends on unsigned int having exactly 32 bits, you can put a static assert like this
#include <climits>

static_assert(sizeof(unsigned int) * CHAR_BIT == 32, "unsigned int must be exactly 32 bits");
in your code. On another platform, with a differently sized unsigned int, the compilation will fail, drawing the developer's attention to the problematic portion of the code and advising them to re-implement or re-inspect it.
For another example, you might want to pass some integral value as a void * pointer to a function (a hack, but useful at times) and you want to make sure that the integral value will fit into the pointer
int i;

static_assert(sizeof(void *) >= sizeof i, "int must fit into a void *");
foo((void *) i);
You might want to assert that the char type is signed
static_assert(CHAR_MIN < 0, "char must be a signed type");
or that integral division with negative values rounds towards zero
static_assert(-5 / 2 == -2, "integer division must round towards zero");
And so on.
Run-time assertions can in many cases be used instead of static assertions, but run-time assertions only work at run time, and only when control passes over the assertion. For this reason a failing run-time assertion may lie dormant, undetected, for extended periods of time.
Of course, the expression in a static assertion has to be a compile-time constant; it can't be a run-time value. For run-time values you have no choice but to use the ordinary assert, as contrasted below.
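A minimal contrast between the two (resize_buffer is a made-up example function):
#include <cassert>

void resize_buffer(int n)
{
    assert(n > 0); // run-time: fires only if this function is actually called

    // compile-time: verified on every build, even if this function is never called
    static_assert(sizeof(int) >= 4, "int is too small for this code");
}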
Off the top of my head...
#include "SomeLibrary.h"
static_assert(SomeLibrary::Version > 2,
"Old versions of SomeLibrary are missing the foo functionality. Cannot proceed!");
class UsingSomeLibrary {
// ...
};
Assuming that SomeLibrary::Version is declared as a static const, rather than being #defined (as one would expect in a C++ library).
Contrast with having to actually compile SomeLibrary and your code, link everything, and run the executable only then to find out that you spent 30 minutes compiling an incompatible version of SomeLibrary.
@Arak, in response to your comment: yes, you can have static_assert just sitting out wherever, from the look of it:
class Foo
{
public:
    static const int bar = 3;
};

static_assert(Foo::bar > 4, "Foo::bar is too small :(");

int main()
{
    return Foo::bar;
}
$ g++ --std=c++0x a.cpp
a.cpp:7: error: static assertion failed: "Foo::bar is too small :("
I use it to ensure that my assumptions about compiler behaviour, headers, libraries and even my own code are correct. For example, here I verify that the struct has been correctly packed to the expected size:
// Uint32 and Uint16 are assumed to be project-specific fixed-width typedefs.
// The pack pragmas bracket the whole definition so the 1-byte packing
// reliably applies to the struct.
#pragma pack(push, 1)
struct LogicalBlockAddress
{
    Uint32 logicalBlockNumber;
    Uint16 partitionReferenceNumber;
};
#pragma pack(pop)

BOOST_STATIC_ASSERT(sizeof(LogicalBlockAddress) == 6);
In a class wrapping stdio.h's fseek(), I have taken some shortcuts with enum Origin and check that those shortcuts align with the constants defined by stdio.h:
uint64_t BasicFile::seek(int64_t offset, enum Origin origin)
{
    BOOST_STATIC_ASSERT(SEEK_SET == Origin::SET);
    // ...
}
You should prefer static_assert over assert when the behaviour is defined at compile time, and not at runtime, such as the examples I've given above. An example where this is not the case would include parameter and return code checking.
BOOST_STATIC_ASSERT is a pre-C++0x macro that generates illegal code if the condition is not satisfied. The intention is the same, although static_assert is standardised and may provide better compiler diagnostics.
BOOST_STATIC_ASSERT is a cross-platform wrapper for static_assert functionality.
Currently I am using static_assert to enforce "Concepts" on a class.
For example:
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_base_of.hpp>
#include <limits>

template <typename T, typename U>
struct Type
{
    // note the extra parentheses around the first condition: the comma in the
    // template argument list would otherwise split the macro arguments
    BOOST_STATIC_ASSERT((boost::is_base_of<T, Interface>::value));
    BOOST_STATIC_ASSERT(std::numeric_limits<U>::is_integer);
    /* ... more code ... */
};
This will cause a compile-time error if any of the above conditions is not met.
One use of static_assert might be to ensure that a structure (that is, an interface with the outside world, such as a network packet or file format) is exactly the size that you expect. This would catch cases where somebody adds or modifies a member of the structure without realising the consequences. The static_assert would pick it up and alert the user.
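For instance, a sketch along those lines (WireHeader and its layout are made up):
#include <cstdint>

struct WireHeader // hypothetical on-the-wire layout
{
    std::uint32_t magic;
    std::uint16_t version;
    std::uint16_t flags;
};

// catches anyone adding or widening a member without updating
// the code that reads and writes this structure
static_assert(sizeof(WireHeader) == 8, "WireHeader layout changed");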
In the absence of concepts, one can use static_assert for simple and readable compile-time type checking, for example in templates:
#include <type_traits>

template <class T>
void MyFunc(T value)
{
    static_assert(std::is_base_of<MyBase, T>::value,
                  "T must be derived from MyBase");
    // ...
}
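Given the template above, a quick sketch of how it behaves (MyBase and Derived are made up):
struct MyBase { };
struct Derived : MyBase { };

int main()
{
    MyFunc(Derived{}); // OK: Derived is derived from MyBase
    // MyFunc(42);     // error: the static_assert fires for int
}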
This doesn't directly answer the original question, but it makes an interesting study of how to enforce such compile-time checks prior to C++11.
Chapter 2 (Section 2.1) of Modern C++ Design by Andrei Alexandrescu implements this idea of compile-time assertions like this:
template<bool> struct CompileTimeError;
template<> struct CompileTimeError<true> {};

#define STATIC_CHECK(expr, msg) \
{ CompileTimeError<((expr) != 0)> ERROR_##msg; (void)ERROR_##msg; }
Compare the macro STATIC_CHECK() with static_assert():
STATIC_CHECK(0, COMPILATION_FAILED);
static_assert(0, "compilation failed");
To add on to all the other answers, it can also be useful when using non-type template parameters.
Consider the following example.
Let's say you want to define some kind of function whose particular functionality can be somewhat determined at compile time, such as the trivial function below, which returns a random integer in a range determined at compile time. You want to check, however, that the minimum value in the range is less than the maximum value.
Without static_assert, you could do something like this:
#include <cstdlib>
#include <ctime>
#include <random>
#include <stdexcept>

template <int min, int max>
int get_number() {
    if constexpr (min >= max) {
        throw std::invalid_argument("Min. val. must be less than max. val.\n");
    }
    srand((unsigned int) time(nullptr));
    static std::uniform_int_distribution<int> dist{min, max};
    std::mt19937 mt{(unsigned int) rand()};
    return dist(mt);
}
If min < max, all is fine and the if constexpr branch is discarded at compile time. However, if min >= max, the program still compiles, but now you have a function that, when called, will throw an exception with 100% certainty. Thus, in the latter case, even though the "error" (of min being greater than or equal to max) was present at compile time, it will only be discovered at run time.
This is where static_assert comes in.
Since static_assert is evaluated at compile-time, if the boolean constant expression it is testing is evaluated to be false, a compile-time error will be generated, and the program will not compile.
Thus, the above function can be improved as so:
#include <cstdlib>
#include <ctime>
#include <random>

template <int min, int max>
int get_number() {
    static_assert(min < max, "Min. value must be less than max. value.\n");
    srand((unsigned int) time(nullptr));
    static std::uniform_int_distribution<int> dist{min, max};
    std::mt19937 mt{(unsigned int) rand()};
    return dist(mt);
}
Now, if the function template is instantiated with a value for min that is greater than or equal to max, static_assert will evaluate its boolean constant expression as false and issue a compile-time error, alerting you to the mistake immediately, without giving it the opportunity to surface as a run-time exception.
(Note: the above method is just an example and should not be used for generating random numbers. Repeated calls in quick succession will generate the same numbers, because the seed passed to the std::mt19937 constructor through rand() stays the same for as long as time(nullptr) returns the same value. Also, the range of values generated by std::uniform_int_distribution is a closed interval, so the same value could legally be passed for its upper and lower bounds, though there wouldn't be any point.)
The static_assert can be used to forbid the use of the delete keyword this way:
#define delete static_assert(0, "The keyword \"delete\" is forbidden.");
Every modern C++ developer may want to do this when using a conservative garbage collector: in that scheme you use only classes and structs that overload operator new to allocate memory on the collector's conservative heap, and the collector itself is initialized by calling a function at the beginning of the main function.
For example, every modern C++ developer who wants to use the Boehm-Demers-Weiser conservative garbage collector will write, at the beginning of the main function:
GC_init();
And in every class and struct overload the operator new this way:
void* operator new(size_t size)
{
    return GC_malloc(size);
}
Now that operator delete is no longer needed, because the Boehm-Demers-Weiser conservative garbage collector takes responsibility for freeing and deallocating every block of memory once it is no longer used, the developer wants to forbid the delete keyword.
One way is overloading the delete operator this way:
void operator delete(void* ptr)
{
    assert(0);
}
But this is not recommended, because then the developer only finds out at run time that the delete operator was invoked by mistake; it is better to learn this sooner, at compile time.
So the best solution to this scenario in my opinion is to use the static_assert as shown in the beginning of this answer.
Of course this can also be done with BOOST_STATIC_ASSERT, but I think static_assert is better and should always be preferred.