Macro definition ARRAY_SIZE - c++

I encountered the following macro definition when reading the globals.h in the Google V8 project.
// The expression ARRAY_SIZE(a) is a compile-time constant of type
// size_t which represents the number of elements of the given
// array. You should only use ARRAY_SIZE on statically allocated
// arrays.
#define ARRAY_SIZE(a) \
    ((sizeof(a) / sizeof(*(a))) / \
     static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))
My question is about the latter part: static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))). One thing in my mind is the following: since the latter part will always evaluate to 1, which is of type size_t, the whole expression will be promoted to size_t.
If this assumption is correct, then another question follows: since the result type of the sizeof operator is size_t, why is such a promotion necessary? What's the benefit of defining a macro in this way?

As explained, this is a feeble (*) attempt to secure the macro against use with pointers (rather than true arrays) where it would not correctly assess the size of the array. This of course stems from the fact that macros are pure text-based manipulations and have no notion of AST.
Since the question is also tagged C++, I would like to point out that C++ offers a type-safe alternative: templates.
#ifdef __cplusplus
template <size_t N> struct ArraySizeHelper { char _[N]; };
template <typename T, size_t N>
ArraySizeHelper<N> makeArraySizeHelper(T(&)[N]);
# define ARRAY_SIZE(a) sizeof(makeArraySizeHelper(a))
#else
# // C definition as shown in Google's code
#endif
Alternatively, you will soon be able to use constexpr:
template <typename T, size_t N>
constexpr size_t size(T (&)[N]) { return N; }
However, my favorite compiler (Clang) still does not implement constexpr :x
In both cases, because the function does not accept pointer parameters, you get a compile-time error if the type is not right.
(*) feeble in that it does not work for small objects where the size of the objects is a divisor of the size of a pointer.
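To make the footnote concrete, here is a sketch (hypothetical names, assuming a typical 64-bit platform with 8-byte pointers) of where the divisibility check does and does not fire:
#include <cstddef>

#define ARRAY_SIZE(a) \
    ((sizeof(a) / sizeof(*(a))) / \
     static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))

struct Big { char data[24]; };   // 8 % 24 != 0, so the check can fire

int main() {
    char *cp = nullptr;
    Big  *bp = nullptr;
    size_t wrong = ARRAY_SIZE(cp);    // passes the check (8 % 1 == 0), silently yields 8
    // size_t caught = ARRAY_SIZE(bp); // division by zero in a constant -> compile-time error
    (void)wrong; (void)bp;
}
The char* case is exactly the feebleness footnoted above: sizeof(char) divides sizeof(char*), so the macro happily returns the wrong answer.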
Just a demonstration that it is a compile-time value:
#include <cstddef>
#include <iostream>

template <size_t N> void print() { std::cout << N << "\n"; }

int main() {
    int a[5];
    print<ARRAY_SIZE(a)>();
}
See it in action on IDEONE.

the latter part will always evaluate to 1, which is of type size_t,
Ideally, the latter part will evaluate to bool (i.e. true/false), and the static_cast<> converts it to size_t.
why is such a promotion necessary? What's the benefit of defining a
macro in this way?
I don't know if this is the ideal way to define such a macro. However, one hint is in the comment: // You should only use ARRAY_SIZE on statically allocated arrays.
Suppose someone passes a pointer instead of an array; then it fails at compile time for struct element types whose size exceeds (or does not divide) the pointer size.
struct S { int i, j, k, l; };
S *p = new S[10];
ARRAY_SIZE(p); // compile time failure !
[Note: as said, this technique may not show any error for int* or char*.]

If sizeof(a) % sizeof(*a) is non-zero (i.e. sizeof(a) is not an integral multiple of sizeof(*a)), then !(sizeof(a) % sizeof(*a)) evaluates to 0 and the compiler gives you a division-by-zero error at compile time.
I can only assume the author of the macro was burned in the past by something that didn't pass that test.

In the Linux kernel, the macro is defined as (GCC specific):
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
where __must_be_array() is
/* &a[0] degrades to a pointer: a different type from an array */
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
and __same_type() is
#define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
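__builtin_types_compatible_p and typeof are GCC extensions for C; as a rough C++ analogue of the kernel's must-be-array idea, one could sketch the following with standard type traits (my illustration, not kernel code):
#include <cstddef>
#include <type_traits>

// Fails to compile unless the argument is a true array,
// mirroring the intent of __must_be_array in portable C++11.
template <typename T>
constexpr std::size_t must_be_array_size(T&) {
    static_assert(std::is_array<T>::value, "argument must be an array, not a pointer");
    return std::extent<T>::value;
}

int table[7];
// must_be_array_size(table) == 7; with a pointer, the static_assert fires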

The second part wants to ensure that sizeof(a) is divisible by sizeof(*(a)).
Thus the (sizeof(a) % sizeof(*(a))) part. If it is divisible, the expression evaluates to 0. Here comes the ! part: !(0) gives true. That's why the cast is needed. Note that this does not affect the calculation of the size at all; it just adds a compile-time check.
Since everything happens at compile time, if (sizeof(a) % sizeof(*(a))) is not 0 you get a compile-time division-by-zero error.

Related

`#define` a very large number in c++ source code

Well, the question is not as silly as it sounds.
I am using C++11 <array> and want to declare an array like this:
array<int, MAX_ARR_SIZE> myArr;
The MAX_ARR_SIZE is to be defined in a header file and could be very large, i.e. 10^15. Currently I am typing it like a pre-school kid
#define MAX_ARR_SIZE 1000000000000000
I can live with it if there is no alternative. I can't use pow(10, 15) here since it cannot be evaluated at compile time; array initialization will fail. I am not aware of any shorthand to type this.
Using #define for constants is more the C way than the C++ way.
You can define your constant in this way:
const size_t MAX_ARR_SIZE(1000000000000000ULL); // an integral initializer; 1e15 is a double and would not be a constant expression
In this case, using a const size_t instead of #define is preferred.
I'd like to add that, since C++14, when writing integer literals, you could add the optional single quotes as separator.
1'000'000'000'000'000
This looks much clearer.
You can define a constexpr function:
constexpr size_t MAX_ARR_SIZE()
{
    return pow(10, 15);
}
That way you can do even more complex calculations at compile time.
Then use it as array<int, MAX_ARR_SIZE()> myArr; and it will be evaluated at compile time.
Also like it was already mentioned, you probably won't be able to allocate that size on the stack.
EDIT:
I made a mistake here: since pow itself is not constexpr you can't use it. But that is solvable; for example, use ipow as discussed here: c++11 fast constexpr integer powers
here is the function quote:
#include <cstdint>   // int64_t

constexpr int64_t ipow(int64_t base, int exp, int64_t result = 1) {
    return exp < 1 ? result
                   : ipow(base * base, exp / 2, (exp % 2) ? result * base : result);
}
simply change MAX_ARR_SIZE() to:
constexpr size_t MAX_ARR_SIZE()
{
    return ipow(10, 15);
}
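As a quick compile-time sanity check of the helpers above (my addition):
static_assert(MAX_ARR_SIZE() == 1000000000000000LL,
              "ipow(10, 15) should equal 10^15");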
#define MAX_ARRAY_SIZE (1000ull * 1000 * 1000 * 1000 * 1000)
Compilers may well fold pow(10, 15) and similar expressions to a constant at compile time if you use a const instead of #define, but this is an optimization, not a guarantee: std::pow is not constexpr, so the result cannot be used where a true constant expression is required (such as a std::array size). Just make sure you pick a large enough primitive.
You can use :
#define MAX_ARR_SIZE 1e15
1e15 is very huge and probably could not be allocated anyway. Note also that 1e15 is a double literal, so it would need an integral cast before it could serve as a std::array size.

What's the purpose of dummy addition in this "number of elements" macro?

Visual C++ 10 ships with a stdlib.h that, among other things, contains this gem:
template <typename _CountofType, size_t _SizeOfArray>
char (*__countof_helper(UNALIGNED _CountofType (&_Array)[_SizeOfArray]))[_SizeOfArray];
#define _countof(_Array) (sizeof(*__countof_helper(_Array)) + 0)
which uses a clever template trick to deduce array size and prevent pointers from being passed into __countof.
What's the purpose of + 0 in the macro definition? What problem does it solve?
Quoting STL from here
I made this change; I don't usually hack the CRT, but this one was
trivial. The + 0 silences a spurious "warning C6260: sizeof * sizeof
is usually wrong. Did you intend to use a character count or a byte
count?" from /analyze when someone writes _countof(arr) * sizeof(T).
What's the purpose of + 0 in the macro definition? What problem does
it solve?
I don't feel it solves any problem. It might be used to silence some warning as mentioned in another answer.
On a more important note, the following is another way of finding the array size at compile time (personally, I find it more readable):
template<unsigned int SIZE>
struct __Array { char a[SIZE]; };

template<typename T, unsigned int SIZE>
__Array<SIZE> __countof_helper(const T (&)[SIZE]);

#define _countof(_Array) (sizeof(__countof_helper(_Array)))
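A usage sketch for this alternative (values is a hypothetical array):
int values[5];
static_assert(_countof(values) == 5, "element count deduced at compile time");

// int *p = values;
// _countof(p);   // error: __countof_helper only accepts real arrays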
[P.S.: Consider this as a comment]

ARRAYSIZE C++ macro: how does it work?

OK, I'm not entirely a newbie, but I cannot say I understand the following macro. The most confusing part is the division with value cast to size_t: what on earth does that accomplish? Especially, since I see a negation operator, which, as far as I know, might result in a zero value. Does not this mean that it can lead to a division-by-zero error? (By the way, the macro is correct and works beautifully.)
#define ARRAYSIZE(a) \
    ((sizeof(a) / sizeof(*(a))) / \
     static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))
The first part (sizeof(a) / sizeof(*(a))) is fairly straightforward; it's dividing the size of the entire array (assuming you pass the macro an object of array type, and not a pointer), by the size of the first element. This gives the number of elements in the array.
The second part is not so straightforward. I think the potential division-by-zero is intentional; it will lead to a compile-time error if, for whatever reason, the size of the array is not an integer multiple of one of its elements. In other words, it's some kind of compile-time sanity check.
However, I can't see under what circumstances this could occur... As people have suggested in comments below, it will catch some misuse (like using ARRAYSIZE() on a pointer). It won't catch all errors like this, though.
I wrote this version of this macro. Consider the older version:
#include <iostream>
#include <sys/stat.h>

#define ARRAYSIZE(a) (sizeof(a) / sizeof(*(a)))

void foo(struct stat stats[32]);   // the parameter is really a pointer

int main(int argc, char *argv[]) {
    struct stat stats[32];
    std::cout << "sizeof stats = " << (sizeof stats) << "\n";
    std::cout << "sizeof *stats = " << (sizeof *stats) << "\n";
    std::cout << "ARRAYSIZE=" << ARRAYSIZE(stats) << "\n";
    foo(stats);
}

void foo(struct stat stats[32]) {
    std::cout << "sizeof stats = " << (sizeof stats) << "\n";
    std::cout << "sizeof *stats = " << (sizeof *stats) << "\n";
    std::cout << "ARRAYSIZE=" << ARRAYSIZE(stats) << "\n";
}
On a 64-bit machine, this code produces this output:
sizeof stats = 4608
sizeof *stats = 144
ARRAYSIZE=32
sizeof stats = 8
sizeof *stats = 144
ARRAYSIZE=0
What's going on? How did the ARRAYSIZE go from 32 to zero? Well, the problem is the function parameter is actually a pointer, even though it looks like an array. So inside of foo, "sizeof(stats)" is 8 bytes, and "sizeof(*stats)" is still 144.
With the new macro:
#define ARRAYSIZE(a) \
    ((sizeof(a) / sizeof(*(a))) / \
     static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))
When sizeof(a) is not a multiple of sizeof(*(a)), the % is non-zero, which the ! turns into false, and the static_cast then yields zero, causing a compile-time division by zero. So, to the extent possible in a macro, this weird division catches the problem at compile time.
PS: in C++17, just use std::size, see http://en.cppreference.com/w/cpp/iterator/size
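A minimal C++17 sketch of the std::size approach (my illustration):
#include <iterator>   // std::size

int raw[32];
static_assert(std::size(raw) == 32);   // a compile-time constant; pointers are rejected outright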
The division at the end seems to be an attempt at detecting a non-array argument (e.g. pointer).
It fails to detect that for, for example, char*, but would work for T* where sizeof(T) is greater than the size of a pointer.
In C++, one usually prefers the following function template:
typedef ptrdiff_t Size;
template< class Type, Size n >
Size countOf( Type (&)[n] ) { return n; }
This function template can't be instantiated with pointer argument, only array. In C++11 it can alternatively be expressed in terms of std::begin and std::end, which automagically lets it work also for standard containers with random access iterators.
Limitations: doesn't work for array of local type in C++03, and doesn't yield compile time size.
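The std::begin/std::end formulation mentioned above could look like this (a sketch assuming C++11 and random-access iterators; it overloads rather than replaces the array version):
#include <cstddef>
#include <iterator>

typedef ptrdiff_t Size;

template< class Container >
Size countOf( Container const& c )
{
    return std::end( c ) - std::begin( c );
}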
For compile time size you can instead do like
template< Size n > struct Sizer { char elems[n]; };

template< class Type, Size n >
Sizer<n> countOf_( Type (&)[n] );

#define COUNT_OF( a ) sizeof( countOf_( a ).elems )
Disclaimer: all code untouched by compiler's hands.
But in general, just use the first function template, countOf.
Cheers & hth.
Suppose we have
T arr[42];
ARRAYSIZE(arr) will expand to (roughly)
sizeof (arr) / sizeof(*arr) / !(sizeof(arr) % sizeof(*arr))
which in this case gives 42 / !0, i.e. 42 / 1 = 42.
If for some reason the size of the array is not divisible by the size of its elements, a division by zero will occur. When can that happen? For example, when you pass a dynamically allocated array (a pointer) instead of a static one!
It does lead to a division-by-zero error (intentionally). The way that this macro works is it divides the size of the array in bytes by the size of a single array element in bytes. So if you have an array of int values, where an int is 4 bytes (on most 32-bit machines), an array of 4 int values would be 16 bytes.
So when you call this macro on such an array, it does sizeof(array) / sizeof(*array). And since 16 / 4 = 4, it returns that there are 4 elements in the array.
Note: *array dereferences the first element of the array and is equivalent to array[0].
The second part performs a modulo (takes the remainder of the division), and since any non-zero value is considered "true", the ! operator turns a non-zero remainder into 0, causing a division by zero (and a harmless division by 1 otherwise).
The div-by-zero may be trying to catch alignment errors of some kind. If, under some compiler settings, the size of an array element were 3 but the compiler rounded the per-element stride up to 4 for faster access, then an array of 4 entries would have size 16, and !(16 % 3) would evaluate to zero, giving a division by zero at compile time. That said, I don't know of any compiler that behaves this way, and it would likely violate the C++ requirement that sizeof of a type equal the size that type occupies in an array.
Coming late to the party here...
Google's C++ codebase has IMHO the definitive C++ implementation of the arraysize() macro, which includes several wrinkles that aren't considered here.
I cannot improve upon the source, which has clear and complete comments.

What does static_assert do, and what would you use it for?

Could you give an example where static_assert(...) ('C++11') would solve the problem in hand elegantly?
I am familiar with run-time assert(...). When should I prefer static_assert(...) over regular assert(...)?
Also, in boost there is something called BOOST_STATIC_ASSERT, is it the same as static_assert(...)?
Static assert is used to make assertions at compile time. When a static assertion fails, the program simply doesn't compile. This is useful in different situations, for example, if you implement some functionality that critically depends on unsigned int having exactly 32 bits. You can put a static assert like this
static_assert(sizeof(unsigned int) * CHAR_BIT == 32, "unsigned int must have exactly 32 bits");
in your code. On another platform, with a differently sized unsigned int type, the compilation will fail, drawing the attention of the developer to the problematic portion of the code and advising them to re-implement or re-inspect it.
For another example, you might want to pass some integral value as a void * pointer to a function (a hack, but useful at times) and you want to make sure that the integral value will fit into the pointer
int i;
static_assert(sizeof(void *) >= sizeof i, "the integral value must fit into a pointer");
foo((void *) i);
You might want to assert that the char type is signed
static_assert(CHAR_MIN < 0, "char must be signed");
or that integral division with negative values rounds towards zero
static_assert(-5 / 2 == -2, "integer division must round towards zero");
And so on.
Run-time assertions in many cases can be used instead of static assertions, but run-time assertions only work at run-time and only when control passes over the assertion. For this reason a failing run-time assertion may lay dormant, undetected for extended periods of time.
Of course, the expression in static assertion has to be a compile-time constant. It can't be a run-time value. For run-time values you have no other choice but use the ordinary assert.
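To make the contrast concrete, a minimal sketch (copy_ints is a hypothetical function; the compile-time assumption and the run-time argument check live side by side):
#include <cassert>
#include <climits>

void copy_ints(const int* src, int* dst, unsigned n)
{
    static_assert(sizeof(int) * CHAR_BIT >= 32, "code assumes at least 32-bit int");  // compile time
    assert(src != nullptr && dst != nullptr);   // run time: depends on the arguments
    for (unsigned i = 0; i < n; ++i) dst[i] = src[i];
}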
Off the top of my head...
#include "SomeLibrary.h"
static_assert(SomeLibrary::Version > 2,
"Old versions of SomeLibrary are missing the foo functionality. Cannot proceed!");
class UsingSomeLibrary {
// ...
};
Assuming that SomeLibrary::Version is declared as a static const, rather than being #defined (as one would expect in a C++ library).
Contrast with having to actually compile SomeLibrary and your code, link everything, and run the executable only then to find out that you spent 30 minutes compiling an incompatible version of SomeLibrary.
@Arak, in response to your comment: yes, you can have static_assert just sitting out wherever, from the look of it:
class Foo
{
public:
    static const int bar = 3;
};

static_assert(Foo::bar > 4, "Foo::bar is too small :(");

int main()
{
    return Foo::bar;
}
$ g++ --std=c++0x a.cpp
a.cpp:7: error: static assertion failed: "Foo::bar is too small :("
I use it to ensure my assumptions about compiler behaviour, headers, libs and even my own code are correct. For example here I verify that the struct has been correctly packed to the expected size.
#pragma pack(push, 1)
struct LogicalBlockAddress
{
    Uint32 logicalBlockNumber;       // Uint32/Uint16 are project typedefs
    Uint16 partitionReferenceNumber;
};
#pragma pack(pop)
BOOST_STATIC_ASSERT(sizeof(LogicalBlockAddress) == 6);
In a class wrapping stdio.h's fseek(), I have taken some shortcuts with enum Origin and check that those shortcuts align with the constants defined by stdio.h
uint64_t BasicFile::seek(int64_t offset, enum Origin origin)
{
    BOOST_STATIC_ASSERT(SEEK_SET == Origin::SET);
    // ...
}
You should prefer static_assert over assert when the behaviour is defined at compile time, and not at runtime, such as the examples I've given above. An example where this is not the case would include parameter and return code checking.
BOOST_STATIC_ASSERT is a pre-C++0x macro that generates illegal code if the condition is not satisfied. The intentions are the same, albeit static_assert is standardised and may provide better compiler diagnostics.
BOOST_STATIC_ASSERT is a cross platform wrapper for static_assert functionality.
Currently I am using static_assert in order to enforce "Concepts" on a class.
example:
template <typename T, typename U>
struct Type
{
    BOOST_STATIC_ASSERT((boost::is_base_of<T, Interface>::value)); // extra parentheses protect the macro from the comma
    BOOST_STATIC_ASSERT(std::numeric_limits<U>::is_integer);
    /* ... more code ... */
};
This will cause a compile time error if any of the above conditions are not met.
One use of static_assert might be to ensure that a structure (that is an interface with the outside world, such as a network or file) is exactly the size that you expect. This would catch cases where somebody adds or modifies a member from the structure without realising the consequences. The static_assert would pick it up and alert the user.
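For instance, a sketch with a hypothetical wire-format struct:
#include <cstdint>

// Hypothetical on-disk/network record; the exact layout is the contract.
struct RecordHeader {
    std::uint32_t magic;
    std::uint16_t version;
    std::uint16_t flags;
};
static_assert(sizeof(RecordHeader) == 8,
              "RecordHeader layout changed; update the readers and writers too");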
In the absence of concepts, one can use static_assert for simple and readable compile-time type checking, for example, in templates:
template <class T>
void MyFunc(T value)
{
static_assert(std::is_base_of<MyBase, T>::value,
"T must be derived from MyBase");
// ...
}
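A usage sketch, given the template above (MyBase and Derived are hypothetical types):
struct MyBase {};
struct Derived : MyBase {};

int main()
{
    MyFunc(Derived{});   // compiles: Derived is derived from MyBase
    // MyFunc(42);       // error: static assertion failed: "T must be derived from MyBase"
}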
This doesn't directly answer the original question, but it makes an interesting study of how to enforce these compile-time checks prior to C++11.
Chapter 2 (Section 2.1) of Modern C++ Design by Andrei Alexandrescu implements this idea of compile-time assertions like this:
template<int> struct CompileTimeError;
template<> struct CompileTimeError<true> {};

#define STATIC_CHECK(expr, msg) \
    { CompileTimeError<((expr) != 0)> ERROR_##msg; (void)ERROR_##msg; }
Compare the macro STATIC_CHECK() and static_assert()
STATIC_CHECK(0, COMPILATION_FAILED);
static_assert(0, "compilation failed");
To add on to all the other answers, it can also be useful when using non-type template parameters.
Consider the following example.
Let's say you want to define some kind of function whose particular functionality can be somewhat determined at compile time, such as a trivial function below, which returns a random integer in the range determined at compile time. You want to check, however, that the minimum value in the range is less than the maximum value.
Without static_assert, you could do something like this:
#include <cstdlib>    // srand, rand
#include <ctime>      // time
#include <iostream>
#include <random>
#include <stdexcept>  // std::invalid_argument

template <int min, int max>
int get_number() {
    if constexpr (min >= max) {
        throw std::invalid_argument("Min. val. must be less than max. val.\n");
    }
    srand(time(nullptr));
    static std::uniform_int_distribution<int> dist{min, max};
    std::mt19937 mt{(unsigned int) rand()};
    return dist(mt);
}
If min < max, all is fine and the if constexpr branch gets rejected at compile time. However, if min >= max, the program still compiles, but now you have a function that, when called, will throw an exception with 100% certainty. Thus, in the latter case, even though the "error" (of min being greater than or equal to max) was present at compile-time, it will only be discovered at run-time.
This is where static_assert comes in.
Since static_assert is evaluated at compile-time, if the boolean constant expression it is testing is evaluated to be false, a compile-time error will be generated, and the program will not compile.
Thus, the above function can be improved as so:
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <random>

template <int min, int max>
int get_number() {
    static_assert(min < max, "Min. value must be less than max. value.\n");
    srand(time(nullptr));
    static std::uniform_int_distribution<int> dist{min, max};
    std::mt19937 mt{(unsigned int) rand()};
    return dist(mt);
}
Now, if the function template is instantiated with a value for min that is equal to or greater than max, static_assert will evaluate its boolean constant expression as false and generate a compile-time error, alerting you to the problem immediately, without giving the opportunity for an exception at runtime.
(Note: the above method is just an example and should not be used for generating random numbers. Repeated calls in quick succession would generate the same numbers, because the seed passed to the std::mt19937 constructor through rand() is the same whenever time(nullptr) returns the same value. Also, the range of values generated by std::uniform_int_distribution is a closed interval, so the same value could be passed to its constructor for the upper and lower bounds, though there wouldn't be any point.)
The static_assert can be used to forbid the use of the delete keyword this way:
#define delete static_assert(0, "The keyword \"delete\" is forbidden.");
A developer may want to do this when using a conservative garbage collector: every class and struct overloads operator new to allocate memory on the collector's conservative heap, and the collector itself is initialized by a call at the beginning of the main function.
For example, a developer who wants to use the Boehm-Demers-Weiser conservative garbage collector will write, at the beginning of the main function:
GC_init();
And in every class and struct overload the operator new this way:
void* operator new(size_t size)
{
    return GC_malloc(size);
}
Now that operator delete is no longer needed, because the Boehm-Demers-Weiser conservative garbage collector is responsible for freeing every block of memory once it is no longer used, the developer wants to forbid the delete keyword.
One way is overloading the delete operator this way:
void operator delete(void* ptr)
{
    assert(0);
}
But this is not recommended, because the developer would only discover at run time that the delete operator was mistakenly invoked; it is better to learn this sooner, at compile time.
So the best solution to this scenario in my opinion is to use the static_assert as shown in the beginning of this answer.
Of course this can also be done with BOOST_STATIC_ASSERT, but I think static_assert is better and should generally be preferred.

C++ throwing compilation error on sizeof() comparison in preprocessor #if

I have this which does not compile with the error "fatal error C1017: invalid integer constant expression" from visual studio. How would I do this?
template <class B>
A *Create()
{
#if sizeof(B) > sizeof(A)
#error sizeof(B) > sizeof(A)!
#endif
...
}
The preprocessor does not understand sizeof() (or data types, or identifiers, or templates, or class definitions, and it would need to understand all of those things to implement sizeof).
What you're looking for is a static assertion (enforced by the compiler, which does understand all of these things). I use Boost.StaticAssert for this:
template <class B>
A *Create()
{
    BOOST_STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
Preprocessor expressions are evaluated before the compiler starts compilation. sizeof() is only evaluated by the compiler.
You can't do this with the preprocessor. Preprocessor directives cannot operate on language-level elements such as sizeof. Moreover, even if they could, it still wouldn't work, since preprocessor directives are eliminated from the code very early; they can't be expected to work as part of template code instantiated later (which is what you seem to be trying to achieve).
The proper way to go about it is to use some form of static assertion
template <class B>
A *Create()
{
STATIC_ASSERT(sizeof(B) <= sizeof(A));
...
}
There are quite a few implementations of static assertions out there. Do a search and choose one that looks best to you.
sizeof() cannot be used in a preprocessor directive.
The preprocessor runs before the compiler (at least logically it does) and has no knowledge of user-defined types (and not necessarily much knowledge about intrinsic types; the preprocessor's int size could differ from the one the compiler targets).
Anyway, to do what you want, you should use a STATIC_ASSERT(). See the following answer:
Ways to ASSERT expressions at build time in C
With a STATIC_ASSERT() you'll be able to do this:
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(A) >= sizeof(B));
    return 0;
}
This cannot be accomplished with the pre-processor. It executes in a pass prior to the compiler, therefore the sizes of B and A have not yet been computed at the time the #if is evaluated.
You could accomplish something similar using template-programming techniques. An excellent book on the subject is Modern C++ Design: Generic Programming and Design Patterns Applied, by Andrei Alexandrescu.
Here is an example from a web page which creates a template IF statement.
From that example, you could use:
IF< sizeof(B) < sizeof(A), non_existing_type, int >::RET i;
which either declares a variable of type int or of type non_existing_type. Assuming the non-existing type lives up to its name, a compiler error will result should the template IF condition evaluate as true. You can rename i to something descriptive.
Using this would be "rolling your own" static assert, of which many are already available. I suggest you use one of those after playing around with building one yourself.
If you are interested in a compile time assert that will work for both C and C++, here is one I developed:
#define CONCAT2(x, y) x ## y
#define CONCAT(x, y) CONCAT2(x, y)
#define COMPILE_ASSERT(expr, name) \
    struct CONCAT(name, __LINE__) { char assertion[(expr) ? 1 : -1]; }

#define CT_ASSERT(expr) COMPILE_ASSERT(expr, ct_assert_)
The key to how this works is that the size of the array is negative (which is illegal) when the expression is false. Wrapping the array in a structure definition means nothing is created at runtime. (The member is deliberately not given the same name as the struct, since C++ forbids that.)
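Usage sketch with the macros above:
CT_ASSERT(sizeof(long) >= sizeof(int));   // fine: the array gets size 1
// CT_ASSERT(sizeof(char) == 2);          // error: array declared with negative size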
This has already been explained, but allow me to elaborate on why the preprocessor cannot compute the size of a structure. Aside from the fact that this is too much to ask of a simple preprocessor, there are also compiler flags that affect the way the structure is laid out.
struct X {
    short a;
    long b;
};
This structure might be 6 bytes or 8 bytes long, depending on whether the compiler was told to 32-bit align the "b" field for performance reasons. There's no way the preprocessor could have that information.
Using MSVC, this code compiles for me:
const int cPointerSize = sizeof(void*);
const int cFourBytes = 4;
#if (cPointerSize == cFourBytes)
...
however this (which should work identically) does not:
#if ( sizeof(void*) == 4 )
...
(Beware, though: the preprocessor knows nothing about the const variables either; it replaces any identifier it does not recognize with 0, so the first #if actually compares 0 == 0 and is always true, rather than evaluating the constants.)
I see many people say that sizeof cannot be used in a pre-processor directive;
however, that can't be the whole story, because I regularly use the following macro:
#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))
for example:
#include <stdio.h>

#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))

int main(int argc, char *argv[])
{
    unsigned char chars[] = "hello world!";
    double dubls[] = {1, 2, 3, 4, 5};

    printf("chars num bytes: %zu, num elements: %zu.\n", sizeof(chars), STATICARRAYSIZE(chars));
    printf("dubls num bytes: %zu, num elements: %zu.\n", sizeof(dubls), STATICARRAYSIZE(dubls));
}
yields:
orion$ ./a.out
chars num bytes: 13, num elements: 13.
dubls num bytes: 40, num elements: 5.
however,
I, too, cannot get sizeof() to compile in a #if directive under gcc 4.2.1.
For example, this doesn't compile:
#if (sizeof(int) == 2)
#error uh oh
#endif
Any insight would be appreciated.