`#define` a very large number in C++ source code

Well, the question is not as silly as it sounds.
I am using C++11 <array> and want to declare an array like this:
array<int, MAX_ARR_SIZE> myArr;
The MAX_ARR_SIZE is to be defined in a header file and could be very large, e.g. 10^15. Currently I am typing it out like a pre-school kid:
#define MAX_ARR_SIZE 1000000000000000
I can live with it if there is no alternative. I can't use pow(10, 15) here, since it cannot be evaluated at compile time and the array declaration would fail. I am not aware of any shorthand for typing this.

Using #define for constants is more the C way than the C++ way.
You can define your constant in this way:
const size_t MAX_ARR_SIZE(1e15);

In this case, using a const size_t instead of #define is preferred.
I'd like to add that since C++14, when writing integer literals, you can add single quotes as digit separators:
1'000'000'000'000'000
This is much clearer.
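A minimal sketch combining the two suggestions, assuming a C++14 compiler (the name is the one from the question):
#include <cstddef>

// Digit separators make the magnitude readable at a glance.
constexpr std::size_t MAX_ARR_SIZE = 1'000'000'000'000'000;

static_assert(MAX_ARR_SIZE == 1e15, "sanity check");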

You can define a constexpr function:
constexpr size_t MAX_ARR_SIZE()
{
    return pow(10, 15);
}
That way you can do even more complex calculations at compile time.
Then use it as array<int, MAX_ARR_SIZE()> myArr; the call is evaluated at compile time.
Also, as was already mentioned, you probably won't be able to allocate an array of that size on the stack.
EDIT:
I was at fault here: since pow itself is not constexpr, you can't use it. But that is solvable; for example, use ipow as discussed here: c++11 fast constexpr integer powers
Here is the quoted function:
constexpr int64_t ipow(int64_t base, int exp, int64_t result = 1) {
    return exp < 1 ? result : ipow(base*base, exp/2, (exp % 2) ? result*base : result);
}
Simply change MAX_ARR_SIZE() to:
constexpr size_t MAX_ARR_SIZE()
{
    return ipow(10, 15);
}
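Putting it together, a minimal self-contained sketch; the exponent here is deliberately small (10^3 rather than 10^15) so the array can actually be instantiated:
#include <array>
#include <cstdint>
#include <cstddef>

constexpr int64_t ipow(int64_t base, int exp, int64_t result = 1) {
    return exp < 1 ? result : ipow(base*base, exp/2, (exp % 2) ? result*base : result);
}

constexpr size_t MAX_ARR_SIZE() { return ipow(10, 3); }

std::array<int, MAX_ARR_SIZE()> myArr; // the bound is evaluated at compile time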

#define MAX_ARRAY_SIZE (1000ull * 1000 * 1000 * 1000 * 1000)

You actually can evaluate pow(10, 15) and similar expressions at compile time in C++11 if you use a const instead of #define. Just make sure you pick a primitive type large enough to hold the value.

You can use:
#define MAX_ARR_SIZE 1e15
Note, though, that 1e15 is a double literal, so it needs a conversion before it can be used as an array bound. In any case, 1e15 elements is huge and probably could not be allocated.
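If you do go this route, the literal needs an explicit conversion before it can serve as an integral constant; a minimal sketch using constexpr instead of the macro:
#include <cstddef>

// 1e15 is a double literal; the explicit cast yields an integral constant.
constexpr std::size_t MAX_ARR_SIZE = static_cast<std::size_t>(1e15);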

Related

c++11: short notation for pow?

I need to paste a huge formula into my C++11 code.
The formula contains numerous terms like a double raised to an integer power, and it is tedious to write
pow(sin(B), 6) + ... + pow(sin(C), 4) + ...
20 times.
The best thing would be to overload operator^ for double and int, but as I understand it that is not possible in C++11. Besides, in a formula like
z0^2 * (something)
the precedence of operators would make it parse as
z0 ^ (2 * (something))
which is not what I want.
So is it possible, using some trick, to approximate mathematical notation for x to the power of y in C++ code?
Possible tools are C++11 and Boost.
Update:
Regarding tool support for such mathematical notation instead of plain C++: the ideal solution as I see it would look like
const double result = (latex_mode
(sin(B_0))^6(a + b)
latex_mode_end)
(B_0, a, b);
where latex_mode supports a small, unambiguous subset of LaTeX.
1) All programmers who touch this code have a mathematical background, so they read LaTeX without problems.
2) Formulas could be copied and pasted from articles without any modification, which would reduce typos.
No, you can't do it. At least, you shouldn't. But it can be simplified another way: if your formula can be described as a sum of power terms (something like a_1^b_1 + a_2^b_2 + ...), then you can make a table of the pairs (a, b) and write code which does the computation itself. For example:
vector<pair<unsigned, unsigned>> table = {{1, 2}, {2, 3}, {3, 4}};
unsigned sum = 0;
for (const auto& x : table)
    sum += pow(get<0>(x), get<1>(x));
Inspired by a comment from @5gon12eder, I wrote a function:
template <typename Input, typename Output = unsigned>
Output powers(std::initializer_list<Input> args) {
    Output result = 0;
    for (const auto x : args)
        result += pow(std::get<0>(x), std::get<1>(x));
    return result;
}
You need two additional standard headers (plus <cmath> for pow):
#include <tuple>
#include <initializer_list>
Example of use:
std::pair<unsigned, unsigned> a{1, 2}, b{2, 3};
std::cout << powers({a, b, {3, 4}, {4, 5}}) << '\n';
That prints 1114, which is correct (1 + 8 + 81 + 1024).
Referring to the edited part of the question, I can suggest writing a function that receives a string and parses it, but that will be much slower than the method above.
Finally, you can always write to the authors of your compiler.
Edit:
With C++14, new possibilities appeared: you can write constexpr functions with loops, local variables, etc., so it is easier to create a compile-time parser. I still recommend the solution from the original part of this post, since a parser will be a little messy, but it can do what you want at compile time.
String-to-int example:
#include <cstddef>
#include <cstdint>
#include <iostream>

template<size_t N>
constexpr uint32_t to_int(const char (&input)[N]) { // pass the literal by reference
    uint32_t result = 0;
    for (uint32_t i = 0; i < N && input[i] != '\0'; ++i) {
        result *= 10;
        result += input[i] - '0';
    }
    return result;
}

constexpr uint32_t value = to_int("123427");

enum { __force_compile_time_computing = value }; // forces 'value' to be a constant expression

int main() {
    std::cout << value << std::endl;
}
Prints:
~ $ g++ -std=c++14 -Wall -Wextra -pedantic-errors example.cpp
~ $ ./a.out
123427
~ $
Obviously it will be harder to write a full parser. Probably the best way is to create a constexpr class Operation with two constructors, Operation(operation, operation) and Operation(value), and to build the calculation tree at compile time (if your string contains variables).
If you don't want to do all this work, and you can accept different program/language semantics, there is an easy run-time solution: create a new thread which calls R/Mathematica/{something else} and send the input string to it. After the calculation, send the value back to the main program.
If you want a hint, std::future will probably be convenient; see the sketch below.
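A minimal sketch of that run-time idea, using std::async and a hypothetical evaluate_external() helper. The external tool name is purely illustrative, and popen() is POSIX rather than standard C++:
#include <future>
#include <string>
#include <cstdio>

// Hypothetical helper: pipe the formula to some external evaluator and read back one number.
double evaluate_external(const std::string& formula) {
    std::string cmd = "echo '" + formula + "' | some_math_tool"; // illustrative command
    double value = 0.0;
    if (FILE* pipe = popen(cmd.c_str(), "r")) {
        std::fscanf(pipe, "%lf", &value);
        pclose(pipe);
    }
    return value;
}

int main() {
    auto result = std::async(std::launch::async, evaluate_external, std::string("sin(0.5)^6"));
    // ... do other work here ...
    std::printf("%f\n", result.get()); // get() blocks until the evaluation finishes
}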
Just for the record, this is what Bjarne Stroustrup says about it:
Can I define my own operators?
Sorry, no. The possibility has been considered several times, but each time I/we decided that the likely problems outweighed the likely benefits.
It's not a language-technical problem. Even when I first considered it in 1983, I knew how it could be implemented. However, my experience has been that when we go beyond the most trivial examples people seem to have subtly different opinions of “the obvious” meaning of uses of an operator. A classical example is a**b**c. Assume that ** has been made to mean exponentiation. Now should a**b**c mean (a**b)**c or a**(b**c)? I thought the answer was obvious and my friends agreed – and then we found that we didn't agree on which resolution was the obvious one. My conjecture is that such problems would lead to subtle bugs.
Interestingly, it is exactly the operator you are missing. Well, this is not much of a coincidence, since many people miss a built-in exponentiation operator, especially those who know Fortran or Python (two languages otherwise rarely mentioned together).

Is there another way to code #define SIZE 50 in C++?

I have a program and it needs the constant 50. Is this the only way of doing it?
#define SIZE 50
Using #defines for constants is decidedly "old skool" and has a number of disadvantages. Better to use const, e.g.
const size_t SIZE = 50;
Note that this applies equally to C++, C and Objective-C.
With a const int?
const int size = 50;
This has a different meaning than #define, though, and is much safer. #define just does a pre-processing cut and paste, while a declared constant keeps type checking. You can't use a const for everything that #define will do, but it works for general constants and the lengths of static arrays.
C++11 introduces constexpr:
constexpr int size = 50;
constexpr expands the capabilities of constants to include more compile-time computation; a small example follows.
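For example, a small sketch of the difference (C++11):
constexpr int square(int x) { return x * x; }

constexpr int size = 50;
int buffer[square(size)]; // array bound computed at compile time
static_assert(square(size) == 2500, "compile-time arithmetic");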

What's the purpose of dummy addition in this "number of elements" macro?

Visual C++ 10 ships with a stdlib.h that, among other things, contains this gem:
template <typename _CountofType, size_t _SizeOfArray>
char (*__countof_helper(UNALIGNED _CountofType (&_Array)[_SizeOfArray]))[_SizeOfArray];
#define _countof(_Array) (sizeof(*__countof_helper(_Array)) + 0)
which uses a clever template trick to deduce the array size and prevent pointers from being passed to _countof.
What's the purpose of the + 0 in the macro definition? What problem does it solve?
Quoting STL from here:
I made this change; I don't usually hack the CRT, but this one was trivial. The + 0 silences a spurious "warning C6260: sizeof * sizeof is usually wrong. Did you intend to use a character count or a byte count?" from /analyze when someone writes _countof(arr) * sizeof(T).
What's the purpose of + 0 in the macro definition? What problem does
it solve?
I don't feel it solves any problem; it might just be there to silence some warning, as mentioned in the other answer.
On a more important note, the following is another way of finding the array size at compile time (personally I find it more readable):
template<unsigned int SIZE>
struct __Array { char a[SIZE]; };

template<typename T, unsigned int SIZE>
__Array<SIZE> __countof_helper(const T (&)[SIZE]);

#define _countof(_Array) (sizeof(__countof_helper(_Array)))
[P.S.: Consider this as a comment]
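For reference, a quick sketch of how this version behaves in use (static_assert assumes C++11):
int arr[10];
int* p = arr;

static_assert(_countof(arr) == 10, "element count deduced at compile time");
// _countof(p); // error: __countof_helper only accepts references to arrays, not pointers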

Macro definition ARRAY_SIZE

I encountered the following macro definition when reading globals.h in the Google V8 project.
// The expression ARRAY_SIZE(a) is a compile-time constant of type
// size_t which represents the number of elements of the given
// array. You should only use ARRAY_SIZE on statically allocated
// arrays.
#define ARRAY_SIZE(a) \
    ((sizeof(a) / sizeof(*(a))) / \
     static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))
My question is about the latter part: static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))). One thing in my mind is the following: since the latter part always evaluates to 1, which is of type size_t, the whole expression is promoted to size_t.
If this assumption is correct, then there comes another question: since the return type of the sizeof operator is size_t, why is such a promotion necessary? What's the benefit of defining a macro in this way?
As explained, this is a feeble (*) attempt to secure the macro against use with pointers (rather than true arrays) where it would not correctly assess the size of the array. This of course stems from the fact that macros are pure text-based manipulations and have no notion of AST.
Since the question is also tagged C++, I would like to point out that C++ offers a type-safe alternative: templates.
#ifdef __cplusplus
template <size_t N> struct ArraySizeHelper { char _[N]; };
template <typename T, size_t N>
ArraySizeHelper<N> makeArraySizeHelper(T(&)[N]);
# define ARRAY_SIZE(a) sizeof(makeArraySizeHelper(a))
#else
# // C definition as shown in Google's code
#endif
Alternatively, we will soon be able to use constexpr:
template <typename T, size_t N>
constexpr size_t size(T (&)[N]) { return N; }
However my favorite compiler (Clang) still does not implement them :x
In both cases, because the function does not accept pointer parameters, you get a compile-time error if the type is not right.
(*) feeble in that it does not work for small objects where the size of the objects is a divisor of the size of a pointer.
Just a demonstration that it is a compile-time value:
template <size_t N> void print() { std::cout << N << "\n"; }
int main() {
    int a[5];
    print<ARRAY_SIZE(a)>();
}
See it in action on IDEONE.
the latter part always evaluates to 1, which is of type size_t,
Ideally the latter part will evaluate to bool (i.e. true/false), and using static_cast<>, it is converted to size_t.
why is such a promotion necessary? What's the benefit of defining a
macro in this way?
I don't know if this is the ideal way to define such a macro. However, one hint I find is in the comment: // You should only use ARRAY_SIZE on statically allocated arrays.
Suppose someone passes a pointer; then it fails at compile time for struct element types whose size is greater than the pointer size:
struct S { int i, j, k, l; };
S *p = new S[10];
ARRAY_SIZE(p); // compile-time failure!
[Note: This technique may not show any error for int* or char*, as said.]
If sizeof(a) / sizeof(*a) has some remainder (i.e. a is not an integral number of *a), then !(sizeof(a) % sizeof(*(a))) evaluates to 0 and the compiler gives you a division-by-zero error at compile time.
I can only assume the author of the macro was burned in the past by something that didn't pass that test.
In the Linux kernel, the macro is defined as (GCC specific):
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
where __must_be_array() is
/* &a[0] degrades to a pointer: a different type from an array */
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
and __same_type() is
#define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
The second part wants to ensure that sizeof(a) is divisible by sizeof(*a); thus the (sizeof(a) % sizeof(*(a))) part. If it is divisible, the expression evaluates to 0. Here comes the ! part: !(0) gives true. That's why the cast is needed. This does not actually affect the calculation of the size; it just adds a compile-time check.
Since it happens at compile time, in the case that (sizeof(a) % sizeof(*(a))) is not 0, you'll get a compile-time error for division by zero.
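To make the compile-time check concrete, here is a sketch of both cases; the 8-byte pointer size is an assumption about a typical 64-bit target:
struct Big { char data[12]; }; // sizeof(Big) == 12

Big arr[4];
Big* p = arr;

size_t n = ARRAY_SIZE(arr); // fine: 48 / 12 == 4, and 48 % 12 == 0
// size_t m = ARRAY_SIZE(p); // 8 % 12 != 0, so the divisor becomes 0: compile-time division-by-zero error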

C++ throwing compilation error on sizeof() comparison in preprocessor #if

I have this code, which does not compile; Visual Studio gives "fatal error C1017: invalid integer constant expression". How would I do this?
template <class B>
A *Create()
{
#if sizeof(B) > sizeof(A)
#error sizeof(B) > sizeof(A)!
#endif
    ...
}
The preprocessor does not understand sizeof() (or data types, or identifiers, or templates, or class definitions, and it would need to understand all of those things to implement sizeof).
What you're looking for is a static assertion (enforced by the compiler, which does understand all of these things). I use Boost.StaticAssert for this:
template <class B>
A *Create()
{
    BOOST_STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
Preprocessor expressions are evaluated before the compiler starts compilation. sizeof() is only evaluated by the compiler.
You can't do this with the preprocessor. Preprocessor directives cannot operate on such language-level elements as sizeof. Moreover, even if they could, it still wouldn't work, since preprocessor directives are eliminated from the code very early; they can't be expected to work as part of template code instantiated later (which is what you seem to be trying to achieve).
The proper way to go about it is to use some form of static assertion
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
There are quite a few implementations of static assertions out there. Do a search and choose one that looks best to you.
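For completeness: if C++11 is available, the language has this built in as static_assert, so no library or hand-rolled macro is needed. A minimal sketch with illustrative types:
struct A { char data[8]; };
struct B { char data[4]; };

template <class T>
A *Create()
{
    static_assert(sizeof(T) <= sizeof(A), "T must not be larger than A");
    return nullptr;
}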
sizeof() cannot be used in a preprocessor directive.
The preprocessor runs before the compiler (at least logically it does) and has no knowledge of user-defined types (and not necessarily much knowledge about intrinsic types; the preprocessor's int size could be different from the compiler target's).
Anyway, to do what you want, you should use a STATIC_ASSERT(). See the following answer:
Ways to ASSERT expressions at build time in C
With a STATIC_ASSERT() you'll be able to do this:
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(A) >= sizeof(B));
    return 0;
}
This cannot be accomplished with the pre-processor. The pre-processor executes in a pass prior to the compiler; therefore the sizes of A and B have not yet been computed at the time #if is evaluated.
You could accomplish something similar using template-programming techniques. An excellent book on the subject is Modern C++ Design: Generic Programming and Design Patterns Applied, by Andrei Alexandrescu.
Here is an example from a web page which creates a template IF statement.
Using that, you could write:
IF< (sizeof(B) > sizeof(A)), non_existing_type, int >::RET i;
which declares a variable either of type int or of type non_existing_type. Assuming the non-existing type lives up to its name, a compiler error results when the template IF condition evaluates as true. You can rename i to something descriptive.
Using this would be "rolling your own" static assert, of which many are already available. I suggest you use one of those after playing around with building one yourself.
If you are interested in a compile-time assert that will work for both C and C++, here is one I developed:
#define CONCAT2(x, y) x ## y
#define CONCAT(x, y) CONCAT2(x, y)
#define COMPILE_ASSERT(expr, name) \
    struct CONCAT(name, __LINE__) { char a[ (expr) ? 1 : -1 ]; }
#define CT_ASSERT(expr) COMPILE_ASSERT(expr, ct_assert_)
The key to how this works is that the size of the array is negative (which is illegal) when the expression is false. By further wrapping that in a structure definition, this does not create anything at runtime.
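Usage is then simply:
CT_ASSERT(sizeof(int) >= sizeof(char)); // OK: expands to a struct containing an array of size 1
// CT_ASSERT(sizeof(char) == 2); // would fail to compile: array of size -1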
This has already been explained, but allow me to elaborate on why the preprocessor cannot compute the size of a structure. Aside from the fact that this is too much to ask of a simple preprocessor, there are also compiler flags that affect the way a structure is laid out.
struct X {
    short a;
    long b;
};
This structure might be 6 bytes or 8 bytes long, depending on whether the compiler was told to 32-bit align the b field for performance reasons. There's no way the preprocessor could have that information.
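For instance, here is a sketch of how a packing directive changes the result; #pragma pack is widely supported (MSVC, GCC, Clang) but not standard, and the exact sizes depend on the target:
#include <cstdio>

struct Padded { short a; long b; }; // default alignment: padding inserted after 'a'

#pragma pack(push, 1)
struct Packed { short a; long b; }; // packed: no padding
#pragma pack(pop)

int main() {
    std::printf("%zu %zu\n", sizeof(Padded), sizeof(Packed)); // e.g. 16 and 10 on 64-bit Linux
}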
Using MSVC, this code compiles for me:
const int cPointerSize = sizeof(void*);
const int cFourBytes = 4;
#if (cPointerSize == cFourBytes)
...
however this (which should work identically) does not:
#if ( sizeof(void*) == 4 )
...
I see many people say that sizeof cannot be used in a pre-processor directive;
however, that can't be the whole story, because I regularly use the following macro:
#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))
for example:
#include <stdio.h>

#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))

int main(int argc, char *argv[])
{
    unsigned char chars[] = "hello world!";
    double dubls[] = {1, 2, 3, 4, 5};

    printf("chars num bytes: %zu, num elements: %zu.\n", sizeof(chars), STATICARRAYSIZE(chars));
    printf("dubls num bytes: %zu, num elements: %zu.\n", sizeof(dubls), STATICARRAYSIZE(dubls));
}
yields:
orion$ ./a.out
chars num bytes: 13, num elements: 13.
dubls num bytes: 40, num elements: 5.
However, I, too, cannot get sizeof() to compile in a #if statement under gcc 4.2.1. E.g., this doesn't compile:
#if (sizeof(int) == 2)
#error uh oh
#endif
Any insight would be appreciated.