I've just encountered code in the project dcraw that's causing the Visual Studio 2012 compiler to fail. It's of the following form:
void CLASS cubic_spline (const int *x_, const int *y_, const int len)
{
float A[2*len][2*len], b[2*len], c[2*len], d[2*len];
...
The problem is the creation of these variable length arrays on the stack. I've never really seen code like this -- is it possible to compile this in the Visual Studio compiler?
There are working compiler extensions that allow this (Clang has one, and so does GCC), but it isn't standard yet.
In C++11 you can also use constexpr if the array size is a compile-time constant. And finally, a proposal has been submitted to standardize them.
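For instance, a minimal sketch of the C++11 constexpr route, assuming the length really is fixed (the value 9 here is only illustrative):
#include <cstddef>

constexpr std::size_t len = 9;   // illustrative fixed length

void cubic_spline_fixed()
{
    // 2*len is a constant expression, so these are ordinary arrays, not VLAs
    float A[2 * len][2 * len], b[2 * len], c[2 * len], d[2 * len];
    // ...
    (void)A; (void)b; (void)c; (void)d;
}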
If x_ and y_ are meant to be arrays, you can just use std::array as follows:
template<std::size_t size>
void CLASS cubic_spline (std::array<int, size> const& x, std::array<int, size> const& y)
{
using float_array = std::array<float, 2 * size>;
std::array<float_array, 2 * size> A;
float_array b, c, d;
// ...
}
This way you can ensure, at compile time, that the passed arrays' dimensions are equal.
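A hypothetical call site for that template might look like this (the dcraw CLASS macro is assumed to expand to nothing here):
#include <array>

void caller()
{
    std::array<int, 9> cx{}, cf{};
    cubic_spline(cx, cf);   // size deduced as 9; arrays of different sizes won't compile together
}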
Otherwise you can clean that up with std::vector:
void CLASS cubic_spline (std::vector<int> const& x, std::vector<int> const& y)
{
std::vector<std::vector<float>> A(2 * x.size());
std::vector<float> b, c, d;
// ...
}
As mentioned, VLAs (variable-length arrays) are a C feature, and more to the point a C99 feature, which Visual Studio does not support. On Linux, Clang and GCC both support C99 (and C11, I believe) and allow this syntax in C++ as an extension.
In C++, you can easily transform the code by switching to std::vector<float> for all simple arrays. Only A will require a bit more work:
you can either use a std::vector< std::vector<float> >, but then you lose contiguity and locality
or you can use a flattened version, std::vector<float> A(2*len * 2*len); but then you lose the A[i][j] access syntax, which has to be rewritten as A[i*2*len + j] instead (see the sketch after this list)
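A minimal sketch of the flattened variant, with a small lambda hiding the index arithmetic (names are illustrative):
#include <cstddef>
#include <vector>

void cubic_spline_flat(std::vector<int> const& x, std::vector<int> const& y)
{
    const std::size_t n = 2 * x.size();   // 2*len
    std::vector<float> A(n * n);          // contiguous 2*len x 2*len matrix
    std::vector<float> b(n), c(n), d(n);

    // What used to be A[i][j] becomes at(i, j):
    auto at = [n, &A](std::size_t i, std::size_t j) -> float& {
        return A[i * n + j];
    };
    at(0, 0) = 1.0f;                      // example access
    // ...
}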
In any case, you will need to update this code to make it work on Visual Studio.
EDIT: per your comment:
The function is called twice in the code, once as cubic_spline(cx, cf, 9); and once as cubic_spline(cx, cf, 18);. In the first call, cx and cf are int[9], and in the second they're int[18].
In this case you can actually make the function a template:
template <size_t len>
void CLASS cubic_spline(int const (&x)[len], int const (&y)[len]) {
float A[2*len][2*len], b[2*len], c[2*len], d[2*len];
}
Note that I removed the last parameter, because it is no longer necessary. The type of x and y is int const (&)[len] which is a reference to an array of int const of length len.
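Hypothetical call sites matching the two calls described in the comment would then look like this (the array names are illustrative):
void caller()
{
    int cx9[9] = {}, cf9[9] = {};
    int cx18[18] = {}, cf18[18] = {};

    cubic_spline(cx9, cf9);     // len deduced as 9
    cubic_spline(cx18, cf18);   // len deduced as 18; no third argument needed
}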
This is a GCC-specific compiler extension, see Arrays of Variable Length. I am not aware of any compiler options to make them work out of the box in VC. If this is a very isolated problem, try preprocessor #ifdefs to serve different code to VC.
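A rough sketch of that approach, keyed on the _MSC_VER macro that Visual C++ defines (only b, c and d are shown; the 2D matrix A would need the same treatment):
#include <vector>

void cubic_spline_portable(const int *x_, const int *y_, const int len)
{
#ifdef _MSC_VER
    // Visual C++: no VLAs, so fall back to heap-backed storage
    std::vector<float> b(2 * len), c(2 * len), d(2 * len);
#else
    // GCC/Clang: keep the original stack-based VLAs (compiler extension)
    float b[2 * len], c[2 * len], d[2 * len];
#endif
    // ... rest of the routine, written so it works with either representation
    (void)x_; (void)y_;
}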
Related
I have a function which executes a bunch of tests. Whenever a new test is created, the function gets one or two more lines. And - the result is pushed back into an array. So it goes something like this (simplified):
void foo(int *results) {
auto i { 0 };
results[i++] = test_1(some, args, here);
results[i++] = test_1(some, other_args, here);
results[i++] = test_2(some, args, here);
results[i++] = test_3(some, args, here);
// etc. etc.
}
void bar() {
auto results = new int[/* magic */];
foo(results);
}
I want to use the number of statements in this function to allocate space for the results (the line in bar()). I cannot use a dynamically-reallocated structure like an std::vector or a list etc. - since I am precluded from allocating any memory due to hardware restrictions.
Now, I could just manually count the lines - and this would work. But then whenever I add another test I would have to remember to update the magical constant.
Is there some way to do the counting with the result usable for the "magic" expression?
Note: Since I'm a scrupulous man with no dignity, I am willing to stoop to the use of macros.
Speaking of macro hackery:
#include <iostream>
#define ADD_TEST(X) do { results[i++] = (X); (void)__COUNTER__; } while (0)
const int foo_start = __COUNTER__;
void foo(int *results) {
int i = 0;
ADD_TEST(100);
ADD_TEST(200);
ADD_TEST(300);
}
const int foo_end = __COUNTER__;
int main() {
int results[foo_end - foo_start - 1];
foo(results);
for (int i : results) {
std::cout << i << '\n';
}
}
It's slightly awful and __COUNTER__ is a non-standard extension supported by GCC and other compilers, but hey, it works.
The advantage is that it doesn't use any fancy C++ features, so in principle it should be compatible with older compilers and even C.
As you haven't specified a language version, but did tag the question with constexpr, I've solved this using C++17, without any dirty macros. Instead, I'm relying on CTAD (class template argument deduction).
First of all, I've assumed your functions are constexpr. That way, everything can be done at compile time. (In the resulting code, you don't even see memory being used for the array.)
constexpr int test_1(int a, int b, int c)
{
return a + b + c;
}
constexpr int test_2(int a, int b, int c)
{
return a * b * c;
}
This isn't strictly needed; however, it can move calculations to compile time. It also allows propagating constexpr up to the final variable. That way, you can guarantee that none of the calculations happen at run time.
static constexpr auto myArr = createFilledArray();
However, the most important part is CTAD, a new C++17 feature that allows deducing the template arguments of a class from the values passed to its constructor.
Instead of first creating an array and then filling it, I create the array directly with all the different values that you pass to it. Since you haven't provided any arguments in your example, I assume they are known at compile time, which is again required for the constexpr waterfall. More importantly, I assume the number of elements is known at compile time.
By passing all the values to the constructor of std::array, there is no need to specify its template arguments (note also the auto return type). The type gets deduced as std::array<int, 3> in this example.
#include <array>

constexpr auto createFilledArray(){
std::array a
{
test_1(1, 2, 3),
test_1(4, 5, 6),
test_2(7, 8, 9),
};
return a;
}
int main(int, char**)
{
return myArr.size(); // Returns 3
}
Code at compiler explorer
As far as I'm aware, there is a proposal for C++20 that is intended to make std::vector constexpr. However, none of the compilers I've tested at Compiler Explorer support this yet. It would most likely allow you to write code based on std::vector and use it at compile time; in other words, the allocated memory that represents your data would become part of your executable.
A quick attempt at what your code could look like can be found at Compiler Explorer (however, it doesn't compile at this point).
I am building an R package that contains a c++ program. The checking runs fine, but I am getting this message
: warning: ISO C++ forbids variable length array ‘s1’ [-Wvla]
The CRAN maintainer says that the error is in the part of the code shown below. I am thinking that the argument "nrows" is redundant, but I wonder if there is another way to solve the problem.
double entCI(double input[], int cMatrix[], double partition,
int nrows, int begin, int end)
{
double s1[nrows], s2[nrows], entropy;
int cs1[nrows], cs2[nrows];
int s1Count=0, s2Count=0, sCount=0;
while(input[begin]<partition)
{
cs1[s1Count]=cMatrix[begin];
s1[s1Count++]=input[begin++];
}
while(begin<end)
{
cs2[s2Count]=cMatrix[begin];
s2[s2Count++]=input[begin++];
}
sCount=s1Count+s2Count;
entropy=(s1Count/double(sCount))*ent(s1,cs1,s1Count)
+(s2Count/double(sCount))*ent(s2,cs2,s2Count);
return entropy;
}
Indeed, the error is on these lines:
double s1[nrows], s2[nrows], entropy;
int cs1[nrows], cs2[nrows];
They declare arrays whose sizes depend on the nrows argument. The value of nrows is only known at run time, so these would have to be variable-length arrays. Such arrays are not allowed by the C++ standard, as the warning tells you.
I am thinking that the argument "nrows" is redundant
I don't see how that is. It's used in the function.
but I wonder if there is another way to solve the problem
There are ways to solve the problem. If the size of the array needs to be determined at runtime, it must be allocated dynamically. The simplest and safest way to do that is to use std::vector.
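For instance, a minimal sketch of the declarations rewritten with std::vector (the loop logic can stay as it is, since std::vector supports the same indexing; if ent takes raw pointers, pass s1.data() and so on):
#include <vector>

double entCI(double input[], int cMatrix[], double partition,
             int nrows, int begin, int end)
{
    std::vector<double> s1(nrows), s2(nrows);
    std::vector<int> cs1(nrows), cs2(nrows);
    int s1Count = 0, s2Count = 0, sCount = 0;
    // ... same while loops and entropy computation as before ...
    return 0.0;   // placeholder; the original entropy calculation goes here
}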
Generally, you should use dynamic memory allocation to create an array whose size comes from a variable:
double* s1 = new double[nrows];
Then remember to release that array with delete[] when you are done.
Another solution is to use std::vector instead of a plain array.
Variable-length arrays have long been a GCC feature. They were accepted into C99, but not into C++11 (nor into any later C++ version I know of).
An easy and clean solution would be to compile that function as C, because it does not use any specific C++ feature, simply array manipulation. In fact, this function is plain C that happens to be accepted by g++, but it is not correct C++, hence the warning.
My advice is:
put the function in a .c file and compile it in C99 mode
declare it as extern "C" double entCI(double input[], int cMatrix[], double partition, int nrows, int begin, int end) in the other C++ modules, or better, write a header file declaring it as:
#ifdef __cplusplus
extern "C" {
#endif
double entCI(double input[], int cMatrix[], double partition,
    int nrows, int begin, int end);
#ifdef __cplusplus
}
#endif
Why does the code
void fcn(int *twoDArrayPtr, const int dim1, const int dim2) {
int (*array)[dim2] = reinterpret_cast<int (*)[dim2]>(twoDArrayPtr);
}
int main() {
return 0;
}
generate the compiler error
error: cannot initialize a variable of type 'int (*)[dim2]' with
an rvalue of type 'int (*)[dim2]'
The types are the same, so I'd think the assignment can be performed. Since int (*)[dim2] is a pointer to an array of size dim2 and as such could be a pointer to a bunch of arrays of size dim2 in contiguous memory indexable by the pointer, I would think this should work.
I'm using clang++ on Mac OS/X with the following version information:
Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix
dim2 is not a compile-time constant, and VLAs (variable-length arrays) don't exist in C++. Some other compilers (such as gcc) have language extensions to allow VLAs in C++, but clang's behavior is standard-conforming.
You can work around the problem with a class (or class template) that does the address translation for you, such as
// This is very rudimentary, but it's a point to start.
template<typename T>
class array2d_ref {
public:
array2d_ref(T *p, std::size_t dim) : data_(p), dim_(dim) { }
T *operator[](std::size_t i) { return &data_[i * dim_]; }
private:
T *data_;
std::size_t dim_;
};
...
array2d_ref<int> array(twoDArrayPtr, dim2);
But I'm afraid it is not possible (portably) to have a pointer-to-array unless you know the dimension of the array at compile time.
You're trying to use C99's variable-length array (VLA) feature when you use dim2 as the array dimension in your cast. (GCC, for example, does support this as an extension: https://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html.)
Good news: you can't do this now, but you may be able to soon if C++14's proposed runtime-sized arrays are adopted.
Pertinent quotes:
Runtime-sized arrays offer the same syntax and performance of C99’s VLAs... Bear in mind that runtime-sized arrays aren’t precisely the same as C99’s VLAs. The C++14 feature is more restrained, which is just as well. Specifically, the following properties are excluded:
Runtime-sized multidimensional arrays
Modifications to the function declarator syntax
sizeof(a) being a runtime-evaluated expression returning the size of a
typedef int a[n]; evaluating n and passing it through the typedef
So your code may become legal, circa C++14.
I've tried it out on the Visual Studio 2015 Beta and, sadly, at the time of writing it is not supported :(
Although clang does not support variable-length arrays, there is a workaround. The following compiles with clang++ 4.0.0:
void fcn(int *twoDArrayPtr, const int dim1, const int dim2) {
using array_type = int (*)[dim2];
array_type array = reinterpret_cast<array_type>(twoDArrayPtr);
}
int main() {
return 0;
}
I'm not sure why this alias declaration should make any difference. It certainly seems inconsistent.
Say I have a kernel:
__global__ void foo(int a, int b)
{
__shared__ int array[a];
}
It seems a has to be a constant value. I added const in front of int, but it still didn't work out; any ideas?
__global__ void foo(const int a, const int b)
{
__shared__ int array[a];
}
While you can't have a dynamically-sized array because of the constraints of the C language (as mentioned in other answers), what you can do in CUDA is something like this:
extern __shared__ float fshared[];
__global__ void testShmem( float * result, unsigned int shmemSize ) {
// use fshared - shmemSize tells you how many bytes
// Note that the following is not a sensible use of shared memory!
for( int i = 0; i < shmemSize/sizeof(float); ++i ) {
fshared[i] = 0;
}
}
providing you tell CUDA how much shared memory you want during kernel invocation, like so:
testShmem<<<grid, block, 1024>>>( pdata, 1024 );
In ISO C++, the size of an array needs to be a so-called constant expression. This is stronger than a const-qualified variable: it essentially means a compile-time constant, so the value has to be known at compile time.
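For illustration, a minimal host-side C++ sketch of that distinction (CUDA aside; runtime_value is a hypothetical function):
int runtime_value();                // assumed to produce a value at run time

void demo()
{
    constexpr int n1 = 64;          // a true constant expression (C++11)
    int a1[n1];                     // OK: size known at compile time

    const int n2 = runtime_value(); // const-qualified, but not a constant expression
    // int a2[n2];                  // ill-formed in ISO C++; GCC/Clang accept it only as a VLA extension
    (void)a1; (void)n2;
}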
In ISO C90 this was also the case. C99 added VLAs, variable-length-arrays, that allow the size to be determined at runtime. The sizeof operator for these VLAs becomes a runtime operator.
I'm not familiar with CUDA or the __shared__ syntax. It's not clear to me why/how you use the term kernel. But I guess the rules are similar w.r.t. constant expressions and arrays.
I don't think CUDA or OpenCL let you dynamically allocate shared memory. Use a #define macro instead.
If you need a dynamically sized array on a per-program basis, you can supply the size using -D MYMACRO (with OpenCL; I don't know about CUDA). See Bahbar's answer.
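A hypothetical sketch of what that looks like on the kernel side; ARRAY_SIZE here is an illustrative macro name assumed to be supplied on the compiler command line (e.g. -D ARRAY_SIZE=128):
#ifndef ARRAY_SIZE
#define ARRAY_SIZE 128              // fallback so the file still builds standalone
#endif

__global__ void foo(int *out)
{
    __shared__ int array[ARRAY_SIZE];   // the size is a compile-time constant

    if (threadIdx.x < ARRAY_SIZE)
        array[threadIdx.x] = threadIdx.x;
    __syncthreads();

    if (threadIdx.x == 0)
        out[blockIdx.x] = array[0];
}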
Here's how you can statically allocate a __shared__ array of n values in CUDA using C++ templates:
template <int n>
__global__ void kernel(...)
{
__shared__ int array[n];
}
const int n = 128;
kernel<n><<<grid_size,block_size>>>(...);
Note that n must be a known constant at compile time for this to work. If n is not known at compile time then you must use the approach Edric suggests.
I suspect this is a C language question.
If it were C++, you could simply use std::vector.
void foo( int a, int b )
{
std::vector<int> array( a );
// ...
}
If it really is C++, then which C++ features you can use safely may depend on the environment. It's not clear what you mean by "kernel".
I am using the following macro for calculating size of an array:
#define G_N_ELEMENTS(arr) ((sizeof(arr))/(sizeof(arr[0])))
However, I see a discrepancy in the value it computes when I evaluate the size of an array inside a function (incorrect value) as opposed to where the function is called (correct value). Code and output below. Any thoughts, suggestions, tips etc. welcome.
DP
#include <stdio.h>
#define G_N_ELEMENTS(arr) ((sizeof(arr))/(sizeof(arr[0])))
void foo(int * arr) // Also tried foo(int arr[]), foo(int * & arr)
// - neither of which worked
{
printf("arr : %x\n", arr);
printf ("sizeof arr: %d\n", G_N_ELEMENTS(arr));
}
int main()
{
int arr[] = {1, 2, 3, 4};
printf("arr : %x\n", arr);
printf ("sizeof arr: %d\n", G_N_ELEMENTS(arr));
foo(arr);
}
Output:
arr : bffffa40
sizeof arr: 4
arr : bffffa40
sizeof arr: 1
That's because the size of an int * is the size of an int pointer (4 or 8 bytes on the modern platforms I use, but it depends entirely on the platform). sizeof is calculated at compile time, not run time, so even sizeof (arr[]) won't help, because you may call the foo() function at runtime with many different-sized arrays.
The size of an int array is the size of an int array.
This is one of the tricky bits in C/C++: arrays and pointers are not always interchangeable. Arrays will, under a great many circumstances, decay to a pointer to the first element of that array.
There are at least two solutions, compatible with both C and C++:
pass the length in with the array (not that useful if the intent of the function is to actually work out the array size).
pass a sentinel value marking the end of the data, e.g., {1, 2, 3, 4, -1} (see the sketch below).
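A minimal sketch of the sentinel approach; the -1 terminator is only an illustrative choice and must not occur in the real data:
#include <stdio.h>

/* Count elements up to (not including) the sentinel. */
static int count_until_sentinel(const int *arr)
{
    int n = 0;
    while (arr[n] != -1)
        ++n;
    return n;
}

int main(void)
{
    int arr[] = {1, 2, 3, 4, -1};
    printf("elements before sentinel: %d\n", count_until_sentinel(arr));
    return 0;
}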
This isn't working because sizeof is calculated at compile-time. The function has no information about the size of its parameter (it only knows that it points to a memory address).
Consider using an STL vector instead, or passing in array sizes as parameters to functions, as sketched below.
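For the second option, a minimal sketch: compute the element count with the macro where the array type is still known, and pass it in explicitly:
#include <stdio.h>

#define G_N_ELEMENTS(arr) ((sizeof(arr))/(sizeof(arr[0])))

void foo(const int *arr, size_t n)
{
    printf("number of elements: %u\n", (unsigned)n);
    (void)arr;
}

int main(void)
{
    int arr[] = {1, 2, 3, 4};
    foo(arr, G_N_ELEMENTS(arr));   /* the macro still sees the real array type here */
    return 0;
}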
In C++, you can define G_N_ELEMENTS like this:
template<typename T, size_t N>
size_t G_N_ELEMENTS( T (&array)[N] )
{
return N;
}
If you wish to use the array size at compile time, here's how:
// ArraySize
template<typename T>
struct ArraySize;
template<typename T, size_t N>
struct ArraySize<T[N]>
{
enum{ value = N };
};
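Hypothetical usage of the compile-time version, relying on the ArraySize specialization above (pre-C++11 it has to be applied to the array type rather than to a variable):
typedef int Buffer[16];

// ArraySize<Buffer>::value is a compile-time constant, so it can size another array:
char bitmap[ArraySize<Buffer>::value];   // 16 elements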
Thanks j_random_hacker for correcting my mistakes and providing additional information.
Note that even if you try to tell the C compiler the size of the array in the function, it doesn't take the hint (my DIM is equivalent to your G_N_ELEMENTS):
#include <stdio.h>
#define DIM(x) (sizeof(x)/sizeof(*(x)))
static void function(int array1[], int array2[4])
{
printf("array1: size = %u\n", (unsigned)DIM(array1));
printf("array2: size = %u\n", (unsigned)DIM(array2));
}
int main(void)
{
int a1[40];
int a2[4];
function(a1, a2);
return(0);
}
This prints:
array1: size = 1
array2: size = 1
If you want to know how big the array is inside a function, pass the size to the function. Or, in C++, use things like STL vector<int>.
Edit: C++11 was introduced since this answer was written, and it includes functions to do exactly what I show below: std::begin and std::end. Const versions std::cbegin and std::cend are also going into a future version of the standard (C++14?) and may be in your compiler already. Don't even consider using my functions below if you have access to the standard functions.
I'd like to build a little on Benoît's answer.
Rather than passing just the starting address of the array as a pointer, or a pointer plus the size as others have suggested, take a cue from the standard library and pass two pointers to the beginning and end of the array. Not only does this make your code more like modern C++, but you can use any of the standard library algorithms on your array!
template<typename T, int N>
T * BEGIN(T (& array)[N])
{
return &array[0];
}
template<typename T, int N>
T * END(T (& array)[N])
{
return &array[N];
}
template<typename T, int N>
const T * BEGIN_CONST(const T (& array)[N])
{
return &array[0];
}
template<typename T, int N>
const T * END_CONST(const T (& array)[N])
{
return &array[N];
}
void
foo(int * begin, int * end)
{
printf("arr : %x\n", begin);
printf ("sizeof arr: %d\n", end - begin);
}
int
main()
{
int arr[] = {1, 2, 3, 4};
printf("arr : %x\n", arr);
printf ("sizeof arr: %d\n", END(arr) - BEGIN(arr));
foo(BEGIN(arr), END(arr));
}
Here's an alternate definition for BEGIN and END, if the templates don't work.
#define BEGIN(array) array
#define END(array) (array + sizeof(array)/sizeof(array[0]))
Update: The above code with the templates works in MS VC++2005 and GCC 3.4.6, as it should. I need to get a new compiler.
I'm also rethinking the naming convention used here - template functions masquerading as macros just feels wrong. I'm sure I will use this in my own code sometime soon, and I think I'll use ArrayBegin, ArrayEnd, ArrayConstBegin, and ArrayConstEnd.
If you change the foo function a little, it might make you feel a little more comfortable:
void foo(int * pointertofoo)
{
printf("pointertofoo : %x\n", pointertofoo);
printf ("sizeof pointertofoo: %d\n", G_N_ELEMENTS(pointertofoo));
}
That's all the compiler sees: inside the function, the parameter is just a pointer, which is a completely different thing from the array in the calling context.
foo(int * arr) //Also tried foo(int arr[]), foo(int * & arr)
{ // - neither of which worked
printf("arr : %x\n", arr);
printf ("sizeof arr: %d\n", G_N_ELEMENTS(arr));
}
sizeof(arr) is sizeof(int*), i.e. 4 on your platform.
Unless you have a very good reason for writing code like this, DON'T. We're in the 21st century now, use std::vector instead.
For more info, see the C++ FAQ: http://www.parashift.com/c++-faq-lite/containers.html
Remember: "Arrays are evil"
You should only call sizeof on the array. When you call sizeof on the pointer type the size will always be 4 (or 8, or whatever your system does).
MSFT's Hungarian notation may be ugly, but if you use it, you know not to call your macro on anything that starts with a 'p'.
Also check out the definition of the ARRAYSIZE() macro in WinNT.h. If you're using C++, you can do strange things with templates to get compile-time asserts if you do it that way.
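For reference, here's a sketch of the kind of template trick such macros rely on; this is a generic version of the idea, not the literal WinNT.h definition:
#include <cstddef>

// The helper is only declared, never defined; sizeof of its return type yields
// N at compile time, and passing a plain pointer fails to compile because it
// cannot bind to T (&)[N].
template <typename T, std::size_t N>
char (&ArraySizeHelper(T (&)[N]))[N];

#define COUNT_OF(arr) (sizeof(ArraySizeHelper(arr)))

int main()
{
    int values[12];
    char bitmap[COUNT_OF(values)];   // compile-time constant: 12
    // int *p = values; (void)COUNT_OF(p);  // would not compile: p is not an array
    return (int)sizeof(bitmap);
}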
Now that we have constexpr in C++11, the type-safe (non-macro) version can also be used in a constant expression.
template<typename T, std::size_t size>
constexpr std::size_t array_size(T const (&)[size]) { return size; }
This will fail to compile where it does not work properly, unlike your macro solution (it won't work on pointers by accident). You can use it where a compile-time constant is required:
int new_array[array_size(some_other_array)];
That being said, you are better off using std::array for this if possible. Pay no attention to the people who say to use std::vector because it is better. std::vector is a different data structure with different strengths. std::array has no overhead compared to a C-style array, but unlike the C-style array it will not decay to a pointer at the slightest provocation. std::vector, on the other hand, requires all accesses to be indirect accesses (go through a pointer) and using it requires dynamic allocation. One thing to keep in mind if you are used to using C-style arrays is to be sure to pass std::array to a function like this:
void f(std::array<int, 100> const & array);
If you do not pass by reference, the data is copied. This follows the behavior of most well-designed types, but is different from C-style arrays when passed to a function (it's more like the behavior of a C-style array inside of a struct).