"Don't!" is the correct answer, but unfortunately it's not the one I need.
If I do:
size_t array_size = func_that_calc_array_size();
char *foo = new char[array_size];
if (array_size > 42)
foo[42] = 'X';
This is all perfectly legal, but my MISRA C++ code checker gives an error 5-0-15 on the foo[42], which says that "Array indexing shall be the only form of pointer arithmetic". This question has actually been asked before, but the question and answer missed a critical issue, namely that the documentation further states:
Array indexing shall only be applied to objects defined as an array type.
If you look at the documentation (a suspiciously bootleg copy can be found by searching for "misra c++ 2008 pdf") it has an example similar to:
void my_fn(uint8_t *p1, uint8_t p2[])
{
    p1[5] = 0; // Non-compliant - p1 was not declared as array
    p2[5] = 0; // Compliant
}
So, basically the code-checking tool matches the declaration to the usage. Is there any possible way to convert the pointer to an array?
In our real example, we are using OpenCV's uchar *cv::Mat::ptr(), so we can't just reserve a large-enough array.
I think the root of the problem here is char *foo = new char[array_size];. A MISRA checker is arguably allowed to assume that this is not an array, because all dynamic memory allocation is banned.
You could try to see if you get the same error when writing char array[10]={0}; char* foo = array; because then you can dismiss this as a false positive tool bug.
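For instance, a minimal test case (the names here are only illustrative) could be:
char array[10] = {0};
char *foo = array;   // foo now demonstrably points into a real array
foo[5] = 'X';        // if the checker still reports 5-0-15 here, it looks like a tool bug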
The purpose and rationale of the rule is to ban the form *(x + i) instead of x[i]. Nothing else. The rule does not block the use of the [] on a pointer operand.
Several MISRA rules are however in place to ensure that all pointer arithmetic is done with operands that point at the same array, to prevent undefined behavior.
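To illustrate the distinction the rationale draws, here is a hedged sketch (the function and parameter names are invented):
#include <cstddef>

void fn(char *buf, std::size_t n)
{
    if (n > 42)
    {
        buf[42] = 'X';        // subscript form: the form the rule asks for
        // *(buf + 42) = 'X'; // explicit pointer arithmetic: the form the rule is meant to ban
    }
}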
MISRA-C:2004 and MISRA-C++:2008 also had some weird, vague requirement that function parameters should be declared as char param[] rather than char* param, but since that was nonsensical, all this talk about array style indexing was removed in MISRA-C:2012.
(In fact there's no such thing as "array style indexing" in C or C++, see Do pointers support "array style indexing"?)
The validity of the rule is quite dubious. In the example, p1 and p2 have exactly the same type: They are both pointers.
If p2 truly conforms to the rule, then the solution is to introduce a function, so that you can make use of the function-argument-array-to-pointer-adjustment. Here is an example with a lambda, but you can use a regular function as well:
char *foo = new char[array_size];
if (array_size > 42)
    [](char foo[]) {
        foo[42] = 'X';
    }(foo);
C++20 introduces std::span, which appears to be a solution to the problem:
std::span foo_span{foo, array_size};
if (array_size > 42)
foo_span[42] = 'X';
This uses a class overload of the subscript operator, rather than subscripting a pointer, so it appears to conform to the rule. std::span is probably not implementable without violating MISRA, but neither are many other things in the standard library, so I suspect that is not a problem.
In our real example, we are using OpenCV's uchar *cv::Mat::ptr(), so we can't just reserve a large-enough array.
Perhaps to follow the spirit of the rule, rather than the letter, you should be passing a cv::Mat& into the function rather than char*.
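A rough sketch of that idea (the function name is invented, and it assumes a single-channel 8-bit cv::Mat):
#include <opencv2/core.hpp>

void set_marker(cv::Mat &img, int row, int col)
{
    if (row < img.rows && col < img.cols)
    {
        img.at<uchar>(row, col) = 'X';  // element access through the Mat interface, no raw pointer arithmetic
    }
}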
P.S. I suspect that OpenCV does not conform to MISRA, so depending on it is probably not the best thing to do if the program has to conform to MISRA.
Related
I'm trying to understand the nature of type-decay. For example, we all know arrays decay into pointers in a certain context. My attempt is to understand how int[] equates to int*, but why two-dimensional arrays don't correspond to the expected pointer type. Here is a test case:
std::is_same<int*, std::decay<int[]>::type>::value; // true
This returns true as expected, but this doesn't:
std::is_same<int**, std::decay<int[][1]>::type>::value; // false
Why is this not true? I finally found a way to make it return true, and that was by making the first dimension a pointer:
std::is_same<int**, std::decay<int*[]>::type>::value; // true
And the assertion holds true for any type made up of pointers, as long as the array is only the last (outermost) part. For example (int***[] == int****; // true).
Can I have an explanation as to why this is happening? Why don't the array types correspond to the pointer types as would be expected?
Why does int*[] decay into int** but not int[][]?
Because it would be impossible to do pointer arithmetic with it.
For example, int p[5][4] means an array of (length-4 array of int). There are no pointers involved, it's simply a contiguous block of memory of size 5*4*sizeof(int). When you ask for a particular element, e.g. int a = p[i][j], the compiler is really doing this:
char *tmp = (char *)p          // Work in units of bytes (char)
          + i * sizeof(int[4]) // Offset for outer dimension (int[4] is a type)
          + j * sizeof(int);   // Offset for inner dimension
int a = *(int *)tmp;           // Back to the contained type, and dereference
Obviously, it can only do this because it knows the size of the "inner" dimension(s). Casting to an int (*)[4] retains this information; it's a pointer to (length-4 array of int). However, an int ** doesn't; it's merely a pointer to (pointer to int).
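A small illustration of that difference (a hedged sketch, names invented):
int p[5][4] = {};
int (*pa)[4] = p;   // pointer to "array of 4 int": the inner dimension is part of the type
int a = pa[2][3];   // offset computed as 2 * sizeof(int[4]) + 3 * sizeof(int)
// int **pp = p;    // does not compile: int (*)[4] cannot convert to int**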
For another take on this, see the following sections of the C FAQ:
6.18: My compiler complained when I passed a two-dimensional array to a function expecting a pointer to a pointer.
6.19: How do I write functions which accept two-dimensional arrays when the width is not known at compile time?
6.20: How can I use statically- and dynamically-allocated multidimensional arrays interchangeably when passing them to functions?
(This is all for C, but this behaviour is essentially unchanged in C++.)
C was not really "designed" as a language; instead, features were added as needs arose, with an effort not to break earlier code. Such an evolutionary approach was a good thing in the days when C was being developed, since it meant that for the most part developers could reap the benefits of the earlier improvements in the language before everything the language might need to do was worked out. Unfortunately, the way in which array- and pointer handling have evolved has led to a variety of rules which are, in retrospect, unfortunate.
In the C language of today, there is a fairly substantial type system, and variables have clearly defined types, but things were not always thus. A declaration char arr[8]; would allocate 8 bytes in the present scope, and make arr point to the first of them. The compiler wouldn't know that arr represented an array--it would represent a char pointer just like any other char*. From what I understand, if one had declared char arr1[8], arr2[8];, the statement arr1 = arr2; would have been perfectly legal, being somewhat equivalent conceptually to char *st1 = "foo", *st2 = "bar"; st1 = st2;, but would have almost always represented a bug.
The rule that arrays decompose into pointers stemmed from a time when arrays and pointers really were the same thing. Since then, arrays have come to be recognized as a distinct type, but the language needed to remain essentially compatible with the days when they weren't. When the rules were being formulated, the question of how two-dimensional arrays should be handled wasn't an issue because there was no such thing. One could do something like char foo[20]; char *bar[4]; int i; for (i=0; i<4; i++) bar[i] = foo + (i*5); and then use bar[x][y] in the same way as one would now use a two-dimensional array, but a compiler wouldn't view things that way--it just saw bar as a pointer to a pointer. If one wanted to make bar[1] point somewhere completely different from bar[2], one could perfectly legally do so.
When two-dimensional arrays were added to C, it was not necessary to maintain compatibility with earlier code that declared two-dimensional arrays, because there wasn't any. While it would have been possible to specify that char bar[4][5]; would generate code equivalent to what was shown using foo[20], in which case a char[][] would have been usable as a char**, it was thought that just as assigning array variables would have been a mistake 99% of the time, so too would have been re-assignment of array rows, had that been legal. Thus, arrays in C are recognized as distinct types, with their own rules which are a bit odd, but which are what they are.
Because int[M][N] and int** are incompatible types.
However, int[M][N] can decay into int (*)[N] type. So the following :
std::is_same<int(*)[1], std::decay<int[1][1]>::type>::value;
should give you true.
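Put into a self-contained check (C++11 or later), those assumptions look like:
#include <type_traits>

static_assert(std::is_same<int(*)[1], std::decay<int[1][1]>::type>::value,
              "the outer dimension decays to a pointer to the inner array type");
static_assert(!std::is_same<int**, std::decay<int[1][1]>::type>::value,
              "...but not to a pointer-to-pointer");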
Two dimensional arrays are not stored as pointer to pointers, but as a contiguous block of memory.
An object declared as type int[y][x] is a block of size sizeof(int) * x * y, whereas an object of type int ** is just a pointer to an int *.
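A quick way to see the difference in a compiler (a sketch; the names are made up):
int grid[3][4];
static_assert(sizeof(grid) == 3 * 4 * sizeof(int),
              "one contiguous block of 12 ints, no pointers stored anywhere");
int **indirect = nullptr;   // by contrast, this object is only the size of a single pointer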
Once I encountered the following code:
int s =10;
int *p=&s;
cout << p[3] << endl;
And I can't understand why I am able to access p[3], which doesn't exist (only p exists, a single pointer; I never created an array).
Is it some compiler bug, or is it a feature, or am I missing some basics of C++ that cover this?
Thank you
Why does C++ consider pointer and array of pointers as same thing?
It doesn't. You're asking why it treats pointers and arrays as the same.
The [] operator is just an abbreviated form of pointer arithmetic. a[b] is equivalent to *(a + b). Array names can decay into pointers, and then pointer arithmetic is applied. It's the programmer's job to make sure they don't go out of bounds. The compiler can't possibly stop you from shooting your foot off.
Also, claiming to be able to "access" it is a strong assertion. That is UB, and is most likely going to either read the wrong memory or get a segfault.
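For a pointer that really does point into an array, the equivalence is easy to demonstrate (an illustrative snippet):
#include <iostream>

int main() {
    int arr[5] = {10, 20, 30, 40, 50};
    int *q = arr;                                       // q points to arr[0]
    // q[3] and *(q + 3) are the same expression; both read arr[3]
    std::cout << q[3] << " " << *(q + 3) << std::endl;  // prints "40 40"
}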
No, it's not a compiler bug, it's a very useful feature... but let's not get ahead of ourselves here; the consequence of your code is called Undefined Behaviour.
So, what's the feature? All naked arrays are actually pointers to the first element, except un-decayed arrays (See What is array decaying?).
Consider this code:
int s =10;
int* array = new int[12];
int *p;
p = array; // p refers to the first element
int* x = p + 7; //advances to index 7; the compiler never checks bounds
int* y = p + 700; //ditto ...this is obviously undefined
p = &s; //p now points to s
int* xx = p + 3; //But s is a single element, so Undefined Behaviour
Once an array is decayed, it's simply a pointer... And a pointer can be incremented, decremented, dereferenced, advanced, assigned or reassigned.
So,
cout << p[7] << endl;
is a valid C++ program, but not necessarily a correct one.
It's the responsibility of the programmer to know whether a pointer points to a single element or an array. But thanks to static analyzers and https://github.com/isocpp/CppCoreGuidelines, things are changing for the better.
Also see What are all the common undefined behaviours that a C++ programmer should know about?
From here, section array-to-pointer decay:
There is an implicit conversion from lvalues and rvalues of array type to rvalues of pointer type: it constructs a pointer to the first element of an array. This conversion is used whenever arrays appear in context where arrays are not expected, but pointers are
Inherited from C, C++ allows you to treat any pointer like the first element of an array starting at that address.
That's partly because arrays are passed to functions as pointers, so for that to make sense you need to be able to treat a pointer as an array.
It also enables some quite neat and very efficient code in various circumstances.
The upshot is that p[3] is a valid construct in this context.
Obviously however it has undefined behaviour because p isn't pointing to an array! Unfortunately the language rules (and compiler) aren't smart enough to work that out.
C is a very low level language and doesn't enforce nice things like range checking either during compilation or execution.
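To contrast the defined and undefined cases (an illustrative sketch):
void demo() {
    int arr[10] = {0};
    int *p1 = arr;
    p1[3] = 7;       // fine: p1 points to the first of 10 ints, so index 3 is in bounds
    int s = 10;
    int *p2 = &s;
    // p2[3] = 7;    // this would compile, but it is undefined behaviour: s is a single int
}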
Why can't you pass arrays as function arguments?
I have been reading this C++ book that says 'you can't pass arrays as function arguments', but it never explains why. Also, when I looked it up online I found comments like 'why would you do that anyway?' It's not that I would do it, I just want to know why you can't.
Why can't arrays be passed as function arguments?
They can:
void foo(const int (&myArray)[5]) {
    // `myArray` is the original array of five integers
}
In technical terms, the type of the argument to foo is "reference to array of 5 const ints"; with references, we can pass the actual object around (disclaimer: terminology varies by abstraction level).
What you can't do is pass by value, because for historical reasons we shall not copy arrays. Instead, attempting to pass an array by value into a function (or, to pass a copy of an array) causes its name to decay into a pointer. (Some resources get this wrong!)
Array names decay to pointers for pass-by-value
This means:
void foo(int* ptr);
int ar[10]; // an array
foo(ar); // automatically passing ptr to first element of ar (i.e. &ar[0])
There's also the hugely misleading "syntactic sugar" that looks like you can pass an array of arbitrary length by value:
void foo(int ptr[]);
int ar[10]; // an array
foo(ar);
But, actually, you're still just passing a pointer (to the first element of ar). foo is the same as it was above!
Whilst we're at it, the following function also doesn't really have the signature that it seems to. Look what happens when we try to call this function without defining it:
void foo(int ar[5]);
int main() {
    int ar[5];
    foo(ar);
}
// error: undefined reference to `foo(int*)'
So foo takes int* in fact, not int[5]!
But you can work around it!
You can hack around this by wrapping the array in a struct or class, because the default copy operator will copy the array:
struct Array_by_val
{
    int my_array[10];
};
void func (Array_by_val x) {}
int main() {
    Array_by_val x;
    func(x);
}
This is somewhat confusing behaviour.
Or, better, a generic pass-by-reference approach
In C++, with some template magic, we can make a function both re-usable and able to receive an array:
template <typename T, size_t N>
void foo(const T (&myArray)[N]) {
    // `myArray` is the original array of N Ts
}
But we still can't pass one by value. Something to remember.
The future...
And since C++11 is just over the horizon, and C++0x support is coming along nicely in the mainstream toolchains, you can use the lovely std::array inherited from Boost! I'll leave researching that as an exercise to the reader.
So I see answers explaining, "Why doesn't the compiler allow me to do this?" Rather than "What caused the standard to specify this behavior?" The answer lies in the history of C. This is taken from "The Development of the C Language" (source) by Dennis Ritchie.
In the proto-C languages, memory was divided into "cells" each containing a word. These could be dereferenced using the eventual unary * operator -- yes, these were essentially typeless languages, like some of today's toy languages such as Brainf_ck. Syntactic sugar allowed one to pretend a pointer was an array:
a[5]; // equivalent to *(a + 5)
Then, automatic allocation was added:
auto a[10]; // allocate 10 cells, assign pointer to a
// note that we are still typeless
a += 1; // remember that a is a pointer
At some point, the auto storage specifier behavior became default -- you may also be wondering what the point of the auto keyword was anyway, this is it. Pointers and arrays were left to behave in somewhat quirky ways as a result of these incremental changes. Perhaps the types would behave more alike if the language were designed from a bird's-eye view. As it stands, this is just one more C / C++ gotcha.
Arrays are in a sense second-class types, something that C++ inherited from C.
Quoting 6.3.2.1p3 in the C99 standard:
Except when it is the operand of the sizeof operator or the unary & operator, or is a string literal used to initialize an array, an expression that has type "array of type" is converted to an expression with type "pointer to type" that points to the initial element of the array object and is not an lvalue. If the array object has register storage class, the behavior is undefined.
The same paragraph in the C11 standard is essentially the same, with the addition of the new _Alignof operator. (Both links are to drafts which are very close to the official standards. (UPDATE: That was actually an error in the N1570 draft, corrected in the released C11 standard. _Alignof can't be applied to an expression, only to a parenthesized type name, so C11 has only the same 3 exceptions that C99 and C90 did. (But I digress.)))
I don't have the corresponding C++ citation handy, but I believe it's quite similar.
So if arr is an array object, and you call a function func(arr), then func will receive a pointer to the first element of arr.
So far, this is more or less "it works that way because it's defined that way", but there are historical and technical reasons for it.
Permitting array parameters wouldn't allow for much flexibility (without further changes to the language), since, for example, char[5] and char[6] are distinct types. Even passing arrays by reference doesn't help with that (unless there's some C++ feature I'm missing, always a possibility). Passing pointers gives you tremendous flexibility (perhaps too much!). The pointer can point to the first element of an array of any size -- but you have to roll your own mechanism to tell the function how big the array is.
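The usual hand-rolled mechanism is simply to pass a length alongside the pointer; a sketch (not the only possible convention):
#include <cstddef>

int sum(const int *data, std::size_t len)
{
    int total = 0;
    for (std::size_t i = 0; i < len; ++i)
        total += data[i];
    return total;
}

int values[6] = {1, 2, 3, 4, 5, 6};
int s = sum(values, sizeof values / sizeof values[0]);  // the array decays; its length travels separately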
Designing a language so that arrays of different lengths are somewhat compatible while still being distinct is actually quite tricky. In Ada, for example, the equivalents of char[5] and char[6] are the same type, but different subtypes. More dynamic languages make the length part of an array object's value, not of its type. C still pretty much muddles along with explicit pointers and lengths, or pointers and terminators. C++ inherited all that baggage from C. It mostly punted on the whole array thing and introduced vectors, so there wasn't as much need to make arrays first-class types.
TL;DR: This is C++, you should be using vectors anyway! (Well, sometimes.)
Arrays are not passed by value because arrays are essentially contiguous blocks of memory. If you had an array you wanted to pass by value, you could declare it within a structure and then access it through the structure.
This itself has performance implications, because it means you will lock up more space on the stack. Passing a pointer is faster because the amount of data to be copied onto the stack is far smaller.
I believe the reason C++ did this is that, when it was created, sending the whole array rather than just its address in memory might have taken up too many resources. That is just my assumption on the matter.
It's because of a technical reason. Arguments are passed on the stack; an array can have a huge size, megabytes and more. Copying that data to the stack on every call will not only be slower, but it will exhaust the stack pretty quickly.
You can overcome that limitation by putting an array into a struct (or using Boost::Array):
struct Array
{
    int data[512*1024];
    int& operator[](int i) { return data[i]; }
};
void foo(Array byValueArray) { .......... }
Try to make nested calls of that function and see how many stack overflows you'll get!
Is there a "good" way to write "pointer to something" in C/C++ ?
I usually write void foo( char *str ); but sometimes I find it quite illogical, because the type of str is "pointer to char", so it would seem more logical to attach the * to the type name.
Is there a rule for how to write pointer declarations?
char*str;
char* str;
char *str;
char * str;
There is no strict rule, but bear in mind that the * attaches to the variable, so:
char *str1, *str2; // str1 and str2 are pointers
char* str1, str2; // str1 is a pointer, str2 is a char
Some people like to do char * str1 as well, but it's up to you or your company's coding standard.
The common C convention is to write T *p, whereas the common C++ convention is to write T* p. Both parse as T (*p); the * is part of the declarator, not the type specifier. It's purely an accident of pointer declaration syntax that you can write it either way.
C (and by extension, C++) declaration syntax is expression-centric; IOW, the form of a declaration should match the form of an expression of the same type in the code.
For example, suppose we had a pointer to int, and we wanted to access that integer value. To do so, we dereference the pointer with the * indirection operator, like so:
x = *p;
The type of the expression *p is int; thus, it follows that the declaration of p should be
int *p
The int-ness of p is provided by the type specifier int, but the pointer-ness of p is provided by the declarator *p.
As a slightly more complicated example, suppose we had a pointer to an array of float, and wanted to access the floating point value at the i'th element of the array through the pointer. We dereference the array pointer and subscript the result:
f = (*ap)[i];
The type of the expression (*ap)[i] is float, so it follows that the declaration of the array pointer is
float (*ap)[N];
The float-ness of ap is provided by the type specifier float, but the pointer-ness and array-ness are provided by the declarator (*ap)[N]. Note that in this case the * must explicitly be bound to the identifier; [] has a higher precedence than unary * in both expression and declaration syntax, so float* ap[N] would be parsed as float *(ap[N]), or "array of pointers to float", rather than "pointer to array of float". I suppose you could write that as
float(* ap)[N];
but I'm not sure what the point would be; it doesn't make the type of ap any clearer.
Even better, how about a pointer to a function that returns a pointer to an array of pointer to int:
int *(*(*f)())[N];
Again, at least two of the * operators must explicitly be bound in the declarator; binding the last * to the type specifier, as in
int* (*(*f)())[N];
just indicates confused thinking IMO.
Even though I use it in my own C++ code, and even though I understand why it became popular, the problem I have with the reasoning behind the T* p convention is that it just doesn't apply outside of the simplest of pointer declarations, and it reinforces a simplistic-to-the-point-of-being-wrong view of C and C++ declaration syntax. Yes, the type of p is "pointer to T", but that doesn't change the fact that as far as the language grammar is concerned * binds to the declarator, not the type specifier.
For another case, if the type of a is "N-element array of T", we don't write
T[N] a;
Obviously, the grammar doesn't allow it. Again, the argument just doesn't apply in this case.
EDIT
As Steve points out in the comments, you can use typedefs to hide some of the complexity. For example, you could rewrite
int *(*(*f)())[N];
as something like
typedef int *iptrarr[N]; // iptrarr is an array of pointer to int
typedef iptrarr *arrptrfunc(); // arrptrfunc is a function returning
// a pointer to iptrarr
arrptrfunc *f; // f is a pointer to arrptrfunc
Now you can cleanly apply the T* p convention, declaring f as arrptrfunc* f. I personally am not fond of doing things this way, since it's not necessarily clear from the typedef how f is supposed to be used in an expression, or how to use an object of type arrptrfunc. The non-typedef'd version may be ugly and difficult to read, but at least it tells you everything you need to know up front; you don't have to go digging through all the typedefs.
The "good way" depends on
internal coding standards in your project
your personal preferences
(probably) in that order.
There is no right or wrong in this. The important thing is to pick one coding standard and stick to it.
That being said, I personally believe that the * belongs with the type and not the variable name, as the type is "pointer to char". The variable name is not a pointer.
I think this is going to be heavily influenced by the general pattern in how one declares the variables.
For example, I have a tendency to declare only one variable per line. This way, I can add a comment reminding me how the variable is to be used.
However, there are times, when it is practical to declare several variables of the same type on one line. Under such circumstances, my personal coding rule is to never, NEVER, EVER declare pointers on the same line as non-pointers. I find that mixing them can be a source of errors, so I try to make it easier to see "wrongness" by avoiding mixing.
As long as I follow the first guideline, I find that it does not matter overly much how I declare the pointers so long as I am consistent.
However, if I use the second guideline and declare several pointers on the same line, I find the following style to be most beneficial and clear (of course others may disagree) ...
char *ptr1, *ptr2, *ptr3;
By having no space between the * and the pointer name, it becomes easier to spot whether I have violated the second guideline.
Now, if I wanted to be consistent between my two personal style guidelines, when declaring only one pointer on a line, I would use ...
char *ptr;
Anyway, that's my rationale for part of why I do what I do. Hope this helps.
I got a comment to my answer on this thread:
Malloc inside a function call appears to be getting freed on return?
In short I had code like this:
int * somefunc (void)
{
    int * temp = (int*) malloc (sizeof (int));
    temp[0] = 0;
    return temp;
}
I got this comment:
Can I just say, please don't cast the return value of malloc? It is not required and can hide errors.
I agree that the cast is not required in C. It is mandatory in C++, so I usually add it just in case I have to port the code to C++ one day.
However, I wonder how casts like this can hide errors. Any ideas?
Edit:
Seems like there are very good and valid arguments on both sides. Thanks for posting, folks.
It seems fitting I post an answer, since I left the comment :P
Basically, if you forget to include stdlib.h the compiler will assume malloc returns an int. Without casting, you will get a warning. With casting you won't.
So by casting you get nothing, and run the risk of suppressing legitimate warnings.
Much is written about this, a quick google search will turn up more detailed explanations.
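To make that concrete, here is a small C sketch with the #include deliberately missing (this relies on the old C89 implicit-int rule; as noted further down, modern compilers also warn about the implicit declaration itself):
/* #include <stdlib.h> is deliberately missing, so malloc is implicitly declared as returning int */
void demo(void)
{
    int *p1 = malloc(10 * sizeof *p1);        /* no cast: the compiler warns about converting int to int * */
    int *p2 = (int *)malloc(10 * sizeof *p2); /* the cast silences exactly that useful warning */
}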
edit
It has been argued that
TYPE * p;
p = (TYPE *)malloc(n*sizeof(TYPE));
makes it obvious when you accidentally don't allocate enough memory because, say, you thought p was TYPe not TYPE, and thus we should cast malloc because the advantage of this method outweighs the smaller cost of accidentally suppressing compiler warnings.
I would like to point out 2 things:
you should write p = malloc(sizeof(*p)*n); to always ensure you malloc the right amount of space (see the sketch after this list)
with the above approach, you need to make changes in 3 places if you ever change the type of p: once in the declaration, once in the malloc, and once in the cast.
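A sketch of that idiom in C (the element count and type are only illustrative):
#include <stdlib.h>

void example(void)
{
    size_t n = 100;                      /* illustrative element count */
    double *p = malloc(sizeof *p * n);   /* allocates n doubles; if p's type ever changes, this line stays correct */
    if (p != NULL)
    {
        p[0] = 3.14;
        free(p);
    }
}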
In short, I still personally believe there is no need for casting the return value of malloc and it is certainly not best practice.
This question is tagged both for C and C++, so it has at least two answers, IMHO:
C
Ahem... Do whatever you want.
I believe the reason given above ("if you don't include stdlib.h then you won't get a warning") is not a valid one, because one should not rely on this kind of hack to remember to include a header.
The real reason that could make you not write the cast is that the C compiler already silently converts a void * into whatever pointer type you want, and so doing it yourself is overkill and useless.
If you want to have type safety, you can either switch to C++ or write your own wrapper function, like:
int * malloc_Int(size_t p_iSize) /* number of ints wanted */
{
    return malloc(sizeof(int) * p_iSize) ;
}
C++
Sometimes, even in C++, you have to make use of the malloc/realloc/free utilities. Then you'll have to cast. But you already knew that. Using static_cast<>() will be better, as always, than a C-style cast.
And in C++, you could wrap malloc (and realloc, etc.) in templates to achieve type-safety:
template <typename T>
T * myMalloc(const size_t p_iSize)
{
    return static_cast<T *>(malloc(sizeof(T) * p_iSize)) ;
}
Which would be used like:
int * p = myMalloc<int>(25) ;
free(p) ;
MyStruct * p2 = myMalloc<MyStruct>(12) ;
free(p2) ;
and the following code:
// error: cannot convert ‘int*’ to ‘short int*’ in initialization
short * p = myMalloc<int>(25) ;
free(p) ;
won't compile, so, no problemo.
All in all, in pure C++, you now have no excuse if someone finds more than one C malloc inside your code...
:-)
C + C++ crossover
Sometimes, you want to produce code that will compile both in C and in C++ (for whatever reasons... isn't that the point of the C++ extern "C" {} block?). In this case, C++ demands the cast, but C won't understand the static_cast keyword, so the solution is the C-style cast (which is still legal in C++ for exactly this kind of reason).
Note that even with writing pure C code, compiling it with a C++ compiler will get you a lot more warnings and errors (for example attempting to use a function without declaring it first won't compile, unlike the error mentioned above).
So, to be on the safe side, write code that will compile cleanly in C++, study and correct the warnings, and then use the C compiler to produce the final binary. This means, again, write the cast, in a C-style cast.
One possible error it can introduce is if you are compiling on a 64-bit system using C (not C++).
Basically, if you forget to include stdlib.h, the default int rule will apply. Thus the compiler will happily assume that malloc has the prototype int malloc(). On many 64-bit systems an int is 32 bits and a pointer is 64 bits.
Uh oh, the value gets truncated and you only get the lower 32 bits of the pointer! Now if you cast the return value of malloc, this error is hidden by the cast. But if you don't, you will get an error (something along the lines of "cannot convert int to T *").
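Sketching that failure mode in C (a hypothetical LP64 system where int is 32 bits and pointers are 64 bits, with the header deliberately omitted):
/* no #include <stdlib.h>: under old C89 rules malloc is assumed to return int */
void demo(void)
{
    char *p = (char *)malloc(100);  /* the cast hides the int-to-pointer mismatch; the returned  */
                                    /* address may already have been truncated to 32 bits        */
    char *q = malloc(100);          /* without the cast, the compiler flags the bad conversion   */
}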
This does not apply to C++, of course, for two reasons: firstly, it has no default int rule, and secondly, it requires the cast.
All in all though, you should just use new in C++ code anyway :-P.
Well, I think it's the exact opposite - always directly cast it to the needed type. Read on here!
The "forgot stdlib.h" argument is a straw man. Modern compilers will detect and warn of the problem (gcc -Wall).
You should always cast the result of malloc immediately. Not doing so should be considered an error, and not just because it will fail as C++. If you're targeting a machine architecture with different kinds of pointers, for example, you could wind up with a very tricky bug if you don't put in the cast.
Edit: The commenter Evan Teran is correct. My mistake was thinking that the compiler didn't have to do any work on a void pointer in any context. I freak when I think of FAR pointer bugs, so my intuition is to cast everything. Thanks Evan!
Actually, the only way a cast could hide an error is if you were converting from one datatype to a smaller datatype and lost data, or if you were converting pears to apples. Take the following example:
int int_array[10];
/* initialize array */
int *p = &(int_array[3]);
short *sp = (short *)p;
short my_val = *sp;
In this case the conversion to short would drop some data from the int. And then there's this case:
struct my_struct {
    /* something */
} my_structs[100];
int my_int_array[100];
/* initialize array */
struct my_struct *p = (struct my_struct *)&(my_int_array[99]);
in which you'd end up pointing to the wrong kind of data, or even to invalid memory.
But in general, and if you know what you are doing, it's OK to do the casting. Even more so when you are getting memory from malloc, which happens to return a void pointer that you can't dereference unless you convert it, and most compilers will warn you if you are casting to something the lvalue (the value on the left side of the assignment) can't take anyway.
#ifdef __cplusplus
#define MALLOC_CAST(T) (T)
#else
#define MALLOC_CAST(T)
#endif
...
int * p;
p = MALLOC_CAST(int *) malloc(sizeof(int) * n);
or, alternately
#ifdef __cplusplus
#define MYMALLOC(T, N) static_cast<T*>(malloc(sizeof(T) * N))
#else
#define MYMALLOC(T, N) malloc(sizeof(T) * N)
#endif
...
int * p;
p = MYMALLOC(int, n);
People have already cited the reasons I usually trot out: the old (no longer applicable to most compilers) argument about not including stdlib.h and using sizeof *p to make sure the types and sizes always match regardless of later updating. I do want to point out one other argument against casting. It's a small one, but I think it applies.
C is fairly weakly typed. Most safe type conversions happen automatically, and most unsafe ones require a cast. Consider:
int from_f(float f)
{
    return *(int *)&f;
}
That's dangerous code. It's technically undefined behavior, though in practice it's going to do the same thing on nearly every platform you run it on. And the cast helps tell you "This code is a terrible hack."
Consider:
int *p = (int *)malloc(sizeof(int) * 10);
I see a cast, and I wonder, "Why is this necessary? Where is the hack?" It raises hairs on my neck that there's something evil going on, when in fact the code is completely harmless.
As long as we're using C, casts (especially pointer casts) are a way of saying "There's something evil and easily breakable going on here." They may accomplish what you need accomplished, but they indicate to you and future maintainers that the kids aren't alright.
Using casts on every malloc diminishes the "hack" indication of pointer casting. It makes it less jarring to see things like *(int *)&f;.
Note: C and C++ are different languages. C is weakly typed, C++ is more strongly typed. The casts are necessary in C++, even though they don't indicate a hack at all, because of (in my humble opinion) the unnecessarily strong C++ type system. (Really, this particular case is the only place I think the C++ type system is "too strong," but I can't think of any place where it's "too weak," which makes it overall too strong for my tastes.)
If you're worried about C++ compatibility, don't. If you're writing C, use a C compiler. There are plenty of really good ones available for every platform. If, for some inane reason, you have to write C code that compiles cleanly as C++, you're not really writing C. If you need to port C to C++, you should be making lots of changes to make your C code more idiomatic C++.
If you can't do any of that, your code won't be pretty no matter what you do, so it doesn't really matter how you decide to cast at that point. I do like the idea of using templates to make a new allocator that returns the correct type, although that's basically just reinventing the new keyword.
Casting a function which returns (void *) to instead be an (int *) is harmless: you're casting one type of pointer to another.
Casting a function which returns an integer to instead be a pointer is most likely incorrect. The compiler would have flagged it had you not explicitly cast it.
One possible error could (depending on whether this is what you really want or not) be mallocing with one size scale and assigning to a pointer of a different type. E.g.,
int *temp = (int *)malloc(sizeof(double));
There may be cases where you want to do this, but I suspect that they are rare.
I think you should put the cast in. Consider that there are three locations for types:
T1 *p;
p = (T2*) malloc(sizeof(T3));
The two lines of code might be widely separated. Therefore it's good that the compiler will enforce that T1 == T2. It is easier to visually verify that T2 == T3.
If you miss out the T2 cast, then you have to hope that T1 == T3.
On the other hand you have the missing stdlib.h argument - but I think it's less likely to be a problem.
Then again, if you ever need to port the code to C++, it is much better to use the 'new' operator.