How many pointers (*) are allowed in a single variable?
Let's consider the following example.
int a = 10;
int *p = &a;
Similarly we can have
int **q = &p;
int ***r = &q;
and so on.
For example,
int ****************zz;
The C standard specifies the lower limit:
5.2.4.1 Translation limits
The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the following limits: [...]
— 12 pointer, array, and function declarators (in any combinations) modifying an arithmetic, structure, union, or void type in a declaration
The upper limit is implementation specific.
Actually, C programs commonly make use of unbounded pointer indirection. One or two static levels are common. Triple indirection is rare. But unbounded indirection is very common.
Unbounded pointer indirection is achieved with the help of a struct, of course, not with a direct declarator, which would be impossible. A struct is also needed so that you can include other data at the different levels, and so that the chain can terminate.
struct list { struct list *next; ... };
Now you can have list->next->next->next->...->next. This is really just multiple pointer indirection: *(*(..(*(*(*list).next).next).next...).next).next. And the .next is basically a no-op when it's the first member of the structure, so we can imagine this as ***..***ptr.
There is really no limit on this because the links can be traversed with a loop rather than a giant expression like this, and moreover, the structure can easily be made circular.
Thus, in other words, linked lists may be the ultimate example of adding another level of indirection to solve a problem, since you're doing it dynamically with every push operation. :)
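To make this concrete, here is a minimal sketch (my own illustration, with made-up node values) that follows an arbitrary number of links with a loop, one dereference per iteration:

#include <cstdio>

struct list { struct list *next; int value; };

int main()
{
    // Build a three-node chain: a -> b -> c -> end.
    struct list c = { nullptr, 3 };
    struct list b = { &c, 2 };
    struct list a = { &b, 1 };
    // Each iteration applies one more level of indirection: *, **, ***, ...
    for (struct list *p = &a; p != nullptr; p = p->next)
        std::printf("%d\n", p->value);
    return 0;
}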
Theoretically:
You can have as many levels of indirections as you want.
Practically:
Of course, nothing that consumes memory can be unlimited; there will be limitations due to the resources available in the host environment. So practically there is a maximum limit to what an implementation can support, and the implementation shall document it appropriately. As in all such cases, the standard does not specify the maximum limit, but it does specify the lower limit.
Here's the reference:
C99 Standard 5.2.4.1 Translation limits:
— 12 pointer, array, and function declarators (in any combinations) modifying an
arithmetic, structure, union, or void type in a declaration.
This specifies the lower limit that every implementation must support. Note that in a footnote the standard further says:
18) Implementations should avoid imposing fixed translation limits whenever possible.
As people have said, no limit "in theory". However, out of interest I ran this with g++ 4.1.2, and it worked with size up to 20,000. Compile was pretty slow though, so I didn't try higher. So I'd guess g++ doesn't impose any limit either. (Try setting size = 10 and looking in ptr.cpp if it's not immediately obvious.)
g++ create.cpp -o create ; ./create > ptr.cpp ; g++ ptr.cpp -o ptr ; ./ptr
create.cpp
#include <iostream>

int main()
{
    const int size = 200;
    std::cout << "#include <iostream>\n\n";
    std::cout << "int main()\n{\n";
    std::cout << "    int i0 = " << size << ";\n";
    // Declare i1, i2, ... with one more level of indirection each.
    for (int i = 1; i < size; ++i)
    {
        std::cout << "    int ";
        for (int j = 0; j < i; ++j) std::cout << "*";
        std::cout << " i" << i << " = &i" << i - 1 << ";\n";
    }
    // Dereference all the way back down and print the original value.
    std::cout << "    std::cout << ";
    for (int i = 1; i < size; ++i) std::cout << "*";
    std::cout << "i" << size - 1 << " << \"\\n\";\n";
    std::cout << "    return 0;\n}\n";
    return 0;
}
Sounds fun to check.
With Visual Studio 2010 (on Windows 7), you can have 1011 levels before getting this error:
fatal error C1026: parser stack overflow, program too complex
With gcc (on Ubuntu), 100,000+ stars without a crash! I guess the hardware is the limit here.
(Tested with just a variable declaration.)
There is no limit; see the example at Pointers :: C Interview Questions and Answers.
The answer depends on what you mean by "levels of pointers." If you mean "How many levels of indirection can you have in a single declaration?" the answer is "At least 12."
int i = 0;
int *ip01 = & i;
int **ip02 = & ip01;
int ***ip03 = & ip02;
int ****ip04 = & ip03;
int *****ip05 = & ip04;
int ******ip06 = & ip05;
int *******ip07 = & ip06;
int ********ip08 = & ip07;
int *********ip09 = & ip08;
int **********ip10 = & ip09;
int ***********ip11 = & ip10;
int ************ip12 = & ip11;
************ip12 = 1; /* i = 1 */
If you mean "How many levels of pointer can you use before the program gets hard to read," that's a matter of taste, but there is a limit. Having two levels of indirection (a pointer to a pointer to something) is common. Any more than that gets a bit harder to think about easily; don't do it unless the alternative would be worse.
If you mean "How many levels of pointer indirection can you have at runtime," there's no limit. This point is particularly important for circular lists, in which each node points to the next. Your program can follow the pointers forever.
It's actually even funnier with pointers to functions.
#include <cstdio>

typedef void (*FuncType)();

static void Print() { std::printf("%s", "Hello, World!\n"); }

int main() {
    FuncType const ft = &Print;
    ft();
    (*ft)();
    (**ft)();
    /* ... */
}
As illustrated, this gives:
Hello, World!
Hello, World!
Hello, World!
And it does not involve any runtime overhead, so you can probably stack them as much as you want... until your compiler chokes on the file.
There is no limit. A pointer is a chunk of memory whose contents are an address.
As you said
int a = 10;
int *p = &a;
A pointer to a pointer is also a variable which contains an address of another pointer.
int **q = &p;
Here q is a pointer to a pointer, holding the address of p, which in turn holds the address of a.
There is nothing particularly special about a pointer to a pointer, so there is no limit on a chain of pointers each holding the address of another pointer.
That is,
int **************************************************************************z;
is allowed.
Every C++ developer should have heard of the (in)famous Three Star Programmer.
And there really seems to be some magic "pointer barrier" that has to be camouflaged.
Quote from C2:
Three Star Programmer
A rating system for C-programmers. The more indirect your pointers are (i.e. the more "*" before your variables), the higher your reputation will be. No-star C-programmers are virtually non-existent, as virtually all non-trivial programs require use of pointers. Most are one-star programmers. In the old times (well, I'm young, so these look like old times to me at least), one would occasionally find a piece of code done by a three-star programmer and shiver with awe.
Some people even claimed they'd seen three-star code with function pointers involved, on more than one level of indirection. Sounded as real as UFOs to me.
Note that there are two possible questions here: how many levels of pointer indirection we can achieve in a C type, and how many levels of pointer indirection we can stuff into a single declarator.
The C standard allows a maximum to be imposed on the latter (and gives a minimum value, 12, for that maximum). But the former can be grown without bound via multiple typedef declarations:
typedef int *type0;
typedef type0 *type1;
typedef type1 *type2; /* etc */
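For illustration, using these typedefs never needs more than one * per declarator, yet the resulting type keeps gaining levels (a sketch building on the typedefs above, with a made-up value):

int i = 42;
type0 p0 = &i;   // int*
type1 p1 = &p0;  // int**
type2 p2 = &p1;  // int***
// ***p2 == 42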
So ultimately, this is an implementation issue connected to how big or complex a C program can be before it is rejected, which is very compiler specific.
I'd like to point out that producing a type with an arbitrary number of *'s is something that can happen with template metaprogramming. I forget what I was doing exactly, but it was suggested that I could produce new distinct types that have some kind of meta maneuvering between them by using recursive T* types.
Template metaprogramming is a slow descent into madness, so it is not necessary to make excuses when generating a type with several thousand levels of indirection. It's just a handy way to map Peano integers, for example, onto template expansion as a functional language.
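For instance, a minimal sketch of the recursive T* idea (my own illustration, not the code referred to above):

#include <type_traits>

// AddPointers<T, N>::type is T with N levels of pointer indirection.
template <typename T, unsigned N>
struct AddPointers {
    using type = typename AddPointers<T*, N - 1>::type;
};

template <typename T>
struct AddPointers<T, 0> {
    using type = T;
};

static_assert(std::is_same<AddPointers<int, 3>::type, int***>::value,
              "a three-star type, generated at compile time");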
Rule 17.5 of the 2004 MISRA C standard prohibits more than 2 levels of pointer indirection.
There is no hard limit as such, but practical limits exist. Pointers are variables, and local variables are usually stored on the stack, not the heap. The stack is usually small (though its size can be changed at link time). So let's say you have a 4 MB stack, which is a fairly normal size, and let's say each pointer is 4 bytes (pointer sizes differ depending on architecture, target, and compiler settings).
In that case 4 MB / 4 B = 1,048,576, so the possible maximum would be about a million pointers in a chain, though we shouldn't ignore the fact that other things live on the stack too.
Some compilers may also impose a maximum length on a pointer chain, but the main limit is the stack size. So if you could link with an infinite stack, on a machine with infinite memory, running an OS that could handle that memory, you would have an unlimited pointer chain.
If you use int *ptr = new int; and put your pointers on the heap instead, which is the less usual way, the limit would be the heap size, not the stack.
EDIT: Just realized that infinity / 2 = infinity. If the machine has more memory, the pointer size increases. So if memory is infinite and the size of a pointer is infinite, that's bad news... :)
It depends on where you store the pointers. If they are on the stack, the limit is quite low. If you store them on the heap, the limit is much, much higher.
Look at this program:
#include <iostream>

const int CBlockSize = 1048576;

int main()
{
    int number = 0;
    int** ptr = new int*[CBlockSize];
    ptr[0] = &number;
    // Each slot points at the previous slot, forming a million-link chain.
    for (int i = 1; i < CBlockSize; ++i)
        ptr[i] = reinterpret_cast<int*>(&ptr[i - 1]);
    // Walk the chain backwards, printing each pointer and what it points to.
    for (int i = CBlockSize - 1; i >= 0; --i)
        std::cout << i << " " << static_cast<void*>(ptr[i]) << "->" << *ptr[i] << std::endl;
    return 0;
}
It creates 1M pointers and then shows what points to what; it is easy to see that the chain leads back to the first variable, number.
BTW, it uses 92K of RAM, so just imagine how deep you can go.
I have been struggling to find an explanation for an error I get in the following code:
#include <stdlib.h>

int main() {
    int m = 65536;
    int n = 65536;
    float *a;
    a = (float *)malloc(m*n*sizeof(float));
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            a[i*n + j] = 0;
        }
    }
    return 0;
}
Why do I get an "Access Violation" error when executing this program?
The memory allocation is successful; the problem is in the nested for loops at some iteration count. I tried with smaller values of m and n and the program works.
Does this mean I ran out of memory?
The problem is that m*n*sizeof(float) is likely an overflow, resulting in a relatively small value. Thus the malloc works, but it does not allocate as much memory as you're expecting and so you run off the end of the buffer.
Specifically, if your ints are 32 bits wide (which is common), then 65536 * 65536 is already an overflow, because you would need at least 33 bits to represent it. Signed integer overflow in C++ (and in C) results in undefined behavior, but a common result is that the most significant bits are lopped off, and you're left with the lower ones. In your case, that gives 0. That's then multiplied by sizeof(float), but zero times anything is still zero.
So you've tried to allocate 0 bytes. It turns out that malloc will let you do that, and it will give back a valid pointer rather than a null pointer (which is what you'd get if the allocation failed). (See Edit below.)
So you have a valid pointer, but it's not valid to dereference it. The fact that you are able to dereference it at all is a side effect of the implementation: in order to generate a unique address that doesn't get reused (which is what malloc is required to do when you ask for 0 bytes), malloc probably allocated a small-but-non-zero number of bytes. When you try to reference far enough beyond those, you'll typically get an access violation.
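For illustration, one way to avoid the trap is to do the size math in a 64-bit unsigned type and check it before calling malloc. This is just a sketch; safe_alloc is my own name, not part of any library:

#include <cstdlib>
#include <cstdint>

float *safe_alloc(int m, int n)
{
    // Multiply in 64 bits so 65536 * 65536 cannot wrap to 0.
    uint64_t bytes = uint64_t(m) * uint64_t(n) * sizeof(float);
    if (bytes > SIZE_MAX)   // would not fit in this platform's size_t
        return nullptr;
    return static_cast<float*>(std::malloc(static_cast<size_t>(bytes)));
}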
EDIT:
It turns out that what malloc does when requesting 0 bytes may depend on whether you're using C or C++. In the old days, the C standard required a malloc of 0 bytes to return a unique pointer as a way of generating "special" pointer values. In modern C++, a malloc of 0 bytes is undefined (see Footnote 35 in Section 3.7.4.1 of the C++11 standard). I hadn't realized malloc's API had changed in this way when I originally wrote the answer. (I love it when a newbie question causes me to learn something new.) VC++2013 appears to preserve the older behavior (returning a unique pointer for an allocation of 0 bytes), even when compiling for C++.
You are a victim of two problems.
First, the size calculation:
As some people have pointed out, you are exceeding the range of size_t. You can verify the size that you are trying to allocate with this code:
cout << "Max size_t is: " << SIZE_MAX<<endl;
cout << "Max int is : " << INT_MAX<<endl;
long long lsz = static_cast<long long>(m)*n*sizeof(float); // long long to see theoretical result
size_t sz = m*n*sizeof(float); // real result with overflow as will be used by malloc
cout << "Expected size: " << lsz << endl;
cout << "Requested size_t:" << sz << endl;
You'll be surprised, but with MSVC13 you are asking for 0 bytes because of the overflow (!!). You might get another number with a different compiler (resulting in a smaller than expected size).
Second, malloc() might return a problematic pointer:
The call to malloc() can appear successful because it does not return nullptr, but the allocated memory may be smaller than expected. Even requesting 0 bytes can appear to succeed, as documented here: if size is zero, the return value depends on the particular library implementation (it may or may not be a null pointer), but the returned pointer shall not be dereferenced.
float *a = reinterpret_cast<float*>(malloc(m * n * sizeof(float))); // prefer casts in future
if (a == nullptr)
    cout << "Big trouble!"; // will not be called
Alternatives
If you absolutely want to use C, prefer calloc(); you'll at least get a null pointer back, because the function notices that the sizes would overflow:
float *b = reinterpret_cast<float*>(calloc(m, n * sizeof(float)));
But a better approach would be to use operator new[]:
float *c = new (std::nothrow) float[static_cast<size_t>(m) * n]; // the C++ way; cast so the size math doesn't overflow int
if (c == nullptr)
    cout << "new Big trouble!";
else {
    cout << "\nnew Array: " << c << endl;
    c[static_cast<size_t>(n) * m - 1] = 3.0f; // check that the last element is accessible
}
Edit:
It's also subject to the size_t limit.
Edit 2:
new[] throws bad_alloc exceptions when there is a problem, or even bad_array_new_length. You could try/catch these if you want. But if you prefer to get nullptr when there's not enough memory, you have to use (std::nothrow) as pointed out in the comments by Beat.
The best approach for your case, if you really need this huge number of floats, would be to go for vectors. They are also subject to the size_t limitation, but since you in fact have a 2D array, you could use vectors of vectors (if you have enough memory):
vector <vector<float>> v (n, vector<float>(m));
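A quick usage sketch: with the question's sizes this needs roughly 16 GB and will throw std::bad_alloc if that much memory isn't available, but the index arithmetic can no longer overflow:

#include <vector>
using std::vector;

int main()
{
    const int m = 65536, n = 65536;
    vector<vector<float>> v(n, vector<float>(m)); // elements zero-initialized
    v[n - 1][m - 1] = 3.0f;                       // each index stays well within int range
    return 0;
}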
I'm trying to write a simple program to show how variables can be manipulated indirectly on the stack. In the code below everything works as planned: even though the address of a is passed in, I can indirectly change the value of c. However, if I delete the last line of code (or any of the last three), then this no longer works. Do those lines somehow force the compiler to put my 3 int variables sequentially onto the stack? My expectation was that that would always be the case.
#include <iostream>
using namespace std;

void someFunction(int* intPtr)
{
    // write some code to break main's critical output
    int* cptr = intPtr - 2;
    *cptr = 0;
}

int main()
{
    int a = 1;
    int b = 2;
    int c = 3;
    someFunction(&a);
    cout << a << endl;
    cout << b << endl;
    cout << "Critical value is (must be 3): " << c << endl;
    cout << &a << endl;
    cout << &b << endl;
    cout << &c << endl; // when commented out, critical value is 3
}
Your code causes undefined behaviour. You can't pass a pointer to an int and then just subtract an arbitrary amount from it and expect it to point to something meaningful. The compiler can put a, b, and c wherever it likes in whatever order it likes. There is no guaranteed relationship of any kind between them, so you can't assume someFunction will do anything meaningful.
The compiler can place those wherever and in whatever order it likes in the current stack frame; it may even optimize them out if not used. Just make the compiler do what you want by using an array, where pointer arithmetic is safe:
int main()
{
    int myVars[3] = {1, 2, 3};
    // In C++, one could use references for convenience,
    // which should be optimized/eliminated pretty well.
    // But I would never ever use them for pointer arithmetic.
    int& a = myVars[0];
    int& b = myVars[1];
    int& c = myVars[2];
}
What you do is undefined behaviour, so anything may happen. But what is probably going on is that when you don't take the address of c (by commenting out cout << &c << endl;), the compiler may optimize away the variable c. It then substitutes cout << c with cout << 3.
As many have answered, your code is wrong since it triggers undefined behavior; see also this answer to a similar question.
In your original code, the optimizing compiler could place a, b and c in registers, overlap their stack locations, etc.
There are however legitimate reasons for wanting to know what are the location of local variables on the stack (precise garbage collection, introspection and reflection, ...).
The correct way would then be to pack these variables in a struct (or a class) and to have some way to access that structure (for example, by linking such frames into a list).
So your code might start with
void fun(void)
{
    struct {
        int a;
        int b;
        int c;
    } _frame;
#define a _frame.a
#define b _frame.b
#define c _frame.c
    do_something_with(&_frame); // e.g. link it
You could also use array members (perhaps even flexible or zero-length arrays for housekeeping routines), and #define a _frame.v[0] etc...
Actually, a good optimizing compiler could optimize that nearly as well as your original code.
Probably, the type of the _frame might be outside of the fun function, and you'll generate housekeeping functions for inspecting, or garbage collecting, that _frame.
Don't forget to unlink the frame at the end of the routine. Making the frame an object with a proper constructor and destructor definitely helps: the constructor would link the frame, and the destructor would unlink it.
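As an illustration of that constructor/destructor idea, here is a sketch with hypothetical names (not the actual qish or MELT code):

// Each Frame links itself on construction and unlinks on scope exit,
// so housekeeping code can walk frame_top at any moment.
struct Frame;
static Frame *frame_top = nullptr;

struct Frame {
    Frame *prev;
    int a, b, c;   // the "local variables" of this frame
    Frame() : prev(frame_top), a(0), b(0), c(0) { frame_top = this; }
    ~Frame() { frame_top = prev; }   // unlinks even on early return or exception
};

void fun()
{
    Frame f;   // linked for the duration of fun()
    f.a = 1; f.b = 2; f.c = 3;
    // inspect_frames(frame_top);   // hypothetical housekeeping routine
}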
For two examples where such techniques are used (both because a precise garbage collector is needed), see my qish garbage collector and the (generated C++) code of MELT (a domain specific language to extend GCC). See also the (generated) C code of Chicken Scheme or Ocaml runtime conventions (and its <caml/memory.h> header).
In practice, such an approach is much more welcome for generated C or C++ code (precisely because you will also generate the housekeeping code). If writing them manually, consider at least fancy macros (and templates) to help you. See e.g. gcc/melt-runtime.h
I actually believe that this is a deficiency in C. There should be some language features (and compiler implementations) to introspect the stack and to (portably) backtrace on it.
bool example1()
{
    long a;
    a = 0;
    cout << a;
    a = 1;
    cout << a;
    a = 2;
    cout << a;
    // and again... again until
    a = 1000000;
    cout << a + 1;
    return true;
}

bool example2()
{
    long* a = new long; // sorry for the mistake
    *a = 0;
    cout << *a;
    *a = 1;
    cout << *a;
    *a = 2;
    cout << *a;
    // and again... again until
    *a = 1000000;
    cout << *a + 1;
    return true;
}
Note that I do not delete a in example2(); these are just a newbie's questions:
1. While the two functions are executing, which one uses more memory?
2. After the functions return, which one makes the whole program use more memory?
Thanks for your help!
UPDATE: just replace long* a; with long* a = new long;
UPDATE 2: to avoid the case where we are not doing anything with a, I cout the value each time.
Original answer
It depends and there will be no difference, at the same time.
The first program is going to consume sizeof(long) bytes on the stack, and the second is going to consume sizeof(long*). Typically long* will be at least as big as a long, so you could say that the second program might use more memory (depends on the compiler and architecture).
On the other hand, stack memory is allocated with OS memory page granularity (4KB would be a good estimate), so both programs are almost guaranteed to use the same number of memory pages for the stack. In this sense, from the viewpoint of someone observing the system, memory usage is going to be identical.
But it gets better: the compiler is free to decide (depending on settings) that you are not really doing anything with these local variables, so it might decide to simply not allocate any memory at all in both cases.
And finally you have to answer the "what does the pointer point to" question (as others have said, the way the program is currently written it will almost surely crash due to accessing invalid memory when it runs).
Assuming that it does not (let's say the pointer is initialized to a valid memory address), would you count that memory as being "used"?
Update (long* a = new long edit):
Now we know that the pointer will be valid, and heap memory will be allocated for a long (but not released!). Stack allocation is the same as before, but now example2 will also use at least sizeof(long) bytes on the heap as well (in all likelihood it will use even more, but you can't tell how much because that depends on the heap allocator in use, which in turn depends on compiler settings etc).
Now from the viewpoint of someone observing the system, it is still unlikely that the two programs will exhibit different memory footprints (because the heap allocator will most likely satisfy the request for the new long in example2 from memory in a page that it has already received from the OS), but there will certainly be less free memory available in the address space of the process. So in this sense, example2 would use more memory. How much more? Depends on the overhead of the allocation which is unknown as discussed previously.
Finally, since example2 does not release the heap memory before it exits (i.e. there is a memory leak), it will continue using heap memory even after it returns while example1 will not.
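For completeness, here is a leak-free variant of example2, abbreviated to a sketch (example2_fixed is my name for it):

#include <iostream>
using std::cout;

bool example2_fixed()
{
    long* a = new long;
    *a = 0;
    cout << *a;
    *a = 1000000;
    cout << *a + 1;
    delete a;   // release the heap memory before returning
    return true;
}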
There is only one way to know, which is by measuring. Since you never actually use any of the values you assign, a compiler could, under the "as-if rule" simply optimize both functions down to:
bool example1()
{
    return true;
}

bool example2()
{
    return true;
}
That would be a perfectly valid interpretation of your code under the rules of C++. It's up to you to compile and measure it to see what actually happens.
Sigh, an edit to the question made a difference to the above. The main point still stands: you can't know unless you measure it. Now both of the functions can be optimized to:
bool example1()
{
    cout << 0;
    cout << 1;
    cout << 2;
    // and again... again until
    cout << 1000001;
    return true;
}

bool example2()
{
    cout << 0;
    cout << 1;
    cout << 2;
    // and again... again until
    cout << 1000001;
    return true;
}
example2() never allocates memory for the value referenced by pointer a. If it did, it would take slightly more memory, because it would need space for the long as well as space for the pointer to it.
Also, no matter how many times you assign a value to a, no more memory is used.
Example 2 has the problem of not allocating memory for the pointer to point to. Pointer a initially has an unknown value, which makes it point somewhere arbitrary in memory; assigning values through this pointer corrupts the contents of that location.
Both examples use the same amount of memory (which is 4 bytes, on a typical 32-bit system).
Some peers and I are working on a game (Rigs of Rods) and are trying to integrate OpenCL for physics calculation. At the same time we are trying to do some much-needed cleanup of our data structures; I guess I should say we are trying to clean up our data structures while being mindful of OpenCL requirements.
One of the problems with using OpenCL is the inability to use pointers, as the memory space is different. From what little I know of OpenCL, it copies all the data onto the GPU and then performs the calculations; pointer values would be copied, but the addresses would not correspond to the expected addresses.
The data in question is centralized in an array; when objects need that data, they use pointers to the object they need, or store an array index.
One solution to accommodate OpenCL is to use an array index instead of pointers. This leads to tight coupling that could cause headaches later on. As an alternative, I had the idea of calculating the array index from the address of the start of the array and the address of the current element. This of course only works with a contiguous array.
I wrote a sample app to test this and it worked just fine; some people verified it on different platforms as well.
#include <iostream>

typedef struct beam_t
{
    unsigned int item;
} beam_t;

#define GLOBAL_STATIC_ASSERT(expr, msg) \
    extern char STATIC_ASSERTION__##msg[1]; \
    extern char STATIC_ASSERTION__##msg[(expr)?1:2]

#ifdef __amd64
typedef unsigned long pointer_int;
#else
typedef unsigned int pointer_int;
#endif

GLOBAL_STATIC_ASSERT(sizeof(pointer_int) == sizeof(pointer_int*), integer_size);

#define MAX_BEAM 5

int main()
{
    beam_t beams[MAX_BEAM];
    beam_t* beam_start = &beams[0];
    beam_t* beam_ptr = NULL;
    std::cout << "beams: " << &beams << "\n";
    for (pointer_int i = 0; i < MAX_BEAM; ++i)
    {
        beam_ptr = &beams[i];
        pointer_int diff = ((pointer_int)beam_ptr - (pointer_int)beam_start);
        std::cout << "beams[" << i << "]: " << beam_ptr
                  << "\t calculated index: " << diff / sizeof(beam_t)
                  << "\n";
    }
    return 0;
}
I'm concerned that this is more of a kludge than a bona fide solution. I'm aware that this would not work on non-contiguous memory.
Basically, my questions are:
What would be the pitfalls of using this approach on known contiguous memory?
How would you be able to tell the memory is contiguous?
What approaches have people used when dealing with this type of issue?
Thanks, and my apologies if the formatting is off; this is my first time posting a question.
This should give you the index of pointer relative to base:
pointer - base
Yes, it's that easy. =]
Use ptrdiff_t to store the result portably.
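Applied to the question's beam_t, it looks like this (a sketch): the subtraction already yields an element count, so no casts and no division by sizeof are needed.

#include <cstddef>

struct beam_t { unsigned int item; }; // as in the question

std::ptrdiff_t index_of(const beam_t* base, const beam_t* p)
{
    return p - base; // e.g. &beams[3] - &beams[0] == 3
}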
Although simple subtraction of pointers works, it's advisable to use std::distance. This will also work for iterator types that aren't pointers, and can be overloaded for custom types, too. The result, for pointers, will be a ptrdiff_t.
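The same index_of sketch using std::distance (again assuming the question's beam_t):

#include <iterator>
#include <cstddef>

struct beam_t { unsigned int item; }; // as in the question

std::ptrdiff_t index_of(const beam_t* base, const beam_t* p)
{
    return std::distance(base, p); // identical to p - base for pointers
}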
#define ARRAY_INDEX_FROM_ADDR(base, addr, type) \
    (((uintptr_t)(addr) - (uintptr_t)(base)) / sizeof(type))
If not using C99, use unsigned long long instead of uintptr_t.