I am a novice programmer and I was experimenting with pointers to strengthen my basics for DSA. The following is my code:
#include <iostream>
using namespace std;

int main() {
    int AnArray[20];
    int* plocation6, * plocation0;
    plocation6 = &AnArray[6];
    plocation0 = &AnArray[0];
    cout << (int)plocation6 << endl << (int)plocation0 << endl;
    cout << "Difference " << plocation6 - plocation0;
}
I expected the value of Difference to be 24, since the pointer locations differ by 18 in hexadecimal, which is 24 in decimal. But the answer comes out to be 6, whereas if I convert them using (int) and then do the subtraction I get 24. Why is that? Please explain why the result is 6.
The difference between two pointers isn't measured in bytes; it's measured in elements. So you're seeing the number of bytes (24) divided by sizeof(int) (4), which is 6.
To build on top of the previous answers:
It is true, since an int takes 4 bytes, that the two pointers are 24 bytes apart.
The reason you are getting 6 is that the - operator (for pointers) is defined as the difference between the two pointers' addresses divided by the size of the type the pointers point to.
This is a similar concept to operator overloading, where an operator is defined to do something other than the typical operation.
Note: I don't think this is technically operator overloading, but understanding operator overloading will help you understand this concept.
Please explain why the result is 6.
The element at index 0 and the element at index 6 are 6 indices apart. That's where the 6 comes from when you subtract one pointer from the other.
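A minimal sketch of the distinction (the 24 assumes a typical 4-byte int): subtracting the int* pointers directly yields the element count, while casting to char* first yields the byte count.

#include <cstddef>
#include <iostream>

int main() {
    int AnArray[20];
    int* p6 = &AnArray[6];
    int* p0 = &AnArray[0];

    // Pointer subtraction counts elements, not bytes.
    std::ptrdiff_t elements = p6 - p0;                  // 6
    std::ptrdiff_t bytes = reinterpret_cast<char*>(p6)
                         - reinterpret_cast<char*>(p0); // 6 * sizeof(int), typically 24

    std::cout << "elements: " << elements << ", bytes: " << bytes << '\n';
}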
I am learning C++, and read that when an array is passed into a function it decays into a pointer. I wanted to play around with this and wrote the following function:
#include <iostream>
using namespace std;

void size_print(int a[]) {
    cout << sizeof(a) / sizeof(a[0]) << endl;
    cout << "a ->: " << sizeof(a) << endl;
    cout << "a[0] ->" << sizeof(a[0]) << endl;
}
I tried inputting an array with three elements, let's say
int test_array[3] = {1, 2, 3};
With this input, I was expecting this function to print 1, as I thought a would be an integer pointer (4 bytes) and a[0] would also be 4 bytes. However, to my surprise the result is 2 and sizeof(a) = 8.
I cannot figure out why a takes up 8 bytes, but a[0] takes up 4. Shouldn't they be the same?
Shouldn't they be the same?
No. a is (meant to be) an array, but because it's a function parameter it has been adjusted to a pointer to the first element, and as such it has the size of a pointer. Your machine seems to use 64-bit addresses, and thus each address (and hence each pointer) is 64 bits (8 bytes) long.
a[0], on the other hand, has the type of an element of that array (an int), and that type is 32 bits (4 bytes) on your machine.
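A tiny sketch of the two sizes involved (the 8 and 4 assume a typical 64-bit platform with 4-byte int):

#include <iostream>

int main() {
    int x = 0;
    int* p = &x;
    std::cout << sizeof(p) << '\n';  // size of a pointer, typically 8 on 64-bit systems
    std::cout << sizeof(*p) << '\n'; // size of an int, typically 4
}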
A pointer is just the address of the memory where the variable starts. On your machine that address is 8 bytes.
a[0] is the variable in the first slot of the array. It could technically be anything of any size. When you take a pointer to it, the pointer just contains a memory address (an integer) without knowing or caring what that address contains. (This is just to illustrate the concept; in the example in the question, a[] is an integer array, but the same logic works with anything.)
Note that the size of a pointer differs between architectures; this is where 32-bit, 64-bit, etc. comes in. It can also depend on the compiler, but that is beyond the question.
The size of a pointer depends on the system and implementation. Yours uses 64 bits (8 bytes).
a[0] is an int, and the standard only specifies the minimum range it has to represent. It can be anything from 2 bytes up. Most modern implementations use 32-bit (4-byte) integers.
sizeof(a)/sizeof(a[0]) will not work on a function parameter. The array argument decays to a pointer, so this division only tells you how many times larger a pointer is than an int, not how many elements the array referenced by the pointer has.
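A minimal sketch of two common ways to keep the element count (the function names are illustrative, not from the original post): pass the size explicitly, or take the array by reference so the template deduces the length.

#include <cstddef>
#include <iostream>

// The parameter decays to int*, so the length must be passed separately.
void size_print(const int* a, std::size_t n) {
    std::cout << "elements: " << n << ", first: " << a[0] << '\n';
}

// Taking the array by reference preserves the length in the type.
template <std::size_t N>
void size_print_ref(const int (&a)[N]) {
    std::cout << sizeof(a) / sizeof(a[0]) << '\n'; // equals N: a is a real array here
}

int main() {
    int test_array[3] = {1, 2, 3};
    size_print(test_array, sizeof(test_array) / sizeof(test_array[0])); // 3
    size_print_ref(test_array);                                         // 3
}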
I wrote a simple program in order to understand the functioning of the function of the standard c++ library sizeof().
It follows:
#include <cstdio>
#include <iostream>

int main() {
    const char* array[] = {
        "1234",
        "5678"
    };
    std::cout << sizeof(array) << std::endl;     //16
    std::cout << sizeof(array[0]) << std::endl;  //8
    std::cout << printf("%lu\n", sizeof(char));  //1
    std::cout << printf("%lu\n", sizeof(int));   //24
    std::cout << printf("%lu\n", sizeof(float)); //24
    std::cout << printf("%lu", sizeof(double));  //281
}
From the reported output it is possible to see that a character has a size of 1 byte on my OS, as expected. But I do not understand why the size of array[0] is 8, as it contains 4 characters plus at least 2 more characters for the end sequence "\n" that a string contains. Thus, I supposed that the number of bytes occupied by the first element of the array should be 6 and not 8.
Moreover, if I increase or decrease the number of characters contained in the first element of the array, its size does not change.
Clearly, I am wrong. If somebody could explain this to me, I would really appreciate it.
Thanks,
I wrote a simple program in order to understand the functioning of the function of the standard c++ library sizeof().
Wrong terminology. Please read n3337 (a C++ standard) and the wikipage on sizeof.
sizeof is a compile-time operator, not a function. If v is some variable, sizeof(v) only depends on the type of v and never on its value (in contrast, for most functions f, the value of f(v) depends upon the value of v).
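A tiny sketch of that point: the operand of sizeof is never evaluated, so only its type matters (the 4 assumes a typical 32-bit int).

#include <iostream>

int main() {
    int n = 0;
    std::cout << sizeof(n) << '\n';   // typically 4: depends only on the type of n
    std::cout << sizeof(n++) << '\n'; // also 4: the operand is not evaluated...
    std::cout << n << '\n';           // ...so n is still 0
}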
And a good way to understand something about C++ is to refer to documents like standards or good web pages about it.
If somebody could explain this to me
Yes. Read a good book about C++. This one is written by the main designer of C++. Try to understand more and better the (difficult) semantics of C++. You could also study the source code of existing open source C++ compilers such as GCC or Clang/LLVM (thus effectively using one of your free software freedoms).
BTW, with a lot of pain you might find C++ implementations with sizeof(int) being 1 (e.g. for some DSP processors). On cheap 32-bit ARM processors (those in cheap mobile phones today, for instance; then you would probably use some cross-compiler) or on some Raspberry Pis (or perhaps some mainframes) you could have sizeof(array[0]) or sizeof(void*) being 4 even in 2019.
Let's break down the meaning of the somewhat confusing output values you see!
First, the sizeof(array) and sizeof(array[0]) (where your output method is fine). You have declared/defined array as an array of two char* values, each of which is a pointer. The size of a pointer on your system is 8 bytes, so the total size of array is 8 * 2 = 16. For array[0]: this is a single pointer, so its size is simply 8 bytes.
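A small sketch of the distinction (assuming 8-byte pointers): sizeof reports the size of the pointer or array object itself, while strlen walks the pointed-to characters.

#include <cstring>
#include <iostream>

int main() {
    const char* array[] = { "1234", "5678" };

    std::cout << sizeof(array) << '\n';         // 2 pointers * 8 bytes = 16
    std::cout << sizeof(array[0]) << '\n';      // one pointer = 8
    std::cout << std::strlen(array[0]) << '\n'; // 4 characters before the terminating '\0'
    std::cout << sizeof("1234") << '\n';        // 5: the string literal itself includes the '\0'
}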
Does all this make sense so far? If so, then let's look at the second part of your code …
The values of sizeof(char), sizeof(int), sizeof(float) and sizeof(double) on your system are, in order, 1, 4, 4, and 8, and these values are actually being output! However, because you are also outputting the return value of printf(), which is the number of characters it has written, you are getting the extra values "2", "2", "2" and "1" inserted (in a confusing order) for the four calls (the last one has no newline, so it's only one character; all the others are one digit plus a newline, i.e. 2 characters).
Change the second part of your code as follows, to get the correct outputs:
printf("%zu\n", sizeof(char)); //1
printf("%zu\n", sizeof(int)); //4
printf("%zu\n", sizeof(float)); //4
printf("%zu\n", sizeof(double)); //8
Apologies, I got frustrated and posted the question from my mobile without proper details.
Consider the following C++ code:
#include <iostream>
using namespace std;

int main() {
    int arr[2];
    arr[0] = 10;
    arr[1] = 20;
    // cout << &arr;
    for (int i = 0; i < 2; i++)
    {
        cout << &arr + i << "\t\t" << endl;
    }
    cout << sizeof(arr);
}
The cout in the for loop prints the following:
0x7ffeefbff580 0x7ffeefbff588
so the second address is 8 bytes farther than the first.
My question is: why is it 8 bytes farther and not 4 bytes, if sizeof(int) is 4 on my machine?
Now that you gave us the code we can answer your question.
So the confusing piece is this: &arr + i. This does not do what you think it does. Remember that & takes precedence over +, so you take the address of arr and move it forward i times.
Pointer arithmetic works in such a way that &x + 1 moves the pointer forward by sizeof(x). So in your case, what is sizeof(arr)? It is 8, because arr is a 2-element array of ints (I'm assuming ints are 4 bytes). And so &arr + 1 actually moves the pointer 8 bytes forward, which is exactly what you experience. You don't ask for the next int, you ask for the next array. I encourage you to play around; for example, define arr as int[3] (which is of size 12) and see how the pointer moves 12 bytes forward.
So the first solution is to write arr + i without the &. We can apply pointer arithmetic to an array, in which case it decays to a pointer of type int*. Since int has size 4, arr + 1 correctly points to memory 4 bytes forward.
But what I suggest is to stay away from pointer arithmetic and write &arr[i] instead. This does the same thing but IMO is less error-prone, less confusing, and tells us more about the intent.
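A minimal sketch contrasting the three forms (the 8-byte step assumes a typical 4-byte int; the printed addresses will differ on every run):

#include <iostream>

int main() {
    int arr[2] = {10, 20};

    // &arr has type int(*)[2]; adding 1 steps over the whole array (typically 8 bytes).
    std::cout << &arr << " " << &arr + 1 << '\n';

    // arr decays to int*; adding 1 steps over a single int (typically 4 bytes).
    std::cout << arr << " " << arr + 1 << '\n';

    // &arr[i] is equivalent to arr + i but states the intent directly.
    std::cout << &arr[0] << " " << &arr[1] << '\n';
}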
int (*ptr)[3] = new int[1][3];
I understand that int (*ptr)[3] creates a pointer to a 3-element integer-holding array.
I understand that new int [1][3] dynamically allocates some memory of size 1 row x 3 col x 4 bytes (32-bit machine) = 12 bytes.
I also understand that ptr [0] = &ptr [0] in this case.
The total memory allocated here is 3 * 12 bytes. Why?
Why is the 3 on the LHS dependent on the 3 on the RHS? If we use 3 on the RHS, we have to use 3 on the LHS. I cannot use 2 or 4.
Maybe it's trivial logic, but I can't seem to find good literature on this.
First, the total memory allocated is not 3*12 bytes. It's 3*1*sizeof(int) + k bytes, where k is unspecified (but in most implementations will be 0 when allocating arrays of int).
Second, the two 3s must be equal because they are part of the type. On the left, the type is "pointer to array of 3 int". On the right, you are allocating an "array of 1 array of 3 int"; because of the semantics of array new, the type of the expression is "pointer to array of 3 int" (and any information concerning whether it was int[1][3] or int[2][3] or whatever has been lost). C++ uses static type checking (for the most part), so the compiler must know all parts of the type at compile time.
Given the information you have provided so far, I can only answer the second part of your question:
Why is the 3 on the LHS dependent on the 3 on the RHS?
int (*ptr)[3] // creates a pointer to an array of 3 ints
That means your variable ptr can only point to arrays of length three. When you write new int[1][3], you are essentially creating a 2-D array with one row of three ints; similarly, new int[2][3] would give you a 2-D array with two rows of three ints. That's why the two 3s depend on each other.
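A small sketch of the type relationship (the 12 in the comment assumes a typical 4-byte int):

#include <iostream>

int main() {
    // new int[1][3] yields an int(*)[3]: a pointer to the first (and only) row.
    int (*ptr)[3] = new int[1][3];

    // The row count may vary, but the row type int[3] is fixed by the pointer type:
    int (*more)[3] = new int[2][3];   // OK: still rows of 3 ints
    // int (*bad)[2] = new int[1][3]; // error: cannot convert int(*)[3] to int(*)[2]

    std::cout << sizeof(*ptr) << '\n'; // one row: 3 * sizeof(int), typically 12

    delete[] ptr;
    delete[] more;
}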
I encountered the following line in an OpenGL tutorial and I want to know what the *(int*) means and what its value is:
if ( *(int*)&(header[0x1E])!=0 )
Let's take this a step at a time:
header[0x1E]
header must be an array of some kind, and here we are getting the element at index 0x1E of that array.
&(header[0x1E])
We take the address of that element.
(int*)&(header[0x1E])
We cast that address to a pointer-to-int.
*(int*)&(header[0x1E])
We dereference that pointer-to-int, interpreting the sizeof(int) bytes of header starting at offset 0x1E as an int, and get the value found there.
if ( *(int*)&(header[0x1E])!=0 )
It compares that resulting value to 0 and if it isn't 0, executes whatever is in the body of the if statement.
Note that this is potentially very dangerous. Consider what would happen if header were declared as:
double header [0xFF];
...or as:
int header [5];
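A commonly suggested safer sketch (not from the tutorial; the helper name is made up) reads the bytes with memcpy instead of type-punning through a cast, which avoids the alignment and strict-aliasing problems:

#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical helper: read a 32-bit value from a byte buffer at a given offset.
std::int32_t read_i32(const unsigned char* buffer, std::size_t offset) {
    std::int32_t value;
    std::memcpy(&value, buffer + offset, sizeof(value)); // well-defined for any alignment
    return value;
}

// Usage, assuming header is an unsigned char array of at least 0x22 bytes:
// if (read_i32(header, 0x1E) != 0) { ... }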
It's truly a terrible piece of code, but what it's doing is:
&(header[0x1E])
takes the address of the (0x1E + 1)th element of array header, let's call it addr:
(int *)addr
C-style cast this address into a pointer to an int, let's call this pointer p:
*p
dereferences this memory location as an int.
Assuming header is an array of bytes, and that the original code has only been tested on Intel, it's equivalent to:
header[0x1E] + (header[0x1F] << 8) + (header[0x20] << 16) + (header[0x21] << 24);
However, besides the potential alignment issues the other posters mentioned, it has at least two more portability problems:
on a platform with 64-bit ints, it will make an int out of bytes 0x1E to 0x25 instead of the above; it will also be wrong on a platform with 16-bit ints, but I suppose those are too old to matter
on a big-endian platform the number will be wrong, because the bytes will be reversed and it will end up as:
(header[0x1E] << 24) + (header[0x1F] << 16) + (header[0x20] << 8) + header[0x21];
Also, if it's a BMP file header as rici assumed, the field is probably unsigned while the cast is to a signed int. In this case it doesn't matter, since the value is only compared to zero, but in other cases it might.
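A minimal endianness-independent sketch (again a hypothetical helper, assuming the field is stored as a little-endian 32-bit value in the file): assembling the value from individual bytes with shifts gives the same result on any host byte order and any int width of at least 32 bits.

#include <cstdint>

// Hypothetical helper: decode a little-endian 32-bit field from a byte buffer.
std::uint32_t read_le32(const unsigned char* p) {
    return  static_cast<std::uint32_t>(p[0])
         | (static_cast<std::uint32_t>(p[1]) << 8)
         | (static_cast<std::uint32_t>(p[2]) << 16)
         | (static_cast<std::uint32_t>(p[3]) << 24);
}

// Usage: if (read_le32(&header[0x1E]) != 0) { ... }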