I have this code I wrote that sets the array to 0:
int arr[4];
memset(arr, 0, sizeof (arr));
Very simple, but how does this code work without any errors even though sizeof(arr) = 16 (the array size of 4 * 4 bytes per int), while the size I used when I declared the array is 4? How can memset set 16 bits to zero when the array I passed as a parameter has a size of 4?
I used memset(arr, 0, sizeof(arr)/sizeof(*arr)); to get the real size of the array, and it accurately gives me 4, but then how does the code above work correctly?
memset sets 16 bytes (not bits) to 0. This is correct because the size of your array is 16 bytes, as you correctly stated (4 integers x 4 bytes per integer). sizeof knows the number of elements in your array and it knows the size of each element. As you can see in the docs, the third argument of memset takes the number of bytes, not the number of elements. http://www.cplusplus.com/reference/cstring/memset/
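If it helps to see the numbers side by side, here is a minimal sketch (assuming a 4-byte int, which is typical but not guaranteed) that prints the total byte count, the element size, and the element count:
#include <cstring>
#include <iostream>

int main() {
    int arr[4];
    std::memset(arr, 0, sizeof(arr));                  // third argument: byte count for the whole array
    std::cout << sizeof(arr) << '\n';                  // 16 with a 4-byte int (assumed here)
    std::cout << sizeof(arr[0]) << '\n';               // 4, the size of one element
    std::cout << sizeof(arr) / sizeof(arr[0]) << '\n'; // 4, the number of elements
}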
But be careful when using sizeof() on an array you receive as a function parameter declared as int x[] or int* x. For example, the following piece of code will not do what you expect:
void foo(int arr[]) {
auto s = sizeof(arr); // be careful! this won't do what you expect: arr has decayed to a pointer (int*), so this is the size of a pointer, not the size of the array itself
...
}
int a[10];
foo(a);
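If you need the real size inside the function, one common workaround (a sketch, not the only option; foo_by_ref is just an illustrative name) is to take the array by reference so its length stays part of the type, or to use std::array:
#include <array>
#include <cstddef>
#include <iostream>

// Taking the array by reference keeps its length as part of the type,
// so sizeof still sees the whole array inside the function.
template <std::size_t N>
void foo_by_ref(int (&arr)[N]) {
    std::cout << sizeof(arr) << '\n'; // N * sizeof(int), not the size of a pointer
    std::cout << N << '\n';           // element count, known at compile time
}

int main() {
    int a[10];
    foo_by_ref(a);            // prints 40 and 10 with a 4-byte int

    std::array<int, 10> b{};  // std::array never decays; b.size() is always 10
    std::cout << b.size() << '\n';
}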
The third parameter of memset is the number of bytes, which is 4*4 = 16 in your case.
Actually the first solution is the correct one.
The function memset takes as third parameter the number of bytes to set to zero.
num:
Number of bytes to be set to the value.
sizeof returns the number of bytes occupied by the expression.
In your case sizeof(arr) = 16, which is exactly the number of bytes requested by the memset function.
Your second solution:
memset(arr, 0, sizeof(arr)/sizeof(*arr)); // Note: sizeof(arr)/sizeof(*arr) == 16 / 4 == 4, an element count that memset will treat as 4 bytes
will set only the first 4 bytes to zero, that is, the first integer of the array. So that solution is wrong if your intent is to set every element of the array to zero.
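If you want to build the byte count from the element count, multiply back by the element size. A minimal sketch of both correct forms (assuming arr is a true array in scope, not a pointer):
#include <cstring>

int main() {
    int arr[4];
    std::memset(arr, 0, sizeof(arr));                                 // simplest: whole array, in bytes
    std::memset(arr, 0, (sizeof(arr) / sizeof(*arr)) * sizeof(*arr)); // element count * element size == 16 bytes again
}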
Related
When I declare an integer array of size 10 and check its size, it gives 40, but when I declare an integer pointer, allocate an array of size 10 with new, and check its size, it always gives 8. Why?
int A[10];
cout<<sizeof(A)<<endl; // it gives 40;
int *p;
p = new int[10];
cout<<sizeof(p)<<endl; // but it always gives 8;
sizeof(p) is always going to be sizeof(int*), which is 8 on your platform.
That does not change regardless of the number of objects you allocate using new [].
A is of type int [10] (an array of 10 ints). Its size is 10 * sizeof(int).
It makes sense that sizeof(A) is 40.
Pointers and arrays are different types. They can be used interchangeably in many situations, but it is important to know how they differ and where they behave differently. The sizeof operator is one of the places where they behave differently.
On a typical 64-bit machine, sizeof(*p) should be 4, while sizeof(p) should be 8.
Hopefully this example clears things up:
int p1[10];
cout<<sizeof(*p1)<<endl; // 4 size of the first element (int)
cout<<sizeof(p1)<<endl; // 40 size of array (int size * 10)
int *p2;
p2 = new int[10](); // value-initialize the elements to 0 (plain new int[10] would leave them indeterminate)
cout<<sizeof(*p2)<<endl; // 4 size of the first element (int)
cout<<sizeof(p2)<<endl; // 8 size of the pointer on 64-bit system
// shows how you're getting the first value by dereferencing
cout<<*p2<<endl; // 0
p2[0] = 100;
cout<<*p2<<endl; // 100
A pointer just holds the starting address of the array. sizeof(p) measures the pointer itself, and on your 64-bit system a pointer is 8 bytes, regardless of the type it points to.
Here is the problem program:
#include <stdio.h>
int main()
{
int apricot[2][3][5];
int (*r)[5]=apricot[0];
int *t=apricot[0][0];
printf("%p\n%p\n%p\n%p\n",r,r+1,t,t+1);
}
The output of it is:
# ./a.out
0xbfa44000
0xbfa44014
0xbfa44000
0xbfa44004
I think t's dimension's value should be 5 because t corresponds to the last dimension, and that seems to match (0xbfa44004 - 0xbfa44000 + 1 = 5).
But r's dimension's value is 0xbfa44014 - 0xbfa44000 + 1 = 21, and I think it should be 3*5 = 15, because 3 and 5 are the last two dimensions. So why is the difference 21?
r is a pointer to an array of 5 ints.
Assuming 1 int is 4 bytes on your system (which follows from t and t+1), "stepping" that pointer by 1 (r+1) means an increase of 5*4 = 20 bytes, which is exactly what you get here.
You get tricked by the C syntax. r is a pointer to an array of int, while t is a plain int pointer. When doing any kind of pointer arithmetic, you do it in units of the pointed-at type.
Thus t+1 means the address of t + the size of one pointed-at object. Since t points at int and int is 4 bytes on your system, you get an address 4 bytes from t.
The same rule applies to r. It is a pointer to an array of 5 ints. When you do pointer arithmetic on it with r+1, the pointer advances by the size of the pointed-at object, which is 5*sizeof(int), or 20 bytes on your computer. So r+1 gives you an address 20 bytes (0x14 in hex) past r.
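To make the "unit of pointer arithmetic" concrete, here is a small sketch (assuming a 4-byte int, as in the question's output) that prints the size of each pointee:
#include <cstdio>

int main() {
    int apricot[2][3][5];
    int (*r)[5] = apricot[0]; // pointee type: int[5]
    int *t = apricot[0][0];   // pointee type: int

    // Pointer arithmetic steps by the size of the pointee, which is why
    // r+1 jumps 20 bytes and t+1 jumps 4 bytes in the output above.
    std::printf("%zu\n", sizeof(*r)); // 5 * sizeof(int), e.g. 20
    std::printf("%zu\n", sizeof(*t)); // sizeof(int), e.g. 4
}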
What is the difference between the following three commands?
Suppose we declare an array arr having 10 elements.
int arr[10];
Now the commands are:
Command 1:
memset(arr,0,sizeof(arr));
and
Command 2:
memset(arr,0,10*sizeof(int));
These two commands run smoothly in a program, but the following command does not:
Command 3:
memset(arr,0,10);
So what is the difference between the 3 commands?
Case #1: sizeof(arr) returns 10 * sizeof(int)
Case #2: sizeof(int) * 10 returns the same thing
Case #3: 10 returns 10
An int takes up more than one byte (usually 4 on a 32-bit platform). So if you wrote 40 in case three, it would probably work, but never hard-code the size like that.
memset's 3rd parameter is how many bytes to fill. So here you're telling memset to set 10 bytes:
memset(arr,0,10);
But arr isn't necessarily 10 bytes (in fact, it's not). You need to know how many bytes are in arr, not how many elements.
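To see the difference concretely, here is a hedged sketch (assuming a 4-byte int and two's-complement representation) that pre-fills the array so the bytes memset does not touch stay visible:
#include <cstring>
#include <iostream>

int main() {
    int arr[10];
    std::memset(arr, 0xFF, sizeof(arr)); // pre-fill every byte so untouched bytes stay visible
    std::memset(arr, 0, 10);             // clears only the first 10 bytes

    for (int x : arr)
        std::cout << x << ' ';
    std::cout << '\n';
    // With a 4-byte int: arr[0] and arr[1] are fully cleared, arr[2] is only half
    // cleared (its printed value depends on byte order), and arr[3]..arr[9] keep
    // all bits set (-1 on a two's-complement machine).
}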
The size of an int is not guaranteed to be 1 byte. On most modern PC-type hardware, it's going to be 4 bytes.
You should not assume the size of any particular datatype, except char which is guaranteed to be 1 byte exactly. For everything else, you must determine the size of it (at compile time) by using sizeof.
memset(arr,0,sizeof(arr)) fills arr with sizeof(arr) zeros -- as bytes. sizeof(arr) is correct in this case, but beware using this approach on pointers rather than arrays.
memset(arr,0,10*sizeof(int)) fills arr with 10*sizeof(int) zeros, again as bytes. This is once again the correct size in this case, but it is more fragile than the first: what if arr does not contain 10 elements, or the type of each element is not int? For example, you might find you are getting overflow and change int arr[10] to long long arr[10].
memset(arr,0,10) fills the first 10 bytes of arr with zeros. This clearly is not what you want.
None of these is very C++-like. It would be much better to use std::fill, which you get from the <algorithm> header. For example, std::fill(arr, arr+10, 0).
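As a follow-up, a minimal C++ sketch of the std::fill approach mentioned above (std::array is shown only as an extra option, not something the answer requires):
#include <algorithm>
#include <array>
#include <iterator>

int main() {
    int arr[10];
    std::fill(arr, arr + 10, 0);                  // element-based: no byte counting at all
    std::fill(std::begin(arr), std::end(arr), 0); // same thing without spelling out the 10

    std::array<int, 10> a2{}; // value-initialized std::array starts out all zeros
    a2.fill(0);               // and can be refilled with its member function
}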
When I increment an int pointer, its address jumps by a gap of 4 bytes. Why is that? Why does an int take 4 bytes to store, whereas a char takes 2 bytes?
When you increment a pointer of a type A, you move that pointer forward in the memory by the size of the type it points to. On your machine, int takes 4 bytes, so the pointer moves by 4 bytes.
As for "why does int take 4 bytes on my machine?":
The C++ standard says (3.9.1 [basic.fundamental], paragraph 2):
There are five standard signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. <...> Plain ints have the natural size suggested by the architecture of the execution environment [44]; the other signed integer types are provided to meet special needs.
[44]: that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.
Basically, the sizes of fundamental types are not set in stone, and are implementation-defined. The accepted answer to this SO question has some information about it.
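A quick way to check what your own implementation chose is simply to print the sizes. A small sketch (the values in the comments are typical, not guaranteed):
#include <iostream>

int main() {
    // Only sizeof(char) == 1 is guaranteed; the rest are implementation-defined.
    // The comments show typical values on a 64-bit desktop system.
    std::cout << sizeof(char)      << '\n'; // 1
    std::cout << sizeof(short)     << '\n'; // usually 2
    std::cout << sizeof(int)       << '\n'; // usually 4
    std::cout << sizeof(long)      << '\n'; // 4 or 8, depending on the platform
    std::cout << sizeof(long long) << '\n'; // usually 8
    std::cout << sizeof(int*)      << '\n'; // 8 on a 64-bit system
}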
Here is the general rule:
If a type T has size N = sizeof(T) bytes, then a pointer of type T* advances by N bytes when you increment it by 1.
Mathematically,
T *p = getT();
std::size_t diff = reinterpret_cast<std::uintptr_t>(p + 1) - reinterpret_cast<std::uintptr_t>(p);
bool alwaysTrue = (diff == sizeof(T)); // alwaysTrue is always true!
The size of a pointer to any data type is always the same, as determined by your system.
If the system is 32-bit, the size is 4 bytes for all data pointers.
In pointer arithmetic, when you do ptr++ or ptr--, the increment or decrement takes place according to the size of the data type the pointer ptr points to.
char *cptr;
int *iptr;
char c[5];
int a[5];
cptr=c;
iptr=a;
After cptr++, the pointer points to c[1]; it has incremented by only one byte.
You can check the address of each char.
Similarly, after iptr++ the pointer points to a[1]; here it has increased by 4 bytes.
#include <stdio.h>

int main(void)
{
    char c[5];
    int a[5];
    int i;
    for (i = 0; i < 5; i++)
    {
        printf("%p\t", (void *)&c[i]); // internally pointer arithmetic: c + sizeof(char)*i
        printf("%p\n", (void *)&a[i]); // internally pointer arithmetic: a + sizeof(int)*i
    }
    return 0;
}
The size of int and other data types is implementation-defined.
Pointers increment by the size in bytes of the things they point to. ints take 4 bytes on a 32-bit machine.
Because, on your computer, sizeof (int) == 4, so stepping from one int to the next requires an increment of four bytes.
Most integer types have different sizes on different computers. int must have at least 16 bits, and is supposed to be a "natural" size for the computer. Most 32 or 64-bit platforms choose 32 bits as a "natural" size, and most computers have 8-bit bytes, so 4 bytes is a very common size for int.
However, sizeof (char) == 1 on all computers, so I'm rather surprised that you say "a char takes 2 bytes". It should only take one.
Because the data type the pointer points to (int) has a size of 4 bytes, the pointer increments by 4 bytes (the size of the pointed-to data).
Another example: if you have a structure with a size of 8 bytes and a pointer pointing to that structure, incrementing the pointer advances it by 8 bytes:
struct test {
int x;
int y;
};
struct test ARRAY[50];
struct test *p=ARRAY; // p points to the first element, ARRAY[0]; each element is 8 bytes
p++; // this increments p by 8 bytes (sizeof(struct test)), so p now points to the second element, ARRAY[1]
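A compilable sketch of the same idea, printing the addresses so the step is visible (assuming a 4-byte int and no padding in the struct, so sizeof(struct test) is 8):
#include <cstdio>

struct test {
    int x;
    int y;
};

int main() {
    struct test ARRAY[50];
    struct test *p = ARRAY;

    // The gap between p and p + 1 equals sizeof(struct test), which is 8
    // under the assumptions above.
    std::printf("%zu\n", sizeof(struct test));
    std::printf("%p\n%p\n", static_cast<void*>(p), static_cast<void*>(p + 1));
}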
Let's have an array of type int:
int arr[5];
Now,
if arr[0] is at address 100, then why do we have
arr[1] at address 102,
arr[2] at address 104, and so on,
instead of
arr[1] at address 101,
arr[2] at address 102, and so on?
Is it because an integer takes 2 bytes?
Does each memory location have a capacity of 1 byte (whether it is a 32-bit or a 64-bit processor)?
Your first example is consistent with 16-bit ints.
As to your second example (&arr[0]==100, &arr[1]==101, &arr[2]==103), this can't possibly be a valid layout since the distance between consecutive elements varies between the first pair and the second.
Is it because an integer takes 2 bytes?
Yes
Apparently, on your system int has a size of 2 bytes. On other systems this might not be the case; usually int is 4 bytes, but 2, 8, and other sizes are possible as well.
You are right: on your machine sizeof(int) is 2, so the next element in the array is 2 bytes away from the previous one.
-------------------------------
|100|101|102|103|104|105|106....
-------------------------------
  arr[0]  arr[1]  arr[2]
There is no fixed guarantee about the size of int. The C++ spec only requires that sizeof(char) <= sizeof(short) <= sizeof(int) and that int can represent at least a 16-bit range; the exact size depends on the processor, compiler, etc.
For more info try this