I want to define an integer variable in C/C++ such that my integer can store 10 bytes of data, or maybe x bytes of data as defined by me in the program, for now.
I tried the
int *ptr;
ptr = (int *)malloc(10);
code. Now when I check sizeof(ptr), it shows 4 and not 10. Why?
C and C++ compilers implement several sizes of integer (typically 1, 2, 4, and 8 bytes, i.e. 8, 16, 32, and 64 bits), but without some helper code to perform the arithmetic operations you can't really make arbitrarily sized integers.
The declarations you did:
int *ptr;
ptr = (int *)malloc(10);
Made what is probably a broken array of integers. Broken because unless you are on a system where (10 % sizeof(int)) == 0, you have extra bytes at the end which can't be used to store an entire integer.
There are several big-number class libraries you should be able to locate for C++ which implement many of the operations you may want to perform on your 10-byte (80-bit) integers. With C you would have to do the operations as function calls, because the language lacks operator overloading.
Your sizeof(ptr) evaluated to 4 because you are using a machine with 4-byte pointers (a 32-bit system). sizeof tells you nothing about the size of the data that a pointer points to. The only place where this gets tricky is when you use sizeof on an array's name, which is different from using it on a pointer; I mention this because array names and pointers share so many similarities.
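A minimal sketch of that difference (the variable names are illustrative, and the exact numbers depend on your platform):

#include <cstdio>
#include <cstdlib>

int main() {
    int arr[10];                                      // a real array object
    int *ptr = (int *)std::malloc(10 * sizeof(int));  // a pointer to heap memory

    std::printf("%zu\n", sizeof(arr));  // the whole array: 10 * sizeof(int), e.g. 40
    std::printf("%zu\n", sizeof(ptr));  // just the pointer itself: 4 or 8

    std::free(ptr);
    return 0;
}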
Because on your machine the size of a pointer is 4 bytes. Note that the type of the variable ptr is int *. You cannot get the complete allocated size with the sizeof operator when you malloc or new the memory, because sizeof is a compile-time operator: the value is evaluated at compile time.
It is showing 4 bytes because a pointer on your platform is 4 bytes. The block of memory the pointer addresses may be of any arbitrary size; in your case it is 10 bytes. You need to create a data structure if you need to track that:
struct VariableInteger
{
    int *ptr;
    size_t size;
};
Also, using an int type for your ptr variable doesn't mean the language will let you do arithmetic operations on anything of a size different from the size of int on your platform.
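A hedged sketch of how such a structure might be used (nothing here comes from a library; it is just the struct above in action):

VariableInteger v;
v.size = 10;                    // the byte count you chose
v.ptr = (int *)malloc(v.size);  // sizeof(v.ptr) is still just the pointer size
/* ... use v.ptr, consulting v.size for the allocated length ... */
free(v.ptr);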
Because the size of the pointer is 4. Try something like:
typedef struct
{
    int a[10];
} big_int_t;

big_int_t x;
printf("%zu\n", sizeof(x));  /* %zu is the correct format for a size_t */
Note also that an int is typically not 1 byte in size, so this will probably print 20 or 40 (for a 2-byte or 4-byte int), depending on your platform.
Integers in C++ are of a fixed size. Do you mean an array of integers? As for sizeof, the way you are using it, it tells you that your pointer is four bytes in size. It doesn't tell you the size of a dynamically allocated block.
Few or no compilers support 10-byte integer arithmetic. If you want to use integers bigger than the values specified in <limits.h>, you'll need to either find a library with support for big integers or make your own class which defines the mathematical operators.
I believe what you're looking for is known as "arbitrary-precision arithmetic". It allows you to have numbers of any size and any number of decimals. Instead of using fixed-size assembly-level math functions, these libraries are coded to do math the way one would do it on paper.
Here's a link to a list of arbitrary-precision arithmetic libraries in a few different languages, compliments of Wikipedia: link.
On my MS VS 2015 compiler, the sizeof int is 4 (bytes). But the sizeof vector<int> is 16. As far as I know, a vector is like an empty box when it's not initialized yet, so why is it 16? And why 16 and not another number?
Furthermore, if we have vector<int> v(25); and then initialize it with int numbers, the size of v is still 16 even though it holds 25 ints! The size of each int is 4, so sizeof v should seemingly be 25*4 bytes, but in effect it is still 16! Why?
The size of each int is 4, so sizeof v should seemingly be 25*4 bytes, but in effect it is still 16! Why?
You're confusing sizeof(std::vector) and std::vector::size(): the former returns the size of the vector object itself, not including the size of the elements it holds; the latter returns the count of the elements. You can get the total size of the elements with std::vector::size() * sizeof(int).
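A small example of the difference, assuming a typical implementation where sizeof(std::vector<int>) happens to be 16:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(25);
    std::cout << sizeof(v) << '\n';               // the vector object itself, e.g. 16
    std::cout << v.size() << '\n';                // element count: 25
    std::cout << v.size() * sizeof(int) << '\n';  // bytes held by the elements: 100
}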
so why is it 16? And why 16 and not another number?
What sizeof(std::vector) is depends on the implementation; it is mostly implemented with three pointers. In some cases (such as debug mode) the size might increase for convenience.
std::vector is typically a structure which contains two members: a pointer to (the array of) its elements and the size of the array (the number of elements).
As the size member is sizeof(void *) and the pointer is also sizeof(void *), the size of the structure is 2 * sizeof(void *), which is 16 with 8-byte pointers.
The number of elements has nothing to do with the size as the elements are allocated on the heap.
EDIT: As M.M mentioned, the implementation could be different, e.g. pointers for start and end plus an allocated size. So in a 32-bit environment that would be 3*sizeof(size_t) + sizeof(void *), which might be the case here. Even the original layout could work with start hardcoded to 0 and allocatedSize computed by masking end, so it is really implementation dependent. But the point remains the same.
sizeof is evaluated at compile time, so it only counts the size of the variables declared in the class, which probably includes a couple of counters and a pointer. It's what the pointer points to that varies with the size, but the compiler doesn't know about that.
The size can be explained by the pointers the vector stores, which can be: 1) the begin of the vector, 2) the end of the vector, and 3) the end of the vector's capacity. So it is implementation dependent, and it will change across implementations.
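A purely illustrative sketch of such a three-pointer layout (the names are made up; real implementations differ in names and details):

template <typename T>
struct VectorLayoutSketch {  // not a real std::vector, just the idea
    T *first;                // begin of the element storage
    T *last;                 // one past the last constructed element
    T *end_of_storage;       // one past the end of the allocated capacity
};
// three pointers: 3 * 8 = 24 bytes with 8-byte pointers;
// a pointer-plus-size layout would give 2 * 8 = 16 instead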
You seem to be mixing up "array" with "vector". If you have a local array, sizeof will indeed provide the size of the array. However, a vector is not an array: it is a class, a container from the STL, guaranteeing that its contents are located within a single block of memory (which may get relocated if the vector grows).
Now, if you take a look at the std::vector implementation, you'll notice it contains fields (at least in MSVC 14.0):
size_type _Count = 0;
typename _Alloc_types::_Alty _Alval; // allocator object (from base)
_Mylast
_Myfirst
That could sum up to 16 bytes under your implementation (note: experience may vary).
I have some questions about how to calculate the size of different data types in C++. I have int, char, unsigned char, unsigned int, double, and string. After running sizeof on each, the computer gave me sizeof(int / unsigned int) == 4, sizeof(char / unsigned char) == 1, and sizeof(string) == 32. I have studied many different tutorials recently and just got very confused about this result; some claim that the size of unsigned int is 8 bytes, and things like that, which is really confusing.
By the way, I'm really confused about the difference between char and string. When I declare a string, I say string mystring = "asd";, but I can also declare char mystring = "asd";. That is really confusing too. I am just a beginner; I hope somebody could point me in the right direction.
Can anybody help me out?
C++ was originally based on C, which was made to be a language that closely follows the hardware. For hardware it makes sense to have many different data types of different sizes (bytes, half-words, words, etc.), so it makes sense for C to follow that, and this was inherited by C++ (which can also be used to make programs that run close to the hardware).
The size of the data types depends on the compiler and the hardware it targets, and can differ between platforms and even between different compilers on the same platform. For example, on a 64-bit Windows system, using Visual Studio the type long is 32 bits (four bytes), while using GCC a long is 64 bits (eight bytes).
Generally speaking you can say that
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
Also, the C++ specification says that sizeof(char) is always 1 no matter the actual bit width of a char. There is also no size difference between an unsigned and a signed type: sizeof(unsigned int) == sizeof(signed int).
As for the size of structures and classes, roughly speaking the size of a structure or class is the sum of the sizes of its members. So if you have a structure (or class) with two int member variables, the size of the structure (or class) will be sizeof(int) + sizeof(int). This is, however, not the full truth, as the compiler may add padding to a structure to make member variables end up on nicely aligned positions inside the structure, and this padding is also counted when taking the size of the structure.
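For example, a quick sketch of how padding shows up in sizeof (the exact numbers depend on your platform):

#include <iostream>

struct TwoInts { int a; int b; };  // two ints, no padding needed
struct Mixed   { char c; int i; }; // the char is usually padded out to align the int

int main() {
    std::cout << sizeof(TwoInts) << '\n'; // typically 8: sizeof(int) + sizeof(int)
    std::cout << sizeof(Mixed) << '\n';   // typically 8, not 5, due to padding
}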
The C++ standard is very open about what the size of various data types is; implementations are allowed to vary a lot.
As a quick summary of the rules:
sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
The standard does guarantee that a signed type and its unsigned counterpart occupy the same amount of storage, so sizeof(int) == sizeof(unsigned int).
sizeof(int) on modern desktops is typically either 4 or 8, depending on that compiler's approach to 64-bit numbers. But you shouldn't assume that.
The reason sizeof(std::string) and sizeof(char) are different is that char is the type of the smallest addressable unit in the system, and C strings are just arrays of them. So if you write const char *a = "abcd"; std::cout << sizeof(a) << std::endl; you will get the size of a pointer-to-char on the system. std::string, on the other hand, is a class. std::string a = "abcd"; std::cout << sizeof(a) << std::endl; will give you the full size of the std::string class, including padding and every data member of std::string.
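A small demonstration of that point, with typical (not guaranteed) sizes in the comments:

#include <iostream>
#include <string>

int main() {
    const char *a = "abcd";
    std::string s = "abcd";
    std::cout << sizeof(a) << '\n'; // size of a pointer-to-char: 4 or 8
    std::cout << sizeof(s) << '\n'; // size of the std::string object, e.g. 32
    std::cout << s.size() << '\n';  // length of the contents: 4
}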
Well, for the size of the data types you don't have to worry much: you can always use the sizeof() operator to check the size of a data type, and the sizes of the different data types depend on the kind of computer and the operating system. As for string and char: a string is actually just an object made of a series of chars. If you use string to declare a string, that string becomes an object of the class string. If you use char to declare a string (which is actually declared as a pointer of type char*), then that string is just a series of characters of type char. You can also declare a string using an array, for example char name[10] = "Christina";. So there are many ways to declare a string, depending on its purpose, but the string object has a lot more functionality. Check out the documentation for the C++ string class for more information. I hope this helps.
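To make those options concrete, a brief sketch (the variable names are illustrative):

#include <string>

int main() {
    std::string s = "asd";       // a string object, with member functions like s.size()
    const char *p = "asd";       // a pointer to a string literal
    char buf[] = "asd";          // a modifiable array of 4 chars: 'a', 's', 'd', '\0'
    char name[10] = "Christina"; // 9 characters plus the terminating '\0'
    (void)s; (void)p; (void)buf; (void)name; // silence unused-variable warnings
}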
I want to have a data variable which will be an integer whose range will be from 0 to 1,000,000.
For example normal int variables can store numbers from -2,147,483,648 to 2,147,483,647.
I want the new data type to have a smaller range so it can have LESS SIZE.
If there is a way to do that, please let me know.
There isn't; you can't specify arbitrary ranges for variables like this in C++.
You need 20 bits to store 1,000,000 different values, so using a 32-bit integer is the best you can do without creating a custom data type (and even then you'd only be saving 1 byte at 24 bits, since you can't address less than 8 bits).
As for enforcing the range of values, you could do that with a custom class, but I assume your goal isn't the validation but the size reduction.
So, there's no true good answer to this problem. Here are a few thoughts though:
If you're talking about an array of these 20 bit values, then perhaps the answers at this question will be helpful: Bit packing of array of integers
On the other hand, perhaps we are talking about an object, that has 3 int20_ts in it, and you'd like it to take up less space than it would normally. In that case, we could use a bitfield.
struct object {
    int a : 20;
    int b : 20;
    int c : 20;
} __attribute__((__packed__));  /* __packed__ is a GCC/Clang extension */

printf("sizeof object: %zu\n", sizeof(struct object));  /* %zu for size_t */
This code will probably print 8, signifying that the struct is using 8 bytes of space (60 bits rounded up to whole bytes), not the 12 that you would normally expect.
You can only have data types whose size is a multiple of 8 bits (one byte). This is because, otherwise, that data type wouldn't be addressable. Imagine a pointer to 5 bits of data: it can't exist.
If I have a struct A defined as:
struct A {
    char* c;
    float f;
    int i;
};
and an array
A col[5];
then why is
sizeof(*(col+0))
16?
On your platform, 16 bytes are required to hold that structure, the structure being of type A.
You should keep in mind that *(col+0) is identical to col[0] so it's only one of the structure, not the entire array of them. If you wanted the size of the array, you would use sizeof(col).
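A short illustration of the two, plus the common element-count idiom (a fragment assuming the struct A from the question):

A col[5];
// sizeof(col)    == 5 * sizeof(A), e.g. 80: the whole array
// sizeof(col[0]) == sizeof(A),     e.g. 16: one element
size_t n = sizeof(col) / sizeof(col[0]); // element count: 5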
Possibly because:
you are on a 64-bit platform and char* takes 8 bytes while int and float take 4 bytes,
you are on a 32-bit platform and char* takes 4 bytes, but your compiler decided that the array would be faster if it inserted 4 bytes of padding there. Padding can be controlled on most compilers by #pragma pack(push,1) and #pragma pack(pop) respectively.
If you want to be sure, you can use offsetof (from <cstddef>) or create an object and examine the addresses of its member fields to inspect which fields actually got padded and by how much.
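For instance, a sketch of using offsetof to see the layout (the offsets in the comments assume a 64-bit platform where no padding is needed):

#include <cstddef>
#include <cstdio>

struct A {
    char *c;
    float f;
    int i;
};

int main() {
    std::printf("c at %zu, f at %zu, i at %zu, total %zu\n",
                offsetof(A, c), offsetof(A, f), offsetof(A, i), sizeof(A));
    // e.g. "c at 0, f at 8, i at 12, total 16"
}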
For starters, your original declaration was incorrect (this has now been fixed in a question edit). A is the name of the type; to declare an array named col, you want
A col[5];
not
col A[5];
sizeof(*(col+0)) is the same as sizeof col[0], which is the same as sizeof (A).
It's 16 because that's the size of that structure, for the compiler and system you're using (you haven't mentioned what it is).
I take it from the question that you were expecting something different, but you didn't say so.
Compilers may insert padding bytes between members, or after the last member, to ensure that each member is aligned properly. I find 16 bytes to be an unsurprising size for that structure on a 64-bit system -- and in this particular case, it's probably that no padding is even required.
And in case you weren't aware, sizeof yields a result in bytes, where a byte is usually (but not always) 8 bits.
Your problem is most likely that your processor platform uses 8-byte alignment on floats. So, your char* will take 4 (assuming you're on a 32-bit system) since it's a pointer which is an address. Your float will take 8, and your int will take another 4 which totals 16 bytes.
Compilers will often make certain types align on certain byte boundaries in order to speed up computation on the hardware platform in use.
For example, if you did:
struct x {
    char y;
    int z;
};
Your system would (probably) say the size of x was 8, padding the char out to an int inside the structure.
You can add pragmas (implementation dependent) to stop this:
#pragma pack(push, 1)
struct x {
    char y;
    int z;
};
#pragma pack(pop)
which would make the size of this equal to 5.
Edit: There seem to be two parts to this question. "Why is sizeof(A) equal to 16?" On balance, I see now that this is probably the question that was intended. Instead I am answering the second part, i.e. "Why is sizeof(*(col+0)) == sizeof(A)?"
col is an array. col + 0 is meaningless for an array as such, so the compiler must first convert col to a pointer (array-to-pointer decay). Then col is effectively just an A*. Adding zero to a pointer changes nothing. Finally, you dereference it with * and are left with a simple A of size 16.
In short, sizeof(A) == sizeof(*(col+0))
PS: I have not addressed the question "Why does that one element of the array take up 16 bytes?" Others have answered that well.
On a modern x86-64 processor, char* is 8 bytes, float is 4 bytes, int is 4 bytes. So the sizes of the members added together is 16. What else would you be expecting? Did someone tell you a pointer is 4 bytes? Because that's only true for x86-32.
How do I define a 24-bit array in C++? (variable declaration)
There is no 24-bit variable type in C++.
You can use a bitpacked struct:
struct ThreeBytes {
    uint32_t value : 24;
};
But it is not guaranteed that sizeof ThreeBytes == 3.
You can also just use uint32_t or int32_t, depending on what you need.
Another choice is to use std::bitset:
typedef std::bitset<24> ThreeBytes;
Then make an array out of that:
ThreeBytes *myArray = new ThreeBytes[10];
Of course, if you really just need "three bytes", you can make an array of arrays:
typedef uint8_t ThreeBytes[3];
Note that uint8_t and friends are optional fixed-width types (a platform without 8-bit bytes won't provide them); they are used here simply for clarity.
An unsigned byte array of 3 bytes is 24 bits. Depending on how you are planning to use it, it could do.
unsigned char arrayname[3];
As @GMan points out, you should be aware that not 100% of systems have 8-bit chars.
If you intend to perform bitwise operations on them, then simply use an integral type that has at least 24 bits. An int is 32 bits on most platforms, so an int may be suitable for this purpose.
EDIT: Since you actually wanted an array of 24-bit variables, the most straightforward way to do this is to create an array of ints or longs (any integral data type that holds at least 24 bits) and treat each element as though it were 24 bits.
Depending on your purpose (for example, if you are concerned that a 32-bit type might waste too much memory), you might also consider creating an array of bytes with three times the length.
I used to do that a lot for storing RGB images. To access the n'th element, you multiply by three and then add zero, one, or two depending on which "channel" of the element you want. Of course, if you want to access all 24 bits as one integer, this approach requires some additional arithmetic.
So simply unsigned char myArray[ELEMENTS * 3]; where ELEMENTS is the number of 24bit elements that you want.
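A brief sketch of that indexing scheme (ELEMENTS, n, and the channel names are illustrative):

const size_t ELEMENTS = 100;
unsigned char myArray[ELEMENTS * 3]; // ELEMENTS packed 24-bit (3-byte) values

size_t n = 5;                         // the n'th element
unsigned char r = myArray[n * 3 + 0]; // first channel
unsigned char g = myArray[n * 3 + 1]; // second channel
unsigned char b = myArray[n * 3 + 2]; // third channel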
Use bitset or bitvector if they are supported on your platform. (They are :) )
std::vector<std::bitset<24> > myArray;