Does compiler adjust int size? - c++

I wonder whether, in the case below, the compiler will shrink the int variable to the size needed for its maximum possible value, or whether it will use a whole 32-bit int.
pseudocode:
int func()
{
    if (statement)
        return 10;
    else if (statement2)
        return 50;
    else
        return 100;
}
// how much memory will be allocated, when it needs only 1 byte?

The function returns an int, so the allocated memory will be sizeof(int), regardless of the actual value stored in it.
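A quick way to see this (a minimal sketch; the func below is a hypothetical stand-in for the question's pseudocode):

#include <iostream>

int func(int x)
{
    if (x > 0)
        return 10;
    else if (x < 0)
        return 50;
    else
        return 100;
}

int main()
{
    // sizeof inspects the declared type; its operand is not even evaluated
    std::cout << sizeof(func(1)) << '\n'; // same as sizeof(int), e.g. 4
    std::cout << sizeof(int) << '\n';     // e.g. 4
}

Both lines print the same number, no matter which branch func would take.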

It will use the full 32 bits (assuming that an int is 32 bits on this architecture).
This is fixed at compile time.

Yes, it will use the whole 32 bits, because memory allocation for primitive types is determined at compile time.

An int32 is a value type; it is stored on the stack, and its size is fixed at compile time. If it is inside an object, it goes on the heap, which is dynamic memory.
In your case, whatever the return value, the compiler will allocate a fixed number of bits on the stack to store the returned integer, according to the size of an int32: 32 bits, which gives a range of -2,147,483,648 to 2,147,483,647 if signed and 0 to 4,294,967,295 if unsigned.
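For reference, these limits can be queried directly (a small sketch; the exact values shown assume a 32-bit int):

#include <iostream>
#include <limits>

int main()
{
    // the representable range is a property of the type, not of the stored value
    std::cout << std::numeric_limits<int>::min() << '\n';          // -2147483648
    std::cout << std::numeric_limits<int>::max() << '\n';          //  2147483647
    std::cout << std::numeric_limits<unsigned int>::max() << '\n'; //  4294967295
}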

Related

long long int array pointers: arithmetic on this array

I have a problem with the code below:
#include <iostream.h>
#include <stdlib.h>
int main()
{
    unsigned long long int i, j;
    unsigned long long int *Array;
    i = j = 0;
    Array = (unsigned long long int *)malloc(18446744073709551616);
    for (i = 0ULL; i < 18446744073709551616; i++)
        *(Array + i) = i;
    std::cin >> j;
    std::cout << *(Array + j);
    return 0;
}
My compiler (Borland C++ Builder 6.0) gives me an Access Violation error, and there are warnings at the compilation stage. I have never used unsigned long long int before, so I have no idea where the problem is in this case.
The issue you're facing is that malloc cannot possibly return a valid pointer to a block of memory of the requested size, given the memory constraints your system faces, so malloc does what it normally does when it cannot allocate the desired memory: it returns a null pointer. (malloc reference here)
The most relevant portion of the web-page linked is the following:
Return Value:
On success, a pointer to the memory block allocated by the function.
The type of this pointer is always void*, which can be cast to the desired type of data pointer in order to be dereferenceable.
If the function failed to allocate the requested block of memory, a null pointer is returned.
The reason you are getting an access violation error is that you are trying to dereference a pointer that is null (and hence invalid).
In the future, I recommend you allocate more reasonably sized blocks of memory (for instance, 1 KB, 1 MB, etc.). If you wish to exercise unsigned long long int, you should perhaps look into something mathematical rather than memory manipulation.
Addendum:
If you want the maximum value for the type, you should have done something like the following:
std::numeric_limits<unsigned long long int>::max();
As noted by iBug, the number you wrote undergoes unsigned integer overflow. Overflow of unsigned integers is defined behavior, so the actual value of the magic number 18446744073709551616 is 0; you are malloc'ing 0 bytes.
The behavior when malloc'ing 0 bytes is the following (as per the C standard):
If the size of the space requested is zero, the behavior is implementation defined: either a null pointer is returned, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object.
You still cannot dereference the returned pointer or use it to access an object.
As an aside: *(Array + i) = i; is equivalent to Array[i] = i; or even i[Array] = i; :)
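A small sketch of both points, the queried maximum and the wrap-around (assuming a 64-bit unsigned long long):

#include <iostream>
#include <limits>

int main()
{
    unsigned long long max = std::numeric_limits<unsigned long long>::max();
    std::cout << max << '\n';     // 18446744073709551615, i.e. 2^64 - 1

    // unsigned overflow is well defined: values wrap modulo 2^64,
    // which is why the value one past the maximum behaves as 0
    std::cout << max + 1 << '\n'; // 0
}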
unsigned long long is a non-negative 64-bit integer; its highest possible value is 2^64 - 1 = 18446744073709551615. The compiler may not have known what you meant by 18446744073709551616 (the value overflowed).
Also, 2^64 bytes equals 16 EB, or 16,777,216 TB. I don't know where such huge storage would be available, even if it isn't RAM.
All malloc can do is find that the requested size is too huge to allocate and return you a null pointer. Then, when you try to access memory through that null pointer, you get an "Access Violation" error.
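For the original code, the immediate fix is to request a realistic size and check the result before dereferencing. A sketch (the element count 1000 is arbitrary):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main()
{
    const std::size_t count = 1000; // arbitrary, realistic element count

    unsigned long long *array =
        (unsigned long long *)std::malloc(count * sizeof *array);
    if (array == NULL) { // always check before dereferencing
        std::fprintf(stderr, "allocation failed\n");
        return EXIT_FAILURE;
    }

    for (std::size_t i = 0; i < count; i++)
        *(array + i) = i;

    std::printf("%llu\n", *(array + count - 1)); // 999
    std::free(array);
    return 0;
}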

Reliably determine the size of char

I was wondering how to reliably determine the size of a character in a portable way. AFAIK sizeof(char) cannot be used, because it always yields 1, even on systems where a byte has 16 bits or even more or fewer.
For example, when dealing with bits, where you need to know exactly how big a character is, I was wondering whether this code would give the real size of a character, independent of what the compiler thinks of it. IMO the pointer has to be incremented by the compiler by the correct size, so we should get the correct value. Am I right about this, or might there be some hidden problem with pointer arithmetic that would yield wrong results on some systems?
int sizeOfChar()
{
    char *p = 0;
    p++;
    int size_of_char = (int)p;
    return size_of_char;
}
There's a CHAR_BIT macro defined in <limits.h> that evaluates to exactly what its name suggests.
IMO the pointer has to be increased by the compiler to the correct size, so we should have the correct value
No, because pointer arithmetic is defined in terms of sizeof(T) (the pointer target type), and the sizeof operator yields the size in bytes. char is always exactly one byte long, so your code will always yield the NULL pointer plus one (which may not be the numerical value 1, since NULL is not required to be 0).
I think it's not clear what you consider to be "right" (or "reliable", as in the title).
Do you consider "a byte is 8 bits" to be the right answer? If so, for a platform where CHAR_BIT is 16, then you would of course get your answer by just computing:
const int octets_per_char = CHAR_BIT / 8;
No need to do pointer trickery. Also, the trickery is tricky:
On an architecture with 16 bits as the smallest addressable piece of memory, there would be 16 bits at address 0x0000 (1), another 16 bits at address 0x0001, and so on.
So, your example would compute the result 1, since the pointer would likely be incremented from 0x0000 to 0x0001, but that doesn't seem to be what you expect it to compute.
(1) I use a 16-bit address space for brevity; it makes the addresses easier to read.
The size of one char (aka byte) in bits is determined by the macro CHAR_BIT in <limits.h> (or <climits> in C++).
The sizeof operator always returns the size of a type in bytes, not in bits.
So if on some system CHAR_BIT is 16 and sizeof(int) is 4, that means an int has 64 bits on that system.
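Putting that together (a minimal sketch; the printed values are whatever your platform defines):

#include <climits>
#include <iostream>

int main()
{
    // CHAR_BIT is the number of bits in a char, i.e. in one byte
    std::cout << "bits per char: " << CHAR_BIT << '\n';               // 8 on mainstream platforms

    // sizeof counts bytes, so multiply by CHAR_BIT to get bits
    std::cout << "bits per int:  " << sizeof(int) * CHAR_BIT << '\n'; // e.g. 32
}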

can anyone explain why size_t type is used with an example?

I was wondering why size_t is used where I could just use, say, int. It's said that size_t is the return type of the sizeof operator. What does that mean? If I use sizeof(int) and store what it returns in an int variable, that works too; it's not necessary to store it in a size_t variable. I just want to understand the basic concept of size_t, with a clear, understandable example. Thanks.
size_t is guaranteed to be able to represent the largest size possible, int is not. This means size_t is more portable.
For instance, what if int could only store up to 255, but you could allocate arrays of 5000 bytes? Clearly that wouldn't work; with size_t, however, it will.
The simplest example is pretty dated: on an old 16-bit-int system with 64 k of RAM, the value of an int can be anywhere from -32768 to +32767, but after:
char buf[40960];
the buffer buf occupies 40 kbytes, so sizeof buf is too big to fit in an int, and it needs an unsigned int.
The same thing can happen today if you use 32-bit int but allow programs to access more than 4 GB of RAM at a time, as is the case on what are called "I32LP64" models (32 bit int, 64-bit long and pointer). Here the type size_t will have the same range as unsigned long.
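A sketch of the distinction (the sizes printed here assume a typical I32LP64 platform; they are implementation-defined):

#include <cstddef>
#include <iostream>

int main()
{
    char buf[40960];

    // sizeof yields a size_t, which can represent the size of any object;
    // on a 16-bit-int system this count would not fit in a signed int
    std::size_t n = sizeof buf;
    std::cout << n << '\n';                   // 40960

    std::cout << sizeof(int) << ' '           // typically 4 (32 bits)
              << sizeof(std::size_t) << '\n'; // typically 8 (64 bits)
}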
You also see size_t used for casting pointers into unsigned integers of the same size, to perform calculations on pointers as if they were integers, calculations that would otherwise be prevented at compile time. Such code is intended to compile and build correctly across different pointer sizes, e.g. a 32-bit model versus a 64-bit one.
It is implementation-defined, but on 64-bit systems you will find that size_t is often 64-bit while int is still 32-bit (unless it's an ILP64 or SILP64 model).
Depending on what architecture you are on (16-bit, 32-bit, or 64-bit), an int can be a different size.
If you want a specific size, use uint16_t or uint32_t. You can check out this thread for more information:
What does the C++ standard state the size of int, long type to be?
size_t is a typedef defined to store object size. It can store the maximum object size that is supported by a target platform. This makes it portable.
For example:
void * memcpy(void * destination, const void * source, size_t num);
memcpy() copies num bytes from source into destination. The maximum number of bytes that can be copied depends on the platform, so making num a size_t makes memcpy portable.
Refer to https://stackoverflow.com/a/7706240/2820412 for further details.
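A small usage sketch of that signature:

#include <cstdio>
#include <cstring>

int main()
{
    char src[] = "hello";
    char dst[sizeof src];

    // num is a size_t, so it can describe a block of any representable size
    std::memcpy(dst, src, sizeof src);
    std::puts(dst); // hello
}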
size_t is a typedef for one of the fundamental unsigned integer types. It could be unsigned int, unsigned long, or unsigned long long depending on the implementation.
Its special property is that it can represent the size (in bytes) of any object, including the largest object possible. That is one of the reasons it is widely used in the standard library for array indexing and loop counting (which also solves the portability issue). Let me illustrate this with a simple example.
Consider a vector of length 2*UINT_MAX, where UINT_MAX denotes the maximum value of unsigned int (which is 4294967295 for my implementation considering 4 bytes for unsigned int).
std::vector<int> vec(2ULL * UINT_MAX, 0);
If you tried to fill the vector using a for loop such as the one below, it would not work, because unsigned int can only count up to UINT_MAX (beyond which it wraps around to 0 again).
for (unsigned int i = 0; i < 2ULL * UINT_MAX; ++i) vec[i] = i;
The solution here is to use size_t, since it is guaranteed to be able to represent the size of any object (and therefore of our vector vec too!). Note that on my implementation size_t is a typedef for unsigned long, so its maximum value is ULONG_MAX = 18446744073709551615, size_t being 8 bytes wide.
for (size_t i = 0; i < 2ULL * UINT_MAX; ++i) vec[i] = i;
References: https://en.cppreference.com/w/cpp/types/size_t

Creating integer variable of a defined size

I want to define an integer variable in C/C++ such that it can store 10 bytes of data, or maybe x bytes of data, as defined by me in the program.
For now, I tried:
int *ptr;
ptr = (int *)malloc(10);
Now when I take sizeof ptr, it shows 4 and not 10. Why?
C and C++ compilers implement several sizes of integer (typically 1, 2, 4, and 8 bytes, i.e. 8, 16, 32, and 64 bits), but without some helper code to perform the arithmetic operations you can't really make arbitrarily sized integers.
The declarations you wrote:
int *ptr;
ptr = (int *)malloc(10);
made what is probably a broken array of integers: broken because, unless you are on a system where (10 % sizeof(int)) == 0, you have extra bytes at the end that can't be used to store an entire integer.
There are several big-number class libraries you should be able to locate for C++ which implement many of the operations you may want to perform on your 10-byte (80-bit) integers. With C you would have to do the operations as function calls, because the language lacks operator overloading.
Your sizeof(ptr) evaluated to 4 because you are using a machine with 4-byte pointers (a 32-bit system). sizeof tells you nothing about the size of the data that a pointer points to. The only place where this gets tricky is when you use sizeof on an array's name, which is different from using it on a pointer; I mention this because array names and pointers share so many similarities.
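That difference is easy to demonstrate (a sketch; the pointer size printed depends on your platform):

#include <cstdio>
#include <cstdlib>

int main()
{
    int arr[10];
    int *p = (int *)std::malloc(10);

    std::printf("%zu\n", sizeof arr); // the whole array: 40 with 4-byte int
    std::printf("%zu\n", sizeof p);   // just the pointer: 4 or 8, per platform

    std::free(p);
    return 0;
}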
Because on your machine the size of a pointer is 4 bytes. Note that the type of the variable ptr is int *. You cannot get the complete allocated size with the sizeof operator when you malloc or new the memory, because sizeof is a compile-time operator, meaning the value is evaluated at compile time.
It is showing 4 bytes because a pointer on your platform is 4 bytes. The block of memory the pointer addresses may be of any arbitrary size, in your case it is 10 bytes. You need to create a data structure if you need to track that:
struct VariableInteger
{
    int *ptr;
    size_t size;
};
Also, using an int type for your ptr variable doesn't mean the language will allow you to do arithmetic operations on anything of a size different than the size of int on your platform.
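A usage sketch of that structure (hypothetical, mirroring the question's 10-byte allocation):

#include <cstddef>
#include <cstdlib>

struct VariableInteger
{
    int *ptr;
    std::size_t size;
};

int main()
{
    VariableInteger v;
    v.size = 10;                        // track the size yourself...
    v.ptr = (int *)std::malloc(v.size); // ...because sizeof(v.ptr) can't
    if (v.ptr != NULL)
    {
        // operate on the v.size bytes behind v.ptr here
        std::free(v.ptr);
    }
    return 0;
}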
Because the size of the pointer is 4. Try something like:
typedef struct
{
    int a[10];
} big_int_t;

big_int_t x;
printf("%zu\n", sizeof(x));
Note also that an int is typically not 1 byte in size, so this will probably print 20 or 40, depending on your platform.
Integers in C++ are of a fixed size. Do you mean an array of integers? As for sizeof, the way you are using it, it tells you that your pointer is four bytes in size. It doesn't tell you the size of a dynamically allocated block.
Few if any compilers support 10-byte integer arithmetic. If you want to use integers bigger than the values specified in <limits.h>, you'll need to either find a library with support for big integers or make your own class that defines the mathematical operators.
I believe what you're looking for is known as "arbitrary-precision arithmetic". It allows you to have numbers of any size and any number of decimals. Instead of using fixed-size assembly-level math functions, these libraries are coded to do math the way one would do it on paper.
Here's a link to a list of arbitrary-precision arithmetic libraries in a few different languages, courtesy of Wikipedia: link.
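To illustrate the core idea those libraries build on, here is a minimal sketch of multi-word arithmetic: a hypothetical 128-bit unsigned type made of two 64-bit words. Real libraries generalize this to any width and many more operations:

#include <cstdint>
#include <cstdio>

// hypothetical 128-bit unsigned integer: two 64-bit "limbs"
struct U128
{
    std::uint64_t lo;
    std::uint64_t hi;
};

U128 add(U128 a, U128 b)
{
    U128 r;
    r.lo = a.lo + b.lo;
    // carry into the high word whenever the low word wrapped around
    r.hi = a.hi + b.hi + (r.lo < a.lo ? 1 : 0);
    return r;
}

int main()
{
    U128 a = {0xFFFFFFFFFFFFFFFFULL, 0}; // 2^64 - 1
    U128 b = {1, 0};
    U128 c = add(a, b);                  // 2^64: lo == 0, hi == 1
    std::printf("hi=%llu lo=%llu\n",
                (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}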

C++ vector max_size();

On a 32-bit system:
std::vector<char>::max_size() returns 2^32 - 1 (size of char: 1 byte)
std::vector<int>::max_size() returns 2^30 - 1 (size of int: 4 bytes)
std::vector<double>::max_size() returns 2^29 - 1 (size of double: 8 bytes)
Can anyone tell me what max_size() depends on, and what max_size() will return on a 64-bit system?
max_size() is the theoretical maximum number of items that could be put in your vector. On a 32-bit system, you could in theory allocate 4 GB == 2^32 bytes, which is 2^32 char values, 2^30 int values, or 2^29 double values. It would appear that your implementation uses that value and subtracts 1.
Of course, you could never really allocate a vector that big on a 32-bit system; you'll run out of memory long before then.
There is no requirement on what value max_size() returns other than that you cannot allocate a vector bigger than it. On a 64-bit system it might return 2^64 - 1 for char, or it might return a smaller value because the system only has a limited memory space; 64-bit PCs are often limited to a 48-bit address space anyway.
max_size() returns "the maximum potential size the vector could reach due to system or library implementation limitations", so I suppose the maximum value is implementation-dependent. On my machine the following code
std::vector<int> v;
std::cout << v.max_size();
produces this output:
4611686018427387903 // built as a 64-bit target
1073741823 // built as a 32-bit target
so the formula 2^(pointer width in bits) / sizeof(type) - 1 looks correct for that case as well.
Simply get the answer with:
std::vector<dataType> v;
std::cout << v.max_size();
Or we can compute it as (2^nativePointerBitWidth) / sizeof(dataType) - 1. For example, on a 64-bit system, long long is (typically) 8 bytes wide, so we have (2^64)/8 - 1 == 2305843009213693951.
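You can confirm the pattern for several element types at once (a sketch; the exact values are implementation-defined):

#include <iostream>
#include <vector>

int main()
{
    std::vector<char> vc;
    std::vector<int> vi;
    std::vector<double> vd;

    // on one 64-bit implementation these print roughly 2^64/sizeof(T) - 1
    std::cout << vc.max_size() << '\n';
    std::cout << vi.max_size() << '\n';
    std::cout << vd.max_size() << '\n';
    return 0;
}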