While the size of int depends on the CPU, long seems to be 32 bit (?). But it seems so intuitive to use int for numbers where size doesn't really matter, like in a for loop.
It's also confusing that C++ has both long and __int32. What is the second for then?
Question: What number types should I use in what situations?
Both int and long don't have fixed sizes (or any fixed representation at all); they only have to cover specific value ranges, and long can't cover a smaller range than int.
For specific sizes there are types like int32_t etc. (which may well be typedefs for the built-in types).
And __int32 isn't standard C++, but a compiler-specific extension (e.g. in MSVC).
The standard specifies that long is not shorter than int (C++ standard §3.9.1).
C++11 introduced fixed-width integer types, like int32_t.
Note that int is 32 bits even on many 64-bit architecture/compiler combinations (the 64-bit versions of gcc and MSVC both use 32 bits, as far as I know). On the other hand, long will typically be 64 bits on a 64-bit compiler (not on Windows, though).
These are only guidelines though, you always have to look into your compiler's manual to find out how these datatypes are defined.
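If you want to see what your particular toolchain does, a minimal sketch like this (only standard headers, nothing compiler-specific) prints the sizes at run time:

#include <cstdint>
#include <iostream>

int main() {
    // Sizes of the built-in types depend on the compiler/ABI...
    std::cout << "int:     " << sizeof(int)  << " bytes\n";
    std::cout << "long:    " << sizeof(long) << " bytes\n";
    // ...while the <cstdint> fixed-width types are exactly what they say.
    std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes\n";
    std::cout << "int64_t: " << sizeof(std::int64_t) << " bytes\n";
}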
Everyone knows this: int is smaller than long.
Behind this MSDN link, I'm reading the following:
INT_MIN (Minimum value for a variable of type int.) –2147483648
INT_MAX (Maximum value for a variable of type int.) 2147483647
LONG_MIN (Minimum value for a variable of type long.) –2147483648
LONG_MAX (Maximum value for a variable of type long.) 2147483647
The same information can be found here.
Have I been told a lie my whole life? What is the difference between int and long if not the values they can hold? How come?
You've mentioned both C++ and ASP.NET. The two are very different.
As far as the C and C++ specifications are concerned, the only thing you know about a primitive data type is the minimum range of values it is guaranteed to store. Prepare for your first surprise: int corresponds to a range of [-32767; 32767]. Most people today think that int is a 32-bit number, but it's really only guaranteed to be able to store the equivalent of a 16-bit number, almost. Also note that the range isn't the more typical [-32768; 32767], because C was designed as a common abstract machine for a wide range of platforms, including platforms that didn't use 2's complement for their negative numbers.
It shouldn't therefore be surprising that long is actually a "sort-of-32-bit" data type. This doesn't mean that C++ implementations on Linux (which commonly use a 64-bit number for long) are wrong, but it does mean that C++ applications written for Linux that assume that long is 64-bit are wrong. This is a lot of fun when porting C++ applications to Windows, of course.
The standard 64-bit-ish integer type to use is long long, and that is the standard way of declaring a 64-bit-ish integer on Windows as well.
However, .NET cares about no such things, because it is built from the ground up on its own specification - in part exactly because of how history-laden C and C++ are. In .NET, int is a 32-bit integer, and long is a 64-bit integer, and long is always bigger than int. In C, if you used long (32-bit-ish) and stored a value like ten trillion in there, there was a chance it would work, since it's possible that your long was actually a 64-bit number, and C didn't care about the distinction - that's exactly what happens on most Linux C and C++ compilers.

Since the types are defined like this for performance reasons, it's perfectly legal for the compiler to use a 32-bit data type to store an 8-bit value (keep that in mind when you're "optimizing for performance" - the compiler is doing optimizations of its own). .NET can still run on platforms that don't have e.g. 32-bit 2's complement integers, but the runtime must ensure that the type can hold as much as a 32-bit 2's complement integer, even if that means taking the next bigger type ("wasting" twice as much memory, usually).
In C and C++ the requirements are that int can hold at least 16 bits, long can hold at least 32 bits, and int cannot be larger than long. There is no requirement that int be smaller than long, although compilers often implement them that way. You haven't been told a lie, but you've been told an oversimplification.
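Those guarantees can be spelled out in code; here is a minimal C++11 sketch (my own illustration of the rules above) that compiles on any conforming implementation:

#include <climits>

// All of these hold on every conforming implementation, whatever the actual sizes are.
static_assert(sizeof(int) <= sizeof(long), "int may not be wider than long");
static_assert(INT_MAX >= 32767, "int must cover at least the 16-bit range");
static_assert(LONG_MAX >= 2147483647L, "long must cover at least the 32-bit range");

int main() {}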
This is C++:
On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32 bit processors and most compilers for these platforms use a 32 bit int which has the same size and representation as a long.
See the ref http://tsemba.org/c/inttypes.html
No! Well! It's like we had been told since childhood that the sun rises in the east and sets in the west (the Sun doesn't actually move, after all!).
In earlier processing environments, where we had 16-bit operating systems, an integer was considered to be 16 bits (2 bytes), and a 'long' 4 bytes (32 bits).
But with the advent of 32-bit and 64-bit operating systems, an integer is said to consist of 32 bits (4 bytes) and a long to be 'at least as big as an integer', hence 32 bits again. That explains the equality between the maximum and minimum ranges that 'int' and 'long' can take.
Hence, this depends entirely on the architecture of your system.
If the range of int goes up only to about 32768, but I have to input a value of around 50000 and use it, how can I do that without using long and, if possible, without typecasting either? Is there any way to do it? I want the datatype to remain int only.
A built-in type cannot be altered or expanded in any way. You have to switch to a different type.
The type int has the following requirements:
represents at least the range -32767 to 32767 (16 bits)
is at least as large as short (sizeof(short) <= sizeof(int))
This means that, strictly speaking (although most platforms use at least 32 bits for int), you can't safely store the value 50000 in an int.
If you need a guaranteed range, use int16_t, int32_t or int64_t. They are defined in the header <cstdint>. There is no arbitrary precision integer type in the language or in the standard library.
If you only need to observe the range of valid integers, use the header <limits>:
std::cout << std::numeric_limits<int>::min() << " to " << std::numeric_limits<int>::max() << "\n";
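Wrapped into a complete program, that one-liner looks like this (a minimal sketch):

#include <iostream>
#include <limits>

int main() {
    // Prints the actual range of int on this implementation.
    std::cout << std::numeric_limits<int>::min() << " to "
              << std::numeric_limits<int>::max() << "\n";
}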
You may try unsigned int. It's the same size as int but with a non-negative range (if you really don't want to use long).
See this reference for the ranges of the data types.
Suggestion:
You might as well consider switching your compiler. From the range you've mentioned for int, it seems you are using a 16-bit compiler (probably Turbo C). A 16-bit compiler restricts unsigned int to 0 to 65,535 (2^16 - 1) and signed int to -32,768 to 32,767.
No!
An int depends on the native machine word, which really means it depends on 3 things - the processor, the OS, and the compiler.
The only way you can "increase" an int foo; (not a long foo; an int is not a long) is one of the following:
You are compiling with Turbo-C or a legacy 16-bit DOS compiler on a modern computer, likely because your university requires you to use that, because that's what your professor knows. Switch the compiler. If your professor insists you use it, switch the university.
You are compiling with a 32-bit compiler on a 64-bit OS. Switch the compiler.
You have 32-bit OS on a 64-bit computer. Reinstall a 64-bit OS.
You have 32-bit processor. Buy a new computer.
You have a 16-bit processor. Really, buy a new computer.
Several possibilities come to mind.
@abcthomas had the idea to use unsigned; since you are restricted to int, you may abuse int as unsigned. That will probably work, although it is UB according to the standard (cf. Integer overflow in C: standards and compilers).
Use two ints. This probably involves writing your own scanf and printf versions, but that shouldn't be too hard. Strictly speaking, though, you still haven't expanded the range of an int.
[Use long long] Not possible since you must use int.
You can always use some big number library. Probably not allowed either.
Keep the numbers in strings and do arithmetic digit-wise on the strings. Doesn't use int though.
But you'll never ever be able to store something > INT_MAX in an int.
Try splitting up your value (that would fit inside a 64-bit int) into two 32-bit chunks of data, then use two 32-bit ints to store it. A while ago, I wrote some code that helped me split 16-bit values into 8-bit ones. If you alter this code a bit, then you can split your 64-bit values into two 32-bit values each.
#include <stdint.h>

#define BYTE_T    uint8_t
#define TWOBYTE_T uint16_t
#define LOWBYTE(x)         ((BYTE_T)(x))                      /* low 8 bits of x  */
#define HIGHBYTE(x)        ((BYTE_T)((TWOBYTE_T)(x) >> 0x8))  /* high 8 bits of x */
#define BYTE_COMBINE(h, l) ((TWOBYTE_T)(((TWOBYTE_T)(h) << 0x8) | (BYTE_T)(l)))
I don't know if this is helpful or not, since it doesn't actually answer your original question, but at least you could store your values this way even if your platform only supports 32-bit ints.
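Adapting the same idea to the 64-bit case described above, a self-contained sketch using plain shifts and masks (rather than the macros) could look like this:

#include <cstdint>
#include <cstdio>

int main() {
    std::uint64_t big = 0x1122334455667788ULL;                  // value that needs 64 bits
    std::uint32_t hi  = static_cast<std::uint32_t>(big >> 32);  // upper 32 bits
    std::uint32_t lo  = static_cast<std::uint32_t>(big);        // lower 32 bits

    // Recombine the two 32-bit halves into the original 64-bit value.
    std::uint64_t back = (static_cast<std::uint64_t>(hi) << 32) | lo;
    std::printf("%llx -> %llx\n",
                static_cast<unsigned long long>(big),
                static_cast<unsigned long long>(back));
}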
Here is an idea to actually store values larger than INT_MAX in an int. It is based on the condition that there is only a small, known number of possible values.
You could write a compression method which computes something akin to a 2-byte hash. The hashes would have to have a bijective (1:1) relation to the known set of possible values. That way you would actually store the value (in compressed form) in the int, and not in a string as before, and thus expand the range of possible values at the cost of not being able to represent every value within that range.
The hashing algorithm would depend on the set of possible values. As a simple example let's assume that the possible values are 2^0, 2^1, 2^2... 2^32767. The obvious hash algorithm is to store the exponent in the int. A stored value of 4 would represent the value 16, 5 would represent 32, 1000 would represent a number close to 10^301 etc. One can see that one can "store" extraordinarily large numbers in a 16 bit int ;-). Less regular sets would require more complicated algorithms, of course.
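As a toy sketch of that encoding (my own example; it only decodes exponents small enough to fit in 64 bits, larger ones exist only as their stored exponent):

#include <cstdio>

int main() {
    // The only possible values are powers of two, so store just the exponent.
    int exponent = 40;                               // represents the value 2^40
    unsigned long long decoded = 1ULL << exponent;   // decoding works while it fits
    std::printf("the int holds %d, which represents %llu\n", exponent, decoded);
}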
My desktop and laptop machines have 64-bit and 32-bit Ubuntu 10.10 running on them, respectively. I use the gcc compiler.
Now on my desktop machine I observe that sizeof(long)=8 while on my laptop sizeof(long)=4.
On machines such as my laptop where sizeof(int) =sizeof(long)=4 are there any situations where would I would prefer long over int even though they cover the same range of integers?
On my desktop, of course, long would be advantageous if I want a larger range of integers (though I could have used int64_t or long long for that as well).
Don't use either of them. In modern C (starting with C89) or C++ there are typedefs whose semantics help you write portable code. int is almost always wrong; the only use case I still have for it is the return value of library functions. Otherwise use:
bool or _Bool for Booleans (if you have C++ or C99, otherwise use a typedef)
enum for applicative case distinction
size_t for counting and indexing
unsigned types when you use integers for bit patterns
ptrdiff_t (if you must) for differences of addresses
If you really have an application use for a signed integer type, use either intmax_t to be on the safe side, or one of the intXX_t types to have a type with well-defined precision and arithmetic.
Edit: If your main concern is performance with some minimum-width guarantee, use the "least" or "fast" types, e.g. int_least32_t. On all platforms that I have programmed so far there was not much of a difference between the precise-width types and the "least" types, but who knows.
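A small sketch of that advice (the variable names are mine, not from the answer):

#include <cstddef>
#include <cstdint>
#include <vector>

int main() {
    std::vector<int> samples(10, 2);

    // size_t for counting and indexing
    for (std::size_t i = 0; i < samples.size(); ++i) {
        samples[i] += 1;
    }

    // a "least" type when you need a guaranteed minimum width but want the
    // compiler to pick whatever is efficient on this platform
    std::int_least32_t total = 0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        total += samples[i];
    }

    return total == 30 ? 0 : 1;   // plain int only at the boundary (return value)
}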
On a 32-bit OS, where sizeof(int)==sizeof(long)==4 an int and a long offer the same services.
But, for portability reasons (if you compile your code in 64-bit, for example), since an int will stay at 32 bits while a long can be either 32 or 64 bits, you should use fixed-size types to avoid overflows.
For this purpose, the <stdint.h> header declares non-ambiguous types like:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
intptr_t
Where intptr_t / uintptr_t can represent pointers better than a long (the common sizeof(long)==sizeof(void*) assumption is not always true).
time_t and size_t are also types defined to make it easier to write portable code without wondering about the platform specifications.
Just make sure that, when you need to allocate memory, you use sizeof (like sizeof(size_t)) instead of assuming that a type has any given (hardcoded) value.
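For example, here is a short sketch (only standard headers; the variable names are my own) that keeps the layout fixed across 32-bit and 64-bit builds and round-trips a pointer through intptr_t:

#include <cstdint>
#include <cstdio>

int main() {
    // Fixed-width types keep a record layout identical on 32-bit and 64-bit builds.
    std::int32_t counter = 7;
    std::uint16_t flags  = 0x0001;

    // intptr_t is guaranteed to round-trip a pointer; plain long is not
    // (e.g. on 64-bit Windows, sizeof(long) == 4 but sizeof(void*) == 8).
    std::intptr_t as_integer = reinterpret_cast<std::intptr_t>(&counter);
    std::int32_t *back       = reinterpret_cast<std::int32_t *>(as_integer);

    std::printf("counter uses %zu bytes, flags %zu bytes, round-trip ok: %d\n",
                sizeof counter, sizeof flags, *back == 7);
}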
While it certainly isn't a common practice, the Linux kernel source makes the assumption that pointers -- any pointer to any type -- can fit entirely inside an unsigned long.
As #Alf noted above, you might choose either int or long for portability reasons.
On older, 16-bit operating systems, int was 16 bits and long was 32 bits;
On 32-bit Unix, DOS, and Windows (or 64-bit processors running 32-bit programs), int and long are 32 bits;
On 64-bit Unix, int is 32 bits, while long is 64 bits.
For portability reasons you should use a long if you need more than 16 bits and up to 32 bits of precision. That's really all there is to it - if you know that your values won't exceed 16 bits, then int is fine.
As I've learned recently, a long in C/C++ is the same length as an int. To put it simply, why? It seems almost pointless to even include the datatype in the language. Does it have any uses specific to it that an int doesn't have? I know we can declare a 64-bit int like so:
long long x = 0;
But why does the language choose to do it this way, rather than just making a long well...longer than an int? Other languages such as C# do this, so why not C/C++?
When writing in C or C++, every datatype is architecture and compiler specific. On one system int is 32 bits, but you can find systems where it is 16 or 64; it's not fixed by the language, so it's up to the compiler.
As for long and int, it comes from the days when the standard integer was 16 bits and long was a 32-bit integer - and it was indeed longer than int.
The specific guarantees are as follows:
char is at least 8 bits (1 byte by definition, however many bits it is)
short is at least 16 bits
int is at least 16 bits
long is at least 32 bits
long long (in versions of the language that support it) is at least 64 bits
Each type in the above list is at least as wide as the previous type (but may well be the same).
Thus it makes sense to use long if you need a type that's at least 32 bits, int if you need a type that's reasonably fast and at least 16 bits.
Actually, at least in C, these lower bounds are expressed in terms of ranges, not sizes. For example, the language requires that INT_MIN <= -32767, and INT_MAX >= +32767. The 16-bit requirement follows from this and from the requirement that integers are represented in binary.
C99 adds <stdint.h> and <inttypes.h>, which define types such as uint32_t, int_least32_t, and int_fast16_t; these are typedefs, usually defined as aliases for the predefined types.
(There isn't necessarily a direct relationship between size and range. An implementation could make int 32 bits, but with a range of only, say, -2^23 .. +2^23-1, with the other 8 bits (called padding bits) not contributing to the value. It's theoretically possible (but practically highly unlikely) that int could be larger than long, as long as long has at least as wide a range as int. In practice, few modern systems use padding bits, or even representations other than 2's-complement, but the standard still permits such oddities. You're more likely to encounter exotic features in embedded systems.)
long is not the same length as an int. According to the specification, long is at least as large as int. For example, on Linux x86_64 with GCC, sizeof(long) = 8, and sizeof(int) = 4.
long is not the same size as int, it is at least the same size as int. To quote the C++03 standard (3.9.1-2):
There are four signed integer types: "signed char", "short int", "int", and "long int." In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.
My interpretation of this is "just use int, but if for some reason that doesn't fit your needs and you are lucky to find another integral type that's better suited, be our guest and use that one instead". One way that long might be better is if you're on an architecture where it is... longer.
I was looking for something completely unrelated, stumbled across this, and needed to answer. Yeah, this is old, so this is for the people who surf on in later...
Frankly, I think all the answers on here are incomplete.
The size of a long is the number of bits your processor can operate on at one time. It's also called a "word". A "half-word" is a short. A "doubleword" is a long long and is twice as large as a long (and originally was only implemented by vendors, not standard), and even bigger than a long long is a "quadword", which is twice the size of a long long but had no formal name (and is not really standard either).
Now, where does the int come in? Partly from the registers on your processor, and partly from your OS. Your registers define the native sizes the CPU handles, which in turn define the size of things like the short and long. Processors are also designed with a data size that is the most efficient size for them to operate on. That should be an int.
On today's 64-bit machines you'd assume, since a long is a word and a word on a 64-bit machine is 64 bits, that a long would be 64 bits and an int whatever the processor is designed to handle, but it might not be. Why? Your OS has chosen a data model and defined these data sizes for you (pretty much by how it's built). Ultimately, if you're on Windows (and using Win64) it's 32 bits for both a long and an int. Solaris and Linux use different definitions (the long is 64 bits). These definitions are called things like ILP64, LP64, and LLP64. Windows uses LLP64 and Solaris and Linux use LP64:
Model       ILP64   LP64   LLP64
int           64     32     32
long          64     64     32
pointer       64     64     64
long long     64     64     64
Where, e.g., ILP means int-long-pointer, and LLP means long-long-pointer
To get around this, most compilers let you set the size of an integer directly, with fixed-size types such as int32_t and int64_t (or vendor-specific ones like __int32 and __int64).
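A rough sketch (my own, based on the table above) that guesses which data model a build is using:

#include <cstdio>

int main() {
    // Compare the sizes of int, long, and pointers against the table above.
    if (sizeof(void *) == 4) {
        std::printf("32-bit model (e.g. ILP32)\n");
    } else if (sizeof(int) == 8) {
        std::printf("looks like ILP64\n");
    } else if (sizeof(long) == 8) {
        std::printf("looks like LP64 (typical 64-bit Unix/Linux)\n");
    } else {
        std::printf("looks like LLP64 (typical Win64)\n");
    }
}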
Considering that the following statements return 4, what is the difference between the int and long types in C++?
sizeof(int)
sizeof(long)
From this reference:
An int was originally intended to be the "natural" word size of the processor. Many modern processors can handle different word sizes with equal ease.
Also, this bit:
On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32 bit processors and most compilers for these platforms use a 32 bit int which has the same size and representation as a long.
The guarantees the standard gives you go like this:
1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
So it's perfectly valid for sizeof (int) and sizeof (long) to be equal, and many platforms choose to go with this approach. You will find some platforms where int is 32 bits, long is 64 bits, and long long is 128 bits, but it seems very common for sizeof (long) to be 4.
(Note that long long is recognized in C from C99 onwards, but was normally implemented as an extension in C++ prior to C++11.)
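Those guarantees are easy to turn into compile-time checks; this sketch passes on any conforming C++11 implementation:

// Mirrors the chain of guarantees quoted above.
static_assert(1 == sizeof(char),                  "char is one byte by definition");
static_assert(sizeof(char)  <= sizeof(short),     "short is at least as wide as char");
static_assert(sizeof(short) <= sizeof(int),       "int is at least as wide as short");
static_assert(sizeof(int)   <= sizeof(long),      "long is at least as wide as int");
static_assert(sizeof(long)  <= sizeof(long long), "long long is at least as wide as long");

int main() {}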
You're on a 32-bit machine or a 64-bit Windows machine. On my 64-bit machine (running a Unix-derivative O/S, not Windows), sizeof(int) == 4, but sizeof(long) == 8.
They're different types — sometimes the same size as each other, sometimes not.
(In the really old days, sizeof(int) == 2 and sizeof(long) == 4 — though that might have been the days before C++ existed, come to think of it. Still, technically, it is a legitimate configuration, albeit unusual outside of the embedded space, and quite possibly unusual even in the embedded space.)
On platforms where they both have the same size the answer is nothing. They both represent signed 4-byte values.
However you cannot depend on this being true. The size of long and int are not definitively defined by the standard. It's possible for compilers to give the types different sizes and hence break this assumption.
The long must be at least the same size as an int, and possibly, but not necessarily, longer.
On common 32-bit systems, both int and long are 4-bytes/32-bits, and this is valid according to the C++ spec.
On other systems, int and long may be different sizes. I used to work on a platform where int was 2 bytes and long was 4 bytes.
A typical best practice is not to use long/int/short directly. Instead, depending on your compiler and OS, wrap them in a header file to ensure they hold exactly the number of bits you want, then use int8/int16/int32 instead of long/int/short. For example, on 32-bit Linux you could define a header like this:
typedef char int8;
typedef short int16;
typedef int int32;
typedef unsigned int uint32;
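If you go this route, it is worth making the header check its own assumptions at compile time; here is a sketch that repeats the typedefs above so it stands alone and then verifies them:

typedef char           int8;
typedef short          int16;
typedef int            int32;
typedef unsigned int   uint32;

// These checks fail loudly if the header is ever used on a platform where the
// underlying types have different sizes.
static_assert(sizeof(int8)   == 1, "int8 must be 1 byte");
static_assert(sizeof(int16)  == 2, "int16 must be 2 bytes");
static_assert(sizeof(int32)  == 4, "int32 must be 4 bytes");
static_assert(sizeof(uint32) == 4, "uint32 must be 4 bytes");

On a modern toolchain you could also skip the hand-rolled typedefs and use the fixed-width types from <cstdint> (int8_t, int16_t, int32_t, uint32_t) directly.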