I have read that the signed int range is [−32767, +32767], but I can write, for example:
#include <stdio.h>

int main(void)
{
    int a = 70000;          /* both values exceed 32767 */
    int b = 71000;
    int c = a + b;
    printf("%i\n", c);
    return 0;
}
And the output is 141000 (correct). Should not the debugger tell me
"this operation is out of range" or something similar?
I suppose this has to do with me not knowing the basics of C programming, but none of the books that I'm currently reading say anything about this "issue".
EDIT:
2147483647 seems to be the upper limit, thank you. If a sum exceeds that number, the result is negative, which is expected, BUT if it is a subtraction, for example 2147483649 - 2147483647 = 2, the result is still good. I mean, why is the value 2147483649 correctly held for that subtraction (or at least it seems so to me)?
The range [−32767, +32767] is the required minimum range. An implementation is allowed to provide a larger range.
All types are compiler-dependent. int used to be the "native word" of the underlying hardware, which on 16-bit systems meant that int was 16 bits (which leads to the -32k to +32k range). When 32-bit systems came along, int naturally followed and became 32 bits, which can store values of roughly -2 billion to +2 billion.
However, this "native word" convention for int did not carry over to 64-bit systems; I know of no 64-bit system or compiler where int is 64 bits.
See e.g. this reference of integer types for more information.
In C++, int is at least 16 bits wide, but typically 32 bits on modern hardware. You can write INT_MIN and INT_MAX and check for yourself.
Note that signed integer overflow is undefined behavior; you are not guaranteed to get a warning, except perhaps with high compiler warning levels and a debug build.
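A minimal sketch of that check, using <climits> and std::numeric_limits (the exact values printed depend on your platform):

#include <climits>
#include <iostream>
#include <limits>

int main()
{
    // Both lines report the same range; numeric_limits is the more idiomatic C++ way.
    std::cout << INT_MIN << " to " << INT_MAX << '\n';
    std::cout << std::numeric_limits<int>::min() << " to "
              << std::numeric_limits<int>::max() << '\n';
}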
You have misunderstood. The standard guarantees that an int can hold at least [-32767, +32767], but it is permitted to hold more. (In particular, nearly every compiler you are likely to use allows the range [-2147483648, 2147483647].)
There is another problem. If you make the values you assign to a and b bigger, you still probably won't get any warning or error. Integer overflow causes "undefined behaviour", and literally anything is allowed to happen.
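If you want to catch this at run time, you have to test before the addition. A minimal sketch (safe_add is my name for the helper, not a standard function):

#include <climits>
#include <iostream>

// Returns false instead of performing an addition that would overflow.
bool safe_add(int a, int b, int& out)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return false;               // a + b would overflow: undefined behaviour
    out = a + b;
    return true;
}

int main()
{
    int result = 0;
    if (safe_add(2000000000, 2000000000, result))
        std::cout << result << '\n';
    else
        std::cout << "overflow\n";  // this branch is taken when int is 32 bits
}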
If an int is four bytes, the unsigned maximum is 4294967295, the signed maximum is 2147483647, and the signed minimum is -2147483648.
unsigned int ui = ~0u;     // all bits set: UINT_MAX
int max = ui >> 1;         // clear the top bit: INT_MAX on two's complement
int min = ~max;            // flip every bit: INT_MIN on two's complement
int size = sizeof(max);    // number of bytes in an int (typically 4)
While the standard only guarantees that int is at least 16 bits, it is usually implemented as a 32-bit value.
The size of an int (and the maximum value it can hold) depends on the compiler and the computer you are using. There is no guarantee that it will have 2 bytes or 4 bytes, but there is a guaranteed minimum size for the C++ types.
You can see a list of minimum sizes for C++ types on this page: http://www.cplusplus.com/doc/tutorial/variables/
Related
I'm using cout to print digits to the console. I am also storing values of up to 13+ billion as a number and doing computations on them. What data type should I use?
When I do the following:
int a = 6800000000;
cout << a;
It prints -1789934592.
thanks.
long long can hold up to 9223372036854775807. Use something like gmp if you need larger.
Use int64_t to guarantee you won't overflow. It is available from stdint.h.
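A minimal sketch of both suggestions (13 billion fits comfortably in either type):

#include <cstdint>
#include <iostream>

int main()
{
    long long a = 6800000000LL;          // the LL suffix makes the literal long long
    std::int64_t b = 13000000000LL;      // fixed-width 64-bit type from <cstdint>
    std::cout << a << ' ' << b << '\n';  // prints 6800000000 13000000000
}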
Just a note that both int64_t and long long are included in C99 and in C++0x, but not in the current version of C++. As such, using either puts your code at some risk of being non-portable. Realistically, however, that risk is probably already pretty low -- to the point that when/if you port your code, there are likely to be much bigger problems.
If, however, you really want to assure against that possibility, you might consider using a double precision floating point. Contrary to popular belief, floating point types can represent integers exactly up to a certain limit -- that limit being set (in essence) by the size of the mantissa in the F.P. type. The typical implementation of a double has a 53-bit mantissa, so you can represent 53-bit integers with absolute precision. That supports numbers up to 9,007,199,254,740,992 (which is substantially more than 13 of either of the popular meanings of "billion").
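A minimal sketch of that 2^53 limit (the second line shows where exactness stops: 2^53 + 1 has no double representation and rounds back down to 2^53):

#include <cmath>
#include <iostream>

int main()
{
    double limit = std::pow(2.0, 53);   // 9007199254740992
    std::cout.precision(17);
    std::cout << limit - 1 << '\n';     // 9007199254740991: represented exactly
    std::cout << limit + 1 << '\n';     // prints 9007199254740992: rounded back to 2^53
}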
Your data type (int) is too small to hold such large numbers. You should use a larger data type or one of the fixed-size data types given in the other answer (though you should really use uint64_t if you're not using negative numbers).
It's a good idea to understand the range limits of different sized types.
A 32 bit type (on most 32 bit platforms, both int and long are 32 bit) has the following ranges:
signed: -2,147,483,648 to 2,147,483,647
unsigned: 0 to 4,294,967,295
While 64 bit types (long long is typically 64 bits, and on most 64-bit Unix platforms a long is also 64 bits) have the following ranges:
signed: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
unsigned: 0 to 18,446,744,073,709,551,615
just use double in the declaration statement
You could use a long int:
long int a;
Or, if it's always going to be positive, an unsigned long int:
unsigned long int a;
See: http://forums.guru3d.com/showthread.php?t=131678
unsigned long long
can be used
Everyone knows this: int is smaller than long.
Following this MSDN link, I'm reading the following:
INT_MIN (Minimum value for a variable of type int.) –2147483648
INT_MAX (Maximum value for a variable of type int.) 2147483647
LONG_MIN (Minimum value for a variable of type long.) –2147483648
LONG_MAX (Maximum value for a variable of type long.) 2147483647
The same information can be found here.
Have I been told a lie my whole life? What is the difference between int and long if not the values they can hold? How come?
You've mentioned both C++ and ASP.NET. The two are very different.
As far as the C and C++ specifications are concerned, the only thing you know about a primitive data type is the minimum range of values it is guaranteed to store. Prepare for your first surprise - int corresponds to a range of [-32767; 32767]. Most people today think that int is a 32-bit number, but it's really only guaranteed to be able to store the equivalent of a 16-bit number, almost. Also note that the range isn't the more typical [-32768; 32767], because C was designed as a common abstract machine for a wide range of platforms, including platforms that didn't use 2's complement for their negative numbers.
It shouldn't therefore be surprising that long is actually a "sort-of-32-bit" data type. This doesn't mean that C++ implementations on Linux (which commonly use a 64-bit number for long) are wrong, but it does mean that C++ applications written for Linux that assume that long is 64-bit are wrong. This is a lot of fun when porting C++ applications to Windows, of course.
The standard 64-bittish integer type to use is long long, and that is the standard way of declaring a 64-bittish integer on Windows.
However, .NET cares about no such things, because it is built from the ground up on its own specification - in part exactly because of how history-laden C and C++ are. In .NET, int is a 32-bit integer, and long is a 64-bit integer, and long is always bigger than int. In C, if you used long (32-bittish) and stored a value like ten trillion in there, there was a chance it would work, since it's possible that your long was actually a 64-bit number, and C didn't care about the distinction - that's exactly what happens on most Linux C and C++ compilers. Since the types are defined like this for performance reasons, it's perfectly legal for the compiler to use a 32-bit data type to store an 8-bit value (keep that in mind when you're "optimizing for performance" - the compiler is doing optimizations of its own). .NET can still run on platforms that don't have e.g. 32-bit 2's complement integers, but the runtime must ensure that the type can hold as much as a 32-bit 2's complement integer, even if that means taking the next bigger type ("wasting" twice as much memory, usually).
In C and C++ the requirements are that int must be at least 16 bits, long must be at least 32 bits, and int cannot be larger than long. There is no requirement that int be smaller than long, although compilers often implement them that way. You haven't been told a lie, but you've been told an oversimplification.
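Those guarantees can be checked at compile time with C++11 static_assert; a minimal sketch (strictly the standard constrains value ranges rather than object sizes, but in practice the sizes follow suit; CHAR_BIT is the number of bits per byte, almost always 8):

#include <climits>

static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
static_assert(sizeof(int) <= sizeof(long), "int is not larger than long");

int main() {}   // nothing to do at run time; the checks happen at compile time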
This is C++
On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32 bit processors and most compilers for these platforms use a 32 bit int which has the same size and representation as a long.
See the reference: http://tsemba.org/c/inttypes.html
No! Well! It's like being told since childhood that the sun rises in the east and sets in the west. (The Sun doesn't actually move, after all!)
In earlier processing environments, where we had 16-bit operating systems, an integer was considered to be 16 bits (2 bytes) and a long to be 4 bytes (32 bits).
But with the advent of 32-bit and 64-bit operating systems, an integer is said to consist of 32 bits (4 bytes) and a long to be "at least as big as an integer", hence 32 bits again. That explains the equality between the maximum and minimum ranges that int and long can take.
Hence, this depends entirely on the architecture of your system.
I'm investigating a standard for my team around using size_t vs int (or long, etc). The biggest drawback I've seen pointed out is that taking the difference of two size_t objects can cause problems (I'm unsure of the specific problems -- maybe something isn't two's complement and the signed/unsigned mix angers the compiler). I wrote a quick program in C++ using the V120 VS2013 compiler that allowed me to do the following:
#include <cstddef>
#include <iostream>

int main()
{
    std::size_t a = 10;
    std::size_t b = 100;
    int result = a - b;           // a - b wraps to a huge unsigned value, then is converted to int
    std::cout << result << '\n';  // prints -90 here
}
The program resulted in -90, which, although correct, makes me nervous about type mismatches, signed/unsigned problems, or just plain undefined behavior if size_t happens to get used in more complex math.
My question is whether it's safe to do math with size_t objects, specifically taking the difference. I'm considering using size_t as a standard for things like indexes. I've seen some interesting posts on the topic here, but they don't address the math issue (or I missed it).
What type for subtracting 2 size_t's?
typedef for a signed type that can contain a size_t?
This is not guaranteed to work portably, but is not UB either. The code must run without error, but the resulting int value is implementation defined. So as long as you are working on platforms that guarantee the desired behavior, this is fine (as long as the difference can be represented by an int of course), otherwise, just use signed types everywhere (see last paragraph).
Subtracting two std::size_ts will yield a new std::size_t† and its value will be determined by wrapping. In your example, assuming 64 bit size_t, a - b will equal 18446744073709551526. This does not fit into an (commonly used 32 bit) int, so an implementation defined value is assigned to result.
To be honest, I would recommend to not use unsigned integers for anything but bit magic. Several members of the standard committee agree with me: https://channel9.msdn.com/Events/GoingNative/2013/Interactive-Panel-Ask-Us-Anything 9:50, 42:40, 1:02:50
Rule of thumb (paraphrasing Chandler Carruth from the above video): If you could count it yourself, use int, otherwise use std::int64_t.
† Unless its conversion rank is less than int, e.g. if std::size_t is unsigned short. In that case, the result is an int and everything will work fine (unless int is not wider than short). However, I do not know of any platform that does this.
This would still be platform specific; see the first paragraph.
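To see the wrapping concretely, a minimal sketch (the exact value assumes a 64-bit size_t, as in the answer above):

#include <cstddef>
#include <iostream>

int main()
{
    std::size_t a = 10;
    std::size_t b = 100;
    std::cout << a - b << '\n';   // prints 18446744073709551526 with a 64-bit size_t
}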
The size_t type is unsigned. The subtraction of any two size_t values is defined behavior.
However, firstly, if a larger value is subtracted from a smaller one, the numeric result depends on the implementation-defined range of size_t: the result is the mathematical value, reduced to the smallest non-negative residue modulo SIZE_MAX + 1. For instance, if the largest value of size_t is 65535, and the mathematical result of subtracting two size_t values is -3, then the result will be 65536 - 3 = 65533. On a different compiler or machine with a different size_t, the numeric value will be different.
Secondly, a size_t value might be out of range of the type int. If that is the case, we get a second implementation-defined result arising from the forced conversion. In this situation, any behavior can apply; it just has to be documented by the implementation, and the conversion must not fail. For instance, the result could be clamped into the int range, producing INT_MAX. A common behavior seen on two's complement machines (virtually all) in the conversion of wider (or equal width) unsigned types to narrower signed types is simple bit truncation: enough bits are taken from the unsigned value to fill the signed value, including its sign bit.
Because of the way two's complement works, if the original arithmetically correct abstract result itself fits into int, then the conversion will produce that result.
For instance, suppose that the subtraction of a pair of 64 bit size_t values on a two's complement machine yields the abstract arithmetic value -3, which becomes the positive value 0xFFFFFFFFFFFFFFFD. When this is coerced into a 32 bit int, the common behavior seen in many compilers for two's complement machines is that the lower 32 bits are taken as the image of the resulting int: 0xFFFFFFFD. And, of course, that is just the value -3 in 32 bits.
So the upshot is, that your code is de facto quite portable because virtually all mainstream machines are two's complement with conversion rules based on sign extension and bit truncation, including between signed and unsigned.
Except that sign extension doesn't occur when a value is widened while converting from unsigned to signed. Thus the one problem is the rare situation in which int is wider than size_t. If a 16 bit size_t result is 65533, due to 4 being subtracted from 1, this will not produce a -3 when converted to a 32 bit int; it will produce 65533!
If you don't use size_t, you are screwed: size_t is the one type that exists to be used for memory sizes, and which is consequently guaranteed to always be big enough for that purpose. (uintptr_t is quite similar, but it's neither the first such type, nor is it used by the standard libraries, nor is it available without including stdint.h.) If you use an int, you can get undefined behavior when your allocations exceed 2GiB of address space (or 32kiB if you are on a platform where int is only 16 bits!), even though the machine has more memory and you are executing in 64 bit mode.
If you need a difference of size_t that may become negative, use the signed variant ssize_t.
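If you do want a signed difference of two sizes, a minimal sketch (size_diff is a hypothetical helper, not a standard function; it assumes the sizes involved fit in a 64-bit signed type):

#include <cstddef>
#include <cstdint>
#include <iostream>

// Signed difference of two sizes without relying on implementation-defined narrowing.
std::int64_t size_diff(std::size_t a, std::size_t b)
{
    return (a >= b) ? static_cast<std::int64_t>(a - b)
                    : -static_cast<std::int64_t>(b - a);
}

int main()
{
    std::cout << size_diff(10, 100) << '\n';   // prints -90
}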
If the range of int only goes up to 32767, but I have to input a value of around 50000 and use it, I want to input it without using long and, if possible, without typecasting either. Is there any way to do it? I want the data type to remain int only.
A built-in type cannot be altered or expanded in any sense. You have to switch to a different type.
The type int has the following requirements:
represents at least the range -32767 to 32767 (16bit)
is at least as large as short (sizeof(short) <= sizeof(int))
This means that, strictly speaking (although most platforms use at least 32 bits for int), you can't safely store the value 50000 in an int.
If you need a guaranteed range, use int16_t, int32_t or int64_t. They are defined in the header <cstdint>. There is no arbitrary precision integer type in the language or in the standard library.
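A minimal sketch of that, storing 50000 in a type that is guaranteed to be wide enough:

#include <cstdint>
#include <iostream>

int main()
{
    std::int32_t value = 50000;   // int32_t is exactly 32 bits, so 50000 always fits
    std::cout << value << '\n';
}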
If you only need to observe the range of valid integers, use the header <limits>:
std::cout << std::numeric_limits<int>::min() << " to " << std::numeric_limits<int>::max() << "\n";
You may try unsigned int. It's the same as int but with a positive-only range (if you really don't want to use long).
see this for the range of data types
Suggestion:
You might as well consider switching your compiler. From the range you've mentioned for int, it seems you are using a 16-bit compiler (probably Turbo C). A 16-bit compiler restricts the unsigned int range to 0 to 65,535 (2^16 − 1) and the signed int range to –32,768 to 32,767.
No!
An int depends on the native machine word, which really means it depends on 3 things - the processor, the OS, and the compiler.
The only way you can "increase" an int foo; (not a long foo;, int is not a long) is:
You are compiling with Turbo-C or a legacy 16-bit DOS compiler on a modern computer, likely because your university requires you to use that, because that's what your professor knows. Switch the compiler. If your professor insists you use it, switch the university.
You are compiling with a 32-bit compiler on a 64-bit OS. Switch the compiler.
You have 32-bit OS on a 64-bit computer. Reinstall a 64-bit OS.
You have a 32-bit processor. Buy a new computer.
You have a 16-bit processor. Really, buy a new computer.
Several possibilities come to mind.
@abcthomas had the idea to use unsigned; since you are restricted to int, you may abuse int as unsigned. That will probably work, although it is UB according to the standard (cf. Integer overflow in C: standards and compilers).
Use two ints. That probably involves writing your own scanf and printf versions, but it shouldn't be too hard. Strictly speaking, though, you still haven't expanded the range of an int.
[Use long long] Not possible, since you must use int.
You can always use some big number library. Probably not allowed either.
Keep the numbers in strings and do arithmetic digit-wise on the strings (a sketch follows below). Doesn't use int though.
But you'll never ever be able to store something > INT_MAX in an int.
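A minimal sketch of the digit-wise idea, for addition of non-negative numbers held in strings (add_strings is my name for it, not anything standard):

#include <algorithm>
#include <iostream>
#include <string>

// Adds two non-negative decimal numbers given as strings, digit by digit.
std::string add_strings(const std::string& a, const std::string& b)
{
    std::string result;
    int carry = 0;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    while (i >= 0 || j >= 0 || carry != 0) {
        int sum = carry;
        if (i >= 0) sum += a[i--] - '0';
        if (j >= 0) sum += b[j--] - '0';
        result.push_back(static_cast<char>('0' + sum % 10));
        carry = sum / 10;
    }
    std::reverse(result.begin(), result.end());
    return result;
}

int main()
{
    std::cout << add_strings("32767", "32767") << '\n';   // prints 65534
}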
Try splitting up your value (that would fit inside a 64-bit int) into two 32-bit chunks of data, then use two 32-bit ints to store it. A while ago, I wrote some code that helped me split 16-bit values into 8-bit ones. If you alter this code a bit, then you can split your 64-bit values into two 32-bit values each.
#include <stdint.h>

#define BYTE_T uint8_t
#define TWOBYTE_T uint16_t
#define LOWBYTE(x) ((BYTE_T)((x) & 0xFF))                 /* low 8 bits */
#define HIGHBYTE(x) ((BYTE_T)((TWOBYTE_T)(x) >> 8))       /* high 8 bits */
#define BYTE_COMBINE(h, l) ((TWOBYTE_T)(((TWOBYTE_T)(h) << 8) | (BYTE_T)(l)))
I don't know if this is helpful or not, since it doesn't actually answer your original question, but at least you could store your values this way even if your platform only supports 32-bit ints.
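Adapted to the 64-bit case the question actually needs, a minimal sketch (assuming unsigned values for simplicity):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t value = 6800000000ULL;                            // too big for a 32-bit int
    std::uint32_t high = static_cast<std::uint32_t>(value >> 32);   // upper 32 bits
    std::uint32_t low  = static_cast<std::uint32_t>(value);         // lower 32 bits

    std::uint64_t back = (static_cast<std::uint64_t>(high) << 32) | low;
    std::cout << back << '\n';                                      // prints 6800000000
}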
Here is an idea to actually store values larger than MAX_INT in an int. It is based on the condition that there is only a small, known number of possible values.
You could write a compression method which computes something akin to a 2-byte hash. The hashes would have to have a bijective (1:1) relation to the known set of possible values. That way you would actually store the value (in compressed form) in the int, and not in a string as before, and thus expand the range of possible values at the cost of not being able to represent every value within that range.
The hashing algorithm would depend on the set of possible values. As a simple example let's assume that the possible values are 2^0, 2^1, 2^2... 2^32767. The obvious hash algorithm is to store the exponent in the int. A stored value of 4 would represent the value 16, 5 would represent 32, 1000 would represent a number close to 10^301 etc. One can see that one can "store" extraordinarily large numbers in a 16 bit int ;-). Less regular sets would require more complicated algorithms, of course.
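A minimal sketch of that simple example, assuming the only possible values are exact powers of two (the function names encode_pow2 and decode_pow2 are mine, and decoding is limited here to exponents below 64):

#include <cstdint>
#include <iostream>

// Store only the exponent: the "hash" of a power of two.
int encode_pow2(std::uint64_t value)
{
    int e = 0;
    while (value > 1) {
        value >>= 1;
        ++e;
    }
    return e;                        // fits comfortably in a 16-bit int
}

// Recover the original value (only valid here for exponents below 64).
std::uint64_t decode_pow2(int e)
{
    return std::uint64_t{1} << e;
}

int main()
{
    std::cout << encode_pow2(16) << ' ' << decode_pow2(4) << '\n';   // prints 4 16
}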