C++ Sum of numbers inserted by the user less than 10billion [duplicate] - c++

I'm using cout to print numbers to the console. I am also storing values of up to 13+ billion and doing computations on them. What data type should I use?
When I do the following:
int a = 6800000000;
cout << a;
It prints -1789934592.
thanks.

long long can hold up to 9223372036854775807. Use something like GMP if you need larger.

Use int64_t to guarantee you won't overflow. It is available from stdint.h.
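As a concrete illustration of both suggestions, here is a minimal sketch of the snippet from the question rewritten with a 64-bit type; the LL suffix keeps the literal itself from being truncated (variable names are just placeholders):
#include <cstdint>
#include <iostream>

int main() {
    long long a = 6800000000LL;       // LL suffix: the literal itself is 64-bit
    std::int64_t b = 13000000000LL;   // same idea with a fixed-width type from <cstdint>
    std::cout << a << "\n" << b << "\n";   // prints 6800000000 and 13000000000
    return 0;
}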

Just a note that both int64_t and long long are included in C99 and in C++0x, but not in the current version of C++. As such, using either does put your code at some risk of being non-portable. Realistically, however, that risk is probably already pretty low -- to the point that when/if you port your code, there are likely to be much bigger problems.
If, however, you really want to guard against that possibility, you might consider using double-precision floating point. Contrary to popular belief, floating-point types can represent integers exactly up to a certain limit -- that limit being set (in essence) by the size of the mantissa. The typical implementation of a double has a 53-bit mantissa, so you can represent 53-bit integers with absolute precision. That supports numbers up to 9,007,199,254,740,992 (which is substantially more than 13 of either of the popular meanings of "billion").
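A quick sketch of where that exactness ends, assuming an IEEE-754 double as on virtually all current hardware:
#include <iostream>

int main() {
    double exact = 9007199254740992.0;       // 2^53, still represented exactly
    double above = 9007199254740993.0;       // 2^53 + 1, not representable
    std::cout.precision(17);
    std::cout << exact << "\n";              // 9007199254740992
    std::cout << above << "\n";              // also prints 9007199254740992
    std::cout << (exact == above) << "\n";   // 1 -- both literals collapsed to the same double
    return 0;
}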

Your data type (int) is too small to hold such large numbers. You should use a larger data type or one of the fixed size data types as given in the other answer (though you should really use uint64_t if you're not using negative numbers).

It's a good idea to understand the range limits of different sized types.
A 32-bit type (on most 32-bit platforms, both int and long are 32 bits) has the following ranges:
signed: -2,147,483,648 to 2,147,483,647
unsigned: 0 to 4,294,967,295
While 64-bit types (long long is typically 64 bits; on most 64-bit Unix platforms long is also 64) have the following ranges:
signed: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
unsigned: 0 to 18,446,744,073,709,551,615
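If you'd rather not memorise those numbers, the limits can be queried at compile time; a small sketch using <limits>:
#include <iostream>
#include <limits>

int main() {
    std::cout << "int:                " << std::numeric_limits<int>::min() << " to "
              << std::numeric_limits<int>::max() << "\n";
    std::cout << "long long:          " << std::numeric_limits<long long>::min() << " to "
              << std::numeric_limits<long long>::max() << "\n";
    std::cout << "unsigned long long: 0 to "
              << std::numeric_limits<unsigned long long>::max() << "\n";
    return 0;
}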

Just use double in the declaration statement.

You could use a long int:
long int a;
Or if it's always going to be positive, an unsigned long int:
unsigned long int a;
See: http://forums.guru3d.com/showthread.php?t=131678

unsigned long long can be used.

Related

storing big numbers (10-12 digits ) in an unordered map <string, int>

How do I store big numbers (10 digits long) as values for keys in an unordered map in C++? I get this warning: "warning: overflow in implicit constant conversion [-Woverflow]". All the values for the corresponding keys print differently from what they were initialised to.
On typical modern hardware, int has 32 bits, which would allow, in two's complement, for values in the range [-2 147 483 648; 2 147 483 647]. If your values don't fit into this range, you need a larger data type.
Be aware, though, that the standard only guarantees a much smaller range of [-32767; 32767], and hardware that uses only 16 bits for int still exists even today (if you wonder why not -32768: well, the standard covers architectures based on one's complement or sign magnitude as well...).
Typically, long has a 32-bit range as well (which is the standard's minimum), but e.g. on 64-bit Linux it has 64 bits, and long long is guaranteed to be at least 64 bits.
Quite a mess, as you see... If you need a guaranteed range, the best thing you can do is use the data types from the <cstdint> header, like int64_t (or, if you don't deal with negative values, preferably uint64_t).
There are quite a number of other useful types there, e.g. uint_least8_t (the smallest data type that has at least 8 bits, for when you write portable code and need to cover platforms that might not be able to provide uint8_t) or uint_fast8_t (a data type with at least 8 bits that can be accessed fastest on the given platform; quite often, this is larger than uint8_t). Got curious? Why not read a bit further?
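Applied to the question above, a minimal sketch of the map with a 64-bit value type (the keys and values here are made up for illustration):
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // uint64_t comfortably holds 10-12 digit values; a plain int would overflow
    std::unordered_map<std::string, std::uint64_t> numbers;
    numbers["alice"] = 9876543210ULL;      // 10 digits, fine in a uint64_t
    numbers["bob"]   = 123456789012ULL;    // 12 digits, still fine
    std::cout << numbers["alice"] << "\n" << numbers["bob"] << "\n";
    return 0;
}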
In C++, int can store this much:
int (4 bytes): -2147483648 to 2147483647
long long (or long on 64-bit Unix platforms) can store -9223372036854775808 to +9223372036854775807
or you can just go for unsigned long long, which can store from 0 to 18446744073709551615
For a larger number than that you can try the Boost libraries, or make a string-based function that stores the number and converts it for math operations when you need it.
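A rough sketch of that string-based idea for addition; it handles non-negative integers only, and the function name is just illustrative, not a full bignum:
#include <algorithm>
#include <iostream>
#include <string>

// Add two non-negative decimal numbers given as strings, digit by digit.
std::string add_strings(const std::string& a, const std::string& b) {
    std::string result;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    int carry = 0;
    while (i >= 0 || j >= 0 || carry) {
        int sum = carry;
        if (i >= 0) sum += a[i--] - '0';
        if (j >= 0) sum += b[j--] - '0';
        result.push_back(static_cast<char>('0' + sum % 10));
        carry = sum / 10;
    }
    std::reverse(result.begin(), result.end());
    return result;
}

int main() {
    std::cout << add_strings("9223372036854775807", "1") << "\n";  // 9223372036854775808
    return 0;
}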

Is there a way to increase the size of an int in C++ without using long?

If the range of int only goes up to 32767, but I have to input a value of around 50000 and use it, can I do that without using long and, if possible, without typecasting? I want the datatype to remain int only.
No built-in type can be altered or expanded in any sense. You have to switch to a different type.
The type int has the following requirements:
represents at least the range -32767 to 32767 (16 bits)
is at least as large as short (sizeof(short) <= sizeof(int))
This means, that strictly speaking (although most platforms use at least 32bit for int), you can't safely store the value 50000 in an int.
If you need a guaranteed range, use int16_t, int32_t or int64_t. They are defined in the header <cstdint>. There is no arbitrary precision integer type in the language or in the standard library.
If you only need to observe the range of valid integers, use the header <limits>:
std::cout << std::numeric_limits<int>::min() << " to " << std::numeric_limits<int>::max() << "\n";
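As a hedged sketch of that guaranteed-range route (the variable name is just for illustration), int_least32_t always exists, even on platforms without an exact 32-bit type:
#include <cstdint>
#include <iostream>

int main() {
    std::int_least32_t value = 50000;   // guaranteed to hold at least the 32-bit range
    value *= 2;                         // 100000 still fits
    std::cout << value << "\n";
    return 0;
}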
You may try unsigned int. It's the same size as int but with an all-positive range (if you really don't want to use long).
See a reference on the ranges of the data types.
suggestion:
You might as well consider switching your compiler. From the range you've mentioned for int, it seems you are using a 16-bit compiler (probably Turbo C). A 16-bit compiler restricts unsigned int to 0 to 65535 (2^16 - 1) and signed int to -32,768 to 32,767.
No!
An int depends on the native machine word, which really means it depends on 3 things - the processor, the OS, and the compiler.
The only way you can "increase" an int foo; (not a long foo;, int is not a long) is:
You are compiling with Turbo-C or a legacy 16-bit DOS compiler on a modern computer, likely because your university requires you to use that, because that's what your professor knows. Switch the compiler. If your professor insists you use it, switch the university.
You are compiling with a 32-bit compiler on a 64-bit OS. Switch the compiler.
You have 32-bit OS on a 64-bit computer. Reinstall a 64-bit OS.
You have 32-bit processor. Buy a new computer.
You have a 16-bit processor. Really, buy a new computer.
Several possibilities come to mind.
@abcthomas had the idea to use unsigned; since you are restricted to int, you may abuse int as unsigned. That will probably work, although it is UB according to the standard (cf. Integer overflow in C: standards and compilers).
Use two ints. This probably involves writing your own scanf and printf versions, but that shouldn't be too hard. Strictly speaking, though, you still haven't expanded the range of an int.
[Use long long] Not possible since you must use int.
You can always use some big number library. Probably not allowed either.
Keep the numbers in strings and do arithmetic digit-wise on the strings. Doesn't use int though.
But you'll never ever be able to store something > INT_MAX in an int.
Try splitting up your value (that would fit inside a 64-bit int) into two 32-bit chunks of data, then use two 32-bit ints to store it. A while ago, I wrote some code that helped me split 16-bit values into 8-bit ones. If you alter this code a bit, then you can split your 64-bit values into two 32-bit values each.
#include <stdint.h>

#define BYTE_T uint8_t
#define TWOBYTE_T uint16_t
#define LOWBYTE(x)  ((BYTE_T)((x) & 0xFF))                                     /* low 8 bits of a 16-bit value */
#define HIGHBYTE(x) ((BYTE_T)(((TWOBYTE_T)(x)) >> 8))                          /* high 8 bits of a 16-bit value */
#define BYTE_COMBINE(h, l) ((TWOBYTE_T)(((TWOBYTE_T)(h) << 8) | (BYTE_T)(l)))  /* reassemble the 16-bit value */
I don't know if this is helpful or not, since it doesn't actually answer your original question, but at least you could store your values this way even if your platform only supports 32-bit ints.
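Adapted to the 64-to-32-bit case described above, a hedged sketch (the helper names are made up for illustration):
#include <cstdint>
#include <iostream>

// Split a 64-bit value into two 32-bit halves and put it back together.
std::uint32_t low_half(std::uint64_t x)  { return static_cast<std::uint32_t>(x & 0xFFFFFFFFu); }
std::uint32_t high_half(std::uint64_t x) { return static_cast<std::uint32_t>(x >> 32); }
std::uint64_t combine(std::uint32_t hi, std::uint32_t lo) {
    return (static_cast<std::uint64_t>(hi) << 32) | lo;
}

int main() {
    std::uint64_t big = 6800000000ULL;
    std::uint32_t parts[2] = { high_half(big), low_half(big) };   // storable as two 32-bit ints
    std::cout << combine(parts[0], parts[1]) << "\n";             // prints 6800000000
    return 0;
}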
Here is an idea to actually store values larger than MAX_INT in an int. It is based on the condition that there is only a small, known number of possible values.
You could write a compression method which computes something akin to a 2-byte hash. The hashes would have to have a bijective (1:1) relation to the known set of possible values. That way you would actually store the value (in compressed form) in the int, and not in a string as before, and thus expand the range of possible values at the cost of not being able to represent every value within that range.
The hashing algorithm would depend on the set of possible values. As a simple example let's assume that the possible values are 2^0, 2^1, 2^2... 2^32767. The obvious hash algorithm is to store the exponent in the int. A stored value of 4 would represent the value 16, 5 would represent 32, 1000 would represent a number close to 10^301 etc. One can see that one can "store" extraordinarily large numbers in a 16 bit int ;-). Less regular sets would require more complicated algorithms, of course.
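A toy sketch of that power-of-two example (the function names are invented for illustration; the encoding only covers exactly those known values, which is the whole point):
#include <cmath>
#include <iostream>

// Encode 2^n by storing n itself; decode by recomputing the power.
// Only works because the set of possible values is known in advance.
int encode_power_of_two(double value) { return static_cast<int>(std::log2(value)); }
double decode(int exponent) { return std::pow(2.0, exponent); }

int main() {
    int stored = encode_power_of_two(1024.0);                 // stores 10, not 1024
    std::cout << stored << " -> " << decode(stored) << "\n";  // 10 -> 1024
    return 0;
}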

Store a digit larger than 9 in array

I am wondering if it is possible to store a digit larger than 9 in an array. For example, is
Int myarray[0] = 1234567898
possible? From what I know about arrays, they can only be integers, for which the highest number will be a 9-digit number.
Is it possible to change the data type or have a larger digit? If not, what are other ways to do it?
It is quite possible. C++11 adds support for long long, which is typically 64-bit (much larger) and can store up to 19 digits; many compilers supported it before this as an extension. You can store arbitrarily large numbers in an array provided the array is large enough and you have the requisite mathematics. However, it's not pleasant, and an easier bet is to download a library to do it for you. These are known as bignum or arbitrary-size integer libraries, and they use arrays to store integers of any size your machine can handle.
It's still not possible to express a literal larger than what long long can hold, though.
A 32-bit int has a max value of 2147483647 - that is 10 digits. If you need values higher than that, you need to use a 64-bit int (long long, __int64, int64_t, etc. depending on your compiler).
I am afraid that your idea that ints can only represent a single decimal digit is incorrect. In addition, your idea that arrays can only hold ints is incorrect. An int is usually the computer's most efficient integral type. On most machines this is 16, 32 or 64 bits of binary data, so it can go up to 2^16 (about 65 thousand), 2^32 (about 4 billion), or 2^64 (very big). It is most likely 32 bits on your machine, so numbers up to just over 4 billion (unsigned) can be stored. If you need more than that, types such as long and long long are available.
Finally, arrays can hold any type, you just declare them as:
type myarray[arraysize];
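So, for the question's ten-digit values, a minimal sketch might look like this:
#include <iostream>

int main() {
    long long myarray[3] = { 1234567898LL, 9999999999LL, 6800000000LL };  // 10-digit values fit easily
    for (int i = 0; i < 3; ++i)
        std::cout << myarray[i] << "\n";
    return 0;
}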

Long Vs. Int C/C++ - What's The Point?

As I've learned recently, a long in C/C++ is the same length as an int. To put it simply, why? It seems almost pointless to even include the datatype in the language. Does it have any uses specific to it that an int doesn't have? I know we can declare a 64-bit int like so:
long long x = 0;
But why does the language choose to do it this way, rather than just making a long well...longer than an int? Other languages such as C# do this, so why not C/C++?
When writing in C or C++, every datatype is architecture- and compiler-specific. On one system int is 32 bits, but you can find ones where it is 16 or 64; it's not defined by the language, so it's up to the compiler.
As for long and int, it comes from the days when the standard int was 16 bits and long was a 32-bit integer - and it indeed was longer than int.
The specific guarantees are as follows:
char is at least 8 bits (1 byte by definition, however many bits it is)
short is at least 16 bits
int is at least 16 bits
long is at least 32 bits
long long (in versions of the language that support it) is at least 64 bits
Each type in the above list is at least as wide as the previous type (but may well be the same).
Thus it makes sense to use long if you need a type that's at least 32 bits, int if you need a type that's reasonably fast and at least 16 bits.
Actually, at least in C, these lower bounds are expressed in terms of ranges, not sizes. For example, the language requires that INT_MIN <= -32767, and INT_MAX >= +32767. The 16-bit requirement follows from this and from the requirement that integers are represented in binary.
C99 adds <stdint.h> and <inttypes.h>, which define types such as uint32_t, int_least32_t, and int_fast16_t; these are typedefs, usually defined as aliases for the predefined types.
(There isn't necessarily a direct relationship between size and range. An implementation could make int 32 bits, but with a range of only, say, -2^23 .. +2^23-1, with the other 8 bits (called padding bits) not contributing to the value. It's theoretically possible (but practically highly unlikely) that int could be larger than long, as long as long has at least as wide a range as int. In practice, few modern systems use padding bits, or even representations other than two's complement, but the standard still permits such oddities. You're more likely to encounter exotic features in embedded systems.)
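A small sketch of those typedefs from the C++ side (<cstdint> mirrors C's <stdint.h>); the printed sizes are implementation-specific, of course:
#include <climits>
#include <cstdint>
#include <iostream>

int main() {
    std::cout << "uint32_t:      " << sizeof(std::uint32_t) * CHAR_BIT << " bits\n";       // exactly 32
    std::cout << "int_least32_t: " << sizeof(std::int_least32_t) * CHAR_BIT << " bits\n";  // at least 32
    std::cout << "int_fast16_t:  " << sizeof(std::int_fast16_t) * CHAR_BIT << " bits\n";   // often wider than 16
    return 0;
}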
long is not the same length as an int. According to the specification, long is at least as large as int. For example, on Linux x86_64 with GCC, sizeof(long) = 8, and sizeof(int) = 4.
long is not the same size as int, it is at least the same size as int. To quote the C++03 standard (3.9.1-2):
There are four signed integer types: “signed char”, “short int”,
“int”, and “long int.” In this list, each type provides at least as
much storage as those preceding it in the list. Plain ints have the
natural size suggested by the architecture of the execution
environment; the other signed integer types are provided to meet special needs.
My interpretation of this is "just use int, but if for some reason that doesn't fit your needs and you are lucky to find another integral type that's better suited, be our guest and use that one instead". One way that long might be better is if you're on an architecture where it is... longer.
I was looking for something completely unrelated and stumbled across this and needed to answer. Yeah, this is old, so for people who surf on in later...
Frankly, I think all the answers on here are incomplete.
The size of a long is the number of bits your processor can operate on at one time. It's also called a "word". A "half-word" is a short. A "doubleword" is a long long and is twice as large as a long (and originally was only implemented by vendors, not standard), and even bigger than a long long is a "quadword", which is twice the size of a long long but has no formal name (and is not really standard).
Now, where does the int come in? In part registers on your processor, and in part your OS. Your registers define the native sizes the CPU handles which in turn define the size of things like the short and long. Processors are also designed with a data size that is the most efficient size for it to operate on. That should be an int.
On today's 64-bit machines you'd assume, since a long is a word and a word on a 64-bit machine is 64 bits, that a long would be 64 bits and an int whatever the processor is designed to handle, but it might not be. Why? Your OS has chosen a data model and defined these data sizes for you (pretty much by how it's built). Ultimately, if you're on Windows (using Win64) it's 32 bits for both long and int. Solaris and Linux use different definitions (long is 64 bits). These definitions are called things like ILP64, LP64, and LLP64. Windows uses LLP64, and Solaris and Linux use LP64:
Model       ILP64   LP64   LLP64
int           64      32      32
long          64      64      32
pointer       64      64      64
long long     64      64      64
Where, e.g., ILP means int-long-pointer, and LLP means long-long-pointer.
To get around this, most compilers seem to support setting the size of an integer directly with types like int32 or int64.
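To see which model a given toolchain uses, a quick sketch along these lines can be compiled on each platform (assuming 8-bit bytes):
#include <iostream>

int main() {
    // Compare the output against the ILP64 / LP64 / LLP64 table above.
    std::cout << "int:       " << sizeof(int) * 8 << " bits\n";
    std::cout << "long:      " << sizeof(long) * 8 << " bits\n";
    std::cout << "pointer:   " << sizeof(void*) * 8 << " bits\n";
    std::cout << "long long: " << sizeof(long long) * 8 << " bits\n";
    return 0;
}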
