I am wondering if it is possible to store a number larger than 9 digits in an array. For example:

int myarray[1] = { 1234567898 };

From what I know about arrays, they can only hold ints, so the largest value would be a 9-digit number. Is it possible to change the data type or store a larger number? If not, what are other ways to do it?
It is quite possible. C++11 adds support for long long, which is typically 64 bits and can store up to 19 decimal digits, and many compilers supported it before that as an extension. Beyond that, you can store arbitrarily large numbers in an array provided the array is large enough and you have the requisite mathematics. However, it's not pleasant, and an easier bet is to download a library to do it for you. These are known as bignum or arbitrary-precision integer libraries, and they use arrays to store integers of any size your machine can handle.
It's still not possible to express a literal larger than what long long (or unsigned long long) can hold, though.
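To make that concrete, here is a minimal sketch using only the standard library: a long long array holds values well past 9 digits, and the LL suffix keeps the literals themselves legal.

    #include <iostream>

    int main() {
        // long long is guaranteed at least 64 bits, so 10- and even
        // 19-digit values fit; the LL suffix keeps the literals legal.
        long long myarray[2] = { 1234567898LL, 9223372036854775807LL };
        for (long long v : myarray)
            std::cout << v << '\n';   // prints both values exactly
    }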
A 32-bit int has a max value of 2147483647 - that is 10 digits. If you need values higher than that, you need to use a 64-bit type (long long, __int64, int64_t, etc. depending on your compiler).
I am afraid your idea that ints can only hold numbers of at most 9 digits is incorrect, as is the idea that arrays can only hold ints. An int is usually the computer's most efficient integral type. On most machines it is 16, 32 or 64 bits of binary data, so it can hold roughly 2^16 (65 thousand), 2^32 (4 billion), or 2^64 (very big) distinct values. It is most likely 32 bits on your machine, so as a signed type it can store numbers up to just over 2 billion. If you need more than that, types such as long and long long are available.
Finally, arrays can hold any type; you just declare them as:
type myarray[arraysize];
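For instance (a small illustrative sketch; the names are made up):

    #include <string>

    long long bignums[10];     // ten 64-bit integers each
    double readings[256];      // arrays are not limited to int
    std::string words[4];      // class types work too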
Related
How do I store big numbers (10 digits long) as values for keys in an unordered_map in C++? I get this warning: "warning: overflow in implicit constant conversion [-Woverflow]". All the values for the corresponding keys are printed different from what they were initialised to.
On typical modern hardware, int has 32 bits, which allows, in two's complement, for values in the range [-2,147,483,648; 2,147,483,647]. If your values don't fit into this range, you need a larger data type.
Be aware, though, that the standard only guarantees the much smaller range [-32767; 32767], and there actually exists hardware using only 16 bits for int even today (if you wonder why not -32768: well, the standard covers architectures based on one's complement or sign magnitude as well...).
Typically, long has a 32-bit range as well (which is the standard's minimum for it), though e.g. on 64-bit Linux it has 64 bits; long long is guaranteed to be at least 64 bits.
Quite a mess, as you see... If you need a guaranteed range, the best thing you can do is use the data types from the <cstdint> header, like int64_t (or – if you don't deal with negative values – preferably uint64_t).
There are quite a number of other useful types there, e.g. uint_least8_t (the smallest data type with at least 8 bits – for portable code that must cover platforms which might not provide uint8_t) or uint_fast8_t (a data type with at least 8 bits that is fastest to access on the given platform; pretty often, this is larger than uint8_t). Got curious? Why not read a bit further?
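As a minimal sketch of the fix for the unordered_map question above (assuming the keys are small and only the values overflow):

    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    int main() {
        // uint64_t values hold 10-digit numbers with room to spare;
        // the ULL suffix keeps the literal from overflowing as an int.
        std::unordered_map<int, std::uint64_t> m;
        m[1] = 9876543210ULL;
        std::cout << m[1] << '\n';   // prints 9876543210, not garbage
    }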
In C++, int can store this much:
int (4 bytes): -2147483648 to 2147483647
long long int (8 bytes): -9223372036854775808 to +9223372036854775807
Or you can just go for unsigned long long int, which can store from 0 to 18446744073709551615.
For a larger number than that, you can try the Boost libraries, or make a string-based function that can store the number and convert it for math operations when you need it.
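Here is a minimal sketch of the Boost route, assuming Boost.Multiprecision is available; cpp_int is its arbitrary-precision integer type:

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main() {
        namespace mp = boost::multiprecision;
        // cpp_int grows as needed, so 100! fits with no overflow.
        mp::cpp_int factorial = 1;
        for (int i = 2; i <= 100; ++i)
            factorial *= i;
        std::cout << factorial << '\n';   // a 158-digit number
    }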
I'm using cout to print numbers to the console. I am also storing values of up to 13+ billion and doing computations on them. What data type should I use?
When I do the following:
int a = 6800000000;
cout << a;
It prints -1789934592.
thanks.
long long can hold up to 9223372036854775807. Use something like gmp if you need larger.
Use int64_t to guarantee you won't overflow. It is available from stdint.h.
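A minimal sketch of that suggestion (in C++ the header is <cstdint>; the LL suffix makes the literal itself 64-bit):

    #include <cstdint>
    #include <iostream>

    int main() {
        int64_t a = 6800000000LL;   // fits: well under 2^63 - 1
        std::cout << a << '\n';     // prints 6800000000, not -1789934592
    }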
Just a note that both int64_t and long long are included in C99 and in C++0x, but not in the current version of C++. As such, using either does put your code at some risk of being non-portable. Realistically, however, that risk is probably already pretty low -- to the point that when/if you port your code, there are likely to be much bigger problems.
If, however, you really want to assure against that possibility, you might consider using a double precision floating point. Contrary to popular belief, floating point types can represent integers exactly up to a certain limit -- that limit being set (in essence) by the size of the mantissa in the F.P. type. The typical implementation of a double has a 53-bit mantissa, so you can represent 53-bit integers with absolute precision. That supports numbers up to 9,007,199,254,740,992 (which is substantially more than 13 of either of the popular meanings of "billion").
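To illustrate (a minimal sketch): 13 billion is far below 2^53, so a double round-trips it exactly:

    #include <iostream>

    int main() {
        double a = 13000000000.0;     // well below 2^53, so exactly representable
        std::cout.precision(17);
        std::cout << a + 1.0 << '\n'; // prints 13000000001: no rounding yet
    }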
Your data type (int) is too small to hold such large numbers. You should use a larger data type or one of the fixed-size data types given in the other answer (though you should really use uint64_t if you're not using negative numbers).
It's a good idea to understand the range limits of different-sized types.
A 32-bit type (on most 32-bit platforms, both int and long are 32 bits) has the following ranges:
signed: -2,147,483,648 to 2,147,483,647
unsigned: 0 to 4,294,967,295
While 64-bit types (typically long long is 64 bits; on most 64-bit Unix platforms, long is also 64) have the following ranges:
signed: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
unsigned: 0 to 18,446,744,073,709,551,615
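If you want to check the limits on your own platform, here is a minimal sketch using only the standard <limits> header:

    #include <iostream>
    #include <limits>

    int main() {
        std::cout << std::numeric_limits<int>::max() << '\n';        // e.g. 2147483647
        std::cout << std::numeric_limits<long long>::max() << '\n';  // 9223372036854775807
        std::cout << std::numeric_limits<unsigned long long>::max() << '\n';
    }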
Just use double in the declaration statement.
You could use a long long int:

long long int a;

Or if it's always going to be positive, an unsigned long long int:

unsigned long long int a;

(Plain long int is only guaranteed 32 bits, so it may not hold 13 billion on every platform.)
See: http://forums.guru3d.com/showthread.php?t=131678
unsigned long long can be used.
I have seen the link What does it mean by word size in computer?. It defines what word size is.
I am trying to represent a very long string in bits, where each character is represented by 4 bits, and to save it in a long or int array so that I can extract my string when required.
I can save the bits either in an int array or a long array.
If I use a long array (8 bytes per element) I will be able to save 8*8=64 bits, i.e. 16 characters, per element.
But if I use an int array (4 bytes per element) I will be able to save only 4*8=32 bits, i.e. 8 characters, per element.
Now, if I am given my word size = 32, is it the case that I should use int only and not long?
To answer your direct question: There is no guaranteed relationship between the natural word-size of the processor and the C and C++ types int or long. Yes, quite often int will be the same as the size of a register in the processor, but most 64-bit processors do not follow this rule, as it makes data unnecessarily large. On the other hand, an 8-bit processor would have a register size of 8 bits, but int according to the C and C++ standards needs to be at least 16 bits in size, so the compiler would have to use more than one register to represent one integer [in some fashion].
In general, if you want to KNOW how many bits or bytes some type is, it's best NOT to rely on int, long, size_t or void *, since they are all likely to differ between processor architectures, or even between compilers on the same architecture. An int and a long may be the same size or different sizes; the only rules the standard gives are minimums, e.g. that long is at least 32 bits.
So, to have control of the number of bits, use #include <cstdint> (or in C, stdint.h) and use types such as uint16_t or uint32_t - then you KNOW that they will hold a given number of bits.
On a processor that has a 36-bit word size, a type like uint32_t will most likely not exist, since there is no type there that holds exactly 32 bits. Alternatively, the compiler may add extra instructions to behave as if it were a 32-bit type (in other words, sign-extending if necessary and masking off the top bits as needed).
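As a minimal sketch of the packing itself (the pack/unpack helpers are made-up names, and the 4-bit codes are assumed to be already computed), fixed-width uint32_t makes the arithmetic independent of what int or long happen to be:

    #include <cstdint>
    #include <vector>

    // Pack one 4-bit code per nibble, 8 codes per uint32_t word.
    // Assumes words was sized in advance and zero-initialized.
    void pack(std::vector<std::uint32_t>& words, std::size_t i, std::uint8_t code) {
        words[i / 8] |= std::uint32_t(code & 0xF) << (4 * (i % 8));
    }

    std::uint8_t unpack(const std::vector<std::uint32_t>& words, std::size_t i) {
        return (words[i / 8] >> (4 * (i % 8))) & 0xF;
    }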
I'm planning on creating a number class. The purpose is to hold numbers of any size without worrying about overflow (as with int or long), but at the same time without USING too much memory. For example:
If I have data that only really needs 1-10, I don't need an int (4 bytes), a short (2 bytes) or even a char (1 byte). So why allocate so much?
If I want to hold data that requires an extremely large amount (only integers in this scenario), like past the billions, I cannot.
My goal is to create a number class that can handle this problem the way strings do, sizing to fit the number. But before I begin, I was wondering...
bitset is a template class that lets me manipulate bits in C++, which is quite useful, but is it efficient? bitset<1> would define 1 bit, but do I want to make an array of them? C++ can allocate a byte at minimum, so does bitset<1> allocate a whole byte and provide 1 bit OF that byte? If that's the case I'd rather create my number class with unsigned char*'s.
unsigned char, or BYTE, holds 8 bits; anything from 0 to 255 would only need one, more would require two, then 3 - it would simply keep expanding when needed in byte intervals rather than bit intervals.
Which do you think is MORE efficient? The bits would be, if bitset actually allocated 1 bit, but I have a feeling that isn't even possible. In fact, it may actually be more efficient to allocate in bytes up to 4 bytes (32 bits); on a 32-bit processor, 32-bit allocation is most efficient, so I would use 4 bytes at a time from then on out.
Basically my question is: what are your thoughts? How should I go about this implementation - bitset<1>, or unsigned char (or BYTE)?
Optimizing for bits is silly unless your target architecture is a DigiComp-1. Reading individual bits is always slower than reading ints - 4 bits isn't more efficient than 8.
Use unsigned char if you want to store it as a decimal number. This will be the most efficient.
Or, you could just use GMP.
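A minimal sketch of the unsigned char approach (illustrative only; a real number class would add arithmetic on top): store the value base-256, least significant byte first, growing a byte at a time:

    #include <vector>

    // Store an unsigned value base-256, least significant byte first;
    // the vector grows only as large as the value needs.
    std::vector<unsigned char> to_bytes(unsigned long long n) {
        std::vector<unsigned char> digits;
        do {
            digits.push_back(n & 0xFF);  // low byte
            n >>= 8;
        } while (n != 0);
        return digits;                   // e.g. 7 -> {7}; 300 -> {44, 1}
    }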
The bitset template requires a compile-time constant integer for its template argument. This can be a drawback when you have to determine the max bit count at run-time. Another thing is that most compilers / libraries use unsigned int or unsigned long long to store the bits, for faster memory access. If your application will run in an environment with limited memory, you should create a new class like bitset or use a different library.
While it won't directly help you with arithmetic on giant numbers, if this kind of space-saving is your goal then you might find my Nstate library useful (boost license):
http://hostilefork.com/nstate/
For instance: if you have a value that can be between 0 and 2... then, so long as you are going to be storing a bunch of these in an array, you can exploit the "wasted" space of the unused 4th state (3) to pack more values in. In that particular case, you can get 20 tristates into a 32-bit word instead of the 16 you would get with 2 bits per tristate (3^20 = 3,486,784,401 still fits under 2^32 = 4,294,967,296).
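A minimal sketch of that base-3 packing idea (illustrative only, not Nstate's actual API):

    #include <cstdint>

    // Digit i (0..19) of a base-3 encoded 32-bit word; 3^20 < 2^32,
    // so 20 tristates fit where plain 2-bit fields would only give 16.
    std::uint32_t get_tristate(std::uint32_t word, int i) {
        std::uint32_t pow3 = 1;
        for (int k = 0; k < i; ++k) pow3 *= 3;
        return (word / pow3) % 3;
    }

    void set_tristate(std::uint32_t& word, int i, std::uint32_t value /* 0..2 */) {
        std::uint32_t pow3 = 1;
        for (int k = 0; k < i; ++k) pow3 *= 3;
        // Remove the old digit, then write the new one.
        word = word - ((word / pow3) % 3) * pow3 + value * pow3;
    }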