Converting string to int fails - c++

I'm trying to convert a string to an int with stringstream. The code below works, but if I use a number larger than 1234567890, such as 12345678901, it returns 0. I don't know how to fix that; please help me out.
std::string number = "1234567890";
int Result; // number which will contain the result
std::stringstream convert(number); // stringstream used for the conversion, initialized with the contents of number
if (!(convert >> Result)) // give the value to Result using the characters in the string
    Result = 0;
printf("%d\n", Result);

The maximum number an int can contain is slightly more than 2 billion (assuming the ubiquitous 32-bit int).
Your number just doesn't fit in an int!

The largest unsigned int (on a typical 32-bit platform) is 2^32 - 1 (4294967295), and your input is larger than that, so the extraction gives up. You can detect the error by checking the stream's state, e.g. failbit or badbit:
int Result;
std::stringstream convert(number.c_str());
convert >> Result;
if (convert.fail()) {
    std::cout << "Bad things happened";
}

If you're on a 32-bit system, or an LP64 64-bit system, then int is 32-bit, so the largest number you can store is approximately 2 billion. Try using a long or long long instead, and change "%d" to "%ld" or "%lld" accordingly.
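For illustration, a minimal sketch of that change (assuming the same kind of input as in the question), reading into a long long and printing with %lld:

#include <cstdio>
#include <sstream>
#include <string>

int main()
{
    std::string number = "12345678901"; // too large for a 32-bit int
    std::stringstream convert(number);

    long long result = 0;
    if (!(convert >> result)) // extraction fails if the value doesn't fit a long long
        result = 0;

    std::printf("%lld\n", result); // %lld instead of %d
    return 0;
}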

The (usual) maximum value for a signed int is 2,147,483,647, as it is (usually) a 32-bit integer, so the conversion fails for numbers that are bigger.
If you replace int Result; with long Result; it should work for even bigger numbers, but there is still a limit. You can extend that limit by a factor of 2 by using unsigned integer types, but only if you don't need negative numbers.

Hm, lots of disinformation in the existing four or five answers.
An int is minimum 16 bits, and with common desktop system compilers it’s usually 32 bits (in all Windows versions) or 64 bits. With 32 bits it has at most 2^32 distinct values, which, setting K = 2^10 = 1024, is 4·K^3, i.e. roughly 4 billion. Your nearest calculator or Python prompt can tell you the exact value.
A long is minimum 32 bits, but that doesn’t help you for the current problem, because in all extant Windows variants, including 64-bit Windows, long is 32 bits…
So, for better range than int, use long long. It’s minimum 64 bits, and in practice, as of 2012, it’s 64 bits with all compilers. Or, just use a double, which, although not an integer type, with the most common implementation (IEEE 754 64-bit) can represent integer values exactly up to 2^53 (53 bits of significand precision).
Anyway, remember to check the stream for conversion failure, which you can do via s.fail() or simply !s (the two are equivalent; more precisely, the stream’s explicit conversion to bool returns !fail()).
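As a small sketch of that failure check (the input string here is deliberately invalid so the check triggers):

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream s("not a number");
    long long value = 0;

    if (!(s >> value)) // same as checking s.fail() after the extraction
        std::cout << "conversion failure\n";
    return 0;
}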

Related

c++ portable conversion of long to double

I need to accurately convert a long representing bits to a double, and my solution should be portable to different architectures (being standard across compilers such as g++ and clang++ would be great too).
I'm writing a fast approximation for computing the exp function, as suggested in the answers to this question.
double fast_exp(double val)
{
    double result = 0;
    unsigned long temp = (unsigned long)(1512775 * val + 1072632447);
    /* to convert from long bits to double,
       but must check if they have the same size... */
    temp = temp << 32;
    memcpy(&result, &temp, sizeof(temp));
    return result;
}
and I'm using the suggestion found here to convert the long into a double. The issue I'm facing is that whereas I got the following results for int values in [-5, 5] under OS X with clang++ and libc++:
0.00675211846828461
0.0183005779981613
0.0504353642463684
0.132078289985657
0.37483024597168
0.971007823944092
2.7694206237793
7.30961990356445
20.3215942382812
54.8094177246094
147.902587890625
I always get 0 under Ubuntu with clang++ (3.4, same version) and libstdc++. The compiler there even tells me (through a warning) that the shift operation can be problematic, since the long has a size equal to or less than the shift amount (probably indicating that long and double do not have the same size there).
Am I doing something wrong, and/or is there a better way to solve the problem that is as portable as possible?
First off, using "long" isn't portable. Use the fixed-width integer types found in <cstdint> (stdint.h); this removes the need to check sizes, since you'll know exactly how wide the integer is.
The reason you are getting a warning is that left-shifting a 32-bit integer by 32 bits is undefined behavior. See: What's bad about shifting a 32-bit variable 32 bits?
Also see this answer: Is it safe to assume sizeof(double) >= sizeof(void*)? It should be safe to assume that a double is 64 bits, and then you can use a uint64_t to hold the raw bits. No need to check sizes, and everything is portable.
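For example, a sketch of the function from the question rewritten with a fixed-width 64-bit type. This assumes the usual IEEE 754 64-bit double and keeps the magic constants from the question; it also assumes the intermediate value is non-negative, which holds for the inputs tested there:

#include <cstdint>
#include <cstring>

double fast_exp(double val)
{
    // Compute the high 32 bits of the double's bit pattern, then shift them
    // into place; uint64_t is guaranteed to be 64 bits, so << 32 is well defined.
    std::uint64_t bits = (std::uint64_t)(1512775 * val + 1072632447) << 32;

    double result;
    std::memcpy(&result, &bits, sizeof(result)); // reinterpret the bits as a double
    return result;
}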

Differences in assignment of integer variable

I just asked this question and it got me thinking whether there is any reason
1) why you would assign an int variable using hexadecimal or octal instead of decimal, and
2) what the differences are between the different ways of assignment:
int a = 0x28ff1c;   // hexadecimal
int a = 10;         // decimal (the most commonly used way)
int a = 012177434;  // octal
You may have some constants that are more easily understood when written in hexadecimal.
Bitflags, for example, in hexadecimal are compact and easily (for some values of easily) understood, since there's a direct correspondence 4 binary digits => 1 hex digit - for this reason, in general the hexadecimal representation is useful when you are doing bitwise operations (e.g. masking).
In a similar fashion, in several cases integers may be internally divided in some fields, for example often colors are represented as a 32 bit integer that goes like this: 0xAARRGGBB (or 0xAABBGGRR); also, IP addresses: each piece of IP in the dotted notation is two hexadecimal digits in the "32-bit integer" notation (usually in such cases unsigned integers are used to avoid messing with the sign bit).
In some code I'm working on at the moment, for each pixel in an image I have a single byte to use to store "accessory information"; since I have to store some flags and a small number, I use the least significant 4 bits to store the flags, the 4 most significant ones to store the number. Using hexadecimal notations it's immediate to write the appropriate masks and shifts: byte & 0x0f gives me the 4 LS bits for the flags, (byte & 0xf0)>>4 gives me the 4 MS bits (re-shifted in place).
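A minimal sketch of that nibble packing (the flag and number values here are made up purely for illustration):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t byte = 0x5A;                  // high nibble: number = 5, low nibble: flags = 0xA

    std::uint8_t flags  =  byte & 0x0f;        // 4 least significant bits
    std::uint8_t number = (byte & 0xf0) >> 4;  // 4 most significant bits, shifted back in place

    std::cout << "flags=" << int(flags) << " number=" << int(number) << '\n';
    return 0;
}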
I've never seen octal used for anything besides IOCCC and UNIX permissions masks (although in the last case they are actually useful, as you probably know if you ever used chmod); probably their inclusion in the language comes from the fact that C was initially developed as the language to write UNIX.
By default, integer literals are of type int, while hexadecimal literals are of type unsigned int or larger if unsigned int isn't large enough to hold the specified value; so, when assigning a hexadecimal literal to an int there's an implicit conversion (although it won't impact performance, since any decent compiler performs the conversion at compile time). Sorry, brainfart: that's not quite right. I checked the standard just now, and it goes like this:
decimal literals, without the u suffix, are always signed; their type is the smallest that can represent them between int, long int, long long int;
octal and hexadecimal literals without suffix, instead, may also be of unsigned type; their actual type is the smallest one that can represent the value between int, unsigned int, long int, unsigned long int, long long int, unsigned long long int.
(C++11, §2.14.2, ¶2 and Table 6)
The difference may be relevant for overload resolution (see the example below), but it's not particularly important when you are just assigning a literal to a variable. Still, keep in mind that you may have valid integer constants that are larger than an int, i.e. assigning one to an int will not preserve the value; anyhow, any decent compiler should be able to warn you in these cases.
Let's say that on our platform integers are in 2's complement representation, int is 16 bit wide and long is 32 bit wide; let's say we have an overloaded function like this:
void a(unsigned int i)
{
    std::cout << "unsigned";
}

void a(int i)
{
    std::cout << "signed";
}
Then, calling a(1) and a(0x1) will produce the same result (signed), but a(32768) will print signed and a(0x8000) (the same value written in hexadecimal) will print unsigned.
It matters from a readability standpoint - which one you choose expresses your intention.
If you're treating the variable as an integral type, you know, like 2+2=4, you use the decimal representation. It's intuitive and straightforward.
If you're using it as a bitmask, you can use hex, octal, or even binary. For example, you'll know
int a = 0xFF;
will have the last 8 bits set to 1. You'll know that
int a = 0xF0;
is (...)11110000, but you couldn't directly say the same thing about
int a = 240;
although they are equivalent. It just depends on what you use the numbers for.
Well, the truth is it doesn't matter whether you write it in decimal, octal, or hexadecimal; it's just a representation. For your information, numbers in computers are stored in binary (so they are just 0s and 1s), which is yet another representation you could use. So it's just a matter of representation and readability.
NOTE:
In some C++ debuggers (in my experience), a number I assigned using a decimal representation is shown as hexadecimal.
It's similar to assigning an integer in any of these ways:
int a = int(5);
int b(6);
int c = 3;
It's all about preference; when it comes down to it, you're doing the same thing. Some might choose octal or hex to match the kind of data their program manipulates.

char* to double and back to char* again ( 64 bit application)

I am trying to convert a char* to a double and back to a char* again. The following code works fine if the application is 32-bit, but doesn't work for a 64-bit application. The problem occurs when converting back to char* from the int. For example, if hello is 0x000000013fcf7888, then converted is 0x000000003fcf7888; only the last 32 bits are right.
#include <iostream>
#include <stdlib.h>
#include <tchar.h>
using namespace std;

int _tmain(int argc, _TCHAR* argv[]){
    char* hello = "hello";
    unsigned int hello_to_int = (unsigned int)hello;
    double hello_to_double = (double)hello_to_int;
    cout << hello << endl;
    cout << hello_to_int << "\n" << hello_to_double << endl;
    unsigned int converted_int = (unsigned int)hello_to_double;
    char* converted = reinterpret_cast<char*>(converted_int);
    cout << converted_int << "\n" << converted << endl;
    getchar();
    return 0;
}
On 64-bit Windows pointers are 64-bit while int is 32-bit. This is why you're losing data in the upper 32 bits when casting. Instead of int, use long long to hold the intermediate result.
char* hello = "hello";
unsigned long long hello_to_int = (unsigned long long)hello;
Make similar changes for the reverse conversion. But this is not guaranteed to make the conversions function correctly because a double can easily represent the entire 32-bit integer range without loss of precision but the same is not true for a 64-bit integer.
Also, this isn't going to work
unsigned int converted_int = (unsigned int)hello_to_double;
That conversion will simply truncate any digits after the decimal point in the floating-point representation. The problem exists even if you change the data type to unsigned long long. You'd need to reinterpret the bits (e.g. reinterpret_cast<unsigned long long&> on the double, or a memcpy) rather than convert the value to make it work.
Even after all that you may still run into trouble depending on the value of the pointer. The conversion to double may cause the value to be a signalling NaN, for instance, in which case your code might throw an exception.
Simple answer is, unless you're trying this out for fun, don't do conversions like these.
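If you only need the pointer-to-integer round trip, a sketch of the well-behaved version using uintptr_t follows, plus a bit-copy through a double for comparison. This assumes a 64-bit IEEE 754 double, and the NaN caveat above still applies:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    const char* hello = "hello";

    // Pointer -> integer -> pointer: uintptr_t is wide enough for any object pointer.
    std::uintptr_t as_int = reinterpret_cast<std::uintptr_t>(hello);
    const char* back = reinterpret_cast<const char*>(as_int);
    std::cout << back << '\n';                                    // prints "hello"

    // Carrying the bits through a double: copy them, don't convert the value.
    double carrier = 0;
    std::memcpy(&carrier, &as_int, sizeof(as_int));

    std::uintptr_t restored = 0;
    std::memcpy(&restored, &carrier, sizeof(restored));
    std::cout << reinterpret_cast<const char*>(restored) << '\n'; // "hello" again
    return 0;
}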
You can't cast a char* to int on 64-bit Windows because an int is 32 bits, while a char* is 64 bits because it's a pointer. Since a double is always 64 bits, you might be able to get away with casting between a double and char*.
A couple of issues with encoding any integer (specifically, a collection of bits) into a floating point value:
Conversions from 64-bit integers to doubles can be lossy. A double has 53 bits of actual precision, so integers above 2^53 will not necessarily be represented precisely.
If you decide to reinterpret the bits of a pointer as a double instead (via a union or reinterpret_cast) you will still have issues if you happen to encode a pointer as a set of bits that is not a valid double representation. Unless you can guarantee that the double value never gets written back by the FPU, the FPU can silently transform an invalid double into another invalid double (see NaN), i.e., a double value that represents the same value but has different bits. (See this for issues related to using floating point formats as bits.)
You can probably safely get away with encoding a 32-bit pointer in a double, as that will definitely fit within the 53-bit precision range.
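A tiny sketch of the first point: 2^53 + 1 is the first integer a double cannot hold exactly, so converting it to double and back loses the low bit.

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t big = (1ULL << 53) + 1;      // 9007199254740993
    double d = static_cast<double>(big);       // rounds to the nearest representable value
    std::cout << big << " -> " << static_cast<std::uint64_t>(d) << '\n';
    // Prints: 9007199254740993 -> 9007199254740992
    return 0;
}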
only the last 32 bits are right.
That's because an int in your platform is only 32 bits long. Note that reinterpret_cast only guarantees that you can convert a pointer to an int of sufficient size (not your case), and back.
If it works on any system, anywhere, just call yourself lucky and move on. Converting a pointer to an integer is one thing (as long as the integer is large enough, you can get away with it), but a double is a floating-point number - what you are doing simply doesn't make sense, because a double is NOT necessarily capable of representing any arbitrary number. A double has range and precision limitations, and limits on how it represents things. It can represent numbers across a wide range of values, but it can't represent EVERY number in that range.
Remember that a double has two components: the mantissa and the exponent. Together, these allow you to represent either very big or very small numbers, but the mantissa has limited number of bits. If you run out of bits in the mantissa, you're going to lose some bits in the number you are trying to represent.
Apparently you got away with it under certain circumstances, but you're asking it to do something it wasn't made for, and for which it is manifestly inappropriate.
Just don't do that - it's not supposed to work.
This is as expected.
Typically a char* is going to be 32 bits on a 32-bit system, 64 bits on a 64-bit system; double is typically 64 bits on both systems. (These sizes are typical, and probably correct for Windows; the language permits a lot more variations.)
Conversion from a pointer to a floating-point type is, as far as I know, undefined. That doesn't just mean that the result of the conversion is undefined; the behavior of a program that attempts to perform such a conversion is undefined. If you're lucky, the program will crash or fail to compile.
But you're converting from a pointer to an integer (which is permitted, but implementation-defined) and then from an integer to a double (which is permitted and meaningful for meaningful numeric values -- but converted pointer values are not numerically meaningful). You're losing information because not all of the 64 bits of a double are used to represent the magnitude of the number; typically 11 or so bits are used to represent the exponent.
What you're doing quite simply makes no sense.
What exactly are you trying to accomplish? Whatever it is, there's surely a better way to do it.

Problem when converting from decimal to binary and returning a bitset

I have a strange problem. I have created a simple function to convert from decimal to binary. The argument is an int value that represents the number in decimal, and the function returns a bitset that represents the binary number.
The problem is that the conversion works perfectly for numbers smaller than 10000000000000000000000000000000 in binary (2,147,483,648 in decimal), but when the number to be converted is larger, the conversion doesn't work properly. Where is the mistake?
Here I send you the function:
bitset<15000> Utilities::getDecToBin(int dec)
{
    bitset<15000> columnID;
    int x;
    for (x = 0; x < columnID.size(); x++)
    {
        columnID[x] = dec % 2;
        dec = dec / 2;
    }
    return columnID;
}
Thanks in advance for all your help! :D
The range of a 32-bit int is −2,147,483,648 to 2,147,483,647.
If by larger you mean 1073741825, then I can't see anything wrong.
If you mean adding an extra bit at the most significant position (i.e. 2147483648) then you might be running into signed/unsigned issues.
I see that you restrict your loop by the size of columnID. It would be a good idea to limit it by the size of dec in bits too, or stop when dec is 0.
You should use a long int, an array of long int or even a string as input to your method (depending on the source of your data and how you will call your function).
I am surprised it only works up to 30 bits; you should be able to manage 31, but for 32 bits you would need an unsigned int. If you used unsigned 64-bit integers you could manage a lot more than that, but 15,000-bit integers can only be handled through special classes.
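For what it's worth, a sketch of the same conversion taking a 64-bit unsigned value instead (it keeps the bitset<15000> from the question, but is written as a free function here):

#include <bitset>
#include <cstddef>
#include <cstdint>

std::bitset<15000> getDecToBin(std::uint64_t dec)
{
    std::bitset<15000> columnID;
    for (std::size_t x = 0; x < columnID.size() && dec != 0; ++x)
    {
        columnID[x] = (dec & 1) != 0; // least significant bit first
        dec /= 2;
    }
    return columnID;
}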

Why do I get a "constant too large" error?

I'm new to Windows development and I'm pretty confused.
When I compile this code with Visual C++ 2010, I get an error "constant too large." Why do I get this error, and how do I fix it?
Thanks!
int _tmain(int argc, _TCHAR* argv[])
{
    unsigned long long foo = 142385141589604466688ULL;
    return 0;
}
The digit sequence you're expressing would take about 67 bits -- maybe your "unsigned long long" type takes only (!) 64 bits, so your digit sequence won't fit in it, etc, etc.
If you regularly need to deal with integers that won't fit in 64 bits you might want to look at languages that smoothly support them, such as Python (maybe with gmpy;-). Or, give up on language support and go for suitable libraries, such as GMP and MPIR!-)
A long long is 64 bits and thus holds a maximum value of 2^63 - 1 (9223372036854775807) as a signed value, or 2^64 - 1 (18446744073709551615) as an unsigned value. Your value is bigger, hence it's a constant that's too large.
Pick a different data type to hold your value.
You get the error because your constant is too large.
From Wikipedia:
An unsigned long long's max value is at least 18,446,744,073,709,551,615
Here is the max value and your value:
18,446,744,073,709,551,615 // Max value
142,385,141,589,604,466,688 // Your value
See why your value is too long?
According to http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.100%29.aspx, the range of unsigned long long is 0 to 18,446,744,073,709,551,615.
142385141589604466688 > 18446744073709551615
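You can ask the implementation for the exact limit with std::numeric_limits; a tiny sketch:

#include <iostream>
#include <limits>

int main()
{
    // Prints 18446744073709551615 (2^64 - 1) on a platform with a 64-bit unsigned long long.
    std::cout << std::numeric_limits<unsigned long long>::max() << '\n';
    return 0;
}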
You have reached the limit of your hardware to represent integers directly.
It seems that anything beyond 64 bits (on your hardware) requires the integer to be simulated by software constructs. There are several projects out there that can help.
See BigInt
http://sourceforge.net/projects/cpp-bigint/
Note: Others have incorrectly stated that long long has a limit of 64 bits.
This is not accurate. The only limitations placed by the language are:
(Also note: currently C++ does not support long long (but C does); it is an extension provided by your compiler, coming in the next version of the standard.)
sizeof(long) <= sizeof(long long)
sizeof(long long) * CHAR_BIT >= 64 // Not defined explicitly, but deducible from
                                   // the values defined in limits.h
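For illustration, a sketch (assuming a C++11 compiler, where static_assert and long long are available) that checks those guarantees at compile time:

#include <climits>

static_assert(sizeof(long) <= sizeof(long long), "long long must be at least as wide as long");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long must be at least 64 bits");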
For more details See:
What is the difference between an int and a long in C++?