Unexpected output from to_string(int) - c++

I am trying to count the length of an int that may or may not have leading zeros, for instance 0100. I tried using to_string(), and it turned 0100 into the string "64"; I don't understand why. I am relatively new to C++ and think I may have fundamentally misunderstood how to_string() works.
I am compiling with C++11.

By starting your integer literal with 0, you actually wrote the number in octal (base 8): 100 in octal is 64 in decimal.
See cppreference's page on integer literals for more details.
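For illustration, here is a minimal sketch of both the octal interpretation and one way to keep the leading zeros (an int cannot remember them, so if they matter, keep the value as a string):
#include <iostream>
#include <string>

int main()
{
    int n = 0100;                            // leading 0 makes this an octal literal: 64 in decimal
    std::cout << std::to_string(n) << "\n";  // prints "64", exactly as observed in the question

    // If the leading zeros are significant, store the value as a string
    // in the first place and take its length directly.
    std::string s = "0100";
    std::cout << s.length() << "\n";         // prints 4
}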


Hexadecimal in [] operator

I found an article and saw this code:
// Capture vendor string
char vendor[0x20];
memset(vendor, 0, sizeof(vendor));
*reinterpret_cast<int*>(vendor) = data_[0][1];
*reinterpret_cast<int*>(vendor + 4) = data_[0][3];
*reinterpret_cast<int*>(vendor + 8) = data_[0][2];
This line: char vendor[0x20];
Why is hexadecimal used here, and may I use an octal value?
Why is hexadecimal used here
Because the author chose to use hexadecimal. As you can see, 0x20 is quite "round" in hexadecimal, as it has only one non-zero digit.
may I use an octal value?
Yes. Wherever you can use an integer literal, you can use any of the available base representations: binary (since C++14), decimal, octal and hexadecimal.
P.S. The example is technically broken in standard C++, because it fails to align the buffer, so it is not a good example to use for learning the language. It appears, though, that it is written specifically for x86 processors, which do handle unaligned accesses.
A correct way to write this would have been to use an array of integers, copy the values, and then reinterpret the result as characters when reading.
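A hedged sketch of that suggestion, assuming data_ holds the CPUID register values as in the snippet above (the function name and extern declaration are just for the example):
#include <cstring>

// Sketch only: data_ is assumed to be the CPUID result array from the original code.
extern int data_[4][4];

void capture_vendor(char (&vendor)[0x20])
{
    int regs[3] = { data_[0][1], data_[0][3], data_[0][2] };  // EBX, EDX, ECX order
    std::memset(vendor, 0, sizeof(vendor));
    std::memcpy(vendor, regs, sizeof(regs));  // byte-wise copy, no alignment assumptions
}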

Storing negative number in an unsigned int [closed]

I have access to a program which I'm running which SHOULD be guessing a very low number for certain things and outputting it (probably 0 or 1). However, 0.2% of the time, when it should be outputting 0, it outputs a number between 4,294,967,286 and 4,294,967,295 (the latter being the maximum value a 32-bit unsigned integer can hold).
What I GUESS is happening is that the function computes a value less than 0, i.e. somewhere from -1 to -9, and when it assigns that number to an unsigned int the value wraps around to the maximum, or close to the maximum, number.
I therefore assumed the program is written in C (I do not have access to the source code) and then tested in Visual Studio .NET 2012, in C, what would happen if I assigned a variety of negative numbers to an unsigned integer. Unfortunately, nothing seemed to happen: it still printed the number to the console as a negative integer. I'm wondering whether this is MSVS 2012 trying to be smart, or whether there is some other reason.
Anyway, am I correct in assuming that this is in fact what is happening, and the reason why the program outputs the maximum value of an unsigned int? Or are there other valid reasons why this is happening?
Edit: All I want to know is whether it's valid to assume that assigning a negative number to an unsigned integer can result in the integer being set to the maximum value, i.e. 4,294,967,295. If this is IMPOSSIBLE, then okay. I'm not looking for SPECIFICS on exactly why this is happening with this program, as I do not have access to the code; all I want to know is whether it's possible, and therefore a possible explanation for the results I am getting.
In C and C++, assigning -1 to an unsigned number will give you the maximum unsigned value.
This is guaranteed by the standard, and all compilers I know (even VC) implement this part correctly. Your C test probably has some other problem that keeps it from showing this result (impossible to say without seeing the code).
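A minimal sketch of the point, assuming a 32-bit int on a two's-complement platform (which covers VC++ and every mainstream compiler):
#include <cstdio>

int main()
{
    unsigned int u = -1;             // guaranteed wrap-around: u == UINT_MAX == 4294967295
    std::printf("%u\n", u);          // prints 4294967295

    // If the value is converted back to signed (or printed with a signed specifier),
    // the console shows -1 again, which may be why the VS test looked like nothing happened.
    int back = static_cast<int>(u);  // implementation-defined pre-C++20; -1 on two's complement
    std::printf("%d\n", back);       // prints -1
}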
You can think of negative numbers as having their most significant bit count negatively.
A 4-bit integer would be:
Binary (in memory)   Hex   INT4 (as decimal)   UINT4 (as decimal)
0000                 0x0     0                   0  (UINT4_MIN)
0001                 0x1     1                   1
0010                 0x2     2                   2
0100                 0x4     4                   4
0111                 0x7     7  (INT4_MAX)       7
1000                 0x8    -8  (INT4_MIN)       8
1111                 0xF    -1                  15  (UINT4_MAX)
It may be that the header of a library lies to you and the value is negative.
If the library has no other means of telling you about errors this may be a deliberate error value. I have seen "nonsensical" values used in that manner before.
The error could be calculated as (UINT4_MAX - error) or always UINT4_MAX if an error occurs.
Really, without any source code this is a guessing game.
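For a concrete standard-library instance of that sentinel pattern (purely illustrative):
#include <cassert>
#include <string>

int main()
{
    // std::string::npos is defined as size_type(-1): the wrapped-around maximum
    // doubles as a "no result" value, exactly the kind of deliberate
    // "nonsensical" error value described above.
    std::string s = "hello";
    assert(s.find('z') == std::string::npos);
    assert(std::string::npos == static_cast<std::string::size_type>(-1));
}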
EDIT:
I expanded the illustrating table a bit.
If you want to log a number like that, you may want to log it in hexadecimal form; the hex view lets you peek into memory a bit more quickly once you are used to it.

What is the difference between the types of 0x7FFF and 32767?

I'd like to know what the difference is between the values 0x7FFF and 32767. As far as I know, they should both be integers, and the only benefit is the notational convenience. They will take up the same amount of memory, and be represented the same way, or is there another reason for choosing to write a number as 0x vs base 10?
The only advantage is that some programmers find it easier to convert between base 16 and binary in their heads. Since each base 16 digit occupies exactly 4 bits, it's a lot easier to visualize the alignment of bits. And writing in base 2 is quite cumbersome.
The type of an undecorated decimal integral constant is always signed. The type of an undecorated hexadecimal or octal constant alternates between signed and unsigned as you hit the various boundary values determined by the widths of the integral types.
For constants decorated as unsigned (e.g. 0xFU), there is no difference.
Also, strictly speaking, it's not possible to express 0 as a decimal literal: the grammar classifies a literal that starts with 0 as octal.
See Table 6 in C++11 and 6.4.4.1/5 in C11.
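A small sketch of that boundary behaviour, assuming a platform with 32-bit int (the static_asserts are illustrative, not part of either standard's text):
#include <type_traits>

// Assuming 32-bit int: 0x80000000 does not fit in int, so the hexadecimal literal
// falls through to unsigned int, whereas the equivalent decimal literal 2147483648
// skips the unsigned types and becomes a (signed) long or long long instead.
static_assert(std::is_same<decltype(0x80000000), unsigned int>::value,
              "hex literal just past INT_MAX becomes unsigned");
static_assert(std::is_signed<decltype(2147483648)>::value,
              "decimal literal just past INT_MAX stays signed");

int main() {}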
Both are integer literals, and just provide a different means of expressing the same number. There is no technical advantage to using one form over the other.
Note that you can also use octal notation as well (by prepending the value with 0).
The 0x7FFF notation makes potential overflow/underflow much clearer than the decimal notation does.
If you are using something that is 16 bits wide, 0x7FFF alerts you to the fact that if you use those bits in a signed way, you are at the very maximum of what those 16 bits can hold for a positive, signed value. Add 1 to it, and you'll overflow.
The same goes for 32 bits wide. The maximum it can hold (signed, positive) is 0x7FFFFFFF.
You can see these maximums straight off the hex notation, whereas you can't from the decimal notation (unless you happen to have memorized that 32767 is the positive signed maximum for 16 bits).
(Note: the above is true when two's complement is used to distinguish between positive and negative values, if the 16 bits are holding a signed value.)
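A tiny illustration of reading those maxima straight off the hex form (the fixed-width limits from <cstdint> are used only for the example):
#include <cstdint>

static_assert(0x7FFF == INT16_MAX, "0x7FFF: every value bit of a 16-bit signed type set");
static_assert(0x7FFFFFFF == INT32_MAX, "same pattern, 32 bits wide");
static_assert(32767 == INT16_MAX, "identical value, but harder to spot in decimal");

int main() {}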
One is hex -- base 16 -- and the other is decimal?
That is true, there is no difference. Any differences would be in the variable the value is stored in. The literals 0x7FFF and 32767 are identical to the compiler in every way.
See http://www.cplusplus.com/doc/tutorial/constants/.
Choosing to write 0x7fff or 32767 in source code is purely a programmer's choice, because both values are stored in exactly the same way in computer memory.
For example, I'd feel more comfortable using the 0x notation when I need to operate on 4-bit nibbles rather than whole bytes.
If I need to extract the lower 4 bits of a char variable, I'd do
res = charvar & 0x0f;
That's the same of:
res = charvar & 15;
The latter is just less intuitive and less readable, but the operation is identical.

Check for negative values when using lexical_cast to unsigned type

I have a situation where I am grabbing command line arguments and using boost::lexical_cast<unsigned long>(my_param). I was hoping that negative values of my_param would cause lexical_cast to throw, but instead it happily converts them, with -1 becoming 18446744073709551615, which seems absurd: as far as I know the max value for an unsigned long is 2^32-1, so this looks much more like an unsigned long long.
So I am looking for either a smarter way to cast the char * input to unsigned long, or a way to verify that I have not accepted a negative value in its disguise as a large unsigned long long.
There is a bug report against boost with your problem which explains why it behaves that way:
boost::lexical_cast has the behavior of stringstream, which uses num_get functions of std::locale to convert numbers. If we look at [22.2.2.1.2] of Programming languages — C++ (or at [22.2.2.1.2] of the Working Draft, Standard for Programming Language C++) we'll see that num_get uses the rules of scanf for conversions. And in the C99 standard for %u the input value minus sign is optional, so if a negative number is read, no errors will arise and the result will be the two's complement.
And also a suggested wrapper workaround:
https://svn.boost.org/trac/boost/ticket/5494
That's exactly how negative values are to be treated per the standard: they use modulo 2^N arithmetic. This enables a handy trick: using -1 as shorthand for the largest possible unsigned value of some type.
If you don't like this conversion, you'll have to scan the input for a minus sign before doing the conversion.
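One hedged way to implement that scan (parse_unsigned is a made-up helper name, not part of Boost):
#include <boost/lexical_cast.hpp>
#include <stdexcept>
#include <string>

// Hypothetical helper: reject input with a leading minus sign before handing it
// to lexical_cast, which would otherwise happily wrap it around.
unsigned long parse_unsigned(const std::string& input)
{
    const std::size_t first = input.find_first_not_of(" \t");
    if (first != std::string::npos && input[first] == '-')
        throw std::invalid_argument("negative value not allowed: " + input);
    return boost::lexical_cast<unsigned long>(input);
}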

C++: How to Convert From Float to String Without Rounding, Truncation or Padding? [duplicate]

This question already has answers here:
Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?
I am facing a problem and am unable to resolve it; I need help from the gurus. Here is sample code:
float f=0.01f;
printf("%f",f);
If we check the value of the variable while debugging, f contains 0.0099999998, yet the output of printf is 0.010000.
a. Is there any way to force the compiler to assign exactly the same value to a variable of float type?
b. I want to convert a float to a string/character array. How can I make sure that only the exact same value is converted to the string/character array? I want to make sure that no zeros are padded, no unwanted values are added, and no digits change, as in the example above.
It is impossible to accurately represent a base 10 decimal number using base 2 values, except for a very small number of values (such as 0.25). To get what you need, you have to switch from the float/double built-in types to some kind of decimal number package.
You could use boost::lexical_cast in this way:
float blah = 0.01;
string w = boost::lexical_cast<string>( blah );
The variable w will contain the text value 0.00999999978. But I can't see when you really need it.
It is preferable to use boost::format to accurately format a float as a string. The following code shows how to do it:
float blah = 0.01;
string w = str( boost::format("%d") % blah ); // w contains exactly "0.01" now
Have a look at this C++ reference. Specifically the section on precision:
float blah = 0.01;
printf ("%.2f\n", blah);
There are uncountably many real numbers.
There are only a finite number of values which the data types float, double, and long double can take.
That is, there will be uncountably many real numbers that cannot be represented exactly using those data types.
The reason that your debugger is giving you a different value is well explained in Mark Ransom's post.
Regarding printing a float without rounding or truncation and with fuller precision: you are missing the precision specifier; the default precision for printf is 6 fractional digits.
Try the following to get a precision of 10 digits:
float amount = 0.0099999998;
printf("%.10f", amount);
As a side note, a more C++ way (vs. C-style) to do things is with cout:
float amount = 0.0099999998;
cout.precision(10);
cout << amount << endl;
For (b), you could do
std::ostringstream os;
os << f;
std::string s = os.str();
In truth, using the floating point unit or co-processor (most are now integrated into the CPU) will never give exact decimal results, only a fairly rough accuracy. For more accurate results, you could consider defining a class "DecimalString" which uses nybbles as decimal characters and symbols, and attempts to mimic base-10 mathematics using strings. Depending on how long you make the strings, you could even do away with the exponent part altogether: a string of 256 characters can represent 1x10^-254 up to 1x10^+255 in straight decimal using actual ASCII, slightly less if you want a sign, but this may prove significantly slower. You could speed this up by reversing the digit order, so that from left to right the digits read
units, tens, hundreds, thousands, ...
Simple example:
e.g. "0021" becomes 1200
This would also need "shifting" left and right to make the decimal points line up before the arithmetic routines run. The best bet is to start with the ADD and SUB functions, as you will then build on them for the MUL and DIV functions; a sketch of ADD follows below. If you are on a large machine, you could make the strings theoretically as long as your heart desires!
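As an illustration of that scheme, here is a minimal sketch of the ADD routine for non-negative numbers stored as reversed decimal strings (add_reversed is a made-up name; SUB, MUL and DIV would build on the same idea):
#include <iostream>
#include <string>

// Digits are stored least-significant first, so "0021" represents 1200.
std::string add_reversed(const std::string& a, const std::string& b)
{
    std::string result;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry != 0; ++i) {
        int sum = carry;
        if (i < a.size()) sum += a[i] - '0';
        if (i < b.size()) sum += b[i] - '0';
        result.push_back(static_cast<char>('0' + sum % 10));
        carry = sum / 10;
    }
    return result;
}

int main()
{
    // "0021" = 1200, "52" = 25; the sum 1225 comes back reversed as "5221".
    std::cout << add_reversed("0021", "52") << "\n";
}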
Equally, you could use the C library; there are the sprintf (from stdio.h), ecvt and fcvt (from stdlib.h) functions (or at least, there should be!).
int sprintf(char* dst,const char* fmt,...);
char *ecvt(double value, int ndig, int *dec, int *sign);
char *fcvt(double value, int ndig, int *dec, int *sign);
sprintf returns the number of characters it wrote to the string. For example:
float f = 12.00f;
char buffer[32];
sprintf(buffer, "%4.2f", f);  // will return 5; if there is an error it will return -1
ecvt and fcvt return pointers to static char* locations containing the null-terminated decimal representations of the numbers, with no decimal point and the most significant digit first; the offset of the decimal point is stored in dec, the sign in sign (1 = negative, 0 = positive), and ndig is the number of significant digits to store. If dec < 0, then you have to pad with -dec zeros before the decimal point. If you are unsure, and you are not working on a Windows 7 system (which sometimes will not run old DOS 3 programs), look for Turbo C version 2 for DOS 3; there are still one or two downloads available. It is a relatively small package from Borland, a small DOS C/C++ editor/compiler that even comes with TASM, the 16-bit 386/486 assembler, and all of this is covered in its help files, as are many other useful nuggets of information.
These routines should be in stdio.h and stdlib.h respectively, though I have found that on Visual Studio 2010 they are anything but standard, often overloaded with functions dealing with WORD-sized characters and asking you to use its own specific functions instead... "So much for the standard library," I mutter to myself almost each and every time. "Maybe they ought to get a better dictionary!"
You would need to consult your platform standards to determine the correct format; you would need to display it as a*b^C, where 'a' is the integral component that holds the sign, 'b' is implementation-defined (likely fixed by a standard), and 'C' is the exponent used for that number.
Alternatively, you could just display it in hex; it'd mean nothing to a human, though, and it would still be binary for all practical purposes. (And just as portable!)
To answer your second question:
it IS possible to exactly and unambiguously represent floats as strings. However, this requires a hexadecimal representation. For instance, 1/16 = 0.1 and 10/16 is 0.A.
With hex floats, you can define a canonical representation. I'd personally use a fixed number of digits representing the underlying number of bits, but you could also decide to strip trailing zeroes. There's no confusion possible on which trailing digits are zero.
Since the representation is exact, the conversions are reversible: f==hexstring2float(float2hexstring(f))
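A small sketch of that exact round trip using the standard "%a" hex-float conversion (the specific digits in the comment are what IEEE-754 single precision gives for 0.01f):
#include <cassert>
#include <cstdio>
#include <cstdlib>

int main()
{
    float f = 0.01f;
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%a", f);  // hex-float text, e.g. "0x1.47ae14p-7"
    float back = std::strtof(buf, nullptr);    // parse the hex string back to a float
    assert(back == f);                         // exact: no rounding, truncation or padding
    std::printf("%s\n", buf);
}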