I'm working in an XE6 project, but this may apply to other versions of C++Builder as well.
I'm looking at a function whose name I think may be misleading. I'm curious whether StrToFloat() returns a float or a double. I found an alternative, .ToDouble(), but we already have a bunch of references in our code that use StrToFloat(). I want to verify that I'm getting the full precision that doubles offer.
I've done a couple of tests, like:
UnicodeString temp = "1234567890.12345678901234567890";
double a = StrToFloat(temp);  // result is narrowed to double on assignment
double b = temp.ToDouble();   // returns a double directly
These seem to give the same values in the tests I've done, but I want to verify that StrToFloat() is the same as .ToDouble().
I found enough references to answer my own question...
StrToFloat() returns an Extended, and in C++Builder an Extended is a long double.
.ToDouble() returns a double.
So the short answer is that they are not the same: StrToFloat() carries extra precision, though that extra precision is discarded as soon as the result is assigned to a double, which is why my tests above showed identical values.
References:
StrToFloat():
http://docwiki.embarcadero.com/Libraries/XE6/en/System.SysUtils.StrToFloat
Extended:
http://docwiki.embarcadero.com/Libraries/XE6/en/System.Extended
.ToDouble():
http://docwiki.embarcadero.com/Libraries/XE2/en/System.UnicodeString.ToDouble
A long double is more precise than a double: http://en.wikipedia.org/wiki/Long_double
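For anyone who wants to see the difference directly, here is a minimal sketch. It assumes a 32-bit Windows target, where Extended is the 80-bit x87 type, and a runtime whose printf supports %Lf; on 64-bit Windows, Extended is an alias for double and the two calls behave identically.

#include <System.SysUtils.hpp>  // StrToFloat (RTL header)
#include <cstdio>

void comparePrecision()
{
    UnicodeString temp = "1234567890.12345678901234567890";

    long double a = StrToFloat(temp);  // keep the full Extended result
    double      b = temp.ToDouble();   // plain 64-bit double

    std::printf("%.20Lf\n", a);  // the Extended keeps more significant digits
    std::printf("%.20f\n",  b);  // the double rounds off sooner
}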
The video "Gangnam Style" (I'm sure you've heard it) just exceeded 2 billion views on youtube. In fact, Google says that they never expected a video to be greater than a 32-bit integer... which alludes to the fact that Google used int instead of unsigned for their view counter. I think they had to re-write their code a bit to accommodate larger views.
Checking their style guide: https://google-styleguide.googlecode.com/svn/trunk/cppguide.html#Integer_Types
...they advise "don't use an unsigned integer type," and give one good reason why: unsigned could be buggy.
It's a good reason, but could be guarded against. My question is: is it bad coding practice in general to use unsigned int?
The Google rule is widely accepted in professional circles. The problem is that the unsigned integral types are sort of broken and have unexpected, unnatural behavior when used for numeric values; they don't work well as a cardinal type. For example, an index into an array may never be negative, but it makes perfect sense to write abs(i1 - i2) to find the distance between two indices, which won't work if i1 and i2 have unsigned types, as the sketch below shows.
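A minimal sketch of that failure:

#include <cstdio>
#include <cstdlib>

int main()
{
    unsigned int i1 = 2, i2 = 5;

    // With unsigned operands, i1 - i2 wraps around instead of going
    // negative, so the "distance" is garbage before abs() ever sees it:
    std::printf("%u\n", i1 - i2);  // 4294967293 with 32-bit unsigned int

    // With signed ints, the natural expression works:
    int s1 = 2, s2 = 5;
    std::printf("%d\n", std::abs(s1 - s2));  // 3
    return 0;
}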
As a general rule, this particular rule in the Google style guidelines corresponds more or less to what the designers of the language intended. Any time you see something other than int, you can assume a special reason for it. If it is because of the range, it will be long or long long, or even int_least64_t. Using unsigned types is generally a signal that you're dealing with bits, rather than the numeric value of the variable, or (at least in the case of unsigned char) that you're dealing with raw memory.
With regard to the "self-documentation" of using an unsigned: this doesn't hold up, since there are almost always a lot of values that the variable cannot (or should not) take, including many positive ones. C++ doesn't have sub-range types, and the way unsigned is defined means that it cannot really be used as one either.
This guideline is extremely misleading. Blindly using int instead of unsigned int won't solve anything; it simply shifts the problems somewhere else. You absolutely must be aware of integer overflow when doing arithmetic on fixed-precision integers. If your code is written in such a way that it does not handle integer overflow gracefully for some given inputs, then it is broken regardless of whether you use signed or unsigned ints. With unsigned ints you must be aware of integer underflow as well, and with doubles and floats you must be aware of many additional issues with floating point arithmetic.
Just read this article about a bug in the standard Java binary search algorithm, published by none other than Google, to see why you must be aware of integer overflow. In fact, that very article shows C++ code casting to unsigned int in order to guarantee correct behavior. The article also starts out by presenting a bug in Java, which (guess what) doesn't have unsigned int, and yet they still ran into a bug with integer overflow.
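To make the article's point concrete, here is the midpoint bug and its two standard fixes in C++ (a sketch; the variable names are illustrative):

#include <climits>
#include <cstdio>

int main()
{
    int low = INT_MAX - 10, high = INT_MAX - 2;  // large but valid indices

    // Broken: low + high overflows int, which is undefined behavior in C++.
    // int mid = (low + high) / 2;

    // Fix 1: rearrange so every intermediate value stays in range.
    int mid1 = low + (high - low) / 2;

    // Fix 2: the unsigned cast the article shows; unsigned wraparound is
    // well defined, and the shift brings the result back into range.
    int mid2 = (int)(((unsigned int)low + (unsigned int)high) >> 1);

    std::printf("%d %d\n", mid1, mid2);  // both print 2147483641
    return 0;
}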
Use the right type for the operations you will perform. float wouldn't make sense for a counter; nor would signed int. The normal operations on the counter are print and +=1.
Even if you had some unusual operations, such as printing the difference in view counts, you wouldn't necessarily have a problem. Sure, other answers mention the incorrect abs(i2-i1), but it's not unreasonable to expect programmers to use the correct max(i2,i1) - min(i2,i1), which does have range issues for signed int. There is no uniform solution here; programmers should understand the properties of the types they're working with, as in the sketch below.
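A sketch of that correct unsigned distance:

#include <algorithm>
#include <cstdio>

int main()
{
    unsigned int i1 = 2, i2 = 5;

    // Subtracting the smaller from the larger can never wrap around:
    unsigned int dist = std::max(i1, i2) - std::min(i1, i2);
    std::printf("%u\n", dist);  // 3
    return 0;
}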
Google states that: "Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation."
I personally use unsigned ints as index parameters.
int foo(unsigned int index, int* myArray){
    return myArray[index];
}
Google suggests: "Document that a variable is non-negative using assertions. Don't use an unsigned type."
#include <cassert>

int foo(int index, int* myArray){
    assert(index >= 0);  // documents and, in debug builds, enforces non-negativity
    return myArray[index];
}
Pro for Google: if a negative number is passed, my code converts it to a huge unsigned index and only hopefully fails with an out-of-bounds error in debug mode, whereas Google's code is guaranteed to assert.
Pro for me: my code can index into a larger myArray, since an unsigned index has twice the positive range.
I think the actual deciding factor comes down to: how clean is your code? If you clean up all warnings, it will be clear when the compiler warns you that you're trying to assign a signed variable to an unsigned one. If your code already has a bunch of warnings, the compiler's warning is going to be lost on you.
A final note here: Google says, "Sometimes gcc will notice this bug and warn you, but often it will not." I haven't seen that to be the case in Visual Studio, which always warns about checks against negative numbers and about assignments from signed to unsigned. But if you use gcc, you might want to take care.
Your specific question is "Is it bad practice to use unsigned?", to which the only correct answer can be no. It is not bad practice.
There are many style guides, each with a different focus, and while in some cases an organisation, given its typical toolchain and deployment platform, may choose not to use unsigned for its products, other toolchains and platforms almost demand its use.
Google seems to get a lot of deference because they have a good business model (and probably employ some smart people, like everyone else).
CERT, IIRC, recommends unsigned for buffer indexes, because if you do overflow, at least you'll still be in your own buffer; there is some intrinsic security in that.
What do the language and standard library designers say? (That is probably the best representation of accepted wisdom.) strlen returns a size_t, which is an unsigned type whose width is platform dependent. Other answers suggest this is an anachronism because shiny new computers have wide architectures, but that misses the point that C and C++ are general programming languages and should scale well on big and small platforms.
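For illustration, a small sketch of the unsigned usage the standard library itself imposes on you:

#include <cstdio>
#include <cstring>

int main()
{
    const char *buf = "hello";

    // strlen returns size_t, an unsigned type, so the index matches it:
    for (size_t i = 0; i < std::strlen(buf); ++i)
        std::putchar(buf[i]);
    std::putchar('\n');
    return 0;
}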
Bottom line is that this is one of many religious questions; certainly not settled, and in these cases, I normally go with my religion for green field developments, and go with the existing convention of the codebase for existing work. Consistency matters.
This may be a common question (maybe I'm wrong about that, but two hours on Google have not given me an answer).
The question is how to convert a CString to a float in C++ without using the std:: library.
I tried the wcstod() function, but it behaves very differently from what I expected.
Please see the following code:
CString Tempconvert = "32.001";
double TempConvertedValue = wcstod(Tempconvert,NULL);
So TempConvertedValue should end up holding the value 32.001, but I am getting TempConvertedValue = 32.000999999999998.
Please provide any help that converts the string "32.001" to the float 32.001 exactly, without using the std:: library.
What you see is an example of the internal floating point representation. You can have a look here to see how doubles are stored in memory. See also this question to learn why the debugger shows you this value.
With this storage format, it is not possible to represent most decimal fractions exactly. That is why money calculations usually use special decimal types that are either custom-built or part of the programming language.
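To illustrate: wcstod is behaving correctly; 32.001 has no exact binary representation, so the nearest double is stored, and the rounding you want happens only when formatting for display. A sketch using a plain wide string in place of CString:

#include <cstdio>
#include <cwchar>

int main()
{
    const wchar_t *text = L"32.001";
    double value = std::wcstod(text, NULL);

    std::printf("%.17g\n", value);  // 32.000999999999998 - the stored double
    std::printf("%.3f\n",  value);  // 32.001 - rounded only for display
    return 0;
}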
I want to calculate the number of inversions in a very big array, something like 200,000 ints, and the number I get is quite big - so big it can't be stored in an int.
The answer I get is something like -8,353,514,212, while for simple cases it works, so I think the problem is the type of the variable I use to store the number of inversions.
I also tried long int and the output is the same, but if I try double, the output is 4.0755e+009. I don't know what the problem is.
Use an unsigned data type.
Use unsigned long (maximum usually 2^32-1) or, better, unsigned long long (maximum usually 2^64-1); note that 2^32-1 is about 4.29x10^9, which is still too small for the roughly 2x10^10 inversions possible in a 200,000-element array.
For full reference see this article.
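To see why 64 bits suffice here: 200,000 elements can have at most n(n-1)/2 inversions, about 2x10^10, which overflows 32-bit types but fits comfortably in 64 bits. A quick sketch:

#include <cstdint>
#include <cstdio>

int main()
{
    // Maximum possible inversions for n = 200,000 elements:
    std::uint64_t n = 200000;
    std::uint64_t maxInv = n * (n - 1) / 2;

    std::printf("%llu\n", (unsigned long long)maxInv);  // 19999900000
    return 0;
}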
If the native types of the compiler aren't fit to hold the result of your computation, you could consider using a bignum library.
A quick search revealed these two:
http://www.ttmath.org/
http://gmplib.org/
I've no experience with either, but GMP seems to be the more popular choice around SO, so maybe that's what you should try first.
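If you do go the bignum route, here is a minimal sketch using GMP's C++ interface (gmpxx.h, linked with -lgmpxx -lgmp); the loop merely stands in for whatever inversion counting you perform:

#include <gmpxx.h>
#include <iostream>

int main()
{
    mpz_class inversions = 0;  // arbitrary precision - cannot overflow

    for (long i = 0; i < 200000; ++i)
        inversions += i;       // stand-in for the real counting logic

    std::cout << inversions << '\n';  // 19999900000, beyond 32-bit range
    return 0;
}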
I wrote some parameters (all of type double) to a file for use in performing some complex computations. I write the parameters to the file like so:
refStatsOut << "SomeParam:" << value_of_type_double << endl;
where refStatsOut is an ofstream object. There are four such parameters, each of type double. What I see written to the file differs from the actual value (precision is lost). As an example, if value_of_type_double had the value -28.07270379934792, then what I see written in the file is -28.0727.
Also, once these stats have been computed and written, I run different programs that use them. The files are read, the values are initially stored as std::strings, and then converted to double via atof. This results in the truncated values shown above and ruins the computations further down.
My question is this:
1. Is there a way to increase the resolution with which one can write values (of type double and the like) to a file so as to NOT lose any precision?
2. Could this also be a problem of std::string to double conversion with atof? If so, what other function could I use to solve this?
P.S: Please let me know in case some of the details in this question are not clear. I will try to update them and provide more details.
You can use the stream's precision. Given an output stream
ofstream your_file;
you can call your_file.precision(X); where X is the number of significant digits you want.
The main difference between the precision() member function and the std::setprecision manipulator is that precision(X) returns the precision that was in effect before the call (setprecision doesn't give you that), so you can save and restore it:
streamsize old_precision = your_file.precision(X);
// ... write values at the new precision ...
// restore the previous precision:
your_file.precision(old_precision);
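For the round-trip problem in the question, a minimal sketch (assuming C++11 for max_digits10; the file name is illustrative). With 17 significant digits, an IEEE double written as text parses back to the exact same double:

#include <fstream>
#include <limits>

int main()
{
    std::ofstream refStatsOut("stats.txt");

    // max_digits10 is 17 for IEEE doubles - enough for an exact round trip:
    refStatsOut.precision(std::numeric_limits<double>::max_digits10);
    refStatsOut << "SomeParam:" << -28.07270379934792 << std::endl;
    return 0;
}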
A double occupies the same 64 bits as a 64-bit integer, so if you want a cheap way of writing one out exactly, you can copy its bits into an integer (int64_t from <cstdint> guarantees the width, and memcpy avoids the strict-aliasing problems of a plain pointer cast):
double value = 32985.932235;
int64_t saveme;                         // same size as the double
memcpy(&saveme, &value, sizeof value);  // bit-for-bit copy, needs <cstring>
Just beware of the caveat that the saved value may not remain the same if loaded back on a different architecture.
I have double (or float) variables that might be "empty", as in holding no valid value. How can I represent this condition with the built-in types float and double?
One option would be a wrapper that has a float and a boolean, but that can't work, as my libraries have containers that store doubles and not objects that behave as doubles. Another would be using NaN (std::numeric_limits). But I see no way to check whether a variable is NaN.
How can I solve the problem of needing a "special" float value to mean something other than the number?
We have done that by using NaN:
#include <limits>  // std::numeric_limits

double d = std::numeric_limits<double>::signaling_NaN();
bool isNaN = (d != d);  // true only when d is NaN
A NaN value compared for equality against itself yields false; that is how this test for NaN works. It only seems to be guaranteed, though, when std::numeric_limits<double>::is_iec559 is true (in which case the implementation conforms to IEEE 754 as well).
In C99 there is a macro called isnan for this in math.h, which checks a floating point number for a NaN value too.
In Visual C++, there is a non-standard _isnan(double) function that you can import through float.h.
In C, there is an isnan(double) macro that you can import through math.h.
In C++ (since C++11), there is a std::isnan(double) function that you can import through cmath.
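A minimal sketch showing both checks side by side (std::isnan requires C++11):

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    double d = std::numeric_limits<double>::quiet_NaN();

    std::printf("%d\n", std::isnan(d));  // 1: the library check
    std::printf("%d\n", d != d);         // 1: the self-comparison trick
    return 0;
}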
As others have pointed out, using NaN's can be a lot of hassle. They are a special case that has to be dealt with like NULL pointers. The difference is that a NaN will not usually cause core dumps and application failures, but they are extremely hard to track down. If you decide to use NaN's, use them as little as possible. Overuse of NaN's is an offensive coding practice.
It's not a built-in type, but I generally use boost::optional for this kind of thing. If you absolutely can't use that, perhaps a pointer would do the trick -- if the pointer is NULL, then you know the result doesn't contain a valid value.
One option would be a wrapper that has a float and a boolean, but that can't work, as my libraries have containers that store doubles and not objects that behave as doubles.
That's a shame. In C++ it's trivial to create a templated class that auto-converts to the actual double (reference) attribute. (Or a reference to any other type, for that matter.) You just use the cast operator in a templated class, e.g. operator TYPE & () { return value; }. You can then use a HasValue<double> anywhere you'd normally use a double, as sketched below.
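A minimal sketch of such a wrapper; HasValue and its members are hypothetical names for illustration, not an existing library class:

#include <stdexcept>

template <typename TYPE>
class HasValue {
public:
    HasValue() : valid(false), value() {}
    HasValue(const TYPE &v) : valid(true), value(v) {}

    bool hasValue() const { return valid; }

    // Auto-convert to a reference to the wrapped value, so a
    // HasValue<double> can go anywhere a double is expected:
    operator TYPE & ()
    {
        if (!valid)
            throw std::logic_error("HasValue: no value set");
        return value;
    }

private:
    bool valid;
    TYPE value;
};

// Usage:
//   HasValue<double> d;  // empty; d.hasValue() is false
//   d = 3.14;            // converting constructor + assignment
//   double x = d;        // auto-converts back to double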
Another would be using NaN (std::numeric_limits). But I see no way to check whether a variable is NaN.
As litb and James Schek also remarked, C99 provides us with isnan().
But be careful with that! NaN values make math and logic really interesting! You'd think a number couldn't be both NOT >= foo and NOT <= foo. But with NaN, it can.
There's a reason I keep a WARN-IF-NAN(X) macro in my toolbox. I've had some interesting problems arise in the past.