While doing some homework from my very strange C++ book (which I've been told before to throw away), I ran into a very peculiar code segment. I know homework problems always throw in extra "mystery" to try to confuse you, like indenting two lines after a single-statement for-loop. But this one confuses me because it seems to serve some real purpose.
Basically, it looks like this:
int counter=10;
...
if(pow(floor(sqrt(counter+0.0)),2) == counter)
...
I'm interested in this part especially:
sqrt(counter+0.0)
Is there some purpose to the +0.0? Is this the poor man's way of doing a static cast to a double? Does this avoid some compiler warning on a compiler I don't use? The entire program printed exactly the same thing and compiled without warnings on g++ when I left out the +0.0 part. Maybe I'm not using a weird enough compiler?
Edit:
Also, does gcc just break the standard and not raise an error for an ambiguous reference, since sqrt can take three different types of parameters?
[earlz#EarlzBeta-~/projects/homework1] $ cat calc.cpp
#include <cmath>
int main(){
int counter=0;
sqrt(counter);
}
[earlz#EarlzBeta-~/projects/homework1] $ g++ calc.cpp
/usr/lib/libstdc++.so.47.0: warning: strcpy() is almost always misused, please use strlcpy()
/usr/lib/libstdc++.so.47.0: warning: strcat() is almost always misused, please use strlcat()
[earlz#EarlzBeta-~/projects/homework1] $
Also, here is the relevant part of my system library's cmath. I'm not too keen on templates, so I'm not sure what it's doing:
using ::sqrt;
inline float
sqrt(float __x)
{ return __builtin_sqrtf(__x); }
inline long double
sqrt(long double __x)
{ return __builtin_sqrtl(__x); }
template<typename _Tp>
inline typename __gnu_cxx::__enable_if<__is_integer<_Tp>::__value,
double>::__type
sqrt(_Tp __x)
{ return __builtin_sqrt(__x); }
Is this the poor man's way of doing a static cast to a double?
Yes.
You can't call sqrt with an int as its parameter, because sqrt takes a float, double, or long double. You have to cast the int to one of those types, otherwise the call is ambiguous.
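For concreteness, here is a minimal sketch of the equivalent spellings. (Note that C++11, and libstdc++ even in C++98 mode as the header excerpt above shows, adds an overload for integer arguments, which is why plain sqrt(counter) also compiles on g++.)

#include <cmath>

int main() {
    int counter = 10;
    double a = std::sqrt(counter + 0.0);                 // promote via arithmetic
    double b = std::sqrt(static_cast<double>(counter));  // the explicit spelling
    double c = std::sqrt((double)counter);               // C-style cast, same effect
    return (a == b && b == c) ? 0 : 1;
}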
The reason for the expression counter + 0.0 is to explicitly make it a real number. If we do not add 0.0, the compiler has to do an implicit conversion.
It's just another way to cast to a double, since sqrt doesn't accept ints: because double is the "higher" type, the compiler merges the int into the 0.0 before the call. Beware of generalizing the trick, though. The analogous idiom for strings, e.g.
double n = 0;
string m = "" + n;
is not concatenation in C++: with an integer operand it would be pointer arithmetic on the string literal, and with a double it does not compile at all.
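For the record, the idiomatic C++ way to turn a number into a string is std::to_string (C++11) or a std::ostringstream; a quick sketch:

#include <iostream>
#include <sstream>
#include <string>

int main() {
    double n = 0;
    std::string m1 = std::to_string(n);  // C++11: "0.000000"
    std::ostringstream oss;              // pre-C++11 alternative
    oss << n;
    std::string m2 = oss.str();          // "0"
    std::cout << m1 << " " << m2 << "\n";
    return 0;
}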
Related
Basically, an integer variable should only allow integer values to be assigned to it. Then how come special suffixes like the following are allowed?
int a = 200L;
int a = 200U;
int a = 200F;
I found that when I ran the program, it compiled perfectly without giving any error. Other letters are not allowed, as expected. But why these?
L, U and F mean long, unsigned and float respectively.
So the code means:
int a = (long) 200;
int a = (unsigned) 200;
int a = (float) 200;
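If you want to convince yourself of what type each literal has before the conversion back to int, here is a small sketch (assuming a C++11 compiler for decltype and static_assert; the 200.0F spelling is deliberate, since strictly speaking F is only valid on a floating literal, i.e. one with a decimal point or exponent):

#include <type_traits>

static_assert(std::is_same<decltype(200L), long>::value, "L makes a long");
static_assert(std::is_same<decltype(200U), unsigned int>::value, "U makes an unsigned int");
static_assert(std::is_same<decltype(200.0F), float>::value, "F makes a float");

int main() { return 0; }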
What you are doing is called an implicit conversion.
If you are using the gcc compiler you can add
-Wconversion
(not part of -Wall) to check for any implicit conversion that may alter a value.
Conversion from signed to unsigned is not warned about by default, so you also need to enable
-Wsign-conversion
If you make the conversion explicit with a cast, neither of those two options will warn:
int percent = (int)((int)4.1 * .5);
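A small sketch of what each flag catches (compile with g++ -Wconversion -Wsign-conversion; the variable names are made up):

int main() {
    double d = 4.1;
    int truncated = d;          // -Wconversion: conversion may alter the value
    int negative = -1;
    unsigned int u = negative;  // -Wsign-conversion: sign of the value may change
    int silenced = (int)d;      // explicit cast: neither option warns
    (void)truncated; (void)u; (void)silenced;
    return 0;
}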
Two different things are going on here.
1) Some letters when stuck on the end of a number take on meaning. 'l' is for long, 'u' is for unsigned, and 'f' is for float.
"Long" is generally 64 bits wide vs int's 32 bits... but that can
vary wildly from machine to machine. DO NOT depend on bit width of
int and long.
"Unsigned" means it doesn't bother to track positive or
negative values... assuming everything is positive. This about
doubles how high an integer can go. Look up "two's complement" for
further information.
"Float" means "floating point". Non whole numbers. 1.5, 3.1415, etc. They can be very large, or very precise, but not both. Floats ARE 32 bits. "Double" is a 64-bit floating point value, which can permit some extreme values of size or precision.
2) Type Coercion, pronounced "co ER shun".
The compiler knows how to convert (coerce) from long to int, unsigned to int, or float to int. They're all just numbers, right? Note that converting from float to int "truncates" (drops) anything after the decimal place: ((int)3.00000001) == 3, ((int)2.9999999) == 2.
If you dial your warnings up to max sensitivity, I believe those statements will all trigger warnings because all those conversions could potentially lose data... though the exact phrasing of that warning will vary from compiler to compiler.
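A runnable version of those truncation examples; note that truncation goes toward zero, not downward, which surprises people with negative values:

#include <iostream>

int main() {
    std::cout << (int)3.00000001 << "\n";  // 3
    std::cout << (int)2.9999999  << "\n";  // 2
    std::cout << (int)-2.9999999 << "\n";  // -2, not -3: toward zero
    return 0;
}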
Bonus Information:
You can trigger this same behavior (accidentally) with classes.
struct Foo {
Foo(int bar) {...}
};
Foo baz = 42;
The compiler will treat the above constructor as an option when looking to convert from int to Foo. The compiler is willing to hop through more than one hoop to get there... so Foo qux = 3.14159; would also compile (double to int is a standard conversion, then int to Foo is the user-defined one). This is also true of other class constructors... so if you have some other class that takes a Foo as its only constructor parameter, you can declare a variable of that class and assign it something that can be coerced to a Foo... and so on:
struct Corge {
Corge(Foo foo) {...}
};
Corge grault = Foo(1.2345); // you almost certainly didn't intend what will happen here
That's three layers of coercion in one innocent-looking line: double to int, int to Foo, and Foo to Corge. (The fully implicit Corge grault = 1.2345; does not actually compile, since the compiler will apply at most one user-defined conversion, but one accidental layer is plenty.) Bleh!
You can block this with the explicit keyword:
struct Foo {
explicit Foo(int bar) {...}
};
Foo baz = 1; // won't compile
I wish they'd made explicit the default and used some keyword to define conversion constructors instead, but that change would almost certainly break someone's code, so it'll never happen.
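Note that explicit only blocks the implicit route; you can still construct the object whenever you spell the conversion out, as this sketch shows:

struct Foo {
    explicit Foo(int bar) {}
};

int main() {
    Foo a(1);        // OK: direct-initialization is allowed
    Foo b = Foo(2);  // OK: the conversion is written out
    // Foo c = 3;    // error: copy-initialization needs the blocked implicit conversion
    return 0;
}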
What happens is that you are telling the compiler to read the value as a different type of data. That is to say:
int a = 200L; // It's like saying: Hey C++, treat this literal as a long
int a = 200U; // And this one as unsigned
int a = 200F; // And this one as a float
There is no error because the compiler understands that these letters at the end indicate the type of the literal, and it then converts the value to int for the assignment.
I checked the difference between abs and fabs in Python here.
As I understand it, there are some differences regarding speed and the accepted types, but my question concerns native C++ on Visual Studio.
I tried the following on Visual Studio 2013 (v120):
float f1= abs(-9.2); // f = 9.2
float f2= fabs(-9); // Compile error [*]
So fabs(-9) gives me a compiler error, but when I tried the following:
double i = -9;
float f2= fabs(i); // This will work fine
What I understand from the first snippet is that it will not compile because fabs(-9) needs a double and the compiler could not convert -9 to -9.0, while in the second snippet the compiler converts i = -9 to -9.0 at compile time, so fabs(i) works fine.
Any better explanation?
Another thing: why can't the compiler accept fabs(-9) and convert the int value to double automatically, like what we have in C#?
[*]:
Error: more than one instance of overloaded function "fabs" matches the argument list:
function "fabs(double _X)"
function "fabs(float _X)"
function "fabs(long double _X)"
argument types are: (int)
In C++, std::abs is overloaded for both signed integer and floating point types. std::fabs only deals with floating point types (pre C++11). Note that the std:: is important; the C function ::abs that is commonly available for legacy reasons will only handle int!
The problem with
float f2= fabs(-9);
is not that there is no conversion from int (the type of -9) to double, but that the compiler does not know which conversion to pick (int -> float, double, long double) since there is a std::fabs for each of those three. Your workaround explicitly tells the compiler to use the int -> double conversion, so the ambiguity goes away.
C++11 solves this by adding double fabs( Integral arg ); which will return the abs of any integer type converted to double. Apparently, this overload is also available in C++98 mode with libstdc++ and libc++.
In general, just use std::abs; it will do the right thing. (Interesting pitfall pointed out by @Shafik Yaghmour: unsigned integer types do funny things in C++.)
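A sketch of the usual workarounds, any of which resolves the ambiguity (assuming a pre-C++11 library without the integral fabs overload):

#include <cmath>
#include <cstdlib>  // integer overloads of std::abs

int main() {
    double a = std::fabs(-9.0);                     // pass a double in the first place
    double b = std::fabs(static_cast<double>(-9));  // or cast explicitly
    int    c = std::abs(-9);                        // or use the integer overload of std::abs
    (void)a; (void)b; (void)c;
    return 0;
}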
With C++11, using unqualified abs() is very dangerous:
#include <iostream>
#include <cmath>
int main() {
    std::cout << abs(-2.5) << std::endl;
    return 0;
}
This program outputs 2 as a result.
Always use std::abs():
#include <iostream>
#include <cmath>
int main() {
    std::cout << std::abs(-2.5) << std::endl;
    return 0;
}
This program outputs 2.5.
You can avoid the unexpected result with using namespace std;, but I would advise against it, because it is considered bad practice in general, and because you would have to search for the using directive to know whether abs() means the int overload or the double overload.
My Visual C++ 2008 didn't know which to choose from: long double fabs(long double), float fabs(float), or double fabs(double).
In the statement double i = -9;, the compiler knows that -9 should be converted to a double because the type of i is double.
abs() is declared in stdlib.h and deals with int values.
fabs() is declared in math.h and deals with double values.
Why do I get this error
C2668: 'abs' : ambiguous call to overloaded function
for simple code like this:
#include <iostream>
#include <cmath>
int main()
{
    unsigned long long int a = 10000000000000;
    unsigned long long int b = 20000000000000;
    std::cout << std::abs(a-b) << "\n"; // ERROR
    return 0;
}
The error persists after removing std::. However, if I use the int data type (with smaller values) there is no problem.
The traditional solution is to write the check manually:
std::cout << ((a < b) ? (b - a) : (a - b)) << "\n";
(Note the extra parentheses: << binds tighter than ?:, so without them the expression does not do what it looks like.)
Is that the only solution?
The check seems the only really good solution. The alternatives require a type bigger than yours, and a nonstandard extension to get one.
You can go with solutions that cast to signed long long if your range fits; I would only suggest that if the cast is placed in a function that does exactly that and nothing else.
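Either way, hiding the operation in a tiny helper keeps the call sites clean; a minimal sketch (absdiff is a made-up name):

#include <iostream>

// Absolute difference of two unsigned values, computed without
// ever forming a negative (i.e. wrapped) intermediate result.
unsigned long long absdiff(unsigned long long a, unsigned long long b) {
    return a < b ? b - a : a - b;
}

int main() {
    unsigned long long a = 10000000000000ULL;
    unsigned long long b = 20000000000000ULL;
    std::cout << absdiff(a, b) << "\n";  // 10000000000000
    return 0;
}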
You are including <cmath> and thus using the "floating-point abs".
The "integer abs" is declared in <cstdlib>.
However, there is no overload for unsigned long long int (which both a and b are, and thus a-b is, too), and the overload for long long int only exists since C++11.
First, you need to include the correct header. As pointed out by gx_, <cmath> has a floating-point abs and on my compiler it actually compiles, but the result is probably not the one you expected:
1.84467e+19
Include <cstdlib> instead. Now the error is:
main.cpp:7:30: error: call of overloaded ‘abs(long long unsigned int)’ is ambiguous
main.cpp:7:30: note: candidates are:
/usr/include/stdlib.h:771:12: note: int abs(int)
/usr/include/c++/4.6/cstdlib:139:3: note: long int std::abs(long int)
/usr/include/c++/4.6/cstdlib:173:3: note: long long int __gnu_cxx::abs(long long int)
As you can see, there is no unsigned overload of this function, because computing an absolute value of something which is of type unsigned makes no sense.
I see answers suggesting that you cast the unsigned type to a signed one, but I believe this is dangerous, unless you really know what you are doing!
Let me first ask: what is the expected range of the values a and b that you are going to operate on? If both are below 2^63-1, I would strongly suggest just using long long int. If that is not true, however, note that your program, for the values:
a=0, b=1
and
a=2^64-1, b=0
will produce exactly the same result, because you actually need 65 bits to represent every possible outcome of a difference of two 64-bit values. If you can confirm that this is not going to be a problem, use the cast as suggested. However, if you don't know, you may need to rethink what you are actually trying to achieve.
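You can watch that collision happen; both subtractions below wrap to the same 64-bit value:

#include <iostream>

int main() {
    unsigned long long a1 = 0ULL, b1 = 1ULL;
    unsigned long long a2 = 18446744073709551615ULL, b2 = 0ULL;  // 2^64 - 1
    std::cout << a1 - b1 << "\n";  // 18446744073709551615
    std::cout << a2 - b2 << "\n";  // 18446744073709551615
    return 0;
}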
Back before C++, in C you had to use abs, fabs, or labs for each different type. C++ allows overloading of abs, but in this case the compiler can't settle on an overload it is happy with.
Using labs(a-b), seeing as you're working with long-sized values, should solve your problem.
I'm often using the wrong literals in expressions, e.g. dividing a float by an int, like this:
float f = read_f();
float g = f / 2;
I believe that the compiler will in this case first convert the int literal (2) to float, and then apply the division operator. GCC and Clang have always let stuff like that pass, but Visual C++ warns about an implicit conversion. So I have to write it like this:
float f = read_f();
float g = f / 2.0f;
That got me wondering: Should I always use the appropriate literals for float, double, long etc.? I normally use int literals whenever I can get away with it, but I'm not sure if that's actually a good idea.
Is this a likely cause of subtle errors?
Is this only an issue for expressions or also for function parameters?
Are there warning levels for GCC or Clang that warn about such implicit conversions?
How about unsigned int, long int etc?
You should always explicitly indicate the type of literal that you intend to use. This will prevent problems when, for example, this sort of code:
float foo = 9.0f;
float bar = foo / 2;
changes to the following, truncating the result:
int foo = 9;
float bar = foo / 2;
It's a concern with function parameters as well when you have overloading and templates involved.
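For instance, this kind of overload set turns an innocent int argument into a hard error (take is a made-up function):

#include <iostream>

void take(float)  { std::cout << "float overload\n"; }
void take(double) { std::cout << "double overload\n"; }

int main() {
    take(2.0f);  // exact match: float overload
    take(2.0);   // exact match: double overload
    // take(2);  // error: ambiguous -- int converts equally well to float and to double
    return 0;
}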
I know gcc has -Wconversion but I can't recall everything that it covers.
For integer values that fit in an int, I usually don't qualify them as long or unsigned, since there is usually much less chance of subtle bugs there.
There's pretty much never an absolutely correct answer to a "should" question. Who's going to use this code, and for what? That's relevant here. But also, particularly for anything to do with floats, it's good to get into the habit of specifying exactly the operations you require: float * float is done in single precision, while anything involving a double is done in double precision. Writing f / 2.0 converts f to double, so you're specifying a different operation than f / 2 or f / 2.0f, both of which divide in single precision.
The best answer here is What Every Computer Scientist Should Know About Floating-Point Arithmetic. I'd say don't tl;dr it; there are no simple answers with floating point.
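A sketch showing that the two literals really do select different operations:

#include <iostream>
#include <iomanip>

int main() {
    float f = 0.1f;
    // f / 3.0f divides in single precision; f / 3.0 promotes f and divides in double.
    std::cout << std::setprecision(17)
              << f / 3.0f << "\n"   // single-precision result
              << f / 3.0  << "\n";  // double-precision result, differs in the low digits
    return 0;
}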
My company has a piece of software sporting a rather large codebase. Recently I was assigned the task of checking whether the code would compile on an x86_64 target using gcc 4.1.2. I've gotten pretty far in the compilation with very minor modifications to the code, but just this morning I got a somewhat confusing compile error.
The code is trying, and failing, to call pow from <cmath> using an int and an unsigned int& as parameters. The compiler spits out an error because it can't find a suitable match to call. The overloads for pow in <cmath> are as follows:
double pow(double base, double exponent)
long double pow(long double base, long double exponent)
float pow(float base, float exponent)
double pow(double base, int exponent)
long double pow(long double base, int exponent)
I'm not quite sure why this builds in our 32-bit environments, but that's beside the point right now.
My question is: how should I cast the parameters, and which pow should I use? Thanks.
P.S. I can't change the datatypes of the parameters, as doing so would require too much work. My assignment is to get the code to compile, documenting any hacks I make so that later we can go over those hacks and find proper ways to deal with them.
If you are making many calls to pow(int, unsigned int), why don't you just code it yourself? If execution speed is not an issue, it's not much work.
Otherwise, I'd use a pow() overload whose parameter types are guaranteed to hold your expected values, such as pow(float, float) or pow(double, double). Either way, I feel that making your own version avoids problems with conversions between floating point and integer types.
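If you do roll your own, here is a minimal sketch (ipow is a hypothetical name) using exponentiation by squaring; note it overflows silently for large results:

#include <iostream>

long long ipow(long long base, unsigned int exp) {
    long long result = 1;
    while (exp > 0) {
        if (exp & 1u)      // fold in the current bit's factor
            result *= base;
        exp >>= 1u;
        if (exp)           // skip the final, possibly overflowing square
            base *= base;
    }
    return result;
}

int main() {
    std::cout << ipow(2, 10) << "\n";  // 1024
    return 0;
}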
The result will always be an integer with these types of arguments.
Depending on the expected range of the arguments, especially the exponent, you should choose the float, double or long double version.
So that would become:
pow( (float) i, (int)ui );
You can find the allowed range of arguments by solving the inequality pow(i, ui) < DBL_MAX (or the maximum of whichever floating-point type you choose).