Given this starting point:
double y = readDoubleValue();
Is there any significant difference in C++ between:
int x = y;
and
int x = trunc(y);
Which one should I prefer? If somebody else (including my future self :) ) reads my code, the second seems to make it more explicit that I know exactly what I am doing; however, it requires including a library header.
Reference:
Is there a trunc function in C++?
Just using static_cast<int>(y) will give you all the benefits you are looking for:
truncation,
the cast itself,
and explicit conversion for clarity (see the sketch below).
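A minimal sketch of the three options (readDoubleValue() is the function from the question; the variable names are just for illustration):

#include <cmath>   // for std::trunc

double y = readDoubleValue();
int a = y;                     // implicit conversion; truncates toward zero
int b = static_cast<int>(y);   // explicit cast; same truncation, but the intent is visible
int c = std::trunc(y);         // trunc returns a double, so the conversion to int is still implicit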
The reasons why I wouldn't use trunc():
it is not that common, and someone else reading your code will probably have to review the documentation (that is what I did, but then again, I'm not an expert);
you are still relying on an implicit conversion anyway, since trunc() doesn't return an int;
for me it is not explicit enough; after reading your code and the documentation I asked myself: "did he intend to cast to int, or did he just want a floating-point value without the fractional part?"
I can think of a situation or two where I want to get rid of the fractional part but still want the variable to keep a floating-point type, for example because I want the operation x + 0.1f to preserve the fractional part;
so I would still have doubts about your intentions; maybe you didn't mean the implicit conversion at all.
Or you can just put a little comment next to it: int x = y; // yes, I know what I'm doing. This will also give you the clarity you need.
IMO you should not. trunc is a function defined for floating-point types; it does not change the type into an integral type.
int x = y; here you say you are assigning something to an int variable
int x = trunc(y); here you say you drop the fractional part for whatever reason, then convert
Use-cases are pretty different in my opinion.
Why would I discourage using trunc before the conversion? Probably preference; to me it is actually a kind of obfuscation in such a use case.
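A quick sketch of that point; the static_assert just makes the return type visible, and the value 2.7 is arbitrary:

#include <cmath>
#include <type_traits>

int main()
{
    double y = 2.7;
    static_assert(std::is_same<decltype(std::trunc(y)), double>::value,
                  "trunc keeps the floating-point type");
    double no_fraction = std::trunc(y);   // 2.0, still a double
    int x = std::trunc(y);                // 2, but the conversion to int is implicit
    (void)no_fraction; (void)x;           // silence unused-variable warnings
}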
Related
In the code base, I often see people writing
void f(unsigned int) { /* some stuff */ }  // define f here
f(0);  // call f later
unsigned int a = 0;
double d = 0;
An initializer (0 here) that doesn't match the declared type annoys me. If this is performance-critical code, does this kind of initialization hurt performance?
EDIT
For the downvoters:
I did search before I posted the question. C++ is famous for hidden rules; I didn't know the integer promotion rules until someone commented under this question. For people saying "even a stupid compiler won't do the conversion at run time", here is an example from this answer:
float f1(float x) { return x*3.14; }
float f2(float x) { return x*3.14F; }
These two versions have different performance, and to an inexperienced C++ programmer I don't see much difference between my question and this example. C++ is famous for hidden rules and pitfalls, which means intuition is sometimes wrong, so why is a question like this getting downvoted?
You are right that 0 is an int, not a double (0.0 would be) nor an unsigned int. This does not matter here, though, as int is implicitly convertible to those types and 0 can be represented exactly by both of them. There's no performance implication either; the compiler does it all at compile time.
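A minimal sketch of that point: the same initializers are constant expressions, so the int-to-unsigned and int-to-double conversions necessarily happen at compile time (the static_asserts are only there to prove it):

constexpr unsigned int a = 0;   // int literal 0 converted to unsigned int at compile time
constexpr double d = 0;         // int literal 0 converted to double at compile time
static_assert(a == 0u,  "converted at compile time, exactly");
static_assert(d == 0.0, "converted at compile time, exactly");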
if this is performance critical code, does this kind of initialization hurt performance?
Very unlikely. Even without any optimization enabled, there is no reason for a compiler to generate code that loads the original value and converts it to the variable's type at run time, rather than simply initializing the variable with the already-converted representation of that type (if a conversion is necessary at all). It may, however, affect your code if you use the initialization style that some people recommend for modern C++:
auto a = 0U; // if you do not write the U suffix, the variable's type will be (signed) int
Whether to use this style or not is a subjective question.
For your addition:
float f1(float x) { return x*3.14; }
float f2(float x) { return x*3.14F; }
This case is quite different. In the first case, x is promoted to double, the calculation is carried out in double, and the result is then converted back to float; in the second case, a float is multiplied by a float. The difference is significant: converting a compile-time constant of one type into a constant of another type is free, whereas changing the type in which a run-time calculation is performed is not.
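A small sketch that makes the promotion visible; the static_asserts only inspect the types of the two expressions:

#include <type_traits>

int main()
{
    float x = 1.0f;
    static_assert(std::is_same<decltype(x * 3.14), double>::value,
                  "3.14 is a double literal, so the multiplication is done in double");
    static_assert(std::is_same<decltype(x * 3.14f), float>::value,
                  "3.14f is a float literal, so the multiplication stays in float");
    (void)x;
}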
I am trying to write C++ code to convert the assembly dq 3FA999999999999Ah into a C++ double. What should I type inside the asm block? I don't know how to get the value out.
int main()
{
double x;
asm
{
dq 3FA999999999999Ah
mov x,?????
}
std::cout<<x<<std::endl;
return 0;
}
From the comments it sounds a lot like you want a reinterpret cast here. Essentially, this tells the compiler to treat the sequence of bits as if it were of the type it was cast to, without making any attempt to convert the value.
uint64_t raw = 0x3FA999999999999A;
double x = reinterpret_cast<double&>(raw);
See this in action here: http://coliru.stacked-crooked.com/a/37aec366eabf1da7
Note that I've used the specific 64-bit integer type here to make sure the bit representation matches that of the 64-bit double. The cast also has to be to double& because of the C++ rules forbidding a plain cast to double; reinterpret_cast deals with memory, not type conversions. For more details see this question: Why doesn't this reinterpret_cast compile?. Additionally, you need to be sure that the representation of the 64-bit unsigned integer matches the bit representation of the double for this to work properly.
EDIT: Something worth noting is that the compiler warns about this breaking the strict aliasing rules. The quick summary is that more than one lvalue now refers to the same place in memory, and the compiler might not be able to tell which variables change when the change occurs via the other access path. In general you don't want to ignore this; I'd highly recommend reading the following article on strict aliasing to see why it is an issue. So, while the intent of the code might be a little less clear, you might find that a better solution is to use memcpy to avoid the aliasing problems:
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    double x;
    const uint64_t raw = 0x3FA999999999999A;
    std::memcpy(&x, &raw, sizeof raw);   // copy the raw bits into the double
    std::cout << x << std::endl;
    return 0;
}
See this in action here: http://coliru.stacked-crooked.com/a/5b738874e83e896a
This avoids the aliasing issue because x is now a double with the correct constituent bits, but thanks to the memcpy it does not share a memory location with the 64-bit integer that was used to hold the bit pattern. Because memcpy treats the variables as if they were arrays of char, you still need to make sure you get any endianness considerations right.
Casts are used for both type conversion and disambiguation. In further research I found these two as examples :
(double) 3; // conversion
(double) 3.0; // disambiguation
Can someone explain the difference? I don't see any. Is this distinction also valid in C++?
EDIT
Originally the code snippet was like so:
(float) 3; // conversion
(float) 3.0; // disambiguation
But I changed it to double because floating-point literals are no longer float in modern compilers and the question had no meaning otherwise. I hope I interpreted the comments correctly, and I apologize to anyone whose already-posted answer became irrelevant after the edit.
(double) 3 is a conversion from an integer (int) to a floating-point number (double).
The cast in (double) 3.0 is useless; it doesn't do anything, since 3.0 is already a double.
An unsuffixed floating constant has type double.
(ANSI C Standard, §3.1.3.1 Floating constants)
This answer is valid for C, it should be the same in C++.
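A short sketch of what that means in practice; the static_asserts only check the types of the expressions:

#include <type_traits>

int main()
{
    static_assert(std::is_same<decltype((double) 3), double>::value,
                  "(double) 3 converts an int to a double");
    static_assert(std::is_same<decltype(3.0), double>::value,
                  "an unsuffixed floating constant is already a double");
    static_assert(std::is_same<decltype((double) 3.0), double>::value,
                  "so the cast changes nothing");
}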
This is similar to many things asked of programmers in any language. I will give a first example, which is different from what you are asking, but should illustrate better why this would appear wherever you've seen it.
Say I define a constant variable:
static const int a = 5;
Now, what is 'a'? Let's try again...
static const int max_number_of_buttons = 5;
This is explicit variable naming. It is much longer, but in the long run it is likely to save your ass. In C++ you have another potential problem in that regard: naming of member variables versus local and parameter variables. All three should use a different scheme, so that when you read a C++ function you know exactly what it is doing. Here is a small function which tells you nothing about the variables:
void func(char a)
{
char b;
b = a;
p = b * 3;
g = a - 7;
}
With proper naming conventions you would know whether p and g are local variables, parameters of the function, member variables, or global variables. (This function is very small, so you get the idea; imagine a function of 1,000 lines of code [I've seen those]. After a couple of pages you have no idea what's what, and you will often end up shadowing variables in ways that are really hard to debug...)
void func(char a)
{
char b;
b = a;
g_p = b * 3;
f_g = a - 7;
}
My personal convention is to prefix global variables with g_ and member variables with f_. At this point I do not distinguish local and parameter variables, although you could write p_a instead of just a for the parameter, and then you could tell all the different kinds of variables apart.
Okay, so that makes sense with regard to disambiguation of variable names, although in your case you are specifically asking about types. Let's see...
Ada is known for its very strong typing of variables. For example:
type a is range 1 .. 100;
type b is range 1 .. 100;
A: a;
B: b;
A := 5;
B := A;
The last line does NOT compile. Even though type a and type b look alike, the compiler views them as two different numeric types. In other words, it is quite explicit. To make it work in Ada, you have to convert A explicitly, as follows:
B := b(A);
The same in C/C++, you declare two types and variables:
typedef int a;
typedef int b;
a A;
b B;
A = 5;
B = A;
That works as is in C/C++ because type a and type b are considered to be exactly the same type (int, even though you gave them the names a and b). It is clearly NOT explicit. So writing something like:
A = (b) 5;
would explicitly tell you that you view that 5 as being of type b and convert it (implicitly) to type a in the assignment. You could also write it this way to be fully explicit:
A = a(b(5));
Most C/C++ programmers will tell you that this is silly, which is why we have so many bugs in our software, unfortunately. Ada protects you against mixing carrots and bananas because even if both are defined as integers they both are different types.
Now, there is a way to mitigate that problem in C/C++, albeit one that is pretty much never used: you can use a class for each kind of object, including numbers. Then you have a specific type for each distinct concept (because variables of class A and class B cannot just be mixed together, at least not unless you allow it by adding functions for that purpose). Very frankly, even I do not do that, because it would be way too much work to write a program of any length...
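For completeness, a minimal sketch of that approach (a hand-rolled "strong typedef"; the names Apples and Oranges are purely illustrative):

#include <iostream>

// Each concept gets its own distinct type, even though both just wrap an int.
struct Apples  { int value; };
struct Oranges { int value; };

Apples operator+(Apples lhs, Apples rhs) { return Apples{lhs.value + rhs.value}; }

int main()
{
    Apples a{3};
    Apples b{4};
    Apples c = a + b;      // fine: both operands are Apples
    // Oranges o = a;      // error: the compiler refuses to mix the two types
    std::cout << c.value << '\n';
}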
So, as Juri said, 3.0 and (double) 3.0 are exactly the same thing, and the (double) cast is redundant (a pleonasm, as in saying the same thing twice). Yet it may help the person who comes after you to see that you really meant for that number to be a double, and not whatever the language says it might otherwise be.
Just a simple question. Given this:
fftw_complex *H_cast;
H_cast = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*M*N);
what is the difference between:
H_cast= reinterpret_cast<fftw_complex*> (H);
and
H_cast= reinterpret_cast<fftw_complex*> (&H);
Answer to current question
The difference is that they do two completely different things!
Note: you do not tell us what H is, so it's impossible to answer the question with confidence. But general principles apply.
For the first case to be sensible code, H should be a pointer (typed as void*, possibly?) to an fftw_complex instance. You would do this to tell the compiler that H is really an fftw_complex*, so that you can then use it as one.
For the second case to be sensible code, H should be an instance of a class whose memory layout is identical to that of fftw_complex. I can't think of a compelling reason to put yourself in this situation; it is very unnatural. Based on this, and since you don't give us any information about H, I think it's almost certainly a bug.
Original answer
The main difference is that in the second case you can search your source code for reinterpret_cast (and hopefully ensure that every use is clearly documented and a necessary evil).
However, if you are casting from void* to another pointer type (is that the case here?), then it's preferable to use static_cast instead (which can also be searched for easily).
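For example, if H is simply the void* result of the allocation (an assumption; the question doesn't show where H comes from), the first form is the one you want, and static_cast expresses it more cleanly:

// Assuming H was obtained roughly like this (fftw_malloc returns void*);
// fftw3.h and the definitions of M and N are taken from the question.
void* H = fftw_malloc(sizeof(fftw_complex) * M * N);

// Pointer-to-pointer: this is the cast you want, and static_cast is enough.
fftw_complex* H_cast = static_cast<fftw_complex*>(H);

// &H, by contrast, is a void** here; reinterpreting it as fftw_complex* would
// treat the bytes of the pointer variable itself as complex data - almost certainly a bug.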
H_cast= reinterpret_cast<fftw_complex*> (H);
This converts the pointer-ish value stored in H (or the integer itself, if H is an integer type) and tells the compiler: "this is a pointer; stop treating it as whatever it was, it's a pointer now". H is used as something in which you had stored a pointer-like address.
H_cast= reinterpret_cast<fftw_complex*> (&H);
This converts the address of H (which is a pointer to whatever type H is) into a pointer to fftw_complex. Modifying what H_cast points to will now change H itself.
You'll want the second if H is not a pointer and usually the first if it is. There are use cases for the other way around but they're uncommon and ugly (especially reinterpreting an int or - god forbid - a double as a pointer).
Pointer casts are always executed as a reinterpret_cast, so when casting from or to a void * there's no difference between a c-style cast, a static_cast or a reinterpret_cast.
reinterpret_cast is usually reserved for the ugliest of locations, whereas C-style casts and static_casts are used for innocuous casts. You basically use reinterpret_cast to tag some code as really ugly:
float f = 3.1415f;
int x = *reinterpret_cast<int *>(&f);
That way, these ugly unsafe casts are searchable/greppable.
If I have something like:
typedef int MyType;
is it good practice to cast the operands of an operation if I do something like this:
int x = 5;
int y = 6;
MyType a = (MyType)(x + y);
I know that I don't need to do that, but I'm wondering if it's better for intent/documentation/readability. Or should I just do:
MyType a = x + y;
There may be reasons why x and y aren't declared as MyType but the sum of them could be used as an argument to a function that takes MyType, for example.
I wouldn't use a cast. It's unnecessary, looks messy, makes the code harder for a person to parse, and obscures the intent of the code.
If you use typedefs consistently (i.e., if you declare x and y as MyType objects as well), you shouldn't have too much of a problem with this.
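A quick sketch of what consistent use looks like, reusing the names from the question:

typedef int MyType;

MyType x = 5;
MyType y = 6;
MyType a = x + y;   // no cast needed; every operand is already MyType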
The cast is unnecessary, and kind of ugly; if I were maintaining this code I'd yank it out in passing the first time I read through that file, on general principles. So no, it won't do anything bad (it's just spelling out what is going to happen to the value of x + y anyway), but it clutters the line, and the declaration of a as MyType already provides all the documentation that line needs.
In general, I feel you should cast things explicitly as little as possible; when you see a cast in code it should be an indicator that something noteworthy is happening, not simply that a variable's type is going to change during execution.
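A small sketch of that distinction; the narrowing example is just an illustration, not taken from the question:

typedef int MyType;

double ratio = 2.75;
MyType rounded_down = static_cast<MyType>(ratio);   // noteworthy: the fraction is deliberately dropped
MyType sum = 5 + 6;                                 // nothing noteworthy: no cast needed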