Why is implicit casting even allowed? I mean, what is the benefit of casting a float to an int implicitly? Doesn't explicit casting make code more readable and easier to debug?
Answer: Yes, it does, and here is an example:
#include <stdio.h>

int main(void)
{
    unsigned int a = 1;
    int b = -1;

    if (b > a)
        printf("-1 > 1 \n");
    else
        printf("boring!\n");
    return 0;
}
If you execute this code you will get
-1 > 1
This is due to the implicit conversion of the variable b: in the comparison, b is converted to unsigned int, turning -1 into 4294967295, which is bigger than 1. Conversions like this sometimes cause problems, so it is a good habit to cast explicitly in order to make things clear for you and for the programmers working on the same project.
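For comparison, here is a minimal variant of the same program in which the intent is made explicit by casting a back to int (a sketch; it assumes, as here, that a's value fits in an int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 1;
    int b = -1;

    /* Casting a to int keeps the comparison signed, so there is no surprise. */
    if (b > (int)a)
        printf("-1 > 1\n");
    else
        printf("boring!\n");   /* this branch is taken */
    return 0;
}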
Question:
Why is implicit casting even allowed? I mean what is the benefit of
casting a float to an int implicitly?
The maintainer of the C FAQ, Steve Summit, says in a tutorial:
The default conversion rules serve two purposes. One is purely selfish
on the compiler's part: it does not want to have to know how to
generate code to add, say, a floating-point number to an integer. The
compiler would much prefer if all operations operated on two values of
the same type: two integers, two floating-point numbers, etc. (Indeed,
few processors have an instruction for adding a floating-point number
to an integer; most have instructions for adding two integers, or two
floating-point numbers.) The other purpose for the default conversions
is the programmer's convenience: the mentality that "the computer and
the compiler are stupid, we programmers must specify everything in
excruciating detail" can be carried too far, and it's reasonable to
define the language such that certain conversions are performed
implicitly and automatically by the compiler, when it's unambiguous
and safe to do so.
Question 2:
Doesn't explicit casting make code more readable and easier to debug?
Answer:
As mentioned above, that is why implicit conversions happen; but yes, explicit conversions do add readability.
First: the large number of implicit conversions in C++ is due to
historical reasons, and nothing else. I don't think anyone considers
all of them a good idea. On the other hand, there are many different
types of implicit conversions, and some of them are almost essential to
the language: you wouldn't like it if you needed an explicit conversion
to pass a MyType x; to a function taking a MyType const&; I'm pretty
sure that there is a consensus that const conversions adding const, like
this one, should be implicit.
With regards to conversions where there isn't a consensus:
Almost no one seems to have a problem with non-lossy conversion;
things like int to long, or float to double. Most people also
seem to accept conversions from integral types to floating point (eg
int to double), although these can lose precision in some cases.
(int i = 123456789; float f = i;, for example.)
There was a proposal during the standardization of C++98 to deprecate
narrowing conversions, like float to int. (The author of the
proposal was Stroustrup; if you don't like such conversions, you're in
good company.) It didn't pass; I don't know why exactly, but I suspect
that it was a question of breaking too much from the traditions of C.
In C++11, such conversions are forbidden in some newer constructs,
like the new initialization sequences. So it sounds to me like there is
a consensus that these implicit conversions aren't really a good idea,
but that they can't be removed for fear of breaking code or maybe just
breaking with the tradition in C. (I know that more than a few people
don't like the fact that someString += 3.14159; is a legal statement,
adding an ETX character to the end of the string.)
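A minimal, compilable illustration of that someString case (a sketch; it relies on 3.14159 being converted to the char value 3, i.e. ETX):

#include <iostream>
#include <string>

int main()
{
    std::string someString = "abc";
    someString += 3.14159;                   // legal: the double is converted to char(3), ETX
    std::cout << someString.size() << '\n';  // prints 4
    return 0;
}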
The original proposal for bool proposed deprecating all of the
conversions of numeric and pointer types to bool. This was removed;
it soon became apparent that the proposal wouldn't pass if it made
things like if ( somePointer ) (as opposed to
if ( somePointer != NULL )) illegal. There is still a large body of
people (myself included) who consider such conversions "bad", and avoid
them.
Finally: a compiler is free to issue a warning for anything it feels
like. If the market insisted on warnings for such conversions, compilers
would implement them (probably as an option). I suspect that the
reason they don't is that the warnings have a bad reputation, due to
the initial implementations generating too many warnings. Integral
promotion leads to a number of narrowing conversions that no one wants
to eliminate:
char ch = '0' + v % 10;
for example, involves an int to char conversion (which is
narrowing); in C++11:
char ch{ '0' + v % 10 };
is illegal (but both VC++ and g++ accept it, g++ with a warning). I
suspect that to be usable, banning narrowing conversions would at least
have to make exceptions for cases where the wider type is itself due to
integral promotion, mixed type arithmetic and cases where the source
expression is a compile time constant which "fits" in the target type.
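A minimal illustration of that last exception, a compile-time constant that "fits" (a sketch, assuming a char of at least 8 bits):

int main()
{
    char ok{ '0' + 5 };          // fine: a constant expression whose value fits in char
    // char bad{ 300 };          // ill-formed: the constant does not fit (narrowing)
    int v = 5;
    // char alsoBad{ '0' + v };  // ill-formed: not a constant expression, so it narrows
    (void)ok;
    (void)v;
    return 0;
}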
Obviously, breaking old code is what prevents new versions of the languages (C and C++) from changing the rules. So the question is why this was permitted in the first place, when C was conceived. The fundamental reason, as I understand it, is that C was modeled to be close to the hardware, and hardware doesn't (often, fundamentally) distinguish between addresses, integers and boolean types. Thus code like

int i = 10; while (i--) doSomething(i);

or

int *p, offset; ... if (p) doSomethingElse(p + offset);

is almost directly translatable to machine code. In fact, it is not far from a macro assembler, most differences being the niceties around function calls. In my opinion it is also extremely readable. Any additional casts or explicit comparisons would compromise the bare-bones visibility of the logic. But that, of course, is a matter of taste and programming socialization.
And then yes, experience not available in the 70s has shown that some of the implicit conversions are sources of errors. If K&R could conceive C again they would probably change a few (literally few) things. The world being as it is though we have to make do with compiler warnings.
Related
This is a pretty minor question, so my apologies if it is too broad or possibly a duplicate. I searched and found several questions regarding how implicit integer conversion works, but none asking whether it is a good thing. I wouldn't ordinarily have cared about this at all, but all the loud and irritating warnings that compilers give about implicit conversions made me wonder whether this is considered a problem.
As a simple example, here is a snippet calling a function that takes a linked list, an integer (an index), and an unsigned integer (a range), and removes the specified range from the linked list.
const int64_t first = foo;
const int64_t last = bar;
const int diff = last - first; /* int for example's sake */
/* ... */
ll_delete_range_at(baz->ll, first+1, diff-1);
Dead simple; not terribly interesting. But clang complains that the values passed to the function are shortened and in the second case the sign is changed. Assuming that I know (as I do here) that there will not be an overflow problem, and that the values are always positive, is this actually a problem? Should one explicitly cast like this?
ll_delete_range_at(baz->ll, (int)(first+1), (unsigned)(diff-1));
As far as I understand it, this changes nothing other than to add clutter and explicitly state that the programmer is aware of the casts being done and is confident that they are OK. Is that worth the clutter?
Clutter from frequent casting is a sign of bad architecture, not a sign that explicit casting is wrong.
If you know that specific variables or members will always be in a specific range or size, then declare them with the appropriate type from the beginning.
If the types you have to work with need to be declared with a specific range or size, then that has a reason, and that reason will be valid wherever you use them, so stick to the types throughout your code.
If you have a special case where you must cast (e.g. when combining two libraries that use different types), then encapsulate that cast in a helper function or wrapper class with proper error handling.
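A minimal sketch of such a helper, under the assumption that the wider type is int64_t and the narrower one is uint32_t (the name to_u32 is illustrative):

#include <cstdint>
#include <limits>
#include <stdexcept>

// Hypothetical helper for the "two libraries, two types" case: the cast lives
// in one checked place instead of being repeated at every call site.
std::uint32_t to_u32(std::int64_t value)
{
    if (value < 0 || value > std::numeric_limits<std::uint32_t>::max())
        throw std::out_of_range("value does not fit in uint32_t");
    return static_cast<std::uint32_t>(value);
}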
C-style casts to int and unsigned should be avoided because, in my experience, it is too easy to accidentally feed them a pointer and have the cast silently become a reinterpret cast. Use static_cast.
Second, those casts are a sign of errors waiting to happen: if the int64_t value doesn't fit in an int, we just got an unspecified overflow. And a large unsigned value above 2^31 is probably not the number of elements we want to delete when diff == 0.
So in that sense they are great documentation of where the code has dangerous problems.
The current programmer "knows" that the values are safe. The different programmer 7 years, months or days from now "knows" that any valid 64 bit value can be used for first and last, and that a diff of 0 is reasonable and later code shouldn't choke.
Describing the contract of code where diff cannot be 0 and two 64 bit values must be within 2^31-1 of each other makes the code smell here more apparent.
All of which follows from a seemingly innocuous pair of C-style casts and some numerically questionable function calls.
Explicit casts are better than implicit conversion, but the type conversion itself is fraught here. Often type changes are a sign of bugs and fencepost errors to come. Program defensively and deal with the bounds checks unless you know the code is in a critical performance path; then, get the types right in the critical performance path.
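As a sketch of what "program defensively" could look like at this call site, reusing the question's ll_delete_range_at (the wrapper and its checks are illustrative assumptions about the contract, not part of the original code):

#include <cassert>
#include <cstdint>
#include <limits>

// Hypothetical declarations mirroring the question's snippet.
struct linked_list;
void ll_delete_range_at(linked_list *ll, int index, unsigned count);

// Defensive wrapper: make the contract explicit before narrowing the values.
void delete_between(linked_list *ll, std::int64_t first, std::int64_t last)
{
    const std::int64_t count = last - first - 1;  // elements strictly between first and last
    assert(count >= 0);                           // catches the diff == 0 case described above
    assert(first < std::numeric_limits<int>::max());
    ll_delete_range_at(ll, static_cast<int>(first + 1), static_cast<unsigned>(count));
}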
In F#, if you write 3 + 2.5 you get a syntax error. You have to write e.g. 3. + 2.5 to make it work, which can get annoying in math-heavy domains with many numeric literals.
Seeing as many other languages (e.g. C#) handle this just fine, is there a particular reason why F# doesn't implicitly convert int literals to float (which is a lossless conversion as far as I know) when performing arithmetic operations?
It's true that int to float is "safe". However, the lack of implicit conversion between types in general is considered to be a good feature of F# by many, as others have mentioned.
F# has much more extensive type inference than C#. The types inferred from usage can be propagated all the way through a large codebase. Implicit conversion between numeric types could complicate that inference, make type errors harder to understand, and increase the maintenance burden of the compiler code itself. In fact, F# doesn't perform any of the implicit conversions defined in C#.
By eliminating unnecessary casts, implicit conversions can improve source code readability. However, because implicit conversions do not require programmers to explicitly cast from one type to the other, care must be taken to prevent unexpected results.
Again, this decreases convenience but reduces the chance of incorrect behaviour, which can be a much bigger inconvenience later on, or for someone else.
Basically, this approach trades some convenience for another convenience (not having to write type names everywhere) and some increased safety/explicitness. I personally think it's a good trade-off for F#.
F# is a functional-first language, and one of the core values in functional languages is being able to reason about your code, which is a fancy way of saying it's easy to understand what your code is doing. Explicit operations mean that your code will be easier to reason about. Don't believe me?
Here is some python code that takes a number, turns it into a string, then back into a number, guess what it returns:
float(str(0.47000000000000003))
Did you guess 0.47000000000000003? Sorry, it's actually 0.46999999999999997! There are all sorts of weirdness like that when converting between double, decimal and float! It's best to pick a type and stick to it. Constantly having to specify the type might seem annoying at first, but the value of never having to worry about which types your functions are using versus the types being sent in... god help you if a library chose types for you as well... well, let's just say you will appreciate the explicitness as time goes on ;)
I have been doing some testing of my application by compiling it on different platforms, and the shift from a 64-bit system to a 32-bit system is exposing a number of issues.
I make heavy use of vectors, strings, etc., and as such need to count them. However, my functions also make use of 32-bit unsigned numbers because in many cases I need to explicitly consume a positive integer.
I'm having issues with seemingly simple tasks such as std::min and std::max, which may be more systemic. Consider the following code:
uint32_t getmax()
{
    return _vecContainer.size();
}
Seems simple enough: I know that a vector can't have a negative number of elements, so returning an unsigned integer makes complete sense.
void setRowCol(const uint32_t &r_row, const uint32_t &r_col)
{
    myContainer_t mc;
    mc.row = r_row;
    mc.col = r_col;
    _vecContainer.push_back(mc);
}
Again, simple enough.
Problem:
uint32_t foo(const uint32_t &r_row)
{
    return std::min(r_row, _vecContainer.size());
}
This gives me errors such as:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/algorithm:2589:1: note: candidate template ignored: deduced conflicting types for parameter '_Tp' ('unsigned long' vs. 'unsigned int')
min(const _Tp& __a, const _Tp& __b)
I did a lot of digging, and on one platform vector::size_type is an 8 byte number. However, by design I am using unsigned 4-byte numbers. This is presumably causing things to be wacky because you cannot implicitly convert from an 8-byte number to a 4-byte number.
The solution was to do this the old-fashioned way:
#define MIN_M(a, b) ((a) < (b) ? (a) : (b))
return MIN_M(r_row, _vecContainer.size());
Which works dandy. But the systemic issue remains: when planning for multiple platform support, how do you handle instances like this? I could use size_t as my standard size, but that adds other complications (e.g. moving from one platform which supports 64 bit numbers to another which supports 32 bit numbers at a later date). The bigger issue is that size_t is unsigned, so I can't update my signatures:
size_t foo(const size_t &r_row)
// bad, this allows -1 to be passed, which I don't want
Any suggestions?
EDIT: I had read somewhere that size_t was signed, and I've since been corrected. So far it looks like this is a limitation of my own design (e.g. 32-bit numbers vs. using std::vector::size_type and/or size_t).
One way to deal with this is to use
std::vector<Type>::size_type
as the underlying type of your function parameters/returns, or auto returns if using C++14.
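For instance, a sketch of the question's foo written against the container's size_type (the free-function form and the vector element type are illustrative; the original foo is presumably a member function):

#include <algorithm>
#include <vector>

// Letting the container's own size_type flow through the signature avoids
// the conversion entirely.
std::vector<int>::size_type foo(const std::vector<int>& vec,
                                std::vector<int>::size_type r_row)
{
    return std::min(r_row, vec.size());
}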
An answer in the form of a set of tidbits:
Instead of relying on the compiler to deduce the type, you can explicitly specify the type when using function templates like std::min<T>. For example: std::min<std::uint32_t>(4, my_vec.size());
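Applied to the question's foo, that might look like this (a sketch; the free-function form and the vector element type are illustrative):

#include <algorithm>
#include <cstdint>
#include <vector>

// Naming the template argument means deduction no longer sees conflicting
// types ('unsigned int' vs. 'unsigned long'); size() is narrowed implicitly
// here, which is fine only if the size actually fits in 32 bits.
std::uint32_t foo(const std::vector<int>& vec, std::uint32_t r_row)
{
    return std::min<std::uint32_t>(r_row, vec.size());
}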
Turn on all the compiler warnings related to signed versus unsigned comparisons and implicit narrowing conversions. Use brace initialization where you can, as it will treat narrowing conversions as errors.
If you explicitly want to use 32-bit values like std::uint32_t, I'd try to find the minimal number of places to explicitly convert (i.e., static_cast) the "sizes" to the smaller types. You don't want casts everywhere, but if you're using library container sizes internally and you want your API to use std::uint32_t, explicitly cast at the API boundaries so that a user of your class never has to worry about doing the conversion themselves. If you can keep the conversions to just a couple places, it becomes practical to add run-time checks (i.e., assertions) that the size has not actually outgrown the range of the smaller type.
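A sketch of such a boundary, assuming the API exposes std::uint32_t while the container uses its own size_type internally:

#include <cassert>
#include <cstdint>
#include <limits>
#include <vector>

// Convert at the API boundary: size_type is narrowed to std::uint32_t in
// exactly one place, with a run-time check that the value still fits.
std::uint32_t getmax(const std::vector<int>& vec)
{
    assert(vec.size() <= std::numeric_limits<std::uint32_t>::max());
    return static_cast<std::uint32_t>(vec.size());
}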
If you don't care about the exact size, use std::size_t, which is almost certainly identical to std::XXX::size_type for all of the standard containers. It's theoretically possible for them to be different, but it doesn't happen in practice. In most contexts, std::size_t is less verbose than std::vector::size_type, so it makes a good compromise.
Lots of people (including many people on the C++ standards committee) will tell you to avoid unsigned values even for sizes and indexes. I understand and respect their arguments, but I don't find them persuasive enough to justify the extra friction at the interface with the standard library. Whether or not it's a historical artifact that std::size_t is unsigned, the fact is that the standard library uses unsigned sizes extensively. If you use something else, your code ends up littered with implicit conversions, all of which are potential bugs. Worse, those implicit conversions make turning on the compiler warnings impractical, so all those latent bugs remain relatively invisible. (And even if you know your sizes will never exceed the smaller type, being forced to turn off the compiler warnings for signedness and narrowing means you could miss bugs in completely unrelated parts of the code.) Match the types of the APIs you're using as much as possible, assert and explicitly convert when necessary, and turn on all the warnings.
Keep in mind that auto is not a panacea. for (auto i = 0; i < my_vec.size(); ++i) ... is just as bad as for (int i .... But if you generally prefer algorithms and iterators to raw loops, auto will get you pretty far.
With division you must never divide unless you know the denominator is not 0. Similarly, with unsigned integral types, you must never subtract unless you know the subtrahend is smaller than or equal to the original value. If you can make that a habit, you can avoid the bugs that the always-use-a-signed-type folks are concerned about.
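A sketch of that habit for unsigned subtraction (the names are illustrative):

#include <cstddef>

// Only subtract once we know the result cannot wrap around.
std::size_t remaining(std::size_t capacity, std::size_t used)
{
    return used <= capacity ? capacity - used : 0;
}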
So on a fairly regular basis, it seems, I find that the type of some constant I declared (typically an integer, but occasionally something else like a string) is not the ideal type in the context where it is being used, requiring a cast or resulting in a compiler warning about the implicit conversion.
E.g. in one piece of code I had something like the below, and got a signed/unsigned comparison issue.
static const int MAX_FOO = 16;
...
if (container.size() > MAX_FOO) {...}
I have been thinking of just always using the smallest / most basic type allowed for a given constant (e.g. char, unsigned char, const char* etc. rather than, say, int, size_t and std::string), but was wondering whether this is really a good idea, and whether there are places where it would potentially be a really bad idea, e.g. code using the 'auto' keyword (or perhaps templates) getting too small a type and overflowing on what appeared to be a safe operation?
Going for the smallest type that can hold the initial value is a bad habit. That invites overflow.
Always code for the most general (which according to Murphy's Law is the worst) case. As templates generalize things, that makes the worst case a lot worse. Be prepared for bizarre kinds of overflows and avoid negative numbers while unsigned types are in the neighborhood.
std::size_t is the best choice for the size or length of anything, for the reason you mentioned. But subtract pointers and you get a std::ptrdiff_t instead. Personally, I recommend casting the result of such a subtraction to std::size_t if it can be guaranteed to be positive.
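A minimal example of that pointer-subtraction case (a sketch; the cast is only valid because the difference is known to be non-negative here):

#include <cstddef>

int main()
{
    int arr[10];
    int* first = arr;
    int* last  = arr + 10;

    std::ptrdiff_t diff = last - first;                // pointer subtraction yields ptrdiff_t
    std::size_t len = static_cast<std::size_t>(diff);  // safe: last >= first in this example
    return len == 10 ? 0 : 1;
}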
char * does not own its string in the C++ sense as std::string does, so the latter is the more conservative choice.
This question is so broad that no more specific advice can be given.
Is there any compiler that has a directive or a parameter to cast integer calculations to float implicitly? For example:
float f = (1/3)*5;
cout << f;
the "f" is "0", because calculation's constants(1, 3, 10) are integer. I want to convert integer calculation with a compiler directive or parameter. I mean, I won't use explicit casting or ".f" prefix like that:
float f = ((float)1/3)*5;
or
float f = (1.0f/3.0f)*5.0f;
Do you know of any C/C++ compiler that has a parameter to do this without explicit casting or the "f" suffix?
Any compiler that did what you want would no longer be a conforming C++ compiler. The semantics of integer division are well specified (at least for positive numbers), and you're proposing to change that.
It would also be dangerous since it would wind up applying to everything, and you might at some point have code that relies on standard integer arithmetic, which would silently be invalid. (After all, if you had tests that would catch that, you presumably would have tests that would catch the undesired integer arithmetic.)
So, the only advice I've got is to write unit tests, have code reviews, and try to avoid magic numbers (instead defining them as const float).
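For example, a sketch of the const float approach (the constant names are illustrative):

#include <iostream>

// Named constants instead of magic numbers.
const float ONE_THIRD = 1.0f / 3.0f;
const float SCALE     = 5.0f;

int main()
{
    float f = ONE_THIRD * SCALE;  // all-float arithmetic, no integer division
    std::cout << f << '\n';       // prints about 1.66667 instead of 0
    return 0;
}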
If you don't like either of the two methods you mentioned, you're probably out of luck.
What are you hoping to accomplish with this? Any specialized operator that did "float-division" would have to convert ints to floats at some point after tokenization, which means you're not going to get any performance benefit on the execution.
In C++ it's a bit odd to see a bunch of numeric values sprinkled through the code. Generally it is considered best practice to move any 'magic numbers' like these to their own static const float value, which removes this problem.
No, those two options are the best you have.