(Integer. 1)
As I understand it, this is the same as the following Java code:
new Integer(1)
So now I have the following construction:
(Integer. (Long. 1))
#=> 1
How does this work? The Java Integer class has two constructors, and neither of them accepts a Long.
By the way, the following doesn't work:
(Long. (Integer. 1))
This indeed seems like a bug in Clojure; in Java, it's the other way around: new Long(new Integer(1)) compiles, while new Integer(new Long(1)) does not. This seems related to CLJ-445, an "enhancement" request that is over five years old. It's perhaps best to ping that issue with this trivial example.
new Long(new Integer(1)) should be acceptable due to a combination of unboxing and widening.
With unboxing (the inverse of autoboxing), some objects are implicitly converted to primitive types:
Converting an object of a wrapper type (Integer) to its corresponding primitive (int) value is called unboxing. The Java compiler applies unboxing when an object of a wrapper class is:
Passed as a parameter to a method that expects a value of the corresponding primitive type.
Assigned to a variable of the corresponding primitive type.
In this example, Integer objects are implicitly unboxed to int, and Long objects are implicitly unboxed to long.
With widening, primitive types can be implicitly converted to "wider" primitive types when this is possible without information loss. This means that an int can be converted to a long, but not the other way around, so new Integer(new Long(1)) should be rejected.
Related
Given a type, say A, and some arguments, say 1, 4.2, I want to find the constructor of A that can be called with these arguments. Due to conversions, the parameter types may differ from the argument types: instead of the passed int, double, the signature may be unsigned int, float, and that is what I'm after. As an extra twist, the constructor will be overloaded, i.e. there's not just one to consider.
For some context: the reason is that I want to store the arguments in a std::tuple and then store that in a std::any. Later on, to get the tuple out of the any, I must know the type that was stored. At that point (within A, for example), I only know the types unsigned int, float.
Here is an example on godbolt that shows this issue above in A. Additionally, a class B takes a std::shared_ptr<Base> but I want to create the tuple/any from a std::shared_ptr<Child>...
https://godbolt.org/z/RB8UCR
So far, we have only come across https://gist.githubusercontent.com/deni64k/c5728d0596f8f1640318b357701f43e6/raw/87ea05a8f7b3f6add5b3775fecf089e0aa421492/reflection.hxx, which goes in the right direction, but it's not possible to compile this code on Windows: https://godbolt.org/z/TAk7Dy
Has anyone come across this problem before and knows a C++17 cross-platform solution to it?
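To make the mismatch concrete, here is a minimal sketch of the situation described (A is a hypothetical stand-in type; only the std::tuple/std::any part is shown):

#include <any>
#include <iostream>
#include <tuple>

// Hypothetical stand-in: the real class's constructor takes (unsigned int, float).
struct A {
    A(unsigned int, float) {}
};

int main() {
    // The call site stores the literal arguments 1 and 4.2; their deduced
    // types are int and double, so the any holds std::tuple<int, double>.
    std::any stored = std::make_tuple(1, 4.2);

    // Later, knowing only A's parameter types, we would like the tuple back as
    // std::tuple<unsigned int, float>, but std::any_cast requires the exact
    // stored type, so the lookup fails.
    if (std::any_cast<std::tuple<unsigned int, float>>(&stored)) {
        std::cout << "found a tuple matching A's parameter types\n";
    } else {
        std::cout << "mismatch: the any actually holds std::tuple<int, double>\n";
    }
}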
var a : Double
a = Math.sin(10) // error: the integer literal does not conform to the expected type Double
a = Math.sin(10.0) //This compiles successfully
println(a)
Why doesn't Kotlin perform implicit type conversion, instead forcing us to pass exactly the expected type?
fun sin(value: Double): Double // at kotlin documentation
We all know that Kotlin has both non-nullable Int and nullable Int?.
When we use Int?, this happens: Kotlin actually 'boxes' JVM primitives when it needs a nullable reference, since it aims to eliminate the danger of null references from code.
Now look at this (assuming this were compilable code):
val a: Int? = 1
val b: Long? = a
This is why Kotlin doesn't perform implicit type conversion. If Kotlin did convert implicitly, b would hold 1; but since a is a boxed Int and b would be a boxed Long, a == b would yield false, which is a contradiction: the == operator checks equals(), and Long's equals() returns true only when the other operand is also a Long.
Check the documentation:
Explicit Conversions in Kotlin
https://kotlinlang.org/docs/reference/basic-types.html
https://kotlinlang.org/docs/reference/equality.html
Kotlin does not allow implicit conversions of numeric types. There is a misconception that implicit conversions are "no harm, no foul", which is wrong.
The process in Java for implicit conversions is more complicated than you might think; read the docs to see everything that is entailed, and then try to analyze all of the cases that can go wrong.
Kotlin does not want the compiler to guess your intention, so it makes everything in the language explicit, including numeric type conversions. As the Kotlin docs on Explicit Conversions clearly state:
Due to different representations, smaller types are not subtypes of bigger ones.
[...]
As a consequence, smaller types are NOT implicitly converted to bigger types.
[...]
We can use explicit conversions to widen numbers.
And the documentation shows one such sample of where things can go wrong, but there are many others.
Nor can you just cast one numeric type to another, as some incorrect comments and answers here suggest; that will only result in a runtime error. Instead, look at the numeric conversion functions such as toInt() and toDouble() found on the numeric types (see the Number class).
Explicitness is part of the Kotlin personality, and it is not planned to change.
Automatic type conversion for numeric types can lead to losing precision. Just consider the following Java code:
double hoursSinceUnixEra = System.currentTimeMillis()/1000/60/60;
The intention was presumably not to truncate the result to whole hours, yet this compiles without any warning in Java. Now the same in Kotlin:
val hoursSinceUnixEra = System.currentTimeMillis()/1000/60/60;
someObject.doubleValue = hoursSinceUnixEra
The Kotlin code above won't compile because the conversion is not explicit.
Issues of this type can be very hard to find and fix, and that is the reason behind this decision.
You can still explicitly convert type:
val value = 3
Math.sin(value.toDouble())
Imagine you have this function:
void foo(long l) { /* do something with l */}
Now you call it like so at the call site:
foo(65); // here 65 is of type int
Why (technically), when you specify in the declaration of your function that you are expecting a long and you pass just a number without the L suffix, is it treated as an int?
Now, I know it is because the C++ Standard says so; however, what is the technical reason that this 65 isn't just promoted to being of type long, which would save us the silly error of forgetting the L suffix to make it a long explicitly?
I have found this in the C++ Standard:
4.7 Integral conversions [conv.integral]
5 The conversions allowed as integral promotions are excluded from the set of integral conversions.
I can understand a narrowing conversion not being done implicitly, but here the destination type is obviously wider than the source type.
EDIT
This question is based on a question I saw earlier, which had funny behavior when you didn't specify the L suffix: Example. But perhaps it's a C thing more than a C++ one?
In C++, objects and values have a type that is independent of how you use them. When you use them, if a different type is needed, they are converted appropriately.
The problem in the linked question is that varargs is not type-safe. It assumes that you pass in the correct types and that you decode them for what they are. While compiling the caller, the compiler does not know how the callee is going to decode each of the arguments, so it cannot possibly convert them for you. Effectively, varargs is as type-safe as converting to a void* and converting back to a different type: if you get it right, you get what you pushed in; if you get it wrong, you get trash.
Also note that in this particular case, with inlining, the compiler has enough information, but this is just one small case of a general family of errors. Consider the printf family of functions: depending on the contents of the first argument, each of the remaining arguments is processed as a different type. Trying to fix this case at the language level would lead to inconsistencies, where in some cases the compiler does the right thing and in others the wrong one, and it would not be clear to the user when to expect which. It could even do the right thing today and the wrong one tomorrow, if during refactoring the function definition is moved and is no longer available for inlining, or if the logic of the function changes and an argument is processed as one type or another based on some earlier parameter.
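A minimal sketch of the difference (foo is the function from the question, here given a printf body so the program does something observable; the varargs calls rely on standard printf semantics):

#include <cstdio>

void foo(long l) { std::printf("%ld\n", l); }  // prototyped parameter

int main() {
    foo(65);                    // fine: the int literal is converted to long
                                // because the parameter type is known

    std::printf("%ld\n", 65L);  // fine: the L suffix makes the argument a long
    std::printf("%ld\n", 65);   // undefined behaviour where int and long differ:
                                // the vararg is passed as int but read as long
}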
The function in this instance does receive a long, not an int. The compiler automatically converts any argument to the required parameter type if it's possible without losing any information (as here). That's one of the main reasons function prototypes are important.
It's essentially the same as with an expression like (1L + 1) - because the integer 1 is not the right type, it's implicitly converted to a long to perform the calculation, and the result is a long.
If you pass 65L in this function call, no type conversion is necessary, but there's no practical difference - 65L is used either way.
Although not C++, this is the relevant part of the C99 standard, which also explains the var args note:
If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type. The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
Why (technically), when you specify in the declaration of your function that you are expecting a long and you pass just a number without the L suffix, is it treated as an int?
Because the type of a literal is specified only by the form of the literal, not the context in which it is used. For an integer, that is int unless the value is too large for that type, or a suffix is used to specify another type.
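For illustration (a C++17 sketch; auto is used only to surface the type each literal gets from its form alone):

#include <type_traits>

int main() {
    auto a = 65;    // int: an undecorated decimal literal that fits in int
    auto b = 65L;   // long: the L suffix selects long regardless of context
    auto c = 65u;   // unsigned int: the u suffix selects an unsigned type

    static_assert(std::is_same_v<decltype(a), int>);
    static_assert(std::is_same_v<decltype(b), long>);
    static_assert(std::is_same_v<decltype(c), unsigned int>);
}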
Now, I know it is because the C++ Standard says so; however, what is the technical reason that this 65 isn't just promoted to being of type long, which would save us the silly error of forgetting the L suffix to make it a long explicitly?
The value should be promoted to long whether or not you specify that type explicitly, since the function is declared to take an argument of type long. If that's not happening, perhaps you could give an example of code that fails, and describe how it fails?
UPDATE: the example you give passes the literal to a function taking untyped ellipsis (...) arguments, not a typed long argument. In that case, the function caller has no idea what type is expected, and only the default argument promotions are applied. Specifically, a value of type int remains an int when passed through ellipsis arguments.
The C standard states:
"The type of an integer constant is the first of the corresponding list in which its value can be represented."
In C89, this list is:
int, long int, unsigned long int
C99 extends that list to include:
long long int, unsigned long long int
As such, when your code is compiled, the literal 65 fits in an int, and so its type is int. The int is then converted to long when the function is called.
If, for instance, sizeof(int) == 2, and your literal is something like 64000, the type of the value will be a long (assuming sizeof(long) > sizeof(int)).
The suffixes are used to overwrite the default behavior and force the specified literal value to be of a certain type. This can be particularly useful when the integer promotion would be expensive (e.g. as part of an equation in a tight loop).
We have to have a standard meaning for types because, for lower-level applications, the type REALLY matters, especially for integral types. Low-level operators (such as bit shift, add, etc.) rely on the type of the input to determine where overflow happens ((65 << 2) with ints is 260 (0x104), but stored back into a single char it becomes 4 (0x04)!). Sometimes you want this behavior, sometimes you don't. As a programmer, you just need to be able to always know what the compiler is going to do. Thus the design decision was made to make the human explicitly declare the integral types of their constants, with the "undecorated" form being the most commonly used type, int.
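A sketch of that parenthetical (assuming the usual 8-bit char; note that the char operand is promoted to int before the shift, so the narrowing actually happens when the result is stored back into a char):

#include <cstdio>

int main() {
    int  i = 65;
    char c = 65;

    int  wide   = i << 2;                     // 260 (0x104): plenty of room in an int
    char narrow = static_cast<char>(c << 2);  // the shift itself is done in int,
                                              // but storing the result in a char
                                              // keeps only the low byte: 4 (0x04)

    std::printf("%d %d\n", wide, narrow);     // prints: 260 4
}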
The compiler does automatically "cast" your constant expressions at compile time, such that the effective value passed to the function is long, but up until the cast it is considered an int for this reason.
Where I can find an excellently understandable article on C++ type conversion covering all of its types (promotion, implicit/explicit, etc.)?
I've been learning C++ for some time and, for example, the virtual function mechanism seems clearer to me than this topic. My opinion is that this is due to textbook authors complicating it too much (see Stroustrup's book and so on).
(Props to Crazy Eddie for a first answer, but I feel it can be made clearer)
Type Conversion
Why does it happen?
Type conversion can happen for two main reasons. One is because you wrote an explicit expression, such as static_cast<int>(3.5). Another reason is that you used an expression at a place where the compiler needed another type, so it will insert the conversion for you. E.g. 2.5 + 1 will result in an implicit cast from 1 (an integer) to 1.0 (a double).
The explicit forms
There are only a limited number of explicit forms. First off, C++ has 4 named versions: static_cast, dynamic_cast, reinterpret_cast and const_cast. C++ also supports the C-style cast (Type) Expression. Finally, there is a "constructor-style" cast Type(Expression).
The 4 named forms are documented in any good introductory text. The C-style cast expands to a static_cast, const_cast or reinterpret_cast, and the "constructor-style" cast is shorthand for a static_cast<Type>. However, due to parsing problems, the "constructor-style" cast requires a single identifier for the name of the type; unsigned int(-5) or const float(5) are not legal.
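For example (a minimal sketch of the three spellings):

int main() {
    double d = 3.5;

    int a = static_cast<int>(d);  // named form
    int b = (int)d;               // C-style cast
    int c = int(d);               // "constructor-style" cast

    // unsigned int(d);           // ill-formed: the type name is not a single identifier
    auto e = static_cast<unsigned int>(d);  // use static_cast for multi-word type names

    (void)a; (void)b; (void)c; (void)e;
}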
The implicit forms
It's much harder to enumerate all the contexts in which an implicit conversion can happen. Since C++ is a typesafe OO language, there are many situations in which you have an object A in a context where you'd need a type B. Examples are the built-in operators, calling a function, or catching an exception by value.
The conversion sequence
In all cases, implicit and explicit, the compiler will try to find a conversion sequence. A conversion sequence is a series of steps that gets you from type A to type B. The exact conversion sequence chosen by the compiler depends on the type of cast. A dynamic_cast is used to do a checked Base-to-Derived conversion, so the steps are to check whether Derived inherits from Base, and via which intermediate class(es). A const_cast can remove both const and volatile. In the case of a static_cast, the possible steps are the most complex. It will convert between the built-in arithmetic types; it will convert Base pointers to Derived pointers and vice versa; it will consider class constructors (of the destination type) and class cast operators (of the source type); and it will add const and volatile. Obviously, quite a few of these steps are orthogonal: an arithmetic type is never a pointer or class type. Also, the compiler will use each step at most once.
As we noted earlier, some type conversions are explicit and others are implicit. This matters to static_cast because it uses user-defined functions in the conversion sequence. Some of the conversion steps considered by the compiler can be marked as explicit (in C++03, only constructors can be). The compiler will skip (without an error) any explicit conversion function when building an implicit conversion sequence. Of course, if there are no alternatives left, the compiler will still give an error.
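A small sketch of that rule (Feet and Meters are made-up types):

struct Feet {
    Feet(double v) : value(v) {}            // converting constructor: used implicitly
    double value;
};

struct Meters {
    explicit Meters(double v) : value(v) {} // explicit: skipped for implicit conversions
    double value;
};

void takeFeet(Feet) {}
void takeMeters(Meters) {}

int main() {
    takeFeet(3.0);            // ok: implicit conversion via Feet(double)
    // takeMeters(3.0);       // error: Meters(double) is explicit, so it is not
                              //        considered in an implicit conversion sequence
    takeMeters(Meters(3.0));  // ok: the conversion is requested explicitly
}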
The arithmetic conversions
Integer types such as char and short can be converted to "greater" types such as int and long, and smaller floating-point types can similarly be converted into greater types. Signed and unsigned integer types can be converted into each other. Integer and floating-point types can be changed into each other.
Base and Derived conversions
Since C++ is an OO language, there are a number of casts where the relation between Base and Derived matters. Here it is very important to understand the difference between actual objects, pointers, and references (especially if you're coming from .Net or Java). First, the actual objects. They have precisely one type, and you can convert them to any base type (ignoring private base classes for the moment). The conversion creates a new object of base type. We call this "slicing"; the derived parts are sliced off.
Another type of conversion exists when you have pointers to objects. You can always convert a Derived* to a Base*, because inside every Derived object there is a Base subobject. C++ will automatically apply the correct offset of Base within Derived to your pointer. This conversion gives you a new pointer, but not a new object; the new pointer points to the existing subobject. Therefore, the cast will never slice off the Derived part of your object.
The conversion the other way is trickier. In general, not every Base* points to a Base subobject inside a Derived object; Base objects may also exist on their own. Therefore, it is possible that the conversion should fail. C++ gives you two options here. Either you tell the compiler that you're certain you're pointing to a subobject inside a Derived via static_cast<Derived*>(baseptr), or you ask the compiler to check with dynamic_cast<Derived*>(baseptr). In the latter case, the result will be nullptr if baseptr doesn't actually point to a Derived object.
For references to Base and Derived, the same applies except for dynamic_cast<Derived&>(baseref) : it will throw std::bad_cast instead of returning a null pointer. (There are no such things as null references).
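The pointer cases in code (Base and Derived are illustrative; Base needs a virtual function so that dynamic_cast can check the actual type):

#include <iostream>

struct Base    { virtual ~Base() = default; };   // polymorphic, so dynamic_cast works
struct Derived : Base {};

int main() {
    Derived der;

    Base sliced = der;                        // object conversion: the Derived part is sliced off

    Base* bp = &der;                          // Derived* -> Base*: new pointer, same object
    Derived* d1 = static_cast<Derived*>(bp);      // unchecked: "trust me, it is a Derived"
    Derived* d2 = dynamic_cast<Derived*>(bp);     // checked: non-null, bp really points into a Derived

    Base plain;
    Derived* d3 = dynamic_cast<Derived*>(&plain); // checked: nullptr, no Derived here

    std::cout << (d2 != nullptr) << ' ' << (d3 == nullptr) << '\n';  // prints: 1 1
    (void)sliced; (void)d1;
}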
User-defined conversions
There are two ways to define user conversions: via the source type and via the destination type. The first way involves defining a member operator DestinationType() const in the source type. Note that it doesn't have an explicit return type (it's always DestinationType), and that it's const: conversions should never change the source object. A class may define several types to which it can be converted, simply by adding multiple operators.
The second type of conversion, via the destination type, relies on user-defined constructors. A constructor T::T which can be called with one argument of type U can be used to convert a U object into a T object. It doesn't matter if that constructor has additional default arguments, nor does it matter if the U argument is passed by value or by reference. However, as noted before, if T::T(U) is explicit, then it will not be considered in implicit conversion sequences.
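Both directions in a minimal sketch (Fahrenheit and Celsius are made-up types):

struct Fahrenheit {
    double degrees;
    operator double() const { return degrees; }  // conversion defined on the source type
};

struct Celsius {
    double degrees;
    Celsius(double d) : degrees(d) {}             // conversion defined on the destination type
};

void wantsDouble(double) {}
void wantsCelsius(Celsius) {}

int main() {
    Fahrenheit f{98.6};
    wantsDouble(f);      // uses Fahrenheit::operator double()
    wantsCelsius(37.0);  // uses the converting constructor Celsius(double)
}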
It is possible that multiple conversion sequences between two types exist, as a result of user-defined conversions. Since these are essentially function calls (to user-defined operators or constructors), the conversion sequence is chosen via overload resolution of the different function calls.
I don't know of one, so let's see if it can't be made here... hopefully I get it right.
First off, implicit/explicit:
Explicit "conversion" happens everywhere that you do a cast. More specifically, a static_cast. Other casts either fail to do any conversion or cover a different range of topics/conversions. Implicit conversion happens anywhere that conversion is happening without your specific say-so (no casting). Consider it thusly: Using a cast explicitly states your intent.
Promotion:
Promotion happens when you have two or more types of different size interacting in an expression. It is a special case of type "coercion", which I'll go over in a second. Promotion just takes the smaller type and expands it to the larger type. There is no standard set of sizes for numeric types, but generally speaking, char < short < int < long < long long, and float < double < long double.
Coercion:
Coercion happens any time the types in an expression do not match. The compiler will "coerce" a lesser type into a greater type. In some cases, such as converting a large integer to a floating-point type or mixing signed and unsigned operands, information can be lost or the value can change. Coercion includes promotion, so similar types of different size are resolved in that manner. If promotion is not enough, then integral types are converted to floating-point types, and when signed and unsigned operands of the same rank meet, the signed operand is converted to unsigned. This happens until all components of an expression are of the same type.
These compiler actions only take place regarding raw, numeric types. Coercion and promotion do not happen to user defined classes. Generally speaking, explicit casting makes no real difference unless you are reversing promotion/coercion rules. It will, however, get rid of compiler warnings that coercion often causes.
User-defined types can be converted, though. This happens during overload resolution. The compiler will find the various entities that match a name you are using and then go through a process to resolve which of the entities should be used. The "identity" conversion is preferred above all; this means that a call f(t) will resolve to the overload taking t's exact type over anything else (see Function with parameter type that has a copy-constructor with non-const ref chosen? for some confusion that can generate). If the identity conversion doesn't work, the system then goes through a complex hierarchy of conversion attempts that includes (hopefully in the right order) conversion to a base type (slicing), user-defined constructors, and user-defined conversion functions. There's some funky language about references which will generally be unimportant to you and that I don't fully understand without looking it up anyway.
In the case of user-type conversions, explicitness makes a huge difference. The user that defined a type can declare a constructor as "explicit", meaning that constructor will never be considered in the process I described above. To call an entity in a way that would use that constructor, you must do so explicitly by casting (note that syntax such as std::string("hello") is not, strictly speaking, a call to the constructor but instead a "function-style" cast).
Because the compiler will silently look through constructors and type-conversion overloads during name resolution, it is highly recommended that you declare the former as explicit and avoid creating the latter. Any time the compiler silently does something, there is room for bugs. People can't keep in mind every detail about the entire code tree, not even what's currently in scope (especially adding in Koenig lookup), so they can easily forget about some detail that causes their code to do something unintentional due to conversions. Requiring explicit language for conversions makes such accidents much more difficult to make.
For integer types, check the book Secure Coding in C and C++ by Seacord, in particular the chapter about integer overflows.
As for implicit type conversions, you will find the books Effective C++ and More Effective C++ to be very, very useful.
In fact, you shouldn't be a C++ developer without reading these.
Given a C++ function f(X x) where x is a variable of type X, and a variable y of type Y, what are all the automatic/implicit conversions the C++ compiler will perform on y so that the statement "f(y);" is legal code (no errors, no warnings)?
For example:
Pass Derived& to function taking Base& - ok
Pass Base& to function taking Derived& - not ok without a cast
Pass int to function taking long - ok, creates a temporary long
Pass int& to function taking long& - NOT ok, taking reference to temporary
Note how the built-in types have some quirks compared to classes: a Derived can be passed to function taking a Base (although it gets sliced), and an int can be passed to function taking a long, but you cannot pass an int& to a function taking a long&!!
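The example list above, spelled out as code (a minimal sketch):

struct Base {};
struct Derived : Base {};

void takesBase(Base) {}        // by value: the Derived part gets sliced off
void takesBaseRef(Base&) {}    // by reference: binds directly to the Derived object
void takesLong(long) {}        // by value: a temporary long is created from an int
void takesLongRef(long&) {}    // by reference: cannot bind to an int

int main() {
    Derived d;
    takesBase(d);        // ok (sliced)
    takesBaseRef(d);     // ok

    int i = 65;
    takesLong(i);        // ok, via a temporary long
    // takesLongRef(i);  // error: a long& cannot refer to an int, and it is not
                         //        allowed to bind to the temporary either
}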
What's the complete list of cases that are always "ok" (don't need to use any cast to do it)?
What it's for: I have a C++ script-binding library that lets you bind your C++ code, and it will call C++ functions at runtime based on script expressions. Since expressions are evaluated at runtime, all the legal combinations of source types and function argument types that might be used in an expression have to be anticipated ahead of time and precompiled into the library so that they'll be usable at runtime. If I miss a legal combination, some reasonable expressions won't work in runtime expressions; if I accidentally generate a combination that isn't legal C++, my library just won't compile.
Edit (narrowing the question):
Thanks, all of your answers are actually pretty helpful. I knew the answer was complicated, but it sounds like I've only seen the tip of the iceberg.
Let me rephrase the question a little to limit its scope:
I will let the user specify a list of "BaseClasses" and a list of "UserDefinedConversions". For Bases, I'll generate everything, including reference and pointer conversions. But which cases (const/reference/pointer) can I safely generate from the UserDefinedConversions list? (The user will give bare types; I will decorate with *, &, const, etc. in the template.)
The C++ Standard gives the answer to your question in 13.3.3.1 Implicit conversion sequences, but it is too large to post here. I recommend you read at least that part of the Standard.
Hope this link will help you.
Unfortunately the answer to your question is hugely complex, occupying at least 9 pages in the ISO C++ standard (specifically: ~6 pages in "3 Standard Conversions" and ~3 pages in "13.3.3.1 Implicit Conversion Sequences").
Brief summary: A conversion that does not require a cast is called an "implicit conversion sequence". C++ has "standard conversions", which are conversions between fundamental types (such as char being promoted to int) and things such as array-to-pointer decay; there can be several of these in a row, hence the term "sequences". C++ also permits user-defined conversions, which are defined by conversion functions and converting constructors. The important thing to note is that an implicit conversion sequence can have at most one user-defined conversion, with optionally a sequence of standard conversions on either side -- C++ will never "chain" more than one user-defined conversion together without a cast.
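A small sketch of the "at most one user-defined conversion" rule (A, B and C are illustrative types):

struct A {};
struct B { B(A) {} };   // user-defined conversion: A -> B
struct C { C(B) {} };   // user-defined conversion: B -> C

void takesC(C) {}

int main() {
    A a;
    takesC(C(B(a)));   // ok: each user-defined conversion is written out
    // takesC(a);      // error: this would need two user-defined conversions
                       //        (A -> B -> C), which are never chained implicitly
}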
(If anyone would like to flesh this post out with the full details, please go ahead... But for me, that would just be too exhausting, sorry :-/)
Note how the built-in types have some quirks compared to classes: a Derived can be passed to function taking a Base (although it gets sliced), and an int can be passed to function taking a long, but you cannot pass an int& to a function taking a long&!!
That's not a quirk of built-in vs. class types. It's a quirk of inheritance.
If you had classes A and B, and B had a conversion to A (either because A has a constructor taking B, or because B has a conversion operator to A), then they'd behave just like int and long in this respect - conversion can occur where a function takes a value, but not where it takes a non-const reference. In both cases the problem is that there is no object to which the necessary non-const reference can be taken: a long& can't refer to an int, and an A& can't refer to a B, and no non-const reference can refer to a temporary.
The reason the base/derived example doesn't encounter this problem is that a non-const Base reference can refer to a Derived object. The fact that the types are user-defined is a necessary but not a sufficient condition for the reference binding to be legal. Convertible user-defined classes where there is no inheritance behave just like built-ins.
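The point about non-const references in code (A and B are illustrative classes related only by a conversion operator):

struct A {};
struct B {
    operator A() const { return A{}; }  // B is convertible to A, no inheritance involved
};

void byValue(A) {}
void byConstRef(const A&) {}
void byRef(A&) {}

int main() {
    B b;
    byValue(b);     // ok: the conversion produces a temporary A
    byConstRef(b);  // ok: a const A& may bind to that temporary
    // byRef(b);    // error: a non-const A& cannot bind to a temporary,
                    //        exactly like long& and an int
}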
This comment is way too long for comments, so I've used an answer. It doesn't actually answer your question, though, other than to distinguish between:
"Conversions" where a reference to a derived class is passed to a function taking a reference to a base class.
Conversions where a user-defined or built-in conversion actually creates an object, such as from int to long.