Why does the following compile in TypeScript?
enum xEnum {
X1,X2
}
function test(x: xEnum) {
}
test(6);
Shouldn't this be a compile error? IMHO this implicit conversion is wrong here, no?
Here is the playground link.
This is part of the language specification (3.2.7 Enum Types):
Enum types are assignable to the Number primitive type, and vice
versa, but different enum types are not assignable to each other
So the decision to allow implicit conversion between number and an enum, in both directions, is deliberate.
This means you will need to ensure the value is valid.
function test(x: xEnum) {
// Numeric enums get a reverse mapping, so an unknown value looks up as undefined
if (typeof xEnum[x] === 'undefined') {
alert('Bad enum');
}
console.log(x);
}
Although you might not agree with the implementation, it is worth noting that enums are useful in these three situations:
// 1. Enums are useful here:
test(xEnum.X2);
// 2. ...and here (yEnum being another enum, not shown above)
test(yEnum.X2);
And 3. - autocompletion: when you type test( the editor tells you which enum type is expected, so you can pick a value that actually exists.
No, it shouldn't. There is no type cast here; the underlying type behind them all is the same: number.
TypeScript's enum type checking works as designed.
Your complaint is about the range of values, which, in this case, has nothing to do with type checking.
An enum is a flexible set of named constants:
enum xEnum {X1 = 6, X2} // now 0 is no longer a declared member, yet test(0) still compiles
Is there a legal way, according to the C++20 standard, to turn a pointer to an unscoped enumeration type's underlying type into a pointer to the enumeration type? In other words:
enum Enum : int {
FOO = 0,
BAR = 1,
};
// How do I implement this without undefined behavior (and ideally without
// implementation-defined behavior)?
const Enum* ToEnum(const int* p);
I'm surprised to find that it's not listed as a legal use of reinterpret_cast.
If you're interested in why I want this: in a templated API I'm trying to work around the fact that protocol buffers provide repeated enum fields as a proto2::RepeatedField<int>, i.e. an array of ints, despite the fact that there is a strongly-typed enum associated with the field. I would like to be able to turn this into a std::span<Enum> without needing to copy the values.
No, this is not one of the very few exceptions to the aliasing rule ([basic.lval]/11): you can construct such a std::span, but attempting to use its elements (e.g., s.front()=FOO) has undefined behavior.
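If a copy is acceptable, one workaround (a sketch under that assumption, not a zero-copy answer to the question) is to materialize the values into a container of Enum; a static_cast from int to an enum whose underlying type is fixed to int is well-defined for every int value:
#include <span>
#include <vector>

enum Enum : int {
    FOO = 0,
    BAR = 1,
};

// Copies each int into an Enum. Because Enum's underlying type is fixed to int,
// the static_cast below is well-defined for every possible int value.
std::vector<Enum> ToEnumVector(std::span<const int> ints) {
    std::vector<Enum> result;
    result.reserve(ints.size());
    for (int v : ints) {
        result.push_back(static_cast<Enum>(v));
    }
    return result;
}
This keeps strong typing at the call sites while leaving the proto2::RepeatedField<int> untouched, at the cost of one pass over the data.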
#include <iostream>
typedef enum my_time {
day,
night
} my_time;
int main(){
// my_time t1 = 1; <-- will not compile
int t2 = night;
return 0;
}
How is it expected that I can assign an enum value to an int but not the other way in C++?
Of course this is all doable in C.
Implicit conversions, or conversions in general, are not mutual. Just because a type A can be converted to a type B does not imply that B can be converted to A.
Old enums (unscoped enums) can be converted to an integer, but the other way is not possible (implicitly). That's just how it is defined. See here for details: https://en.cppreference.com/w/cpp/language/enum
Consider that roughly speaking enums are just named constants and for a function
void foo(my_time x);
It is most likely an error to pass an arbitrary int. However, a
void bar(int x);
can use an enum for special values of x while others are still allowed:
enum bar_parameter { NONE, ONE, MORE, EVEN_MORE, SOME_OTHER_NAME };
bar(NONE);
bar(SOME_OTHER_NAME);
bar(42);
This has been "fixed" in C++11 with scoped enums that don't implicitly convert either way.
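A quick sketch of that last point (bar_scoped and baz are made-up names for illustration):
enum class bar_scoped { NONE, ONE, MORE };

void baz(bar_scoped) {}

void demo() {
    baz(bar_scoped::ONE);                        // OK: a named constant of the right type
    // baz(0);                                   // error: no implicit int -> bar_scoped
    // int i = bar_scoped::ONE;                  // error: no implicit bar_scoped -> int
    int i = static_cast<int>(bar_scoped::ONE);   // OK: the conversion must be explicit
    (void)i;                                     // silence the unused-variable warning
}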
You could cast to int. This explicitly converts the given value (night) to the specified type (int):
int t2 = static_cast<int>(night);
Of course this is all doable in C
That doesn't mean that the minds behind C++ automatically consider it a desired behavior. Nor should they have such an attitude. C++ follows its own philosophy with regard to types. This is not the only aspect where a conscious decision was made to be more strongly typed than C. This valid C snippet is invalid in C++
void *vptr = NULL;
int *iptr = vptr; // No implicit conversion from void* to T* in C++
How is it expected that I can assign an enum value to an int but not the other way in C++?
It's the behavior because one side of the conversion is less error prone. Allowing an enumerator to become an integer isn't likely to break any assumptions the programmer has about an integer value.
An enumeration is a new type. Some of the enumeration's values are named. And for most cases of using an enumeration, we really do want to restrict ourselves to those named constants only.
Even if an enumeration can hold the integer value, it doesn't mean that value is one of the named constants. And that can easily violate the assumptions code has about its input.
// Precondition: e is one of the name values of Enum, under pain of UB
void frombulate_the_cpu(Enum e);
This function documents its precondition. A violation of the precondition can cause dire problems; that's what UB usually means. If an implicit conversion were possible everywhere in the program, it'd be that much more likely that we violate the precondition unintentionally.
C++ is geared to catch problems at compile time whenever it can, and an implicit int-to-enum conversion is exactly the kind of problem it wants to catch.
If a programmer needs to convert an integer to an enumeration, they can still do it with a cast. Casts stand out in code-bases. They require a conscious decision to override the compiler's checks. And that's a good thing, because when something potentially unsafe is done, it should be done with full awareness.
Cast the int when assigning:
my_time t1 = (my_time)1;
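Or, using the more idiomatic C++ spelling of the same conversion:
my_time t1 = static_cast<my_time>(1);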
I am asking why the following code yields an error in Visual Studio 2014 update 4.
enum A
{ a = 0xFFFFFFFF };
enum class B
{ b = 0xFFFFFFFF };
I know that I can use enum class B : unsigned int. But why is the default underlying type of enum different from the default underlying type of enum class? There must have been a design decision behind this.
Clarifications
I forgot to mention the error:
error C3434: enumerator value '4294967295' cannot be represented as 'int', value is '-1'
That suggests that the default underlying type of enum class is signed int while the default type of enum is unsigned int. This question is about the sign part.
enum class is also called scoped enum.
enum is pretty much necessary for backwards compatibility reasons. scoped enum (or enum class) was added, among other reasons, to pin down the underlying type of the enum.
The details are as follows. When you do something like this:
enum MyEnumType {
Value1, Value2, Value3
};
The compiler is free to choose the underlying numeric type of MyEnumType as long as all your values can fit into that type. This means that the compiler is free to choose char, short, int, long, or another numeric type as the underlying type of MyEnumType. One practice that's done often is to add a last value to the enumeration to force a minimum size of the underlying type. For example:
enum MyEnumType2 {
Value1, Value2, Value3, LastValue=0xffffffff
};
is guaranteed to have an underlying type at least as large as an unsigned 32-bit integer, but it could be larger (for example, a 64-bit unsigned type). This flexibility on the compiler's part is good and bad.
It is good in that you don't have to think about the underlying type. It is bad in that this is now an uncertainty that is up to the compiler, and if you do think about the underlying type, you can't do anything about it. This means that the same piece of code can mean different things on different compilers, which may, for example, be a problem if you wanted to do something like this:
MyEnumType a = ...;
fwrite(&a, sizeof(a), 1, fp);
Where you're writing the enum to a file. In this case, switching compilers or adding a new value to the enumeration can change sizeof(a) and silently break the file format.
The new scoped enumeration solves this issue, among other things. In order to do this, when you declare a scoped enum, there must be a way for the language to fix the underlying type. The standard is, then, that:
enum class MyEnumType {
....
}
defaults to type int. The underlying type can be explicitly changed by deriving your enum class from the appropriate numeric type.
For example:
enum class MyEnumType : char {
....
}
changes the underlying type to char.
For this reason, the default underlying type of a plain enum can change based on how many enumerators there are and what values are assigned to them. The default underlying type of an enum class, on the other hand, is always int.
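Applying that to the code in the question, a sketch of the fix (C++11 or later; the static_assert just demonstrates the point):
#include <type_traits>

enum A { a = 0xFFFFFFFF };                        // OK: the compiler picks an unsigned underlying type
// enum class B { b = 0xFFFFFFFF };               // error: 4294967295 cannot be represented as int
enum class B : unsigned int { b = 0xFFFFFFFF };   // OK: underlying type fixed explicitly

static_assert(std::is_same<std::underlying_type<B>::type, unsigned int>::value,
              "B's underlying type is exactly the one specified");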
As far as N4140 is concerned, MSVC is correct:
§7.2/5 Each enumeration defines a type that is different from all
other types. Each enumeration also has an underlying type. The
underlying type can be explicitly specified using an enum-base. For
a scoped enumeration type, the underlying type is int if it is not
explicitly specified. [...]
For rationale, you can read the proposal entitled Strongly Typed Enums (Revision 3) N2347. Namely, section 2.2.2 Predictable/specifiable type (notably signedness) explains that the underlying type of enum is implementation-defined. For example, N4140 again:
§7.2/7 For an enumeration whose underlying type is not fixed, the
underlying type is an integral type that can represent all the
enumerator values defined in the enumeration. If no integral type can
represent all the enumerator values, the enumeration is ill-formed. It
is implementation-defined which integral type is used as the
underlying type except that the underlying type shall not be larger
than int unless the value of an enumerator cannot fit in an int or
unsigned int. If the enumerator-list is empty, the underlying type
is as if the enumeration had a single enumerator with value 0.
And N2347's proposed solutions:
This proposal is in two parts, following the EWG direction to date:
• provide a distinct new enum type having all the features that are
considered desirable:
o enumerators are in the scope of their enum
o enumerators and enums do not implicitly convert to int
o enums have a defined underlying type
• provide pure backward-compatible extensions for plain enums with a
subset of those features
o the ability to specify the underlying type
o the ability to qualify an enumerator with the name of the enum
The proposed syntax and wording for the distinct new enum type is
based on the C++/CLI [C++/CLI] syntax for this feature. The proposed
syntax for extensions to existing enums is designed for similarity.
So they went with the solution to give scoped enums a defined underlying type.
That's what the standard requires. A scoped enum always has a fixed
underlying type, which defaults to int unless you say otherwise.
As for the motivation: superficially, it doesn't make sense to conflate
the underlying type with whether the enum is scoped or not. I suspect
that this is done only because the authors want to always be able to
forward declare scoped enums; at least in theory, the size and
representation of a pointer to the enum may depend on the underlying
type. (The standard calls such forward declarations opaque enum declarations.)
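For reference, such a forward declaration looks like the sketch below; Color and paint are made-up names:
enum class Color : unsigned char;    // opaque declaration: Color is a complete type from here on

void paint(Color c);                 // interfaces can be declared without the enumerator list

enum class Color : unsigned char {   // the full definition can appear later, or in another file
    Red, Green, Blue
};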
And no, I don't think this is really a valid reason for conflating
scoping and underlying type. But I'm not the whole committee, and
presumably, a majority don't feel the way I do about it. I can't see
much use for specifying the underlying type unless you are forward
declaring the enum; it doesn't help with anything else. Where as I want
to use scoped enum pretty much everywhere I'm dealing with a real
enumeration. (Of course, a real enumeration will never have values
which won't fit in an int; those really only come up when you're using
an enum to define bitmasks.)
Is it safe to assume that static_cast will never throw an exception?
For an int to Enum cast, an exception is not thrown even if it is invalid. Can I rely on this behavior? This following code works.
enum animal {
CAT = 1,
DOG = 2
};
int y = 10;
animal x = static_cast<animal>(y);
For this particular type of cast (integral to enumeration type), an exception might be thrown.
C++ standard 5.2.9 Static cast [expr.static.cast] paragraph 7
A value of integral or enumeration type can be explicitly converted to
an enumeration type. The value is unchanged if the original value is
within the range of the enumeration values (7.2). Otherwise, the
resulting enumeration value is unspecified / undefined (since C++17).
Note that since C++17 such conversion might in fact result in undefined behavior, which may include throwing an exception.
In other words, your particular usage of static_cast to get an enumeration value from an integer will not throw before C++17 (an out-of-range value is merely unspecified there), and it is fine under any standard if you first make sure the integer actually represents a valid enumerator via some kind of input validation procedure.
Sometimes the input validation procedure completely eliminates the need for a static_cast, like so:
animal GetAnimal(int y)
{
switch(y)
{
case 1:
return CAT;
case 2:
return DOG;
default:
// Do something about the invalid parameter, like throw an exception,
// write to a log file, or assert() it. For example:
throw std::invalid_argument("invalid animal value"); // requires <stdexcept>
}
}
Do consider using something like the above structure, for it requires no casts and gives you the opportunity to handle boundary cases correctly.
Is it safe to assume that static_cast will never throw an exception?
No. For user-defined types, the constructor and/or conversion operator might throw an exception, resulting in well-defined behavior.
Consider the output of this program:
#include <iostream>
struct A {
A(int) { throw 1; }
};
int main () {
int y = 7;
try {
static_cast<A>(y);
} catch(...) {
std::cout << "caught\n";
}
}
static_cast can't throw an exception, since static_cast is not a runtime cast: if something cannot be cast, the code will not compile. But if it compiles and the cast is bad, the result is undefined.
(This answer focuses exclusively on the int to enum conversion in your question.)
For an int to Enum cast, an exception is not thrown even if it is invalid. Can I rely on this behavior? This following code works.
enum animal { CAT = 1, DOG = 2 };
int y = 10;
animal x = static_cast<animal>(y);
Actually, enums are not restricted to the list of enumerators in their definition, and that's not just some strange quirk, but a deliberately utilised feature of enums - consider how enumerator values are often ORed together to pack them into a single value, or a 0 is passed when none of the enumerators apply.
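For example, a sketch of the OR-ing pattern just mentioned (open_mode is a made-up enum for illustration):
enum open_mode {
    READ  = 1 << 0,
    WRITE = 1 << 1,
    TRUNC = 1 << 2
};

// 3 == READ | WRITE is not a named enumerator, yet it is a perfectly reasonable
// open_mode value, and it falls inside the enum's range of values (0..7 here).
open_mode mode = static_cast<open_mode>(READ | WRITE);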
In C++03, it's not under explicit programmer control how big a backing integer the compiler will use, but the range is guaranteed to span 0 and the explicitly listed enumerators.
So, it's not necessarily true that 10 is not a valid, storable value for an animal. Even if the backing value were not big enough to store the integral value you're trying to convert to animal, a narrowing conversion may be applied - typically this will use however many of the least significant bits that the enum backing type can hold, discarding any additional high order bits, but for details check the Standard.
In practice, most modern C++03 compilers on PC and server hardware default to using a (32 bit) int to back the enumeration, as that facilitates calling into C library functions where 32 bits is the norm.
I would never expect a compiler to throw an exception when any value is shoehorned into an enum using static_cast<>.
There are sort of two related questions here:
A) How is enum implemented? For example, if I have the code:
enum myType
{
TYPE_1,
TYPE_2
};
What is actually happening? I know that you can treat TYPE_1 and TYPE_2 as ints, but are they actually just ints?
B) Based on that information, assuming that the enum passed in didn't need to be changed, would it make more sense to pass myType into a function as a value or as a const reference?
For example, which is the better choice:
void myFunction(myType x) { /* some stuff */ }
or
void myFunction(const myType& x) { /* some stuff */ }
Speed wise it almost certainly doesn't matter - any decent C++ compiler is just going to pass a single int.
The important point is readability - which will make your code more obvious to the reader?
If it's obvious that these enums are really just ints then I would pass them by value, as if they were ints. Using the const ref might cause a programmer to think twice (never a good idea!)
However - if you are later going to replace them with a class then keeping the API the same and enforcing the const-ness might make sense.
The C++ Standard (§7.2/5) guarantees that the underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. So pass it by value and don't make your code more sophisticated than it needs to be.
I know that you can treat TYPE_1 and TYPE_2 as ints, but are they actually just ints?
Yes. They're of integral type, and most likely the underlying type is just int, because that is the most natural choice. So you can pass by value; passing by reference wouldn't give you any significant advantage.
By the way, for your reference, the section §7.2/5 says,
The underlying type of an enumeration is an integral type that can
represent all the enumerator values defined in the enumeration. It is
implementation-defined which integral type is used as the underlying
type for an enumeration except that the underlying type shall not be
larger than int unless the value of an enumerator cannot fit in an int
or unsigned int. If the enumerator-list is empty, the underlying type
is as if the enumeration had a single enumerator with value 0. The
value of sizeof() applied to an enumeration type, an object of
enumeration type, or an enumerator, is the value of sizeof() applied
to the underlying type.
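A quick way to check this on your own implementation (a sketch; the exact size printed is implementation-defined, but int-sized is typical):
#include <iostream>

enum myType { TYPE_1, TYPE_2 };

void byValue(myType x) {                    // a single small integer is copied here
    std::cout << x << '\n';
}

int main() {
    std::cout << sizeof(myType) << '\n';    // typically sizeof(int), e.g. 4
    byValue(TYPE_2);                        // prints 1
    return 0;
}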
pass built-in simple types (char, short, int, enum, float, pointers) by value
Enums are implemented as integers; you can even explicitly specify values for them.
typedef enum
{
FIRST_THING,
SECOND_THING
} myType;
Then use it just like an int. Pass it by value.