Why should we NOT use enableImplicitConversion when using class-transformer? - class-validator

The class-transformer docs say:
Implicit type conversion
NOTE If you use class-validator together with class-transformer you probably DON'T want to enable this function.
Why not?
I did some tests and found no issues.
Actually, it seems to be the other way around: using class-transformer (with enableImplicitConversion: true and reflect-metadata) in combination with class-validator seems like a perfect fit, and it is supported out of the box by NestJS.

Some reasons why we should not use implicit conversion.
It is too lenient
e.g. when we use @IsString(), every type will pass the validation: even a plain object will be converted to the string [object Object], which is probably not what you want
here's a stackblitz example
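A minimal standalone sketch of that leniency (Dto and name are made-up names; the output mirrors the behavior described above):
import 'reflect-metadata';
import { plainToClass } from 'class-transformer';
import { IsString, validateSync } from 'class-validator';

class Dto {
  @IsString()
  name: string;
}

// with reflect-metadata, enableImplicitConversion coerces via the declared type
const dto = plainToClass(Dto, { name: { some: 'object' } }, { enableImplicitConversion: true });
console.log(dto.name);          // "[object Object]"
console.log(validateSync(dto)); // [] - no validation errors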
@Transform() may not work
Example:
import { plainToClass, Transform } from 'class-transformer';

class Test {
  @Transform(value => (value === 'zero' ? 0 : value), {
    toClassOnly: true
  })
  val: number;
}

const transformed = plainToClass(Test, {
  val: 'zero'
}, {
  enableImplicitConversion: true
});
// transformed.val = NaN
The problem here is that the implicit conversion already happens before @Transform() runs, and since it cannot convert the string "zero" to a number, it sets the value to NaN.
Transform Stackblitz example

Related

C++ best way to define a type consisting of a set of fixed integers

I need to work with "indicator" variables I that may take on one of 3 integer values {-1, 0, 1},
so I would rather not declare
int I;
but rather
indicator_t I;
where the type indicator_t ~ {-1, 0, 1}.
If, furthermore, I can later use I in numerical expressions as an integer (without casting?), that would be excellent.
Question:
How should I define the type indicator_t?
The simplest approach would be to convert the tri-state input into an enum class with 3 fixed discriminators:
enum class indicator_t {
    negative = -1,
    zero = 0,
    positive = 1,
};
There are several nice things with this approach:
It's simple, which makes it easy to understand and maintain
enum class makes it a type distinct from int, which allows both the enum and an integer to appear as part of an overload set, if needed
The enum logically has a cardinality of 3 (the type can accept 3 logical inputs).[1]
This cardinality prevents code that would otherwise look like a != 2,[2] even though 2 is never a possible input
When inputs are passed to a function accepting indicator_t, the intention is clear from the call site. Consider the difference between the following two code snippets:
accept(1); // is this an indicator?
accept(indicator_t::positive); // ah, it's an indicator
If you need to convert to / from numeric values, you can create simple wrappers for this as well:
auto to_int(indicator_t indicator) -> int
{
    return static_cast<int>(indicator);
}
auto to_indicator(int indicator) -> indicator_t
{
    if (indicator > 1 || indicator < -1) {
        // handle error. Throw an exception?
    }
    return static_cast<indicator_t>(indicator);
}
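If you later need I in numeric expressions, as the question asks, these wrappers keep the conversion explicit at the use site. A minimal usage sketch, assuming the enum and wrappers above are in scope:
#include <iostream>
int main()
{
    indicator_t i = to_indicator(-1); // checked construction from an int
    int scaled = to_int(i) * 42;      // explicit conversion before arithmetic
    std::cout << scaled << '\n';      // prints -42
}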
[1] Technically C++ enums can take on any integral value that fits in std::underlying_type_t<the_enum>, but this doesn't change that the logical set of valid inputs is fixed, which prevents developer bugs. Compilers will even try to warn on checks outside of the logical range.
[2] Technically this could still be done, but you'd need to explicitly static_cast all the time -- so it would appear as a code smell rather than a silently missed logic bug.

Why does bool exist when we can use int?

This may sound like a really dumb question, but it has been bothering me for the past few days, and it's not only about the C++ programming language, though I've added its tag. My question is this: in computer science, the Boolean (bool) datatype has only two possible values, 'true' or 'false'. Also, in computer science, 1 is true and 0 is false. So why does boolean exist at all? Why not use an integer that can take only two possible values, such as 1 or 0?
For example :
bool mindExplosion = true; // true!
int mindExplosion = 1; // true!!
// or we can '#define true 1' and it's the same right?
What am I missing?
Why does bool exist when we can use int?
Well, you don't need something as large as an int to represent two states, so it makes sense to allow for a smaller type to save space
Why not use an integer that can take only two possible values, such as 1 or 0?
That is exactly what bool is. It is an unsigned integer type that represents true (1) or false (0).
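A quick standalone illustration of both points (exact sizes are implementation-defined; 1 and 4 bytes are merely typical):
#include <iostream>
int main()
{
    std::cout << sizeof(bool) << '\n'; // typically 1
    std::cout << sizeof(int) << '\n';  // typically 4
    bool b = true;
    std::cout << b << '\n';            // prints 1: true is stored as 1
}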
Another nice thing about having a specific type for this is that it expresses intent without any need for documentation. If we had a function like (warning, very contrived example)
void output(T const & val, bool log)
It is easy to see that log is an option and if we pass false it won't log. If it were instead
void output(T const & val, int log)
Then we aren't sure what it does. Is it asking for a log level? A flag on whether to log or not? Something else?
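To make the difference visible at the call site, here is a minimal sketch (output and its int variant are the contrived names from above, with std::string standing in for T):
#include <string>
void output(std::string const & val, bool log);    // bool: intent is visible
void output_int(std::string const & val, int log); // hypothetical int variant
void caller(std::string const & msg)
{
    output(msg, false);  // clearly: do not log
    output_int(msg, 0);  // a log level? an on/off flag? unclear
}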
What am I missing?
Expressiveness.
When a variable is declared int it might be used only for 0 and 1, or it might hold anything from INT_MIN..INT_MAX.
When a variable is declared bool, it is made explicit that it is to hold a true / false value.
Among other things, this allows the compiler to toss warnings when an int is used in places where you really want a bool, or attempt to store a 2 in a bool. The compiler is your friend; give it all the hints possible so it can tell you when your code starts looking funky.

How to print boolean in cocos2d-x

I already know how to use log with different formats, and I have already read this wiki:
http://www.cocos2d-x.org/wiki/How_to_use_CCLOG
I want to print bool in my game. (The output is intended for me, not for the end user.)
bool x=true;
How do I check the status of x at runtime?
Since the output is intended for you, not for the end user, you can print it in any format you like.
CCLOG appears to be based on printf. Like printf, it has no special format specifier for bool.
The simplest approach is to convert the value to an integer type, yielding 0 or 1:
CCLOG("x = %d\n", (int)x);
(Yes, you should cast the value; since int and bool are likely to have different sizes, they might not be passed as variadic arguments in the same way.)
If you want the output to be a bit more user-friendly:
CCLOG("x = %s\n", x ? "true" : "false");

How do I convert between numeric types safely and idiomatically?

Editor's note: This question is from a version of Rust prior to 1.0 and references some items that are not present in Rust 1.0. The answers still contain valuable information.
What's the idiomatic way to convert from (say) a usize to a u32?
For example, casting using 4294967295us as u32 works and the Rust 0.12 reference docs on type casting say
A numeric value can be cast to any numeric type. A raw pointer value can be cast to or from any integral type or raw pointer type. Any other cast is unsupported and will fail to compile.
but 4294967296us as u32 will silently overflow and give a result of 0.
I found ToPrimitive and FromPrimitive which provide nice functions like to_u32() -> Option<u32>, but they're marked as unstable:
#[unstable(feature = "core", reason = "trait is likely to be removed")]
What's the idiomatic (and safe) way to convert between numeric (and pointer) types?
The platform-dependent size of isize / usize is one reason why I'm asking this question - the original scenario was that I wanted to convert from u32 to usize so I could represent a tree in a Vec<u32> (e.g. let t = vec![0u32, 0u32, 1u32]; then to get the grandparent of node 2 you would use t[t[2us] as usize]), and I wondered how it would fail if usize was less than 32 bits.
Converting values
From a type that fits completely within another
There's no problem here. Use the From trait to be explicit that there's no loss occurring:
fn example(v: i8) -> i32 {
    i32::from(v) // or v.into()
}
You could choose to use as, but it's recommended to avoid it when you don't need it (see below):
fn example(v: i8) -> i32 {
    v as i32
}
From a type that doesn't fit completely in another
There isn't a single method that makes general sense - you are asking how to fit two things in a space meant for one. One good initial attempt is to use an Option — Some when the value fits and None otherwise. You can then fail your program or substitute a default value, depending on your needs.
Since Rust 1.34, you can use TryFrom:
use std::convert::TryFrom;
fn example(v: i32) -> Option<i8> {
    i8::try_from(v).ok()
}
Before that, you'd have to write similar code yourself:
fn example(v: i32) -> Option<i8> {
    // check both ends of the range; checking only MAX would silently
    // mangle negative values below i8::MIN
    if v > std::i8::MAX as i32 || v < std::i8::MIN as i32 {
        None
    } else {
        Some(v as i8)
    }
}
From a type that may or may not fit completely within another
The range of numbers isize / usize can represent changes based on the platform you are compiling for. You'll need to use TryFrom regardless of your current platform.
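For example, a minimal sketch of the question's usize case (this compiles on any platform; whether the conversion can fail at runtime depends on the platform's pointer width):
use std::convert::TryFrom;
fn example(v: usize) -> Option<u32> {
    u32::try_from(v).ok() // None on 64-bit targets when v > u32::MAX
}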
See also:
How do I convert a usize to a u32 using TryFrom?
Why is type conversion from u64 to usize allowed using `as` but not `From`?
What as does
but 4294967296us as u32 will silently overflow and give a result of 0
When converting to a smaller type, as just takes the lower bits of the number, disregarding the upper bits, including the sign:
fn main() {
    let a: u16 = 0x1234;
    let b: u8 = a as u8;
    println!("0x{:04x}, 0x{:02x}", a, b); // 0x1234, 0x34

    let a: i16 = -257;
    let b: u8 = a as u8;
    println!("0x{:02x}, 0x{:02x}", a, b); // 0xfeff, 0xff
}
See also:
What is the difference between From::from and as in Rust?
About ToPrimitive / FromPrimitive
RFC 369, Num Reform, states:
Ideally [...] ToPrimitive [...] would all be removed in favor of a more principled way of working with C-like enums
In the meantime, these traits live on in the num crate:
ToPrimitive
FromPrimitive
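A minimal usage sketch, assuming the num-traits crate (which provides these traits for num) is listed in Cargo.toml:
use num_traits::ToPrimitive;
fn main() {
    let big: u64 = 4_294_967_296;
    assert_eq!(big.to_u32(), None);           // doesn't fit in u32
    assert_eq!(1234u64.to_u32(), Some(1234)); // fits
}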

Avoid casting floating point constant

I'm creating a crate that (among other things) will implement idiomatic angle measurement. When creating methods to convert between angle units I've found a problem:
impl<T> Angle<T>
    where T: Float
{
    pub fn to_deg(self) -> Self {
        Deg(match self {
            Rad(v) => v * cast(180.0 / f64::consts::PI).unwrap(),
            Deg(v) => v,
            Grad(v) => v * cast(180.0 / 200.0).unwrap() // how to get rid of this cast?
        })
    }
}
Runnable
The cast of 180.0 / 200.0 seems really unneeded to me. Is there any way to get rid of it?
When I delete the cast, I get:
src/angles.rs:42:28: 42:33 error: mismatched types:
expected `T`,
found `_`
(expected type parameter,
found floating-point variable) [E0308]
src/angles.rs:42 Grad(v) => v * 180.0 / 200.0
^~~~~
When you have a generic function with a type parameter, such as T, you don't get to choose the type. The type is forced on you by the caller of the function.
The error here is that you're trying to assign a specific f32/f64 type to a type T, which could be anything that implements Float.
You know in practice it's going to be either one of the floating-point types, but theoretically the type system won't stop someone from implementing Float on a string or an array, or a tuple of two function pointers, or any other bizarre thing that can't be assigned a float. When the compiler can't guarantee it'll always work, including in theory in the future, then it won't accept it.
If you want to assign a float value to T, you have to declare that this operation is possible, e.g. by adding f32: Into<T>, and using 180f32.into().
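A minimal sketch of that bound, using a free function in place of the method and the Float trait from num-traits (names here are made up; the real code would keep the impl block from the question):
use num_traits::Float;
fn grad_to_deg<T>(v: T) -> T
where
    T: Float,
    f32: Into<T>,
{
    // 180.0 / 200.0 is the degrees-per-gradian factor from the question
    v * (180.0f32 / 200.0f32).into()
}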