Could someone please tell me if Go supports automatic casting of numeric types? Right now I have to manually convert the results of all my computations to int or int64 and keep track of which numeric type I am using.
Go won't convert numeric types automatically for you.
From the language specification:
Conversions are required when different numeric types are mixed in
an expression or assignment. For instance, int32 and int are not
the same type even though they may have the same size on a particular
architecture.
Go does not support implicit type conversions between numeric types.
Refer to the spec. I think this is for reasons of safety and predictability. One thing I found a bit weird/interesting is that you can't even convert from int to int32 implicitly, which seems odd because they may well have the same size on a given architecture.
You have to convert between types manually, e.g.
var b byte = byte(x % 256)
What I really want is an 8-bit signed integer type. The problem is that, under the covers, we all know it gets aliased to a char where applicable. Well, for me, it's applicable. So my dilemma is that I want to use C++ standard library containers and algorithms with my 8-bit int, but the overloads get picked according to the char it is, not the signed 8-bit int I want it to be. Is there a way to cast into a numeric type or somehow imbue numeric traits without full-blown wrapping or something? I specifically want to avoid casting into a short or int or other larger numeric type. Thanks
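For what it's worth, the behaviour being described can be reproduced in a few lines. This sketch assumes int8_t is a typedef for signed char (as it is on most implementations); the unary + and static_cast lines are the usual output workarounds rather than a way to change which overloads containers and algorithms select:

#include <cstdint>
#include <iostream>

int main() {
    std::int8_t v = 65;                        // on most implementations this is a signed char

    std::cout << v << '\n';                    // char overload is picked: prints "A", not "65"
    std::cout << +v << '\n';                   // unary + promotes to int: prints "65"
    std::cout << static_cast<int>(v) << '\n';  // an explicit widening cast also prints "65"
}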
We have a fairly sized C++ code base which uses signed 32-bit int as the default integer data type. Due to changing requirements, it is necessary that we switch to 64-bit long integers for a particular data structure. Changing the default integer data type in the entire program is not viable due to the significant memory overhead. On the other hand, we need to prevent unaware developers from mixing 64-bit and 32-bit integers and creating problems that only occur when very large data sets are handled (and which are thus hard to detect and even harder to debug).
Question: How can I create a zero-overhead 64-bit integer type that does not implicitly convert to other types (specifically, 32-bit integers) but is still "convenient" to use?
Or - if the above is not possible or sensible - what would be a good alternative to my proposed approach?
Example
I'm thinking about creating a data structure like this:
class Int64 {
    long value;
};
And then add c'tors for implicit construction, assignment, and operator overloads for arithmetic operations etc. However, I was not able to find a good resource online that might explain how to go about something like this and what the caveats are. Any suggestions?
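For what it's worth, a minimal sketch of such a wrapper might look like the following; the names and the deliberately tiny operator set are illustrative only, and std::int64_t is assumed to be the intended storage type:

#include <cstdint>

class Int64 {
public:
    constexpr Int64(std::int64_t v = 0) : value(v) {}   // implicit construction from 64-bit values

    // Arithmetic stays inside the wrapper type (only two operators shown).
    friend constexpr Int64 operator+(Int64 a, Int64 b) { return Int64(a.value + b.value); }
    friend constexpr bool  operator==(Int64 a, Int64 b) { return a.value == b.value; }

    // Reading the raw value back out requires an explicit cast, and there is
    // deliberately no conversion to 32-bit types at all, so accidental
    // narrowing simply does not compile.
    explicit constexpr operator std::int64_t() const { return value; }

private:
    std::int64_t value;
};

static_assert(sizeof(Int64) == sizeof(std::int64_t), "no storage overhead");

int main() {
    Int64 a = 5'000'000'000LL;                        // implicit construction
    Int64 b = a + 7;                                  // 7 converts to Int64, then operator+ runs
    // std::int32_t bad = b;                          // would not compile: no implicit narrowing
    std::int64_t raw = static_cast<std::int64_t>(b);  // explicit conversion is fine
    return raw == 0 ? 1 : 0;
}

The real caveats are in the parts elided here: the full set of arithmetic, comparison, and compound-assignment operators, plus mixed operations with plain integers, all need to be spelled out or the type quickly stops being "convenient".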
I have code that depends on data that is a mixture of uint16_t, int32_t / uint32_t and int64_t values. It also includes some larger bit-shifted constants (e.g., 1<<23, even 1<<33).
In calculating an int64_t value, if I carefully cast each sub-part (e.g., up-casting uint16_t values to int64_t) it works; if I don't, the calculations often go awry.
I end up with code that looks like this:
int64_t sensDT = (int64_t)sensD2 - (int64_t)promV[PROM_C5] * (int64_t)(1 << 8);
temperatureC = (double)((2000 + sensDT * (int64_t)promV[PROM_C6] / (1 << 23)) / 100.0);
I wonder, though, if my sprinkling of type casts here is too cluttered and too generous. I'm not sure the 1<<8 requires the cast (and 1<<23, despite not having one, doesn't lead to erroneous calculations), but perhaps they do too. How much is too much when it comes to up-casting values for a calculation like this?
Edit: So it's clear, I'm asking what the minimum proper amount of casting is - what's necessary for correct functionality (one can add more casts or modifiers for clarity, but from the compiler's perspective what's necessary to ensure correct calculations?)
Edit2: I'm using C-style casts as this is from an Arduino-type embedded code base (which itself used that style of casts already). From the perspective of having the desired effect they appear to be equivalent, thus I used the existing coding style.
Generally you can rely on the integer promotions to give you the correct operation, as long as one of the operands for each operator has the correct size. So your first example could be simplified:
int64_t sensDT = sensD2 - (int64_t)promV[PROM_C5] * (1 << 8);
Be careful to consider the precedence rules so you know in what order the operators will be applied!
You might run into trouble if you're mixing signed and unsigned types of the same size, although either should promote to a larger signed type.
You need to be careful with constants, because without any decoration those will be the default integer size and signed. 1<<8 won't be a problem, but 1<<35 probably will; you need 1LL<<35.
When in doubt, a few extra casts or parentheses won't hurt.
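As a rough illustration of both points (a single sized operand per operator is enough, and undecorated shift constants are plain ints), here is a sketch with made-up values:

#include <cstdint>
#include <cstdio>

int main() {
    std::uint16_t u16 = 40000;
    std::int64_t  big = 3'000'000'000LL;

    // One int64_t operand per operator is enough: u16 is converted before the multiply.
    std::int64_t product = big * u16;
    std::printf("%lld\n", static_cast<long long>(product));

    // An undecorated constant is a plain int, so 1 << 35 would overflow a
    // 32-bit int (undefined behaviour); the LL suffix avoids that.
    std::int64_t shifted = 1LL << 35;
    std::printf("%lld\n", static_cast<long long>(shifted));
    return 0;
}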
What is the difference between widening and narrowing in C++?
What is meant by casting, and what are the types of casting?
This is a general casting thing, not C++ specific.
A "widening" cast is a cast from one type to another, where the "destination" type has a larger range or precision than the "source" (e.g. int to long, float to double). A "narrowing" cast is the exact opposite (long to int). A narrowing cast introduces the possibility of overflow.
Widening casts between built-in primitives are implicit, meaning you do not have to specify the new type with the cast operator, unless you want the type to be treated as the wider type during a calculation. By default, types are cast to the widest actual type used on the variable's side of a binary expression or assignment, not counting any types on the other side.
Narrowing casts, on the other hand, must be made explicitly, and overflow exceptions must be handled unless the code is marked as not being checked for overflow (the keyword in C# is unchecked; I do not know if it's unique to that language).
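In C++ terms (the unchecked keyword above is C#), the same distinction looks roughly like this; note that C++ performs no runtime overflow check on a narrowing conversion, which simply compiles unless you use brace initialization:

#include <cstdint>
#include <iostream>

int main() {
    std::int32_t small = 123;
    std::int64_t wide  = small;                             // widening: implicit and value-preserving

    std::int64_t big    = 5'000'000'000LL;
    std::int32_t narrow = static_cast<std::int32_t>(big);   // narrowing: explicit, value may not fit

    // std::int32_t bad{big};                               // brace-init rejects narrowing at compile time

    std::cout << wide << ' ' << narrow << '\n';
}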
A widening conversion is when you go from an integer to a double; you are increasing the precision of the value.
A narrowing conversion is the inverse of that, when you go from a double to an integer. You are losing precision.
There are two types of casting, implicit and explicit casting. The page below will be helpful. Also, the entire website is pretty much the go-to for C/C++ needs.
Tutorial on casting and conversion
Take home exam? :-)
Let's take casting first. Every object in C or C++ has a type, which is nothing more than the name given to two kinds of information: how much memory the thing takes up, and what operations you can do on it.
So
int i;
just means that i refers to some location in memory, usually 32 bits wide, on which you can do +,-,*,/,%,++,-- and some others.
C isn't really picky about it, though:
int * ip;
defines another type, called pointer to integer, which represents an address in memory. It has an additional operation, prefix-*. On many machines, that also happens to be 32 bits wide.
A cast, or typecast, tells the compiler to treat memory identified as one type as if it were another type. Typecasts are written as (typename).
So
(int*) i;
means "treat i as if it were a pointer, and
(int) ip;
means "treat the pointer ip as just an integer number".
Now, in this context, widening and narrowing mean casting from one type to another that has more or fewer bits respectively.
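To make the pointer example above concrete, here is a small sketch; it assumes the implementation provides uintptr_t (an integer type wide enough to hold a pointer, which is what you would use today instead of a plain int) and keeps the answer's C-style cast notation:

#include <cstdint>
#include <cstdio>

int main() {
    int i = 42;
    int *ip = &i;                                   // pointer to integer: holds an address

    // Treat the pointer value as a plain number.
    std::uintptr_t as_number = (std::uintptr_t)ip;
    std::printf("the address as a number: %ju\n", (std::uintmax_t)as_number);

    // And back again: treat the number as a pointer and use it.
    int *ip2 = (int *)as_number;
    std::printf("the value through the round-tripped pointer: %d\n", *ip2);
    return 0;
}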
I've been trying to reduce implicit type conversions when I use named constants in my code. For example rather than using
const double foo = 5;
I would use
const double foo = 5.0;
so that a type conversion doesn't need to take place. However, in expressions where I do something like this...
const double halfFoo = foo / 2;
etc. Is that 2 evaluated as an integer and is it implicitly converted? Should I use a 2.0 instead?
The 2 is implicitly converted to a double because foo is a double. You do have to be careful, because if foo were, say, an integer, integer division would be performed and then the result would be stored in halfFoo.
I think it is good practice to always use floating-point literals (e.g. 2.0 or 2.) wherever you intend for them to be used as floating-point values. It's more consistent and can help you to find pernicious bugs that can crop up with this sort of thing.
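A quick sketch of the difference being described (foo matches the question; bar is made up for contrast):

#include <iostream>

int main() {
    const double foo = 5.0;
    const int    bar = 5;

    std::cout << foo / 2   << '\n';   // 2.5: the 2 is converted to double before dividing
    std::cout << bar / 2   << '\n';   // 2:   integer division truncates before printing
    std::cout << bar / 2.0 << '\n';   // 2.5: a floating-point literal forces double division
}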
This is known as Type Coercion. Wikipedia has a nice bit about it:
Implicit type conversion, also known as coercion, is an automatic type conversion by the compiler. Some languages allow, or even require, compilers to provide coercion.
In a mixed-type expression, data of one or more subtypes can be converted to a supertype as needed at runtime so that the program will run correctly.
...
This behavior should be used with caution, as unintended consequences can arise. Data can be lost when floating-point representations are converted to integral representations as the fractional components of the floating-point values will be truncated (rounded down). Conversely, converting from an integral representation to a floating-point one can also lose precision, since the floating-point type may be unable to represent the integer exactly (for example, float might be an IEEE 754 single precision type, which cannot represent the integer 16777217 exactly, while a 32-bit integer type can). This can lead to situations such as storing the same integer value into two variables of type integer and type real which return false if compared for equality.
In the case of C and C++, the result of an expression involving only integral types (i.e. longs, ints, shorts, chars) has the largest integral type in the expression. I'm not sure, but I imagine something similar happens (assuming floating-point values are "larger" than integer types) with expressions involving floating-point numbers.
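The 16777217 example from the quoted passage is easy to check; this sketch assumes float is the usual IEEE 754 single-precision type:

#include <cstdint>
#include <iostream>

int main() {
    std::int32_t n = 16777217;                          // 2^24 + 1
    float        f = n;                                 // implicit conversion; float has a 24-bit significand

    std::cout.precision(10);
    std::cout << n << '\n';                             // 16777217
    std::cout << f << '\n';                             // 16777216: the nearest representable float
    std::cout << static_cast<std::int32_t>(f) << '\n';  // 16777216: the original value is lost
}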
Strictly speaking, what you are trying to achieve seems to be counterproductive.
Normally, one would strive to reduce the number of explicit type conversions in a C program and, generally, to reduce all and any type dependencies in the source code. Good C code should be as type-independent as possible. That generally means that it is a good idea to avoid any explicit syntactical elements that spell out specific types as often as possible. It is better to do
const double foo = 5; /* better */
than
const double foo = 5.0; /* worse */
because the latter is redundant. The implicit type conversion rules of the C language will make sure that the former works correctly. The same can be said about comparisons. This
if (foo > 0)
is better than
if (foo > 0.0)
because, again, the former is more type-independent.
Implicit type conversions in this case are a very good thing, not a bad thing. They help you to write generic, type-independent code. Why are you trying to avoid them?
It is true that in some cases you have no other choice but to express the type explicitly (like use 2.0 instead of 2 and so on). But normally one would do it only when one really has to. Why someone would do it without a real need is beyond me.
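To illustrate the style being argued for, a minimal sketch; only the declaration spells out a type, so switching foo to float or long double would require no other edits:

#include <iostream>

const double foo = 5;             // the literal stays type-independent

int main() {
    if (foo > 0)                  // 0 is converted to foo's type by the implicit conversion rules
        std::cout << foo << '\n';
    return 0;
}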