Should I bother using short int instead of int? Is there any useful difference? Any pitfalls?
short vs int
Don't bother with short unless there is a really good reason such as saving memory on a gazillion values, or conforming to a particular memory layout required by other code.
Using lots of different integer types just introduces complexity and possible wrap-around bugs.
On modern computers it might also introduce needless inefficiency.
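As a rough illustration of the memory argument (sizes are implementation-dependent; a 2-byte short and 4-byte int are merely typical on desktop platforms):

#include <iostream>
#include <vector>

int main() {
    std::cout << sizeof(short) << ' ' << sizeof(int) << '\n';  // often "2 4"

    // The saving only matters when you really do have a gazillion values:
    std::vector<short> a(10'000'000);  // roughly 20 MB on such a platform
    std::vector<int>   b(10'000'000);  // roughly 40 MB
}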
const
Sprinkle const liberally wherever you can.
const constrains what might change, making it easier to understand the code: you know that this beastie is not going to move, so it can be ignored and your thinking directed at more useful/relevant things.
Top-level const for formal arguments is however by convention omitted, possibly because the gain is not enough to outweigh the added verbosity.
Also, in a pure declaration of a function, top-level const on an argument is simply ignored by the compiler. On the other hand, some other tools may not be smart enough to ignore it when comparing pure declarations to definitions; one person cited that in an earlier debate on the issue in the comp.lang.c++ Usenet group. So it depends to some extent on the toolchain, but happily I've never used tools that place any significance on those consts.
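A minimal sketch of that point, with a hypothetical function f: top-level const is not part of the function's type, so both lines below declare the same function, and the const only matters inside the definition's body.

void f(int x);          // pure declaration: any top-level const would be ignored here

void f(const int x)     // same function; const only stops the body modifying its copy
{
    // x = 42;          // error: x is const within this definition
    (void)x;
}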
Cheers & hth.,
Absolutely not in function arguments. Few calling conventions are going to make any distinction between short and int. If you're making giant arrays you could use short if your data fits in short to save memory and increase cache effectiveness.
What Ben said. You may actually create less efficient code, since the compiler has to strip out the upper bits of a register whenever comparisons are done. Unless you need to save memory because you have tons of them, use the native integer size. That's what int is for.
EDIT: I didn't even see your sub-question about const. Using const on intrinsic types passed by value (int, float) is useless, but any pointers/references should absolutely be const whenever applicable. The same goes for class methods.
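For instance, a small sketch (hypothetical Widget class) of the places where const does pay off: references, pointers and member functions.

#include <string>
#include <utility>

class Widget {
public:
    explicit Widget(std::string name) : name_(std::move(name)) {}

    // const member function: callable on const Widgets, cannot modify members.
    const std::string& name() const { return name_; }

    // const reference parameter: documents (and enforces) that the argument
    // is only read, while avoiding a copy.
    bool hasName(const std::string& candidate) const { return name_ == candidate; }

private:
    std::string name_;
};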
The question "Should I use short int?" is technically malformed. The only good answer will be "I don't know; what are you trying to accomplish?".
But let's consider some scenarios:
You know the definite range of values that your variable can take.
The ranges for signed integers are:
signed char: -2⁷ to 2⁷-1
short: -2¹⁵ to 2¹⁵-1
int: -2¹⁵ to 2¹⁵-1
long: -2³¹ to 2³¹-1
long long: -2⁶³ to 2⁶³-1
We should note here that these are guaranteed ranges, they can be larger in your particular implementation, and often are. You are also guaranteed that the previous range cannot be larger than the next, but they can be equal.
You will quickly note that short and int actually have the same guaranteed range, which gives you very little incentive to use short. Given this situation, the only reason to use short is to give other coders a hint that the values will not be too large, but that can be done with a comment.
It does, however, make sense to use signed char if you know that you can fit every potential value in the range -128 to 127.
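To see the actual (as opposed to guaranteed minimum) ranges on your implementation, you can query std::numeric_limits; a quick sketch:

#include <iostream>
#include <limits>

int main() {
    std::cout << "short: " << std::numeric_limits<short>::min()
              << " to "    << std::numeric_limits<short>::max() << '\n';
    std::cout << "int:   " << std::numeric_limits<int>::min()
              << " to "    << std::numeric_limits<int>::max() << '\n';
    std::cout << "long:  " << std::numeric_limits<long>::min()
              << " to "    << std::numeric_limits<long>::max() << '\n';
}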
You don't know the exact range of potential values.
In this case you are in a rather bad position to attempt to minimise memory usage, and should probably use at least int. Although it has the same minimum range as short, on many platforms it may be larger, and this will help you out.
But the bigger problem is that you are trying to write a piece of software that operates on values whose range you do not know. Perhaps something went wrong before you started coding (when the requirements were being written up).
You have an idea about the range, but realise that it can change in the future.
Ask yourself how close to the boundary you are. If we are talking about something that goes from -1000 to +1000 and could potentially change to -1500 to +1500, then by all means use short. The specific architecture may pad your value, which means you won't save any space, but you won't lose anything either. However, if we are dealing with some quantity that is currently -14000 to +14000 and can grow unpredictably (perhaps it's some financial value), then don't just switch to int; go to long right away. You will lose some memory, but will save yourself a lot of headache catching roll-over bugs.
short vs int - If your data will fit in a short, use a short. Save memory. Make it easier for the reader to know how much data your variable can hold.
use of const - Great programming practice. If your data should be a const then make it const. It is very helpful when someone reads your code.
Related
class A
{
public:
    void fun1();
    void fun2();
private:
    uint16 temp_var;
};
Is there any reason why I shouldn't just change this variable from uint16 to a full uint64? Doing it this way (uint16), do I leave memory "holes" in the object? I'm told the processor is more efficient at dealing with full uint64s.
For clarification, temp_var is the only member variable, and nowhere am I using it with sizeof(temp_var) or as a counter that wraps back to zero.
Thank you all for your inputs, appreciate it..
If the question is, can the compiler do the substitution:
You asked for a uint16 so it gave you a uint16. It would be surprising to get something else.
For instance, imagine if a developer was counting on a behavior of integer overflow or underflow. In that case, if the compiler substituted a uint64 behind the scenes, then that would be problematically surprising for the developer.
Along the same lines, one would expect sizeof(temp_var) to equal sizeof(uint16).
There are probably further cases in which such a substitution could lead to unexpected behavior that wouldn't be anticipated by the developer.
If the question is, can you the developer pick something else:
Sure, you can if you want a variable of that size. So then how about some possibilities where you wouldn't...
If you rely on overflow/underflow behavior of a uint16 then of course you would want to stick to that.
Or perhaps this data is going to be passed along to some further location that only supports values in the range of a uint16, so leaving it that size may make logical sense to try to implicitly document what's valid and/or avoid compiler warnings.
Similarly you might want for sizeof(temp_var) to be 2 bytes instead of 8 for some logical reason relevant to other parts of the program.
I also expect there are some games one could play with the packing pragma, but I presume that isn't relevant to the intended question.
Depending on the goal of your program, logical consistency or clarity of code may be more important than maximum possible performance (especially at the micro level of being concerned about size/type of a member variable). To phrase that another way, uint16 is still fast enough for many many use cases.
In some situations though, there won't be any compelling reason to make the decision one way or the other. At that point, I would go with whatever seems to make the most sense as per personal sensibilities.
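As a rough illustration of the sizeof point (results are implementation-dependent, and I'm using the standard <cstdint> typedefs rather than the project's uint16/uint64 names): with a single 16-bit member there is no wider member to align against, so there is no padding "hole"; the object is simply smaller.

#include <cstdint>
#include <iostream>

struct WithU16 { std::uint16_t temp_var; };
struct WithU64 { std::uint64_t temp_var; };

int main() {
    std::cout << sizeof(WithU16) << '\n';  // typically 2
    std::cout << sizeof(WithU64) << '\n';  // typically 8
}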
I understand that there are applications in which using unsigned integer over/underflow is a good way to get cheap modular arithmetic.
In my code, I use uint exclusively for indices to containers, so I never want this behaviour.
Is this a bad idea? Should I be using int everywhere instead? I do have to do some unsavoury things to get a for loop to count down to 0.
Is there a commonly used implementation of a less unsafe unsigned integer type? Something that throws an exception?
Do compilers (for me gcc, clang) provide a mechanism for less unsafe behaviour in the given compilation unit?
First, a terminology quibble: there is no such thing as unsigned integer underflow, precisely because unsigned types wrap around (using modulo arithmetic), which is probably the behaviour you meant.
Second, is this a common scenario to be in? Yes, it is a bit. You're not the only one doing "unsavoury things" with loops for reverse counting, and I bet there are a ton of bugs out there where people haven't done "unsavoury things" and, as a result, their code has an unsavoury infinite loop hidden in it. Mind you, I'm not sure I'd go so far as to call unsigned types "unsafe" as a result; like anything, they are the right tool for a subset of infinite possible jobs, and within that subset they are perfectly safe.
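For the record, a quick sketch of both the bug and the usual workaround (a hypothetical function iterating a vector in reverse):

#include <cstddef>
#include <vector>

void iterateReversed(const std::vector<int>& v) {
    // Buggy: i is unsigned, so "i >= 0" is always true; when i reaches 0,
    // --i wraps around to a huge value and the loop never terminates.
    // for (std::size_t i = v.size() - 1; i >= 0; --i) { ... }

    // One common "unsavoury" but correct pattern: count from size() down to 1
    // and index with i - 1.
    for (std::size_t i = v.size(); i != 0; --i) {
        int value = v[i - 1];
        (void)value;  // use value here
    }
}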
There is debate over whether unsigned integers should be used for array indexes at all. Some standard committee members believe that their use in the standard library was a mistake; I know that several members of the C++ community here on Stack Overflow also hate unsigned values and wish they'd go away.
Personally I think having access to the full range of the integer by default is absolutely crucial (and losing that is not worth it for a single "-1" sentinel value or whatever), so I think that — while you're not alone in this requirement, and it's a sensible requirement — using unsigned array indexes by default is a good thing. (And what the heck is a negative array index? Semantics, people!)
But that doesn't help you in this scenario. So, what can you do about it? No, there's no trapping unsigned integer implementation (at least, not one that I'm aware of, let alone widespread) because that would literally violate the rules of the type as defined by C++: it would introduce well-defined underflow/overflow semantics to a type for which underflow/overflow shouldn't even be possible.
You will have to use signed integers and check for "logical underflow" (i.e. going out of your desired range, say -1) yourself. You could wrap this behaviour in a class.
I suppose you could actually just wrap an unsigned integer while you're at it, adding some extra logic to operator-- and operator-= to detect a wrap-around and throw.
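A minimal sketch of that idea (the class name and the choice of exception are mine, purely illustrative):

#include <cstddef>
#include <stdexcept>

// Wraps an unsigned index and throws instead of wrapping around on decrement.
class CheckedIndex {
public:
    explicit CheckedIndex(std::size_t v = 0) : value_(v) {}

    CheckedIndex& operator--() {
        if (value_ == 0)
            throw std::underflow_error("CheckedIndex: decrement below zero");
        --value_;
        return *this;
    }

    CheckedIndex& operator-=(std::size_t n) {
        if (n > value_)
            throw std::underflow_error("CheckedIndex: subtraction below zero");
        value_ -= n;
        return *this;
    }

    std::size_t value() const { return value_; }

private:
    std::size_t value_;
};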
But I guess my point is that, whatever you do, it's going to be in your "code space" and thus subject to decreased performance. You can't eke out this behaviour from the platform itself.
The title is quite obvious.
In my case, and for the sake of simplicity, I avoid using, for instance, unsigned int instead of int, as it makes coding faster and simpler.
(BTW, I'm using an Android IDE, CppDroid.)
Yet, the IDE frequently alerts me to implicit conversions at, for example, For loops where the incremented variable (int) is compared with the size of a vector (size_t/unsigned int).
My questions are:
Do type conversions take time?
If so, how long do they take compared to other common operations?
In case conversions do take some time, is it worth correctly defining variables in order to avoid conversions?
Your question is valid, although the goal is misconstrued. It is paramount to correctly define variables, but not because of some mysterious performance cost.
It is to ensure correctness. Comparing an unsigned integer with a signed one is a ticking bomb, as is (most usually) comparing a size_t with an int.
For example, consider the following snippet:
for (int i = 0; i < vec.size(); ++i) { }
For all you know, this code can lead to undefined behavior! If the size of the vector is bigger than the maximum value a signed int can hold (which is possible on 64-bit systems, where size_t is wider than int), your int will overflow, which is undefined. The compiler might just remove the loop altogether if it can prove that the size of the vector is bigger than the maximum int!
The similar-looking (and incorrect as well) line
for (unsigned int i = 0; i < vec.size(); ++i) { }
is not going to cause undefined behaviour, but it will hang the program when the vector size is greater than the maximum unsigned int. Not a good thing either.
And of course, the correct way of doing this is
for (decltype(vec.size()) i = 0; i < vec.size(); ++i) { }
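Or, equivalently in most codebases (assuming vec is a std::vector, whose size_type is std::size_t with the default allocator), one of the following also works:

#include <cstddef>
#include <vector>

void iterate(const std::vector<int>& vec) {
    for (std::size_t i = 0; i < vec.size(); ++i) { /* ... */ }

    // Or, when the index itself isn't needed:
    for (int value : vec) { (void)value; /* ... */ }
}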
Depends on what you convert to what.
That particular warning about signed/unsigned mismatch results in zero overhead, but you may end up treating a negative number as a huge unsigned one (or the other way around). So as long as you are using int, and you don't expect to break into 2^31-numbers land, you are safe.
As safe as people writing file I/O routines around 1990, never expecting to see a 3 GiB file in their life. Not so funny nowadays, when so much software still breaks on 2+ GiB file sizes.
Some other conversions, like from int to uint8_t, may have a tiny overhead, so it's better to avoid them if possible (by designing the code to use the desired data type all around).
I would first address clarity and functionality of the code; that usually leads to using a particular data type for a particular value all the time, without any conversions.
After the code works, you can measure the performance and consider what optimization makes sense (including usage of mismatched data types with conversions between them).
Conclusion: just fix it and use the proper data type.
Type conversions might cause performance hits (when signedness, bitness or conversions between floating-point types are involved), but, as a general rule, the type identity of most things in a program is merely a conceptual language front-end feature. When such hits do happen, however, it is because the types involved mean reasonably different things, and hence code must be emitted to properly perform the conversion.
Another thing which is completely different from the above is the invocation of type conversion operators in C++, which can run arbitrary code and thus most obviously influence the final program behavior (not only performance).
As mentioned by others, correct use of the type system is most important for program correctness, especially in languages such as C and C++. Using mismatched types can affect the program behavior in some corner cases, while having no impact whatsoever on execution time otherwise.
It depends.
Actually converting the data between types will require extra calculations. That much should probably be obvious. Usually those calculations take extra time, so they will have a performance impact. However, there are several factors that mitigate the actual impact of this:
The compiler can optimize types in some cases to minimize conversions.
Some platforms implement certain conversions in hardware.
The primary concern surrounding calculations with unlike types typically has much less to do with performance and more to do with safety and producing expected results. That is why the compiler is warning you; the vast majority of compilers will not tell you that you are doing something inefficient, but they will tell you that you are doing something dangerous.
For example, comparing an int with an unsigned int is asking for trouble. The int has a negative range; the unsigned int has a larger positive range. On conversion to unsigned int, the negative values of the int become huge positive values, larger than the int's own positive range. It is very easy to generate endless loops or out-of-bounds errors with such a construct.
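A small sketch of that trap (most compilers will flag the comparison below with a signed/unsigned warning, which is exactly the kind of warning being discussed):

#include <iostream>

int main() {
    int i = -1;
    unsigned int u = 1;

    // The usual arithmetic conversions turn -1 into a huge unsigned value
    // before the comparison, so this prints "greater or equal", not "less".
    if (i < u)
        std::cout << "less\n";
    else
        std::cout << "greater or equal\n";
}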
You should normally only worry about type conversion performance if you are dealing with huge amounts of data - like large vectors/arrays that need to be converted between formats. You would have to loop over the data in some way, so it would be a conscious action. For example, converting a 10000-element vector of chars to ints. In these cases, you might need to consider whether you have a design flaw that is requiring needless conversion of data.
It is worth pointing out that in the above example, even if the conversion itself were instant, the iteration and copy is not.
As for an example of platforms where this can be done to an extent in hardware, most video cards are able to interpret integers as floats on a normalized range, of the sort 255 --> 1.0. However, many other conversions, like conversions between image formats, are still done in software.
Given the platform and optimization details vary greatly, answering how long a given conversion takes relative to other operations is effectively impossible. If you are dealing with enough data that a conversion is creating a noticeable performance bottleneck, then profile that conversion.
It is worth it to make sure your types match to the best of your ability if you are dealing with enough data for it to matter; that is a subjective measurement of value, though, and will depend on what you are doing.
It is always worth it to make sure implicit type conversions do not cause errors, as errors due to them can be some of the worst possible in C/C++ (memory leaks, buffer overflows, access violations, etc.).
The video "Gangnam Style" (I'm sure you've heard it) just exceeded 2 billion views on youtube. In fact, Google says that they never expected a video to be greater than a 32-bit integer... which alludes to the fact that Google used int instead of unsigned for their view counter. I think they had to re-write their code a bit to accommodate larger views.
Checking their style guide: https://google-styleguide.googlecode.com/svn/trunk/cppguide.html#Integer_Types
...they advise "don't use an unsigned integer type," and give one good reason why: unsigned could be buggy.
It's a good reason, but could be guarded against. My question is: is it bad coding practice in general to use unsigned int?
The Google rule is widely accepted in professional circles. The problem is that the unsigned integral types are sort of broken, and have unexpected and unnatural behavior when used for numeric values; they don't work well as a cardinal type. For example, an index into an array may never be negative, but it makes perfect sense to write abs(i1 - i2) to find the distance between two indices. Which won't work if i1 and i2 have unsigned types.
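A quick sketch of that failure mode:

#include <cstdlib>
#include <iostream>

int main() {
    unsigned int i1 = 2, i2 = 5;
    // With unsigned operands the subtraction itself wraps around, so the
    // "distance" is already wrong before abs() could help: this prints
    // 4294967293 (on a 32-bit unsigned int), not 3.
    std::cout << i1 - i2 << '\n';

    int s1 = 2, s2 = 5;
    std::cout << std::abs(s1 - s2) << '\n';  // 3, as intended
}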
As a general rule, this particular rule in the Google style guidelines corresponds more or less to what the designers of the language intended. Any time you see something other than int, you can assume a special reason for it. If it is because of the range, it will be long or long long, or even int_least64_t. Using unsigned types is generally a signal that you're dealing with bits, rather than the numeric value of the variable, or (at least in the case of unsigned char) that you're dealing with raw memory.
With regards to the "self-documentation" of using an unsigned: this doesn't hold up, since there are almost always a lot of values that the variable cannot (or should not) take, including many positive ones. C++ doesn't have sub-range types, and the way unsigned is defined means that it cannot really be used as one either.
This guideline is extremely misleading. Blindly using int instead of unsigned int won't solve anything. That simply shifts the problems somewhere else. You absolutely must be aware of integer overflow when doing arithmetic on fixed precision integers. If your code is written in a way that it does not handle integer overflow gracefully for some given inputs, then your code is broken regardless of whether you use signed or unsigned ints. With unsigned ints you must be aware of integer underflow as well, and with doubles and floats you must be aware of many additional issues with floating point arithmetic.
Just look at this article about a bug in the standard Java binary search algorithm, published by none other than Google, for why you must be aware of integer overflow. In fact, that very article shows C++ code casting to unsigned int in order to guarantee correct behavior. The article also starts out by presenting a bug in Java where, guess what, they don't have unsigned int. However, they still ran into a bug with integer overflow.
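To make the referenced bug concrete, here is a sketch of the midpoint calculation the article discusses (function names are mine):

// Classic bug: for large arrays, low + high can exceed INT_MAX, and signed
// overflow is undefined behaviour; typically the midpoint comes out negative.
int midpointBuggy(int low, int high) {
    return (low + high) / 2;
}

// Fix 1: rearrange the arithmetic so no intermediate value exceeds 'high'.
int midpointSafe(int low, int high) {
    return low + (high - low) / 2;
}

// Fix 2 (the cast-to-unsigned approach the article shows for C/C++): do the
// addition in unsigned arithmetic, where wrap-around is well defined, then shift.
int midpointUnsigned(int low, int high) {
    return static_cast<int>((static_cast<unsigned int>(low) +
                             static_cast<unsigned int>(high)) >> 1);
}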
Use the right type for the operations you will perform. float wouldn't make sense for a counter; nor would signed int. The normal operations on the counter are print and += 1.
Even if you had some unusual operations, such as printing the difference in view counts, you wouldn't necessarily have a problem. Sure, other answers mention the incorrect abs(i2-i1), but it's not unreasonable to expect programmers to use the correct max(i2,i1) - min(i2,i1), which does have range issues for signed int. No uniform solution here; programmers should understand the properties of the types they're working with.
Google states that: "Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation."
I personally use unsigned ints as index parameters.
int foo(unsigned int index, int* myArray){
    return myArray[index];
}
Google suggests: "Document that a variable is non-negative using assertions. Don't use an unsigned type."
#include <cassert>

int foo(int index, int* myArray){
    assert(index >= 0);
    return myArray[index];
}
Pro for Google: If a negative number is passed in debug mode, my code will hopefully trigger an out-of-bounds error. Google's code is guaranteed to assert.
Pro for me: My code can support a greater size of myArray.
I think the actual deciding factor comes down to how clean your code is. If you clean up all warnings, the compiler will clearly warn you when you're trying to assign a signed variable to an unsigned one. If your code already has a bunch of warnings, the compiler's warning is going to be lost on you.
A final note here: Google says, "Sometimes gcc will notice this bug and warn you, but often it will not." I haven't seen that to be the case with Visual Studio; checks against negative numbers and assignments from signed to unsigned are always warned about. But if you use gcc, you might want to take care.
Your specific question is:
"Is it bad practice to use unsigned?" to which the only correct answer can be no. It is not bad practice.
There are many style guides, each with a different focus, and while in some cases an organisation, given their typical toolchain and deployment platform, may choose not to use unsigned for their products, other toolchains and platforms almost demand its use.
Google seem to get a lot of deference because they have a good business model (and probably employ some smart people like everyone else).
CERT, IIRC, recommend unsigned for buffer indexes, because if you do overflow, at least you'll still be in your own buffer; some intrinsic security there.
What do the language and standard library designers say (probably the best representation of accepted wisdom)? strlen returns a size_t, which is unsigned. Other answers suggest this is an anachronism because shiny new computers have wide architectures, but this misses the point that C and C++ are general-purpose programming languages and should scale well on big and small platforms.
Bottom line is that this is one of many religious questions; certainly not settled, and in these cases, I normally go with my religion for green field developments, and go with the existing convention of the codebase for existing work. Consistency matters.
I was always taught to use the appropriate data type depending on the specific needs of the class/method/function/member/variable/what-have-you. That said, does it even matter anymore?
Hypothetically, if I have a class that has a data member that will never be negative and will never be more than the maximum value of unsigned char, does storing it as an unsigned char (1 byte) versus an int (4 bytes) even matter anymore due to implicit type promotion/demotion, internal representation, register size and the often quoted "CPUs are more efficient when working with int"?
Example:
#include <limits>  // for std::numeric_limits

class Foo {
public:
    Foo() : _status(0) { /* DO NOTHING */ }
    void AddTo(unsigned char value) {
        if(std::numeric_limits<unsigned char>::max() - _status < value) {
            value = std::numeric_limits<unsigned char>::max() - _status;
        }
        _status += value;
    }
    void Increment() {
        if(_status == std::numeric_limits<unsigned char>::max()) return;
        ++_status;
    }
private:
    unsigned char _status;
};
A main effect of generally using "right-sized" types is that you and others waste a lot of time on it.
If you have a zillion values stored, e.g. a very large picture, or if you absolutely need a 64-bit range, say, then sure, in such cases it makes sense to right-size.
But using right-sizing as a general guideline produces no significant gain and much pain.
Authority argument: Bjarne Stroustrup, who created the language, generally just uses a few types, e.g. int for integers.
"Premature optimization is the root of all evil" Donald Knuth.
Is this one data member's size going to significantly impact the size of the class? Are you serializing the class? Is the serialization representation seeing any reduction? Are you making the code harder to read worrying about this when your boss doesn't care?
Y2K, IPv4 32-bit addresses, ASCII: yes, the future will look back at your code and laugh. Remember Moore's law; write something that works, and expect that something will be wrong. Until it is, you'll never know what. Write testable, maintainable, and refactorable code and it might just stay in production long enough for someone to care.
For most use cases when targeting PCs and servers, you're not going to need to worry about using chars vs using ints to hold numeric values. Just use an int or, if you need a larger range, a long.
However, if you're targeting a platform with 16 bytes of RAM and less than 1 KB to store your program, you may need to carefully consider whether that loop counter really has to take up more than 1 byte.
Unless there's a particular reason for choosing some other variable type, just stick with int. A large part of modern programming is managing complexity, and there's no reason to start sprinkling your code with a whole variety of types if it doesn't actually help anything. Sure, if you have 5,000 copies of a particular class or are working on a system with a tightly constrained memory footprint, then it might be important. But on a multigigabyte system this isn't generally going to be a concern; in that case it's more about writing something understandable and maintainable.
You are hitting one of the problems of C-style languages: they deprive you of the range checking that you can do in other languages. If your value should be within a specific range, the ability to say a type can be, say, 1..64 is a big help for error tracking. I have found so many bugs in C/C++ code by converting it to Pascal or Ada.
I like to use typedefs for documentation purposes in the situation you describe:
COLORCOMPONENT
DEGREES
RADIANS
Even if the compiler does not do the checking for me, I can usually spot when I am using degrees where I should be using radians.
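A sketch of what that looks like in practice (the underlying types are my guesses; the typedef names come from the answer above):

// The compiler still treats DEGREES and RADIANS as plain double, so this is
// documentation only, but mismatches tend to stand out when reading the code.
typedef unsigned char COLORCOMPONENT;  // 0..255 colour channel
typedef double DEGREES;
typedef double RADIANS;

RADIANS toRadians(DEGREES d) {
    return d * 3.14159265358979323846 / 180.0;
}

COLORCOMPONENT scaleChannel(COLORCOMPONENT c, double factor) {
    double scaled = c * factor;
    if (scaled > 255.0) scaled = 255.0;
    if (scaled < 0.0)   scaled = 0.0;
    return static_cast<COLORCOMPONENT>(scaled);
}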