I use the following approach to get the byte representation of numbers in C++:
template< class T >
union u_value
{
    T value;
    unsigned char bytes[ sizeof( T ) ];
};
Please tell me, is that the "true" way? And if not, why not, and how should I get it?
There is no "true" way. What you did is one way to do it. Generally speaking, stuff like this is discouraged, as it usually results in non-portable code. Sometimes there are good reasons for poking at internals like that, but since you didn't say what you're about to do with the "byte representation", there's little we can do to judge whether this approach is appropriate.
Edit: So networking is your subject here. In this case, either:
You are transferring POD types only (char, short, int, and the like). In this case, you might want to look into <netinet/in.h>, where you'll find the macros htons(), htonl(), ntohs() and ntohl(), which convert host to network byte order and vice versa for you (a minimal usage sketch follows below).
You are transferring complex types (e.g. classes). In this case, you might want to look into Boost Serialization, because there's much more to be considered here than mere byte order.
Either way, it is advisable to use ready-made, well-documented and -understood code, instead of doing byte-juggling yourself.
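For the POD case, here is a minimal sketch of what using those conversion routines looks like, assuming a POSIX-ish platform that provides <netinet/in.h>:
#include <netinet/in.h>   // htonl(), ntohl()
#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t host_value = 0x12345678;
    std::uint32_t wire_value = htonl(host_value);   // host order -> network (big-endian) order
    std::uint32_t round_trip = ntohl(wire_value);   // network order -> host order
    std::cout << std::hex << host_value << " == " << round_trip << '\n';
}
You would transmit wire_value (or rather its bytes) over the socket, and the receiver would apply ntohl() before using the number.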
Not the true way. This approach is not portable and may often result in undefined behavior, because:
Padding (inserted by the compiler) is ignored
Extra space for the virtual table pointer is not visibly accounted for (if T is polymorphic)
If you are transferring the bytes across the network, you need to be careful about endianness
This will work. Just be aware that anything with virtual functions, pointers and so on won't be meaningful the next time you deserialize it.
I understand that there are applications in which using unsigned integer over/underflow is a good way to get cheap modular arithmetic.
In my code, I use uint exclusively for indices to containers, so I never want this behaviour.
Is this a bad idea? Should I be using int everywhere instead? I do have to do some unsavoury things to get a for loop to count down to 0.
Is there a commonly used implementation of a less unsafe unsigned integer type? Something that throws an exception?
Do compilers (for me gcc, clang) provide a mechanism for less unsafe behaviour in the given compilation unit?
First, a terminology quibble: there is no such thing as unsigned integer underflow, precisely because unsigned types wrap around (using modulo arithmetic); "wrap-around" is probably the term you meant.
Second, is this a common scenario to be in? Yes, it is a bit. You're not the only one doing "unsavoury things" with loops for reverse counting, and I bet there are a ton of bugs out there where people haven't done "unsavoury things" and, as a result, their code has an unsavoury infinite loop hidden in it. Mind you, I'm not sure I'd go so far as to call unsigneds "unsafe" as a result; like anything, they are the right tool for a subset of infinite possible jobs, and within that subset they are perfectly safe.
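As an aside, one common idiom for counting an unsigned index down to zero without wrap-around looks like this (the container and function names here are just for illustration):
#include <cstddef>
#include <vector>

void reverse_visit(const std::vector<int>& v)
{
    // i starts at size() and is decremented before each use, so it never wraps
    for (std::size_t i = v.size(); i-- > 0; )
    {
        // use v[i] here
    }
}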
There is debate over whether unsigned integers should be used for array indexes at all. Some standard committee members believe that their use in the standard library was a mistake; I know that several members of the C++ community here on Stack Overflow also hate unsigned values and wish they'd go away.
Personally I think having access to the full range of the integer by default is absolutely crucial (and losing that is not worth it for a single "-1" sentinel value or whatever), so I think that — while you're not alone in this requirement, and it's a sensible requirement — using unsigned array indexes by default is a good thing. (And what the heck is a negative array index? Semantics, people!)
But that doesn't help you in this scenario. So, what can you do about it? No, there's no trapping unsigned integer implementation (at least, not one that I'm aware of, let alone widespread) because that would literally violate the rules of the type as defined by C++: it would introduce well-defined underflow/overflow semantics to a type for which underflow/overflow shouldn't even be possible.
You will have to use signed integers and check for "logical underflow" (i.e. going out of your desired range, say -1) yourself. You could wrap this behaviour in a class.
I suppose you could actually just wrap an unsigned integer while you're at it, adding some extra logic to operator-- and operator-= to detect a wrap-around and throw.
But I guess my point is that, whatever you do, it's going to be in your "code space" and thus subject to decreased performance. You can't eke out this behaviour from the platform itself.
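For illustration, here is a minimal sketch of such a wrapper; the name checked_index and its exact shape are made up, this is one possible design rather than a standard facility:
#include <cstddef>
#include <stdexcept>

class checked_index
{
public:
    explicit checked_index(std::size_t v = 0) : value_(v) {}

    checked_index& operator--()
    {
        if (value_ == 0)
            throw std::underflow_error("index would wrap below zero");
        --value_;
        return *this;
    }

    checked_index& operator-=(std::size_t n)
    {
        if (n > value_)
            throw std::underflow_error("index would wrap below zero");
        value_ -= n;
        return *this;
    }

    operator std::size_t() const { return value_; }   // usable directly as an index

private:
    std::size_t value_;
};
Every decrement now goes through a check in your code, which is exactly the "decreased performance" trade-off mentioned above.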
I was always taught to use the appropriate data type depending on the specific needs of the class/method/function/member/variable/what-have-you. That said, does it even matter anymore?
Hypothetically, if I have a class that has a data member that will never be negative and will never be more than the maximum value of unsigned char, does storing it as an unsigned char (1 byte) versus an int (4 bytes) even matter anymore due to implicit type promotion/demotion, internal representation, register size and the often quoted "CPUs are more efficient when working with int"?
Example:
#include <limits>

class Foo {
public:
    Foo() : _status(0) { /* DO NOTHING */ }

    void AddTo(unsigned char value) {
        if(std::numeric_limits<unsigned char>::max() - _status < value) {
            value = std::numeric_limits<unsigned char>::max() - _status;
        }
        _status += value;
    }

    void Increment() {
        if(_status == std::numeric_limits<unsigned char>::max()) return;
        ++_status;
    }

private:
    unsigned char _status;
};
A main effect of generally using "right-sized" types is that you and others waste a lot of time on it.
If you have a zillion values stored, e.g. a very large picture, or if you absolutely need a 64-bit range, say, then sure, in such cases it makes sense to right-size.
But using right-sizing as a general guideline produces no significant gain and much pain.
Authority argument: Bjarne Stroustrup, who created the language, generally just uses a few types, e.g. int for integers.
"Premature optimization is the root of all evil" Donald Knuth.
Is this one data member's size going to significantly impact the size of the class? Are you serializing the class? Is the serialization representation seeing any reduction? Are you making the code harder to read worrying about this when your boss doesn't care?
Y2K, IPv4 32-bit addresses, ASCII: yes, the future will look back at your code and laugh. Remember Moore's law; write something that works, and expect that something will be wrong. Until it is, you'll never know what. Write testable, maintainable, and refactorable code and it might just stay in production long enough for someone to care.
For most use cases when targeting PCs and servers, you're not going to need to worry about using chars vs using ints to hold numeric values. Just use an int or, if you need a larger range, a long.
However, if you're targeting a platform with 16 bytes of RAM and less than 1 KB to store your program, you may need to carefully consider whether that loop counter really has to take up more than 1 byte.
Unless there's a particular reason for choosing some other variable type, just stick with int. A large part of modern programming is managing complexity, and there's no reason to start sprinkling your code with a whole variety of types if it doesn't actually help anything. Sure, if you have 5,000 copies of a particular class or are working on a system with a tightly constrained memory footprint, then it might be important. But on a multigigabyte system this isn't generally going to be a concern. In that case it's more about writing something understandable and maintainable.
You are hitting one of the problems of C-style languages: they deprive you of the ability to do range checking that you have in other languages. If your value should be within a specific range, the ability to say a type can be, say, 1..64 is a big help for error tracking. I have found many bugs in C/C++ code by converting it to Pascal or Ada.
I like to use typedefs for documentation purposes in the situation you describe:
COLORCOMPONENT
DEGREES
RADIANS
Even if the compiler does not do the checking for me, I can usually spot when I am using degrees where I should be using radians.
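A minimal sketch of what I mean; the conversion function is just an illustration:
typedef double DEGREES;
typedef double RADIANS;

RADIANS to_radians(DEGREES d)
{
    return d * 3.14159265358979323846 / 180.0;
}

// A call site like to_radians(angle_in_degrees) documents intent,
// even though the compiler won't stop you from mixing the two types.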
Should I bother using short int instead of int? Is there any useful difference? Any pitfalls?
short vs int
Don't bother with short unless there is a really good reason such as saving memory on a gazillion values, or conforming to a particular memory layout required by other code.
Using lots of different integer types just introduces complexity and possible wrap-around bugs.
On modern computers it might also introduce needless inefficiency.
const
Sprinkle const liberally wherever you can.
const constrains what might change, making it easier to understand the code: you know that this beastie is not gonna move, so it can be ignored, and your thinking can be directed at more useful/relevant things.
Top-level const for formal arguments is however by convention omitted, possibly because the gain is not enough to outweigh the added verbosity.
Also, in a pure declaration of a function, top-level const on an argument is simply ignored by the compiler. On the other hand, some other tools may not be smart enough to ignore it when comparing pure declarations to definitions; one person cited that in an earlier debate on the issue in the comp.lang.c++ Usenet group. So it depends to some extent on the toolchain, but happily I've never used tools that place any significance on those consts.
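A small sketch of what that means in practice (the function name f is arbitrary):
void f(int x);          // declaration: any top-level const here would be ignored

void f(const int x)     // definition: the const only affects the function body
{
    // x = 42;          // error: x is const within this body
}
Both lines declare the same function; the const is purely an implementation detail of the definition.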
Cheers & hth.,
Absolutely not in function arguments. Few calling conventions are going to make any distinction between short and int. If you're making giant arrays you could use short if your data fits in short to save memory and increase cache effectiveness.
What Ben said. You will actually create less efficient code since all the registers need to strip out the upper bits whenever any comparisons are done. Unless you need to save memory because you have tons of them, use the native integer size. That's what int is for.
EDIT: Didn't even see your sub-question about const. Using const on intrinsic types (int, float) is useless, but any pointers/references should absolutely be const whenever applicable. Same for class methods as well.
The question as stated, "Should I use short int?", is technically malformed. The only good answer will be "I don't know; what are you trying to accomplish?".
But let's consider some scenarios:
You know the definite range of values that your variable can take.
The ranges for signed integers are:
signed char: -2⁷ to 2⁷-1
short: -2¹⁵ to 2¹⁵-1
int: -2¹⁵ to 2¹⁵-1
long: -2³¹ to 2³¹-1
long long: -2⁶³ to 2⁶³-1
We should note here that these are guaranteed ranges, they can be larger in your particular implementation, and often are. You are also guaranteed that the previous range cannot be larger than the next, but they can be equal.
You will quickly note that short and int actually have the same guaranteed range. This gives you very little incentive to use it. The only reason to use short given this situation becomes giving other coders a hint that the values will be not too large, but this can be done via a comment.
It does, however, make sense to use signed char, if you know that you can fit every potential value in the range -128 to 127.
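If you want to see what your particular implementation actually gives you, as opposed to the guaranteed minimums above, a quick sketch:
#include <iostream>
#include <limits>

int main()
{
    // Prints the actual range of short and int on this implementation
    std::cout << "short: " << std::numeric_limits<short>::min() << " to "
              << std::numeric_limits<short>::max() << '\n'
              << "int:   " << std::numeric_limits<int>::min() << " to "
              << std::numeric_limits<int>::max() << '\n';
}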
You don't know the exact range of potential values.
In this case you are in a rather bad position to attempt to minimise memory usage, and should probably use at least int. Although it has the same minimum range as short, on many platforms it may be larger, and this will help you out.
But the bigger problem is that you are trying to write a piece of software that operates on values, the range of which you do not know. Perhaps something wrong has happened before you have started coding (when requirements were being written up).
You have an idea about the range, but realise that it can change in the future.
Ask yourself how close to the boundary you are. If we are talking about something that goes from -1000 to +1000 and can potentially change to -1500 to +1500, then by all means use short. The specific architecture may pad your value, which will mean you won't save any space, but you won't lose anything. However, if we are dealing with some quantity that is currently -14000 to +14000 and can grow unpredictably (perhaps it's some financial value), then don't just switch to int, go to long right away. You will lose some memory, but will save yourself a lot of headache catching these roll-over bugs.
short vs int - If your data will fit in a short, use a short. Save memory. Make it easier for the reader to know how large a value your variable may hold.
use of const - Great programming practice. If your data should be a const then make it const. It is very helpful when someone reads your code.
In Beej's guide to networking there is a section on marshalling, or packing, data for serialization, where he describes various functions for packing and unpacking data (int, float, double, etc.).
It is easier to use a union (a similar one can be defined for float and double) as defined below and transmit integer.pack as the packed version of integer.i, rather than using pack and unpack functions.
union _integer {
    char pack[4];
    int i;
} integer;
Can some one shed some light on why union is a bad choice?
Is there any better method of packing data?
Different computers may lay the data out differently. The classic issue is endianness (in your example, whether pack[0] has the MSB or LSB). Using a union like this ties the data to the specific representation on the computer that generated it.
If you want to see other ways to marshall data, check out Boost Serialization and Google protobuf.
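If you do stick with the hand-rolled route, here is a sketch of an endian-independent way to pack a 32-bit value; the function name pack_u32 is made up:
#include <cstdint>

// Writes v into out[0..3] in big-endian (network) order, regardless of
// how the host stores integers in memory.
void pack_u32(unsigned char* out, std::uint32_t v)
{
    out[0] = static_cast<unsigned char>((v >> 24) & 0xFF);
    out[1] = static_cast<unsigned char>((v >> 16) & 0xFF);
    out[2] = static_cast<unsigned char>((v >> 8) & 0xFF);
    out[3] = static_cast<unsigned char>(v & 0xFF);
}
Because the layout is defined by the shifts rather than by memory layout, both ends of the connection agree on the byte order.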
The union trick is not guaranteed to work, although it usually does. It's perfectly valid (according to the standard) for you to set the char data, and then read 0s when you attempt to read the int, or vice-versa. union was designed to be a memory micro-optimization, not a replacement for casting.
At this point, usually you either wrap up the conversion in a handy object or use reinterpret_cast. Slightly bulky, or ugly... but neither of those are necessarily bad things when you're packing data.
Why not just do a reinterpret_cast to a char* or a memcpy into a char buffer? They're basically the same thing and less confusing.
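A minimal sketch of the memcpy approach; the variable and function names are arbitrary:
#include <cstring>

void show_bytes()
{
    int value = 42;
    unsigned char buffer[sizeof value];
    std::memcpy(buffer, &value, sizeof value);   // well-defined way to view the object's bytes
    // buffer now holds the object representation of value, in the host's byte order
}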
Your idea would work, so go for it if you want, but I find that clean code is happy code. The easier it is to understand my work, the less likely it is that someone (like my future self) will break it.
Also note that only POD (plain old data) types can be placed in a union, which puts some limitations on the union approach that aren't there in a more intuitive one.
What is uintptr_t and what can it be used for?
First thing, at the time the question was asked, uintptr_t was not in C++. It's in C99, in <stdint.h>, as an optional type. Many C++03 compilers do provide that file. It's also in C++11, in <cstdint>, where again it is optional, and which refers to C99 for the definition.
In C99, it is defined as "an unsigned integer type with the property that any valid pointer to void can be converted to this type, then converted back to pointer to void, and the result will compare equal to the original pointer".
Take this to mean what it says. It doesn't say anything about size.
uintptr_t might be the same size as a void*. It might be larger. It could conceivably be smaller, although such a C++ implementation would be approaching perverse. For example, on some hypothetical platform where void* is 32 bits but only 24 bits of virtual address space are used, you could have a 24-bit uintptr_t which satisfies the requirement. I don't know why an implementation would do that, but the standard permits it.
uintptr_t is an unsigned integer type that is capable of storing a data pointer (whether it can hold a function pointer is unspecified). Which typically means that it's the same size as a pointer.
It is optionally defined in C++11 and later standards.
A common reason to want an integer type that can hold an architecture's pointer type is to perform integer-specific operations on a pointer, or to obscure the type of a pointer by providing it as an integer "handle".
It's an unsigned integer type exactly the size of a pointer. Whenever you need to do something unusual with a pointer - like, for example, invert all its bits (don't ask why) - you cast it to uintptr_t, manipulate it as a usual integer number, and then cast back.
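A tamer, common example of the same idea is inspecting a pointer's low bits, e.g. to check alignment; this is a sketch and the function name is made up:
#include <cstdint>

// True if p is aligned on an 8-byte boundary. You cannot apply & to the
// pointer itself, so you go through uintptr_t.
bool is_8byte_aligned(const void* p)
{
    return (reinterpret_cast<std::uintptr_t>(p) & 7u) == 0;
}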
There are already many good answers to "what is uintptr_t data type?". I will try to address the "what it can be used for?" part in this post.
Primarily for bitwise operations on pointers. Remember that in C++ one cannot perform bitwise operations on pointers. For reasons see Why can't you do bitwise operations on pointer in C, and is there a way around this?
Thus in order to do bitwise operations on pointers one would need to cast pointers to type uintptr_t and then perform bitwise operations.
Here is an example of a function that I just wrote to do a bitwise exclusive or of 2 pointers, to store in an XOR linked list so that we can traverse in both directions like a doubly linked list, but without the penalty of storing 2 pointers in each node.
#include <cstdint>

template <typename T>
T* xor_ptrs(T* t1, T* t2)
{
    // XOR the integer representations of the two pointers, then convert back to a pointer
    return reinterpret_cast<T*>(reinterpret_cast<std::uintptr_t>(t1) ^ reinterpret_cast<std::uintptr_t>(t2));
}
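For context, a sketch of how such a function might be used when walking the list; the Node layout here is hypothetical, the original post only shows xor_ptrs:
struct Node
{
    int value;
    Node* link;   // stores xor_ptrs(prev, next) instead of two separate pointers
};

// Given the node we came from and the current node, recover the other neighbour.
Node* advance(Node* prev, Node* curr)
{
    return xor_ptrs(prev, curr->link);
}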
Running the risk of getting another Necromancer badge, I would like to add one very good use for uintptr_t (or even intptr_t) and that is writing testable embedded code.
I write mostly embedded code targeted at various ARM and, currently, Tensilica processors. These have various native bus widths, and the Tensilica is actually a Harvard architecture with separate code and data buses that can be different widths.
I use a test driven development style for much of my code which means I do unit tests for all the code units I write. Unit testing on actual target hardware is a hassle so I typically write everything on an Intel based PC either in Windows or Linux using Ceedling and GCC.
That being said, a lot of embedded code involves bit twiddling and address manipulation. Most of my Intel machines are 64-bit. So if you are going to test address-manipulation code, you need a generalized type to do math on. Thus uintptr_t gives you a machine-independent way of debugging your code before you try deploying to target hardware.
Another issue is for the some machines or even memory models on some compilers, function pointers and data pointers are different widths. On those machines the compiler may not even allow casting between the two classes, but uintptr_t should be able to hold either.
-- Edit --
As pointed out by @chux, this is not part of the standard, and functions are not objects in C. However, it usually works, and since many people don't even know about these types I usually leave a comment explaining the trickery. Other searches on SO for uintptr_t will provide further explanation. Also, we do things in unit testing that we would never do in production, because breaking things is good.