Should I use #define, enum or const? - c++

In a C++ project I'm working on, I have a flag kind of value which can have four values. Those four flags can be combined. The flags describe the records in the database and can be:
new record
deleted record
modified record
existing record
Now, for each record I wish to keep this attribute, so I could use an enum:
enum { xNew, xDeleted, xModified, xExisting }
However, in other places in code, I need to select which records are to be visible to the user, so I'd like to be able to pass that as a single parameter, like:
showRecords(xNew | xDeleted);
So, it seems I have three possible approaches:
#define X_NEW 0x01
#define X_DELETED 0x02
#define X_MODIFIED 0x04
#define X_EXISTING 0x08
or
typedef enum { xNew = 1, xDeleted, xModified = 4, xExisting = 8 } RecordType;
or
namespace RecordType {
static const uint8 xNew = 1;
static const uint8 xDeleted = 2;
static const uint8 xModified = 4;
static const uint8 xExisting = 8;
}
Space requirements are important (byte vs int) but not crucial. With defines I lose type safety, and with enum I lose some space (integers) and probably have to cast when I want to do a bitwise operation. With const I think I also lose type safety since a random uint8 could get in by mistake.
Is there some other cleaner way?
If not, what would you use and why?
P.S. The rest of the code is rather clean modern C++ without #defines, and I have used namespaces and templates in a few places, so those aren't out of the question either.

Combine the strategies to reduce the disadvantages of a single approach. I work in embedded systems so the following solution is based on the fact that integer and bitwise operators are fast, low memory & low in flash usage.
Place the enum in a namespace to prevent the constants from polluting the global namespace.
namespace RecordType {
An enum declares and defines a compile-time checked type. Always use compile-time type checking to make sure arguments and variables are given the correct type. There is no need for the typedef in C++.
enum TRecordType { xNew = 1, xDeleted = 2, xModified = 4, xExisting = 8,
Create another member for an invalid state. This can be useful as an error code; for example, when you want to return the state but the I/O operation fails. It is also useful for debugging; use it in initialisation lists and destructors to know if the variable's value should be used.
xInvalid = 16 };
Consider that you have two purposes for this type. To track the current state of a record and to create a mask to select records in certain states. Create an inline function to test if the value of the type is valid for your purpose; as a state marker vs a state mask. This will catch bugs as the typedef is just an int and a value such as 0xDEADBEEF may be in your variable through uninitialised or mispointed variables.
inline bool IsValidState( TRecordType v) {
switch(v) { case xNew: case xDeleted: case xModified: case xExisting: return true; }
return false;
}
inline bool IsValidMask( TRecordType v) {
return v >= xNew && v < xInvalid ;
}
Add a using declaration if you want to use the type often.
using RecordType::TRecordType;
The value checking functions are useful in asserts to trap bad values as soon as they are used. The quicker you catch a bug when running, the less damage it can do.
Here are some examples to put it all together.
void showRecords(TRecordType mask) {
assert(RecordType::IsValidMask(mask));
// do stuff;
}
void wombleRecord(TRecord rec, TRecordType state) {
assert(RecordType::IsValidState(state));
if (state == RecordType::xNew) {
// ...
}
}
TRecordType updateRecord(TRecord rec, TRecordType newstate) {
assert(RecordType::IsValidState(newstate));
//...
if (! access_was_successful) return RecordType ::xInvalid;
return newstate;
}
The only way to ensure correct value safety is to use a dedicated class with operator overloads and that is left as an exercise for another reader.
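For completeness, here is a minimal sketch of what such a class might look like, reusing the IsValidState helper above (the class name and member names are mine, not part of the original answer):

#include <cassert>

class RecordState {
public:
    RecordState() : mask_(0) {}
    RecordState& operator|=(TRecordType t)      // set one flag
    { assert(RecordType::IsValidState(t)); mask_ |= t; return *this; }
    RecordState& operator&=(TRecordType t)      // keep only one flag
    { assert(RecordType::IsValidState(t)); mask_ &= t; return *this; }
    bool has(TRecordType t) const
    { assert(RecordType::IsValidState(t)); return (mask_ & t) != 0; }
private:
    unsigned mask_;   // holds an OR of valid TRecordType flags only
};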

Forget the defines
They will pollute your code.
bitfields?
struct RecordFlag {
unsigned isnew:1, isdeleted:1, ismodified:1, isexisting:1;
};
Don't ever use that. You are more concerned with speed than with economizing the space of four ints, and access to bit fields is actually slower than access to any other type.
However, bit members in structs have practical drawbacks. First, the ordering of bits in memory varies from compiler to compiler. In addition, many popular compilers generate inefficient code for reading and writing bit members, and there are potentially severe thread-safety issues relating to bit fields (especially on multiprocessor systems), because most machines cannot manipulate arbitrary sets of bits in memory but must instead load and store whole words.
Source: http://en.wikipedia.org/wiki/Bit_field
And if you need more reasons to not use bitfields, perhaps Raymond Chen will convince you in his The Old New Thing Post: The cost-benefit analysis of bitfields for a collection of booleans at http://blogs.msdn.com/oldnewthing/archive/2008/11/26/9143050.aspx
const int?
namespace RecordType {
static const uint8 xNew = 1;
static const uint8 xDeleted = 2;
static const uint8 xModified = 4;
static const uint8 xExisting = 8;
}
Putting them in a namespace is cool. If they are declared in your CPP or header file, their values will be inlined. You'll be able to use switch on those values, but it will slightly increase coupling.
Ah, yes: remove the static keyword. static is deprecated in C++ when used as you do, and if uint8 is a built-in type, you won't need it to declare this in a header included by multiple sources of the same module. In the end, the code should be:
namespace RecordType {
const uint8 xNew = 1;
const uint8 xDeleted = 2;
const uint8 xModified = 4;
const uint8 xExisting = 8;
}
The problem with this approach is that your code knows the value of your constants, which slightly increases coupling.
enum
The same as const int, with a somewhat stronger typing.
typedef enum { xNew = 1, xDeleted, xModified = 4, xExisting = 8 } RecordType;
They are still polluting the global namespace, though.
By the way... Remove the typedef. You're working in C++. Those typedefs of enums and structs are polluting the code more than anything else.
The result is kinda:
enum RecordType { xNew = 1, xDeleted, xModified = 4, xExisting = 8 } ;
void doSomething(RecordType p_eMyEnum)
{
if(p_eMyEnum == xNew)
{
// etc.
}
}
As you see, your enum is polluting the global namespace.
If you put this enum in a namespace, you'll have something like:
namespace RecordType {
enum Value { xNew = 1, xDeleted, xModified = 4, xExisting = 8 } ;
}
void doSomething(RecordType::Value p_eMyEnum)
{
if(p_eMyEnum == RecordType::xNew)
{
// etc.
}
}
extern const int ?
If you want to decrease coupling (i.e. being able to hide the values of the constants, and so, modify them as desired without needing a full recompilation), you can declare the ints as extern in the header, and as constant in the CPP file, as in the following example:
// Header.hpp
namespace RecordType {
extern const uint8 xNew ;
extern const uint8 xDeleted ;
extern const uint8 xModified ;
extern const uint8 xExisting ;
}
And:
// Source.cpp
namespace RecordType {
const uint8 xNew = 1;
const uint8 xDeleted = 2;
const uint8 xModified = 4;
const uint8 xExisting = 8;
}
You won't be able to use switch on those constants, though. So in the end, pick your poison...
:-p

Have you ruled out std::bitset? Sets of flags is what it's for. Do
typedef std::bitset<4> RecordType;
then
static const RecordType xNew(1);
static const RecordType xDeleted(2);
static const RecordType xModified(4);
static const RecordType xExisting(8);
Because there are a bunch of operator overloads for bitset, you can now do
RecordType rt = whatever; // unsigned long or RecordType expression
rt |= xNew; // set
rt &= ~xDeleted; // clear
if ((rt & xModified).any()) ... // test
Or something very similar to that - I'd appreciate any corrections since I haven't tested this. You can also refer to the bits by index, but it's generally best to define only one set of constants, and RecordType constants are probably more useful.
Assuming you have ruled out bitset, I vote for the enum.
I don't buy that casting the enums is a serious disadvantage - OK so it's a bit noisy, and assigning an out-of-range value to an enum is undefined behaviour so it's theoretically possible to shoot yourself in the foot on some unusual C++ implementations. But if you only do it when necessary (which is when going from int to enum iirc), it's perfectly normal code that people have seen before.
I'm dubious about any space cost of the enum, too. uint8 variables and parameters probably won't use any less stack than ints, so only storage in classes matters. There are some cases where packing multiple bytes in a struct will win (in which case you can cast enums in and out of uint8 storage), but normally padding will kill the benefit anyhow.
So the enum has no disadvantages compared with the others, and as an advantage gives you a bit of type-safety (you can't assign some random integer value without explicitly casting) and clean ways of referring to everything.
For preference I'd also put the "= 2" in the enum, by the way. It's not necessary, but a "principle of least astonishment" suggests that all 4 definitions should look the same.

Here are a couple of articles on const vs. macros vs. enums:
Symbolic Constants
Enumeration Constants vs. Constant Objects
I think you should avoid macros, especially since you wrote that most of your new code is in modern C++.

If possible do NOT use macros. They aren't too much admired when it comes to modern C++.

With defines I lose type safety
Not necessarily...
// unsigned defines
#define X_NEW 0x01u
#define X_NEW (unsigned(0x01)) // if you find this more readable...
and with enum I lose some space (integers)
Not necessarily - but you do have to be explicit at points of storage...
struct X
{
RecordType recordType : 4; // use exactly 4 bits...
RecordType recordType2 : 4; // use another 4 bits, typically in the same byte
// of course, the overall record size may still be padded...
};
and probably have to cast when I want to do bitwise operation.
You can create operators to take the pain out of that:
RecordType operator|(RecordType lhs, RecordType rhs)
{
return RecordType((unsigned)lhs | (unsigned)rhs);
}
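The remaining bitwise operators follow the same pattern; a sketch:

RecordType operator&(RecordType lhs, RecordType rhs)
{
    return RecordType((unsigned)lhs & (unsigned)rhs);
}

RecordType& operator|=(RecordType& lhs, RecordType rhs)
{
    return lhs = lhs | rhs;
}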
With const I think I also lose type safety since a random uint8 could get in by mistake.
The same can happen with any of these mechanisms: range and value checks are normally orthogonal to type safety (though user-defined types - i.e. your own classes - can enforce "invariants" about their data). With enums, the compiler's free to pick a larger type to host the values, and an uninitialised, corrupted or just mis-set enum variable could still end up interpreting its bit pattern as a number you wouldn't expect - comparing unequal to any of the enumeration identifiers, any combination of them, and 0.
Is there some other cleaner way? / If not, what would you use and why?
Well, in the end the tried-and-trusted C-style bitwise OR of enumerations works pretty well once you have bit fields and custom operators in the picture. You can further improve your robustness with some custom validation functions and assertions as in mat_geek's answer; techniques often equally applicable to handling string, int, and double values etc.
You could argue that this is "cleaner":
enum RecordType { New, Deleted, Modified, Existing };
showRecords([](RecordType r) { return r == New || r == Deleted; });
I'm indifferent: the data bits pack tighter but the code grows significantly... depends how many objects you've got, and the lambdas - beautiful as they are - are still messier and harder to get right than bitwise ORs.
BTW - the argument about thread safety's pretty weak IMHO - best remembered as a background consideration rather than becoming a dominant decision-driving force; sharing a mutex across the bitfields is a more likely practice even if unaware of their packing (mutexes are relatively bulky data members - I have to be really concerned about performance to consider having multiple mutexes on members of one object, and I'd look carefully enough to notice they were bit fields). Any sub-word-size type could have the same problem (e.g. a uint8_t). Anyway, you could try atomic compare-and-swap style operations if you're desperate for higher concurrency.

Enums would be more appropriate as they provide "meaning to the identifiers" as well as type safety. You can clearly tell "xDeleted" is of "RecordType" and that it represents the "type of a record" (wow!) even after years. Consts would require comments to convey that, and they would also require jumping up and down in the code.

Even if you have to use 4 bytes to store an enum (I'm not that familiar with C++ -- I know you can specify the underlying type in C#), it's still worth it -- use enums.
In this day and age of servers with GBs of memory, things like 4 bytes vs. 1 byte of memory at the application level in general don't matter. Of course, if in your particular situation, memory usage is that important (and you can't get C++ to use a byte to back the enum), then you can consider the 'static const' route.
At the end of the day, you have to ask yourself, is it worth the maintenance hit of using 'static const' for the 3 bytes of memory savings for your data structure?
Something else to keep in mind -- IIRC, on x86, data structures are 4-byte aligned, so unless you have a number of byte-width elements in your 'record' structure, it might not actually matter. Test and make sure it does before you make a tradeoff in maintainability for performance/space.
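(As an aside, and assuming a compiler newer than this question: since C++11 you can specify the underlying type of a C++ enum too, so a byte-backed enum is available if the storage really matters. A sketch:)

#include <cstdint>

enum RecordType : std::uint8_t { xNew = 1, xDeleted = 2, xModified = 4, xExisting = 8 };

static_assert(sizeof(RecordType) == 1, "one byte per flag value");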

If you want the type safety of classes, with the convenience of enumeration syntax and bit checking, consider Safe Labels in C++. I've worked with the author, and he's pretty smart.
Beware, though. In the end, this package uses templates and macros!

Do you actually need to pass around the flag values as a conceptual whole, or are you going to have a lot of per-flag code? Either way, I think having this as a class or struct of 1-bit bitfields might actually be clearer:
struct RecordFlag {
unsigned isnew:1, isdeleted:1, ismodified:1, isexisting:1;
};
Then your record class could have a struct RecordFlag member variable, functions can take arguments of type struct RecordFlag, etc. The compiler should pack the bitfields together, saving space.

I probably wouldn't use an enum for this kind of thing where the values can be combined together; enums are more typically for mutually exclusive states.
But whichever method you use, to make it clearer that these are values whose bits can be combined together, use this syntax for the actual values instead:
#define X_NEW (1 << 0)
#define X_DELETED (1 << 1)
#define X_MODIFIED (1 << 2)
#define X_EXISTING (1 << 3)
Using a left shift there helps to indicate that each value is intended to be a single bit, and it makes it less likely that later on someone will do something wrong like add a new value and assign it a value of 9.
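The same left-shift convention carries over if you end up preferring the enum from the question over #defines; a sketch:

enum RecordType
{
    xNew      = 1 << 0,
    xDeleted  = 1 << 1,
    xModified = 1 << 2,
    xExisting = 1 << 3
};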

Based on KISS, high cohesion and low coupling, ask these questions -
Who needs to know? my class, my library, other classes, other libraries, 3rd parties
What level of abstraction do I need to provide? Does the consumer understand bit operations?
Will I have to interface from VB/C# etc.?
There is a great book, "Large-Scale C++ Software Design"; it promotes using base types externally, and if you can avoid another header file/interface dependency you should try to.

If you are using Qt you should have a look at QFlags.
The QFlags class provides a type-safe way of storing OR-combinations of enum values.
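A sketch of how the record flags from this question might look with QFlags (Record and showRecords are illustrative names here, not Qt API):

#include <QFlags>

class Record
{
public:
    enum State {
        New      = 0x1,
        Deleted  = 0x2,
        Modified = 0x4,
        Existing = 0x8
    };
    Q_DECLARE_FLAGS(States, State)
};
Q_DECLARE_OPERATORS_FOR_FLAGS(Record::States)

void showRecords(Record::States mask)
{
    if (mask.testFlag(Record::Deleted)) {
        // ...
    }
}

// Usage: showRecords(Record::New | Record::Deleted); -- a type-safe OR of enum values.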

I would rather go with
typedef enum { xNew = 1, xDeleted, xModified = 4, xExisting = 8 } RecordType;
Simply because:
It is cleaner and it makes the code readable and maintainable.
It logically groups the constants.
Programmer's time is more important, unless your job is to save those 3 bytes.

Not that I like to over-engineer everything but sometimes in these cases it may be worth creating a (small) class to encapsulate this information.
If you create a class RecordType then it might have functions like:
void setDeleted();
void clearDeleted();
bool isDeleted();
etc... (or whatever convention suits)
It could validate combinations (in the case where not all combinations are legal, eg if 'new' and 'deleted' could not both be set at the same time). If you just used bit masks etc then the code that sets the state needs to validate, a class can encapsulate that logic too.
The class may also give you the ability to attach meaningful logging info to each state, you could add a function to return a string representation of the current state etc (or use the streaming operators '<<').
For all that, if you are worried about storage you could still have the class contain only a 'char' data member, so it takes only a small amount of storage (assuming it is non-virtual). Of course, depending on the hardware etc. you may have alignment issues.
You could have the actual bit values not visible to the rest of the 'world' if they are in an anonymous namespace inside the cpp file rather than in the header file.
If you find that the code using the enum/#define/bitmask etc. has a lot of 'support' code to deal with invalid combinations, logging etc. then encapsulation in a class may be worth considering. Of course most times simple problems are better off with simple solutions...
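A minimal sketch of the kind of class described above (illustrative only; the bit values are hidden as an implementation detail and only a char of storage is used):

#include <string>

class RecordType
{
public:
    RecordType() : state_(0) {}

    void setDeleted()      { state_ |= Deleted; }
    void clearDeleted()    { state_ &= ~Deleted; }
    bool isDeleted() const { return (state_ & Deleted) != 0; }

    void setNew()          { state_ |= New; }
    bool isNew() const     { return (state_ & New) != 0; }

    // Handy for logging; extend as needed, or provide operator<<.
    std::string toString() const
    {
        std::string s;
        if (isNew())     s += "new ";
        if (isDeleted()) s += "deleted ";
        return s;
    }

private:
    enum { New = 1, Deleted = 2, Modified = 4, Existing = 8 };
    char state_;   // a single byte of storage
};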


Advantages of boolean values to bit-fields

The code base I work in is quite old. While we compile nearly everything with C++11, much of the code was written in C many years ago. When developing new classes in old areas I always find myself in a situation where I have to choose between matching the old methodologies or going with a more modern approach.
In most cases, I prefer sticking with more modern techniques when possible. However, one common old practice I often see, which I have a hard time arguing against the use of, is bitfields. We pass a lot of messages here, and many times they are full of single-bit values. Take the example below:
class NewStructure
{
public:
const bool getValue1() const
{
return value1;
}
void setValue1(const bool input)
{
value1 = input;
}
private:
bool value1;
bool value2;
bool value3;
bool value4;
bool value5;
bool value6;
bool value7;
bool value8;
};
struct OldStructure
{
const bool getValue1() const
{
return value1;
}
void setValue1(const bool input)
{
value1 = input;
}
unsigned char value1 : 1;
unsigned char value2 : 1;
unsigned char value3 : 1;
unsigned char value4 : 1;
unsigned char value5 : 1;
unsigned char value6 : 1;
unsigned char value7 : 1;
unsigned char value8 : 1;
};
In this case the sizes are 8 bytes for the New Structure and 1 for the old.
I added a "getter" and "setter" to illustrate the point that from a user perspective, they can be identical. I realize that perhaps you could make the case of readability for the next developer, but other than that, is there a reason to avoid bit fields? I know that packed fields take a performance hit, but because these are all characters, padding rules are still in place.
There are several things to consider when using bitfields. Those are (order of importance would depend on situation)
Performance
Bitfield operations incur a performance penalty when set or read (as compared to direct types). A simple example of codegen shows the extra instructions emitted: https://gcc.godbolt.org/z/DpcErN However, bitfields provide for more compact data, which becomes more cache-friendly, and that could completely outweigh any drawbacks of the additional operations. The only way to understand the real performance impact is by benchmarking the actual application in the real use case.
ABI Interoperability
Endianness of bit fields is implementation-defined, so the layout of the same struct produced by two compilers can differ.
Usability
There is no reference binding to a bitfield, nor can you take its address. This could affect code and make it less clear.
For you, as the programmer, there's not much difference. But the machine code to access a whole byte is much simpler/shorter than to access an individual bit, so using bitfields bulks up the generated code.
In a pseudo-assembly-language, your setter might turn into something like:
ldb input1,b ; get the new value into accumulator b
movb b,value1 ; put it into the variable
rts ; return from subroutine
But it's not so easy for bitfields:
ldb input1,b ; get the new value into accumulator b
movb bitfields,a ; get current bitfield values into accumulator a
cmpb b,#0 ; See what to do.
brz clearvalue1: ; If it's zero, go to clearing the bit
orb #$80,a ; set the bit representing value1.
bra resume: ; skip the clearing code.
clearvalue1:
andb #$7f,a ; clear the bit representing value1
resume:
movb a,bitfields ; put the value back
rts ; return
And it has to do that for each of your 8 members' setters, and something similar for getters. It adds up. Additionally, even today's dumbest compilers would probably inline the full-byte setter code, rather than actually make a subroutine call. For the bitfield setter, it may depend whether you're compiling optimizing for speed vs. space.
And you've only asked about booleans. If they were integer bitfields, then the compiler has to deal with loading, masking out prior values, shifting the value to align into its field, masking out unused bits, and/or the value into place, then writing it back to memory.
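To make that concrete, here is roughly the read-modify-write sequence spelled out by hand for a hypothetical 3-bit integer field starting at bit 4 of a 32-bit word (a sketch of what the compiler must emit, not taken from any particular codebase):

#include <cstdint>

void set_field(std::uint32_t& word, std::uint32_t v)
{
    const std::uint32_t shift = 4;
    const std::uint32_t mask  = 0x7u;            // the field is 3 bits wide
    word = (word & ~(mask << shift))             // load and clear the old field bits
         | ((v & mask) << shift);                // mask, shift and merge the new value, then store
}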
So why would you use one vs. the other?
Bitfields are slower, but pack data more efficiently.
Non-bitfields are faster, and require less machine code to access.
As the developer, it's your judgement call. If you will be keeping many instances of Structure in memory at once, the memory savings may be worth it. If you aren't going to have many instances of that structure in memory at once, the compiled code bloat offsets memory savings and you're sacrificing speed.
template<typename enum_type, size_t n_bits>
class bit_flags{
    std::bitset<n_bits> bits;
    static size_t idx(enum_type bit){ return static_cast<size_t>(bit); }
public:
    auto operator[](enum_type bit){ return bits[idx(bit)]; }
    auto& set(enum_type bit){ bits.set(idx(bit)); return *this; }
    auto& reset(enum_type bit){ bits.reset(idx(bit)); return *this; }
    //go on with flip et al...
    static_assert(std::is_enum<enum_type>{}, "enum_type must be an enum");
};
enum class v_flags{ v1, v2, /*...*/ vN };
bit_flags<v_flags, static_cast<size_t>(v_flags::vN) + 1> my_flags;
my_flags.set(v_flags::v1);
my_flags[v_flags::v2] = true;
std::bitset is as efficient as bool bit fields. You can wrap it in a class to force using every bit by names defined in an enum. Now you have a small but scalable utility to use for multiple different sets of bool flags. C++17 makes it even more convenient:
template<auto last_flag, typename enum_type = decltype(last_flag)>
class bit_flags{
    std::bitset<static_cast<size_t>(last_flag) + 1> bits;
    //...
};
bit_flags<v_flags::vN> my_flags;

Portable bit fields for Handles

I want to use and store "Handles" to data in an object buffer to reduce allocation overhead. The handle is simply an index into an array with the object. However I need to detect use-after-reallocations, as this could slip in quite easily. The common approach seems to be using bit fields. However this leads to 2 problems:
Bit fields are implementation defined
Bit shifting is not portable across big/little endian machines.
What I need:
Store handle to file (file handler can manage either integer types (byte swapping) or byte arrays)
Store 2 values in the handle with minimum space
What I got:
template<class T_HandleDef, typename T_Storage = uint32_t>
struct Handle
{
typedef T_HandleDef HandleDef;
typedef T_Storage Storage;
Handle(): handle_(0){}
private:
const T_Storage handle_;
};
template<unsigned T_numIndexBits = 16, typename T_Tag = void>
struct HandleDef{
static const unsigned numIndexBits = T_numIndexBits;
};
template<class T_Handle>
struct HandleAccessor{
typedef typename T_Handle::Storage Storage;
typedef typename T_Handle::HandleDef HandleDef;
static const unsigned numIndexBits = HandleDef::numIndexBits;
static const unsigned numMagicBits = sizeof(Storage) * 8 - numIndexBits;
/// "Magic" struct that splits the handle into values
union HandleData{
struct
{
Storage index : numIndexBits;
Storage magic : numMagicBits;
};
T_Handle handle;
};
};
A usage would be for example:
typedef Handle<HandleDef<24> > FooHandle;
FooHandle Create(unsigned idx, unsigned m){
HandleAccessor<FooHandle>::HandleData data;
data.index = idx;
data.magic = m;
return data.handle;
}
My goal was to keep the handle as opaque as possible, add a bool check but nothing else. Users of the handle should not be able to do anything with it but passing it around.
So problems I run into:
Union is UB -> Replace its T_Handle by Storage and add a ctor to Handle from Storage
How does the compiler lay out the bit field? I fill the whole union/type so there should be no padding. So probably the only thing that can differ is which member comes first depending on endianness, correct?
How can I store handle_ to a file and load it from a possibly different-endianness machine and still have index and magic be correct? I think I can store the containing Storage 'endian-correct' and get correct values, IF both members occupy exactly half the space (2 shorts in a uint), but I always want more space for the index than for the magic value.
Note: There are already questions about bitfields and unions. Summary:
Bitfields may have unexpected padding (impossible here as whole type occupied)
Order of "members" depend on compiler (only 2 possible ways here, should be save to assume order depends entirely on endianess, so this may or may not actually help here)
A specific binary layout of bits can be achieved by manual shifting (or e.g. wrappers: http://blog.codef00.com/2014/12/06/portable-bitfields-using-c11/) -> This is not an answer here; I also need a specific layout of the values IN the bitfield. So I'm not sure what I get if I e.g. create a handle as handle = (magic << numIndexBits) | index and save/load this as binary (no endianness conversion). I'm missing a big-endian machine for testing.
Note: No C++11, but boost is allowed.
The answer is pretty simple (based on another question I forgot the link to and comments by @Jeremy Friesner):
As "numbers" are already an abstraction in C++ one can be sure to always have the same bit representation when the variable is in a CPU register (when it is used for anything calculation like) Also bit shifts in C++ are defined in an endian-independent way. This means x << 1 is always equal x * 2 (and hence big-endian)
The only time one gets endianness problems is when saving to a file, sending/receiving over the network, or accessing the memory differently (e.g. via pointers...).
One cannot use C++ bitfields here, as one cannot be 100% sure about the order of the "entries". Bitfield containers might be ok, if they allow access to the data as a "number".
Safest is (still) using bit shifts, which are very simple in this case (only 2 values). During storing/serialization the number must then be stored in an endian-agnostic way.
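A minimal sketch of that bit-shift variant, assuming the 24/8 split used above (constants and function names are illustrative):

#include <stdint.h>

const unsigned numIndexBits = 24;
const uint32_t indexMask = (1u << numIndexBits) - 1u;

uint32_t packHandle(uint32_t index, uint32_t magic)
{
    return (magic << numIndexBits) | (index & indexMask);
}
uint32_t handleIndex(uint32_t handle) { return handle & indexMask; }
uint32_t handleMagic(uint32_t handle) { return handle >> numIndexBits; }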

C++. Struct padding / alignment on different platforms and atomatic check of layout compatibility

I have an embedded device connected to a PC,
and some big struct S with many fields and arrays of a custom-defined type FixedPoint_t.
FixedPoint_t is a templated POD class with exactly one data member that varies in size from char to long depending on the template params. Anyway, it passes static_assert((std::is_pod<FixedPoint_t<0,8,8> >::value == true),"");
It would be good if this big struct had a compatible underlying memory representation on both the embedded system and the controlling PC. This allows significant simplification of the communication protocol to commands like "set word/byte with offset N to value V". Assume endianness is the same on both platforms.
I see 3 solutions here:
Use something like #pragma packed on both sides.
BUT I got a warning when I put __attribute__((packed)) on the struct S declaration:
warning: ignoring packed attribute because of unpacked non-POD field.
This is because FixedPoint_t is not declared as packed.
I don't want to declare it as packed because this type is widely used in the whole program and packing could lead to a performance drop.
Make correct struct serialization. This is not acceptable because of code bloat and additional RAM usage... The protocol would also be more complicated because I need random access to the struct. Now I think this is not an option.
Control padding manually. I can add some fields, reorder others... just to achieve no padding on both platforms. This will satisfy me at the moment, but I need a good way to write a test that shows me whether padding is there or not.
I can compare the sum of sizeof() of each field to sizeof(struct).
I can compare offsetof() of each struct field on both platforms.
Both variants are ugly enough...
What do you recommend? Especially, I am interested in manual padding control and automatic padding detection in tests.
EDIT: Is it sufficient to compare sizeof(big struct) on the two platforms to detect layout compatibility (assuming endianness is equal)? I think the sizes should not match if the padding differs.
EDIT2:
//this struct should have padding on 32bit machine
//and has no padding on 8bit
typedef struct
{
uint8_t f8;
uint32_t f32;
uint8_t arr[5];
} serialize_me_t;
//count of members in struct
#define SERTABLE_LEN 3
//one table entry for each serialize_me_t data member
static const struct {
size_t width;
size_t offset;
// size_t cnt; //why we need cnt?
} ser_des_table[SERTABLE_LEN] =
{
{ sizeof(serialize_me_t::f8), offsetof(serialize_me_t, f8)},
{ sizeof(serialize_me_t::f32), offsetof(serialize_me_t, f32)},
{ sizeof(serialize_me_t::arr), offsetof(serialize_me_t, arr)},
};
void serialize(void* serialize_me_ptr, char* buf)
{
const char* struct_ptr = (const char*)serialize_me_ptr;
for(int i=0; i<SERTABLE_LEN; i++)
{
memcpy(buf, struct_ptr + ser_des_table[i].offset, ser_des_table[i].width);
buf += ser_des_table[i].width;
}
}
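(As an aside, the sizeof-sum test from the EDIT above can be made a compile-time check; a sketch, assuming the serialize_me_t from EDIT2:)

static_assert(sizeof(serialize_me_t) ==
              sizeof(uint8_t) + sizeof(uint32_t) + 5 * sizeof(uint8_t),
              "serialize_me_t contains padding on this platform");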
I strongly recommend using option 2:
You are safe against future changes (new PCD/ABI, compiler, platform, etc.).
Code bloat can be kept to a minimum if well thought out. There is just one function needed per direction.
You can create the required tables/code (semi-)automatically (I use Python for such). This way both sides will stay in sync.
You definitely should add a CRC to the data anyway. As you likely do not want to calculate this in the rx/tx interrupt, you'll have to provide an array anyway.
Using a struct directly will soon become a maintenance nightmare. Even worse if someone else has to track this code.
Protocols, etc. tend to be reused. If it is a platform with different endianess, the other approach goes bang.
To create the data structs and ser/des tables, you can use offsetof to get the offset of each type in the struct. If that table is made an include file, it can be used on both sides. You can even create the struct and table e.g. by a Python script. Adding that to the build process ensures it is always up to date and you avoid additional typing.
For instance (in C, just to get the idea):
// protocol.inc
typedef struct {
uint32_t i;
uint16_t s[5];
uint32_t j;
} ProtocolType;
static const struct {
size_t width;
size_t offset;
size_t cnt;
} ser_des_table[] = {
{ sizeof(ProtocolType::i), offsetof(ProtocolType, i), 1 },
{ sizeof(ProtocolType::s[0]), offsetof(ProtocolType, s), 5 },
...
};
If not created automatically, I'd use macros to generate the data, possibly by including the file twice: once to generate the struct definition and once for the table. This is possible by redefining the macros in between.
You should care about the representation of signed integers and floats (implementation defined, floats are likely IEEE754 as proposed by the standard).
As an alternative to the width field, you can use a "type" code (e.g. a char which maps to an implementation-defined type). This way you can add custom types with the same width but different encodings (e.g. uint32_t and an IEEE754 float). This will completely abstract the network protocol encoding from the physical machine (the best solution). Note that nothing hinders you from using common encodings which do not complicate the code a single bit (literally).

When would anyone use a union? Is it a remnant from the C-only days?

I have learned but don't really get unions. Every C or C++ text I go through introduces them (sometimes in passing), but they tend to give very few practical examples of why or where to use them. When would unions be useful in a modern (or even legacy) case? My only two guesses would be programming microprocessors when you have very limited space to work with, or when you're developing an API (or something similar) and you want to force the end user to have only one instance of several objects/types at one time. Are these two guesses even close to right?
Unions are usually used in the company of a discriminator: a variable indicating which of the fields of the union is valid. For example, let's say you want to create your own Variant type:
struct my_variant_t {
int type;
union {
char char_value;
short short_value;
int int_value;
long long_value;
float float_value;
double double_value;
void* ptr_value;
};
};
Then you would use it such as:
/* construct a new float variant instance */
void init_float(struct my_variant_t* v, float initial_value) {
v->type = VAR_FLOAT;
v->float_value = initial_value;
}
/* Increments the value of the variant by the given int */
void inc_variant_by_int(struct my_variant_t* v, int n) {
switch (v->type) {
case VAR_FLOAT:
v->float_value += n;
break;
case VAR_INT:
v->int_value += n;
break;
...
}
}
This is actually a pretty common idiom, especially in Visual Basic internals.
For a real example see SDL's SDL_Event union. (actual source code here). There is a type field at the top of the union, and the same field is repeated on every SDL_*Event struct. Then, to handle the correct event you need to check the value of the type field.
The benefits are simple: there is one single data type to handle all event types without using unnecessary memory.
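A sketch of the dispatch pattern this describes (handleKey and done are placeholder names; the SDL calls and constants are the real API):

#include <SDL.h>

void handleKey(const SDL_KeyboardEvent& key);   // placeholder

void pumpEvents(bool& done)
{
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {              // the discriminator at the top of the union
        case SDL_KEYDOWN:
            handleKey(event.key);          // the SDL_KeyboardEvent member is valid here
            break;
        case SDL_QUIT:
            done = true;                   // the SDL_QuitEvent member is valid here
            break;
        }
    }
}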
I find C++ unions pretty cool. It seems that people usually only think of the use case where one wants to change the value of a union instance "in place" (which, it seems, serves only to save memory or perform doubtful conversions).
In fact, unions can be of great power as a software engineering tool, even when you never change the value of any union instance.
Use case 1: the chameleon
With unions, you can regroup a number of arbitrary classes under one denomination, which isn't without similarities with the case of a base class and its derived classes. What changes, however, is what you can and can't do with a given union instance:
struct Batman;
struct BaseballBat;
union Bat
{
Batman brucewayne;
BaseballBat club;
};
ReturnType1 f(void)
{
BaseballBat bb = {/* */};
Bat b;
b.club = bb;
// do something with b.club
}
ReturnType2 g(Bat& b)
{
// do something with b, but how do we know what's inside?
}
Bat returnsBat(void);
ReturnType3 h(void)
{
Bat b = returnsBat();
// do something with b, but how do we know what's inside?
}
It appears that the programmer has to be certain of the type of the content of a given union instance when he wants to use it. It is the case in function f above. However, if a function were to receive a union instance as a passed argument, as is the case with g above, then it wouldn't know what to do with it. The same applies to functions returning a union instance, see h: how does the caller know what's inside?
If a union instance never gets passed as an argument or as a return value, then it's bound to have a very monotonous life, with spikes of excitement when the programmer chooses to change its content:
Batman bm = {/* */};
BaseballBat bb = {/* */};
Bat b;
b.brucewayne = bm;
// stuff
b.club = bb;
And that's the most (un)popular use case of unions. Another use case is when a union instance comes along with something that tells you its type.
Use case 2: "Nice to meet you, I'm object, from Class"
Suppose a programmer elected to always pair up a union instance with a type descriptor (I'll leave it to the reader's discretion to imagine an implementation for one such object). This defeats the purpose of the union itself if what the programmer wants is to save memory and that the size of the type descriptor is not negligible with respect to that of the union. But let's suppose that it's crucial that the union instance could be passed as an argument or as a return value with the callee or caller not knowing what's inside.
Then the programmer has to write a switch control flow statement to tell Bruce Wayne apart from a wooden stick, or something equivalent. It's not too bad when there are only two types of contents in the union but obviously, the union doesn't scale anymore.
Use case 3:
As the authors of a recommendation for the ISO C++ Standard put it back in 2008,
Many important problem domains require either large numbers of objects or limited memory
resources. In these situations conserving space is very important, and a union is often a perfect way to do that. In fact, a common use case is the situation where a union never changes its active member during its lifetime. It can be constructed, copied, and destructed as if it were a struct containing only one member. A typical application of this would be to create a heterogeneous collection of unrelated types which are not dynamically allocated (perhaps they are in-place constructed in a map, or members of an array).
And now, an example, with a UML class diagram:
The situation in plain English: an object of class A can have objects of any class among B1, ..., Bn, and at most one of each type, with n being a pretty big number, say at least 10.
We don't want to add fields (data members) to A like so:
private:
B1 b1;
.
.
.
Bn bn;
because n might vary (we might want to add Bx classes to the mix), and because this would cause a mess with constructors and because A objects would take up a lot of space.
We could use a wacky container of void* pointers to Bx objects with casts to retrieve them, but that's fugly and so C-style... but more importantly that would leave us with the lifetimes of many dynamically allocated objects to manage.
Instead, what can be done is this:
union Bee
{
B1 b1;
.
.
.
Bn bn;
};
enum BeesTypes { TYPE_B1, ..., TYPE_BN };
class A
{
private:
std::unordered_map<int, Bee> data; // C++11, otherwise use std::map
public:
Bee get(int); // the implementation is obvious: get from the unordered map
};
Then, to get the content of a union instance from data, you use a.get(TYPE_B2).b2 and the likes, where a is a class A instance.
This is all the more powerful since unions are unrestricted in C++11. See the document linked to above or this article for details.
One example is in the embedded realm, where each bit of a register may mean something different. For example, a union of an 8-bit integer and a structure with 8 separate 1-bit bitfields allows you to either change one bit or the entire byte.
Herb Sutter wrote in GOTW about six years ago, with emphasis added:
"But don't think that unions are only a holdover from earlier times. Unions are perhaps most useful for saving space by allowing data to overlap, and this is still desirable in C++ and in today's modern world. For example, some of the most advanced C++ standard library implementations in the world now use just this technique for implementing the "small string optimization," a great optimization alternative that reuses the storage inside a string object itself: for large strings, space inside the string object stores the usual pointer to the dynamically allocated buffer and housekeeping information like the size of the buffer; for small strings, the same space is instead reused to store the string contents directly and completely avoid any dynamic memory allocation. For more about the small string optimization (and other string optimizations and pessimizations in considerable depth), see... ."
And for a less useful example, see the long but inconclusive question gcc, strict-aliasing, and casting through a union.
Well, one example use case I can think of is this:
typedef union
{
struct
{
uint8_t a;
uint8_t b;
uint8_t c;
uint8_t d;
};
uint32_t x;
} some32bittype;
You can then access the 8-bit separate parts of that 32-bit block of data; however, prepare to potentially be bitten by endianness.
This is just one hypothetical example, but whenever you want to split data in a field into component parts like this, you could use a union.
That said, there is also a method which is endian-safe:
uint32_t x;
uint8_t a = (x & 0xFF000000) >> 24;
This is endian-safe because the mask and shift operate on the value of x rather than on its byte representation in memory, so the compiler generates the correct instructions for either endianness.
Some uses for unions:
Provide a general endianness interface to an unknown external host.
Manipulate foreign CPU architecture floating point data, such as accepting VAX G_FLOATS from a network link and converting them to IEEE 754 long reals for processing.
Provide straightforward bit twiddling access to a higher-level type.
union {
unsigned char byte_v[16];
long double ld_v;
}
With this declaration, it is simple to display the hex byte values of a long double, change the exponent's sign, determine if it is a denormal value, or implement long double arithmetic for a CPU which does not support it, etc.
Saving storage space when fields are dependent on certain values:
class person {
string name;
char gender; // M = male, F = female, O = other
union {
date vasectomized; // for males
int pregnancies; // for females
} gender_specific_data;
}
Grep the include files for use with your compiler. You'll find dozens to hundreds of uses of union:
[wally@zenetfedora ~]$ cd /usr/include
[wally@zenetfedora include]$ grep -w union *
a.out.h: union
argp.h: parsing options, getopt is called with the union of all the argp
bfd.h: union
bfd.h: union
bfd.h:union internal_auxent;
bfd.h: (bfd *, struct bfd_symbol *, int, union internal_auxent *);
bfd.h: union {
bfd.h: /* The value of the symbol. This really should be a union of a
bfd.h: union
bfd.h: union
bfdlink.h: /* A union of information depending upon the type. */
bfdlink.h: union
bfdlink.h: this field. This field is present in all of the union element
bfdlink.h: the union; this structure is a major space user in the
bfdlink.h: union
bfdlink.h: union
curses.h: union
db_cxx.h:// 4201: nameless struct/union
elf.h: union
elf.h: union
elf.h: union
elf.h: union
elf.h:typedef union
_G_config.h:typedef union
gcrypt.h: union
gcrypt.h: union
gcrypt.h: union
gmp-i386.h: union {
ieee754.h:union ieee754_float
ieee754.h:union ieee754_double
ieee754.h:union ieee854_long_double
ifaddrs.h: union
jpeglib.h: union {
ldap.h: union mod_vals_u {
ncurses.h: union
newt.h: union {
obstack.h: union
pi-file.h: union {
resolv.h: union {
signal.h:extern int sigqueue (__pid_t __pid, int __sig, __const union sigval __val)
stdlib.h:/* Lots of hair to allow traditional BSD use of `union wait'
stdlib.h: (__extension__ (((union { __typeof(status) __in; int __i; }) \
stdlib.h:/* This is the type of the argument to `wait'. The funky union
stdlib.h: causes redeclarations with either `int *' or `union wait *' to be
stdlib.h:typedef union
stdlib.h: union wait *__uptr;
stdlib.h: } __WAIT_STATUS __attribute__ ((__transparent_union__));
thread_db.h: union
thread_db.h: union
tiffio.h: union {
wchar.h: union
xf86drm.h:typedef union _drmVBlank {
Unions are useful when dealing with byte-level (low-level) data.
One of my recent usages was for IP address modelling, which looks like this:
// Composite structure for IP address storage
union
{
// IPv4 # 32-bit identifier
// Padded 12-bytes for IPv6 compatibility
union
{
struct
{
unsigned char _reserved[12];
unsigned char _IpBytes[4];
} _Raw;
struct
{
unsigned char _reserved[12];
unsigned char _o1;
unsigned char _o2;
unsigned char _o3;
unsigned char _o4;
} _Octet;
} _IPv4;
// IPv6 # 128-bit identifier
// Next generation internet addressing
union
{
struct
{
unsigned char _IpBytes[16];
} _Raw;
struct
{
unsigned short _w1;
unsigned short _w2;
unsigned short _w3;
unsigned short _w4;
unsigned short _w5;
unsigned short _w6;
unsigned short _w7;
unsigned short _w8;
} _Word;
} _IPv6;
} _IP;
Unions provide polymorphism in C.
An example when I've used a union:
class Vector
{
union
{
double _coord[3];
struct
{
double _x;
double _y;
double _z;
};
};
...
}
this allows me to access my data as an array or as individual elements.
I've used a union to have different terms refer to the same value. In image processing, whether I was working on columns or width or the size in the X direction, it can become confusing. To alleviate this problem, I use a union so I know which descriptions go together.
union { // union for the left-to-right dimension
uint32_t m_width;
uint32_t m_sizeX;
uint32_t m_columns;
};
union { // union for the top-to-bottom dimension
uint32_t m_height;
uint32_t m_sizeY;
uint32_t m_rows;
};
The union keyword, while still used in C++03, is mostly a remnant of the C days. The most glaring issue is that it only works with POD [1].
The idea of the union, however, is still present, and indeed the Boost libraries feature a union-like class:
boost::variant<std::string, Foo, Bar>
Which has most of the benefits of the union (if not all) and adds:
ability to correctly use non-POD types
static type safety
In practice, it has been demonstrated that it is equivalent to a combination of union + enum, and benchmarks have shown that it is as fast (while boost::any is more in the realm of dynamic_cast, since it uses RTTI).
[1] Unions were upgraded in C++11 (unrestricted unions), and can now contain objects with destructors, although the user has to invoke the destructor manually (on the currently active union member). It's still much easier to use variants.
A brilliant usage of union is memory alignment, which I found in the PCL (Point Cloud Library) source code. A single data structure in the API can target two architectures: CPUs with SSE support as well as CPUs without SSE support. For example, the data structure for PointXYZ is
typedef union
{
float data[4];
struct
{
float x;
float y;
float z;
};
} PointXYZ;
The 3 floats are padded with an additional float for SSE alignment.
So for
PointXYZ point;
The user can either access point.data[0] or point.x (depending on the SSE support) for accessing, say, the x coordinate.
More details on similar usage are at the following link: PCL documentation, PointT types
From the Wikipedia article on unions:
The primary usefulness of a union is to conserve space, since it provides a way of letting many different types be stored in the same space. Unions also provide crude polymorphism. However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct.
One common C programming idiom uses unions to perform what C++ calls a reinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values.
In the earliest days of C (e.g. as documented in 1974), all structures shared a common namespace for their members. Each member name was associated with a type and an offset; if "wd_woozle" was an "int" at offset 12, then given a pointer p of any structure type, p->wd_woozle would be equivalent to *(int*)(((char*)p)+12). The language required that all members of all structures types have unique names except that it explicitly allowed reuse of member names in cases where every struct where they were used treated them as a common initial sequence.
The fact that structure types could be used promiscuously made it possible to have structures behave as though they contained overlapping fields. For example, given definitions:
struct float1 { float f0;};
struct byte4 { char b0,b1,b2,b3; }; /* Unsigned didn't exist yet */
code could declare a structure of type "float1" and then use "members" b0...b3 to access the individual bytes therein. When the language was changed so that each structure would receive a separate namespace for its members, code which relied upon the ability to access things multiple ways would break. The value of separating out namespaces for different structure types was sufficient to require that such code be changed to accommodate it, but the value of such techniques was sufficient to justify extending the language to continue supporting it.
Code which had been written to exploit the ability to access the storage within a struct float1 as though it were a struct byte4 could be made to work in the new language by adding a declaration: union f1b4 { struct float1 ff; struct byte4 bb; };, declaring objects as type union f1b4; rather than struct float1, and replacing accesses to f0, b0, b1, etc. with ff.f0, bb.b0, bb.b1, etc. While there are better ways such code could have been supported, the union approach was at least somewhat workable, at least with C89-era interpretations of the aliasing rules.
Let's say you have n different types of configurations (each just being a set of variables defining parameters). By using an enumeration of the configuration types, you can define a structure that has the ID of the configuration type, along with a union of all the different types of configurations.
This way, whatever you pass the configuration to can use the ID to determine how to interpret the configuration data, but if the configurations were huge you would not be forced to have parallel structures for each potential type, wasting space.
One recent boost to the already elevated importance of unions has been given by the strict aliasing rule introduced in recent versions of the C standard.
You can use unions to do type-punning without violating the C standard.
This program has unspecified behavior (because I have assumed that float and unsigned int have the same length) but not undefined behavior (see here).
#include <stdio.h>
union float_uint
{
float f;
unsigned int ui;
};
int main()
{
float v = 241;
union float_uint fui = {.f = v};
//May trigger UNSPECIFIED BEHAVIOR but not UNDEFINED BEHAVIOR
printf("Your IEEE 754 float sir: %08x\n", fui.ui);
//This is UNDEFINED BEHAVIOR as it violates the Strict Aliasing Rule
unsigned int* pp = (unsigned int*) &v;
printf("Your IEEE 754 float, again, sir: %08x\n", *pp);
return 0;
}
I would like to add one good practical example of using a union: implementing a formula calculator/interpreter, or using some kind of it in computation (for example, you want to use run-time modifiable parts of your computing formulas - solving an equation numerically - just for example).
So you may want to define numbers/constants of different types (integer, floating-point, even complex numbers) like this:
struct Number{
enum NumType{ Int32, Float32, Float64, Complex }; NumType num_t;
union{ int ival; float fval; double dval; ComplexNumber cmplx_val; };
};
So you're saving memory and, more importantly, you avoid any dynamic allocations for a probably extreme quantity (if you use a lot of run-time defined numbers) of small objects (compared to implementations through class inheritance/polymorphism). But what's more interesting, you can still use the power of C++ polymorphism (if you're a fan of double dispatching, for example ;) with this type of struct. Just add a "dummy" interface pointer to the parent class of all number types as a field of this struct, pointing to this instance instead of/in addition to the raw type, or use good old C function pointers.
struct NumberBase
{
virtual NumberBase Add(NumberBase n);
...
};
struct NumberInt: Number
{
//implement methods assuming Number's union contains int
NumberBase Add(NumberBase n);
...
};
struct NumberDouble: Number
{
//implement methods assuming Number's union contains double
NumberBase Add(NumberBase n);
...
};
//etc. for all number types/or use templates
struct Number: NumberBase{
union{ int ival; float fval; double dval; ComplexNumber cmplx_val; };
NumberBase* num_t;
void Set(int a)
{
ival = a;
//still kind of a hack; hope it works because derived classes of Number don't add any fields
num_t = static_cast<NumberInt*>(this);
}
};
so you can use polymorphism instead of type checks with switch(type) - with a memory-efficient implementation (no dynamic allocation of small objects) - if you need it, of course.
From http://cplus.about.com/od/learningc/ss/lowlevel_9.htm:
The uses of union are few and far between. On most computers, the size of a pointer and an int are usually the same - this is because both usually fit into a register in the CPU. So if you want to do a quick and dirty cast of a pointer to an int or the other way, declare a union.
union intptr { int i; int * p; };
union intptr x; x.i = 1000;
/* puts 90 at location 1000 */
*(x.p)=90;
Another use of a union is in a command or message protocol where different size messages are sent and received. Each message type will hold different information but each will have a fixed part (probably a struct) and a variable part. This is how you might implement it:
struct head { int id; int response; int size; };
struct msgstring50 { struct head fixed; char message[50]; };
struct msgstring80 { struct head fixed; char message[80]; };
struct msgint10 { struct head fixed; int message[10]; };
struct msgack { struct head fixed; int ok; };
union messagetype {
struct msgstring50 m50;
struct msgstring80 m80;
struct msgint10 i10;
struct msgack ack;
};
In practice, although the unions are the same size, it makes sense to only send the meaningful data and not wasted space. A msgack is just 16 bytes in size while a msgstring80 is 92 bytes. So when a messagetype variable is initialized, it has its size field set according to which type it is. This can then be used by other functions to transfer the correct number of bytes.
Unions provide a way to manipulate different kinds of data in a single area of storage without embedding any machine-dependent information in the program.
They are analogous to variant records in Pascal.
As an example, such as might be found in a compiler symbol table manager, suppose that a constant may be an int, a float, or a character pointer. The value of a particular constant must be stored in a variable of the proper type, yet it is most convenient for table management if the value occupies the same amount of storage and is stored in the same place regardless of its type. This is the purpose of a union - a single variable that can legitimately hold any one of several types. The syntax is based on structures:
union u_tag {
int ival;
float fval;
char *sval;
} u;
The variable u will be large enough to hold the largest of the three types; the specific size is implementation-dependent. Any of these types may be assigned to u and then used in expressions, so long as the usage is consistent.

How far to go with a strongly typed language?

Let's say I am writing an API, and one of my functions take a parameter that represents a channel, and will only ever be between the values 0 and 15. I could write it like this:
void Func(unsigned char channel)
{
if(channel < 0 || channel > 15)
{ // throw some exception }
// do something
}
Or do I take advantage of C++ being a strongly typed language, and make myself a type:
class CChannel
{
public:
CChannel(unsigned char value) : m_Value(value)
{
if(value < 0 || value > 15)
{ // throw some exception }
}
operator unsigned char() { return m_Value; }
private:
unsigned char m_Value;
};
My function now becomes this:
void Func(const CChannel &channel)
{
// No input checking required
// do something
}
But is this total overkill? I like the self-documentation and the guarantee it is what it says it is, but is it worth paying the construction and destruction of such an object, let alone all the additional typing? Please let me know your comments and alternatives.
If you want this simpler approach, generalize it so you can get more use out of it, instead of tailoring it to a specific thing. Then the question is not "should I make an entire new class for this specific thing?" but "should I use my utilities?"; the latter is always yes. And utilities are always helpful.
So make something like:
template <typename T>
void check_range(const T& pX, const T& pMin, const T& pMax)
{
if (pX < pMin || pX > pMax)
throw std::out_of_range("check_range failed"); // or something else
}
Now you've already got this nice utility for checking ranges. Your code, even without the channel type, can already be made cleaner by using it. You can go further:
template <typename T, T Min, T Max>
class ranged_value
{
public:
    typedef T value_type;

    static const value_type minimum = Min;
    static const value_type maximum = Max;

    ranged_value(const value_type& pValue = value_type()) :
    mValue(pValue)
    {
        check_range(mValue, minimum, maximum);
    }

    const value_type& value(void) const
    {
        return mValue;
    }

    // arguably dangerous
    operator const value_type&(void) const
    {
        return mValue;
    }

private:
    value_type mValue;
};
Now you've got a nice utility, and can just do:
typedef ranged_value<unsigned char, 0, 15> channel;
void foo(const channel& pChannel);
And it's re-usable in other scenarios. Just stick it all in a "checked_ranges.hpp" file and use it whenever you need. It's never bad to make abstractions, and having utilities around isn't harmful.
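For example, other bounded quantities drop out of the same template (these typedefs and functions are only illustrations):
typedef ranged_value<int, 0, 100> percentage;
typedef ranged_value<int, 1, 31>  day_of_month;

void set_volume(const percentage& pLevel);   // hypothetical users of the typedefs
void set_day(const day_of_month& pDay);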
Also, never worry about overhead. Creating a class simply consists of running the same code you would do anyway. Additionally, clean code is to be preferred over anything else; performance is a last concern. Once you're done, then you can get a profiler to measure (not guess) where the slow parts are.
Yes, the idea is worthwhile, but (IMO) writing a complete, separate class for each range of integers is kind of pointless. I've run into enough situations that call for limited range integers that I've written a template for the purpose:
#include <stdexcept>

template <class T, T lower, T upper>
class bounded {
    T val;

    void assure_range(T v) {
        if (v < lower || upper <= v)
            throw std::range_error("Value out of range");
    }
public:
    bounded &operator=(T v) {
        assure_range(v);
        val = v;
        return *this;
    }

    bounded(T const &v = T()) {
        assure_range(v);
        val = v;
    }

    operator T() const { return val; }
};
Using it would be something like:
bounded<unsigned, 0, 16> channel;
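To make the behaviour concrete, a short sketch (note the upper bound is exclusive here, so 16 admits the values 0..15; the statements would live inside some function):
bounded<unsigned, 0, 16> channel;   // default-constructs to 0
channel = 9;                        // fine
unsigned raw = channel;             // implicit conversion back to unsigned
channel = 42;                       // throws std::range_error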
Of course, you can get more elaborate than this, but this simple one still handles about 90% of situations pretty well.
No, it is not overkill - you should always try to represent abstractions as classes. There are a zillion reasons for doing this and the overhead is minimal. I would call the class Channel though, not CChannel.
Can't believe nobody has mentioned enums so far. It won't give you bulletproof protection, but it's still better than a plain integer datatype.
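A hedged sketch of what the enum approach could look like (the names are invented):
enum Channel {
    Channel0 = 0, Channel1, Channel2, Channel3,
    /* ... */
    Channel15 = 15
};

void Func(Channel channel);   // callers have to name a Channel, though a cast can still defeat it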
Looks like overkill, especially the operator unsigned char() accessor. You're not encapsulating data; you're making obvious things more complicated and, probably, more error-prone.
Data types like your Channel are usually a part of something more abstracted.
So, if you use that type in your ChannelSwitcher class, you could put a commented typedef right in ChannelSwitcher's body (and that typedef is probably going to be public):
// Currently used channel type
typedef unsigned char Channel;
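For example, something along these lines (ChannelSwitcher and its members are hypothetical):
class ChannelSwitcher
{
public:
    // Currently used channel type
    typedef unsigned char Channel;

    void switchTo(Channel channel);   // hypothetical operation
    Channel current() const;          // hypothetical operation
};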
Whether you throw an exception when constructing your "CChannel" object or at the entrance to the method that requires the constraint makes little difference. In either case you're making runtime assertions, which means the type system really isn't doing you any good, is it?
If you want to know how far you can go with a strongly typed language, the answer is "very far, but not with C++." The kind of power you need to statically enforce a constraint like, "this method may only be invoked with a number between 0 and 15" requires something called dependent types--that is, types which depend on values.
To put the concept into pseudo-C++ syntax (pretending C++ had dependent types), you might write this:
void Func(unsigned char channel, IsBetween<0, channel, 15> proof) {
...
}
Note that IsBetween is parameterized by values rather than types. In order to call this function in your program now, you must provide to the compiler the second argument, proof, which must have the type IsBetween<0, channel, 15>. Which is to say, you have to prove at compile-time that channel is between 0 and 15! This idea of types which represent propositions, whose values are proofs of those propositions, is called the Curry-Howard Correspondence.
Of course, proving such things can be difficult. Depending on your problem domain, the cost/benefit ratio can easily tip in favor of just slapping runtime checks on your code.
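C++ has no dependent types, but when the channel happens to be known at compile time you can approximate the idea with a non-type template parameter and a static_assert (C++11). This is only a sketch, and it does not replace the runtime check for values known only at run time:
template <unsigned char Channel>
void Func()
{
    static_assert(Channel <= 15, "channel must be between 0 and 15");
    // do something
}

// Func<7>() compiles; Func<42>() is rejected at compile time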
Whether something is overkill or not often depends on lots of different factors. What might be overkill in one situation might not in another.
This case might not be overkill if you had lots of different functions that all accepted channels and all had to do the same range checking. The Channel class would avoid code duplication, and also improve readability of the functions (as would naming the class Channel instead of CChannel - Neil B. is right).
Sometimes when the range is small enough I will instead define an enum for the input.
If you add constants for the 16 different channels, and also a static method that fetches the channel for a given value (or throws an exception if out of range), then this can work without any additional overhead of object creation per method call.
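A hedged sketch of that shape (all names here are invented; the constants would each be defined once, e.g. in a .cpp file):
#include <stdexcept>

class Channel
{
public:
    static const Channel Channel0;   // ... and so on up to Channel15

    static Channel forValue(unsigned char value)
    {
        if (value > 15)
            throw std::out_of_range("channel out of range");
        return Channel(value);
    }

    unsigned char value() const { return m_Value; }

private:
    explicit Channel(unsigned char value) : m_Value(value) {}
    unsigned char m_Value;
};

// in one translation unit:
// const Channel Channel::Channel0 = Channel::forValue(0);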
Without knowing how this code is going to be used, it's hard to say if it's overkill or not or pleasant to use. Try it out yourself - write a few test cases using both approaches of a char and a typesafe class - and see which you like. If you get sick of it after writing a few test cases, then it's probably best avoided, but if you find yourself liking the approach, then it might be a keeper.
If this is an API that's going to be used by many, then perhaps opening it up to some review might give you valuable feedback, since they presumably know the API domain quite well.
In my opinion, I don't think what you are proposing is a big overhead, but for me, I prefer to save the typing and just put in the documentation that anything outside of 0..15 is undefined, and use an assert() in the function to catch errors for debug builds. I don't think the added complexity offers much more protection for programmers who are already used to C++, a language whose spec already contains a lot of undefined behaviour.
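A sketch of that lighter-weight style (the assert disappears in release builds where NDEBUG is defined):
#include <cassert>

void Func(unsigned char channel)
{
    // Documented contract: values outside 0..15 are undefined behaviour.
    assert(channel <= 15 && "channel must be in 0..15");
    // do something
}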
You have to make a choice. There is no silver bullet here.
Performance
From the performance perspective, the overhead isn't going to be much, if any (unless you have to count CPU cycles), so most likely this shouldn't be the determining factor.
Simplicity/ease of use etc
Make the API simple and easy to understand/learn.
You should know/decide whether numbers, enums, or a class would be easier for the API user.
Maintainability
If you are very sure the channel type is going to be an integer in the foreseeable future, I would go without the abstraction (consider using enums).
If you have a lot of use cases for bounded values, consider using the templates (Jerry's answer).
If you think Channel can potentially have methods, make it a class right now.
Coding effort
It's a one-time thing, so always think maintenance.
The channel example is a tough one:
At first it looks like a simple limited-range integer type, like you find in Pascal and Ada. C++ gives you no way to say this, but an enum is good enough.
If you look closer, could it be one of those design decisions that are likely to change? Could you start referring to "channel" by frequency? By call letters (WGBH, come in)? By network?
A lot depends on your plans. What's the main goal of the API? What's the cost model? Will channels be created very frequently (I suspect not)?
To get a slightly different perspective, let's look at the cost of screwing up:
You expose the rep as int. Clients write a lot of code, the interface is either respected or your library halts with an assertion failure. Creating channels is dirt cheap. But if you need to change the way you're doing things, you lose "backward bug-compatibility" and annoy authors of sloppy clients.
You keep it abstract. Everybody has to use the abstraction (not so bad), and everybody is futureproofed against changes in the API. Maintaining backwards compatibility is a piece of cake. But creating channels is more costly, and worse, the API has to state carefully when it is safe to destroy a channel and who is responsible for the decision and the destruction. Worst-case scenario: creating/destroying channels leads to a big memory leak or other performance failure, in which case you fall back to the enum.
I'm a sloppy programmer, and if it were for my own work, I'd go with the enum and eat the cost if the design decision changes down the line. But if this API were to go out to a lot of other programmers as clients, I'd use the abstraction.
Evidently I'm a moral relativist.
An integer whose values only ever lie between 0 and 15 is an unsigned 4-bit integer (a half-byte, or nibble; I imagine that if this channel-switching logic were implemented in hardware, the channel number might be represented as exactly that, a 4-bit register).
If C++ had that as a type you would be done right there:
void Func(unsigned nibble channel)
{
// do something
}
Alas, it doesn't. You could relax the API specification to express that the channel number is given as an unsigned char, with the actual channel being computed using a modulo 16 operation:
void Func(unsigned char channel)
{
    channel &= 0x0f; // truncate to the low 4 bits (0..15)
    // do something
}
Or, use a bitfield:
#include <iostream>

struct Channel {
    // 4-bit unsigned field
    unsigned int n : 4;
};

void Func(Channel channel)
{
    // do something with channel.n
}

int main()
{
    Channel channel = {9};
    std::cout << "channel is " << channel.n << '\n';
    Func(channel);
}
The latter might be less efficient.
I vote for your first approach, because it's simpler and easier to understand, maintain, and extend, and because it is more likely to map directly to other languages should your API have to be reimplemented/translated/ported/etc.
This is abstraction, my friend! It's always neater to work with objects.