We make extensive use of templates, so we cannot always tell the signedness of a type at hand, and we need techniques to silence warnings about checks that will eventually be optimized out anyway. I have a simple ASSERT(condition) macro which throws something if the condition does not evaluate to true.
The goal is to range-check a count value of type T: it must be at least zero and at most the maximum value of size_t.
template<typename SomeIntegral>
SomeIntegral ZERO()
{
    return SomeIntegral(0);
}

template<typename T>
class C
{
public:
    void f(T count)
    {
        std::vector<std::string> ret;
        ASSERT(count >= ZERO<T>()); // Check #1
        ASSERT(count < std::numeric_limits<size_t>::max()); // Check #2
        ret.reserve(size_t(count)); // Needs check #1 and #2 to succeed.
        // ...
    }
    // ...
};
Check #1 compiles without a warning, but check #2 produces "comparison between signed and unsigned integer expressions", because in this particular case count has a signed type. If I could somehow say ASSERT((unsigned T) count < std::numeric_limits<size_t>::max()) or similar... Converting to the unsigned counterpart of T is safe in this situation, because we know from check #1 that the value is at least zero.
... or what other compiler agnostic approach can I use?
I think you can use std::make_signed or std::make_unsigned. Whichever fits the need.
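For example, check #2 could use std::make_unsigned like this (a minimal sketch assuming C++11 <type_traits>; the cast is safe because check #1 already guarantees count is non-negative):
#include <limits>
#include <type_traits>

template<typename T>
void f(T count)
{
    ASSERT(count >= ZERO<T>()); // Check #1, unchanged
    typedef typename std::make_unsigned<T>::type UT;
    ASSERT(static_cast<UT>(count) < std::numeric_limits<size_t>::max()); // Check #2, now unsigned vs. unsigned
}
(For T narrower than int, the integer promotions could reintroduce the mismatch; casting straight to size_t works there too, again thanks to check #1.)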
Here is a custom implementation.
namespace internal {

    template<typename T>
    struct make_signed {
        typedef T type;
    };

#define MK_MAKESIGNED(T,V)             \
    template<> struct make_signed<T> { \
        typedef V type;                \
    };

    MK_MAKESIGNED(unsigned int, signed int);
    MK_MAKESIGNED(unsigned char, signed char);
    // ... add a mapping for any type you need.

#undef MK_MAKESIGNED

} // namespace internal
internal::make_signed<unsigned char>::type c;
Apply static_cast<unsigned long> to both the left-hand and right-hand side expressions in your second assert:
ASSERT(static_cast<unsigned long>(count)
< static_cast<unsigned long>(std::numeric_limits<size_t>::max()));
(I presume that your count is an integer, not a float; if I am wrong, go for the largest float type.)
The C++ standard doesn't require exact sizes for the integral types, which can sometimes lead to very unexpected results. Some smart people therefore introduced the <cstdint> header, containing (optional) typedefs like int64_t for types with exactly 64-bit width. This is not what I want.
Is there a (possibly optional) integral type with the property sizeof(mysterious_type) == 2, whatever system it is compiled on?
Reasoning: I'm trying to figure out the system's endianness. For this, after reviewing many questions on this board, I thought I would just define an integral type with size 2, assign one to it and check endianness like this:
enum endianess { LITTLE_ENDIAN, BIG_ENDIAN };

typedef utwowidthtype test_t; // some unsigned type with width 2

endianess inspectSystem() {
    static_assert(sizeof(test_t) == 2, "test_t is not size 2??!?!!");
    test_t integral = 0x1;
    return *reinterpret_cast<char*>(&integral) == 0 ? BIG_ENDIAN : LITTLE_ENDIAN;
}
While this is the reasoning behind my interest in such a type, finding one is more a matter of curiosity than of solving the problem.
If you're on a machine where char has a size other than 8 bits, you will have bigger portability issues to tackle. It's simpler to just do static_assert(CHAR_BIT == 8, "assuming 8-bit chars") than to do weird things in the name of false portability - if you can't test it, how can you say you did it right?
Although it won't help at all for finding endianness, you can get such a type by using type traits.
#include <type_traits>

template<typename T>
struct Identity
{
    typedef T type;
};

template<typename... Args>
struct ShortTypeFinder;

template<>
struct ShortTypeFinder<>
{
};

template<typename Head, typename... Tail>
struct ShortTypeFinder<Head, Tail...>
{
    typedef typename std::conditional<sizeof(Head) == 2, Identity<Head>, ShortTypeFinder<Tail...>>::type::type type;
};

typedef ShortTypeFinder<short, int, long, long long>::type integral_type_with_sizeof_two;

int main()
{
    integral_type_with_sizeof_two x = 0;
    static_assert(sizeof x == 2, "sizeof(x) must be 2");
    static_assert(std::is_same<integral_type_with_sizeof_two, short>::value, "it's short on my machine");
}
If no such type exists, ShortTypeFinder<TYPES>::type will fail to compile.
As long as a byte consists of 8 bits, sizeof(int16_t) will always be 2. There are also types guaranteed to be at least 16 bits wide, and some other interesting types.
http://en.cppreference.com/w/cpp/types/integer
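For illustration, here is the questioner's check rewritten against the exact-width type, a minimal sketch assuming <cstdint> provides uint16_t (the enumerators carry a trailing underscore because LITTLE_ENDIAN and BIG_ENDIAN are often platform macros):
#include <cstdint>

enum endianess { LITTLE_ENDIAN_, BIG_ENDIAN_ };

endianess inspectSystem() {
    static_assert(sizeof(std::uint16_t) == 2, "uint16_t must have size 2");
    std::uint16_t integral = 0x1;
    // On a big-endian machine the most significant byte (zero here) comes first.
    return *reinterpret_cast<char*>(&integral) == 0 ? BIG_ENDIAN_ : LITTLE_ENDIAN_;
}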
Note: I mistakenly asked about static_cast originally; this is why the top answer mentions static_cast at first.
I have some binary files with little endian float values. I want to read them in a machine-independent manner. My byte-swapping routines (from SDL) operate on unsigned integer types.
Is it safe to simply cast between ints and floats?
float read_float() {
    // Read in 4 bytes.
    Uint32 val;
    fread( &val, 4, 1, fp );

    // Swap the bytes to little-endian if necessary.
    val = SDL_SwapLE32(val);

    // Return as a float.
    return reinterpret_cast<float &>( val ); // XXX Is this safe?
}
I want this software to be as portable as possible.
Well, static_cast is "safe" and it has defined behavior, but this is probably not what you need. Converting an integral value to float type will simply attempt to represent the same integral value in the target floating-point type. I.e. 5 of type int will turn into 5.0 of type float (assuming it is representable precisely).
What you seem to be doing is building the object representation of a float value in a piece of memory declared as a Uint32 variable. To produce the resultant float value you need to reinterpret that memory, which would be achieved by reinterpret_cast:
assert(sizeof(float) == sizeof val);
return reinterpret_cast<float &>( val );
or, if you prefer, a pointer version of the same thing
assert(sizeof(float) == sizeof val);
return *reinterpret_cast<float *>( &val );
Note, though, that this sort of type-punning is not guaranteed to work in a compiler that follows strict-aliasing semantics. Another approach would be to do this:
float f;
assert(sizeof f == sizeof val);
memcpy(&f, &val, sizeof f);
return f;
Or you might be able to use the well-known union hack to implement memory reinterpretation. This is formally illegal in C++ (undefined behavior), meaning that this method can only be used with certain implementations that support it as an extension
assert(sizeof(float) == sizeof(Uint32));
union {
    Uint32 val;
    float f;
} u = { val };
return u.f;
In short, the static_cast is incorrect: it converts the integer's value, so the compiler treats the source as an integer at the point of the cast rather than reinterpreting its bits. The union solution presented above works.
Another way to do the same sort of thing as the union would be to use this:
return *reinterpret_cast<float*>( &val );
It is just as safe/unsafe as the union solution above, and I would definitely recommend an assert to make sure float is the same size as int.
I would also warn that there ARE floating point formats that are not IEEE-754 or IEEE-854 compatible (the two standards share the same format for float numbers; I'm not entirely sure what the detailed difference is, to be honest). So if you have a computer that uses a different floating point format, it would fall over. I'm not sure if there is any way to check for that, aside from perhaps having a canned set of bytes stored away somewhere along with the expected values in float, then converting the values and seeing if they come out right.
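To illustrate that "canned bytes" idea, here is a minimal sketch (the function name is my own; 0x3F800000 is the IEEE-754 bit pattern of 1.0f, and the check assumes float and integer types share the machine's byte order, which is almost universally true):
#include <cstdint>
#include <cstring>

bool float_format_looks_ieee754() {
    const std::uint32_t canned = 0x3F800000u; // IEEE-754 encoding of 1.0f
    float f;
    if (sizeof f != sizeof canned)
        return false;
    std::memcpy(&f, &canned, sizeof f);
    return f == 1.0f;
}
For completeness, std::numeric_limits<float>::is_iec559 reports whether the implementation claims IEEE-754 (IEC 559) floating point.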
(As others have said, a reinterpret cast, where the underlying memory is treated as though it's another type, is undefined behaviour because it's up to the C++ implementation how the float is sized/aligned/placed in memory.)
Here's a templated implementation of AnT's memcpy solution, which avoids -Wstrict-aliasing warnings.
I guess this supports implementations where the sizes aren't standard, but still match one of the templated sizes - and then fails to compile if there are no matches.
(compiled with -fstrict-aliasing -Wall, which actually enables -Wstrict-aliasing)
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <type_traits>

template<size_t S> struct sized_uint;
template<> struct sized_uint<sizeof(uint8_t)>  { using type = uint8_t; };
template<> struct sized_uint<sizeof(uint16_t)> { using type = uint16_t; };
template<> struct sized_uint<sizeof(uint32_t)> { using type = uint32_t; };
template<> struct sized_uint<sizeof(uint64_t)> { using type = uint64_t; };
template<size_t S> using sized_uint_t = typename sized_uint<S>::type;

template<class T> sized_uint_t<sizeof(T)> bytesAsUint(T x)
{
    sized_uint_t<sizeof(T)> result;
    // template forces size to match. memcpy handles alignment
    memcpy(&result, &x, sizeof(x));
    return result;
}

template<size_t S> struct sized_float;
template<> struct sized_float<sizeof(float)>  { using type = float; };
template<> struct sized_float<sizeof(double)> { using type = double; };
template<size_t S> using sized_float_t = typename sized_float<S>::type;

template<class T> sized_float_t<sizeof(T)> bytesAsFloat(T x)
{
    sized_float_t<sizeof(T)> result;
    memcpy(&result, &x, sizeof(x));
    return result;
}

// Alt for just 'float'
//template<class T> std::enable_if_t<sizeof(T) == sizeof(float), float> bytesAsFloat(T x)
//{
//    float result;
//    memcpy(&result, &x, sizeof(x));
//    return result;
//}

float readIntAsFloat(uint32_t i)
{
    // error: no matching function for call to 'bytesAsFloat(uint16_t)'
    //return bytesAsFloat((uint16_t)i);
    return bytesAsFloat(i);
}

void printFloat(float f) {
    // warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
    //printf("Float %f: %x", f, reinterpret_cast<unsigned int&>(f));
    printf("Float %f: %x", f, bytesAsUint(f));
}
(godbolt)
Often I have some compile-time constant number that is also the upper limit of possible values assumed by the variables, and thus I'm interested in choosing the smallest type that can accommodate those values. For example I may know that the values will fit into the range <-30 000, 30 000>, so when looking for a suitable type I would start with signed short int. But since I'm switching between platforms and compilers I would like a compile-time assert checking whether the constant upper limit really fits within that type. BOOST_STATIC_ASSERT( sizeof(T) >= required_number_of_bytes_for_number ) works fine but the problem is:
How do I automatically determine the number of bytes required for storing a given compile-time constant, signed or unsigned? I guess a C macro could do this job? Could anyone write it for me?
I might use std::numeric_limits<T>::max() and min() instead of computing the bytes, but then I would have to switch to a run-time assert :(
Now that this is tagged with c++, I suggest using Boost.Integer for appropriate type selection. boost::int_max_value_t< MyConstant >::least would give the type you are looking for.
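A short usage sketch of that suggestion (assuming <boost/integer.hpp>; the analogous boost::int_min_value_t handles the negative bound):
#include <boost/integer.hpp>
#include <boost/static_assert.hpp>

// Smallest built-in signed type that can hold values up to 30000.
typedef boost::int_max_value_t<30000>::least counter_t; // short on typical platforms

BOOST_STATIC_ASSERT(sizeof(counter_t) >= 2); // must be at least 16 bits wide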
You may use the following code. It works only for positive 8/16/32/64-bit integers, but you can make the appropriate changes to support negative values as well.
template <typename T, T x> class TypeFor
{
    template <T n>
    struct BitsRequired {
        static const size_t Value = 1 + BitsRequired<n / 2>::Value;
    };
    template <>
    struct BitsRequired<0> {
        static const size_t Value = 0;
    };

    static const size_t Bits = BitsRequired<x>::Value;
    static const size_t Bytes = (Bits + 7) / 8;
    static const size_t Complexity = 1 + BitsRequired<Bytes - 1>::Value;

    template <size_t c> struct Internal {
    };
    template <> struct Internal<1> {
        typedef UCHAR Type;
    };
    template <> struct Internal<2> {
        typedef USHORT Type;
    };
    template <> struct Internal<3> {
        typedef ULONG Type;
    };
    template <> struct Internal<4> {
        typedef ULONGLONG Type;
    };

public:
    typedef typename Internal<Complexity>::Type Type;
};

TypeFor<UINT, 117>::Type x;
P.S. This compiles under MSVC. Some adjustments will probably be needed to adapt it for gcc/mingw/etc.
How about you avoid the problem:
BOOST_STATIC_ASSERT((1LL << (8*sizeof(T))) >= number);
How about BOOST_STATIC_ASSERT(int(60000)==60000)? This tests whether 60000 fits in an int. If int is 16 bits, int(60000) wraps to a different value (typically -5536), so when it is converted back for the comparison the assertion fails reliably.
I have some structs containing a bitfield, which may vary in size. Example:
struct BitfieldSmallBase {
    uint8_t a : 2;
    uint8_t b : 3;
    ....
};

struct BitfieldLargeBase {
    uint8_t a : 4;
    uint8_t b : 5;
    ....
};
and a union to access all bits at once:
template<typename T>
union Bitfield
{
    T bits;
    uint8_t all; // <------------- Here is the problem

    bool operator & (Bitfield<T> x) const {
        return !!(all & x.all);
    }
    Bitfield<T> operator + (Bitfield<T> x) const {
        Bitfield<T> temp;
        temp.all = all + x.all; // works, because I can assume no overflow will happen
        return temp;
    }
    ....
};
typedef Bitfield<BitfieldSmallBase> BitfieldSmall;
typedef Bitfield<BitfieldLargeBase> BitfieldLarge;
The problem is: for some bitfield base structs, a uint8_t is not sufficient. BitfieldSmall fits into a uint8_t, but BitfieldLarge does not. The data needs to be packed as tightly as possible (it will be handled by SSE instructions later), so always using uint16_t is out of the question. Is there a way to declare the "all" field with an integral type whose size is the same as the bitfield's? Or another way to access the bits as a whole?
I can of course forego the use of the template and declare every kind of bitfield explicitly, but I would like to avoid code repetition (there is quite a list of operators and member functions).
You could make the integral type a template parameter as well.
template<typename T, typename U>
union Bitfield
{
    T bits;
    U all;
};
typedef Bitfield<BitfieldSmallBase, uint8_t> BitfieldSmall;
typedef Bitfield<BitfieldLargeBase, uint16_t> BitfieldLarge;
I've learnt the hard way that whilst specifying bit widths on your variables is a convenient way of getting the compiler to do your masking and shifting for you, you cannot make assumptions about the order and padding of the members in the struct. It's compiler dependent, and the compiler really can change the order depending on the other code in your project.
If you want to treat a byte as discrete fields, you really have to do it the hard way.
You can use template metaprogramming to define a trait that maps BitfieldSmallBase, BitfieldLargeBase, etc. to another type: uint8_t by default, and uint16_t for BitfieldLargeBase via a template specialization. Then use it like this:
template<typename T>
union Bitfield
{
    T bits;
    typename F<T>::holder_type all;
};
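The trait F is left undefined above; a minimal sketch of what it could look like, reusing the questioner's base structs (the name F and the member holder_type follow the snippet):
#include <stdint.h>

template<typename T>
struct F {
    typedef uint8_t holder_type;  // default: one byte is enough
};

template<>
struct F<BitfieldLargeBase> {
    typedef uint16_t holder_type; // the large bitfield needs two bytes
};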
You might want to consider std::bitset or boost::dynamic_bitset rather than rolling your own. In any case, steer clear of std::vector<bool>!
Make the number of bytes you need part of the template parameters:
template <typename T, int S = 1>
struct BitField
{
    union
    {
        T bits;
        unsigned char bytes[S];
    };
};

typedef BitField<BitfieldSmallBase, 1> BitfieldSmall;
typedef BitField<BitfieldLargeBase, 2> BitfieldLarge;
How about this?
#include <limits.h>

template <class T>
union BitField
{
    T bits;
    unsigned all : sizeof(T) * CHAR_BIT;
};
I'm using a well-known template to allow binary constants:
template< unsigned long long N >
struct binary
{
    enum { value = (N % 10) + 2 * binary< N / 10 >::value };
};

template<>
struct binary< 0 >
{
    enum { value = 0 };
};
So you can do something like binary<101011011>::value. Unfortunately this has a limit of 20 digits for an unsigned long long.
Does anyone have a better solution?
Does this work if you have a leading zero on your binary value? A leading zero makes the constant octal rather than decimal.
Which leads to a way to squeeze a couple more digits out of this solution - always start your binary constant with a zero! Then replace the 10's in your template with 8's.
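Here is a sketch of that octal variant (my adaptation of the questioner's template): with a leading zero the literal is parsed as octal, so each binary digit is peeled off with % 8 and / 8 instead of % 10 and / 10.
template< unsigned long long N >
struct binary_octal
{
    enum { value = (N % 8) + 2 * binary_octal< N / 8 >::value };
};

template<>
struct binary_octal< 0 >
{
    enum { value = 0 };
};

// usage, note the leading zero: binary_octal<0101011011>::value == 347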
The approaches I've always used, though not as elegant as yours:
1/ Just use hex. After a while, you just get to know which hex digits represent which bit patterns.
2/ Use constants and OR or ADD them. For example (may need qualifiers on the bit patterns to make them unsigned or long):
#define b0 0x00000001
#define b1 0x00000002
: : :
#define b31 0x80000000
unsigned long x = b2 | b7;
3/ If performance isn't critical and readability is important, you can just do it at runtime with a function such as x = fromBin("101011011"); (a sketch follows after this list).
4/ As a sneaky solution, you could write a pre-pre-processor that goes through your *.cppme files and creates the *.cpp ones by replacing all "0b101011011"-type strings with their equivalent "0x15b" strings. I wouldn't do this lightly, since there are all sorts of tricky combinations of syntax you may have to worry about. But it would allow you to write your string as you want to, without having to worry about the vagaries of the compiler, and you could limit the syntax trickiness by careful coding.
Of course, the next step after that would be patching GCC to recognize "0b" constants, but that may be overkill :-)
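As promised above, a quick sketch of the runtime helper from 3/ (the implementation is mine, and it assumes the string contains only '0' and '1' characters):
unsigned long fromBin(const char* s)
{
    unsigned long value = 0;
    for (; *s != '\0'; ++s)
        value = value * 2 + (*s - '0'); // shift the accumulator left one bit, add the new digit
    return value;
}

// usage: unsigned long x = fromBin("101011011"); // 347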
C++0x has user-defined literals, which could be used to implement what you're talking about.
Otherwise, I don't know how to improve this template.
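For what it's worth, here is a sketch of what the user-defined-literal approach could look like (the suffix name _b is my own choice, and the loop needs C++14's relaxed constexpr; a C++11-only version would use recursion instead):
template <char... Digits>
constexpr unsigned long long operator"" _b()
{
    const char digits[] = { Digits... }; // the literal's characters, e.g. '1','0','1'
    unsigned long long value = 0;
    for (char c : digits)
        value = value * 2 + (c - '0');
    return value;
}

static_assert(101011011_b == 347, "parsed at compile time");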
template<unsigned int p, unsigned int i> struct BinaryDigit
{
    enum { value = p * 2 + i };
    typedef BinaryDigit<value, 0> O;
    typedef BinaryDigit<value, 1> I;
};

struct Bin
{
    typedef BinaryDigit<0, 0> O;
    typedef BinaryDigit<0, 1> I;
};
Allowing:
Bin::O::I::I::O::O::value
much more verbose, but no limits (until you hit the size of an unsigned int of course).
You can add more non-type template parameters to "simulate" additional bits:
// Utility metafunction used by top_bit<N>.
template <unsigned long long N1, unsigned long long N2>
struct compare {
    enum { value = N1 > N2 ? N1 >> 1 : compare<N1 << 1, N2>::value };
};

// This is hit when N1 grows beyond the size representable
// in an unsigned long long. Its value is never actually used.
template<unsigned long long N2>
struct compare<0, N2> {
    enum { value = 42 };
};

// Determine the highest 1-bit in an integer. Returns 0 for N == 0.
template <unsigned long long N>
struct top_bit {
    enum { value = compare<1, N>::value };
};

template <unsigned long long N1, unsigned long long N2 = 0>
struct binary {
    enum {
        value =
            (top_bit<binary<N2>::value>::value << 1) * binary<N1>::value +
            binary<N2>::value
    };
};

template <unsigned long long N1>
struct binary<N1, 0> {
    enum { value = (N1 % 10) + 2 * binary<N1 / 10>::value };
};

template <>
struct binary<0> {
    enum { value = 0 };
};
You can use this as before, e.g.:
binary<1001101>::value
But you can also use the following equivalent forms:
binary<100,1101>::value
binary<1001,101>::value
binary<100110,1>::value
Basically, the extra parameter gives you another 20 bits to play with. You could add even more parameters if necessary.
Because the place value of the second number is used to figure out how far to the left the first number needs to be shifted, the second number must begin with a 1. (This is required anyway, since starting it with a 0 would cause the number to be interpreted as an octal number.)
Technically it is neither C nor C++, it is a GCC-specific extension, but GCC allows binary constants as seen here:
The following statements are identical:
i = 42;
i = 0x2a;
i = 052;
i = 0b101010;
Hope that helps. Some Intel compilers, and I am sure others, implement some of the GNU extensions. Maybe you are lucky.
A simple #define works very well:
#define HEX__(n) 0x##n##LU
#define B8__(x) ((x&0x0000000FLU)?1:0)\
+((x&0x000000F0LU)?2:0)\
+((x&0x00000F00LU)?4:0)\
+((x&0x0000F000LU)?8:0)\
+((x&0x000F0000LU)?16:0)\
+((x&0x00F00000LU)?32:0)\
+((x&0x0F000000LU)?64:0)\
+((x&0xF0000000LU)?128:0)
#define B8(d) ((unsigned char)B8__(HEX__(d)))
#define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8) + B8(dlsb))
#define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24) + ((unsigned long)B8(db2)<<16) + ((unsigned long)B8(db3)<<8) + B8(dlsb))
B8(11100111)
B16(10011011,10011011)
B32(10011011,10011011,10011011,10011011)
Not my invention, I saw it on a forum a long time ago.