I'm using a well-known template to allow binary constants:
template< unsigned long long N >
struct binary
{
    enum { value = (N % 10) + 2 * binary< N / 10 >::value };
};

template<>
struct binary< 0 >
{
    enum { value = 0 };
};
So you can do something like binary<101011011>::value. Unfortunately this has a limit of 20 digits for an unsigned long long.
Does anyone have a better solution?
Does this work if you have a leading zero on your binary value? A leading zero makes the constant octal rather than decimal.
Which leads to a way to squeeze a couple more digits out of this solution: always start your binary constant with a zero! Then replace the 10's in your template with 8's.
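For illustration, the octal-based variant would look like this (a sketch; constants must then always be written with a leading zero so the literal is parsed as octal):

template< unsigned long long N >
struct binary
{
    // Divide by 8 instead of 10: each written digit now costs 3 bits of N
    // instead of roughly 3.3, e.g. binary<0101011011>::value.
    enum { value = (N % 8) + 2 * binary< N / 8 >::value };
};

template<>
struct binary< 0 >
{
    enum { value = 0 };
};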
The approaches I've always used, though not as elegant as yours:
1/ Just use hex. After a while, you just get to know which hex digits represent which bit patterns.
2/ Use constants and OR or ADD them. For example (may need qualifiers on the bit patterns to make them unsigned or long):
#define b0 0x00000001
#define b1 0x00000002
: : :
#define b31 0x80000000
unsigned long x = b2 | b7;
3/ If performance isn't critical and readability is important, you can just do it at runtime with a function such as "x = fromBin("101011011");" (a sketch of such a function follows this list).
4/ As a sneaky solution, you could write a pre-pre-processor that goes through your *.cppme files and creates the *.cpp ones by replacing all "0b101011011"-type strings with their equivalent "0x15b" strings. I wouldn't do this lightly, since there are all sorts of tricky combinations of syntax you may have to worry about. But it would allow you to write your string as you want to without having to worry about the vagaries of the compiler, and you could limit the syntax trickiness by careful coding.
Of course, the next step after that would be patching GCC to recognize "0b" constants, but that may be overkill :-)
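Here is a minimal sketch of the runtime helper mentioned in option 3/ (fromBin is a hypothetical name, not a standard function):

#include <string>

// Parses a string of '0'/'1' characters into an integer at runtime.
unsigned long long fromBin(const std::string& s)
{
    unsigned long long result = 0;
    for (std::string::size_type i = 0; i < s.size(); ++i)
        result = result * 2 + (s[i] - '0');  // assumes only '0' and '1' digits
    return result;
}

// usage: unsigned long x = (unsigned long)fromBin("101011011");  // == 0x15B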
C++0x has user-defined literals, which could be used to implement what you're talking about.
Otherwise, I don't know how to improve this template.
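For illustration, here is a sketch of what such a literal could look like under C++11 rules (the parse_binary helper and the _b suffix are my own names, not an established API):

// Raw literal operator: receives the token as a string of characters.
constexpr unsigned long long parse_binary(const char* s,
                                          unsigned long long acc = 0)
{
    return *s == '\0' ? acc : parse_binary(s + 1, acc * 2 + (*s - '0'));
}

constexpr unsigned long long operator"" _b(const char* s)
{
    return parse_binary(s);
}

// usage: 101011011_b == 0x15B, with no digit limit beyond the
// 64 bits of unsigned long long itself.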
template<unsigned int p, unsigned int i> struct BinaryDigit
{
    enum { value = p * 2 + i };
    typedef BinaryDigit<value, 0> O;
    typedef BinaryDigit<value, 1> I;
};

struct Bin
{
    typedef BinaryDigit<0, 0> O;
    typedef BinaryDigit<0, 1> I;
};
Allowing:
Bin::O::I::I::O::O::value
Much more verbose, but with no limits (until you hit the size of an unsigned int, of course).
You can add more non-type template parameters to "simulate" additional bits:
// Utility metafunction used by top_bit<N>.
template <unsigned long long N1, unsigned long long N2>
struct compare {
    enum { value = N1 > N2 ? N1 >> 1 : compare<(N1 << 1), N2>::value };
};

// This is hit when N1 grows beyond the size representable
// in an unsigned long long. Its value is never actually used.
template <unsigned long long N2>
struct compare<0, N2> {
    enum { value = 42 };
};

// Determine the highest 1-bit in an integer. Returns 0 for N == 0.
template <unsigned long long N>
struct top_bit {
    enum { value = compare<1, N>::value };
};

template <unsigned long long N1, unsigned long long N2 = 0>
struct binary {
    enum {
        value = (top_bit<binary<N2>::value>::value << 1) * binary<N1>::value +
                binary<N2>::value
    };
};

template <unsigned long long N1>
struct binary<N1, 0> {
    enum { value = (N1 % 10) + 2 * binary<N1 / 10>::value };
};

template <>
struct binary<0> {
    enum { value = 0 };
};
You can use this as before, e.g.:
binary<1001101>::value
But you can also use the following equivalent forms:
binary<100,1101>::value
binary<1001,101>::value
binary<100110,1>::value
Basically, the extra parameter gives you another 20 bits to play with. You could add even more parameters if necessary.
Because the place value of the second number is used to figure out how far to the left the first number needs to be shifted, the second number must begin with a 1. (This is required anyway, since starting it with a 0 would cause the number to be interpreted as an octal number.)
Technically it is neither C nor C++; it is a GCC-specific extension. But GCC allows binary constants, as seen here:
The following statements are identical:
i = 42;
i = 0x2a;
i = 052;
i = 0b101010;
Hope that helps. Some Intel compilers, and I am sure others, implement some of the GNU extensions. Maybe you are lucky.
A simple #define works very well:
#define HEX__(n) 0x##n##LU
#define B8__(x) ((x&0x0000000FLU)?1:0)\
+((x&0x000000F0LU)?2:0)\
+((x&0x00000F00LU)?4:0)\
+((x&0x0000F000LU)?8:0)\
+((x&0x000F0000LU)?16:0)\
+((x&0x00F00000LU)?32:0)\
+((x&0x0F000000LU)?64:0)\
+((x&0xF0000000LU)?128:0)
#define B8(d) ((unsigned char)B8__(HEX__(d)))
#define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8) + B8(dlsb))
#define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24) + ((unsigned long)B8(db2)<<16) + ((unsigned long)B8(db3)<<8) + B8(dlsb))
B8(01110011)
B16(10011011,10011011)
B32(10011011,10011011,10011011,10011011)
Not my invention, I saw it on a forum a long time ago.
Related
My task is to create a class that implements a floating-point number.
The size of the class must be exactly 3 bytes:
1 bit for the sign
6 bits for exponent
17 bits for mantissa
I tried to implement the class using bit fields, but the size is 4 bytes:
class FloatingPointNumber
{
private:
    unsigned int sign : 1;
    unsigned int exponent : 6;
    unsigned int mantissa : 17;
};
C++ (and C for that matter) compilers are permitted to insert and append any amount of padding into a struct as they see fit. So if your task specifies that it must be exactly 3 bytes, then this cannot be done with a struct (or class) using just standard language elements.
Using compiler-specific attributes or pragmas, you can force the compiler to not insert padding; however, for bit-fields the compiler might still need to fill up gaps due to type alignment requirements.
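For example, with GCC or Clang you might try the packed attribute (a compiler-specific sketch; the resulting size is an assumption, not a guarantee):

// GCC/Clang-specific: __attribute__((packed)) asks the compiler to drop
// padding. With these bit-fields GCC typically yields sizeof == 3, but
// the standard does not require that.
struct PackedFloatingPointNumber {
    unsigned int sign : 1;
    unsigned int exponent : 6;
    unsigned int mantissa : 17;
} __attribute__((packed));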
For this specific task your best bet probably is to use a class like this
class CustomFloat {
protected: // or private: as per #paddy's comment
    unsigned char v[3];
};
…and hoping for the compiler not to append some padding bytes.
The surefire way would be simply to
typedef char CustomFloat[3];
and accept, that you'll not enjoy static type checking benefits whatsoever.
And then for each operation use a form of type punning to transfer the contents of v into an (at least 32-bit-wide) variable, unpack the bits from there, perform the desired operation, pack the bits and transfer them back into v. E.g. something like this:
uint32_t u = 0;
static_assert( sizeof(u) >= sizeof(v) );
memcpy((void*)&u, (void const*)v, sizeof(v));  // memcpy(dest, src, size)

unsigned sign = (u & SIGN_MASK) >> SIGN_SHIFT;
unsigned mant = (u & MANT_MASK) >> MANT_SHIFT;
unsigned expt = (u & EXPT_MASK) >> EXPT_SHIFT;

// perform operation

u = 0;
u |= (sign << SIGN_SHIFT) & SIGN_MASK;
u |= (mant << MANT_SHIFT) & MANT_MASK;
u |= (expt << EXPT_SHIFT) & EXPT_MASK;

memcpy((void*)v, (void const*)&u, sizeof(v));
Yes, this looks ugly. Yes, it is quite verbose. But that's what's going to happen under the hood anyway, so you might just as well write it down.
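For completeness, here is one possible set of masks and shifts for the 1/6/17 layout (the bit ordering chosen here is an assumption):

#include <cstdint>

// Assumed layout: mantissa in bits 0-16, exponent in bits 17-22, sign in bit 23.
constexpr std::uint32_t MANT_SHIFT = 0;
constexpr std::uint32_t MANT_MASK  = 0x1FFFFu << MANT_SHIFT;  // 17 bits
constexpr std::uint32_t EXPT_SHIFT = 17;
constexpr std::uint32_t EXPT_MASK  = 0x3Fu << EXPT_SHIFT;     // 6 bits
constexpr std::uint32_t SIGN_SHIFT = 23;
constexpr std::uint32_t SIGN_MASK  = 0x1u << SIGN_SHIFT;      // 1 bit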
I need a function which can take a string as input and generate a hash code from it. Currently, in C++ we have std::hash to do this, but it returns a hash code of type size_t (unsigned long long). Here, I need a hash function which can give me a hash code of type signed long long.
I have also tried using the modulus operator, but that gives me negative values, and those are not reliable. Hence, please advise me on a hash function I can use in C++ so that I get a hash code of type signed long long.
I need a hash function which can give me the hash code of type signed long long.
You could just set the most significant bit to 0. Unless your architecture has a weird internal representation of integral types, this will produce a positive number when converted to a signed type of the same size.
template <class S>
constexpr size_t clamp_to_positive(size_t value)
{
    return value & (std::numeric_limits<size_t>::max() >>
                    (std::numeric_limits<size_t>::digits - std::numeric_limits<S>::digits));
}
You can then call it as
auto my_hash = clamp_to_positive<long long>(std::hash<std::string>{}(source_string));
As noted by Ben Voigt, though, the easiest way is to just right-shift the unsigned value by one.
auto my_hash = static_cast<long long>(std::hash<std::string>{}(source_string) >> 1);
Another way to tackle this problem is to force the modulo operation to always return a positive value.
// Evaluates abs(x) as an unsigned type, avoiding corner case overflow.
template <class T>
constexpr auto unsigned_abs(T x)
{
    static_assert(std::is_integral_v<T>);
    if constexpr (std::is_unsigned_v<T>) {
        return x;
    } else {
        return x < 0
            ? ~static_cast<std::make_unsigned_t<T>>(x) + 1
            : static_cast<std::make_unsigned_t<T>>(x);
    }
}

// Evaluates abs(x) % abs(y), avoiding overflows. The result has the same type
// as y and always satisfies 0 <= result < abs(y). It has UB when y == 0.
template <class X, class Y>
auto absolute_remainder(X x, Y y)
{
    static_assert( std::is_integral_v<X> && std::is_integral_v<Y> );
    return static_cast<Y>(unsigned_abs(x) % unsigned_abs(y));
}
Whether you really need one of those or maybe a change in the current design of your program is left to you to figure out.
I need to define a struct which has data members of size 2 bits and 6 bits.
Should I use the char type for each member? Or, in order not to waste memory, can I use something like the :2 / :6 bit-field notation? How can I do that?
Can I define a typedef for a 2-bit or 6-bit type?
You can use something like:
typedef struct {
    unsigned char SixBits : 6;
    unsigned char TwoBits : 2;
} tEightBits;
and then use:
tEightBits eight;
eight.SixBits = 31;
eight.TwoBits = 3;
But, to be honest, unless you're having to comply with packed data external to your application, or you're in a very memory constrained situation, this sort of memory saving is not usually worth it. You'll find your code is a lot faster if it's not having to pack and unpack data all the time with bitwise and bitshift operations.
Also keep in mind that the use of any bit-field type other than _Bool, signed int or unsigned int is implementation-defined. Specifically, unsigned char may not work everywhere.
It's probably best to use uint8_t for something like this. And yes, use bit fields:
struct tiny_fields
{
    uint8_t twobits : 2;
    uint8_t sixbits : 6;
};
I don't think you can be sure that the compiler will pack this into a single byte, though. Also, you can't know how the bits are ordered within the byte(s) that values of the struct type occupy. It's often better to use explicit masks, if you want more control.
Personally I prefer shift operators and some macros over bit fields, so there's no "magic" left for the compiler. It is usual practice in the embedded world.
#define SET_VAL2BIT(_var, _val) ( (_var) |= ((_val) & 3) )
#define SET_VAL6BIT(_var, _val) ( (_var) |= (((_val) & 63) << 2) )
#define GET_VAL2BIT(_var) ( (_var) & 3 )
#define GET_VAL6BIT(_var) ( ((_var) >> 2) & 63 )
static uint8_t my_var;
<...>
SET_VAL2BIT(my_var, 1);
SET_VAL6BIT(my_var, 5);
int a = GET_VAL2BIT(my_var); /* a == 1 */
int b = GET_VAL6BIT(my_var); /* b == 5 */
Often I have some compile-time constant number that is also the upper limit of possible values assumed by the variables. And thus I'm interested in choosing the smallest type that can accommodate those values. For example I may know that variables will fit into the <-30 000, 30 000> range, so when looking for a suitable type I would start with signed short int. But since I'm switching between platforms and compilers I would like a compile-time assert checking whether the constant upper values really fit within that type. BOOST_STATIC_ASSERT( sizeof(T) >= required_number_of_bytes_for_number ) works fine, but the problem is:
How to automatically determine the number of bytes required for storing a given compile-time constant, signed or unsigned? I guess a C macro could do this job? Could anyone write it for me?
I might use std::numeric_limits<T>::max() and min() instead of computing the bytes, but then I would have to switch to a run-time assert :(
Now that this is tagged with C++, I suggest using Boost.Integer for appropriate type selection. boost::int_max_value_t< MyConstant >::least would give the type you are looking for.
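A sketch of how that looks in practice (assuming Boost is available):

#include <boost/integer.hpp>
#include <boost/static_assert.hpp>

// Smallest built-in signed type that can hold values up to 30000:
typedef boost::int_max_value_t<30000>::least small_t;
BOOST_STATIC_ASSERT(sizeof(small_t) >= 2);  // needs at least 16 bits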
You may use the following code. It works only for positive 8/16/32/64-bit integers, but you may make the appropriate changes for negative values as well.
template <typename T, T x> class TypeFor
{
    template <T n>
    struct BitsRequired {
        static const size_t Value = 1 + BitsRequired<n / 2>::Value;
    };
    template <>
    struct BitsRequired<0> {
        static const size_t Value = 0;
    };

    static const size_t Bits = BitsRequired<x>::Value;
    static const size_t Bytes = (Bits + 7) / 8;
    static const size_t Complexity = 1 + BitsRequired<Bytes - 1>::Value;

    template <size_t c> struct Internal { };
    template <> struct Internal<1> { typedef UCHAR Type; };
    template <> struct Internal<2> { typedef USHORT Type; };
    template <> struct Internal<3> { typedef ULONG Type; };
    template <> struct Internal<4> { typedef ULONGLONG Type; };

public:
    typedef typename Internal<Complexity>::Type Type;
};
TypeFor<UINT, 117>::Type x;
P.S. this compiles under MSVC (the explicit specializations nested inside the class are an MSVC extension). Probably some adjustment should be done to adapt it for gcc/mingw/etc.
How about you avoid the problem:
BOOST_STATIC_ASSERT((1LL << (8*sizeof(T))) >= number);
How about BOOST_STATIC_ASSERT(int(60000) == 60000)? This will test whether 60000 fits in an int. If int is 16 bits, the conversion overflows; on a typical two's-complement implementation int(60000) is -5536. For the comparison it is then converted back to a wider type, still compares unequal to 60000, and the assertion fails reliably.
I need to convert time from one format to another in C++ and it must be cross-platform compatible. I have created a structure as my time container. The structure fields must also be unsigned int as specified by legacy code.
struct time {
    unsigned int timeInteger;
    unsigned int timeFraction;
} time1, time2;
Mathematically the conversion is as follows:
time2.timeInteger = time1.timeInteger + 2208988800
time2.timeFraction = (time1.timeFraction * 20e-6) * 2e32
Here is my original code in C++; however, when I attempt to write to a binary file, the converted time does not match the truth data. I think this problem is due to a type-casting mistake? This code compiles and executes in VS2008.
void convertTime() {
    time2.timeInteger = unsigned int(time1.timeInteger + 2209032000);
    time2.timeFraction = unsigned int(double(time1.timeFraction) * double(20e-6) * double(pow(double(2), 32)));
}
Just a guess, but are you assuming that 2e32 == 2^32? This assumption would make sense if you're trying to scale the result into a 32 bit integer. In fact 2e32 == 2 * 10^32
Slightly unrelated, I think you should rethink your type design. You are basically talking about two different types here. They happen to store the same data, albeit in different formats.
To minimize errors in their usage, you should define them as two completely distinct types that have a well-defined conversion between them.
Consider for example:
struct old_time {
    unsigned int timeInteger;
    unsigned int timeFraction;
};

struct new_time {
public:
    new_time(unsigned int ti, unsigned int tf)
        : timeInteger(ti), timeFraction(tf) { }

    new_time(new_time const& other)
        : timeInteger(other.timeInteger),
          timeFraction(other.timeFraction) { }

    new_time(old_time const& other)
        : timeInteger(other.timeInteger + 2209032000U),
          timeFraction(other.timeFraction * conversion_factor) { }

    operator old_time() const {
        old_time other;
        other.timeInteger = timeInteger - 2209032000U;
        other.timeFraction = timeFraction / conversion_factor;
        return other;
    }

private:
    unsigned int timeInteger;
    unsigned int timeFraction;
};
(EDIT: of course this code doesn't work, for the reasons pointed out below.)
Now this code can be used frictionlessly in a safe way:
old_time told; /* initialize … */
new_time tnew = told; // converts old to new format
old_time back = tnew; // … and back.
The problem is that (20e-6) * (2e32) is far bigger than UINT_MAX. Maybe you meant 2 to the power of 32, or UINT_MAX, rather than 2e32.
In addition, in your first line with the integer, the initial value must be less than (2^32 - 2209032000), and depending on what it is measured in, it could wrap around too. In my opinion, make the first value a long long (normally 64 bits) and change 2e32.
If you can't change the type, then it may become necessary to store the intermediate result in a double, say, and then cast to unsigned int before use.
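Following that advice, a corrected sketch might look like this (assuming 2^32 was intended rather than 2e32, and keeping the constants from the question):

#include <cstdint>

void convertTime() {
    // A 64-bit intermediate avoids silent wrap-around during the addition.
    uint64_t seconds = static_cast<uint64_t>(time1.timeInteger) + 2209032000ULL;
    time2.timeInteger = static_cast<unsigned int>(seconds);

    // Scale by 2^32 == 4294967296.0, not 2e32. This assumes the scaled
    // fraction stays below 1.0, otherwise the cast overflows.
    double fraction = static_cast<double>(time1.timeFraction) * 20e-6;
    time2.timeFraction = static_cast<unsigned int>(fraction * 4294967296.0);
}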