value of MSB in template argument (an unsigned integer) - templates

I am writing a templated function that accepts unsigned integers. In the function, I need the value of the most significant bit. For example, for std::uint8_t I would need a value of 0x80, for std::uint16_t a value of 0x8000 etc.
What is the best way to obtain the value of the MSB? I hoped for std::numeric_limits, but did not find a suitable trait there.
I came up with the solution below, but wonder whether some solution exists that does not require sizeof():
template<typename InputIterator, typename OutputIterator>
void foo(InputIterator it1, InputIterator it2, OutputIterator it3) {
    using datatype = typename std::iterator_traits<InputIterator>::value_type;
    assert(std::is_unsigned_v<datatype> && std::is_integral_v<datatype>);
    constexpr datatype msb = static_cast<datatype>(1u) << (8 * sizeof(datatype) - 1);
    ...
}
(constexpr datatype msb = (std::numeric_limits<datatype>::max()>>1)+1 would also work, but IMO also is not super-readable)
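One alternative worth mentioning: std::numeric_limits does expose digits, which for an unsigned type is its number of value bits, so the MSB can be computed without sizeof() or the assumption of 8-bit bytes. A minimal sketch (msb_value is a hypothetical helper name, not from the question):

```cpp
#include <cstdint>
#include <limits>
#include <type_traits>

// std::numeric_limits<T>::digits is the number of value bits of an unsigned
// integer type, so shifting 1 into the top position needs no sizeof().
template <typename T>
constexpr T msb_value()
{
    static_assert(std::is_unsigned<T>::value && std::is_integral<T>::value,
                  "unsigned integer types only");
    return static_cast<T>(T{1} << (std::numeric_limits<T>::digits - 1));
}
```

For example, msb_value<std::uint8_t>() yields 0x80 and msb_value<std::uint16_t>() yields 0x8000, matching the values asked for above.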

Related

How to create a template function that accepts a vector of values, with the vector type specified?

Weird question but let me explain. I'm creating a Deserializer that needs to have specialized functions for Deserializing different types, including primitives, arrays, and vectors. An example of this is the integral function, which looks like this:
/** Reads an integral. */
template<typename T, std::enable_if_t<std::is_integral<T>::value, bool> = true>
inline T read() {
    ind += sizeof(T);
    return *reinterpret_cast<const T*>(&data[ind - sizeof(T)]);
}
When I tried to add vector support, I ran into a problem. Ideally, I'd like to write a specialization for vectors containing integral types, and I initially thought I could do it like this:
/** Reads a vector of integrals. */
template<typename T, std::enable_if_t<std::is_integral<T>::value, bool> = true>
inline std::vector<T> read() {
    size_t len = read<size_t>();
    auto startInd = ind;
    ind += len * sizeof(T);
    return std::vector<T>(
        reinterpret_cast<const T*>(&data[startInd]),
        reinterpret_cast<const T*>(&data[ind]));
}
but then a problem occurs where trying to read a vector of int has the same signature as trying to read a single int, read<int>().
To fix this, I want to make it so that the vector signature looks like this: read<std::vector<int>>(), but I can't figure out how to do this. Is there a way to require the vector type in the template argument, but still get the inner type it uses for use in the function?
Thanks!!
Yes, you can pass std::vector itself as the template argument and get the element type from std::vector::value_type. E.g.
template<typename V, std::enable_if_t<std::is_integral<typename V::value_type>::value, bool> = true>
//                                    ^^^^^^^^^^^^^^^^^^^^^^ check the element type
inline V read() {
    using T = typename V::value_type; // get the element type
    ...
}
Then you can call it as read<std::vector<int>>().
BTW: this works not only for std::vector but for any container with a nested value_type that is an integral type.
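To make the dispatch concrete, here is a hedged, minimal sketch of a deserializer along the lines described above. The is_std_vector trait and the memcpy-based reads are my additions (memcpy avoids the alignment pitfalls of reinterpret_cast); the length-prefix layout follows the question.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Trait: true only when the whole requested type is a std::vector.
template <typename T> struct is_std_vector : std::false_type {};
template <typename T, typename A>
struct is_std_vector<std::vector<T, A>> : std::true_type {};

struct Deserializer {
    const std::uint8_t* data;
    std::size_t ind = 0;

    /** Reads an integral. */
    template <typename T,
              std::enable_if_t<std::is_integral<T>::value, bool> = true>
    T read() {
        T out;
        std::memcpy(&out, data + ind, sizeof(T)); // alignment-safe copy
        ind += sizeof(T);
        return out;
    }

    /** Reads a vector of integrals, length-prefixed as in the question. */
    template <typename V,
              std::enable_if_t<is_std_vector<V>::value, bool> = true>
    V read() {
        using T = typename V::value_type;   // recover the element type
        auto len = read<std::size_t>();
        V out(len);
        std::memcpy(out.data(), data + ind, len * sizeof(T));
        ind += len * sizeof(T);
        return out;
    }
};
```

read<int>() and read<std::vector<int>>() now select different overloads, because exactly one enable_if condition holds for each requested type.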

C++ template function to compare any unsigned and signed integers

I would like to implement a template function that compares two variables of two types (T1 and T2). These types are two random unsigned or signed integer types.
To be able to compare them correctly I need to cast both of them to a 'bigger' integer type (T3). Promotion rules for signed/unsigned comparison unfortunately always promote to the unsigned type.
So how can I find a type T3 in C++11/C++14/C++17 that covers two integer types T1 and T2, no matter which size and signedness they have?
If this isn't possible, is there an other solution to build a template based comparison function that works reliably with any integer combination?
You can split the comparison into parts. First check whether one number is negative and the other non-negative; if so, you already know the order. If neither is negative (or both are), just do a normal comparison.
This can be built into a template function that only performs the negativity check for signed types.
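A sketch of that sign-splitting idea (C++20 ships it as std::cmp_less in <utility>; safe_less here is a hypothetical name). It returns true iff a < b mathematically, for any mix of signed and unsigned integer types:

```cpp
#include <type_traits>

template <typename T1, typename T2>
constexpr bool safe_less(T1 a, T2 b)
{
    static_assert(std::is_integral_v<T1> && std::is_integral_v<T2>,
                  "integer types only");
    if constexpr (std::is_signed_v<T1> == std::is_signed_v<T2>) {
        return a < b; // same signedness: the built-in comparison is exact
    } else if constexpr (std::is_signed_v<T1>) {
        // a signed, b unsigned: a negative a is smaller than any b
        return a < 0 || std::make_unsigned_t<T1>(a) < b;
    } else {
        // a unsigned, b signed: a negative b is smaller than any a
        return b >= 0 && a < std::make_unsigned_t<T2>(b);
    }
}
```

Note that this sidesteps the search for a common "bigger" type T3 entirely, which matters because no such type exists for, say, int64_t vs uint64_t.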
I am not sure I understand your question. Do you mean something like this:
#include <cstdint>
#include <type_traits>

template <typename P, typename Q>
auto compare(P p, Q q) {
    using T = typename std::common_type<P, Q>::type;
    T promoted_p{p};
    T promoted_q{q};
    if (promoted_p < promoted_q) {
        return -1;
    }
    else if (promoted_p > promoted_q) {
        return 1;
    }
    else {
        return 0;
    }
}
It will work when safe to do so, and you can add your specializations if the language is not doing what you want.

What is the safest way to convert long integer into array of chars

Right now I have this code:
uint64_t buffer = 0;
const uint8_t * data = reinterpret_cast<uint8_t*>(&buffer);
And this works, but it seems risky due to the naked pointer (and looks ugly too). I don't want naked pointers sitting around. I want to do something like this:
uint64_t buffer = 0;
const std::array<uint8_t, 8> data = partition_me_softly(buffer);
Is there a C++11-style construct that lets me get this into a safe container, preferably a std::array, from an unsigned int like this without inducing overhead?
If not, what would be the ideal way to improve this code to be more safe?
So I modified dauphic's answer to be a little more generic:
template <typename T, typename U>
std::array<T, sizeof(U) / sizeof(T)> ScalarToByteArray(const U v)
{
    static_assert(std::is_integral<T>::value && std::is_integral<U>::value,
                  "Template parameters must be integral types");
    std::array<T, sizeof(U) / sizeof(T)> ret;
    std::copy((T*)&v, ((T*)&v) + sizeof(U) / sizeof(T), ret.begin());
    return ret;
}
This way you can use it with more types like so:
uint64_t buffer = 0;
ScalarToByteArray<uint8_t>(buffer);
If you want to store an integer in a byte array, the best approach is probably to just cast the integer to a uint8_t* and copy it into an std::array. You're going to have to use raw pointers at some point, so your best option is to encapsulate the operation into a function.
template<typename T>
std::array<uint8_t, sizeof(T)> ScalarToByteArray(const T value)
{
    static_assert(std::is_integral<T>::value,
                  "Template parameter must be an integral type");
    std::array<uint8_t, sizeof(T)> result;
    std::copy((uint8_t*)&value, ((uint8_t*)&value) + sizeof(T), result.begin());
    return result;
}
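For completeness, a memcpy-based variant (to_byte_array is a hypothetical name): it avoids the aliasing casts entirely, and compilers optimise the fixed-size memcpy away, so there is no overhead. As with the casting versions, the byte order is the host endianness.

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <type_traits>

template <typename T>
std::array<std::uint8_t, sizeof(T)> to_byte_array(const T value)
{
    static_assert(std::is_integral<T>::value,
                  "Template parameter must be an integral type");
    std::array<std::uint8_t, sizeof(T)> result;
    // memcpy into the array's storage; no pointer ever outlives this call
    std::memcpy(result.data(), &value, sizeof(T));
    return result;
}
```

A round trip through the array reproduces the original integer, which makes the function easy to sanity-check.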

Type inference for template functions with templated parameters

What is the proper form (if there is one) for writing template functions whose arguments are templated containers?
For example I want to write a generic sum, which will work on any container which can be iterated over. Given the code below I have to write for example sum<int>(myInts). I would prefer to just write sum(myInts) and the type to be inferred from the type which myInts contains.
/**
@brief Summation for iterable containers of numerical type
@tparam N Numerical type (that can be summed)
@tparam cN Container of N
@param[in] container Container holding the values, e.g. a vector of doubles
@return The sum (i.e. total)
*/
template<typename N, typename cN>
N sum(cN container) {
    N total{};
    for (N& value : container) {
        total += value;
    }
    return total;
}
I write such a function like this
template<typename IT1>
typename std::iterator_traits<IT1>::value_type // or decltype!
function(IT1 first, const IT1 last)
{
    while (first != last)
    {
        // your stuff here; auto and decltype are your friends.
        ++first;
    }
    return // whatever
}
This way it will work with more than just containers, for example ostream iterators and directory iterators.
Call like
function(std::begin(container), std::end(container));
This, even if cumbersome, can do the trick in C++11:
template <typename C>
auto sum( C const & container )
    -> typename std::decay<decltype( *std::begin(container) )>::type
Another option is just using the same structure that accumulate does: have the caller pass an extra argument with the initial value and use that to control the result of the expression:
template<typename N, typename cN>
N sum(cN container, N initial_value = N())
(By providing a default value, the user can call it with a value or else provide the template argument explicitly, but this requires the type N to be default-constructible.)
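Putting the suggestions above together, here is a hedged C++11 sketch: the element type is deduced from the container via decltype/decay, and the accumulator is value-initialised so numeric types start at zero. It can be called simply as sum(myInts).

```cpp
#include <cassert>
#include <iterator>
#include <type_traits>
#include <vector>

// Element type deduced from the container; no explicit template argument needed.
template <typename C>
auto sum(const C& container)
    -> typename std::decay<decltype(*std::begin(container))>::type
{
    // value-initialised accumulator: zero for arithmetic types
    typename std::decay<decltype(*std::begin(container))>::type total{};
    for (const auto& value : container)
        total += value;
    return total;
}
```

The decay strips the reference and const that decltype(*it) carries, leaving the plain value type.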

Unordered (hash) map from bitset to bitset on boost

I want to use a cache, implemented by boost's unordered_map, from a dynamic_bitset to a dynamic_bitset. The problem, of course, is that there is no default hash function from the bitset. It doesn't seem to be like a conceptual problem, but I don't know how to work out the technicalities. How should I do that?
I found an unexpected solution. It turns out boost has an option to #define BOOST_DYNAMIC_BITSET_DONT_USE_FRIENDS. When this is defined, private members including m_bits become public (I think it's there to deal with old compilers or something).
So now I can use #KennyTM's answer, changed a bit:
namespace boost {
    template <typename B, typename A>
    std::size_t hash_value(const boost::dynamic_bitset<B, A>& bs) {
        return boost::hash_value(bs.m_bits);
    }
}
There's the to_block_range function that copies the words the bitset consists of into a buffer. To avoid the actual copy, you could define your own "output iterator" that just processes the individual words and computes a hash from them. Regarding how to compute the hash: see e.g. the FNV hash function.
Unfortunately, the design of dynamic_bitset is IMHO, braindead because it does not give you direct access to the underlying buffer (not even as const).
It is a feature request.
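The FNV function mentioned above can be sketched as follows; this is a generic 64-bit FNV-1a over a byte range, the kind of routine a custom output iterator could feed block by block (the constants are the standard FNV-1a 64-bit offset basis and prime):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

inline std::uint64_t fnv1a(const std::uint8_t* data, std::size_t n)
{
    std::uint64_t h = 14695981039346656037ull; // FNV offset basis
    for (std::size_t i = 0; i < n; ++i) {
        h ^= data[i];              // xor in the next byte
        h *= 1099511628211ull;     // multiply by the FNV prime
    }
    return h;
}
```

An empty input hashes to the offset basis; different inputs of the same length produce different hashes with high probability.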
One could implement a not-so-efficient unique hash by converting the bitset to a vector temporary:
namespace boost {
    template <typename B, typename A>
    std::size_t hash_value(const boost::dynamic_bitset<B, A>& bs) {
        std::vector<B, A> v;
        boost::to_block_range(bs, std::back_inserter(v));
        return boost::hash_value(v);
    }
}
We can't directly calculate the hash because the underlying data in dynamic_bitset is private (m_bits).
But we can easily finesse past (subvert!) the C++ access specification system without either
hacking at the code or
pretending your compiler is non-conforming (BOOST_DYNAMIC_BITSET_DONT_USE_FRIENDS).
The key is the template function to_block_range which is a friend to dynamic_bitset. Specialisations of this function, therefore, also have access to its private data (i.e. m_bits).
The resulting code couldn't be simpler
namespace boost {
    // specialise to_block_range for size_t& to return the hash of the underlying data
    template <>
    inline void
    to_block_range(const dynamic_bitset<>& bs, size_t& hash_result)
    {
        hash_result = boost::hash_value(bs.m_bits);
    }

    inline std::size_t hash_value(const boost::dynamic_bitset<>& bs)
    {
        size_t hash_result;
        to_block_range(bs, hash_result);
        return hash_result;
    }
}
Note that the proposed solution generates the same hash in the following situation:
#define BOOST_DYNAMIC_BITSET_DONT_USE_FRIENDS
namespace boost {
    template <typename B, typename A>
    std::size_t hash_value(const boost::dynamic_bitset<B, A>& bs) {
        return boost::hash_value(bs.m_bits);
    }
}

boost::dynamic_bitset<> test(1, false);
auto hash1 = boost::hash_value(test);
test.push_back(false);
auto hash2 = boost::hash_value(test);
// and so on...
test.push_back(false);
auto hash3 = boost::hash_value(test);
// magically, hash1, hash2 and hash3 are all the same!
So the proposed solution is sometimes unsuitable for a hash map.
I read the source code of dynamic_bitset to see why this happens, and realized that dynamic_bitset stores one bit per value, just like vector<bool>. For example, if you call dynamic_bitset<> test(1, false), dynamic_bitset initially allocates 4 bytes, all zero, and records the number of bits (in this case, 1). Note that if the number of bits grows beyond 32, it allocates another 4 bytes and pushes them back into dynamic_bitset<>::m_bits (so m_bits is a vector of 4-byte blocks).
If I call test.push_back(x), it sets the second bit to x and increases the bit count to 2. If x is false, then m_bits[0] does not change at all! To compute the hash correctly, we need to include m_num_bits in the hash computation.
Then, the question is how?
1: Use boost::hash_combine
This approach is simple and straightforward. I have not checked whether it compiles.
namespace boost {
    template <typename B, typename A>
    std::size_t hash_value(const boost::dynamic_bitset<B, A>& bs) {
        size_t tmp = 0;
        boost::hash_combine(tmp, bs.m_num_bits);
        boost::hash_combine(tmp, bs.m_bits);
        return tmp;
    }
}
2: Flip the (m_num_bits % bits_per_block)-th bit
Flip a bit based on the bit count. I believe this approach is faster than 1.
namespace boost {
    template <typename B, typename A>
    std::size_t hash_value(const boost::dynamic_bitset<B, A>& bs) {
        // you may need a more sophisticated bit-shift approach.
        auto bit = 1u << (bs.m_num_bits % bs.bits_per_block);
        auto return_val = boost::hash_value(bs.m_bits);
        // sorry, this was wrong:
        // return (return_val & bit) ? return_val | bit : return_val & (~bit);
        return (return_val & bit) ? return_val & (~bit) : return_val | bit;
    }
}