Unable to insert more than 256 nodes into a custom tree - C++

I've been stuck on this for quite some time now and have even tested the issue with both a 64-bit gcc on Ubuntu and a 32-bit gcc on Windows (MinGW).
Any time I insert more than 256 nodes into my binary tree(?), it stops counting the number of nodes, although I can still access all of my data. I have a feeling it has something to do with the way my structure is set up, using chars to acquire each bit of each byte, but I have no idea how to fix it.
In this header, I have a structure and some functions set up which allow me to acquire an individual bit of an object.
This is the actual tree implementation. In order to find where to store each object, the tree iterates through each byte of a key, then iterates again through each bit of those bytes. The "iterate" function is what is giving me the most difficulty, though; I have no idea why, but once 256 nodes become filled with data, my structure stops counting further, then begins to replace all previous data. I believe this has something to do with the fact that a single char can only hold the values 0-255, but I can't see where this would be an issue. Since the location of each node is determined by the individual bits of the key, it's hard to determine why only 256 items can be placed into the tree.
The URL to my test program is at the bottom of the post; SO won't let me post more than two links at the moment. I would like to get this done soon, so any help would be greatly appreciated.
Edit:
Just to make things easier, this is the structure that gives me the individual bit of a byte, as well as a helper function:
struct bitMask {
char b1 : 1;
char b2 : 1;
char b3 : 1;
char b4 : 1;
char b5 : 1;
char b6 : 1;
char b7 : 1;
char b8 : 1;
char operator[] ( unsigned i ) const {
switch( i ) {
case 0 : return b1;
case 1 : return b2;
case 2 : return b3;
case 3 : return b4;
case 4 : return b5;
case 5 : return b6;
case 6 : return b7;
case 7 : return b8;
}
return 0; // Avoiding a compiler error
}
};
/******************************************************************************
* Functions shared between tree-type objects
******************************************************************************/
namespace treeShared {
// Function to retrieve the next set of bits at the pointer "key"
template <typename key_t>
inline const bitMask* getKeyByte( const key_t* key, unsigned iter );
/* template specializations */
template <>
inline const bitMask* getKeyByte( const char*, unsigned );
template <>
inline const bitMask* getKeyByte( const wchar_t*, unsigned );
template <>
inline const bitMask* getKeyByte( const char16_t*, unsigned );
template <>
inline const bitMask* getKeyByte( const char32_t*, unsigned );
} // end treeShared namespace
/*
* Tree Bit Mask Function
*/
template <typename key_t>
inline const bitMask* treeShared::getKeyByte( const key_t* k, unsigned iter ) {
return (iter < sizeof( key_t ))
? reinterpret_cast< const bitMask* >( k+iter )
: nullptr;
}
/*
* Tree Bit Mask Specializations
*/
template <>
inline const bitMask* treeShared::getKeyByte( const char* str, unsigned iter ) {
return (str[ iter ] != '\0')
? reinterpret_cast< const bitMask* >( str+iter )
: nullptr;
}
template <>
inline const bitMask* treeShared::getKeyByte( const wchar_t* str, unsigned iter ) {
return (str[ iter ] != '\0')
? reinterpret_cast< const bitMask* >( str+iter )
: nullptr;
}
template <>
inline const bitMask* treeShared::getKeyByte( const char16_t* str, unsigned iter ) {
return (str[ iter ] != '\0')
? reinterpret_cast< const bitMask* >( str+iter )
: nullptr;
}
template <>
inline const bitMask* treeShared::getKeyByte( const char32_t* str, unsigned iter ) {
return (str[ iter ] != '\0')
? reinterpret_cast< const bitMask* >( str+iter )
: nullptr;
}
And here is the tree class:
template <typename data_t>
struct bTreeNode {
data_t* data = nullptr;
bTreeNode* subNodes = nullptr;
~bTreeNode() {
delete data;
delete [] subNodes;
data = nullptr;
subNodes = nullptr;
}
};
/******************************************************************************
* Binary-Tree Structure Setup
******************************************************************************/
template <typename key_t, typename data_t>
class bTree {
enum node_dir : unsigned {
BNODE_LEFT = 0,
BNODE_RIGHT = 1,
BNODE_MAX
};
protected:
bTreeNode<data_t> head;
unsigned numNodes = 0;
private:
bTreeNode<data_t>* iterate( const key_t* k, bool createNodes );
public:
~bTree() {}
// STL-Map behavior
data_t& operator [] ( const key_t& k );
void push ( const key_t& k, const data_t& d );
void pop ( const key_t& k );
bool hasData ( const key_t& k );
const data_t* getData ( const key_t& k );
unsigned size () const { return numNodes; }
void clear ();
};
/*
* Binary-Tree -- Element iteration
*/
template <typename key_t, typename data_t>
bTreeNode<data_t>* bTree<key_t, data_t>::iterate( const key_t* k, bool createNodes ) {
node_dir dir;
unsigned bytePos = 0;
bTreeNode<data_t>* bNodeIter = &head;
const bitMask* byteIter = nullptr;
while ( byteIter = treeShared::getKeyByte< key_t >( k, bytePos++ ) ) {
for ( int currBit = 0; currBit < HL_BITS_PER_BYTE; ++currBit ) {
// compare the bits of each byte in k
dir = byteIter->operator []( currBit ) ? BNODE_LEFT : BNODE_RIGHT;
// check to see if a new bTreeNode needs to be made
if ( !bNodeIter->subNodes ) {
if ( createNodes ) {
// create and initialize the upcoming sub bTreeNode
bNodeIter->subNodes = new bTreeNode<data_t>[ BNODE_MAX ];
}
else {
return nullptr;
}
}
// move to the next bTreeNode
bNodeIter = &(bNodeIter->subNodes[ dir ]);
}
}
return bNodeIter;
}
/*
* Binary-Tree -- Clear
*/
template <typename key_t, typename data_t>
void bTree<key_t, data_t>::clear() {
delete head.data;
delete [] head.subNodes;
head.data = nullptr;
head.subNodes = nullptr;
numNodes = 0;
}
/*
* Binary-Tree -- Array Subscript operators
*/
template <typename key_t, typename data_t>
data_t& bTree<key_t, data_t>::operator []( const key_t& k ) {
bTreeNode<data_t>* iter = iterate( &k, true );
if ( !iter->data ) {
iter->data = new data_t();
++numNodes;
}
return *iter->data;
}
/*
* Binary-Tree -- Push
* Push a data element to the tree using a key
*/
template <typename key_t, typename data_t>
void bTree<key_t, data_t>::push( const key_t& k, const data_t& d ) {
bTreeNode<data_t>* iter = iterate( &k, true );
if ( !iter->data ) {
iter->data = new data_t( d );
++numNodes;
}
else {
*iter->data = d;
}
}
/*
* Binary-Tree -- Pop
* Remove whichever element lies at the key
*/
template <typename key_t, typename data_t>
void bTree<key_t, data_t>::pop( const key_t& k ) {
bTreeNode<data_t>* iter = iterate( &k, false );
if ( !iter || !iter->data )
return;
delete iter->data;
iter->data = nullptr;
--numNodes;
}
/*
* Binary-Tree -- Has Data
* Return true if there is a data element at the key
*/
template <typename key_t, typename data_t>
bool bTree<key_t, data_t>::hasData( const key_t& k ) {
bTreeNode<data_t>* iter = iterate( &k, false );
return iter && ( iter->data != nullptr );
}
/*
* Binary-Tree -- Get Data
* Return a pointer to the data that lies at a key
* Returns a nullptr if no data exists
*/
template <typename key_t, typename data_t>
const data_t* bTree<key_t, data_t>::getData( const key_t& k ) {
bTreeNode<data_t>* iter = iterate( &k, false );
if ( !iter )
return nullptr;
return iter->data;
}
pastebin.com/8MZ0TMpj

template <typename key_t>
inline const bitMask* treeShared::getKeyByte( const key_t* k, unsigned iter ) {
return (iter < sizeof( key_t ))
? reinterpret_cast< const bitMask* >( k+iter )
: nullptr;
}
This doesn't do what you seem to think it does. (k+iter) doesn't retrieve the iter'th byte of k, but the iter'th element of the key_t[] array pointed to by k. In other words, k+iter advances the pointer by iter*sizeof(key_t) bytes, not by iter bytes.
Formally, this code exhibits undefined behavior, by overrunning array bounds. Practically speaking, your program uses just a single byte of the key, and then sizeof(key_t)-1 random bytes that just happen to sit in memory above that key. That's why you are effectively limited to 8 bits of state.
In addition, your reinterpret_cast also exhibits undefined behavior, formally speaking. The only legal use for a pointer obtained with reinterpret_cast is to reinterpret_cast it right back to the original type. This is not the immediate cause of your problem though.
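If you want the iteration to go byte-by-byte, one minimal fix (a sketch, untested) is to go through a char pointer first, so that iter indexes bytes rather than whole key_t elements; note that the reinterpret_cast to bitMask still carries the aliasing caveat above:
template <typename key_t>
inline const bitMask* treeShared::getKeyByte( const key_t* k, unsigned iter ) {
// Step one byte at a time: convert to a byte pointer before adding the offset,
// so the pointer advances by iter bytes instead of iter * sizeof(key_t) bytes.
return ( iter < sizeof( key_t ) )
? reinterpret_cast< const bitMask* >( reinterpret_cast< const char* >( k ) + iter )
: nullptr;
}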

Related

How to template creating a map for custom types

In my code I have a number of places where I need to take an std::vector of things and put it into a std::map indexed by something. For example here are two code snippets:
//sample A
std::map<Mode::Type, std::vector<Mode>> modesByType;
for( const auto& mode : _modes ) {
Mode::Type type = mode.getType();
auto it = modesByType.find( type );
if( it == modesByType.end() ) {
std::vector<Mode> v = { mode };
modesByType.insert( std::pair( type, v ) );
} else {
it->second.push_back( mode );
}
}
//sample B
std::map<unsigned, std::vector<Category>> categoriesByTab;
for( const auto& category : _categories ) {
unsigned tabIndex = category.getTab();
auto it = categoriesByTab.find( tabIndex );
if( it == categoriesByTab.end() ) {
std::vector<Category> v = { category };
categoriesByTab.insert( std::pair( tabIndex, v ) );
} else {
it->second.push_back( category );
}
}
I'd like to generalize this and create a template function like:
template<typename T, typename V>
std::map<T,std::vector<V>> getMapByType( const std::vector<V>& items, ?? ) {
std::map<T,std::vector<V>> itemsByType;
for( const auto& item : items ) {
unsigned index = ??;
auto it = itemsByType.find( index );
if( it == itemsByType.end() ) {
std::vector<V> v = { item };
itemsByType.insert( std::pair( index, v ) );
} else {
it->second.push_back( item );
}
}
return itemsByType;
}
My question is, how do I define the ?? argument to this function so that I can call the correct V.foo() function to get the index value for the map?
Note, I do not want to make all the classes that this template (V) accepts, inherit from a base class. Can I somehow specify a lambda argument?
Pass a pointer to a member function as an extra parameter:
template<typename T, typename V>
std::map<T,std::vector<V>> getMapByType( const std::vector<V>& items, T (V::*fn)()const) {
std::map<T,std::vector<V>> itemsByType;
for( const auto& item : items ) {
T index = (item.*fn)();
auto it = itemsByType.find( index );
if( it == itemsByType.end() ) {
std::vector<V> v = { item };
itemsByType.emplace( index, v );
} else {
it->second.push_back( item );
}
}
return itemsByType;
}
auto res = getMapByType(items, &Category::getTab);
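Note the const in the parameter type T (V::*fn)() const: the loop calls fn through a const reference obtained from the const vector, so only pointers to const member functions will bind (this assumes Category::getTab is declared const).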
You can pass a function that determines the key, something like this:
template <typename V,typename F>
auto getMapByType( const std::vector<V>& items,F func) {
using key_t = std::decay_t<decltype(func(items[0]))>;
std::map<key_t,std::vector<V>> result;
for (const auto& item : items) {
result[ func(item) ].push_back(item);
}
return result;
}
Then you can call it like this:
std::vector<Category> vec;
auto m = getMapByType( vec, [](const Category& c) { return c.getTab(); });
or
std::vector<Mode> vec;
auto m = getMapByType( vec, [](const Mode& m) { return m.getType(); });
Note that operator[] already does what you reimplemented: it tries to find an element with the given key; if none is present, it inserts a default-constructed one, then returns a reference to the mapped value.
Even without operator[], you do not need find followed by insert, because insert only inserts when no element with the given key is present. insert returns an iterator to the element and a bool which tells you whether the insertion actually took place.
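For example, a sketch of the insert-based variant using the names from the question's sample A; the returned iterator is valid whether or not the insertion happened, so a single call suffices:
auto result = modesByType.insert( { type, {} } ); // inserts an empty vector only if the key is new
result.first->second.push_back( mode );           // valid in either case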

Enum convert to string using compile time constants

I'm trying to associate compile-time strings with enum values.
Here is my first attempt at the problem:
EnumValue will do the compile-time association between a string and an enum
template<typename EnumType, int EnumIntValue, const char* EnumStrValue>
class EnumValue
{
public:
static const char* toString()
{
return EnumStrValue;
}
static const int toInt()
{
return EnumIntValue;
}
static EnumType get()
{
return static_cast<EnumType>(EnumIntValue);
}
};
EnumValueHolder will hold the actual values for both the string and the enum.
I dislike my current design, as it still needs to hold a pointer to the string. I would prefer a compile-time association for this, but fail to come up with a more elegant solution
template<typename EnumType>
class EnumValueHolder
{
public:
EnumValueHolder()
{}
EnumValueHolder(const EnumType& value, const char* str)
: value(value), str(str)
{}
bool operator==(const EnumValueHolder<EnumType>& rhs) { return value == rhs.value; }
bool operator==(const EnumType& rhs)const { return value == rhs; }
operator EnumType()const
{
return value;
}
const char* toString()const
{
return str;
}
const int toInt()const
{
return static_cast<int>(value);
}
private:
EnumType value;
char const* str;
};
Macros to easily refer to enum types and to construct enum value holders
#define ENUM_VALUE_TYPE(enumName, enumValue) \
EnumValue<enumName, (int)enumName::enumValue, str_##enumValue>
#define ENUM_VALUE_MAKE(enumName, enumValue) \
EnumValueHolder<enumName> { \
ENUM_VALUE_TYPE(enumName, enumValue)::get(), \
ENUM_VALUE_TYPE(enumName, enumValue)::toString() }
The following are my test cases and usage examples:
const char str_Apple[] = "Apple";
const char str_Orange[] = "Orange";
const char str_Pineapple[] = "Pineapple";
enum class EFruits
{
Apple,
Orange,
Pineapple
};
int main()
{
auto evApple = ENUM_VALUE_MAKE(EFruits, Apple);
std::cout << evApple.toString() << std::endl;
auto evOrange = ENUM_VALUE_MAKE(EFruits, Orange);
std::cout << evOrange.toString() << std::endl;
std::cout << "compare: " << (evApple == evOrange) << std::endl;
evApple = evOrange;
std::cout << evApple.toString() << std::endl;
auto myfruit = ENUM_VALUE_MAKE(EFruits, Pineapple);
std::cout << myfruit.toString() << std::endl;
switch (myfruit)
{
case EFruits::Apple:
std::cout << "Im an apple!" << std::endl;
break;
case EFruits::Orange:
std::cout << "Im an Orange!" << std::endl;
break;
case EFruits::Pineapple:
std::cout << "Im a Pineapple!" << std::endl;
break;
default:break;
}
}
One of the objectives is to remove the global strings:
const char str_Apple[] = "Apple";
const char str_Orange[] = "Orange";
const char str_Pineapple[] = "Pineapple";
The other is to create a macro that associates an enum with a string
//Some crazy define that makes pairs of enum values and strings as
//compile time constants
#define DEFINE_ENUM_STRING(enumValue)\
enumValue, #enumValue
//Ideally, the macro would be used like this. This should be usable in any
//scope (global, namespace, class)
//with any access specifier (private, protected, public)
enum class EFruits
{
DEFINE_ENUM_STRING(Apple),
DEFINE_ENUM_STRING(Orange),
DEFINE_ENUM_STRING(Pineapple)
};
So there are two main questions:
1) Will this current design actually guarantee compile-time constants for associating the enum with the string?
2) How can I define a macro to stringify an enum value and declare the value in an enum class in one line?
Edit: This should work and compile with MSVC 2017 on the Win64 platform using C++11.
Thanks.
I think it should work with MSVC 2017. It uses C++14 in the constexpr functions, but you can split them into single-return-statement constexpr functions to be C++11 compatible (however, MSVC 2017 supports C++14).
EnumConverter stores the char*, the enum, and a string hash value for each enum entry. For each enum you must specialize EnumConverter::StrEnumContainer. The enum-string pairs could be generated with a macro similar to the one you specified.
#include <tuple>
#include <array>
#include <stdexcept>
using namespace std;
enum ELogLevel {
Info,
Warn,
Debug,
Error,
Critical
};
static constexpr size_t constexprStringHash( char const* const str ) noexcept
{
return (
( *str != 0 ) ?
( static_cast< size_t >( *str ) + 33 * constexprStringHash( str + 1 ) ) :
5381
);
}
class EnumConverter final
{
public:
EnumConverter() = delete;
EnumConverter( const EnumConverter& ) = delete;
EnumConverter( EnumConverter&& ) = delete;
EnumConverter& operator =( const EnumConverter& ) = delete;
EnumConverter& operator =( EnumConverter&& ) = delete;
template< typename ENUM_T >
static constexpr const char* toStr( const ENUM_T value )
{
const auto& strEnumArray{ StrEnumContainer< ENUM_T >::StrEnumPairs };
const char* result{ nullptr };
for( size_t index{ 0 }; index < strEnumArray.size(); ++index ) {
if( std::get< 1 >( strEnumArray[ index ] ) == value ) {
result = std::get< 0 >( strEnumArray[ index ] );
break;
}
}
return ( ( result == nullptr ) ? throw std::logic_error{ "Enum toStrBase conversion failed" } : result );
}
template< typename ENUM_T >
static constexpr ENUM_T fromStr( const char* const str )
{
const auto& strEnumArray{ StrEnumContainer< ENUM_T >::StrEnumPairs };
const size_t hash{ constexprStringHash( str ) };
const ENUM_T* result{ nullptr };
for( size_t index{ 0 }; index < strEnumArray.size(); ++index ) {
if( std::get< 2 >( strEnumArray[ index ] ) == hash ) {
result = &( std::get< 1 >( strEnumArray[ index ] ) );
}
}
return ( ( result == nullptr ) ? throw std::logic_error{ "Enum toStrBase conversion failed" } : *result );
}
private:
template< typename ENUM_T, size_t LEN >
using ARRAY_T = std::array< std::tuple< const char* const, const ENUM_T, const size_t >, LEN >;
template< typename ENUM_T >
static constexpr std::tuple< const char* const, ENUM_T, size_t > getTuple( const char* const str, const ENUM_T type ) noexcept
{
return std::tuple< const char* const, ENUM_T, size_t >{ str, type, constexprStringHash( str ) };
}
template< typename ENUM_T >
struct StrEnumContainer
{
};
template< typename ENUM_T >
friend struct StrEnumContainer;
};
template<>
struct EnumConverter::StrEnumContainer< ELogLevel >
{
using ENUM_T = ELogLevel;
static constexpr EnumConverter::ARRAY_T< ENUM_T, 5 > StrEnumPairs{ {
{ getTuple( "Info", ENUM_T::Info ) },
{ getTuple( "Warn", ENUM_T::Warn ) },
{ getTuple( "Debug", ENUM_T::Debug ) },
{ getTuple( "Error", ENUM_T::Error ) },
{ getTuple( "Critical", ENUM_T::Critical ) },
} };
};
int main()
{
//static_assert( EnumConverter::fromStr< ELogLevel >( "Info" ) == EnumConverter::fromStr< ELogLevel >( EnumConverter::toStr( Error ) ), "Error" ); // Error
static_assert(
EnumConverter::toStr( Warn )[ 0 ] == 'W' &&
EnumConverter::toStr( Warn )[ 1 ] == 'a' &&
EnumConverter::toStr( Warn )[ 2 ] == 'r' &&
EnumConverter::toStr( Warn )[ 3 ] == 'n',
"Error"
);
static_assert( EnumConverter::fromStr< ELogLevel >( "Info" ) == EnumConverter::fromStr< ELogLevel >( EnumConverter::toStr( Info ) ), "Error" );
}
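As a sketch of the macro generation mentioned above (a hypothetical helper, not part of the code), each StrEnumPairs entry could be produced from the enumerator name alone:
// Hypothetical: stringize the enumerator and build the matching tuple entry.
#define ENUM_STRING_TUPLE( enumValue ) getTuple( #enumValue, ENUM_T::enumValue )
// The specialization body would then read:
// { ENUM_STRING_TUPLE( Info ) }, { ENUM_STRING_TUPLE( Warn ) }, ...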

How to define a common iterator for my class hierarchy

My team designed a library meant to store data from different "signals". A signal is a list of timestamped float values. We have three ways to store a signal (depending on the way it was recorded from the hardware in the first place):
MarkerSignal: We store a sorted std::vector of std::pair of (boost::posix_time::ptime,float)
RawSignal: We store a start time (boost::posix_time::ptime), a sampling period (boost::posix_time::time_duration) and finally a std::vector of float (each value's timestamp is start time + period * value's index in the vector)
NumericalSignal: We store a start time (boost::posix_time::ptime), a sampling period (boost::posix_time::time_duration), a scale (float), an offset (float) and finally a std::vector of short (timestamp is computed as for RawSignal and float value is short*scale+offset)
Those three signals have a common parent class (SignalBase) storing the signal's name, description, unit and stuff like that. We use the visitor pattern to let people nicely "cast" the SignalBase to a MarkerSignal/RawSignal/NumericalSignal and then access the data it contains.
In the end, what we need for each class is to iterate through all elements, each element actually being a pair of (boost::posix_time::ptime,float) (like MarkerSignal). And it's a pain having to create a visitor every time we want to do that.
Storing all signals as a std::vector<std::pair<boost::posix_time::ptime,float>> (or returning an object of this kind on demand) uses too much memory.
We thought the best was probably to define our own iterator object. The iterator would give access to the timestamp and value, like that:
SignalBase* signal = <any signal>;
for ( SignalBase::iterator iter = signal->begin();
iter != signal->end();
++iter )
{
boost::posix_time::ptime timestamp = iter.time();
float value = iter.value();
}
What's the best approach/strategy to create such an iterator class? (A simple class with a size_t index attribute, a MarkerSignal/RawSignal/NumericalSignal container-specific iterator as an attribute, storing a std::pair<boost::posix_time::ptime,float> and updating it from the ++ operator...)
Also, I would much prefer if the proposed solution avoids using a virtual table (so that ++, time(), and value() are faster when iterating over huge signals).
To sum up, I think the best you can achieve, if you value efficiency, is something like this:
template <typename SignalType, typename Functor = std::function<void(typename SignalType::value_type&&)> >
void iterateThroughSignal(SignalBase *signal, Functor foo) {
SignalType *specificSignal = dynamic_cast<SignalType *>(signal);
if (!specificSignal)
return;
for (typename SignalType::iterator it = specificSignal->begin();
it != specificSignal->end();
it++) {
foo(*it); // retrieving value from iterator...
}
}
Then for call:
iterateThroughSignal<MarkerSignal>(signal, [](MarkerSignal::value_type&& msv){
/*doing something with value...*/
});
I'm not sure if you are using C++11; if not, the lambda can be replaced by a function pointer, the rvalue reference by an lvalue reference, and the std::function by a plain function signature...
Edit:
To make it compile when the type in foo's signature doesn't match SignalType::value_type, you will need to play a little with SFINAE:
template <typename SignalType>
class IterateHelper {
template <typename Functor>
static typename enable_if<first_param_is<Functor, typename SignalType::value_type>::value >::type iterateThroughSignal(SignalBase *signal, Functor foo) {
SignalType *specificSignal = dynamic_cast<SignalType *>(signal);
if (!specificSignal)
return;
for (typename SignalType::iterator it = specificSignal->begin();
it != specificSignal->end();
it++) {
foo(*it); // retrieving value from iterator...
}
}
template <typename Functor>
static typename enable_if<!first_param_is<Functor, typename SignalType::value_type>::value >::type iterateThroughSignal(SignalBase *signal, Functor foo) {
}
};
I leave the implementation of the first_param_is helper struct to you... The call would change to:
IterateHelper<MarkerSignal>::iterateThroughSignal(signal, [](MarkerSignal::value_type&& msv){
/*doing something with value...*/
});
As I wanted something easy to use for people using my library (to be able to easily write a for loop), I finally implemented my own iterator like this:
Added two virtual functions in SignalBase (I found no alternative to that; the runtime will use the virtual table):
virtual size_t floatDataCount() const = 0;
virtual bool loadFloatInfoAt( size_t pos, SignalFloatIter::ValueInfo& info ) const = 0;
Added functions in SignalBase to get begin/end iterators:
inline BDL::SignalFloatIter beginFloatIter() const { return BDL::SignalFloatIter::beginIter( *this ); }
inline BDL::SignalFloatIter endFloatIter() const { return BDL::SignalFloatIter::endIter( *this ); }
Declared iterator class like that:
class SignalFloatIter
{
public:
SignalFloatIter( const SignalBase* signal = NULL, size_t pos = 0 );
SignalFloatIter( const SignalFloatIter& iter );
static SignalFloatIter beginIter( const SignalBase& signal );
static SignalFloatIter endIter( const SignalBase& signal );
SignalFloatIter& operator=( const SignalFloatIter& iter );
bool operator==( const SignalFloatIter& iter ) const;
bool operator!=( const SignalFloatIter& iter ) const;
/** Pre-increment operator */
SignalFloatIter& operator++();
/** Post-increment operator */
SignalFloatIter operator++(int unused);
inline const BDL::time& when() const { assert( m_valid ); return m_info.first.first; }
inline const BDL::duration& duration() const { assert( m_valid ); return m_info.first.second; }
inline const float& value() const { assert( m_valid ); return m_info.second; }
inline size_t index() const { assert( m_valid ); return m_pos; }
inline BDL::MarkerKey markerKey() const { assert( m_valid ); return std::make_pair( when(), duration() ); }
inline bool valid() const { return m_valid; }
typedef std::pair<BDL::time,BDL::duration> TimeInfo;
typedef std::pair<TimeInfo,float> ValueInfo;
private:
const SignalBase* m_signal;
size_t m_pos;
bool m_valid;
ValueInfo m_info;
void loadCurInfo();
};
Implemented:
SignalFloatIter::SignalFloatIter( const SignalBase* signal, size_t pos ) :
m_signal( signal ),
m_pos( pos )
{
loadCurInfo();
}
SignalFloatIter::SignalFloatIter( const SignalFloatIter& iter )
{
operator=( iter );
}
SignalFloatIter SignalFloatIter::beginIter( const SignalBase& signal )
{
return SignalFloatIter( &signal, 0 );
}
SignalFloatIter SignalFloatIter::endIter( const SignalBase& signal )
{
return SignalFloatIter( &signal, signal.floatDataCount() );
}
SignalFloatIter& SignalFloatIter::operator=( const SignalFloatIter& iter )
{
if ( this != &iter )
{
m_signal = iter.m_signal;
m_pos = iter.m_pos;
m_info = iter.m_info;
m_valid = iter.m_valid;
}
return *this;
}
bool SignalFloatIter::operator==( const SignalFloatIter& iter ) const
{
if ( m_signal == iter.m_signal )
{
if ( m_pos == iter.m_pos )
{
assert( m_valid == iter.m_valid );
if ( m_valid )
assert( m_info == iter.m_info );
return true;
}
else
{
return false;
}
}
else
{
assert( false );
return false;
}
}
bool SignalFloatIter::operator!=( const SignalFloatIter& iter ) const
{
return !( *this == iter );
}
SignalFloatIter& SignalFloatIter::operator++()
{
++m_pos;
loadCurInfo();
return *this;
}
SignalFloatIter SignalFloatIter::operator++( int unused )
{
SignalFloatIter old = *this;
assert( unused == 0 ); // see http://en.cppreference.com/w/cpp/language/operator_incdec
++m_pos;
loadCurInfo();
return old;
}
void SignalFloatIter::loadCurInfo()
{
if ( m_signal )
{
m_valid = m_signal->loadFloatInfoAt( m_pos, m_info );
}
else
{
assert( false );
m_valid = false;
}
}
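For reference, this is roughly what the two virtual hooks could look like for a RawSignal (a hypothetical sketch; the member names m_start, m_period and m_values are assumptions based on the description at the top of the question):
size_t RawSignal::floatDataCount() const
{
return m_values.size();
}
bool RawSignal::loadFloatInfoAt( size_t pos, SignalFloatIter::ValueInfo& info ) const
{
if ( pos >= m_values.size() )
return false;
info.first.first = m_start + m_period * static_cast<int>( pos ); // timestamp of sample "pos"
info.first.second = m_period; // duration of one sample
info.second = m_values[ pos ]; // the float value itself
return true;
}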
It's pretty straightforward and easy to use for any signal:
std::cout << "Signal timestamped data are: ";
for ( BDL::SignalFloatIter iter = signal.beginFloatIter();
iter != signal.endFloatIter();
++iter )
{
std::cout << iter.when() << " : " << iter.value() << std::endl;
}

Custom allocator for STL fails to compile in release mode only

I have written a custom allocator which I'm using with std::vector. The code compiles and works in debug mode, but it fails to compile in release mode with a strange error.
Here is my allocator:
template< class T >
class AllocPowOf2
{
public:
typedef size_t size_type;
typedef ptrdiff_t difference_type;
typedef T * pointer;
typedef const T * const_pointer;
typedef T & reference;
typedef const T & const_reference;
typedef T value_type;
private:
size_type m_nMinNbBytes;
public:
template< class U >
struct rebind
{
typedef AllocPowOf2< U > other;
};
inline pointer address( reference value ) const
{
return & value;
}
inline const_pointer address( const_reference value ) const
{
return & value;
}
inline AllocPowOf2( size_type nMinNbBytes = 32 )
: m_nMinNbBytes( nMinNbBytes ) { }
inline AllocPowOf2( const AllocPowOf2 & oAlloc )
: m_nMinNbBytes( oAlloc.m_nMinNbBytes ) { }
template< class U >
inline AllocPowOf2( const AllocPowOf2< U > & oAlloc )
: m_nMinNbBytes( oAlloc.m_nMinNbBytes ) { }
inline ~AllocPowOf2() { }
inline bool operator != ( const AllocPowOf2< T > & oAlloc )
{
return m_nMinNbBytes != oAlloc.m_nMinNbBytes;
}
inline size_type max_size() const
{
return size_type( -1 ) / sizeof( value_type );
}
static size_type OptimizeNbBytes( size_type nNbBytes, size_type nMin )
{
if( nNbBytes < nMin )
{
nNbBytes = nMin;
}
else
{
size_type j = nNbBytes;
j |= (j >> 1);
j |= (j >> 2);
j |= (j >> 4);
j |= (j >> 8);
#if ENV_32BITS || ENV_64BITS
j |= (j >> 16);
#endif
#if ENV_64BITS
j |= (j >> 32);
#endif
++j; // Least power of two greater than nNbBytes and nMin
if( j > nNbBytes )
{
nNbBytes = j;
}
}
return nNbBytes;
}
pointer allocate( size_type nNum )
{
return new value_type[ OptimizeNbBytes( nNum * sizeof( value_type ), 32 ) ]; // ERROR HERE, line 97
}
void construct( pointer p, const value_type & value )
{
new ((void *) p) value_type( value );
}
void destroy( pointer p )
{
p->~T();
}
void deallocate( pointer p, size_type nNum )
{
(void) nNum;
delete[] p;
}
};
Here is the error:
Error 1 error C2512: 'std::_Aux_cont' : no appropriate default constructor available c:\XXX\AllocPowOf2.h 97
The code compiles correctly in debug mode both on Windows with VS2008 and on Android with the Android NDK and Eclipse.
Any idea?
return new value_type[ OptimizeNbBytes( nNum * sizeof( value_type ), 32 ) ];
Ignoring OptimizeNbBytes for now, you are newing up nNum * sizeof(value_type) value_types, which also calls value_type's constructor that many times.
In other words, asked to allocate memory for 16 ints, you would allocate enough for 64 ints instead; not only that, but you were asked for raw memory, and instead ran constructors all over them, creating objects that will be overwritten by the container without being destroyed - and then the delete[] in deallocate will result in double destruction.
allocate should allocate raw memory:
return pointer(::operator new(OptimizeNbBytes( nNum * sizeof( value_type ), 32 )));
and deallocate should deallocate the memory without running any destructor:
::operator delete((void*)p);
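Putting both corrections together, the fixed pair would look roughly like this (same OptimizeNbBytes as above):
pointer allocate( size_type nNum )
{
// Raw, uninitialized memory only; the container calls construct() itself.
return pointer( ::operator new( OptimizeNbBytes( nNum * sizeof( value_type ), 32 ) ) );
}
void deallocate( pointer p, size_type nNum )
{
(void) nNum;
// Matching raw deallocation; destroy() has already run the destructors.
::operator delete( (void*) p );
}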

How to access a slice of packed_bits<> as a std::bitset<>?

I am attempting to implement a packed_bits class using variadic templates and std::bitset.
In particular, I am running into problems writing a get function which returns a reference to a subset of the member m_bits, which contains all the packed bits. The function should be analogous to std::get for std::tuple.
It should act as a reference overlay so I can manipulate a subset of packed_bits.
For instance,
using my_bits = packed_bits<8,16,4>;
my_bits b;
std::bitset< 8 >& s0 = get<0>( b );
std::bitset< 16 >& s1 = get<1>( b );
std::bitset< 4 >& s2 = get<2>( b );
UPDATE
Below is the code, rewritten according to Yakk's recommendations below. I am stuck at the point of his last paragraph: I'm not sure how to glue together the individual references into one bitset-like reference. I'm thinking about/working on that last part now.
UPDATE 2
Okay, my new approach is going to be to let bit_slice<> do the bulk of the work:
it is meant to be short-lived
it will publicly subclass std::bitset<length>, acting as a temporary buffer
on construction, it will copy from packed_bits<>& m_parent;
on destruction, it will write to m_parent
in addition to the reference via m_parent, it must also know offset, length
get<> will become a free function which takes a packed_bits<> and returns a bit_slice<> by value instead of a bitset<> by reference
There are various shortcomings to this approach:
bit_slice<> has to be relatively short-lived to avoid aliasing issues, since we only update on construction and destruction
we must avoid multiple overlapping references while coding (whether threaded or not)
we will be prone to slicing if we attempt to hold a pointer to the base class when we have an instance of the child class
but I think this will be sufficient for my needs. I will post the finished code when it is complete.
UPDATE 3
After fighting with the compiler, I think I have a basic version working. Unfortunately, I could not get the free-floating ::get() to compile properly: BROKEN shows the spot. Otherwise, I think it's working.
Many thanks to Yakk for his suggestions: the code below is about 90%+ based on his comments.
UPDATE 4
Free-floating ::get() fixed.
UPDATE 5
As suggested by Yakk, I have eliminated the copy. bit_slice<> will read on get_value() and write on set_value(). Probably 90%+ of my calls will be through these interfaces anyway, so there's no need to subclass/copy.
No more dirtiness.
CODE
#include <cassert>
#include <bitset>
#include <iostream>
// ----------------------------------------------------------------------------
template<unsigned... Args>
struct Add { enum { val = 0 }; };
template<unsigned I,unsigned... Args>
struct Add<I,Args...> { enum { val = I + Add<Args...>::val }; };
template<int IDX,unsigned... Args>
struct Offset { enum { val = 0 }; };
template<int IDX,unsigned I,unsigned... Args>
struct Offset<IDX,I,Args...> {
enum {
val = IDX>0 ? I + Offset<IDX-1,Args...>::val : Offset<IDX-1,Args...>::val
};
};
template<int IDX,unsigned... Args>
struct Length { enum { val = 0 }; };
template<int IDX,unsigned I,unsigned... Args>
struct Length<IDX,I,Args...> {
enum {
val = IDX==0 ? I : Length<IDX-1,Args...>::val
};
};
// ----------------------------------------------------------------------------
template<unsigned... N_Bits>
struct seq
{
static const unsigned total_bits = Add<N_Bits...>::val;
static const unsigned size = sizeof...( N_Bits );
template<int IDX>
struct offset
{
enum { val = Offset<IDX,N_Bits...>::val };
};
template<int IDX>
struct length
{
enum { val = Length<IDX,N_Bits...>::val };
};
};
// ----------------------------------------------------------------------------
template<unsigned offset,unsigned length,typename PACKED_BITS>
struct bit_slice
{
PACKED_BITS& m_parent;
bit_slice( PACKED_BITS& t ) :
m_parent( t )
{
}
~bit_slice()
{
}
bit_slice( bit_slice const& rhs ) :
m_parent( rhs.m_parent )
{ }
bit_slice& operator=( bit_slice const& rhs ) = delete;
template<typename U_TYPE>
void set_value( U_TYPE u )
{
for ( unsigned i=0; i<length; ++i )
{
m_parent[offset+i] = u&1;
u >>= 1;
}
}
template<typename U_TYPE>
U_TYPE get_value() const
{
U_TYPE x = 0;
for ( int i=length-1; i>=0; --i )
{
if ( m_parent[offset+i] )
++x;
if ( i!=0 )
x <<= 1;
}
return x;
}
};
template<typename SEQ>
struct packed_bits :
public std::bitset< SEQ::total_bits >
{
using bs_type = std::bitset< SEQ::total_bits >;
using reference = typename bs_type::reference;
template<int IDX> using offset = typename SEQ::template offset<IDX>;
template<int IDX> using length = typename SEQ::template length<IDX>;
template<int IDX> using slice_type =
bit_slice<offset<IDX>::val,length<IDX>::val,packed_bits>;
template<int IDX>
slice_type<IDX> get()
{
return slice_type<IDX>( *this );
}
};
template<int IDX,typename T>
typename T::template slice_type<IDX>
get( T& t )
{
return t.template get<IDX>();
}
// ----------------------------------------------------------------------------
int main( int argc, char* argv[] )
{
using my_seq = seq<8,16,4,8,4>;
using my_bits = packed_bits<my_seq>;
using my_slice = bit_slice<8,16,my_bits>;
using slice_1 =
bit_slice<my_bits::offset<1>::val,my_bits::length<1>::val,my_bits>;
my_bits b;
my_slice s( b );
slice_1 s1( b );
assert( sizeof( b )==8 );
assert( my_seq::total_bits==40 );
assert( my_seq::size==5 );
assert( my_seq::offset<0>::val==0 );
assert( my_seq::offset<1>::val==8 );
assert( my_seq::offset<2>::val==24 );
assert( my_seq::offset<3>::val==28 );
assert( my_seq::offset<4>::val==36 );
assert( my_seq::length<0>::val==8 );
assert( my_seq::length<1>::val==16 );
assert( my_seq::length<2>::val==4 );
assert( my_seq::length<3>::val==8 );
assert( my_seq::length<4>::val==4 );
{
auto s2 = b.get<2>();
}
{
auto s2 = ::get<2>( b );
s2.set_value( 25 ); // 25==11001, but only 4 bits, so we take 1001
assert( s2.get_value<unsigned>()==9 );
}
return 0;
}
I wouldn't have get return a bitset, because each bitset needs to manage its own memory.
Instead, I'd use a bitset internally to manage all of the bits, and create bitset::reference-like individual bit references, and bitset-like "slices", which get can return.
A bit_slice would have a pointer back to the original packed_bits, and would know the offset where it starts and how wide it is. Its references to individual bits would be references from the original packed_bits, which are in turn references from the internal bitset, possibly.
Your Size is redundant -- sizeof...(pack) tells you how many elements are in the pack.
I'd pack up the sizes of the slices into a sequence so you can pass them around more easily. I.e.:
template<unsigned... Vs>
struct seq {};
is a type from which you can extract an arbitrary-length list of unsigned ints, yet which can be passed as a single parameter to a template.
As a first step, write bit_slice<offset, length>, which takes a std::bitset<size> and produces bitset::references to individual bits, where bit_slice<offset, length>[n] is the same as bitset[n+offset].
Optionally, bit_slice could store offset as a run-time parameter (because offset as a compile-time parameter is just an optimization, and not that strong of one I suspect).
Once you have bit_slice, working on the tuple syntax of packed_bits is feasible. get<n, offset=0>( packed_bits<a,b,c,...>& ) returns a bit_slice<x> determined by indexing the packed_bits sizes, with an offset determined by adding the first n-1 packed_bits sizes, which is then constructed from the internal bitset of the packed_bits.
Make sense?
Apparently not. Here is a quick bit_slice that represents some sub-range of bits within a std::bitset.
#include <bitset>
template<unsigned Width, unsigned Offset, std::size_t SrcBitWidth>
struct bit_slice {
private:
std::bitset<SrcBitWidth>* bits;
public:
// cast to `bitset`:
operator std::bitset<Width>() const {
std::bitset<Width> retval;
for(unsigned i = 0; i < Width; ++i) {
retval[i] = (*this)[i];
}
return retval;
}
typedef typename std::bitset<SrcBitWidth>::reference reference;
reference operator[]( size_t pos ) {
// TODO: check that pos < Width?
return (*bits)[pos+Offset];
}
constexpr bool operator[]( size_t pos ) const {
// TODO: check that pos < Width?
return (*bits)[pos+Offset];
}
typedef bit_slice<Width, Offset, SrcBitWidth> self_type;
// can be assigned to from any bit_slice with the same width:
template<unsigned O_Offset, unsigned O_SrcBitWidth>
self_type& operator=( bit_slice<Width, O_Offset, O_SrcBitWidth>&& o ) {
for (unsigned i = 0; i < Width; ++i ) {
(*this)[i] = o[i];
}
return *this;
}
// can be assigned from a `std::bitset<Width>` of the same size:
self_type& operator=( std::bitset<Width> const& o ) {
for (unsigned i = 0; i < Width; ++i ) {
(*this)[i] = o[i];
}
return *this;
}
explicit bit_slice( std::bitset<SrcBitWidth>& src ):bits(&src) {}
bit_slice( self_type const& ) = default;
bit_slice( self_type&& ) = default;
bit_slice( self_type&o ):bit_slice( const_cast<self_type const&>(o)) {}
// I suspect, but am not certain, that the default move/copy ctor would do...
// dtor not needed, as there is nothing to destroy
// TODO: reimplement rest of std::bitset's interface that you care about
};
template<unsigned offset, unsigned width, std::size_t src_width>
bit_slice< width, offset, src_width > make_slice( std::bitset<src_width>& src ) {
return bit_slice< width, offset, src_width >(src);
}
#include <iostream>
int main() {
std::bitset<16> bits;
bits[8] = true;
auto slice = make_slice< 8, 8 >( bits );
bool b0 = slice[0];
bool b1 = slice[1];
std::cout << b0 << b1 << "\n"; // should output 10
}
Another useful class would be a bit_slice with a runtime offset and source size. This will be less efficient, but easier to program against.
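A minimal sketch of what that runtime variant could look like (hypothetical naming, reusing the includes from the code above):
// All geometry is runtime state; staying within [0, width) is the caller's job.
template<std::size_t SrcBitWidth>
struct runtime_bit_slice {
std::bitset<SrcBitWidth>* bits;
std::size_t offset;
std::size_t width;
typename std::bitset<SrcBitWidth>::reference operator[]( std::size_t pos ) {
return (*bits)[ pos + offset ];
}
bool operator[]( std::size_t pos ) const {
return (*bits)[ pos + offset ];
}
};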
I'm going to guess it's something like this:
#include <iostream>
#include <bitset>
using namespace std;
template<int N, int L, int R>
bitset<L-R+1>
slice(bitset<N> value)
{
constexpr size_t W = L - R + 1; // bits R..L inclusive
static_assert(W <= sizeof(unsigned long long) * 8, "Exceeding integer word size");
// bitset's constructor keeps only the low W bits, so no explicit mask is needed
return bitset<L-R+1>{ value.to_ullong() >> R };
}
int main()
{
bitset<16> beef { 0xBEEF };
bitset<4-3+1> sliced_beef = slice<16, 4, 3>(beef);
auto fast_sliced_beef = slice<16, 4, 3>(beef);
return 0;
}