Check if a bitset contains all values of another bitset - c++

I'm trying to create an entity/component system that automatically matches suitable entities to suitable systems. I'm using std::bitset and RTTI to automatically assign a bit value to every component type.
A system is defined like this: MovementSystem : System<Position, Velocity>.
MovementSystem, in this example, accepts any entity that has both the Position and the Velocity components (regardless of any other components it may have).
To check if an entity is suitable, I compare the system's bitset to the entity's bitset.
// Let's assume there are max 4 components

1 1 0 1    // Entity bitset
^ ^   ^
Position, Velocity, OtherB

1 1 0 0    // Suitable example system bitset
^ ^
Position, Velocity

1 1 1 0    // Unsuitable example system bitset
^ ^ ^      // Entity does not have OtherA!
Position, Velocity, OtherA
So far, my solution is this one:
if((entityBitset & systemBitset) == systemBitset) { /* entity is suitable! */ }
It seems to work, but I found it after doodling bitsets on a whiteboard. Is it correct? Can it be improved any further? (Entities will be created and destroyed an immense amount of times in my games, so performance is very important!)
Code is here if needed (shouldn't be), but it's almost impossible to read.

Your check
(a & b) == b; // check whether b is a subset of a
checks whether b is a subset of a, or equivalently, whether a contains/includes b. Note that you are creating one temporary followed by the break-early operator==.
This is equivalent to checking whether the difference of b and a is empty (note the order!)
(b & ~a).none();
This will be equally fast: a temporary followed by a break-early .none().
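As a quick sanity check (not part of the original answer), here is a minimal sketch reproducing the example bitsets from the question and asserting that both formulations agree; the bit-to-component assignment is just an assumption for the demo:

#include <bitset>
#include <cassert>

int main() {
    // bit 0 = Position, bit 1 = Velocity, bit 2 = OtherA, bit 3 = OtherB
    std::bitset<4> entity("1011");      // has Position, Velocity, OtherB
    std::bitset<4> suitable("0011");    // needs Position, Velocity
    std::bitset<4> unsuitable("0111");  // needs Position, Velocity, OtherA

    assert((entity & suitable) == suitable);      // subset -> matches
    assert((suitable & ~entity).none());          // same check, same result
    assert((entity & unsuitable) != unsuitable);  // OtherA missing -> no match
    assert(!(unsuitable & ~entity).none());       // same check, same result
}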
Given the interface of std::bitset, this is as fast as you can get. The problem with std::bitset is that all its bitwise members (&, |, ^ and ~) loop over every word. The early-termination operations like none(), any(), == or < cannot be interleaved with them, because std::bitset does not expose the underlying word storage, so you cannot perform the iteration yourself.
However, if you were to write your own bitset class, you could write a special-purpose includes() algorithm that loops over the words, doing the & until it can break early:
// test whether this includes other
bool YourBitSet::includes(YourBitSet const& other) const {
    for (auto i = 0; i < NumWords; ++i)
        if ((other.word[i] & ~(this->word[i])) != 0)
            return false;
    return true;
}
A similar algorithm missing from std::bitset would be intersects(), to efficiently test (a & b) != 0. Currently you have to do the bitwise AND first and then the test for zero, whereas that could be done more efficiently in one loop. If std::bitset ever gets updated, it would be nice if it included includes() and intersects() primitives.
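For illustration, a hypothetical intersects() for the same hand-rolled YourBitSet class sketched above (word and NumWords are the assumed storage members, as in includes()) could break early in exactly the same way:

// test whether this and other share at least one bit, i.e. (a & b) != 0,
// without materialising a temporary bitset
bool YourBitSet::intersects(YourBitSet const& other) const {
    for (auto i = 0; i < NumWords; ++i)
        if ((this->word[i] & other.word[i]) != 0)
            return true;   // found a common bit: break early
    return false;
}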


May Leetcode Speedrun Question: Single element in a Sorted Array

So I was watching Errichto complete these challenges and I was amazed at how fast he solved the "Single element in a Sorted Array". From a beginner's perspective, it does look impressive - maybe for senior devs the speed is quite normal.
You are given a sorted array where all elements are integers, and all elements appear exactly twice in the array, except for one element, which appears exactly once. (i.e., all elements are duplicated, except one.) You need to find the element appearing exactly once.
I am just here to understand how said code works:
class Solution {
public:
    int singleNonDuplicate(vector<int>& nums) {
        long long a = 0;
        for (int x : nums) {
            a ^= x;
        }
        return a;
    }
};
Here's what I've got so far:
for every integer "x" in the vector/array "nums", a becomes a ^ x (if what I said is correct).
And here are my questions:
Wouldn't a ^ x always be 0, since a is 0 from the beginning?
And what is the difference between
int singleNonDuplicate(vector<int> nums) {
    // ...
}

and

int singleNonDuplicate(vector<int>& nums) {
    // ...
}
I've understood this: vector<int> nums is pass by value (you're working with a "copy" of nums inside the function) and vector<int>& nums is pass by reference (you're working with nums itself inside the function).
Does the "&" matter if you were to solve the problem just like Errichto?
ps:
sorry for possible mistakes from a programming perspective, I might've accidentally said some wrong things.
yes, I will learn C++ sooner or later; 2020 is the first year in my life where I actually have a "programming" class in my schedule. These videos are entertaining, and I'm curious to see why said code works and to try to understand it.
Casual proof:
(If you're interested in areas of study that help you to come up with solutions like this and understand them, I'd suggest Discrete Mathematics and Group Theory / Abstract Algebra.)
I think I know the question you were referencing. It goes something like,
You are given an unsorted array where all elements are integers, and all elements appear exactly twice in the array, except for one element, which appears exactly once. (i.e., all elements are duplicated, except one.)
You're on the right track for the first part, why the algorithm works. It takes advantage of a few properties of XOR:
X^0=X
X^X=0
The XOR operation is commutative and associative.
# proof
# since XOR is commutative, we can take advantage
# of the fact that all elements except our target
# occur in pairs of two:
P1, P1 = Two integers of the same value in a pair.
T = Our target.
# sample unsorted order, array size = 7 = (3*2)+1
[ P3, P1, T, P1, P2, P3, P2 ]
# since XOR is commutative, we can re-arrange the array
# to our liking, and still get the same value as
# the XOR algorithm.
# here, we move our target to the front, and then group
# each pair together. I've arranged them in ascending
# order, but that's not important:
[ T, P1, P1, P2, P2, P3, P3 ]
# write out the equation our algorithm is solving:
solution = 0 ^ T ^ P1 ^ P1 ^ P2 ^ P2 ^ P3 ^ P3
# because XOR is associative, we can use parens
# to indicate elements of the form X^X=0:
solution = T ^ (P1 ^ P1) ^ (P2 ^ P2) ^ (P3 ^ P3) ^ 0
# substitute X^X=0
solution = T ^ 0 ^ 0 ^ 0 ^ 0
# use X^0=X repeatedly
solution = T
So we know that running that algorithm will give us our target, T.
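If you want to convince yourself numerically, here is a tiny self-contained check of the same idea (the concrete values 7, 13, 99 and the target 42 are just made up for the example):

#include <cassert>
#include <vector>

int main() {
    // sample unsorted order, like [ P3, P1, T, P1, P2, P3, P2 ] in the proof
    std::vector<int> nums = {99, 7, 42, 7, 13, 99, 13};
    int a = 0;
    for (int x : nums)
        a ^= x;            // the same loop as in the solution above
    assert(a == 42);       // every pair cancels out, only the target remains
}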
On using & to pass-by-reference instead of pass-by-value:
Your understanding is correct. Here, it doesn't make a real difference.
Pass-by-reference lets you modify the original value in place, which he doesn't do.
Pass-by-value copies the vector, which wouldn't meaningfully impact performance here.
So he gets style points for using pass-by-reference, and if you're using leetcode to demonstrate your diligence as a software developer it's good to see, but it's not pertinent to his solution.
^ is the XOR operator in C++ (and most programming languages), not the power operator (which I guess you were assuming).
I don't know which problem you are talking about, but if it's finding the only unique element in an array (given that every other element occurs twice),
then the logic behind solving it is:
a XOR a equals 0
a XOR 0 equals a
So if we XOR all the elements in the array, the elements occurring twice cancel out to 0.
The only remaining element is then XORed with 0, and hence we get that element.
The answer to your second query is that we pass the array by reference whenever we want to modify it.
PS: I am also new to programming. I hope I answered your queries.

How to shift a matrix in Eigen?

I'm trying to make a simple shift of Eigen's Matrix<int,200,200>, but I can't get Eigen::Translation to work. Since I'm rather new to C++, Eigen's official documentation isn't of much use to me; I can't extract any useful information from it. I've tried to declare my translation as:
Translation<int,2> t(1,0);
hoping for a simple one-row shift, but I can't get it to do anything with my matrix. Actually, I'm not even sure if that's what this method is for... if not, could you please recommend some other, preferably fast, way of doing matrix translation on a torus? I'm looking for an equivalent to MATLAB's circshift.
The Translation class template is from the Geometry module and represents a translation transformation. It has nothing to do with shifting values in an array/matrix.
According to this discussion, the shifting feature wasn't implemented yet as of 2010 and was of low priority back then. I don't see any indication in the documentation that things are any different now, 4 years later.
So, you need to do it yourself. For example:
/// Shifts a matrix/vector row-wise.
/// A negative \a down value is taken to mean shifting up.
/// When passed zero for \a down, the input matrix is returned unchanged.
/// The type \a M can be either a fixed- or dynamically-sized matrix.
template <typename M> M shiftedByRows(const M & in, int down)
{
    if (!down) return in;
    M out(in.rows(), in.cols());
    if (down > 0) down = down % in.rows();
    else down = in.rows() - (-down % in.rows());
    // We avoid the implementation-defined sign of modulus with a negative argument.
    int rest = in.rows() - down;
    out.topRows(down) = in.bottomRows(down);
    out.bottomRows(rest) = in.topRows(rest);
    return out;
}

How is C++'s std::set class able to implement a binary tree for ANY type of data structure?

I understand how binary trees are implemented for most native elements such as ints or strings. So I can understand an implementation of std::set that would have a constructor like
switch(typeof(T)) // T being the typename/class in the implementation
{
    case int:
    {
        /* create a binary tree structure that uses the bitshift operator to
           add elements, e.g. 13 = 1101 is stored by following the path
           1 -> 1 -> 0 -> 1 down the tree, one branch per bit */
    }
    case string:
    {
        /* Do something where the string is added to a tree by going
           letter-by-letter and looking whether the letter is in the
           second half of the alphabet (?) */
    }
    // etcetera for every imaginable type
}
but obviously this is not how std::set is actually implemented, because it is able to create a tree even when I use a homemade data structure like
struct myStruct
{
    char c;
    bool b;
};

std::set<myStruct> mySet;
Could it be possible to create a generic binary tree class that looks at all the bits of a data structure and does something like the int case I mentioned above?
For instance, in the case of myStruct, the size of the structure is 2 bytes, or 16 bits, so a myStruct element S with S.c = '!' and S.b = true could look like

00100001 00000001
(c part) (b part)

and the tree would branch along that bit path (0 -> 0 -> 1 -> 0 -> 0 -> 0 -> 0 -> 1, and so on for the b part), since the ASCII value for '!' is 33 (00100001) and a bool = true as an int is 1. Then again, this could be inefficient to do generically, because a very large data structure would correspond to a gigantic tree that might take more time to traverse than just doing a basic linear search on the elements.
Does that make sense? I'm truly confused and would love it if some people here could set me straight.
What you want is a good book on templates and template meta-programming.
In short, the std::set class only defines a prototype for a class, which is then instantiated at compile time using the provided arguments (some key type Key, with std::less<Key> and std::allocator<Key> deduced if not given, or whatever else you provide).
A big part of the flexibility comes from being able to create partial specialisations and using other templates and default arguments.
Now, std::less works out of the box for all basic types and many standard-library types (anything that defines operator<), but not for custom types that don't define one.
There are 3 ways to provide the comparison std::set needs:
Override the default template argument by providing your own comparator type to the template (if the comparator has state, it may also make sense to pass an object to the constructor).
Specialise std::less.
Add a comparison operator (operator<).
Let's try out your example:
#include <set>

struct myStruct {
    char c;
    bool b;
};

int main() {
    std::set<myStruct> mySet;
    mySet.insert(myStruct());
}
If we compile this, we actually get an error. I've reduced the error messages to the interesting part and we see:
.../__functional_base:63:21: error: invalid operands to binary expression ('const myStruct' and 'const myStruct')
{return __x < __y;}
We can see here that std::set, to do the work it needs to do, needs to be able to compare these two objects against each other. Let's implement that:
bool operator<(myStruct const & lhs, myStruct const & rhs) {
    if (lhs.c < rhs.c)
        return true;
    if (lhs.c > rhs.c)
        return false;
    return lhs.b < rhs.b;
}
Now the code will compile fine.
All of this works because std::set<T> expects to be able to compare two T objects via std::less<T>, which by default simply evaluates lhs < rhs.
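For completeness, option 1 from the list above (overriding the comparison template argument instead of defining operator<) could look roughly like this; the comparator name myStructLess is made up for the sketch:

#include <set>

struct myStruct {
    char c;
    bool b;
};

// hypothetical stand-alone comparator; std::set calls it instead of std::less
struct myStructLess {
    bool operator()(myStruct const & lhs, myStruct const & rhs) const {
        if (lhs.c != rhs.c) return lhs.c < rhs.c;
        return lhs.b < rhs.b;
    }
};

int main() {
    std::set<myStruct, myStructLess> mySet;  // comparator supplied explicitly
    mySet.insert(myStruct());
}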
This is highly implementation specific: actual implementations can vary here. I hope to just give you an idea of how it works.
A binary tree typically will hold actual values at each spot in the tree: your diagram makes me think the values are only present at leaf nodes (are you thinking of a trie?). Consider a string binary tree, with members cat, duck, goose, and dog:
      dog
     /   \
   cat   duck
            \
           goose
Note here that each node is a value that exists in the set. (Here, our set has 4 elements.) While perhaps you could do some sort of 0/1 prefix, you'd need to be able to convert the object to a bitstring (looking at the raw underlying bytes is not guaranteed to work), and it isn't really needed.
You need to understand templates in C++. Remember that a set<T> is "templated" on T; that is, T is whatever you specify when you use a set (a string (set<string>), your custom struct (set<MyStruct>), etc.). Inside the implementation of set, you might imagine a helper class like:
template<typename T>
struct node {
    T value;
    node<T> *left, *right;
};
This structure holds a value and which node is to the left and right of it. set<T>, because it has T to use in its implementation, can use that T to also template this node structure. In my example, the bit labeled "dog" would be a node, with value being a std::string with the value "dog", left pointing to the node holding "cat", and right pointing to the node holding "duck".
When you look up a value in a set, it looks through the tree for the value, starting at the root. The set can "know" which way to go (left or right) by comparing the value you're looking for / inserting / removing with the node it's looking at. (One of the requirements for a set is that whatever you template it on be comparable with <, or you give it a function to act in place of <: so, int works, std::string works, and MyStruct can work if you either define < or write a "comparator".)
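A sketch of that lookup, using the node structure above (illustrative only; a real std::set walks a balanced red-black tree, but the compare-and-descend logic is the same, and find_node is a made-up name):

template<typename T, typename Compare>
node<T>* find_node(node<T>* root, const T & value, Compare less) {
    while (root) {
        if (less(value, root->value))
            root = root->left;           // value sorts before this node
        else if (less(root->value, value))
            root = root->right;          // value sorts after this node
        else
            return root;                 // neither is less: equivalent, found it
    }
    return 0;                            // fell off the tree: not in the set
}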
You can always compare two of a kind by comparing their byte array, no matter what.
So, if the set is represented as a sorted binary tree, a negative memcmp result indicates "insert left", and a positive one says "insert right".
Later
I was so eager to show that there's no need to branch according to the bits of a set element that I did not consider that there's a restriction that requires a std::set element to implement operator<.
Am I forgiven?

Hashing an unordered sequence of small integers

Background
I have a large collection (~thousands) of sequences of integers. Each sequence has the following properties:
it is of length 12;
the order of the sequence elements does not matter;
no element appears twice in the same sequence;
all elements are smaller than about 300.
Note that the properties 2. and 3. imply that the sequences are actually sets, but they are stored as C arrays in order to maximise access speed.
I'm looking for a good C++ algorithm to check if a new sequence is already present in the collection. If not, the new sequence is added to the collection. I thought about using a hash table (note however that I cannot use any C++11 constructs or external libraries, e.g. Boost). Hashing the sequences and storing the values in a std::set is also an option, since collisions can be just neglected if they are sufficiently rare. Any other suggestion is also welcome.
Question
I need a commutative hash function, i.e. a function that does not depend on the order of the elements in the sequence. I thought about first reducing the sequences to some canonical form (e.g. sorting) and then using standard hash functions (see refs. below), but I would prefer to avoid the overhead associated with copying (I can't modify the original sequences) and sorting. As far as I can tell, none of the functions referenced below are commutative. Ideally, the hash function should also take advantage of the fact that elements never repeat. Speed is crucial.
Any suggestions?
http://partow.net/programming/hashfunctions/index.html
http://code.google.com/p/smhasher/
Here's a basic idea; feel free to modify it at will.
Hashing an integer is just the identity.
We use the formula from boost::hash_combine to combine hashes.
We sort the array to get a unique representative.
Code:
#include <algorithm>
#include <cstddef>

std::size_t array_hash(int (&array)[12])
{
    int a[12];
    std::copy(array, array + 12, a);
    std::sort(a, a + 12);

    std::size_t result = 0;
    for (int * p = a; p != a + 12; ++p)
    {
        std::size_t const h = *p; // the "identity hash"
        result ^= h + 0x9e3779b9 + (result << 6) + (result >> 2);
    }
    return result;
}
Update: scratch that. You just edited the question to be something completely different.
If every number is at most 300, then you can squeeze each element of the sorted array into 9 bits, i.e. 108 bits in total. The "unordered" property only saves you an extra factor of 12!, which is about 29 bits, so it doesn't really make a difference.
You can either look for a 128 bit unsigned integral type and store the sorted, packed set of integers in that directly. Or you can split that range up into two 64-bit integers and compute the hash as above:
uint64_t hash = lower_part + 0x9e3779b9 + (upper_part << 6) + (upper_part >> 2);
(Or maybe use 0x9E3779B97F4A7C15 as the magic number, which is the 64-bit version.)
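A sketch of that packing variant, reusing the signature style of the first snippet; the split of 7 values into the low word and 5 into the high word (9 bits each) is an arbitrary choice made here for illustration:

#include <stdint.h>
#include <algorithm>

uint64_t packed_hash(int (&array)[12])
{
    int a[12];
    std::copy(array, array + 12, a);
    std::sort(a, a + 12);               // canonical order

    uint64_t lower_part = 0, upper_part = 0;
    for (int i = 0; i < 7; ++i)         // 7 * 9 = 63 bits in the low word
        lower_part |= static_cast<uint64_t>(a[i]) << (9 * i);
    for (int i = 7; i < 12; ++i)        // 5 * 9 = 45 bits in the high word
        upper_part |= static_cast<uint64_t>(a[i]) << (9 * (i - 7));

    return lower_part + 0x9E3779B97F4A7C15ULL + (upper_part << 6) + (upper_part >> 2);
}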
Sort the elements of your sequences numerically and then store the sequences in a trie. Each level of the trie is a data structure in which you search for the element at that level ... you can use different data structures depending on how many elements are in it ... e.g., a linked list, a binary search tree, or a sorted vector.
If you want to use a hash table rather than a trie, then you can still sort the elements numerically and then apply one of those non-commutative hash functions. You need to sort the elements in order to compare the sequences, which you must do because you will have hash table collisions. If you didn't need to sort, then you could multiply each element by a constant factor that would smear them across the bits of an int (there's theory for finding such a factor, but you can find it experimentally), and then XOR the results. Or you could look up your ~300 values in a table, mapping them to unique values that mix well via XOR (each one could be a random value chosen so that it has an equal number of 0 and 1 bits -- each XOR flips a random half of the bits, which is optimal).
I would just use the sum function as the hash and see how far you get with that. This doesn't take advantage of the non-repeating property of the data, nor of the fact that the elements are all < 300. On the other hand, it's blazingly fast.
#include <numeric>

std::size_t hash(int (&arr)[12]) {
    return std::accumulate(arr, arr + 12, 0);
}
Since the function needs to be unaware of ordering, I don't see a smart way of taking advantage of the limited range of the input values without first sorting them. If this is absolutely required, collision-wise, I'd hard-code a sorting network (i.e. a number of if…else statements) to sort the 12 values in place (but I have no idea what a sorting network for 12 values would look like, or even whether it's practical).
EDIT After the discussion in the comments, here’s a very nice way of reducing collisions: raise every value in the array to some integer power before summing. The easiest way of doing this is via transform. This does generate a copy but that’s probably still very fast:
#include <algorithm>
#include <numeric>

struct pow2 {
    int operator ()(int n) const { return n * n; }
};

std::size_t hash(int (&arr)[12]) {
    int raised[12];
    std::transform(arr, arr + 12, raised, pow2());
    return std::accumulate(raised, raised + 12, 0);
}
You could toggle the bits corresponding to each of the 12 integers in a bitset of size 300, then use the formula from boost::hash_combine to combine the ten 32-bit integers implementing that bitset.
This gives a commutative hash function, does not use sorting, and takes advantage of the fact that elements never repeat.
This approach may be generalized if we choose an arbitrary bitset size and set or toggle an arbitrary number of bits for each of the 12 integers (which bits to set/toggle for each of the 300 values is determined either by a hash function or by a pre-computed lookup table). This results in a Bloom filter or related structures.
We can choose a Bloom filter of size 32 or 64 bits. In this case, there is no need to combine pieces of a large bit vector into a single hash value. In the case of a classical Bloom filter of size 32, the optimal number of hash functions (or non-zero bits for each value of the lookup table) is 2.
If, instead of the "or" operation of the classical Bloom filter, we choose "xor" and use half non-zero bits for each value of the lookup table, we get the solution mentioned by Jim Balter.
If, instead of "or", we choose "+" and use approximately half non-zero bits for each value of the lookup table, we get a solution similar to the one suggested by Konrad Rudolph.
I accepted Jim Balter's answer because he's the one who came closest to what I eventually coded, but all of the answers got my +1 for their helpfulness.
Here is the algorithm I ended up with. I wrote a small Python script that generates 300 64-bit integers such that their binary representation contains exactly 32 true and 32 false bits. The positions of the true bits are randomly distributed.
import itertools
import random
import sys

def random_combination(iterable, r):
    "Random selection from itertools.combinations(iterable, r)"
    pool = tuple(iterable)
    n = len(pool)
    indices = sorted(random.sample(xrange(n), r))
    return tuple(pool[i] for i in indices)

mask_size = 64
mask_size_over_2 = mask_size / 2
nmasks = 300
suffix = 'UL'

print 'HashType mask[' + str(nmasks) + '] = {'
for i in range(nmasks):
    combo = random_combination(xrange(mask_size), mask_size_over_2)
    mask = 0
    for j in combo:
        mask |= (1 << j)
    if i < nmasks - 1:
        print '\t' + str(mask) + suffix + ','
    else:
        print '\t' + str(mask) + suffix + ' };'
The C++ array generated by the script is used as follows:
#include <stdint.h>   // int_least64_t
#include <numeric>    // std::accumulate

typedef int_least64_t HashType;

const int maxTableSize = 300;
HashType mask[maxTableSize] = {
    // generated array goes here
};

inline HashType xorrer(HashType const &l, HashType const &r) {
    return l ^ mask[r];
}

HashType hashConfig(HashType *sequence, int n) {
    return std::accumulate(sequence, sequence + n, (HashType)0, xorrer);
}
This algorithm is by far the fastest of those that I have tried (this, this with cubes and this with a bitset of size 300). For my "typical" sequences of integers, collision rates are smaller than 1E-7, which is completely acceptable for my purpose.
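For reference, a usage sketch of the code above; the names seen and addIfNew are made up here, and collisions are simply ignored, as discussed in the question:

#include <set>

std::set<HashType> seen;   // hashes of all sequences added to the collection so far

// Returns true if the sequence's hash was new and has been recorded, false otherwise.
// A hash collision would wrongly report "not new", but at a rate below ~1E-7 that is
// acceptable for this use case.
bool addIfNew(HashType *sequence, int n) {
    return seen.insert(hashConfig(sequence, n)).second;
}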

Super long arrays in C++

I have two sets, A and B. Set A contains unique elements. Set B contains all elements. Each element in B is a 10 by 10 matrix where all entries are either 1 or 0. I need to scan through set B, and every time I encounter a new matrix I will add it to set A. Therefore set A is a subset of B containing only unique matrices.
It seems like you might really be looking for a way to manage a large, sparse array. Trivially, you could use a hash map with your giant index as your key, and your data as the value. If you talk more about your problem, we might be able to find a more appropriate data structure for your problem.
Update:
If set B is just some set of matrices and not the set of all possible 10x10 binary matrices, then you just want a sparse array. Every time you find a new matrix, you compute its key (which could simply be the matrix converted into a 100 digit binary value, or even a 100 character string!), look up that index. If no such key exists, insert the value 1 for that key. If the key does exist, increment and re-store the new value for that key.
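A sketch of that idea, using std::map as a stand-in for a hash map (pre-C++11 has no standard std::unordered_map) and a made-up matrixKey helper that flattens a 10x10 matrix of 0/1 ints into a 100-character string key:

#include <map>
#include <string>

// Flatten a 10x10 0/1 matrix into a 100-character key of '0'/'1' digits.
std::string matrixKey(int m[10][10]) {
    std::string key;
    key.reserve(100);
    for (int r = 0; r < 10; ++r)
        for (int c = 0; c < 10; ++c)
            key += m[r][c] ? '1' : '0';
    return key;
}

std::map<std::string, int> counts;   // sparse "array": key -> occurrence count

void record(int m[10][10]) {
    ++counts[matrixKey(m)];          // first sighting inserts 0 and increments to 1
}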
Here is some code, maybe not very efficient:
#include <vector>
#include <set>
#include <bitset>
#include <iterator>
#include <algorithm>

// I assume your 10x10 boolean matrix is implemented as a bitset of 100 bits.

// Comparison of bitsets
template<size_t N>
class bitset_comparator
{
public:
    bool operator () (const std::bitset<N> & a, const std::bitset<N> & b) const
    {
        for (size_t i = 0; i < N; ++i)
        {
            if (!a[i] && b[i]) return true;
            else if (!b[i] && a[i]) return false;
        }
        return false;
    }
};

int main(int, char * [])
{
    std::set< std::bitset<100>, bitset_comparator<100> > A;
    std::vector< std::bitset<100> > B;

    // Fill B in some manner ...

    // Keeping unique elements in A
    std::copy(B.begin(), B.end(), std::inserter(A, A.begin()));
}
You can use std::list instead of std::vector. The relative order of elements in B is not preserved in A (elements in A are sorted).
EDIT: I inverted A and B in my first post. It's correct now. Sorry for the inconvenience. I also corrected the comparison functor.
Each element in B is a 10 by 10 matrix where all entries are either 1 or 0.
Good, that means it can be represented by a 100-bit number. Let's round that up to 128 bits (sixteen bytes).
One approach is to use linked lists - create a structure like (in C):
typedef struct sNode {
    unsigned char bits[16];
    struct sNode *next;
} sNode;
and maintain the entire list B as a sorted linked list.
The performance will be somewhat less (a) than using the 100-bit number as an array index into a truly immense (to the point of impossible given the size of the known universe) array.
When it comes time to insert a new item into B, insert it at its desired position (before one that's equal or greater). If it was a brand new one (you'll know this if the one you're inserting before is different), also add it to A.
(a) Though probably not unmanageably so - there are options you can take to improve the speed.
One possibility is to use skip lists, for faster traversal during searches. These add another pointer that references not the next element but an element 10 (or 100, or 1000) elements further along. That way you can get close to the desired element reasonably quickly and just do the one-step search after that point.
Alternatively, since you're talking about bits, you can divide B into (for example) 1024 sub-B lists. Use the first 10 bits of the 100-bit value to figure out which sub-B you need to use and only store the next 90 bits. That alone would increase search speed by an average of 1000 (use more leading bits and more sub-Bs if you need improvement on that).
You could also use a hash on the 100-bit value to generate a smaller key which you can use as an index into an array/list, but I don't think that will give you any real advantage over the method in the previous paragraph.
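A rough sketch of the "sub-B lists" idea from two paragraphs up, assuming the matrix is kept as a std::bitset<100>; the bucket layout, the choice of std::set per bucket, and the names buckets/insertMatrix are all illustrative:

#include <bitset>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// 1024 buckets selected by the first 10 bits; each bucket only has to
// distinguish the remaining 90 bits. A std::set stands in for the sorted list.
std::vector< std::set<std::string> > buckets(1024);

// Returns true if the matrix was not seen before (and is now recorded).
bool insertMatrix(const std::bitset<100> & m) {
    std::size_t bucket = 0;
    for (int i = 0; i < 10; ++i)
        bucket = (bucket << 1) | (m[99 - i] ? 1 : 0);   // first 10 bits -> bucket index

    std::string rest;
    rest.reserve(90);
    for (int i = 0; i < 90; ++i)
        rest += m[i] ? '1' : '0';                       // remaining 90 bits -> key

    return buckets[bucket].insert(rest).second;
}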
Convert each matrix into a string of 100 binary digits. Now run it through the Linux utilities:
sort | uniq
If you really need to do this in C++, it is possible to implement your own merge sort, then the uniq part becomes trivial.
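A sketch of that approach in C++, with std::sort standing in for the hand-written merge sort and std::unique playing the part of uniq; each row is assumed to already be a 100-character string of binary digits, and removeDuplicates is a made-up name:

#include <algorithm>
#include <string>
#include <vector>

void removeDuplicates(std::vector<std::string> & rows) {
    std::sort(rows.begin(), rows.end());                              // "sort"
    rows.erase(std::unique(rows.begin(), rows.end()), rows.end());    // "uniq"
}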
You don't need N buckets, where N is the number of all possible inputs. A binary tree will do just fine. This is what the std::set class in C++ implements.
vector<vector<vector<int> > > A; // vector of 10x10 matrices
// fill the matrices in A here
set<vector<vector<int> > > B(A.begin(), A.end()); // voila!
// now B contains all elements in A, but only once for duplicates