unordered_set: is pointer address a good hash? - c++

I want to store a set of (smart) pointers in a hash set (std::unordered_set or boost::unordered_set). After 10 seconds of thought, I came up with this hash function:
typedef boost::shared_ptr<myType> ref_t;

struct SharedPtrHash : public std::unary_function<ref_t, std::size_t> {
    std::size_t operator()(ref_t const& obj) const {
        return reinterpret_cast<std::size_t>( obj.get() );
    }
};
My question is: is this hash a good idea? I'm entertaining the thought that this hash will have zero or very few collisions (maybe there is some prime-number modulus under the hood spoiling all my fun).
Further details on the purpose: the hash is used for recycling storage of big objects, so I need a fast way to detect whether a big object is already in the bin.
In case it is not, what would be an ideal hash for pointers, either smart or dumb ones?

If you want to detect objects that are not identical even though their contents might be equal, you have no choice but to use the address of the object in the hash. The only question is whether to use the address directly or to run it through a formula. Dividing by sizeof(mytype) would tighten up the holes in the distribution.
Edit: Here's an untested template implementation that should work with all shared_ptr types, along with an equal_to function to complete the requirements for std::unordered_set. Don't use this generic implementation if you have other objects that require a hash based on the value instead of the pointer.
template<typename T>
size_t hash(const std::shared_ptr<T>& ptr)
{
    return ((size_t) ptr.get()) / sizeof(T);
}

template<typename T>
bool equal_to(const std::shared_ptr<T>& left, const std::shared_ptr<T>& right)
{
    return left.get() == right.get();
}

The following code compiles perfectly (GCC 4.7, Boost 1.47):
#include <boost/unordered_set.hpp>
#include <boost/shared_ptr.hpp>

struct Foo { };

int main()
{
    boost::unordered_set<boost::shared_ptr<int>> s;
    boost::shared_ptr<int> pi(new int);
    s.insert(pi);

    boost::unordered_set<boost::shared_ptr<Foo>> t;
    boost::shared_ptr<Foo> pf(new Foo);
    t.insert(pf);
}

The default Boost.Hash hash function for integral types is the identity function, so I don't think doing the same for pointers is a bad idea. It would have the same collision ratio.
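For what it's worth, here is a quick way to see that behavior. The exact values are an observation about typical Boost versions, not a documented guarantee:

#include <boost/functional/hash.hpp>
#include <cassert>

int main()
{
    boost::hash<int> h;
    assert(h(0) == 0);
    assert(h(42) == 42);   // integral values hash to themselves
}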

Related

std::hash variations of object with arbitrary number of attributes of fundamental type

Discussion:
Let's say I have a struct/class with an arbitrary number of attributes that I want to use as the key of a std::unordered_map, e.g.:
struct Foo {
    int i;
    double d;
    char c;
    bool b;
};
I know that I have to define a hasher functor for it, e.g.:
struct FooHasher {
    std::size_t operator()(Foo const &foo) const;
};
And then define my std::unordered_map as:
std::unordered_map<Foo, MyValueType, FooHasher> myMap;
What bothers me, though, is how to define the call operator for FooHasher. One way to do it, which I tend to prefer, is with std::hash. However, there are numerous variations, e.g.:
std::size_t operator()(Foo const &foo) const {
    return std::hash<int>()(foo.i) ^
           std::hash<double>()(foo.d) ^
           std::hash<char>()(foo.c) ^
           std::hash<bool>()(foo.b);
}
I've also seen the following scheme:
std::size_t operator()(Foo const &foo) const {
    return std::hash<int>()(foo.i) ^
           (std::hash<double>()(foo.d) << 1) ^
           (std::hash<char>()(foo.c) >> 1) ^
           (std::hash<bool>()(foo.b) << 1);
}
I've seen also some people adding the golden ratio:
std::size_t operator()(Foo const &foo) const {
    return (std::hash<int>()(foo.i) + 0x9e3779b9) ^
           (std::hash<double>()(foo.d) + 0x9e3779b9) ^
           (std::hash<char>()(foo.c) + 0x9e3779b9) ^
           (std::hash<bool>()(foo.b) + 0x9e3779b9);
}
Questions:
What are they trying to achieve by adding the golden ratio or shifting bits in the result of std::hash?
Is there an "official scheme" to std::hash an object with arbitrary number of attributes of fundamental type?
A simple xor is symmetric and behaves badly when fed the "same" value multiple times (hash(a) ^ hash(a) is zero).
This is the question of combining hashes. boost has a hash_combine that is pretty decent. Write a hash combiner, and use it.
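If you don't want to pull in Boost, a minimal combiner in the same spirit might look like this. The shift-and-add formula mirrors the well-known boost::hash_combine recipe, and FooHasher refers to the functor from the question:

#include <cstddef>
#include <functional>

template <class T>
void hash_combine(std::size_t& seed, T const& v)
{
    // Mix the new hash into the running seed; the constant and shifts
    // follow the classic Boost formula.
    seed ^= std::hash<T>{}(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

struct FooHasher {
    std::size_t operator()(Foo const& foo) const
    {
        std::size_t seed = 0;
        hash_combine(seed, foo.i);   // order-dependent, so the xor pitfalls go away
        hash_combine(seed, foo.d);
        hash_combine(seed, foo.c);
        hash_combine(seed, foo.b);
        return seed;
    }
};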
There is no "official scheme" to solve this problem.
Myself, I typically write a super-hasher that can take anything and hash it. It hash combines tuples and pairs and collections automatically, where it first hashes the count of elements in the collection, then the elements.
It finds hash(t) via ADL first, and if that fails checks if it has a manually written hash in a helper namespace (used for std containers and types), and if that fails does a std::hash<T>{}(t).
Then my hash support for Foo looks like:
struct Foo {
    int i;
    double d;
    char c;
    bool b;

    friend auto mytie(Foo const& f) {
        return std::tie(f.i, f.d, f.c, f.b);
    }
    friend std::size_t hash(Foo const& f) {
        return hasher::hash(mytie(f));
    }
};
where I use mytie to move Foo into a tuple, then use the std::tuple overload of hasher::hash to get the result.
I like the idea of hashes of structurally similar types having the same hash. This lets me act as if my hash is transparent in some cases.
Note that hashing unordered meows in this manner is a bad idea, as an asymmetric hash of an unordered meow may generate spurious misses.
(Meow is the generic name for map and set. Do not ask me why: Ask the STL.)
The standard hash framework is lacking with respect to combining hashes. Combining hashes using xor is sub-optimal.
A better solution is proposed in N3980 "Types Don't Know #".
The main idea is using the same hash function and its state to hash more than one value/element/member.
With that framework your hash function would look like this:
template <class HashAlgorithm>
void hash_append(HashAlgorithm& h, Foo const& x) noexcept
{
    using std::hash_append;
    hash_append(h, x.i);
    hash_append(h, x.d);
    hash_append(h, x.c);
    hash_append(h, x.b);
}
And the container:
std::unordered_map<Foo, MyValueType, std::uhash<>> myMap;
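To make the idea concrete, here is a rough sketch of what a minimal HashAlgorithm and uhash could look like. fnv1a, the generic hash_append, and uhash below are illustrative stand-ins for what N3980 proposes, not existing standard-library components:

#include <cstddef>
#include <cstdint>
#include <type_traits>

// One hashing state that is fed bytes, in the spirit of N3980.
class fnv1a
{
    std::uint64_t state_ = 14695981039346656037ull;
public:
    void operator()(void const* key, std::size_t len) noexcept
    {
        unsigned char const* p = static_cast<unsigned char const*>(key);
        for (std::size_t i = 0; i < len; ++i) {
            state_ ^= p[i];
            state_ *= 1099511628211ull;
        }
    }
    explicit operator std::size_t() noexcept { return static_cast<std::size_t>(state_); }
};

// hash_append for trivially copyable scalars: feed the raw bytes to the state.
template <class HashAlgorithm, class T>
typename std::enable_if<std::is_trivially_copyable<T>::value>::type
hash_append(HashAlgorithm& h, T const& t) noexcept
{
    h(&t, sizeof(t));
}

// The hasher handed to the unordered container: create a state, append, extract.
template <class HashAlgorithm = fnv1a>
struct uhash
{
    template <class T>
    std::size_t operator()(T const& t) const noexcept
    {
        HashAlgorithm h;
        hash_append(h, t);
        return static_cast<std::size_t>(h);
    }
};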

how to stop automatic conversion from int to float and vice-versa in std::map

I wrote a small program using std::map, as follows.
#include <iostream>
#include <map>
using namespace std;

int main()
{
    map<int,float> m1;
    m1.insert(pair<int,float>(10,15.0));        // step-1
    m1.insert(pair<float,int>(12.0,13));        // step-2
    cout << "map size=" << m1.size() << endl;   // step-3
}
I created a map with int as the key type and float as the value type (m1).
Created a normal int-float pair and inserted it into the map.
Created a crossed float-int pair and inserted it into the map. Now I know that implicit conversion is making this pair get inserted into the map.
Here I just don't want the implicit conversion to take place; a compiler error should be given instead.
What sort of changes do I have to make in this program/map so that the compiler flags an error when we try a step-2 type operation?
Here's a suggestion:
template <typename K, typename V, typename W>
void map_insert(map<K,V>& m, K k, W w) {
    V v = w;
    m.insert(pair<K,V>(k,v));
}

int main() {
    map<int,float> m1;
    map_insert(m1, 10, 15.0);
    map_insert(m1, 12.0, 13);   // compiler complains here
    cout << "map size=" << m1.size() << endl;
}
The third template parameter is a bit awkward but is necessary to allow casting from double to float.
This is not possible (and even if it were, it would be a major hack that you shouldn't use).
insert takes a value_type as argument, which is a pair<int const, float>.
So, when you try to insert a pair<float, int>, the compiler looks for a conversion, that is, a constructor of pair<int const, float> that takes a pair<float, int> as argument, and that constructor simply exists. In fact, I tried to come up with a partial specialization of that template member (the one that allows the conversion) which could then be made to fail on the remaining template parameter, but I didn't succeed; it seems not to be possible. Anyway, it would be a very dirty hack that you just shouldn't be doing to avoid a typo. Elsewhere you might need this conversion, and defining anything in namespace std is a no-no anyway.
So what is the solution to "How can I avoid this kind of typo?"
Here is what I usually do:
1) All my maps have a typedef for their type.
2) I then use ::value_type (and ::iterator etc) on that type exclusively.
This is not only more robust, it is also more flexible: you can change the container type later on and the code is likely to still work.
So, your code would become:
int main()
{
    typedef std::map<int,float> m_type;
    m_type m1;

    m1.insert(m_type::value_type(10,15.0));   // allowed
    m1.insert(m_type::value_type(12.0,13));   // no risk of a typo
}
An alternative solution would be to wrap your float in a custom class. This isn't a bad thing to do anyway, for (again) reasons of flexibility. It is rarely nice to have written code using a std::map<int, builtin-type> only to realize later that you need to store more data, and believe me, that happens a lot. You might as well start with a class from the beginning.
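A rough sketch of that last idea, using a hypothetical Celsius wrapper for the float; because its constructor is explicit, the swapped pair from step-2 no longer converts:

#include <map>
#include <utility>

class Celsius {
    float value_;
public:
    explicit Celsius(float value) : value_(value) {}
    float value() const { return value_; }
};

int main()
{
    std::map<int, Celsius> m;
    m.insert(std::make_pair(10, Celsius(15.0f)));   // allowed
    // m.insert(std::make_pair(12.0, 13));          // error: 13 will not implicitly become a Celsius
}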
There may well be a simpler way but this is what occurred to me:
#include <iostream>
#include <map>

template<typename Key, typename Value>
struct typesafe_pair
{
    const Key& key;
    const Value& value;

    explicit typesafe_pair(const Key& key, const Value& value)
        : key(key), value(value) {}

    operator typename std::map<Key, Value>::value_type() {
        return typename std::map<Key, Value>::value_type(key, value);
    }
};

int main()
{
    std::map<int,float> m1;
    m1.insert(std::pair<int,float>(10,15.0));        // allowed
    m1.insert(std::pair<float,int>(12.0,13));        // allowed!!
    m1.insert(typesafe_pair<int,float>(10, 15.0));   // allowed
    m1.insert(typesafe_pair<float,int>(12.0, 13));   // compiler error
    std::cout << "map size=" << m1.size() << std::endl;
}
EDIT 1: Someone may be able to provide a better (more efficient) solution involving rvalue references and perfect forwarding magic that I don't quite grasp yet.
EDIT 2: I think Carlo Wood has the best solution IMHO.

Interface for returning a bunch of values

I have a function that takes a number and returns up to that many things (say, ints). What's the cleanest interface? Some thoughts:
Return a vector<int>. The vector would be copied several times, which is inefficient.
Return a vector<int>*. My getter now has to allocate the vector itself, as well as the elements. There are all the usual problems of who has to free the vector, the fact that you can't allocate once and use the same storage for many different calls to the getter, etc. This is why STL algorithms typically avoid allocating memory, instead wanting it passed in.
Return a unique_ptr<vector<int>>. It's now clear who deletes it, but we still have the other problems.
Take a vector<int> as a reference parameter. The getter can push_back() and the caller can decide whether to reserve() the space. However, what should the getter do if the passed-in vector is non-empty? Append? Overwrite by clearing it first? Assert that it's empty? It would be nice if the signature of the function allowed only a single interpretation.
Pass a begin and end iterator. Now we need to return the number of items actually written (which might be smaller than desired), and the caller needs to be careful not to access items that were never written to.
Have the getter take an iterator, and the caller can pass an insert_iterator.
Give up and just pass a char *. :)
In C++11, where move semantics is supported for standard containers, you should go with option 1.
It makes the signature of your function clear, communicating that you just want a vector of integers to be returned, and it will be efficient, because no copy will be issued: the move constructor of std::vector will be invoked (or, most likely, Named Return Value Optimization will be applied, resulting in no move and no copy):
std::vector<int> foo()
{
    std::vector<int> v;
    // Fill in v...
    return v;
}
This way you won't have to deal with issues such as ownership, unnecessary dynamic allocations, and other concerns that just pollute the simplicity of your problem: returning a bunch of integers.
In C++03, you may want to go with option 4 and take an lvalue reference to a non-const vector: standard containers in C++03 are not move-aware, and copying a vector may be expensive. Thus:
void foo(std::vector<int>& v)
{
    // Fill in v...
}
However, even in that case, you should consider whether this penalty is really significant for your use cases. If it is not, you may well opt for a clearer function signature at the expense of some CPU cycles.
Also, C++03 compilers are capable of performing Named Return Value Optimization, so even though in theory a temporary should be copy-constructed from the value you return, in practice no copying is likely to happen.
You wrote it yourself:
... This is why STL algorithms typically avoid allocating memory, instead wanting it passed in
except that STL algorithms don't typically "want memory passed in", they operate on iterators instead. This is specifically to decouple the algorithm from the container, giving rise to:
option 8
decouple the value generation from both the use and storage of those values, by returning an input iterator.
The easiest way is using boost::function_input_iterator, but a sketch mechanism is below (mostly because I was typing faster than thinking).
Input iterator type
(uses C++11, but you can replace the std::function with a function pointer or just hard-code the generation logic):
#include <functional>
#include <iterator>

template <typename T>
class Generator : public std::iterator<std::input_iterator_tag, T> {
    int count_;
    std::function<T()> generate_;
public:
    Generator() : count_(0) {}
    Generator(int count, std::function<T()> func)
        : count_(count), generate_(func) {}
    Generator(Generator const &other)
        : count_(other.count_), generate_(other.generate_) {}
    // move, assignment etc. etc. omitted for brevity

    T operator*() { return generate_(); }

    Generator<T>& operator++() {
        --count_;
        return *this;
    }
    Generator<T> operator++(int) {
        Generator<T> tmp(*this);
        ++*this;
        return tmp;
    }
    bool operator==(Generator<T> const &other) const {
        return count_ == other.count_;
    }
    bool operator!=(Generator<T> const &other) const {
        return !(*this == other);
    }
};
Example generator function
(again, it's trivial to replace the lambda with an out-of-line function for C++98, but this is less typing)
#include <random>

Generator<int> begin_random_integers(int n) {
    static std::minstd_rand prng;
    static std::uniform_int_distribution<int> pdf;

    Generator<int> rv(n, []() { return pdf(prng); });
    return rv;
}

Generator<int> end_random_integers() {
    return Generator<int>();
}
Example use
#include <vector>
#include <algorithm>
#include <iostream>

int main()
{
    using namespace std;

    vector<int> out;
    cout << "copy 5 random ints into a vector\n";
    copy(begin_random_integers(5), end_random_integers(),
         back_inserter(out));
    copy(out.begin(), out.end(),
         ostream_iterator<int>(cout, ", "));

    cout << "\n" "print 2 random ints straight from generator\n";
    copy(begin_random_integers(2), end_random_integers(),
         ostream_iterator<int>(cout, ", "));

    cout << "\n" "reuse vector storage for 3 new ints\n";
    out.clear();
    copy(begin_random_integers(3), end_random_integers(),
         back_inserter(out));
    copy(out.begin(), out.end(),
         ostream_iterator<int>(cout, ", "));
}
Return a vector<int>; it will not be copied, it will be moved.
In C++11 the right answer is to return the std::vector<int> by value, ensuring that it is either explicitly or implicitly moved. (Prefer the implicit move, because an explicit move can block some optimizations.)
Amusingly, if you are concerned about reusing the buffer, the easiest way is to throw in an optional parameter that takes a std::vector<int> by value like this:
std::vector<int> get_stuff(int how_many, std::vector<int> retval = std::vector<int>()) {
    // blah blah
    return retval;
}
and, if you have a preallocated buffer of the right size, just std::move it into the get_stuff function and it will be used. If you don't have a preallocated buffer of the right size, don't pass a std::vector in.
Live example: http://ideone.com/quqnMQ
I'm uncertain if this will block NRVO/RVO, but there isn't a fundamental reason why it should, and moving a std::vector is cheap enough that you probably won't care if it does block NRVO/RVO anyhow.
However, you might not actually want to return a std::vector<int> - possibly you just want to iterate over the elements in question.
In that case, there is an easy way and a hard way.
The easy way is to expose a for_each_element( Lambda ) method:
#include <iostream>

struct Foo {
    int get_element(int i) const { return i*2+1; }

    template<typename Lambda>
    void for_each_element(int up_to, Lambda&& f) {
        for (int i = 0; i < up_to; ++i) {
            f(get_element(i));
        }
    }
};

int main() {
    Foo foo;
    foo.for_each_element(7, [&](int e){
        std::cout << e << "\n";
    });
}
and possibly use a std::function if you must hide the implementation of the for_each.
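A sketch of that std::function variant follows; it has the same shape as the Foo above, but the member is no longer a template, so its definition could live in a .cpp file at the cost of a type-erased call per element:

#include <functional>
#include <iostream>

struct Foo {
    int get_element(int i) const { return i*2+1; }

    // Non-template: the callback is type-erased behind std::function.
    void for_each_element(int up_to, const std::function<void(int)>& f) const {
        for (int i = 0; i < up_to; ++i)
            f(get_element(i));
    }
};

int main() {
    Foo foo;
    foo.for_each_element(7, [](int e){ std::cout << e << "\n"; });
}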
The hard way would be to return a generator or a pair of iterators that generate the elements in question.
Both of these avoid the pointless allocation of the buffer when you only want to deal with the elements one at a time, and if generating the values in question is expensive (it might require traversing memory, for example), you only pay for the values you actually consume.
In C++98 I would take a vector& and clear() it.
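For completeness, a rough C++98-style sketch of that convention; fill_values is a hypothetical name, and clearing inside the function removes the ambiguity about what to do with a non-empty vector:

#include <vector>

void fill_values(std::vector<int>& out, int how_many)
{
    out.clear();
    out.reserve(how_many);
    for (int i = 0; i < how_many; ++i)
        out.push_back(i);   // stand-in for the real value generation
}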

C++ map<std::string> vs map<char *> performance (I know, "again?")

I was using a map with a std::string key and while everything was working fine I wasn't getting the performance I expected. I searched for places to optimize and improved things only a little and that's when a colleague said, "that string key is going to be slow."
I read dozens of questions and they consistently say:
"don't use a char * as a key"
"std::string keys are never your bottleneck"
"the performance difference between a char * and a
std::string is a myth."
I reluctantly tried a char * key and there was a difference, a big difference.
I boiled the problem down to a simple example:
#include <stdio.h>
#include <stdlib.h>
#include <map>

#ifdef USE_STRING
#include <string>
typedef std::map<std::string, int> Map;
#else
#include <string.h>
struct char_cmp {
    bool operator () (const char *a, const char *b) const
    {
        return strcmp(a,b) < 0;
    }
};
typedef std::map<const char *, int, char_cmp> Map;
#endif

Map m;

bool test(const char *s)
{
    Map::iterator it = m.find(s);
    return it != m.end();
}

int main(int argc, char *argv[])
{
    m.insert( Map::value_type("hello", 42) );
    const int lcount = atoi(argv[1]);
    for (int i=0 ; i<lcount ; i++) test("hello");
}
First the std::string version:
$ g++ -O3 -o test test.cpp -DUSE_STRING
$ time ./test 20000000
real 0m1.893s
Next the 'char *' version:
$ g++ -O3 -o test test.cpp
$ time ./test 20000000
real 0m0.465s
That's a pretty big performance difference and about the same difference I see in my larger program.
Using a char * key is a pain to handle freeing the key and just doesn't feel right. C++ experts what am I missing? Any thoughts or suggestions?
You are using a const char * as a lookup key for find(). For the map containing const char* this is the correct type that find expects and the lookup can be done directly.
The map containing std::string expects the parameter of find() to be a std::string, so in this case the const char* first has to be converted to a std::string. This is probably the difference you are seeing.
As sth noted, the issue is one of specification of the associative containers (sets and maps): their member search methods always force a conversion to the key_type, even if an operator< exists that could compare your lookup key against the keys in the map despite their different types.
On the other hand, the functions in <algorithm> do not suffer from this, for example lower_bound is defined as:
template< class ForwardIt, class T >
ForwardIt lower_bound( ForwardIt first, ForwardIt last, const T& value );
template< class ForwardIt, class T, class Compare >
ForwardIt lower_bound( ForwardIt first, ForwardIt last, const T& value, Compare comp );
So, an alternative could be:
std::vector< std::pair< std::string, int > >
And then you could do:
std::lower_bound(vec.begin(), vec.end(), std::make_pair("hello", 0), CompareFirst{})
Where CompareFirst is defined as:
struct CompareFirst {
    template <typename T, typename U>
    bool operator()(T const& t, U const& u) const { return t.first < u.first; }
};
Or even build a completely custom comparator (but it's a bit harder).
A vector of pairs is generally more efficient under read-heavy loads, so it's really suited to storing a configuration, for example.
I do advise providing methods that wrap the accesses; lower_bound is pretty low-level.
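For example, a small (hypothetical) wrapper around the sorted vector-of-pairs lookup could look like this, reusing the CompareFirst above and assuming the vector is sorted by key once after it is filled:

#include <algorithm>
#include <string>
#include <utility>
#include <vector>

typedef std::vector< std::pair<std::string, int> > Config;

// Returns a pointer to the value for `key`, or 0 if it is absent.
const int* find_config(const Config& cfg, const char* key)
{
    Config::const_iterator it = std::lower_bound(
        cfg.begin(), cfg.end(), std::make_pair(key, 0), CompareFirst());
    if (it != cfg.end() && it->first == key)
        return &it->second;
    return 0;
}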
If you're in C++11, the copy constructor is not called unless the string is changed. Because std::string is a C++ construct, at least one dereference is needed to get at the string data.
My guess would be that the time is taken up by the extra dereference (which, done 10000 times, is costly), and std::string is likely doing the appropriate null-pointer checks, which again eat up cycles.
Store the std::string as a pointer and you lose the copy-constructor overhead,
but then you have to remember to handle the deletes.
The reason std::string is slow is that it constructs itself: it calls the copy constructor, and at the end calls delete. If you create the string on the heap you lose the copy construction.
After compilation the two "hello" string literals will have the same memory address. In the char * case you use this memory address as the key.
In the string case every "hello" will be converted to a different object. This is a small (really, really small) part of your performance difference.
A bigger part can be that, since all the "hello"s you are using have the same memory address, strcmp will always get two equal char pointers, and I'm quite sure that it checks for this case early :) So it will never really iterate over all the characters, but the std::string comparison will.
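One hedged way to test that hypothesis in the benchmark above: look up through a copy of the string stored at a different address, so the pointers handed to strcmp can never be identical. In main(), the lookup loop would become something like:

// A separate array, so a different address than the literal stored in the map.
char lookup[] = "hello";
for (int i = 0; i < lcount; i++) test(lookup);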
One solution to this is to use a custom key class that acts as a cross between a const char * and a std::string, but has a boolean to tell at run time whether it is "owning" or "non-owning". That way you can insert a key into the map which owns its data (and will free it on destruction), and then compare with a key that does not own its data. (This is a similar concept to the Rust Cow<'a, str> type.)
The example below also inherits from Boost's string_ref to avoid having to re-implement hash functions etc.
NOTE: this has the dangerous effect that if you accidentally insert into the map with the non-owning version and the string you are pointing at goes out of scope, the key will point at already-freed memory. The non-owning version should only be used for lookups.
#include <iostream>
#include <map>
#include <cstdlib>   // malloc/free
#include <cstring>
#include <boost/utility/string_ref.hpp>

class MaybeOwned : public boost::string_ref {
public:
    // owning constructor, takes a std::string and copies the data;
    // deletes its copy on destruction
    MaybeOwned(const std::string& string):
        boost::string_ref(
            (char *)malloc(string.size() * sizeof(char)),
            string.size()
        ),
        owned(true)
    {
        memcpy((void *)data(), (void *)string.data(), string.size());
    }

    // non-owning constructor, takes a string ref and points to the same data;
    // does not delete its data on destruction
    MaybeOwned(boost::string_ref string):
        boost::string_ref(string),
        owned(false)
    {
    }

    // non-owning constructor, takes a C string and points to the same data;
    // does not delete its data on destruction
    MaybeOwned(const char * string):
        boost::string_ref(string),
        owned(false)
    {
    }

    // move constructor, tells the source that it no longer owns the data (if it did)
    // to avoid a double free
    MaybeOwned(MaybeOwned&& other):
        boost::string_ref(other),
        owned(other.owned)
    {
        other.owned = false;
    }

    // I was too lazy to write a proper copy constructor
    // (it would need to malloc and memcpy again if it owned the data)
    MaybeOwned(const MaybeOwned& other) = delete;

    // free owned data if it has any
    ~MaybeOwned() {
        if (owned) {
            free((void *)data());
        }
    }

private:
    bool owned;
};

int main()
{
    std::map<MaybeOwned, std::string> map;
    map.emplace(std::string("key"), "value");
    map["key"] += " here";

    std::cout << map["key"] << "\n";
}

Sequence iterator? Isn't there one in boost?

From time to time I am feeling the need for a certain kind of iterator (for which I can't make up a good name except the one prefixed to the title of this question).
Suppose we have a function (or function object) that maps an integer to type T. That is, we have a definition of a mathematical sequence, but we don't actually have it stored in memory. I want to make an iterator out of it. The iterator class would look something like this:
template <class F, class T>
class sequence_iterator : public std::iterator<...>
{
    int i;
    F f;
public:
    sequence_iterator(F f, int i = 0) : f(f), i(i) {}

    // operators ==, ++, +, -, etc. will compare, increment, etc. the value of i.

    T operator*() const
    {
        return f(i);
    }
};

template <class T, class F>
sequence_iterator<F, T> make_sequence_iterator(F f, int i)
{
    return sequence_iterator<F, T>(f, i);
}
Maybe I am being naive, but I personally feel that this iterator would be very useful. For example, suppose I have a function that checks whether a number is prime or not, and I want to count the number of primes in the interval [a,b]. I'd do this:
int identity(int i)
{
    return i;
}

count_if(make_sequence_iterator<int>(identity, a), make_sequence_iterator<int>(identity, b), isPrime);
Since I have discovered something that would be useful (at least IMHO) I am definitely positive that it exists in boost or the standard library. I just can't find it. So, is there anything like this in boost?. In the very unlikely event that there actually isn't, then I am going to write one - and in this case I'd like to know your opinion whether or not should I make the iterator_category random_access_iterator_tag. My concern is that this isn't a real RAI, because operator* doesn't return a reference.
Thanks in advance for any help.
boost::counting_iterator and boost::transform_iterator should do the trick:
template <typename I, typename F>
boost::transform_iterator<F, boost::counting_iterator<I>>
make_sequence_iterator(I i, F f)
{
    return boost::make_transform_iterator(boost::counting_iterator<I>(i), f);
}
Usage:
std::copy(make_sequence_iterator(0, f), make_sequence_iterator(n, f), out);
I would call this an integer mapping iterator, since it maps a function over a subsequence of the integers. And no, I've never encountered this in Boost or in the STL. I'm not sure why that is, since your idea is very similar to the concept of stream iterators, which also generate elements by calling functions.
Whether you want random access iteration is up to you. I'd try building a forward or bidirectional iterator first, since (e.g.) repeated binary searches over a sequence of integers may be faster if they're generated and stored in one go.
Does boost::transform_iterator fill your needs? There are several useful iterator adaptors in Boost; see the Boost.Iterator documentation.
I think boost::counting_iterator is what you are looking for, or at least comes the closest. Is there something you are looking for that it doesn't provide? One could do, for example:
std::count_if(boost::counting_iterator<int>(0),
              boost::counting_iterator<int>(10),
              is_prime);   // or whatever ...
In short, it is an iterator over a lazy sequence of consecutive values.
Boost.Utility contains a generator iterator adaptor. An example from the documentation:
#include <iostream>
#include <boost/generator_iterator.hpp>

class my_generator
{
public:
    typedef int result_type;
    my_generator() : state(0) { }
    int operator()() { return ++state; }
private:
    int state;
};

int main()
{
    my_generator gen;
    boost::generator_iterator_generator<my_generator>::type it =
        boost::make_generator_iterator(gen);
    for (int i = 0; i < 10; ++i, ++it)
        std::cout << *it << std::endl;
}