std::map key no match for operator< [duplicate] - c++

This question already has answers here: How can I use std::maps with user-defined types as key? (8 answers). Closed 8 years ago.
I'm having quite a hard time trying to debug my little piece of code:
std::map<glm::ivec3,int> myMap;
glm::ivec3 myVec(3, 3, 3);
myMap.find(myVec);
I get the following error:
c:\program files (x86)\codeblocks\mingw\bin\..\lib\gcc\mingw32\4.7.1\include\c++\bits\stl_function.h|237|error: no match for 'operator<' in '__x < __y'
Does that mean I can't check whether a glm::ivec3 is smaller than another?
I think that because a std::map is ordered, the compiler wants to check which key comes first. I tried making the key a pointer instead, and it worked.
Isn't there a way to keep the key a value instead of a pointer? This makes me ask another question: how can you compare, with a less-than operation, something that cannot be compared or that is slow to compare?
Thank you! :)

You can implement a comparison function:
bool operator<(const glm::ivec3& lhs, const glm::ivec3& rhs)
{
    return lhs.x < rhs.x ||
           (lhs.x == rhs.x && (lhs.y < rhs.y ||
                               (lhs.y == rhs.y && lhs.z < rhs.z)));
}
(Change .x, .y, .z to [0], [1], [2] / .first(), .second(), .third() etc. as necessary.)
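A more concise way to get the same lexicographic ordering, sketched on the assumption that glm::ivec3 exposes .x/.y/.z members, is std::tie (C++11):

#include <tuple> // std::tie

bool operator<(const glm::ivec3& lhs, const glm::ivec3& rhs)
{
    // std::tuple compares lexicographically, element by element
    return std::tie(lhs.x, lhs.y, lhs.z) < std::tie(rhs.x, rhs.y, rhs.z);
}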
how can you compare, with a less-than operation, something that cannot be compared or that is slow to compare?
Your pointer hack isn't uncommon, but it isn't always useful and has to be done with care: specifically, if someone comes along to search the map and wants to find an existing element, they need a pointer to the same object that was stored in the map earlier. Alternatively, choose some arbitrary ordering, even if it makes no particular sense in the real world, as long as it's consistent.
If a comparison is just slow, you can potentially do things like compare a hash value first then fall back on the slower comparison for rare collisions (or if your hash is long/strong enough, return false on the assumption they're equal).
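A minimal sketch of that hash-first idea, where ExpensiveType and slow_less() are hypothetical placeholders for your real type and its costly comparison:

#include <cstddef> // std::size_t

struct ExpensiveType { /* placeholder for your real type */ };
bool slow_less(const ExpensiveType& a, const ExpensiveType& b); // your costly compare

struct Wrapped {
    std::size_t hash;    // precomputed once, e.g. at construction
    ExpensiveType value;
};

bool operator<(const Wrapped& lhs, const Wrapped& rhs)
{
    if (lhs.hash != rhs.hash)
        return lhs.hash < rhs.hash;         // cheap integer compare
    return slow_less(lhs.value, rhs.value); // rare fallback on collision
}

The resulting order is arbitrary (it follows the hash values), but it is consistent, which is all a map requires.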

I'm not familiar with glm, but mathematically this doesn't surprise me, as vectors don't have a natural ordering; i.e., what would u < v mean when the two can be at any location in 3D space? When you used pointers, you got address ordering, which often isn't a good idea since the addresses have nothing to do with the "values" of the keys. You can't really order on magnitude either, since two completely different vectors can compare equal. If it is important to have an order, you could order them lexicographically, comparing one dimension, then the next, etc., but you might want to consider an unordered_map (a hash table) unless your problem actually needs an ordering.
Here is a link that discusses the Java hashCode() function, with some discussion of various approaches to hashing compound objects.
http://www.javamex.com/tutorials/collections/hash_function_guidelines.shtml
For a class that has three ints as its state, I'd probably do (((x*p)+y)*p)+z, where p is a small prime, say 31. (There are many variations on this, and much more complex hash functions, depending on the structure of the data, etc.)
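As a C++ sketch, assuming GLM's integer vector with .x/.y/.z members, that scheme plus an equality functor is enough to use glm::ivec3 as an unordered_map key:

#include <unordered_map>
#include <glm/glm.hpp>

struct IVec3Hash {
    std::size_t operator()(const glm::ivec3& v) const {
        const std::size_t p = 31; // small prime
        return ((static_cast<std::size_t>(v.x) * p
               + static_cast<std::size_t>(v.y)) * p)
               + static_cast<std::size_t>(v.z);
    }
};

struct IVec3Eq {
    bool operator()(const glm::ivec3& a, const glm::ivec3& b) const {
        return a.x == b.x && a.y == b.y && a.z == b.z;
    }
};

std::unordered_map<glm::ivec3, int, IVec3Hash, IVec3Eq> myMap;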
Here are some more links from SO on C++ hashing.
unordered_map hash function c++
C++ unordered_map using a custom class type as the key

Related

Does priority_queue provide a predictable order? [duplicate]

This question already has an answer here: c++ ordered(stable) priority queue (1 answer). Closed 5 years ago.
Hi wise guys,
My question is this: I need to use priority_queue from std. Everything works fine until there are ties between my records; then the order is no longer consistent between a binary compiled with clang and one compiled with gcc.
my comparator function is simple:
bool comparator(const max_pair_t &lhs, const max_pair_t &rhs) {
    return lhs.pval < rhs.pval;
}
that's it.
Is there a way to resolve this problem?
PS: I printed out all the records from the two binary executables and compared the order side by side; the order is different, but the tied records stay in the same neighborhood.
std::priority_queue gives no guarantees about sort stability. If you need sort stability, you'll have to provide it yourself, e.g. by storing a progressively increasing or decreasing value (doesn't really matter which, it just changes the direction of the fallback comparison) that is used when the primary comparison key is equal, and stripping it off when you pop off the queue.
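A minimal sketch of that approach, assuming the question's max_pair_t holds a numeric pval:

#include <cstdint>
#include <queue>
#include <vector>

struct max_pair_t { double pval; }; // assumed shape of the question's type

struct Entry {
    max_pair_t rec;
    std::uint64_t seq; // insertion counter, breaks ties deterministically
};

struct EntryLess {
    bool operator()(const Entry& lhs, const Entry& rhs) const {
        if (lhs.rec.pval != rhs.rec.pval)
            return lhs.rec.pval < rhs.rec.pval;
        return lhs.seq > rhs.seq; // earlier insertions pop first (FIFO)
    }
};

std::priority_queue<Entry, std::vector<Entry>, EntryLess> pq;
std::uint64_t counter = 0;
// push: pq.push(Entry{rec, counter++});
// pop:  max_pair_t r = pq.top().rec; pq.pop();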

Fastest way to remove duplicates from std::list of a custom type

If this has been asked before please forgive me, I could not find it.
I have a custom type for which I can implement (fuzzy) equality but no < operator that is transitive.
The comparison is costly but I have not many elements.
I need to sort out polygons that are almost identical (they overlap to a large fraction). Since ordering with < is impossible due to the lack of a transitive implementation, I am using a std::list like the following:
typedef std::list<Polygon> PolyList;
PolyList purged(rawList);
for (PolyList::iterator iter = purged.begin(); iter != purged.end(); ++iter) {
    for (PolyList::iterator toRemove = std::find(boost::next(iter), purged.end(), *iter);
         toRemove != purged.end(); ) {
        PolyList::iterator next = purged.erase(toRemove);
        toRemove = std::find(next, purged.end(), *iter);
    }
}
The complexity is n*n/2, which is unavoidable in my opinion, and while the algorithm works fine, it is still cumbersome to read and write. I am almost sure there is a standard algorithm for it that I just don't know, or at least something as fast but neater to type. As I said, sorting is not an option due to the fuzziness of the data, so no std::set, std::unique or std::sort.
Many thanks in advance for helping me out
You're probably not going to find an answer in the standard library, since your "duplicates" sound like they're not transitive either. That's to say that a==b && b==c does not imply a==c.
For that reason alone, any algorithm has to compare all pairs, which gives you N*(N-1)/2 comparisons (assuming your equality is symmetric, i.e. a==b does imply b==a).
I doubt there is a 'standard algorithm' for achieving what you want, but if you define a distance metric describing the difference between two polygons, then you can select (any) one polygon (call it the base polygon) and sort all the others on the distance from that polygon. Only polygons whose distance from the base are similar may be similar to each other.
Now you only need to consider groups of polygons with similar distances, when deciding which to delete. Without proving it - and I suspect the proof may be involved - I believe this is N log N.
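A rough sketch of that idea, where Polygon comes from the question and distance() and the tolerance are assumptions standing in for your real metric:

#include <algorithm>
#include <vector>

std::vector<Polygon> polys(rawList.begin(), rawList.end());
const Polygon base = polys.front(); // any polygon will do

// N log N: order by distance from the arbitrary base polygon
std::sort(polys.begin(), polys.end(),
          [&](const Polygon& a, const Polygon& b) {
              return distance(a, base) < distance(b, base);
          });

// Near-duplicates must have nearly equal distances from base, so only
// neighboring elements (within the fuzziness tolerance) need the
// expensive pairwise comparison, not all N*(N-1)/2 pairs.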

map inside map ( Map as key)

I have created a map in the following way:
std::map<std::string, int> first;
and, as per my requirement, I have created a second map that uses the first map as its key:
std::map<std::map<std::string, int>, int> second;
So first is the key for the second map. I have inserted data into both maps:
first.insert(std::make_pair("Test1", 1));
second.insert(std::make_pair(first, 2));
First, I just want to know whether this is a correct way to implement it, or should I use another container?
I am also facing one issue with this code (not a compilation issue). If I get data from the database in the following way, then the value does not get inserted into the second map:
first.insert(std::make_pair("Test1", 2));
second.insert(std::make_pair(first, 1));
But I believe it should be inserted into the map, as ("Test1", 1) and ("Test1", 2) are different keys for the second map.
Why would you want to use a map as a key type?
Keys should be small, since you have no guarantee how many copies of them the library will make. Using a (potentially large) std::map as a key will kill your application's performance.
First of all, for "STL", let me quote !stl from ##c++ at freenode:
`STL' is sometimes used to mean: (1) C++ standard library; (2) the library Stepanov designed at HP; (3) the parts of [1] based on [2]; (4) specific vendor implementations of either [1], [2], or [3]; (5) the underlying principles of [2]. As such, the term is highly ambiguous, and must be used with extreme caution. If you meant [1] and insist on abbreviating, "stdlib" is a far better choice.
Next: of course you can use a map as a key. In fact std::map provides a lexicographic operator<, so std::less works for it as long as the inner key and mapped types are themselves comparable. But remember, the comparator doesn't check whether parameters are equal; it checks whether the first is less than the second, because it's easier to model every possible relation using "less than":
a == b <=> !(a < b) && !(b < a)
And now, more on-topic:
From what you have written, I don't quite get the point of having map<map, anything else>. Could you provide a test case? Then I will be able to give you a complete answer.
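For what it's worth, here is a minimal sketch showing that the pattern does compile, since std::map's lexicographic operator< makes std::less usable whenever the inner key and mapped types are comparable:

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> first;
    first.insert(std::make_pair(std::string("Test1"), 1));

    std::map<std::map<std::string, int>, int> second;
    second.insert(std::make_pair(first, 2));

    std::cout << second.size() << '\n'; // prints 1
}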

GLM + STL: operator == missing

I'm trying to use GLM vector classes in STL containers. No big deal as long as I don't try to use <algorithm>; most algorithms rely on the == operator, which is not implemented for the GLM classes.
Does anyone know an easy way to work around this, without (re-)implementing the STL algorithms? :(
GLM is a great math library implementing GLSL functions in C++.
Update
I just found out that GLM actually implements comparison operators in an extension (here). But how do I use them in STL algorithms?
Update 2
This question has been superseded by this one: how to use glm's operator== in stl algorithms?
Many STL algorithms accept a functor for object comparison (of course, you need to exercise special care when comparing two vectors containing floating point values for equality).
Example:
To sort a std::list<glm::vec3> (it's up to you whether sorting vectors that way makes any practical sense), note that std::sort requires random-access iterators, which std::list doesn't provide, so use the list's own sort member:
myVec3List.sort(MyVec3ComparisonFunc);
with a comparator that imposes a strict weak ordering, e.g. a lexicographic one:
bool MyVec3ComparisonFunc(const glm::vec3 &vecA, const glm::vec3 &vecB)
{
    // compare component by component; simply &&-ing three "less than"
    // tests together would not be a valid strict weak ordering
    if (vecA[0] != vecB[0]) return vecA[0] < vecB[0];
    if (vecA[1] != vecB[1]) return vecA[1] < vecB[1];
    return vecA[2] < vecB[2];
}
So, thankfully, there is no need to modify GLM or even reinvent the wheel.
You should be able to implement an operator== as a stand-alone function:
// (Actually more Greg S's code than mine.....)
#include <cmath> // for std::fabs

bool operator==(const glm::vec3 &vecA, const glm::vec3 &vecB)
{
    const double epsilon = 0.0001; // choose something appropriate
    return std::fabs(vecA[0] - vecB[0]) < epsilon
        && std::fabs(vecA[1] - vecB[1]) < epsilon
        && std::fabs(vecA[2] - vecB[2]) < epsilon;
}
James Curran and Greg S have already shown you the two major approaches to solving the problem.
define a functor to be used explicitly in the STL algorithms that need it, or
define the actual operators == and < which STL algorithms use if no functor is specified.
Both solutions are perfectly fine and idiomatic, but a thing to remember when defining operators is that they effectively extend the type. Once you've defined operator< for a glm::vec3, these vectors are extended to define a "less than" relationship, which means that any time someone wants to test if one vector is "less than" another, they'll use your operator. So operators should only be used if they're universally applicable. If this is always the one and only way to define a less than relationship between 3D vectors, go ahead and make it an operator.
The problem is, it probably isn't. We could order vectors in several different ways, and none of them is obviously the "right one". For example, you might order vectors by length. Or by magnitude of the x component specifically, ignoring the y and z ones. Or you could define some relationship using all three components (say, if a.x == b.x, check the y coordinates. If those are equal, check the z coordinates)
There is no obvious way to define whether one vector is "less than" another, so an operator is probably a bad way to go.
For equality, an operator might work better. We do have a single definition of equality for vectors: two vectors are equal if every component is equal.
The only problem here is that the vectors consist of floating-point values, so you may want to do some kind of epsilon comparison, making them equal if all members are nearly equal. But then you may also want the epsilon to be variable, and that can't be done in operator==, as it only takes two parameters.
Of course, operator== could just use some kind of default epsilon value, and functors could be defined for comparisons with variable epsilons.
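A sketch of such a functor, with the default epsilon being an arbitrary choice:

#include <cmath>
#include <glm/glm.hpp>

struct Vec3NearlyEqual {
    float epsilon;
    explicit Vec3NearlyEqual(float eps = 0.0001f) : epsilon(eps) {}
    bool operator()(const glm::vec3& a, const glm::vec3& b) const {
        return std::fabs(a.x - b.x) < epsilon
            && std::fabs(a.y - b.y) < epsilon
            && std::fabs(a.z - b.z) < epsilon;
    }
};

// e.g. std::equal(v1.begin(), v1.end(), v2.begin(), Vec3NearlyEqual(0.01f));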
There's no clear cut answer on which to prefer. Both techniques are valid. Just pick the one that best fits your needs.

How can I increase the performance in a map lookup with key type std::string?

I'm using a std::map (VC++ implementation) and it's a little slow for lookups via the map's find method.
The key type is std::string.
Can I increase the performance of this std::map lookup via a custom key-compare override for the map? For example, maybe std::string's operator< doesn't do a simple string::size() comparison before comparing its data?
Any other ideas to speed up the compare?
In my situation the map will always contain < 15 elements, but it is being queried non-stop, and performance is critical. Maybe there is a better data structure that I could use that would be faster?
Update: The map contains file paths.
Update2: The map's elements are changing often.
First, turn off all the profiling and DEBUG switches. These can slow down STL immensely.
If that's not it, part of the problem may be that your strings are identical for the first 80-90% of the string. This isn't bad for map, necessarily, but it is for string comparisons. If this is the case, your search can take much longer.
For example, in this code find() will likely result in a couple of string compares; each returns after comparing just the first character, except against "david", where the first three characters must be checked. So at most five characters are checked per call.
map<string,int> names;
names["larry"] = 1;
names["david"] = 2;
names["juanita"] = 3;
map<string,int>::iterator iter = names.find("daniel");
On the other hand, in the following code, find() will likely check 135+ characters:
map<string,int> names;
names["/usr/local/lib/fancy-pants/share/etc/doc/foobar/longpath/yadda/yadda/wilma"] = 1;
names["/usr/local/lib/fancy-pants/share/etc/doc/foobar/longpath/yadda/yadda/fred"] = 2;
names["/usr/local/lib/fancy-pants/share/etc/doc/foobar/longpath/yadda/yadda/barney"] = 3;
map<string,int>::iterator iter = names.find("/usr/local/lib/fancy-pants/share/etc/doc/foobar/longpath/yadda/yadda/betty");
That's because the string comparisons have to search deeper to find a match since the beginning of each string is the same.
Using size() in your comparison for equality won't help you much here since your data set is so small. A std::map is kept sorted so its elements can be searched with a binary search. Each call to find should result in less than 5 string comparisons for a miss, and an average of 2 comparisons for a hit. But it does depend on your data. If most of your path strings are of different lengths, then a size check like Motti describes could help a lot.
Something to consider when thinking about alternative algorithms is how many "hits" you get. Are most of your find() calls returning end() or a hit? If most of your find()s return end() (misses), then you are searching the entire map every time (2 log n string compares).
Hash_map is a good idea; it should cut your search time in about half for hits; more for misses.
A custom algorithm may be called for because of the nature of path strings, especially if your data set has common ancestry like in the above code.
Another thing to consider is how you get your search strings. If you are reusing them, it may help to encode them into something that is easier to compare. If you use them once and discard them, then this encoding step is probably too expensive.
I used something like a Huffman coding tree once (a long time ago) to optimize string searches. A binary string search tree like that may be more efficient in some cases, but it's pretty expensive for small sets like yours.
Finally, look into alternative std::map implementations. I've heard bad things about some of VC's stl code performance. The DEBUG library in particular is bad about checking you on every call. StlPort used to be a good alternative, but I haven't tried it in a few years. I've always loved Boost too.
As Even said, the operator used in a map is <, not ==.
If you don't care about the order of the strings in your map, you can pass the map a custom comparator that performs better than the regular less-than.
For example, if a lot of your strings have similar prefixes (but vary in length), you can sort by string length first (since string::length() is constant-time).
If you do so, beware of a common mistake:
struct comp {
    bool operator()(const std::string& lhs, const std::string& rhs)
    {
        if (lhs.length() < rhs.length())
            return true;
        return lhs < rhs;
    }
};
This operator does not implement a strict weak ordering, as it can report each of two strings as less than the other:
string a = "z";
string b = "aa";
Follow the logic and you'll see that comp(a, b) == true and comp(b, a) == true.
The correct implementation is:
struct comp {
    bool operator()(const std::string& lhs, const std::string& rhs)
    {
        if (lhs.length() != rhs.length())
            return lhs.length() < rhs.length();
        return lhs < rhs;
    }
};
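To plug it in, pass the functor as the map's third template parameter, e.g.:

std::map<std::string, int, comp> fastMap;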
The first thing is to try using a hash_map if that's possible - you are right that the standard string compare doesn't first check for size (since it compares lexicographically), but writing your own map code is something you'd be better off avoiding. From your question it sounds like you do not need to iterate over ranges; in that case map doesn't have anything hash_map doesn't.
It also depends on what sort of keys you have in your map. Are they typically very long? Also what does "a little slow" mean? If you have not profiled the code it's quite possible that it's a different part taking time.
Update: Hmm, the bottleneck in your program is a map::find, but the map always has less than 15 elements. This makes me suspect that the profile was somehow misleading, because a find on a map this small should not be slow at all. In fact, a map::find should be so fast that just the overhead of profiling could be more than the find call itself. I have to ask again: are you sure this is really the bottleneck in your program? You say the strings are paths, but you're not doing any sort of OS calls, file system access, or disk access in this loop? Any of those should be orders of magnitude slower than a map::find on a small map. Really, any way of getting a string should be slower than the map::find.
You can try using a sorted vector (here's one sample); this may turn out to be faster (you'll have to profile it to make sure, of course).
Reasons to think it'll be faster:
Fewer memory allocations and deallocations (the vector will expand to the maximal size used and then reuse freed memory).
Binary search with random access should be faster than tree traversal (especially due to data locality).
Reasons to think it'll be slower:
Deletions and additions will mean moving strings around in memory; since string's swap is efficient and the size of the data set is small, this may not be an issue.
std::map's comparator isn't std::equal_to, it's std::less; I'm not sure of the best way to short-circuit a < compare so that it is faster than the built-in one.
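A minimal sketch of the sorted-vector approach, using std::lower_bound for the binary search; the names here are made up for illustration:

#include <algorithm>
#include <string>
#include <utility>
#include <vector>

typedef std::pair<std::string, int> Entry;

bool entry_less(const Entry& lhs, const Entry& rhs)
{
    return lhs.first < rhs.first;
}

// table must be kept sorted by key (re-sort after bulk inserts)
int* find_value(std::vector<Entry>& table, const std::string& key)
{
    std::vector<Entry>::iterator it =
        std::lower_bound(table.begin(), table.end(),
                         Entry(key, 0), entry_less);
    if (it != table.end() && it->first == key)
        return &it->second;
    return 0; // not found
}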
If there are always < 15 elems, perhaps you could use a key besides std::string?
Motti has a good solution. However, I'm pretty sure that for your < 15 elements a map isn't the right way because its overhead will always be greater than that of a simple lookup table with an appropriate hashing scheme. In your case, it might even be enough to hash by length alone, and if that still produces collisions, use a linear search through all entries of the same length.
To establish if I'm right, a benchmark is of course required but I'm quite sure of its outcome.
You might consider pre-computing a hash for a string, and saving that in your map. Doing so gives the advantage of hash compares instead of string compares during the search through the std::map tree.
class HashedString
{
    unsigned m_hash;
    std::string m_string;
public:
    HashedString(const std::string& str)
        : m_hash(HashString(str))
        , m_string(str)
    {}
    // ... copy constructor and etc...
    unsigned GetHash() const { return m_hash; }
    const std::string& GetString() const { return m_string; }
};
This has the benefits of computing a hash of the string once, on construction. After this, you could implement a comparison function:
struct comp
{
    bool operator()(const HashedString& lhs, const HashedString& rhs)
    {
        if (lhs.GetHash() < rhs.GetHash()) return true;
        if (lhs.GetHash() > rhs.GetHash()) return false;
        return lhs.GetString() < rhs.GetString();
    }
};
Since hashes are now computed on HashedString construction, they are stored that way in the std::map, and so the compare can happen very quickly (an integer compare) in an astronomically high percentage of the time, falling back on standard string compares when the hashes are equal.
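A hypothetical usage sketch pairing the two pieces above:

std::map<HashedString, int, comp> lookup;
lookup.insert(std::make_pair(HashedString("/usr/local/lib/foo"), 1));
// find() now resolves almost every comparison with an integer compare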
Maybe you could reverse the strings prior to using them as keys in the map? That could help if the first few letters of each string are identical.
Here are some things you can consider:
0) Are you sure this is where the performance bottleneck is? Like the results from Quantify, Cachegrind, gprof or something like that? Because lookups on such a small map should be fairly fast...
1) You can override the functor used to compare the keys in std::map<>, there is a second template parameter to do that. I doubt you can do much better than operator<, however.
2) Are the contents of the map changing a lot? If not, and given the very small size of your map, maybe using a sorted vector and binary search could yield better results (for example because you can exploit memory locality better).
3) Are the elements known at compile time? You could use a perfect hash function to improve lookup times if that is the case. Search for gperf on the web.
4) Do you have a lot of lookups that fail to find anything? If so, maybe comparing with the first and last elements in the collection may eliminate many mismatches quicker than a full search every time.
These have been suggested already, but in more detail:
5) Since you have so few strings, maybe you could use a different key. For example, are your keys all the same size? Can you use a class containing a fixed-length array of characters? Can you convert your strings to numbers or some data structure with only numbers?
Depending on the usage, there are some other techniques you can use. For example, we had an application that needed to keep track of over a million different file paths. The problem was that thousands of objects needed to keep small maps of these file paths.
Since adding new file paths to the data set was an infrequent operation, when a path was added to the system a master map was searched. If the path was not found, it was added and a new sequenced integer (starting at 1) was returned. If the path already existed, the previously assigned integer was returned. Each map maintained by each object was then converted from a string-keyed map to an integer-keyed map. Not only did this greatly improve performance, it reduced memory usage by not having so many duplicate copies of the strings.
Sure, this is a very specific optimization. But when it comes to performance improvements, you often find yourself having to make tailored solutions to specific problems.
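A sketch of that interning scheme; the names are made up for illustration:

#include <map>
#include <string>

class PathInterner {
    std::map<std::string, int> ids_;
    int next_; // sequenced integer, starting at 1
public:
    PathInterner() : next_(1) {}
    int intern(const std::string& path) {
        std::map<std::string, int>::const_iterator it = ids_.find(path);
        if (it != ids_.end())
            return it->second; // previously assigned integer
        ids_.insert(std::make_pair(path, next_));
        return next_++;        // newly assigned integer
    }
};

// per-object maps then become std::map<int, Value> keyed on intern(path)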
And I hate strings :) Not only are they slow to compare, they can really trash your CPU caches in high-performance software.
Try std::tr1::unordered_map (found in the header <tr1/unordered_map>). This is a hash map, and, while it doesn't maintain a sorted order of elements, will likely be far faster than a regular map.
If your compiler doesn't support TR1, get a newer version. MSVC and gcc both support TR1, and I believe the newest versions of most other compilers also have support. Unfortunately, a lot of the library reference sites haven't been updated, so TR1 remains a largely-unknown piece of technology.
I hope C++0x isn't the same way.
EDIT: Note that the default hashing method for tr1::unordered_map is tr1::hash, which probably needs to be specialized to work on a UDT.
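As a sketch, shown with the modern std:: spelling (the tr1 version is analogous); FileKey is a made-up UDT:

#include <string>
#include <unordered_map>

struct FileKey {
    std::string path;
    bool operator==(const FileKey& other) const { return path == other.path; }
};

struct FileKeyHash {
    std::size_t operator()(const FileKey& k) const {
        return std::hash<std::string>()(k.path); // delegate to the string hash
    }
};

std::unordered_map<FileKey, int, FileKeyHash> table;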
Where you have long common substrings, a trie might be a better data structure than a map or a hash_map. I said "might", though - a hash_map already only traverses the key once per lookup, so should be fairly fast. I won't discuss it further since others already have.
You could also consider a splay tree if some keys are more frequently looked up than others, but of course this makes the worst-case lookup worse than a balanced tree, and lookups are mutating operations, which may matter to you if you're using e.g. a reader-writer lock.
If you care about the performance of lookups more than modifications, you might do better with an AVL tree than a red-black, which I think is what STL implementations generally use for map. An AVL tree is typically better balanced and so will on average require fewer comparisons per lookup, but the difference is marginal.
Finding an implementation of these that you're happy with might be an issue. A search on the Boost main page suggests they have a splay and AVL tree but not a trie.
You mentioned in a comment that you never have a lookup that fails to find anything. So you could in theory skip the final comparison, which in a tree of 15 < 2^4 elements could give you something like a 20-25% speedup without doing anything else. In fact, maybe more than that, since equal strings are the slowest to compare. Whether it's worth writing your own container just for this optimisation is another question.
You might also consider locality of reference - I don't know whether you could avoid the occasional page miss by allocating the keys and the nodes out of a small heap. If you only need about 15 entries at a time, then assuming a file name limit below 256 bytes you could ensure that everything accessed during a lookup fits into a single 4k page (apart from the key being looked up, of course). It may be that comparing the strings is insignificant compared with a couple of page loads. However, if this is your bottleneck there must be an enormous number of lookups going on, so I'd guess that everything is reasonably close to the CPU. Worth checking, maybe.
Another thought: if you are using pessimistic locking on a structure where there's a lot of contention (you said in a comment the program is massively multi-threaded) then regardless of what the profiler tells you (what code the CPU cycles are spent in), it might be costing you more than you think by effectively limiting you to 1 core. Try a reader-writer lock?
hash_map is not standard, try using unordered_map available in tr1 (which is available in boost if your tool chain doesn't already have it).
For small numbers of strings you might be better using vector, as map is typically implemented as a tree.
Why don't you use a hash table instead? boost::unordered_map could do, or you can roll your own solution and store the CRC of a string instead of the string itself. Or better yet, define named constants for the strings and use those for lookup, e.g.:
#define STRING_1 "STRING_1"