Union-Find leetcode question exceeding time limit - c++

I am solving this problem on LeetCode (https://leetcode.com/problems/sentence-similarity-ii/description/), which involves implementing the union-find algorithm to decide whether two sentences are similar, given a list of pairs of similar words. I implemented ranked union-find, where I keep track of the size of each subset and join the smaller subtree to the bigger one, but for some reason the code is still exceeding the time limit. Can someone point out what I am doing wrong? How can it be optimized further? I saw that other accepted solutions use the same ranked union-find algorithm.
Here is the code:
string root(map<string, string> dict, string element) {
if(dict[element] == element)
return element;
return root(dict, dict[element]);
}
bool areSentencesSimilarTwo(vector<string>& words1, vector<string>& words2, vector<pair<string, string>> pairs) {
if(words1.size() != words2.size()) return false;
std::map<string, string> dict;
std::map<string, int> sizes;
for(auto pair: pairs) {
if(dict.find(pair.first) == dict.end()) {
dict[pair.first] = pair.first;
sizes[pair.first] = 1;
}
if(dict.find(pair.second) == dict.end()) {
dict[pair.second] = pair.second;
sizes[pair.second] = 1;
}
auto firstRoot = root(dict, pair.first);
auto secondRoot = root(dict, pair.second);
if(sizes[firstRoot] < sizes[secondRoot]) {
dict[firstRoot] = secondRoot;
sizes[firstRoot] += sizes[secondRoot];
}
else {
dict[secondRoot] = firstRoot;
sizes[secondRoot] += sizes[firstRoot];
}
}
for(int i = 0; i < words1.size(); i++) {
if(words1[i] == words2[i]) {
continue;
}
else if(root(dict, words1[i]) != root(dict, words2[i])) {
return false;
}
}
return true;
}
Thanks!

Your union-find is broken with respect to complexity. Please read Wikipedia: Disjoint-set data structure.
For union-find to have its near-O(1) complexity, it has to employ path compression. For that, your root method has to:
Get dict by reference, so that it can modify it.
Apply path compression to all elements on the path, so that they point directly to the root.
Without path compression you get O(log N) complexity for root(), which could be OK. But even for that, you'd have to fix root() so that it gets dict by reference and not by value. Passing dict by value costs O(N) per call.
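A minimal sketch of what that change could look like (just the root() helper; the rest of your code would stay as it is):
string root(map<string, string>& dict, const string& element) {
    if (dict[element] == element)
        return element;
    // Path compression: re-parent this node directly onto the root.
    dict[element] = root(dict, dict[element]);
    return dict[element];
}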
The fact that dict is an std::map makes any query cost O(log N), instead of O(1). std::unordered_map costs O(1), but in practice for N < 1000, std::map is faster. Also, even if std::unordered_map is used, hashing a string costs O(len(str)).
If the data is big and performance is still slow, you may gain from working with integer indexes instead of strings, running the union-find over a vector<int>. This is error-prone, since you have to deal correctly with duplicate strings; a rough sketch is below.
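An illustration of that index-based variant (the names here are my own, not from the original code):
#include <string>
#include <unordered_map>
#include <vector>

struct DSU {
    std::vector<int> parent;
    int add() { parent.push_back((int)parent.size()); return (int)parent.size() - 1; }
    int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Map each distinct string to one index, so duplicate strings share a node.
int idOf(const std::string& s, std::unordered_map<std::string, int>& ids, DSU& dsu) {
    auto it = ids.find(s);
    if (it != ids.end()) return it->second;
    int id = dsu.add();
    ids.emplace(s, id);
    return id;
}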

Related

fast way to compare two vector containing strings

I have a vector of strings that I pass to my function, and I need to compare it with some pre-defined values. What is the fastest way to do this?
The following code snippet shows what I need to do (this is how I am doing it now, but what is the fastest way?):
bool compare(vector<string> input1,vector<string> input2)
{
if(input1.size() != input2.size())
{
return false;
}
for(int i=0;i<input1.size();i++)
{
if(input1[i] != input2[i])
{
return false;
}
}
return true;
}
int compare(vector<string> inputData)
{
if (compare(inputData,{"Apple","Orange","three"}))
{
return 129;
}
if (compare(inputData,{"A","B","CCC"}))
{
return 189;
}
if (compare(inputData,{"s","O","quick"}))
{
return 126;
}
if (compare(inputData,{"Apple","O123","three","four","five","six"}))
{
return 876;
}
if (compare(inputData,{"Apple","iuyt","asde","qwe","asdr"}))
{
return 234;
}
return 0;
}
Edit1
Can I compare two vectors like this:
if(inputData=={"Apple","Orange","three"})
{
return 129;
}
You are asking what is the fastest way to do this, and you are indicating that you are comparing against a set of fixed and known strings. I would argue that you would probably have to implement it as a kind of state machine. Not that this is very beautiful...
if (inputData.size() != 3) return 0;
if (inputData[0].size() == 0) return 0;
const char inputData_0_0 = inputData[0][0];
if (inputData_0_0 == 'A') {
// possibly "Apple" or "A"
...
} else if (inputData_0_0 == 's') {
// possibly "s"
...
} else {
return 0;
}
The weakness of your approach is its linearity. You want a binary search for teh speedz.
By utilising the sortedness of a map, the binaryness of finding in one, and the fact that equivalence between vectors is already defined for you (no need for that first compare function!), you can do this quite easily:
std::map<std::vector<std::string>, int> lookup{
{{"Apple","Orange","three"}, 129},
{{"A","B","CCC"}, 189},
// ...
};
int compare(const std::vector<std::string>& inputData)
{
auto it = lookup.find(inputData);
if (it != lookup.end())
return it->second;
else
return 0;
}
Note also the reference passing for extra teh speedz.
(I haven't tested this for exact syntax-correctness, but you get the idea.)
However! As always, we need to be context-aware in our designs. This sort of approach is more useful at larger scale. At the moment you only have a few options, so the addition of some dynamic allocation and sorting and all that jazz may actually slow things down. Ultimately, you will want to take my solution, and your solution, and measure the results for typical inputs and whatnot.
Once you've done that, if you still need more speed for some reason, consider looking at ways to reduce the dynamic allocations inherent in both the vectors and the strings themselves.
To answer your follow-up question: almost; you do need to specify the type:
// new code is here
// ||||||||||||||||||||||||
if (inputData == std::vector<std::string>{"Apple","Orange","three"})
{
return 129;
}
As explored above, though, let std::map::find do this for you instead. It's better at it.
One key to efficiency is eliminating needless allocation.
Thus, it becomes:
bool compare(
std::vector<std::string> const& a,
std::initializer_list<const char*> b
) noexcept {
return std::equal(begin(a), end(a), begin(b), end(b));
}
Alternatively, make them static const, and accept the slight overhead.
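A quick sketch of that static const variant, reusing the values from the question:
int compare(const std::vector<std::string>& inputData)
{
    // Built once, on the first call, so later calls do not reallocate.
    static const std::vector<std::string> v1{"Apple", "Orange", "three"};
    static const std::vector<std::string> v2{"A", "B", "CCC"};
    if (inputData == v1) return 129;
    if (inputData == v2) return 189;
    // ... and so on for the remaining cases
    return 0;
}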
As an aside, C++17 std::string_view (pre-standard versions exist in Boost) and C++20 std::span (a pre-standard version exists in the Guideline Support Library (GSL)) also allow a nicer alternative:
bool compare(std::span<std::string> a, std::span<std::string_view> b) noexcept {
    // std::span has no operator==, so compare element-wise
    return std::equal(a.begin(), a.end(), b.begin(), b.end());
}
The other is minimizing the number of comparisons. You can either use hashing, binary search, or manual ordering of comparisons.
Unfortunately, transparent comparators are a C++14 thing, so you cannot use std::map.
If you want a fast way to do it where the vectors to compare against are not known in advance but are reused (so they can afford a little initial run-time overhead), you can build a tree structure similar to the compile-time version Dirk Herrmann has. This will run in O(n), just iterating over the input and following the tree.
In the simplest case, you might build a tree for each letter/element. A partial implementation could be:
typedef std::vector<std::string> Vector;
typedef Vector::const_iterator Iterator;
typedef std::string::const_iterator StrIterator;
struct Node
{
std::unique_ptr<Node> children[256];
std::unique_ptr<Node> new_str_child;
int result;
bool is_result;
};
Node root;
int compare(Iterator vec_it, Iterator vec_end, StrIterator str_it, StrIterator str_end, const Node *node);
int compare(const Vector &input)
{
return compare(input.begin(), input.end(), input.front().begin(), input.front().end(), &root);
}
int compare(Iterator vec_it, Iterator vec_end, StrIterator str_it, StrIterator str_end, const Node *node)
{
if (str_it != str_end)
{
// Check next character
auto next_child = node->children[(unsigned char)*str_it].get();
if (next_child)
return compare(vec_it, vec_end, str_it + 1, str_end, next_child);
else return -1; // No string matched
}
// At end of input string
++vec_it;
if (vec_it != vec_end)
{
auto next_child = node->new_str_child.get();
if (next_child)
return compare(vec_it, vec_end, vec_it->begin(), vec_it->end(), next_child);
else return -1; // Have another string, but not in tree
}
// At end of input vector
if (node->is_result)
return node->result; // Got a match
else return -1; // Run out of input, but all possible matches were longer
}
This can also be done without recursion. For use cases like yours, you will find that most nodes have only a single successful continuation, so you can collapse those into prefix substrings. To use the OP's example:
"A"
|-"pple" - new vector - "O" - "range" - new vector - "three" - ret 129
| |- "i" - "uyt" - new vector - "asde" ... - ret 234
| |- "0" - "123" - new vector - "three" ... - ret 876
|- new vector "B" - new vector - "CCC" - ret 189
"s" - new vector "O" - new vector "quick" - ret 126
You could make use of the std::equal function like below:
bool compare(vector<string> input1,vector<string> input2)
{
if(input1.size() != input2.size())
{
return false;
}
return std::equal(input1.begin(), input1.end(), input2.begin());
}
Can I compare two vectors like this?
The answer is no; you need to compare a vector with another vector, like this:
vector<string>data = {"ab", "cd", "ef"};
if(data == vector<string>{"ab", "cd", "efg"})
cout << "Equal" << endl;
else
cout << "Not Equal" << endl;
What is the fastest way to do this?
I'm not an expert in asymptotic analysis, but:
Using the equality operator (==) you have a shortcut to compare two vectors: it first validates the sizes and then compares each element. This gives a linear execution (T(n), where n is the size of the vector) that compares each item of the vector, but each string must also be compared, and that is generally another linear comparison (T(m), where m is the size of the string).
Suppose that each string has the same size (m) and you have a vector of size n; each comparison then behaves as T(nm).
So:
If you want a shortcut to compare two vectors, you can use the equality operator.
If you want a program that performs a fast comparison, you should look for some algorithm for comparing strings.

Dijkstra shortest path algorithm performance of std::priority_queue vs std::set

I would like to understand the main difference between these containers regarding their time complexity.
I've tried 3 implementations of Dijkstra's algorithm, as described below:
1- with a simple array used as queue
2- with STL priority_queue
3- with STL set
The graph I've tested is quite big: it contains more than 150000 vertices, it is directed, and all edge weights are positive.
The results I get are the following:
1 - with the array, the algorithm is pretty slow --> which is expected
2 - with the STL priority_queue, the algorithm runs a lot faster than with the array --> which is also expected
3 - with the STL set, the algorithm runs incredibly fast, a couple hundred times faster than with the priority_queue --> I didn't expect this huge performance difference...
Knowing that std::priority_queue and std::set are containers that store elements and both have basically the same O(log n) insertion complexity, I don't understand this big performance difference between them. Do you have any explanation for this?
Thanks for your help.
Edited:
Here is an excerpt of my implementations.
With std::set:
unsigned int Graphe::dijkstra(size_t p_source, size_t p_destination) const {
    ....
    set<pair<int, size_t>> set_vertices;
    vector<unsigned int> distance(listAdj.size(), numeric_limits<unsigned int>::max());
    vector<size_t> predecessor(listAdj.size(), numeric_limits<size_t>::max());
    distance[p_source] = 0;
    set_vertices.insert({ 0, p_source });
    while (!set_vertices.empty()) {
        unsigned int u = set_vertices.begin()->second;
        if (u == p_destination) {
            break;
        }
        set_vertices.erase({ distance[u], u });
        for (auto itr = listAdj[u].begin(); itr != listAdj[u].end(); ++itr) {
            int v = itr->destination;
            int weigth = itr->weigth;
            if (distance[v] > distance[u] + weigth) {
                if (distance[v] != numeric_limits<unsigned int>::max()) {
                    set_vertices.erase(set_vertices.find(make_pair(distance[v], v)));
                }
                distance[v] = distance[u] + weigth;
                set_vertices.insert({ distance[v], v });
                predecessor[v] = u;
            }
        }
    }
    ....
    return distance[p_destination];
}
and with priority_queue:
unsigned int Graphe::dijkstra(size_t p_source, size_t p_destination) const {
    ...
    typedef pair<size_t, int> newpair;
    priority_queue<newpair, vector<newpair>, greater<newpair> > PQ;
    vector<unsigned int> distance(listAdj.size(), numeric_limits<unsigned int>::max());
    vector<size_t> predecessor(listAdj.size(), numeric_limits<size_t>::max());
    distance[p_source] = 0;
    PQ.push(make_pair(p_source, 0));
    while (!PQ.empty()) {
        unsigned int u = PQ.top().first;
        if (u == p_destination) {
            break;
        }
        PQ.pop();
        for (auto itr = listAdj[u].begin(); itr != listAdj[u].end(); ++itr) {
            int v = itr->destination;
            int weigth = itr->weigth;
            if (distance[v] > distance[u] + weigth) {
                distance[v] = distance[u] + weigth;
                PQ.push(make_pair(v, distance[v]));
                predecessor[v] = u;
            }
        }
    }
    ...
    return distance[p_destination];
}
SKIP
You are doubling up on the work really badly with the priority queue.
You are inserting duplicate entries into the queue because you can't modify or delete existing ones. That's normal and necessary with a priority_queue.
But then, when those stale values come out of the queue, you need to skip that iteration of the while loop.
Something like:
if (PQ.top().second != distance[PQ.top().first]) continue; // It's stale! SKIP!!
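For illustration only, a sketch of where that check would sit in your loop, keeping your pair layout of (vertex, distance):
while (!PQ.empty()) {
    unsigned int u = PQ.top().first;
    unsigned int d = PQ.top().second;
    PQ.pop();
    if (d != distance[u]) continue; // stale entry: a shorter path was already found
    // ... relax the outgoing edges of u exactly as before ...
}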
The underlying data structure of std::priority_queue is a binary heap (a max-heap by default), and for std::set it is a self-balancing binary search tree - basically a red-black tree in C++. So both of them ensure O(log n) time complexity for insertion, deletion and update operations.
But, as I mentioned, the balanced binary search tree of std::set is rebalanced automatically to keep its height logarithmic in the number of nodes, which ensures logarithmic query complexity irrespective of insertion order or later operations, whereas std::priority_queue is not self-balancing and its layout depends on insertion order. The self-balancing has its own cost, as does re-heapifying after removing the top, and I think that's the reason for the performance difference.
Hope it helps!

Compare element in a vector with elements in an array

I have two data structures with data in them.
One is a vector, std::vector<int> presentStudents, and the other is a char array, char cAllowedStudents[256];
Now I have to compare these two, checking every element in the vector against the array: all elements in the vector should be present in the array, and I return false if there is an element in the vector that's not part of the array.
I want to know the most efficient and simple solution for doing this. I could convert my int vector into a char array and then compare element by element, but that would be a lengthy operation. Is there a better way of achieving this?
I would suggest you use a hash map (std::unordered_map). Store all the elements of the char array in the hash map.
Then simply check each element of your vector for membership in the map, each lookup being O(1) on average.
Total time complexity O(N), extra space complexity O(N).
Note that you will have to enable C++11 in your compiler.
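A rough sketch of that idea; I use std::unordered_set here instead of an unordered_map, since only membership matters (the variable names are the ones from the question):
#include <algorithm>
#include <iterator>
#include <unordered_set>
#include <vector>

bool allAllowed(const std::vector<int>& presentStudents,
                const char (&cAllowedStudents)[256])
{
    std::unordered_set<int> allowed(std::begin(cAllowedStudents),
                                    std::end(cAllowedStudents));
    return std::all_of(presentStudents.begin(), presentStudents.end(),
                       [&](int s) { return allowed.count(s) != 0; });
}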
Please refer to the function std::set_difference() in the C++ <algorithm> header. You can use it directly (note that both input ranges must be sorted) and check whether the resulting difference is empty; if it is not empty, return false.
A better solution would be adapting the implementation of set_difference(), as shown here: http://en.cppreference.com/w/cpp/algorithm/set_difference, to return false immediately after you find the first differing element.
Example adaption:
while (first1 != last1)
{
if (first2 == last2)
return false;
if (*first1 < *first2)
{
return false;
}
else
{
if (*first2 == *first1)
{
++first1;
}
++first2;
}
}
return true;
Sort cAllowedStudents using std::sort.
Iterate over the presentStudents and look for each student in the sorted cAllowedStudents using std::binary_search.
If you don't find an item of the vector, return false.
If all the elements of the vector are found, return true.
Here's a function:
bool check()
{
// Assuming you have access to cAllowedStudents
// and presentStudents from the function.
char* cend = cAllowedStudents+256;
std::sort(cAllowedStudents, cend);
std::vector<int>::iterator iter = presentStudents.begin();
std::vector<int>::iterator end = presentStudents.end();
for ( ; iter != end; ++iter )
{
if ( !(std::binary_search(cAllowedStudents, cend, *iter)) )
{
return false;
}
}
return true;
}
Another way, using std::set_difference:
bool check()
{
    // Assuming you have access to cAllowedStudents
    // and presentStudents from the function.
    char* cend = cAllowedStudents + 256;
    std::sort(cAllowedStudents, cend);
    // set_difference requires both ranges to be sorted
    std::sort(presentStudents.begin(), presentStudents.end());
    std::vector<int> diff;
    std::set_difference(presentStudents.begin(), presentStudents.end(),
                        cAllowedStudents, cend,
                        std::back_inserter(diff));
    return (diff.size() == 0);
}
Sort both lists with std::sort and use std::find iteratively on the array.
EDIT: The trick is to use the previously found position as a start for the next search.
std::sort(begin(pS), end(pS));
std::sort(begin(aS), end(aS));
auto its = begin(aS);
auto ite = end(aS);
for (auto s : pS) {
    its = std::find(its, ite, s);
    if (its == ite) {
        std::cout << "Student not allowed" << std::endl;
        break;
    }
}
Edit: As legends mentions, it is usually more efficient to use binary search (as in R Sahu's answer). However, for small arrays, and if the vector contains a significant fraction of the students from the array (I'd say at least one tenth), the additional overhead of binary search might (or might not) outweigh its asymptotic complexity benefits.
Using C++11. In your case, size is 256. Note that I personally have not tested this, or even put it into a compiler. It should, however, give you a good idea of what to do yourself. I HIGHLY recommend testing the edge cases with this!
#include <algorithm>
bool check(const std::vector<int>& studs,
           char* allowed,
           unsigned int size){
    for(auto x : studs){
        // search the whole range [allowed, allowed + size)
        if(std::find(allowed, allowed + size, x) == allowed + size)
            return false;
    }
    return true;
}

Time complexity issues with multimap

I created a program that finds the median of a list of numbers. The list of numbers is dynamic in that numbers can be removed and inserted (duplicate numbers can be entered) and during this time, the new median is re-evaluated and printed out.
I created this program using a multimap because
1) the benefit of it already being sorted,
2) easy insertion, deletion, searching (since multimap implements binary search)
3) duplicate entries are allowed.
The constraints for the number of entries + deletions (represented as N) are: 0 < N <= 100,000.
The program I wrote works and prints out the correct median, but it isn't fast enough. I know that unordered_multimap is faster than multimap, but then the problem with unordered_multimap is that I would have to sort it. I have to sort it because to find the median you need a sorted list. So my question is, would it be practical to use an unordered_multimap and then quicksort the entries, or would that just be ridiculous? Would it be faster to just use a vector, quicksort the vector, and use a binary search? Or maybe I am forgetting some fabulous solution out there that I haven't even thought of.
Though I'm not new to C++, I will admit that my skills with time complexity are somewhat mediocre.
The more I look at my own question, the more I'm beginning to think that just using a vector with quicksort and binary search would be better since the data structures basically already implement vectors.
the more I look at my own question, the more I'm beginning to think that just using vector with quicksort and binary search would be better since the data structures basically already implement vectors.
If you have only a few updates - use an unsorted std::vector + the std::nth_element algorithm, which is O(N). You don't need a full sort, which is O(N*ln(N)).
live demo of nth_element:
#include <algorithm>
#include <iterator>
#include <iostream>
#include <ostream>
#include <vector>
using namespace std;
template<typename RandomAccessIterator>
RandomAccessIterator median(RandomAccessIterator first,RandomAccessIterator last)
{
RandomAccessIterator m = first + distance(first,last)/2; // handle even middle if needed
nth_element(first,m,last);
return m;
}
int main()
{
vector<int> values = {5,1,2,4,3};
cout << *median(begin(values),end(values)) << endl;
}
Output is:
3
If you have many updates and only remove from the middle - use two heaps, as comocomocomocomo suggests. If you used a fibonacci_heap - then you would also get O(N) removal from an arbitrary position (if you don't have a handle to it).
If you have many updates and need O(ln(N)) removal from arbitrary places - then use two multisets, as ipc suggests.
If your purpose is to keep track of the median on the fly, as elements are inserted/removed, you should use a min-heap and a max-heap. Each one would contain one half of the elements... There was a related question a couple of days ago: How to implement a Median-heap
Though, if you need to search for specific values in order to remove elements, you still need some kind of map.
You said that it is slow. Are you iterating from the beginning of the map to the (N/2)'th element every time you need the median? You don't need to. You can keep track of the median by maintaining an iterator pointing to it at all times and a counter of the number of elements less than that one. Every time you insert/remove, compare the new/old element with the median and update both iterator and counter.
Another way of seeing it is as two multimaps containing half the elements each. One holds the elements less than the median (or equal) and the other holds those greater. The heaps do this more efficiently, but they don't support searches.
If you only need the median a few times you can use the "select" algorithm. It is described in Sedgewick's book. It takes O(n) time on average. It is similar to quick sort but it does not sort completely. It just partitions the array with random pivots until, eventually, it gets to "select" on one side the smaller m elements (m=(n+1)/2). Then you search for the greatest of those m elements, and this is the median.
Here is how you could implement that in O(log N) per update:
template <typename T>
class median_set {
public:
std::multiset<T> below, above;
// O(log N)
void rebalance()
{
int diff = (int) above.size() - (int) below.size(); // cast so the difference can go negative
if (diff > 0) {
below.insert(*above.begin());
above.erase(above.begin());
} else if (diff < -1) {
above.insert(*below.rbegin());
below.erase(below.find(*below.rbegin()));
}
}
public:
// O(1)
bool empty() const { return below.empty() && above.empty(); }
// O(1)
T const& median() const
{
assert(!empty());
return *below.rbegin();
}
// O(log N)
void insert(T const& value)
{
if (!empty() && value > median())
above.insert(value);
else
below.insert(value);
rebalance();
}
// O(log N)
void erase(T const& value)
{
if (value > median())
above.erase(above.find(value));
else
below.erase(below.find(value));
rebalance();
}
};
The idea is the following:
Keep track of the values above and below the median in two sets
If a new value is added, add it to the corresponding set. Always ensure that the set below has exactly 0 or 1 more elements than the other.
If a value is removed, remove it from the set and make sure that the condition still holds.
You can't use priority_queues because they won't let you remove an arbitrary item.
Can anyone help me with the space and time complexity of the following C# program, with details?
//Passing Integer array to Find Extreme from that Integer Array
public int extreme(int[] A)
{
int N = A.Length;
if (N == 0)
{
return -1;
}
else
{
int average = CalculateAverage(A);
return FindExtremes(A, average);
}
}
// Calaculate Average of integerArray
private int CalculateAverage(int[] integerArray)
{
int sum = 0;
foreach (int value in integerArray)
{
sum += value;
}
return Convert.ToInt32(sum / integerArray.Length);
}
//Find Extreme from that Integer Array
private int FindExtremes(int[] integerArray, int average) {
int Index = -1; int ExtremeElement = integerArray[0];
for (int i = 0; i < integerArray.Length; i++)
{
int absolute = Math.Abs(integerArray[i] - average);
if (absolute > ExtremeElement)
{
ExtremeElement = integerArray[i];
Index = i;
}
}
return Index;
}
You are almost certainly better off using a vector. Possibly maintaining an auxiliary vector of indexes to be removed between median calculations so you can delete them in batches. New additions can also be put into an auxiliary vector, sorted, then merged in.
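A small sketch of that idea (my own illustration, handling only the batched additions; removals could be batched in the same way):
#include <algorithm>
#include <vector>

struct MedianVector {
    std::vector<int> data;     // kept sorted
    std::vector<int> pending;  // additions not merged in yet

    void insert(int v) { pending.push_back(v); }

    void flush() {
        std::sort(pending.begin(), pending.end());
        std::size_t mid = data.size();
        data.insert(data.end(), pending.begin(), pending.end());
        std::inplace_merge(data.begin(), data.begin() + mid, data.end());
        pending.clear();
    }

    int median() {                    // assumes at least one element
        flush();
        return data[data.size() / 2]; // upper median; adjust for even sizes if needed
    }
};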

A* and N-Puzzle optimization

I am writing a solver for the N-Puzzle (see http://en.wikipedia.org/wiki/Fifteen_puzzle)
Right now I am using an unordered_map to store hash values of the puzzle board,
and Manhattan distance as the heuristic for the algorithm, which is a plain DFS.
so I have
auto pred = [](Node * lhs, Node * rhs){ return lhs->manhattanCost_ < rhs->manhattanCost_; };
std::multiset<Node *, decltype(pred)> frontier(pred);
std::vector<Node *> explored; // holds nodes we have already explored
std::tr1::unordered_set<unsigned> frontierHashTable;
std::tr1::unordered_set<unsigned> exploredHashTable;
This works great for n = 2 and 3.
However, it's really hit and miss for n = 4 and above (the STL is unable to allocate memory for a new node).
I also suspect that I am getting hash collisions in the unordered_set.
unsigned makeHash(const Node & pNode)
{
unsigned int b = 378551;
unsigned int a = 63689;
unsigned int hash = 0;
for(std::size_t i = 0; i < pNode.data_.size(); i++)
{
hash = hash * a + pNode.data_[i];
a = a * b;
}
return hash;
}
16! ≈ 2 × 10^13 (possible arrangements)
2^32 ≈ 4 × 10^9 (possible hash values in a 32-bit hash)
My question is how can I optimize my code to solve for n=4 and n=5?
I know from here
http://kociemba.org/fifteen/fifteensolver.html
http://www.ic-net.or.jp/home/takaken/e/15pz/index.html
that n=4 is possible in less than a second on average.
Edit:
The algorithm itself is here:
bool NPuzzle::aStarSearch()
{
auto pred = [](Node * lhs, Node * rhs){ return lhs->manhattanCost_ < rhs->manhattanCost_; };
std::multiset<Node *, decltype(pred)> frontier(pred);
std::vector<Node *> explored; // holds nodes we have already explored
std::tr1::unordered_set<unsigned> frontierHashTable;
std::tr1::unordered_set<unsigned> exploredHashTable;
// if we are in the solved position in the first place, return true
if(initial_ == target_)
{
current_ = initial_;
return true;
}
frontier.insert(new Node(initial_)); // we are going to delete everything from the frontier later..
for(;;)
{
if(frontier.empty())
{
std::cout << "depth first search " << "cant solve!" << std::endl;
return false;
}
// remove a node from the frontier, and place it into the explored set
Node * pLeaf = *frontier.begin();
frontier.erase(frontier.begin());
explored.push_back(pLeaf);
// do the same for the hash table
unsigned hashValue = makeHash(*pLeaf);
frontierHashTable.erase(hashValue);
exploredHashTable.insert(hashValue);
std::vector<Node *> children = pLeaf->genChildren();
for( auto it = children.begin(); it != children.end(); ++it)
{
unsigned childHash = makeHash(**it);
if(inFrontierOrExplored(frontierHashTable, exploredHashTable, childHash))
{
delete *it;
}
else
{
if(**it == target_)
{
explored.push_back(*it);
current_ = **it;
// delete everything else in children
for( auto it2 = ++it; it2 != children.end(); ++it2)
delete * it2;
// delete everything in the frontier
for( auto it = frontier.begin(); it != frontier.end(); ++it)
delete *it;
// delete everything in explored
explored_.swap(explored);
for( auto it = explored.begin(); it != explored.end(); ++it)
delete *it;
return true;
}
else
{
frontier.insert(*it);
frontierHashTable.insert(childHash);
}
}
}
}
}
Since this is homework I will suggest some strategies you might try.
First, try using valgrind or a similar tool to check for memory leaks. You may have some memory leaks if you don't delete everything you new.
Second, calculate a bound on the number of nodes that should be explored. Keep track of the number of nodes you do explore. If you pass the bound, you might not be detecting cycles properly.
Third, try the algorithm with depth first search instead of A*. Its memory requirements should be linear in the depth of the tree and it should just be a matter of changing the sort ordering (pred). If DFS works, your A* search may be exploring too many nodes or your memory structures might be too inefficient. If DFS doesn't work, again it might be a problem with cycles.
Fourth, try more compact memory structures. For example, std::multiset does what you want but std::priority_queue with a std::deque may take up less memory. There are other changes you could try and see if they improve things.
First, I would recommend Cantor expansion (i.e. permutation ranking), which you can use as the hashing method. It is 1-to-1, i.e. the 16! possible arrangements would be hashed into 0 ~ 16! - 1.
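A minimal sketch of such a ranking function (my own illustration, assuming the board is stored as a flat sequence of the 16 tile values):
#include <array>
#include <cstdint>

// Rank a permutation of 0..15 into a unique number in [0, 16! - 1].
std::uint64_t cantorRank(const std::array<int, 16>& tiles)
{
    std::uint64_t rank = 0;
    for (int i = 0; i < 16; ++i) {
        // Count tiles after position i that are smaller than tiles[i].
        int smaller = 0;
        for (int j = i + 1; j < 16; ++j)
            if (tiles[j] < tiles[i])
                ++smaller;
        rank = rank * (16 - i) + smaller;
    }
    return rank;
}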
Then I would implement the map myself; as you may know, std is not always efficient enough for this kind of computation. std::map is a binary search tree; I would recommend a size-balanced tree, or you can use an AVL tree.
And just for the record, directly using a bool hash[] with a big prime modulus may also give good results.
Then the most important thing - the A* evaluation function: as in the first of your links, you may try a variety of such functions and find the best one.
You are only using the heuristic function to order the multiset. You should order your frontier by min(g(n) + h(n)), i.e. min(path length so far + heuristic).
The problem here is that you are picking the node with the smallest heuristic value alone, which may not be the correct next node to expand.
I believe this is what is causing your search to explode.
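For example (a hypothetical sketch: it assumes a pathCost_ member holding g(n), which the posted Node does not show), the ordering predicate would become:
// Order the frontier by f(n) = g(n) + h(n) instead of h(n) alone.
auto pred = [](Node* lhs, Node* rhs) {
    return (lhs->pathCost_ + lhs->manhattanCost_)
         < (rhs->pathCost_ + rhs->manhattanCost_);
};
std::multiset<Node*, decltype(pred)> frontier(pred);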