Efficient permutation algorithm for multiple lists - C++

I have a variable number of lists. Each contains a different number of elements.
For instance, with four lists:
array1 = {1, 2, 3, 4};
array2 = {a, b, c};
array3 = {X};
array4 = {2.10, 3.5, 1.2, 6.2, 0.3};
I need to find all possible tuples whose ith element is from ith list, e.g. {1,a,X,2.10}, {1,a,X,3.5}, ...
Currently I am using a recursive implementation which has a performance issue. Therefore, I want to find a non-recursive way that can perform faster.
Any advice? Are there any efficient algorithms (or some pseudo code)? Thanks!
Some pseudo code of what I implemented so far:
Recursive version:
vector<size_t> indices; // stores the current index of each list except the last one

permutation(index, numOfLists) { // always called with permutation(0, numOfLists)
    if (index == numOfLists - 1) {
        for (i = first_elem_of_last_list; i <= last_elem_of_last_list; ++i) {
            foreach(indices.begin(), indices.end(), printElemAtIndex());
            printElemAtIndex(last_list, i);
        }
    }
    else {
        for (i = first_elem_of_ith_list; i <= last_elem_of_ith_list; ++i) {
            update_indices(index, i);
            permutation(index + 1, numOfLists); // recursive call
        }
    }
}
Non-recursive version:
vector<size_t> indices; // stores the current index of each list except the last one

permutation_iterative(index, numOfLists) {
    bool forward = true;
    int curr = 0;
    while (curr >= 0) {
        if (curr < numOfLists - 1) {
            if (forward)
                curr++;
            else {
                if (permutation_of_last_list_is_done) {
                    curr--;
                }
                else {
                    curr++;
                    forward = true;
                }
                if (curr > 0)
                    update_indices();
            }
        }
        else {
            // last list
            for (i = first_elem_of_last_list; i <= last_elem_of_last_list; ++i) {
                foreach(indices.begin(), indices.end(), printElemAtIndex());
                printElemAtIndex(last_list, i);
            }
            curr--;
            forward = false;
        }
    }
}

There are O(l^n) different such tuples (1), where l is the size of a list and n is the number of lists.
Thus, generating all of them cannot be done in polynomial time.
There might be some local optimizations to be made, but I doubt switching between an iterative and an (efficient) recursive implementation will make much of a difference, if any, especially if the iterative version is trying to mimic a recursive solution using a stack + loop, which is likely less optimized for this purpose than the hardware stack.
A possible recursive approach is:
printAll(list<list<E>> listOfLists, list<E> sol):
    if (listOfLists.isEmpty()):
        print sol
        return
    list<E> currentList <- listOfLists.removeAndGetFirst()
    for each element e in currentList:
        sol.append(e)
        printAll(listOfLists, sol) // recursively invoking with a "smaller" problem
        sol.removeLast()
    listOfLists.addFirst(currentList)
(1) To be exact, there are l_1 * l_2 * ... * l_n tuples, where l_i is the size of the i-th list. For lists of equal length this reduces to l^n.
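For reference, here is a minimal non-recursive C++ sketch of the same enumeration, using one index per list as an odometer. The element type (std::string) and the printing format are illustrative, not taken from the original code:
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Prints every tuple whose i-th component comes from the i-th list.
void printAllTuples(const std::vector<std::vector<std::string>>& lists) {
    if (lists.empty()) return;
    for (const auto& l : lists)
        if (l.empty()) return;                         // no tuples if any list is empty

    std::vector<std::size_t> idx(lists.size(), 0);     // current index into each list
    while (true) {
        // Print the current tuple.
        for (std::size_t i = 0; i < lists.size(); ++i)
            std::cout << lists[i][idx[i]] << (i + 1 < lists.size() ? ", " : "\n");

        // Advance the "odometer": rightmost index first, carrying to the left.
        std::size_t pos = lists.size();
        while (pos > 0) {
            --pos;
            if (++idx[pos] < lists[pos].size()) break; // no carry needed
            idx[pos] = 0;                              // wrap this digit and carry left
            if (pos == 0) return;                      // carried past the first list: done
        }
    }
}

int main() {
    printAllTuples({{"1", "2", "3", "4"},
                    {"a", "b", "c"},
                    {"X"},
                    {"2.10", "3.5", "1.2", "6.2", "0.3"}});
}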

How do I decrease the count of an element in a multiset in C++?

I am using a multi-set in c++, which I believe stores an element and the respective count of it when it is inserted.
Here, when I want to delete an element, I just want to decrease the count of that element in the set by 1 till it is greater than 0.
Example C++ code:
multiset<int>mset;
mset.insert(2);
mset.insert(2);
printf("%d ",mset.count(2)); //this returns 2
// here I need an O(1) constant time function (in-built or whatever )
// to decrease the count of 2 in the set without deleting it
// Remember constant time only
-> Function and its specifications
printf("%d ",mset.count(2)); // it should print 1 now .
Is there any way to achieve that, or should I go by deleting it and inserting the element 2 the required (count-1) times?
... I am using a multi-set in c++, which stores an element and the respective count of it ...
No you aren't. You're using a multi-set which stores n copies of a value which was inserted n times.
If you want to store something relating a value to a count, use an associative container like std::map<int, int>, and use map[X]++ to increment the number of Xs.
... i need an O(1) constant time function ... to decrease the count ...
Both map and set have O(log N) complexity just to find the element you want to alter, so this is impossible with them. Use std::unordered_map/set to get (average) O(1) complexity.
... I just want to decrease the count of that element in the set by 1 till it is >0
I'm not sure what that means.
with a set:
to remove all copies of an element from the set, use equal_range to get a range (pair of iterators), and then erase that range
to remove all-but-one copies in a non-empty range, just increment the first iterator in the pair and check it's still not equal to the second iterator before erasing the new range (see the sketch below)
these both have an O(log N) lookup (equal_range) step followed by a linear-time erase step (although it's linear with the number of elements having the same key, not N).
with a map:
to remove the count from a map, just erase the key
to set the count to one, just use map[key]=1;
both of these have an O(log N) lookup followed by a constant-time erase
with an unordered map ... for your purposes it's identical to the map above, except with O(1) complexity.
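A minimal sketch of the set-based route described above, reducing a key in a std::multiset to a single copy; the value 2 is just the example from the question:
#include <cstdio>
#include <set>

int main() {
    std::multiset<int> mset{2, 2, 2};

    auto range = mset.equal_range(2);          // O(log N) lookup of all copies of 2
    if (range.first != range.second) {         // the key is present
        ++range.first;                         // keep the first copy
        mset.erase(range.first, range.second); // erase the rest
    }
    std::printf("%zu\n", mset.count(2));       // prints 1
}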
Here's a quick example using unordered_map:
#include <unordered_map>

template <typename Key>
class Counter {
    std::unordered_map<Key, unsigned> count_;
public:
    unsigned inc(Key k, unsigned delta = 1) {
        auto result = count_.emplace(k, delta);
        if (result.second) {
            return delta;
        } else {
            unsigned& current = result.first->second;
            current += delta;
            return current;
        }
    }
    unsigned dec(Key k, unsigned delta = 1) {
        auto iter = count_.find(k);
        if (iter == count_.end()) return 0;
        unsigned& current = iter->second;
        if (current > delta) {
            current -= delta;
            return current;
        }
        // else current <= delta means zero
        count_.erase(iter);
        return 0;
    }
    unsigned get(Key k) const {
        auto iter = count_.find(k);
        if (iter == count_.end()) return 0;
        return iter->second;
    }
};
and use it like so:
#include <cassert>

int main() {
    Counter<int> c;
    // test increment
    assert(c.inc(1) == 1);
    assert(c.inc(2) == 1);
    assert(c.inc(2) == 2);
    // test lookup
    assert(c.get(0) == 0);
    assert(c.get(1) == 1);
    // test simple decrement
    assert(c.get(2) == 2);
    assert(c.dec(2) == 1);
    assert(c.get(2) == 1);
    // test erase and underflow
    assert(c.dec(2) == 0);
    assert(c.dec(2) == 0);
    assert(c.dec(1, 42) == 0);
}

Removing multiple elements from a vector c++

Here is some code which checks if 2 units are killed after they attack each other. I pass in their positions in the vector; however, when I remove one, the vector changes in size and therefore the 2nd unit is out of range. How can I remove both simultaneously?
if ((MyHealth <= 0) && (EnemyHealth <= 0))
{
    PlayerUnits.erase(PlayerUnits.begin() + MyUnit, PlayerUnits.begin() + EnemyUnit);
}
else if (MyHealth <= 0)
{
    PlayerUnits.erase(PlayerUnits.begin() + MyUnit);
}
else if (EnemyHealth <= 0)
{
    PlayerUnits.erase(PlayerUnits.begin() + EnemyUnit);
}
Instead of coding the removal logic yourself, it would be better managed using std::remove_if from <algorithm>. Depending on whether your compiler supports C++11 or not, the predicate can be either a lambda or a named function.
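A minimal sketch of the erase/remove_if idiom; the Unit type and its health field are illustrative stand-ins for the question's types:
#include <algorithm>
#include <vector>

struct Unit {
    int health; // illustrative: whatever the real unit class exposes
};

// Removes every dead unit in one pass, so indexes never go stale.
void removeDeadUnits(std::vector<Unit>& units) {
    units.erase(std::remove_if(units.begin(), units.end(),
                               [](const Unit& u) { return u.health <= 0; }),
                units.end());
}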
First point: In your first block, the call erase(x, y) does something different from what you expect: it erases a whole range of elements starting at index x until just before index y. For example, if we have the vector [a,b,c,d,e,f,g], then erase(2,5) will erase indexes 2, 3, 4, so we end up with [a,b,f,g]. I'm guessing you want to erase just two elements, not an entire range.
Second point: As Dieter Lücking pointed out, simply erase the higher index element first, like this:
if (MyUnit > EnemyUnit) {
    PlayerUnits.erase(PlayerUnits.begin() + MyUnit);
    PlayerUnits.erase(PlayerUnits.begin() + EnemyUnit);
} else {
    PlayerUnits.erase(PlayerUnits.begin() + EnemyUnit);
    PlayerUnits.erase(PlayerUnits.begin() + MyUnit);
}
I think a better way to handle this would be to add an "isDead" condition to your "unit" class:
void Unit::Update()
{
    // other stuff
    if (this->m_health <= 0) this->m_isDead = true;
}
then in the main loop:
void Game::Update()
{
    size_t size = PlayerUnits.size();
    // iterate backwards, so there is no skipping
    for (int i = size - 1; i >= 0; i--)
    {
        PlayerUnits[i]->Update();
        if (PlayerUnits[i]->isDead()) PlayerUnits.erase(PlayerUnits.begin() + i);
    }
}
This is at least how I personally do it.

Segmentation Fault: 11 when trying to sort linked list by odds and evens

I'm trying to complete an assignment for my Data Structures course but I keep getting a segfault in one of my functions.
What I have this function doing is creating two new chains, one for even numbers and another for odds, iterating through the original list, and populating the new chains based on whether the element is even or odd.
What I am stuck on is getting the last node from the odd chain to link to the beginning of the even chain, because the chains need to be linked together at the end of the function.
void chain::oddAndEvenOrdering()
{
    // This function reorders the list in
    // such a way that all odd numbers precede all even numbers.
    // Note that for two odd (even)
    // numbers i and j, the ordering between
    // i and j should be intact after reordering.

    // Create empty chain to store odds
    chain *oddChain = new chain(100);
    chainNode *oddNode = oddChain->firstNode;
    // Create empty chain to store evens
    chain *evenChain = new chain(100);
    int countOdd = 0;
    int countEven = 0;
    for (int i = 0; i < listSize-1; i++)
    {
        if (*this->get(i) % 2 == 0)
        {
            evenChain->insert(countEven, *this->get(i));
            countEven++;
        } else {
            oddChain->insert(countOdd, *this->get(i));
            oddNode = oddNode->next;
            countOdd++;
        }
    }
    chainNode *evenNode = evenChain->firstNode;
    oddNode->next = evenNode;
    delete this;
    this->firstNode = oddChain->firstNode;
}
This will most certainly produce an error:
delete this;
this->firstNode = oddChain->firstNode;
You delete this and then try to access its members.
Your memory issue has already been addressed in Anton's answer, but you could avoid it completely if you implemented your container in a way that is consistent with the standard library. For example:
std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
std::partition(vec.begin(), vec.end(), [](int i)
{
    return i % 2;
});
This will put all of the odds at the start of the vector and all of the evens at the end.
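Since the assignment also requires the relative order among the odd numbers (and among the even numbers) to stay intact, std::stable_partition preserves it; a minimal sketch:
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    // stable_partition keeps the original order within each group
    std::stable_partition(vec.begin(), vec.end(), [](int i) { return i % 2 != 0; });
    for (int v : vec) std::cout << v << ' ';   // prints 1 3 5 7 9 2 4 6 8 10
    std::cout << '\n';
}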

Find nth smallest element in Binary Search Tree

I have written an algorithm for finding the nth smallest element in a BST, but it returns the root node instead of the nth smallest one. So if you input nodes in the order 7 4 3 13 21 15, this algorithm after the call find(root, 0) returns the node with value 7 instead of 3, and for the call find(root, 1) it returns 13 instead of 4. Any thoughts?
Binode* Tree::find(Binode* bn, int n) const
{
    if (bn != NULL)
    {
        find(bn->l, n);
        if (n-- == 0)
            return bn;
        find(bn->r, n);
    }
    else
        return NULL;
}
and definition of Binode
class Binode
{
public:
    int n;
    Binode *l, *r;
    Binode(int x) : n(x), l(NULL), r(NULL) {}
};
It is not possible to efficiently retrieve the n-th smallest element in a binary search tree by itself. However, this does become possible if you keep in each node an integer indicating the number of nodes in its entire subtree. From my generic AVL tree implementation:
static BAVLNode * BAVL_GetAt (const BAVL *o, uint64_t index)
{
    if (index >= BAVL_Count(o)) {
        return NULL;
    }
    BAVLNode *c = o->root;
    while (1) {
        ASSERT(c)
        ASSERT(index < c->count)
        uint64_t left_count = (c->link[0] ? c->link[0]->count : 0);
        if (index == left_count) {
            return c;
        }
        if (index < left_count) {
            c = c->link[0];
        } else {
            c = c->link[1];
            index -= left_count + 1;
        }
    }
}
In the above code, node->link[0] and node->link[1] are the left and right child of node, and node->count is the number of nodes in the entire subtree of node.
The above algorithm has O(log n) time complexity, assuming the tree is balanced. Also, if you keep these counts, another operation becomes possible: given a pointer to a node, it is possible to efficiently determine its index (the inverse of what you asked for). In the code I linked, this operation is called BAVL_IndexOf().
Be aware that the node counts need to be updated as the tree is changed; this can be done with no (asymptotic) change in time complexity.
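Applied to the question's Binode, the same idea looks roughly like the sketch below; the count field is an addition that has to be maintained on every insert and delete, and the node/function names are illustrative:
#include <cstddef>

// Each node stores the size of its subtree, so the n-th smallest
// element is found with a single walk down the tree.
struct CountedBinode {
    int n;                      // value
    int count;                  // number of nodes in this subtree, including this one
    CountedBinode *l, *r;
    explicit CountedBinode(int x) : n(x), count(1), l(NULL), r(NULL) {}
};

CountedBinode* select(CountedBinode* root, int index)  // 0-based index
{
    CountedBinode* c = root;
    while (c) {
        int leftCount = c->l ? c->l->count : 0;
        if (index == leftCount)
            return c;                    // exactly leftCount smaller values exist
        if (index < leftCount) {
            c = c->l;                    // target is in the left subtree
        } else {
            index -= leftCount + 1;      // skip the left subtree and this node
            c = c->r;
        }
    }
    return NULL;                         // index out of range
}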
There are a few problems with your code:
1) find() returns a value (the correct node, assuming the function is working as intended), but you don't propagate that value up the call chain, so top-level calls don't know about the (possibly) found element:
Binode* elem = NULL;
elem = find(bn->l, n);
if (elem) return elem;
if (n-- == 0)
    return bn;
elem = find(bn->r, n);
return elem; // here we don't need to test: we need to return regardless of the result
2) Even though you do the decrement of n at the right place, the change does not propagate upward in the call chain. You need to pass the parameter by reference (note the & after int in the function signature), so the change is made on the original value, not on a copy of it:
Binode* Tree::find(Binode* bn, int& n) const
I have not tested the suggested changes, but they should put you in the right direction for progress.
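Putting the two suggestions together, a sketch of the corrected search (findNth here is a free-function stand-in for Tree::find, and the Binode definition is copied from the question):
#include <cstddef>

class Binode
{
public:
    int n;
    Binode *l, *r;
    Binode(int x) : n(x), l(NULL), r(NULL) {}
};

// n is passed by reference so every recursive call shares the same counter,
// and the found node is propagated back up the call chain.
Binode* findNth(Binode* bn, int& n)
{
    if (bn == NULL)
        return NULL;

    Binode* elem = findNth(bn->l, n);  // smaller values live in the left subtree
    if (elem)
        return elem;

    if (n-- == 0)                      // bn is the next node in sorted order
        return bn;

    return findNth(bn->r, n);          // keep counting in the right subtree
}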

C++ linked-list legacy code to STL -- allocating linked list size on the fly

I am working on some legacy code that defines a linked list (not using an STL container).
I want to convert this code to use an STL list. As you can see in the following example, the linked list is assigned an initial value for all Ns. Then certain elements in the list are assigned some value. Then the "empty elements" in the list are "cleaned up".
I am looking for a better way to do this using the STL. In particular, can this code for deleting empty elements be avoided? I checked the STL documentation; it defines a remove method, but that's not exactly what I need here. Is there a way to dynamically allocate the linked list size? I would appreciate your suggestions!
Update
I have modified the code. this resembles the main code I have. But to avoid any confusion, I am writing a pseudo code below to explain how it works.
Steps
Allocate a size of elementIds to the linked list (struct my_list).
There is another linked list meshElem, and I am interested in some values from the meshElem->elem struct.
For example: I need elemId = meshElem->elem->id; this elemId is in the range 0 to elementIds.
The elemId will be used as an index to look for a particular element in struct my_list lst, e.g. lst[elemId].
In the doSomething() function, loop through 0 to elementIds. In this loop, if certain conditions are satisfied, lst->number is assigned an integer value someArray[i], where i is in the range 0 to N (done in appendElement).
The elements without a next entry in struct my_list lst are cleaned up (Question: can this be avoided?).
The lst->number value is used further in the code for some other processing.
Now the modified code:
struct my_list
{
    int number;
    struct my_list *prev;
    struct my_list *next;
};

void doSomething(void) {
    const int N = 50000;
    const int elementIds = 10000;
    int i, elemId, node_i;
    struct my_list *lst;
    lst = new struct my_list[elementIds];
    int someArray[12];

    meshElem = mesh->next;
    for (i = 0; i <= elementIds; i++) {
        lst[i].number = 0;
        lst[i].next = NIL;
        lst[i].prev = NIL;
    }

    while (meshElem != NIL) {
        // Element id (int)
        // Note that any elemId will be in range [0 - elementIds]
        elemId = meshElem->elem->id;

        // Do some operations to populate "lst"
        // Note that 'lst' gets values ONLY for certain
        // values of i
        for (i = 0; i <= N; i++) {
            // if certain conditions are satisfied,
            // it updates the linked list element
            // lst[meshIdx]. foo1(), foo2() are just some conditions...
            if (foo1()) {
                appendElement(someArray[i], &lst[meshIdx]);
            }
            else if (foo2()) {
                appendElement(someArray[i], &lst[meshIdx]);
            }
        }
        meshElem = meshElem->next;
    } // End of while (meshElem != NIL)

    // Clean up the linked list lst
    // by removing unassigned items.
    struct my_list *lst_2;
    for (i = 1; i <= N; i++) {
        lst_2 = &lst[i];
        while (lst != NIL) {
            if (lst->next != NIL && lst->next->number == 0) {
                delete lst_2->next;
                lst_2->next = NIL;
            } // end of if
            lst = lst_2->next;
        } // end of while (lst != NIL)
    } // End of for (i = 1; i <= N; i++)

    // Do some more stuff that uses struct my_list lst
    for (i = 1; i <= elementIds; i++) {
        while (lst[i] != NIL && (node_i = lst[i]->number)) {
            if (node_i == 0) {
                lst[i] = lst[i]->next;
                continue;
            }
            // Use this "node_i" index in some other arrays to
            // do more stuff.
            //..
            //..
            //..
            lst[i] = lst[i]->next;
        }
    }
}

void appendElement(int n, struct my_list *lst) {
    int exists = 0;
    while (lst->next != NIL) {
        if (lst->number == n) {
            exists = 1;
            lst = lst->next;
        }
        if (exists < 1) {
            lst->number = n2;
            insertElemAfter(lst, 0);
        }
    }
}
Your legacy linked list is essentially a threaded sparse vector. The array with NULLs is the sparse vector, and the linked list provides the threading. The two combined give constant access to individual nodes (random access into the array) and efficient traversal over "valid" nodes (avoiding NULLs).
Assuming both of these aspects are important, and assuming the Data may be more complex than the simple int you show, you could create a data structure such as:
class ThreadedSparseVector {
private:
    std::vector<Data*> m_thread;
    std::vector<int> m_sparse_vector;
};
During initialization, you can pre-size m_sparse_vector and initialize the values to -1. As you append elements (or access them), check if it is already "valid" first, adding it to the thread if not:
void ThreadedSparseVector::appendElement(int i) {
    if (-1 == m_sparse_vector[i]) {
        // Add new data
        m_sparse_vector[i] = m_thread.size();
        m_thread.push_back(new Data(i));
    }
    Data* data = m_thread[m_sparse_vector[i]];
    // Initialize/update data as necessary
}
If the threading is more important than random access, another option is to simply use an STL map. If random access is more important than threading, then you can simply use an STL vector and tolerate NULLs during iteration (or create a custom iterator that automatically skips NULLs).
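For the STL map route just mentioned, one possible shape is sketched below. The names are illustrative, and the value type mirrors appendElement in the question, which stores each distinct number at most once per elemId; since only assigned ids ever appear in the map, the clean-up pass disappears:
#include <map>
#include <set>

void doSomethingWithMap() {
    std::map<int, std::set<int>> numbersById;   // elemId -> distinct numbers

    // Inside the mesh loop, instead of appendElement(someArray[i], &lst[meshIdx]):
    int elemId = 0;                             // illustrative; comes from meshElem->elem->id
    int value = 0;                              // illustrative; comes from someArray[i]
    numbersById[elemId].insert(value);          // O(log) insert; duplicates are ignored

    // Later processing: iterate only over ids that were actually assigned something.
    for (const auto& entry : numbersById) {
        for (int node_i : entry.second) {
            (void)node_i;                       // use node_i with the other arrays here
        }
    }
}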
Another alternative, depending on your motivation to convert to STL, is to create a wrapper around the legacy code that implements an STL-compatible interface, as opposed to converting the data structure itself to use STL.
A linked list typically does not use contiguous memory, but rather fragmented heap-allocated nodes. I guess that if you provide the std::list constructor with an initial count, at least that many nodes will be contiguous. Other than that, you'd need to write your own allocator to go with std::list.
struct my_list {
    int number;
};

void doSomething(void) {
    std::list<my_list> lst;
    const int N = 10000;
    for (int i = 0; i <= N; ++i) {
        if (1 == foo(N)) { // I'm guessing this is what you meant
            my_list new_element; // Initialize with something
            lst.push_back(new_element);
        }
    }
}

int foo(int n) {
    // some function that returns 0 or 1 based on
    // the value of n
    return 1;
}
I think you just want:
std::list<int> lst;
for (i = 0; i <= N; i++) {
    if (i == foo(N))
        lst.push_back(/*some new value*/);
}
In your logic, instead of creating all the nodes at the beginning, why don't you run the loop only once, create one element dynamically whenever the condition is true, and add it to the list?
for (i = 0; i <= N; i++)
{
    if (i == foo(N))
    {
        node = createNode();
        appendElement(node);
    }
}