implement a queue - c++

I have the following queue class (taken from wordpress):
#include<iostream.h>
class Queue
{
private:
int data;
Queue*next;
public:
void Enque(int);
int Deque();
}*head,*tail;
void Queue::enque(int data)
{
Queue *temp;
temp=new Queue;
temp->data=data;
temp->next=NULL;
if(heads==NULL)
heads=temp;
else
tail->next=temp;
tail=temp;
}
int Queue::deque()
{
Queue* temp;//
temp=heads;
heads=heads->next;
return temp->data;
}
I'm trying to figure out why the compiler tells me that I have a multiple definition of "head" and "tail", without success.
edit: When the compiler gives the error message it opens up a locale_facets.tcc file
from I-don't-know-where and says that the error is on line 2497 in the following function:
bool
__verify_grouping(const char* __grouping, size_t __grouping_size,
const string& __grouping_tmp)
Does anyone have any insights?

Since this is homework, here is some information about queues and how you could go about implementing one.
A Queue is a standard Abstract Data Type.
It has several properties associated with it:
It is a linear data structure - all components are arranged in a straight line.
It has a grow/decay rule - queues add and remove from opposite ends.
Knowledge of how they're constructed shouldn't be needed to use them, because they expose a public interface.
Queues can be modeled using Sequential Arrays or Linked-Lists.
If you're using an array there are some things to consider, because you grow in one direction and will eventually run out of array space. You then have some choices to make (shift versus grow). If you choose to shift back to the beginning of the array (wrap around), you have to make sure the head and tail don't overlap. If you choose to simply grow the queue, you end up with a lot of wasted memory.
If you're using a Linked-List, you can insert anywhere and the queue will grow from the tail and shrink from the head. You also don't have to worry about filling up your list and having to wrap/shift elements or grow.
However you decide to implement the queue, remember that Queues should provide some common interface to use the queue. Here are some examples:
enqueue - Inserts an element at the back (tail) of the queue
dequeue - Remove an element from the front (head) of a non-empty queue.
empty - Returns whether the queue is empty or not
size - Returns the size of the queue
There are other operations you might want to add to your queue (In C++, you may want an iterator to the front/back of your queue) but how you build your queue should not make a difference with regards to the operations it provides.
However, depending on how you want to use your queue, there are better ways to build it. The usual tradeoff is insert/removal time versus search time. Here is a decent reference.
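Putting the linked-list representation and the interface above together, here is a minimal sketch of what such a queue could look like (type and member names are my own, not part of any assignment):
#include <cstddef>
#include <stdexcept>

// Minimal singly linked-list queue sketch: grows at the tail, shrinks at the head.
class IntQueue {
    struct Node { int data; Node* next; };
    Node* head = nullptr;   // front of the queue (dequeue side)
    Node* tail = nullptr;   // back of the queue (enqueue side)
    std::size_t count = 0;
public:
    ~IntQueue() { while (!empty()) dequeue(); }
    void enqueue(int value) {
        Node* node = new Node{value, nullptr};
        if (tail) tail->next = node; else head = node;  // empty queue: node is also the head
        tail = node;
        ++count;
    }
    int dequeue() {
        if (!head) throw std::underflow_error("queue empty");
        Node* old = head;
        int value = old->data;
        head = old->next;
        if (!head) tail = nullptr;   // queue became empty
        delete old;
        --count;
        return value;
    }
    bool empty() const { return head == nullptr; }
    std::size_t size() const { return count; }
};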

If your assignment is not directly related to queue implementation, you might want to use the built-in std::queue class in C++:
#include <queue>
void test() {
std::queue<int> myQueue;
myQueue.push(10);
if (myQueue.size())
myQueue.pop();
}

Why don't you just use the queue in standard C++ library?
#include <queue>
using namespace std;
int main() {
queue<int> Q;
Q.push(1);
Q.push(2);
Q.push(3);
Q.front(); // 1
Q.front(); // 1 again, we need to pop
Q.pop(); // void
Q.front(); // 2
Q.pop();
Q.front(); // 3
Q.pop();
Q.empty(); // true
return 0;
}

There are a couple of things wrong:
Your methods are declared as Enque and Deque, but defined as enque and deque: C++ is case sensitive.
Your methods refer to "heads", which doesn't appear to exist; do you mean "head"?

If you need this for BFS... just use deque.
#include <deque>
using namespace std;
void BFS() {
deque<GraphNode*> to_visit;
to_visit.push_back(start_node);
while (!to_visit.empty()) {
GraphNode* current = to_visit.front();
current->visit(&to_visit); // enqueues more nodes to visit with push_back
to_visit.pop_front();
}
}
The GraphNode::visit method should do all your "work" and add more nodes to the queue to visit. The only methods you should need are push_back(), front(), and pop_front().
This is how I always do it. Hope this helps.

It looks like your problem might have something to do with the fact that:
class Queue {
// blah
} *head, * tail;
is defining a Queue class, and declaring head and tail as type Queue*. They do not look like members of the class, which they should be.
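For illustration, the declaration could look something like this instead (just a sketch; separating a node type from the queue itself is my own assumption, since the original code uses the Queue class for both):
class Queue
{
    struct Node { int data; Node* next; };
    Node* head = nullptr;   // members of the queue, not globals defined in a header
    Node* tail = nullptr;
public:
    void Enque(int);
    int Deque();
};                          // note: no *head, *tail after the closing brace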

Related

Std::forward_list thread safety

With reference to the following code:
#include <iostream>
#include <vector>
#include <mutex>
#include <forward_list>
using std::cout;
using std::endl;
class Cache {
public:
// thread safe, only 1 writer, thanks to mutex
int add(int val)
{
std::lock_guard<std::mutex> lk(mtx);
if(flSize == 0)
{
fl.push_front(val);
backIt = fl.begin();
}
else
{
backIt = fl.insert_after(backIt,val);
}
++flSize;
return flSize - 1;
}
// allow concurrent readers (with other readers and writer)
// get the element at idx into the forward linked list
// the thread calling get() will never try to index past the last
// index/value it pushed to list. It uses the return value from add() as
// the arg to get
// a.k.a get will never be called before an add() has been called, and
// if add has been called N times, the index to get will be [0, N)
int get(int idx)
{
int num = 0;
auto it = fl.cbegin();
while(num < idx)
{
++num;
++it;
}
return *it;
}
private:
std::forward_list<int> fl;
size_t flSize {0};
std::forward_list<int>::iterator backIt;
std::mutex mtx;
};
The goal is to have readers read from any node in the linked list that has been constructed fully.
Thoughts:
This seems to be thread safe (under the aforementioned constraints). I think I am relying on implementation details to achieve this behavior. I am not sure if something can go wrong here or if any of my assumptions are incorrect. Is this code portable (across compilers), and is it future proof, or might it break if a future implementation changes?
Question:
can I access the data for a node in a std::forward_list in a thread while another thread is performing std::forward_list::insert_after on the same node?
Does the standard provide any guidelines for such a scenario?
Of course you can access a node in one thread while adding another in another thread. You only get in trouble if you try to access data that is being modified, but insert_after doesn't modify the data in existing nodes nor does it move any node around. No iterators or references are invalidated.
As long as you don't expose a "remove" function or access to iterators (so no thread can iterate through the list while something is being inserted), this is fine. However, I don't see the point of the member backIt: it is only accessed (and modified) when the mutex is locked, so it is effectively the same as fl.end(). If std::forward_list had a size method, flSize would also be redundant.
A couple of suggestions to end with. First, I would suggest against using a linked list to begin with. If you can reserve enough elements, or if you can deal with resizing (while locked for readers) when necessary, I would just use a vector, or perhaps a vector of pointers if the items are very big. If the reserve or resize cannot be done, I would use a std::deque. Second, if you really want to use a std::forward_list, I would use push_front instead, and return an iterator (possibly const) from add that the user can later pass to get. That way, there is no need for any complex logic in either of these functions, and flSize can also be removed. Indeed, get could also be removed, since the iterator provides access to the data. Unless there is something more to your comment "It uses the return value from add() as the arg to get".
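A minimal sketch of that push_front idea (keeping the Cache name from the question; this is illustrative, not drop-in code):
#include <forward_list>
#include <mutex>

class Cache {
public:
    // insert under the lock and hand back an iterator to the new element;
    // push_front never invalidates other forward_list iterators
    std::forward_list<int>::const_iterator add(int val)
    {
        std::lock_guard<std::mutex> lk(mtx);
        fl.push_front(val);
        return fl.cbegin();          // points at the element just inserted
    }
    // readers simply dereference the iterator they were given by add()
    int get(std::forward_list<int>::const_iterator it) const { return *it; }
private:
    std::forward_list<int> fl;
    std::mutex mtx;
};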

Tree traversal falls into infinite loop (with huffman algorithm implementation)

I am trying to implement the Huffman algorithm following the steps described in this tutorial: https://www.programiz.com/dsa/huffman-coding, and so far I have this code:
void encode(string filename) {
List<HuffmanNode> priorityQueue;
List<Node<HuffmanNode>> encodeList;
BinaryTree<HuffmanNode> toEncode;
//Map<char, string> encodeTable;
fstream input;
input.open(filename, ios_base::in);
if (input.is_open()) {
char c;
while (!input.eof()) {
input.get(c);
HuffmanNode node;
node.data = c;
node.frequency = 1;
int pos = priorityQueue.find(node);
if(pos) {
HuffmanNode value = priorityQueue.get(pos)->getData();
value++;
priorityQueue.update(pos, value);
} else {
priorityQueue.insert(node);
}
}
}
input.close();
priorityQueue.sort();
for(int i=1; i<=priorityQueue.size(); i++)
encodeList.insert( priorityQueue.get(i) );
while(encodeList.size() > 1) {
Node<HuffmanNode> * left = new Node<HuffmanNode>(encodeList.get(1)->getData());
Node<HuffmanNode> * right = new Node<HuffmanNode>(encodeList.get(2)->getData());
HuffmanNode z;
z.data = 0;
z.frequency = left->getData().frequency + right->getData().frequency;
Node<HuffmanNode> z_node;
z_node.setData(z);
z_node.setPrevious(left);
z_node.setNext(right);
encodeList.remove(1);
encodeList.remove(1);
encodeList.insert(z_node);
}
Node<HuffmanNode> node_root = encodeList.get(1)->getData();
toEncode.setRoot(&node_root);
}
Full code for main.cpp is here: https://pastebin.com/Uw5g9s7j.
When I try to run this, the program reads the bytes from the file, groups each character by frequency and orders the list, but when I try to generate the Huffman tree I am unable to traverse it, always falling into an infinite loop (the method gets stuck on the nodes containing the first 2 items from the priorityQueue above).
I tried the tree class with BinaryTree<int>, and everything works fine in this case, but with the code above the issue happens. The code for the tree is this (in the code, previous == left and next == right - I am using here the same Node class already implemented for my List class): https://pastebin.com/ZKLjuBc8.
The code for the List used in this example is: https://pastebin.com/Dprh1Pfa. And the code for the Node class used for both the List and the BinaryTree classes is: https://pastebin.com/ATLvYyft. Can anyone tell me what I am missing here? What am I getting wrong?
UPDATE
I have tried a version using only the C++ STL (with no custom List or BinaryTree implementations), but the same problem happened. The code is here: https://pastebin.com/q0wrVYBB.
Too many things to mention as comments so I'm using an answer, sorry:
So going top to bottom through the code:
Why are you defining all methods outside the class? That just makes the code so much harder to read and is much more work to type.
Node::Node()
NULL is C code, use nullptr. And why not use member initialization in the class?
class Node {
private:
T data{};
Node * previous{nullptr};
Node * next{nullptr};
...
Node::Node(Node * node) {
What is that supposed to be? You create a new node, copy the value and attach it to the existing list of Nodes like a Remora.
Is this supposed to replace the old Node? Be a move constructor?
Node::Node(T data)
Write
Node<T>::Node(T data_ = T{}) : data{data_} { }
and remove the default constructor. The member initialization from (1) initializes the remaining members.
Node::Node(T data, Node * previous, Node * next)
Again creating a Remora. This is not inserting into an existing list.
T Node::getData(), void Node::setData(T value)
If everyone can get and set data then just make it public. That will also mean it will work with const Node<T>. Your functions are not const correct because you lack all the const versions.
Same for previous and next. But those should actually do something when you set the member. The node you point to should point back to you, or be made to do so:
void Node::setPrevious(Node * previous) {
// don't break an existing list
assert(this->previous == nullptr);
assert(previous->next == nullptr);
this->previous = previous;
previous->next = this;
}
Think about the copy and move constructors and assignment.
Follow the rule of 0/3/5: https://en.cppreference.com/w/cpp/language/rule_of_three . This goes for Node, List, ... all the classes.
List::List()
Simpler to use
Node<T> * first{nullptr};
List::~List()
You are deleting the elements of the list front to back, each time traversing the list from the front until you find index number i. Besides being horribly inefficient, the front nodes have already been deleted by the time you traverse over them. This is "use after free".
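A rough sketch of a destructor that avoids both problems by walking the chain once (this assumes an accessor like getNext(), analogous to the getData() discussed above; adjust to whatever your Node actually exposes):
template <typename T>
List<T>::~List()
{
    Node<T>* current = first;
    while (current != nullptr) {
        Node<T>* next = current->getNext();  // hypothetical accessor: grab the link before deleting
        delete current;
        current = next;
    }
    first = nullptr;
}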
void List::insert(T data)
this->first = new Node<T>();
this->first->setData(data);
just write
first = new Node<T>(data);
And if insert will append to the tail of the list then why not keep track of the tail so the insert runs in O(1)?
void List::update(int index, T data)
If you need access to a list by index that is a clear sign that you are using the wrong data structure. Use a vector, not a list, if you need this.
void List::remove(int index)
As mentioned in the comments, there are 2 memory leaks here. Also, aux->next->previous still points at the deleted aux, likely causing "use after free" later on.
int List::size()
Nothing wrong here, that's a first. But if you need this frequently you could keep track of the size of the list in the List class.
Node * List::get(int index)
Nothing wrong except the place where you use this has already freed the nodes so this blows up. Missing the const counterpart. And again a strong indication you should be using a vector.
void List::set(int index, Node * value)
What's this supposed to do? Replace the n-th node in a list with a new node? Insert the node at a specific position? What it actually does is follow the list for index steps and then assign the local variable aux the value of value. Meaning it does absolutely nothing, slowly.
int List::find(T data)
Why return an index? Why not return a reference to the node? Also const and non-const version.
void List::sort()
This code looks like a bubble sort. Assuming it wasn't totally broken by all the previous issues, it would be O(n^4). I'm assuming the if(jMin != i) is supposed to swap the two elements in the list. Well, it's not.
I'm giving up now. This is all just the support classes to implement the BinaryTree, which itself is just support. That is 565 lines of code before you even start with your actual problem, and a lot of it seems broken one way or another. None of it can work in the state Node and List are in, especially with copy construction / copy assignment of lists.

C++ N-last added items container

I'm trying to find the optimal data structure for the following simple task: a class which keeps the N last added item values in a built-in container. When the object receives the (N+1)-th item, it should be added at the end of the container and the first item removed from it. It's like a simple queue, but the class should have a GetAverage method, and other methods which must have access to every item. Unfortunately, std::queue doesn't have begin and end methods for this purpose.
It's a part of simple class interface:
class StatItem final
{
static int ITEMS_LIMIT;
public:
StatItem() = default;
~StatItem() = default;
void Reset();
void Insert(int val);
int GetAverage() const;
private:
std::queue<int> _items;
};
And part of desired implementation:
void StatItem::Reset()
{
std::queue<int> empty;
std::swap(_items, empty);
}
void StatItem::Insert(int val)
{
_items.push(val);
if (_items.size() == ITEMS_LIMIT)
{
_items.pop();
}
}
int StatItem::GetAverage() const
{
const size_t itemCount{ _items.size() };
if (itemCount == 0) {
return 0;
}
const int sum = std::accumulate(_items.begin(), _items.end(), 0); // Error: std::queue doesn't have these methods
return sum / itemCount;
}
Any ideas?
I'm not sure about std::deque. Does it work efficiently, and should I use it for this task, or is there something better?
P.S.: ITEMS_LIMIT in my case is about 100-500 items.
The data structure you're looking for is a circular buffer. There is an implementation in the Boost library, however in this situation since it doesn't seem you need to remove items you can easily implement one using a std::vector or std::array.
You will need to keep track of the number of elements in the vector so far so that you can average correctly until you reach the element limit, and also the current insertion index which should just wrap when you reach that limit.
Using an array or vector will allow you to benefit from having a fixed element limit, as the elements will be stored in a single block of memory (good for fast memory access), and with both data structures you can make space for all elements you need on construction.
If you choose to use a std::vector, make sure to use the 'fill' constructor (http://www.cplusplus.com/reference/vector/vector/vector/), which will allow you to create the right number of elements from the beginning and avoid any extra allocations.
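A minimal sketch of that approach (the capacity value and member names here are my own; the question leaves ITEMS_LIMIT configurable):
#include <vector>
#include <numeric>
#include <cstddef>

class StatItem final
{
    static constexpr std::size_t ITEMS_LIMIT = 500;    // assumed fixed capacity
public:
    StatItem() : _items(ITEMS_LIMIT, 0) {}              // 'fill' constructor: one allocation, up front
    void Reset() { _count = 0; _next = 0; }
    void Insert(int val)
    {
        _items[_next] = val;
        _next = (_next + 1) % ITEMS_LIMIT;               // wrap the insertion index
        if (_count < ITEMS_LIMIT) ++_count;              // saturate once the buffer is full
    }
    int GetAverage() const
    {
        if (_count == 0) return 0;
        // only the first _count slots hold valid data until the buffer fills up
        const int sum = std::accumulate(_items.begin(), _items.begin() + _count, 0);
        return sum / static_cast<int>(_count);
    }
private:
    std::vector<int> _items;
    std::size_t _next = 0;    // where the next value is written
    std::size_t _count = 0;   // how many slots hold valid data
};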

stackoverflow when cloning graph recursively

I'm getting an error "AddressSanitizer .. stackoverflow in operator new (unsigned long)" using this version of the code, where I use copy->neighbors.push_back
class Node {
public:
int val;
vector<Node*> neighbors;
Node() {}
Node(int _val, vector<Node*> _neighbors) {
val = _val;
neighbors = _neighbors;
}
};
unordered_map<Node*, Node*> copies;
Node* cloneGraph(Node* node) {
if(!node) return node;
if(copies.find(node)==copies.end()){
Node *copy = new Node(node->val,{});
for(auto neighbor:node->neighbors){
copy->neighbors.push_back(cloneGraph(neighbor));//stackoverflow
}
copies[node]= copy;
}
return copies[node];
}
But it works with this version, where I use copies[node]->neighbors.push_back. Why is this happening?
The only difference is using a reference to an element of the global map (copies[node]) versus a local pointer (copy):
Node* cloneGraph(Node* node) {
if(!node) return node;
if(copies.find(node)==copies.end()){
copies[node] = new Node(node->val,{});
for(auto neighbor:node->neighbors){
copies[node]->neighbors.push_back(cloneGraph(neighbor));
}
}
return copies[node];
}
In your first implementation, you are creating a new Node on each recursive call, which is pushed to the stack. Whereas in your second implementation it is being placed in a map that is not part of the local recursion variables (it looks like a global variable), so the stack does not need to keep track of the newly created nodes.
When a recursive function causes a stack overflow, one of the first things you should look for is infinite recursion.
Consider a simple graph with two nodes: A is a neighbor of B, and B is a neighbor of A (pretty standard for non-directed graphs). What happens when you call cloneGraph(&A)?
Node A is not in the map, so a clone is made.
As part of the cloning process, cloneGraph(&B) is called.
So what happens next?
Node B is not in the map, so a clone is made.
As part of the cloning process, cloneGraph(&A) is called.
OK, back to where we started. This could get ugly if the recursion continues. So the big question is
At this point, is A in the map?
Using the first version of the code, it is not. So the recursion repeats until the stack overflows. Using the second version of the code, it is, so the recursion ends at this point.
In the first version you have a recursion that can produce an infinite loop for a graph with cycles. Note how the condition for entering to a deeper level of the recursion is that a node is not found in the map copies, but this map is only updated after the whole recursion finishes.
If your graph is A->B and B->A, then a call to cloneGraph(&A) will call cloneGraph(&B), and this will call cloneGraph(&A) and so on indefinitely until the call stack doesn't have space for any more.
Think carefully about your algorithm. Presumably your graph has cycles.
As your first version only adds the newly created node to copies after recursing into cloneGraph, the next call will try to clone the same node again, which will recurse, and so on.
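Putting that into code, a sketch of the minimal change to the first version is to register the clone in the map before recursing into the neighbors (which is what the second version effectively does):
Node* cloneGraph(Node* node) {
    if (!node) return node;
    if (copies.find(node) == copies.end()) {
        Node* copy = new Node(node->val, {});
        copies[node] = copy;                 // record the clone *before* recursing, breaking the cycle
        for (auto neighbor : node->neighbors) {
            copy->neighbors.push_back(cloneGraph(neighbor));
        }
    }
    return copies[node];
}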

Nested shared_ptr destruction causes stack overflow

This is a more of design problem (I know why this is happening, just want to see how people deal with it). Suppose I have a simple linked list struct:
struct List {
int head;
std::shared_ptr<List> tail;
};
The shared_ptr enables sharing of sublists between multiple lists. However, when the list gets very long, a stack overflow might happen in its destructor (caused by recursive releases of shared_ptrs). I've tried using an explicit stack, but that gets very tricky since a tail can be owned by multiple lists. How can I design my List to avoid this problem?
UPDATE: To clarify, I'm not reinventing the wheel (std::forward_list). The List above is only a simplified version of the real data structure. The real data structure is a directed acyclic graph, which if you think about it is just a lot of linked lists with shared tails/heads. It's usually prohibitively expensive to copy the graph, so data sharing is necessary.
UPDATE 2: I'm thinking about explicitly traversing down the pointer chain and std::move as I go. Something like:
~List()
{
auto p = std::move(tail);
while (p->tail != nullptr && p->tail.use_count() == 1) {
// Some other thread may start pointing to `p->tail`
// and increases its use count before the next line
p = std::move(p->tail);
}
}
This seems to work in a single thread, but I'm worried about thread safety.
If you're having problems with stack overflows on destruction for your linked data structure, the easiest fix is just to implement deferred cleanup:
struct Graph {
std::shared_ptr<Graph> p1, p2, p3; // some pointers in your datastructure
static std::list<std::shared_ptr<Graph>> deferred_cleanup;
~Graph() {
deferred_cleanup.emplace_back(std::move(p1));
deferred_cleanup.emplace_back(std::move(p2));
deferred_cleanup.emplace_back(std::move(p3));
}
static void cleanup() {
while (!deferred_cleanup.empty()) {
std::list<std::shared_ptr<Graph>> tmp;
std::swap(tmp, deferred_cleanup);
tmp.clear(); } }
};
and you just need to remember to call Graph::cleanup(); periodically.
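For example (the chain length here is arbitrary, and note that the static member still needs an out-of-line definition somewhere):
#include <list>
#include <memory>

std::list<std::shared_ptr<Graph>> Graph::deferred_cleanup;   // definition of the static member

int main() {
    auto root = std::make_shared<Graph>();
    auto current = root;
    for (int i = 0; i < 1000000; ++i) {      // build a deep chain through p1
        current->p1 = std::make_shared<Graph>();
        current = current->p1;
    }
    current.reset();     // drop the extra reference to the last node
    root.reset();        // each ~Graph only moves its children into deferred_cleanup
    Graph::cleanup();    // tears the whole chain down iteratively, no deep recursion
}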
This should do it. With a little work it can easily be made thread-safe (a little locking/atomics in the deleter engine).
Synopsis:
The shared_ptrs to the nodes are created with a custom deleter which, rather than deleting the node, hands it off to a deleter engine.
The engine's implementation is a singleton. Upon being notified of a new node to be deleted, it adds the node to a delete queue. If there is no node being deleted, the nodes in the queue are deleted in turn (no recursion).
While this is happening, new nodes arriving in the engine are simply added to the back of the queue. The in-progress delete cycle will take care of them soon enough.
#include <memory>
#include <deque>
#include <stdexcept>
#include <iostream>
struct node;
struct delete_engine
{
void queue_for_delete(std::unique_ptr<node> p);
struct impl;
static impl& get_impl();
};
struct node
{
node(int d) : data(d) {}
~node() {
std::cout << "deleting node " << data << std::endl;
}
static std::shared_ptr<node> create(int d) {
return { new node(d),
[](node* p) {
auto eng = delete_engine();
eng.queue_for_delete(std::unique_ptr<node>(p));
}};
}
int data;
std::shared_ptr<node> child;
};
struct delete_engine::impl
{
bool _deleting { false };
std::deque<std::unique_ptr<node>> _delete_list;
void queue_for_delete(std::unique_ptr<node> p)
{
_delete_list.push_front(std::move(p));
if (!_deleting)
{
_deleting = true;
while(!_delete_list.empty())
{
_delete_list.pop_back();
}
_deleting = false;
}
}
};
auto delete_engine::get_impl() -> impl&
{
static impl _{};
return _;
}
void delete_engine::queue_for_delete(std::unique_ptr<node> p)
{
get_impl().queue_for_delete(std::move(p));
}
struct tree
{
std::shared_ptr<node> root;
auto add_child(int data)
{
if (root) {
throw std::logic_error("already have a root");
}
auto n = node::create(data);
root = n;
return n;
}
};
int main()
{
tree t;
auto pc = t.add_child(6);
pc = pc->child = node::create(7);
}
std::shared_ptr (and before that, boost::shared_ptr) is and was the de-facto standard for building dynamic systems involving massive DAGs.
In reality, DAGs don't get that deep (maybe 10 or 12 algorithms deep in your average FX pricing server?) so the recursive deletes are not a problem.
If you're thinking of building an enormous DAG with a depth of 10,000 then it might start to be a problem, but to be honest I think it will be the least of your worries.
Re the analogy of a DAG being like a linked list... not really. Since it's acyclic, all your pointers pointing "up" will need to be shared_ptrs, and all your back-pointers (e.g. binding message subscriptions to sink algorithms) will need to be weak_ptrs, which you lock as you fire the message.
Disclaimer: I've spent a lot of time designing and building information systems based on directed acyclic graphs of parameterised algorithm components, with a great deal of sharing of common components (i.e. the same algorithm with the same parameters).
Performance of the graph is never an issue. The bottlenecks are:
initially building the graph when the program starts - there's a lot of noise at that point, but it only happens once.
getting data into and out of the process (usually a message bus). This is invariably the bottleneck as it involves I/O.