The following code inherits from std::priority_queue and provides a clear() member function that calls the internal std::vector's clear():
#include <iostream>
#include <queue>
using namespace std;

template<class type>
struct mypq : public priority_queue<type> {
    void clear() {
        this->c.clear();
    }
};

mypq<int> pq;

int main() {
    for (int i = 0; i < 10; ++i)
        pq.push(i);
    pq.clear();
    for (int j = -5; j < 0; ++j)
        pq.push(j);
    while (!pq.empty()) {
        cerr << pq.top() << endl;
        pq.pop();
    }
}
When I test it with g++, MSVC++, and clang, it produces the expected output:
-1
-2
-3
-4
-5
But I haven't seen any guarantee for this, i.e. that clearing the internal vector will have the same effect as calling pop() until the priority_queue is empty. Although I know other ways to clear it, such as swapping with or assigning an empty priority_queue, I think that if this code works correctly it would be more efficient, since the allocated memory in the vector is reusable. So I wonder: is this code portable, or will it not always work?
But I haven't seen any guarantee for this, i.e. that clearing the internal vector will have the same effect as calling pop() until the priority_queue is empty.
Because that's not the same thing. A std::priority_queue is a container adaptor specifically designed to keep its elements ordered according to a strict weak ordering. If you don't specify the type of container the queue will use (which you don't in the example), the default is std::vector.
So calling pop() on a non-empty priority queue will have the effect of removing the top element from the queue, while calling clear() on the underlying container will remove all elements from the queue, not just the topmost.
Although I know other ways to clear it, such as swapping with or assigning an empty priority_queue, I think that if this code works correctly it would be more efficient, since the allocated memory in the vector is reusable. So I wonder: is this code portable, or will it not always work?
According to the reference, the underlying c member object is protected, so accessing it the way you are should be guaranteed across compilers; that is, calling this->c.clear(); should be portable (anecdotally, it works on g++ 4.2.1 on an older version of OpenBSD).
As far as efficiency is concerned, it would somewhat depend. Doing something like this->c.clear(); vs. q = priority_queue <int>(); might not be that different in terms of memory usage or complexity, though you would have to test it on the different platforms to verify. However, doing something like this->c.clear(); vs. while(!q.empty()) { q.pop(); }, would be more efficient.
In terms of memory efficiency, the pop function of the priority queue calls the underlying container's pop_back function, and neither pop_back nor clear affects the underlying vector's capacity, so there's not really any "savings" to be had in that way; though with this access, you could call reserve() or shrink_to_fit() on the vector if you had a specific need to adjust its capacity.
Just remember that the priority queue's pop function calls the underlying container's pop_back function, and calling pop_back on an empty container is undefined behavior.
Hope that can help.
A very good question. While I can't seem to find any strict guarantee that it is a correct method, there are some reasons to think that it is.
For example, consider the docs for operator=:
Copy assignment operator. Replaces the contents with a copy of the contents of other. Effectively calls c = other.c;. (implicitly declared)
Since the other queue may be of a different size, this essentially means that there is no internal state that is dependent on size. Moreover, it also implies that assigning an empty queue essentially replaces the container with an empty one and does nothing else.
With that in mind, and the fact that a reasonable implementation of a priority queue would hardly need to maintain any state except for the size of the queue, I believe it can be safely assumed that clearing the underlying container is a valid way to empty the queue.
Related
I have a C++ stack named pages.
As I have no clear() function to clear a stack, I wrote the following code:
stack<string> pages;
// here is some operation
// now clearing the stack
while (!pages.empty())
    pages.pop();
Now my question: is there a better efficient way to clear the stack?
In general you can't clear copying containers in O(1) because you need to destroy the copies. It's conceivable that a templated copying container could have a partial specialization that cleared in O(1) time that was triggered by a trait indicating the type of contained objects had a trivial destructor.
If you want to avoid the loop:
pages=stack<std::string>();
or
stack<std::string>().swap(pages);
I don't think there is a more efficient way. A stack is a well defined data type, specifically designed to operate in a LIFO context, and not meant to be emptied at once.
For this you could use vector or deque (or list), which are basically the underlying containers; a stack is in fact a container adaptor. Please see this C++ Reference for more information.
If you don't have a choice, and you have to use stack, then there is nothing wrong with the way you do it. Either way, the elements have to be destroyed if they were constructed, whether you assign a new empty stack or pop all elements out or whatever.
I suggest to use a vector instead; it has the operations you need indeed:
size (or resize)
empty
push_back
pop_back
back
clear
It is just more convenient, so you can use the clear method. Not sure if using vector is really more performant; the stack operations are basically the same.
What about assigning a new empty stack to it?
pages = stack<string>();
It won't remove elements one by one and it uses move assignment so it has the potential to be quite fast.
Make it into a smart pointer (assuming pages is a shared_ptr<stack<string>>):
pages.reset(); // drop the old stack entirely
// or replace it with a fresh one:
pages = make_shared<stack<string>>();
What about subclassing std::stack and implementing a simple clear() method like this, accessing the underlying container c?
struct clearable_stack : std::stack<std::string> {
public:
    void clear() { c.clear(); }
};
The following piece of code compiles and runs just fine, but I have been reading up on reinterpret_cast and I can't really make out if it's standard compliant and is portable. In my head it should be since we explicitly specify the underlying container of the priority_queue but I haven't been able to get a straight answer so SO-wizards might have some insight into this piece.
What it does is basically create a priority_queue that deals with integers using a vector. It then reinterpret_cast's that queue into a vector-pointer so that the elements of the queue can be iterated over (since priority_queue does not include that functionality itself).
#include <iostream>
#include <vector>
#include <queue>

int main() {
    std::priority_queue<int, std::vector<int>> pq;
    pq.push(100);
    pq.push(32);
    pq.push(1);
    auto o = reinterpret_cast<std::vector<int> *>(&pq);
    for (std::vector<int>::iterator it = o->begin(); it != o->end(); it++) {
        std::cout << *it << std::endl;
    }
    return 0;
}
The standard makes no guarantees about the layout of the std::priority_queue class. If this works on your implementation, it must be because the std::vector is stored at the beginning of the std::priority_queue object, but this certainly cannot be relied upon.
The proper thing to do is write your own variant of std::priority_queue (the <algorithm> header already contains the necessary heap algorithms, such as std::push_heap) or to derive a class from std::priority_queue, which gives you access to the protected member c which refers to the underlying container.
The following piece of code compiles and runs just fine
Phew! That was lucky.
but I have been reading up on reinterpret_cast and I can't really make out if it's standard compliant
Yes, it is.
and is portable.
Almost never. Certainly not the way you're using it. It's one of those 'if you don't know exactly how it works, don't touch it' things.
You can portably reinterpret_cast an A* to a void*, and back to an A*... and that's about it (OK, there are a few more use cases, but they are specific, and you need to know them before you play with this particular stick of dynamite).
In my head it should be since we explicitly specify the underlying container of the priority_queue but I haven't been able to get a straight answer so SO-wizards might have some insight into this piece.
std::priority_queue is a specific adaptation of the underlying container. Because the correct operation of the queue depends upon you not tampering with the underlying container, the queue's interface deliberately hides it from you.
What it does is basically create a priority_queue that deals with integers using a vector. It then reinterpret_cast's that queue into a vector-pointer so that the elements of the queue can be iterated over (since priority_queue does not include that functionality itself).
What it really does is invoke undefined behaviour which makes it no longer possible to reason about the outcome of any part of your program. If it worked on your environment, that's a shame. Because now you'll be tempted to release it into mine, where it probably won't, or might for a while - while silently polluting my memory until... BOOM! my process core dumps and we're all clueless as to why.
I need a FIFO structure that supports indexing. Each element is an array of data that is saved off a device I'm reading from. The FIFO has a constant size, and at start-up each element is zeroed out.
Here's some pseudo code to help understand the issue:
Thread A (Device Reader):
1. Lock the structure.
2. Pop oldest element off of FIFO (don't need it).
3. Read next array of data (note this is a fixed size array) from the device.
4. Push new data array onto the FIFO.
5. Unlock.
Thread B (Data Request From Caller):
1. Lock the structure.
2. Determine request type.
3. if (request = one array) memcpy over the latest array saved (LIFO).
4. else memcpy over the whole FIFO to the user as a giant array (caller uses arrays).
5. Unlock.
Note that the FIFO shouldn't be changed in Thread B, the caller should just get a copy, so data structures where pop is destructive wouldn't necessarily work without an intermediate copy.
My code also has a boost dependency already and I am using a lockfree spsc_queue elsewhere. With that said, I don't see how this queue would work for me here given the need to work as a LIFO in some cases and also the need to memcpy over the entire FIFO at times.
I also considered a plain std::vector, but I'm worried about performance when I'm constantly pushing and popping.
One point not clear in the question is the compiler target, whether or not the solution is restricted to partial C++11 support (like VS2012), or full support (like VS2015). You mentioned boost dependency, which lends similar features to older compilers, so I'll rely on that and speak generally about options on the assumption that boost may provide what a pre-C++11 compiler may not, or you may elect C++11 features like the now standardized mutex, lock, threads and shared_ptr.
There's no doubt in my mind that the primary tool for the FIFO (which, as you stated, may occasionally need LIFO operation) is the std::deque. Even though the deque supports reasonably efficient dynamic expansion and shrinking of storage, contrary to your primary requirement of a static size, its main feature is the ability to function as both FIFO and LIFO with good performance in ways vectors can't as easily manage. Internally, most implementations provide what may be analogized as a collection of smaller vectors which are marshalled by the deque to function as if a single vector container (for subscripting), while allowing for double-ended pushing and popping with efficient memory management. It can be tempting to use a vector, employing a circular buffer technique for fixed sizes, but any performance improvement is minimal, and deque is known to be reliable.
Your point regarding destructive pops isn't entirely clear to me. It could mean several things. std::deque offers back and front as a peek at what's at the ends of the deque, without destruction. In fact, that's the only way to look, because deque's pop_front and pop_back only remove elements; they don't provide access to the element being popped. Taking an element and popping it is a two-step process on std::deque. An alternate meaning, however, is that a read-only requester needs to pop strictly as a means of navigation, not destruction, which is not really a pop but a traversal. As long as the structure is under lock, that is easily managed with iterators or indexes. Or, it could also mean you need an independent copy of the queue.
Assuming some structure representing device data:
struct DevDat { .... };
I'm immediately faced with that curious question: should this not be a generic solution? It doesn't matter for the sake of discussion, but it seems the intent is an odd combination of application-specific operation and a generalized thread-safe stack "machine", so I'll suggest a generic solution which is easily translated otherwise (that is, I suggest template classes, but you could easily choose non-templates if preferred). These pseudo-code examples are sparse, just illustrating container layout ideas and proposed concepts.
class SafeStackBase {
protected:
    std::mutex sync;
};

template <typename Element>
class SafeStack : public SafeStackBase {
public:
    typedef std::deque< Element > DeQue;
private:
    DeQue que;
};
SafeStack could handle any kind of data in the stack, so that detail is left for Element declaration, which I illustrate with typedefs:
typedef std::vector< DevDat > DevArray;
typedef std::shared_ptr< DevArray > DevArrayPtr;
typedef SafeStack< DevArrayPtr > DeviceQue;
Note I'm proposing vector instead of array because I don't like the idea of having to choose a fixed size, but std::array is an option, obviously.
The SafeStackBase is intended for code and data that isn't aware of the user's data type, which is why the mutex is stored there. It could easily be part of the template class, but the practice of placing non-type-aware data and code in a non-template base helps reduce code bloat when possible (functions which don't use Element, for example, need not be expanded in template instantiations).

I suggest the DevArrayPtr so that the arrays can be "plucked out" of the queue without copying the arrays, then shared and distributed outside the structure under shared_ptr's shared ownership. This is a matter of illustration, and does not adequately deal with questions regarding the content of those arrays. That could be managed by DevDat, which could marshal reading of the array data while limiting writing of the array data to an authorized friend (a write-accessor strategy), such that Thread B (a reader only) is not carelessly able to modify the content. In this way it's possible to provide these arrays without copying data... just return a copy of the DevArrayPtr for communal access to the entire array. This also supports returning a container of DevArrayPtrs, supporting Thread B point 4 (copy the whole FIFO to the user), as in:
typedef std::vector< DevArrayPtr > QueArrayVec;
typedef std::deque< DevArrayPtr > QueArrayDeque;
typedef std::array< DevArrayPtr, 12 > QueArrays;
The point is that you can return any container you like, which is merely an array of pointers to the internal std::array< DevDat >, letting DevDat control read/write authorization by requiring some authorization object for writing, and if this copy should be operable as a FIFO without potential interference with Thread A's write ownership, QueArrayDeque provides the full feature set as an independent FIFO/LIFO structure.
This brings up an observation about Thread A. There you state lock is step 1, while unlock is step 5, but I submit that only steps 2 and 4 are really required under lock. Step 3 can take time, and even if you assume that is a short time, it's not as short as a pop followed by a push. The point is that the lock is really about controlling the FIFO/LIFO queue structure, and not about reading data from the device. As such, that data can be fashioned into DevArray, which is THEN provided to SafeStack to be pop/pushed under lock.
Assume code inside SafeStack:
typedef std::lock_guard< std::mutex > Lock; // I use typedefs a lot

void StuffIt( const Element & e )
{
    Lock l( sync );
    que.pop_front();
    que.push_back( e );
}
StuffIt does that simple, generic job of popping the front and pushing the back, under lock. Since it takes a const Element &, step 3 of Thread A is already done. Since Element, as I suggest, is a DevArrayPtr, this is used with:
DeviceQue dq;
auto p = std::make_shared<DevArray>();
dq.StuffIt( p );
How the DevArray is populated is up to its constructor or some other function; the point is that a shared_ptr is used to transport it.
This brings up a more generic point about SafeStack. Obviously there is some potential for standard access functions, which could mimic std::deque, but the primary job for SafeStack is to lock/unlock for access control, and do something while under lock. To that end, I submit a generic functor is sufficient to generalize the notion. The preferred mechanics, especially with respect to boost, is up to you, but something like (code inside SafeStack):
bool LockedFunc( std::function< bool( DeQue & ) > f )
{
    Lock l( sync );
    return f( que );
}
Or whatever mechanics you like for calling a functor that takes the deque as a parameter. This means you could fashion callbacks with complete access to the deque (and its interface) while under lock, or provide functors or lambdas which perform specific tasks under lock.
The design point is to make SafeStack small, focused on that minimal task of doing a few things under lock, taking most any kind of data in the queue. Then, using that last point, provide the array under shared_ptr to provide the service of Thread B steps 3 and 4.
To be clear about that, keep in mind that whatever is done to the shared_ptr to copy it is similar to what can be done to simple POD types, like ints, with respect to containers. That is, one could loop through the elements of the deque, fashioning a copy of those elements into another container, in the same code which would do that for a container of integers (remember, it's a member function of a template; that type is generic). The resulting work is only copying pointers, which is less effort than copying entire arrays of data.
Now, step 4 isn't QUITE clear to me. It appears to say that you need to return a DevArray which is the accumulated content of all entries in the queue. That's trivial to arrange, but it might work a little better with a vector (as that's dynamically expandable), but as long as the std::array has sufficient room, it's certainly possible.
However, the only real difference between such an array and the queue's native "array of arrays" is how it is traversed (and counted). Returning one Element (step 3) is quick, but since step 4 is indicated under lock, that's a bit more than most locked functions should really do if they don't have to.
I'd suggest SafeStack should be able to provide a copy of que (a DeQue typedef), which is quick. Then, outside of the lock, Thread B has a copy of the DeQue (a std::deque< DevArrayPtr >) to fashion into its own "giant array".
Now, more about that array. To this point I've not adequately dealt with marshalling it. I've just suggested that DevDat does that, but this may not be adequate. Certainly the content of the std::array or std::vector conveying a collection of DevDats could be written. Perhaps that deserves its own outer structure. I'll leave that to you, because the point I've made is that SafeStack is now focused on its small task (lock/access/unlock) and can take anything which can be owned by a shared_ptr (or PODs and copyable objects). In the same way SafeStack is an outer shell marshalling a std::deque with a mutex, some similar outer shell could marshal read-only access to the std::vector or std::array of DevDats, with a kind of write accessor used by Thread A. That could be as simple as something that only allows construction of the std::array to create its content, after which read-only access could be all that's provided.
I would suggest you to use boost::circular_buffer which is a fixed size container that supports random access iteration, constant time insert and erase at the beginning and end. You can use it as a FIFO with push_back(), read back() for the latest data saved and iterate over the whole container via begin(), end() or using operator[].
Note that at start-up the elements are not zeroed out, though: the container is empty at first, and insertion increases its size until it reaches the maximum. In my opinion that is an even more convenient interface.
I have a class that sometimes needs to use a member of type deque<int> if an argument is passed to the constructor, and if it isn't, the member will not be used. What is the best way to deal with this situation efficiently and stylistically?
I'd like to also mention that objects of this class should be able to be passed to the same function, though removing the ability for storage in the same container is fine. I have never done polymorphism (as hinted at in the comments), but I think I am going to read about it and try it out.
My two ideas:
- Keep the member variable as a deque<int>, which will be stored as an empty deque<int> I assume.
- Use a pointer to a deque<int>, only calling new if it is needed.
You can also keep the deque as a member and set a pointer to it when initializing it for use:
deque<int> queue_;
deque<int> *ptr_;

// in the constructor's initializer list, either:
ptr_(NULL)     // queue not used
// or:
ptr_(&queue_)  // queue in use
However, speed-wise, if you most often do not use the queue, the new is likely going to be faster, since by default all you'd do is set a NULL in a pointer. If it is used 50/50, then my method is probably one of the fastest, because you do not need to handle more heap.
If performance is an issue, I would probably use option 1 along with an "is_initialized" boolean flag:
class A
{
    bool is_initialized;
public:
    A(bool used = false) : is_initialized(used)
    {
    }
private:
    deque<int> _d;
};
Yes, you may be able to save a little bit of memory with option 2 when the deque is not used, but if it is used you incur the overhead of dereferencing a pointer.
What you are looking for is exactly boost::optional<deque<int>>. Nothing else documents your intention more clearly and correctly here.
Does anyone know why std::queue, std::stack, and std::priority_queue don't provide a clear() member function? I have to fake one like this:
std::queue<int> q;
// time passes...
q = std::queue<int>(); // equivalent to clear()
IIRC, clear() is provided by everything that could serve as the underlying container. Is there a good reason to not have the container adaptors provide it?
Well, I think this is because clear was not considered a valid operation on a queue, a priority_queue, or a stack (by the way, deque is not an adaptor but a container).
The only reason to use the container adaptor queue instead of the container deque is to make it clear that you are performing only queue operations, and no other operations. (from the SGI page on queue)
So when using a queue, all you can do is push/pop elements; clearing the queue can be seen as a violation of the FIFO concept. Consequently, if you need to clear your queue, maybe it's not really a queue and you would be better off using a deque.
However, this conception of things is a little narrow-minded, and I think clearing the queue as you do is fair enough.
Deque has clear(). See, e.g., http://www.cplusplus.com/reference/stl/deque/clear.html.
However, queue does not. But why would you choose queue over deque, anyway?
The only reason to use the container adaptor queue instead of the container deque is to make it clear that you are performing only queue operations, and no other operations. (http://www.sgi.com/tech/stl/queue.html)
So I guess clear() is not a queue operation, then.
I'd say it's because container adaptors are not containers.
You CAN clear a queue (and std::stack and std::priority_queue), as long as you inherit from it. The container is intentionally left protected to allow this.
#include <queue>
using namespace std;

class clearable_queue : public queue<int>
{
public:
    void clear()
    {
        // the container 'c' in queues is intentionally left protected
        c.clear();
    }
};

int main(int argc, char** argv)
{
    clearable_queue a;
    a.clear();
}
I think it depends on the implementation: until recently the Microsoft STL didn't have clear() on several containers (it does now, e.g. this quick Google result).
However, clear() is often simply a call to erase(begin(), end()), so you can implement your own equivalent and use that instead.
I think the standard refers to clear as erasing over an iterator range, so the above is what most implementations will provide (e.g. Dinkumware's).
std::queue, std::stack, and std::priority_queue are container adaptors and only provide a small number of member functions to access the underlying container.
You can clear the underlying container, so long as you can access it. To do this, create the underlying container yourself and pass it to the adaptor's constructor. For example:
std::deque< int > d;
std::queue< int > q( d );
... time passes ...
d.clear();
(Note, though, that the adaptor stores its own copy of the container, so clearing d here does not touch the elements q has accumulated since it was constructed.)
Edit: additional info
I should also have warned you to tread carefully here, as calling methods on the underlying container may break assumptions made by the adaptor. In that respect, the way you are currently clearing the queue seems preferable.