I want to delete all the elements from the std::vector
for (Entity * cb : cbs)
{
delete cb;
}
Is there a better way to do this?
std::for_each (from <algorithm>) would be the alternative to an explicit loop:
std::for_each(cbs.begin(), cbs.end(), [](Entity *cb){
delete cb;
});
You clarified in the comments you have a vector of raw pointers and want to delete all objects pointed by them.
So how do you do it?
You don't!
In C++ we don't do manual memory management (raw new/delete). It's buggy (any exception will make you leak memory), error-prone, and confusing (who is supposed to delete the objects?). Instead we use RAII, and for that we have smart pointers. So use std::unique_ptr or std::shared_ptr and everything is correctly managed for you.
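For illustration, a minimal sketch of the same container built on std::unique_ptr (assuming C++14 and an Entity type as in the question):
#include <memory>
#include <vector>
struct Entity { /* ... */ };
int main()
{
    std::vector<std::unique_ptr<Entity>> cbs;
    cbs.push_back(std::make_unique<Entity>());
    cbs.push_back(std::make_unique<Entity>());
    // No delete loop: every Entity is destroyed automatically when the
    // vector is cleared or goes out of scope.
    cbs.clear();
}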
The operation could be anything, as long as it is the same for all elements.
A range for loop like yours is the idiomatic way to do it:
for (auto&& elem : vector)
{
foo(elem);
}
Use shared or unique pointers. The objects are then deleted automatically when the vector is cleared (or in the vector's destructor, in case you don't clear it).
#include <iostream>
#include <memory>
#include <vector>
class Test
{
private:
size_t m_index;
public:
Test(size_t index)
: m_index(index)
{
}
void PrintIndex()
{
std::cout << m_index << std::endl;
}
~Test()
{
std::cout << "destructor invoked for " << m_index << std::endl;
}
};
int main()
{
std::vector<std::unique_ptr<Test>> v;
for (size_t i = 0; i < 5; ++i)
{
v.push_back(std::make_unique<Test>(i)); // make_unique avoids a leak if push_back throws
}
for (auto& t : v)
{
t->PrintIndex();
}
//v.clear(); // uncommenting this destroys all Test objects here instead of at the end of main
return 0;
}
As for executing other arbitrary operations on every element as mentioned in the comment:
There are other ways to achieve this result, but there's nothing that's better than the range-based for loop imho; it's just three simple lines (plus the loop body) after all.
I have been trying to dive deeper into the limitations of pointers to see how they affect the program behind the scenes. One thing my research has led me to is that memory allocated through pointers must be deleted in a language like C++, otherwise the data will still be in memory.
My question pertains to accessing the data after a function's lifecycle ends. If I create a pointer variable within a function, and then the function comes to a proper close, how would the data be accessed? Would it actually be just garbage taking up space, or is there supposed to be a way to still reference it without having stored the address in another variable?
There's no automatic garbage collection. If you lose the handle (pointer, reference, index, ...) to your resource, your resource will live ad vitam æternam.
If you want your resources to cease to live when their handle goes out of scope, RAII and smart pointers are the tool you need.
If you want your resources to continue to live after their handle goes out of scope, you need to copy the handle and pass it around.
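As a minimal sketch of both cases (the names here are only illustrative, assuming C++14):
#include <memory>
#include <iostream>
struct Data { int value = 42; };
// Resource dies with its handle: the unique_ptr frees the Data at the
// closing brace, so nothing leaks and nothing can be accessed afterwards.
void local_only()
{
    auto p = std::make_unique<Data>();
    std::cout << p->value << '\n';
}   // Data destroyed here
// Resource outlives the function: the handle is moved out to the caller,
// which now owns the Data and controls when it is freed.
std::unique_ptr<Data> hand_over()
{
    auto p = std::make_unique<Data>();
    return p;   // ownership transferred to the caller
}
int main()
{
    local_only();
    auto d = hand_over();
    std::cout << d->value << '\n';
}   // d destroyed here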
With the standard smart pointers std::unique_ptr and std::shared_ptr, memory is freed when the pointer goes out of scope. At the end of the scope the object is immediately destroyed and freed, and there is no way to access it anymore, unless you move/copy the pointer out to a bigger scope, where it will be deleted instead.
But it is not so difficult to implement a lazy garbage collector. As before, you use smart pointers everywhere, only a lazy variant of them. Now when a pointer goes out of scope its object is not immediately destroyed and freed; instead it is handed over to a lazy garbage collector, which destroys and frees it later in a separate thread. Exactly this lazy behaviour is what I implemented in my code below.
I implemented the following code from scratch, just for fun and as a demo for you; there is no big reason not to use the standard greedy freeing of std::unique_ptr and std::shared_ptr. There is one very important use case, though: std::shared_ptr constructs objects at well-known points in the code, when you call the constructor, so the construction time is predictable, but it destroys objects at different, unpredictable points in code and time, because there are shared copies of the pointer. Thus you may get long destruction delays at unpredicted points in time, which can hurt realtime, high-performance code. The destruction itself might also take too long. Lazy deleting moves destruction into a separate thread where it can proceed at its own pace.
Although the smart pointer is lazily disposed of at scope end, for some nanoseconds (or even microseconds) you may still have access to its not-yet-destroyed/not-yet-freed memory; of course this time is not guaranteed. This just means that the real destruction can happen much later than the scope ends, hence the name lazy garbage collector. You can even tweak this kind of lazy garbage collector so that it really deletes objects, say, 1 millisecond after their smart pointers have been destroyed.
Real garbage collectors do a similar thing: they free objects much later in time, and usually do it automatically by finding bytes in memory that look like real pointers into the heap.
There is a Test() function in my code that shows how my lazy variants of the standard pointers are used. Also, when the code is run, the console output shows something like:
Construct Obj( 592)
Construct Obj( 1264)
LazyDeleter Dispose( 1264)
LazyDeleter Dispose( 592)
Test finished
Destroy ~Obj( 1264)
Destroy ~Obj( 592)
Here the number in parentheses is an id of the object (the lower bits of its pointer). You can see that disposal and destruction happen in exactly the opposite order to construction. Disposal to the lazy garbage collector happens before the test finishes, while the real destruction happens later, in a separate thread, after the test finishes.
#include <deque>
#include <atomic>
#include <mutex>
#include <thread>
#include <array>
#include <memory>
#include <algorithm> // std::min, std::move(first, last, dest)
#include <cstdint>   // uintptr_t
#include <iostream>
#include <iomanip>
using DelObj = void (void *);
void Dispose(void * obj, DelObj * del);
template <typename T>
struct LazyDeleter {
void operator ()(T * ptr) const {
struct SDel { static void Del(void * ptr) { delete (T*)ptr; } };
std::cout << "LazyDeleter Dispose(" << std::setw(5) << uintptr_t(ptr) % (1 << 16) << ")" << std::endl;
Dispose(ptr, &SDel::Del);
}
};
template <typename T>
using lazy_unique_ptr = std::unique_ptr<T, LazyDeleter<T>>;
template <typename T>
std::shared_ptr<T> make_lazy_shared(T * ptr) {
return std::shared_ptr<T>(ptr, LazyDeleter<T>{});
}
void Dispose(void * obj, DelObj * del) {
class AtomicMutex {
public:
auto Locker() { return std::lock_guard<AtomicMutex>(*this); }
void lock() { while (f_.test_and_set(std::memory_order_acquire)) {} }
void unlock() { f_.clear(std::memory_order_release); }
auto & Flag() { return f_; }
private:
std::atomic_flag f_ = ATOMIC_FLAG_INIT;
};
class DisposeThread {
struct Entry {
void * obj = nullptr;
DelObj * del = nullptr;
};
public:
DisposeThread() : thr_([&]{
size_t constexpr block = 32;
while (!finish_.load(std::memory_order_relaxed)) {
while (true) {
std::array<Entry, block> cent{};
size_t cent_cnt = 0;
{
auto lock = mux_.Locker();
if (entries_.empty())
break;
cent_cnt = std::min(block, entries_.size());
std::move(entries_.begin(), entries_.begin() + cent_cnt, cent.data());
entries_.erase(entries_.begin(), entries_.begin() + cent_cnt);
}
for (size_t i = 0; i < cent_cnt; ++i) {
auto & entry = cent[i];
try { (*entry.del)(entry.obj); } catch (...) {}
}
}
std::this_thread::yield();
}
}) {}
~DisposeThread() {
while (!entries_.empty())
std::this_thread::yield();
finish_.store(true, std::memory_order_relaxed);
thr_.join();
}
void Add(void * obj, DelObj * del) {
auto lock = mux_.Locker();
entries_.emplace_back(Entry{obj, del});
}
private:
AtomicMutex mux_{};
std::deque<Entry> entries_;
std::atomic<bool> finish_ = false;
// thr_ must be declared last: members are constructed in declaration order,
// and the worker thread uses mux_, entries_ and finish_ as soon as it starts.
std::thread thr_;
};
static DisposeThread dt{};
dt.Add(obj, del);
}
void Test() {
struct Obj {
Obj() { std::cout << "Construct Obj(" << std::setw(5) << uintptr_t(this) % (1 << 16) << ")" << std::endl << std::flush; }
~Obj() { std::cout << "Destroy ~Obj(" << std::setw(5) << uintptr_t(this) % (1 << 16) << ")" << std::endl << std::flush; }
};
{
lazy_unique_ptr<Obj> uptr(new Obj());
std::shared_ptr<Obj> sptr = make_lazy_shared(new Obj());
auto sptr2 = sptr;
}
std::cout << "Test finished" << std::endl;
}
int main() {
Test();
}
I have a std::vector of objects being filled by de-referencing std::unique_ptrs in the push_back calls. However, when I run through a mutable range-based for loop, my modification to these objects stays local to the loop. In other words, it seems as though those objects are being treated as constant, despite the lack of a const keyword in the loop. Here is minimal code to demonstrate what I'm seeing:
#include <vector>
#include <memory>
#include <iostream>
class Item
{
public:
typedef std::unique_ptr<Item> unique_ptr;
inline static Item::unique_ptr createItem()
{
return std::unique_ptr<Item>(new Item());
}
inline int getValue() const { return _value; }
inline void setValue(const int val) { _value = val; }
private:
int _value = 0; // initialized so the "default value" in the comments below really is 0
};
int main()
{
std::vector<Item> _my_vec;
for (int i = 0; i < 5; i++)
{
Item::unique_ptr item = Item::createItem();
_my_vec.push_back(*item);
}
for (auto item : _my_vec)
{
// modify item (default value was 0)
item.setValue(10);
// Correctly prints 10
std::cout << item.getValue() << std::endl;
}
for (auto item : _my_vec)
{
// Incorrectly prints 0's (default value)
std::cout << item.getValue() << std::endl;
}
}
I suspect this has something to do with the move semantics of std::unique_ptr? But that wouldn't quite make sense because even if push_back is calling the copy constructor or something and copying the added item rather than pointing to it, the iterator is still passing over the same copies, no?
Interestingly enough, in my actual code, the class represented here by Item has a member variable that is a vector of shared pointers to objects of another class, and modifications to the objects being pointed to by those shared pointers persist between loops. This is why I suspect there's something funky with the unique_ptr.
Can anyone explain this behavior and explain how I may fix this issue while still using pointers?
When you write a range-based for loop like this:
std::vector<int> v = ...;
for(auto elt : v) {
...
}
the elements of v are copied into elt.
In your example, in each iteration, you modify the local copy of the Item and not the Item in the vector.
To fix your issue, use a reference:
for (auto& item : _my_vec)
{
item.setValue(10);
std::cout << item.getValue() << std::endl;
}
Vector of non-const objects seems to be treated as constant
If it was treated as constant, then the compiler would scream at you, because writing to a constant is treated as ill-formed and the compiler would be required to scream at you. The shown code compiles just fine, with no warnings.
I suspect that you may be referring to the fact that you don't modify the elements within the vector. That is because you modify auto item. That item is not an element of the vector, it is a copy of the item in the vector. You could refer to the item within that vector by using a reference: auto& item. Then modifications to item would be modifications to the referred element of the vector.
In our code we use both stl and MFC containers. I've encountered a case where we have a CArray of objects, where each object contains an std::vector.
After adding several objects to the CArray, so that the data in the CArray gets reallocated and copied when it reaches its current capacity, it seems like the inner vector is corrupted. When I iterate over the CArray and, for each object, iterate over its std::vector, I get a "vector iterator not dereferencable" error.
I looked at the MFC code and it uses memcpy() to copy the data after reallocating. In std::vector (I use Visual Studio) there is a member called _Myproxy, which has a member called _Mycont, which seems to change its value in the new vector (the vector that was copied by memcpy()).
I replicated this issue, I'm attaching the sample code below.
I can refactor this code and I will probably do so, but I want to understand exactly what's happening.
#include "stdafx.h"
#include <vector>
#include <iostream>
// an object which holds an std::vector
class test_t
{
public:
test_t() {}
~test_t()
{
std::cout << "d'tor" << std::endl;
}
void add(int i)
{
m_vec.push_back(i);
}
void print()
{
for (std::vector<int>::iterator it = m_vec.begin(); it != m_vec.end(); ++it)
{
int i = *it;
std::cout << i << std::endl;
}
std::cout << std::endl;
}
private:
std::vector<int> m_vec;
};
void test()
{
// array if objects where each object holds an std::vector
CArray<test_t, test_t&> arr;
for (int i = 0; i < 10; ++i)
{
test_t t;
int j = arr.Add(t);
test_t& rt = arr[i];
rt.add(1);
rt.add(2);
rt.add(3);
}
for (int i = 0; i < arr.GetSize(); ++i)
{
test_t& rt = arr[i];
rt.print(); // error occurs here
}
}
Thanks,
Gabriel
CArray doesn't play well with non-POD types: when it resizes it relocates its elements with memcpy_s instead of calling copy constructors and destructors, and a bitwise copy is not a valid way to move a std::vector (its internal bookkeeping, such as the debug-build _Myproxy you found, still refers to the old object). You can make it work, but in general it's best avoided for this use case.
See the note here: https://msdn.microsoft.com/en-us/library/4h2f09ct.aspx
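If you can change the container, here is a minimal sketch of the same test using std::vector instead of CArray (assuming the test_t class from the question); on reallocation the elements are then copied/moved through their constructors, so the inner vectors stay valid:
#include <vector>
#include <iostream>
// test_t exactly as defined in the question
void test()
{
    std::vector<test_t> arr;
    for (int i = 0; i < 10; ++i)
    {
        arr.push_back(test_t());
        test_t& rt = arr[i];
        rt.add(1);
        rt.add(2);
        rt.add(3);
    }
    for (size_t i = 0; i < arr.size(); ++i)
    {
        arr[i].print(); // no "vector iterator not dereferencable" here
    }
}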
I have a class like this:
Header:
class CurlAsio {
public:
std::string id;
boost::shared_ptr<boost::asio::io_service> io_ptr;
boost::shared_ptr<curl::multi> multi_ptr;
CurlAsio(int i);
virtual ~CurlAsio();
void deleteSelf();
void someEvent();
};
Cpp:
CurlAsio::CurlAsio(int i) {
id = boost::lexical_cast<std::string>(i);
io_ptr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service());
multi_ptr = boost::shared_ptr<curl::multi>(new curl::multi(*io_ptr));
}
CurlAsio::~CurlAsio() {
}
void CurlAsio::someEvent() {
deleteSelf();
}
void CurlAsio::deleteSelf() {
if (io_ptr) {
io_ptr.reset();
}
if (multi_ptr)
multi_ptr.reset();
if (this)
delete this;
}
During run time, many instances of the CurlAsio class are created and deleted.
So my questions are:
Even though I am calling shared_ptr::reset(), is it necessary to do so?
I monitor the virtual memory usage of the program during run time, and I would expect the memory usage to go down after deleteSelf() has been called, but it does not. Why is that?
If I modify deleteSelf() like this:
void CurlAsio::deleteSelf() {
delete this;
}
What happens to the two shared pointers? Do they get deleted as well?
The shared_ptr members have their own destructor to decrement the reference count on the pointee object, and delete it if the count reaches 0. You do not need to call .reset() explicitly given your destructor is about to run anyway.
That said - why are you even using a shared_ptr? Are those members really shared with other objects? If not - consider unique_ptr or storing by value.
As for memory - it doesn't normally get returned to the operating system until your program terminates, but it will be available for your program to reuse. There are many other Stack Overflow questions about this.
If you're concerned about memory, using a leak detection tool is a good idea. On Linux for example, valgrind is excellent.
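As a minimal sketch of the "store by value / unique_ptr" alternative (Worker here is a hypothetical stand-in for a class like CurlAsio, not your actual API; assumes C++14):
#include <boost/asio.hpp>
#include <boost/lexical_cast.hpp>
#include <memory>
#include <string>
struct Worker {
    std::string id;
    boost::asio::io_service io;   // held by value: destroyed together with Worker
    explicit Worker(int i) : id(boost::lexical_cast<std::string>(i)) {}
};
int main()
{
    auto w = std::make_unique<Worker>(42);   // sole owner
    // ... use w->io, w->id ...
}   // Worker (and its io_service) destroyed here, no deleteSelf() needed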
If I modify deleteSelf() like this:
void CurlAsio::deleteSelf() {
delete this;
}
Don't do this. This is an antipattern. If you find yourself "needing" this, shared_from_this is your solution:
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <cstdlib> // rand
#include <iostream>
struct X : boost::enable_shared_from_this<X> {
int i = rand()%100;
using Ptr = boost::shared_ptr<X>;
void hold() {
_hold = shared_from_this();
}
void unhold() { _hold.reset(); }
~X() {
std::cout << "~X: " << i << "\n";
}
private:
Ptr _hold;
};
int main() {
X* raw_pointer = nullptr; // we abuse this for demo
{
auto some_x = boost::make_shared<X>();
// now let's add a ref from inside X:
some_x->hold();
// now we can release some_x without destroying the X pointed to:
raw_pointer = some_x.get(); // we'll use this to demo `unhold()`
some_x.reset(); // redundant since `some_x` is going out of scope here
}
// only the internal `_hold` still "keeps" the X
std::cout << "X on hold\n";
// releasing the last one
raw_pointer->unhold(); // now it's gone ("self-delete")
// now `raw_pointer` is dangling (invalid)
}
Prints e.g.
X on hold
~X: 83
My program will create and delete a lot of objects (from a REST API). These objects will be referenced from multiple places. I'd like to have a "memory cache" and manage the objects' lifetime with reference counting, so they can be released when they aren't used anymore.
All the objects inherit from a base class Ressource.
The Cache is mostly a std::map<key, std::shared_ptr<Ressource>>
Then I'm puzzled: how can the Cache know when a Ressource ref count is decremented? i.e., a call to the std::shared_ptr destructor or operator=.
1/ I don't want to iterate over the std::map and check each ref.count().
2/ Can I reuse std::shared_ptr and implement a custom hook?
class RessourcePtr : public std::shared_ptr<Ressource>
...
3/ Should I implement my own ref count class? ex. https://stackoverflow.com/a/4910158/1058117
Thanks!
make shared_ptr not use delete shows how you can provide a custom delete function for a shared pointer.
You could also use intrusive pointers if you want to have custom functions for reference add and release.
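A minimal sketch of the custom-deleter approach (onRelease here is a hypothetical hook a cache could expose; it is not part of any library):
#include <iostream>
#include <memory>
struct Ressource { int id; };
// hypothetical hook that a cache could expose
void onRelease(Ressource* r)
{
    std::cout << "last reference to " << r->id << " dropped\n";
    delete r; // the deleter is responsible for actually freeing the object
}
int main()
{
    // the second argument is the custom deleter, invoked when the
    // reference count reaches zero
    std::shared_ptr<Ressource> p(new Ressource{1}, &onRelease);
    auto q = p;   // ref count 2
    p.reset();    // ref count 1, nothing happens yet
    q.reset();    // ref count 0, onRelease runs here
}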
You could use a map<Key, weak_ptr<Resource> > for your dictionary.
It would work approximately like this:
map<Key, weak_ptr<Resource> > _cache;
shared_ptr<Resource> Get(const Key& key)
{
auto& wp = _cache[key];
shared_ptr<Resource> sp; // need to be outside of the "if" scope to avoid
// releasing the resource
if (wp.expired()) {
sp = Load(key); // actually creates the resource
wp = sp;
}
return wp.lock();
}
When all the shared_ptrs returned by Get have been destroyed, the object will be freed. The drawback is that if you use an object and then immediately destroy the shared pointer, then you are not really using a cache, as suggested by @pmr in his comment.
EDIT: this solution is not thread-safe, as you are probably aware; you'd need to lock accesses to the map object.
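A minimal sketch of the locked variant, under the same assumptions (Load is still a hypothetical factory, stubbed out here, and Key stands in for whatever key type you use):
#include <map>
#include <memory>
#include <mutex>
struct Resource { };
struct Key { int v; bool operator<(const Key& o) const { return v < o.v; } };
// stand-in for whatever actually creates the resource
std::shared_ptr<Resource> Load(const Key&) { return std::make_shared<Resource>(); }
std::map<Key, std::weak_ptr<Resource>> _cache;
std::mutex _cache_mutex;
std::shared_ptr<Resource> Get(const Key& key)
{
    std::lock_guard<std::mutex> lock(_cache_mutex); // serialize access to _cache
    auto& wp = _cache[key];
    std::shared_ptr<Resource> sp = wp.lock();       // empty if expired or never set
    if (!sp) {
        sp = Load(key);
        wp = sp;
    }
    return sp;
}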
The problem is that in your scenario the pool is going to keep every resource alive. Here is a solution that removes resources with a reference count of one from the pool. The remaining question is when to prune the pool; this solution prunes on every call to get. That way scenarios like "release-and-acquire-again" will be fast.
#include <memory>
#include <map>
#include <string>
#include <iostream>
struct resource {
};
class pool {
public:
std::shared_ptr<resource> get(const std::string& x)
{
auto it = cache_.find(x);
std::shared_ptr<resource> ret;
if(it == end(cache_))
ret = cache_[x] = std::make_shared<resource>();
else {
ret = it->second;
}
prune();
return ret;
}
std::size_t prune()
{
std::size_t count = 0;
for(auto it = begin(cache_); it != end(cache_);)
{
if(it->second.use_count() == 1) {
cache_.erase(it++);
++count;
} else {
++it;
}
}
return count;
}
std::size_t size() const { return cache_.size(); }
private:
std::map<std::string, std::shared_ptr<resource>> cache_;
};
int main()
{
pool c;
{
auto fb = c.get("foobar");
auto fb2 = c.get("foobar");
std::cout << fb.use_count() << std::endl;
std::cout << "pool size: " << c.size() << std::endl;
}
auto fb3 = c.get("bar");
std::cout << fb3.use_count() << std::endl;
std::cout << "pool size: " << c.size() << std::endl;
return 0;
}
You do not want a cache, you want a pool; specifically an object pool. Your main problem is not how to implement a ref-count: shared_ptr already does that for you. When a resource is no longer needed you just remove it from the cache. Your main problem will be memory fragmentation due to constant allocation/deletion, and slowness due to contention in the global memory allocator. Look at a thread-specific memory pool implementation for an answer.