I have a Storage class that keeps a list of Things:
#include <iostream>
#include <list>
#include <functional>
#include <cstdlib>
class Thing {
private:
int id;
int value = 0;
static int nextId;
public:
Thing() { this->id = Thing::nextId++; };
int getId() const { return this->id; };
int getValue() const { return this->value; };
void add(int n) { this->value += n; };
};
int Thing::nextId = 1;
class Storage {
private:
std::list<std::reference_wrapper<Thing>> list;
public:
void add(Thing& thing) {
this->list.push_back(thing);
}
Thing& findById(int id) const {
for (std::list<std::reference_wrapper<Thing>>::const_iterator it = this->list.begin(); it != this->list.end(); ++it) {
if (it->get().getId() == id) return *it;
}
std::cout << "Not found!!\n";
exit(1);
}
};
I started with a simple std::list<Thing>, but then everything is copied around on insertion and retrieval, and I didn't want that, because if I get a copy, altering it no longer affects the original objects. While looking for a solution, I found out about std::reference_wrapper in this SO question, but now I have another problem.
Now to the code that uses them:
void temp(Storage& storage) {
storage.findById(2).add(1);
Thing t4; t4.add(50);
storage.add(t4);
std::cout << storage.findById(4).getValue() << "\n";
}
void run() {
Thing t1; t1.add(10);
Thing t2; t2.add(100);
Thing t3; t3.add(1000);
Storage storage;
storage.add(t3);
storage.add(t1);
storage.add(t2);
temp(storage);
t2.add(10000);
std::cout << storage.findById(2).getValue() << "\n";
std::cout << storage.findById(4).getValue() << "\n";
}
My main() simply calls run(). The output I get is:
50
10101
Not found!!
Although I was looking for:
50
10101
50
Question
It looks like the locally declared object t4 ceases to exist when the function returns, which makes sense. I could prevent this by allocating it dynamically with new, but I didn't want to manage memory manually...
How can I fix the code without removing the temp() function and without having to manage memory manually?
If I just use a std::list<Thing> as some suggested, surely the problem with t4 and temp will cease to exist, but another problem will arise: the code won't print 10101 anymore, for example. If I keep copying stuff around, I won't be able to alter the state of a stored object.
Who is the owner of the Thing in the Storage?
Your actual problem is ownership. Currently, your Storage does not really contain the Things; instead, it is left to the user of the Storage to manage the lifetime of the objects you put inside it. This is very much against the philosophy of the standard containers. All standard C++ containers own the objects you put in them, and the container manages their lifetime (e.g. you simply call v.resize(v.size()-2) on a vector and the last two elements get destroyed).
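To see that ownership in action, here is a minimal sketch (the Noisy type is mine, purely for illustration):
#include <iostream>
#include <vector>
// A tiny type that reports its own destruction, so we can watch the
// container manage lifetimes for us.
struct Noisy {
    int n;
    explicit Noisy(int n) : n(n) {}
    ~Noisy() { std::cout << "destroying " << n << "\n"; }
};
int main() {
    std::vector<Noisy> v;
    v.reserve(4);                 // avoid reallocation noise in the output
    for (int i = 1; i <= 4; ++i) v.emplace_back(i);
    v.resize(v.size() - 2);       // the vector owns its elements: 3 and 4 are destroyed here
    std::cout << "vector now holds " << v.size() << " elements\n";
}                                 // 1 and 2 are destroyed here, when the vector itself dies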
Why references?
You already found a way to make the container not own the actual objects (by using a reference_wrapper), but there is no reason to do so. From a class called Storage I would expect it to hold objects, not just references. Moreover, this opens the door to lots of nasty problems, including undefined behaviour. For example, here:
void temp(Storage& storage) {
storage.findById(2).add(1);
Thing t4; t4.add(50);
storage.add(t4);
std::cout << storage.findById(4).getValue() << "\n";
}
you store a reference to t4 in the storage. The thing is: t4's lifetime only lasts until the end of that function, so you end up with a dangling reference. You can store such a reference, but it isn't that useful because you are basically not allowed to do anything with it.
Aren't references a cool thing?
Currently you can push t1, modify it, and then observe those changes on the element in Storage. This might be fine if you want to mimic Java, but in C++ we are used to containers making a copy when you push something (there are also methods to create the elements in place, in case you worry about useless temporaries). And yes, of course, if you really want to, you can make a standard container hold references too, but let's make a small detour...
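For reference, the in-place construction mentioned above is what emplace_back does; a small sketch (the Widget type is just an illustrative stand-in):
#include <list>
#include <string>
struct Widget {
    int id;
    std::string name;
    Widget(int id, std::string name) : id(id), name(std::move(name)) {}
};
int main() {
    std::list<Widget> widgets;
    widgets.push_back(Widget(1, "copied/moved in"));   // builds a temporary, then moves it into the list
    widgets.emplace_back(2, "constructed in place");   // no temporary: constructed directly inside the list
}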
Who collects all that garbage?
Maybe it helps to consider that Java is garbage-collected while C++ has destructors. In Java you are used to references floating around until the garbage collector kicks in. In C++ you have to be very aware of the lifetime of your objects. This may sound bad, but it actually turns out to be extremely useful to have full control over the lifetime of objects.
Garbage? What garbage?
In modern C++ you shouldn't worry about forgetting a delete, but rather appreciate the advantages of RAII. Acquiring resources on initialization and knowing when a destructor gets called gives you automatic resource management for basically any kind of resource, something a garbage collector can only dream of (think of files, database connections, etc.).
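As an illustration, two everyday RAII types from the standard library; this is just a sketch, unrelated to the question's code:
#include <fstream>
#include <mutex>
#include <string>
std::mutex m;
void append_line(const std::string& path, const std::string& line) {
    std::lock_guard<std::mutex> lock(m);      // the mutex is released when 'lock' is destroyed
    std::ofstream out(path, std::ios::app);   // the file is closed when 'out' is destroyed
    out << line << '\n';
}                                             // both resources are released here, even if an exception is thrown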
"How can I fix the code without removing the temp() function and without having to manage memory manually?"
A trick that helped me a lot is this: Whenever I find myself thinking I need to manage a resource manually I stop and ask "Can't someone else do the dirty stuff?". It is really extremely rare that I cannot find a standard container that does exactly what I need out of the box. In your case, just let the std::list do the "dirty" work.
Can't be C++ if there is no template, right?
I would actually suggest that you make Storage a template, along the lines of:
template <typename T>
class Storage {
private:
std::list<T> list;
//....
Then
Storage<Thing> thing_storage;
Storage<int> int_storage;
are Storages containing Things and ints, respectively. That way, if you ever feel like experimenting with references or pointers, you could still instantiate a Storage<reference_wrapper<int>>.
Did I miss something?...maybe references?
I won't be able to alter the state of a stored object
Given that the container owns the object, you would instead let the user take a reference to the object in the container. For example, with a vector that would be:
auto t = std::vector<int>(10,0); // 10 elements initialized to 0
auto& first_element = t[0]; // reference to the first element
first_element = 5; // first_element is an alias for t[0]
std::cout << t[0]; // I don't want to spoil the fun part
To make this work with your Storage you just have to make findById return a reference. As a demo:
struct foo {
private:
int data;
public:
int& get_ref() { return data;}
const int& get_ref() const { return data;}
};
auto x = foo();
x.get_ref() = 12;
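Putting the pieces together for your templated Storage, here is a sketch of how it could own its elements and still hand out references; the emplace helper and the exception instead of exit(1) are my choices, not something your code requires:
#include <list>
#include <stdexcept>
#include <utility>
template <typename T>
class Storage {
private:
    std::list<T> list;   // the Storage owns its elements
public:
    // Construct the element in place and hand back a reference to it.
    template <typename... Args>
    T& emplace(Args&&... args) {
        list.emplace_back(std::forward<Args>(args)...);
        return list.back();
    }
    // Return a reference so callers can mutate the stored object directly.
    T& findById(int id) {
        for (T& t : list)
            if (t.getId() == id) return t;
        throw std::out_of_range("Not found!!");
    }
};
Because std::list never invalidates references to existing elements when other elements are inserted or erased, the reference returned by findById stays valid for as long as that element remains in the Storage.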
TL;DR
How to avoid manual resource management? Let someone else do it for you and call it automatic resource management :P
t4 is a local object that is destroyed on exit from temp(), so what you store in storage becomes a dangling reference, causing UB.
It is not quite clear what you're trying to achieve, but if you want to keep the Storage class as it is, you should make sure that all the references stored in it live at least as long as the storage itself. As you have discovered, this is one of the reasons STL containers keep their own private copies of elements (others, probably less important, being the elimination of an extra indirection and much better locality in some cases).
P.S. And please, can you stop writing those this-> and learn about initialization lists in constructors? >_<
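For the record, here is roughly what the Thing class looks like with a member initializer list and without the this-> noise (same behavior as your original):
class Thing {
private:
    int id;
    int value = 0;
    static int nextId;
public:
    Thing() : id(nextId++) {}   // member initializer list instead of assignment in the body
    int getId() const { return id; }
    int getValue() const { return value; }
    void add(int n) { value += n; }
};
int Thing::nextId = 1;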
In terms of what your code actually appears to be doing, you've definitely overcomplicated things, by my estimation. Consider this code, which does all the same things yours does, but with far less boilerplate and in a way that's far safer for your purposes:
#include<map>
#include<iostream>
int main() {
std::map<int, int> things;
int & t1 = things[1];
int & t2 = things[2];
int & t3 = things[3];
t1 = 10;
t2 = 100;
t3 = 1000;
t2++;
things[4] = 50;
std::cout << things.at(4) << std::endl;
t2 += 10000;
std::cout << things.at(2) << std::endl;
std::cout << things.at(4) << std::endl;
things.at(2) -= 75;
std::cout << things.at(2) << std::endl;
std::cout << t2 << std::endl;
}
//Output:
50
10101
50
10026
10026
Note that a few interesting things are happening here:
Because t2 is a reference, and insertion into the map doesn't invalidate references, t2 can be modified and those modifications will be reflected in the map itself, and vice versa.
things owns all the values inserted into it, and they will be cleaned up thanks to RAII, the built-in behavior of std::map, and the broader C++ design principles it obeys. There's no worry about objects not being cleaned up.
If you need to preserve the behavior where the id incrementing is handled automatically, independently of the end programmer, we could consider this code instead:
#include<map>
#include<iostream>
int & insert(std::map<int, int> & things, int value) {
static int id = 1;
int & ret = things[id++] = value;
return ret;
}
int main() {
std::map<int, int> things;
int & t1 = insert(things, 10);
int & t2 = insert(things, 100);
int & t3 = insert(things, 1000);
t2++;
insert(things, 50);
std::cout << things.at(4) << std::endl;
t2 += 10000;
std::cout << things.at(2) << std::endl;
std::cout << things.at(4) << std::endl;
things.at(2) -= 75;
std::cout << things.at(2) << std::endl;
std::cout << t2 << std::endl;
}
//Output:
50
10101
50
10026
10026
These code snippets should give you a decent sense of how the language works and of the principles, possibly unfamiliar to you, that appear in the code I've written and that you need to learn about. My general recommendation is to find a good C++ resource for learning the basics of the language and learn from that. Some good resources can be found here.
One last thing: if the use of Thing is critical to your code, because you need more data saved in the map, consider this instead:
#include<map>
#include<iostream>
#include<string>
//Only difference between struct and class is struct sets everything public by default
struct Thing {
int value;
double rate;
std::string name;
Thing() : Thing(0,0,"") {}
Thing(int value, double rate, std::string name) : value(value), rate(rate), name(std::move(name)) {}
};
int main() {
std::map<int, Thing> things;
Thing & t1 = things[1];
t1.value = 10;
t1.rate = 5.7;
t1.name = "First Object";
Thing & t2 = things[2];
t2.value = 15;
t2.rate = 17.99999;
t2.name = "Second Object";
t2.value++;
std::cout << things.at(2).value << std::endl;
t1.rate *= things.at(2).rate;
std::cout << things.at(1).rate << std::endl;
std::cout << t1.name << "," << things.at(2).name << std::endl;
things.at(1).rate -= 17;
std::cout << t1.rate << std::endl;
}
Based on what François Andrieux and Eljay have said (and what I would have said, had I got there first), here is the way I would do it, if you want to mutate objects you have already added to a list. All that reference_wrapper stuff is just a fancy way of passing pointers around. It will end in tears.
OK, here's the code (now edited as per the OP's request):
#include <iostream>
#include <list>
#include <memory>
#include <cstdlib>
class Thing {
private:
int id;
int value = 0;
static int nextId;
public:
Thing() { this->id = Thing::nextId++; };
int getId() const { return this->id; };
int getValue() const { return this->value; };
void add(int n) { this->value += n; };
};
int Thing::nextId = 1;
class Storage {
private:
std::list<std::shared_ptr<Thing>> list;
public:
void add(const std::shared_ptr<Thing>& thing) {
this->list.push_back(thing);
}
std::shared_ptr<Thing> findById(int id) const {
for (std::list<std::shared_ptr<Thing>>::const_iterator it = this->list.begin(); it != this->list.end(); ++it) {
if (it->get()->getId() == id) return *it;
}
std::cout << "Not found!!\n";
exit(1);
}
};
void add_another(Storage& storage) {
storage.findById(2)->add(1);
std::shared_ptr<Thing> t4 = std::make_shared<Thing> (); t4->add(50);
storage.add(t4);
std::cout << storage.findById(4)->getValue() << "\n";
}
int main() {
std::shared_ptr<Thing> t1 = std::make_shared<Thing> (); t1->add(10);
std::shared_ptr<Thing> t2 = std::make_shared<Thing> (); t2->add(100);
std::shared_ptr<Thing> t3 = std::make_shared<Thing> (); t3->add(1000);
Storage storage;
storage.add(t3);
storage.add(t1);
storage.add(t2);
add_another(storage);
t2->add(10000);
std::cout << storage.findById(2)->getValue() << "\n";
std::cout << storage.findById(4)->getValue() << "\n";
return 0;
}
Output is now:
50
10101
50
as desired. Run it on Wandbox.
Note that what you are doing here, in effect, is reference counting your Things. The Things themselves are never copied and will go away when the last shared_ptr goes out of scope. Only the shared_ptrs are copied, and they are designed to be copied because that's their job. Doing things this way is almost as efficient as passing references (or wrapped references) around and far safer. When starting out, it's easy to forget that a reference is just a pointer in disguise.
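If you want to watch that reference counting happen, use_count() shows it directly; a small sketch, separate from the code above:
#include <iostream>
#include <memory>
int main() {
    auto a = std::make_shared<int>(42);
    std::cout << a.use_count() << "\n";       // 1: only 'a' owns the int
    {
        auto b = a;                           // copies the shared_ptr, not the int
        std::cout << a.use_count() << "\n";   // 2
    }                                         // 'b' goes out of scope, the count drops back to 1
    std::cout << a.use_count() << "\n";       // 1: the int is destroyed when this reaches 0
}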
Given that your Storage class does not own the Thing objects, and every Thing object is uniquely identified by its id, why not just store Thing* in the list?
class Storage {
private:
std::list<Thing*> list;
public:
void add(Thing& thing) {
this->list.push_back(&thing);
}
Thing* findById(int id) const {
for (auto thing : this->list) {
if (thing->getId() == id) return thing;
}
std::cout << "Not found!!\n";
return nullptr;
}
};
EDIT: Note that Storage::findById now returns Thing*, which allows it to fail gracefully by returning nullptr (rather than calling exit(1)).
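With that change, callers are expected to check the result before using it; for example, a sketch of how temp() might be adapted:
void temp(Storage& storage) {
    if (Thing* found = storage.findById(2))
        found->add(1);
    else
        std::cout << "id 2 is not stored\n";
}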
I have a fairly large Visual Studio C++ code base which many people are modifying. There is a requirement to delete an object which possibly many other objects are referring to (using the addresses of raw pointers). I have tried to remove the address references as much as possible, but I am afraid there still might be some that I haven't addressed.
So, I want to know if there is a way to redirect all accesses to the deleted address to a different address, maybe by doing something while deleting, so that it does not crash.
The language does not support what you are trying to do using raw pointers. If you have the option of using std::shared_ptr, you can get what you are looking for.
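One common way to get that effect with the standard smart pointers is to keep the owning std::shared_ptr in one place and hand out std::weak_ptr to everyone else; clients can then detect that the object is gone instead of crashing. A minimal sketch (the Widget type is mine, for illustration only):
#include <iostream>
#include <memory>
struct Widget { int value = 42; };
int main() {
    auto owner = std::make_shared<Widget>();
    std::weak_ptr<Widget> client = owner;       // non-owning handle held by some other object
    if (auto locked = client.lock())            // safe access while the object is alive
        std::cout << locked->value << "\n";
    owner.reset();                              // the "delete": the last owner releases the Widget
    if (auto locked = client.lock())
        std::cout << locked->value << "\n";
    else
        std::cout << "object is gone, no crash\n";   // clients can see that it was deleted
}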
Response to OP's comment
The objective of using delete is to terminate the life of an object.
If an object is shared by multiple clients, by holding a pointer to the object, independent of one another, you have to make a policy decision on how to manage the life of the object.
Don't allow the life of the object to end until no client has a pointer to it. This is the policy implemented by shared_ptr.
Allow the life of the object to end when the first client wants to end it while making sure that the remaining clients know that the life of the object has ended.
It appears that you want to implement the second policy.
Calling delete directly on the pointer will not work to implement that policy since the language does not support it.
There are no smart pointer classes in the standard library, that I know of, that support that policy. However, it's not that hard to implement one.
Here's a demonstrative implementation of such a class.
#include <iostream>
#include <cassert>
template <typename T>
struct my_shared_ptr
{
my_shared_ptr(T* ptr) : dataPtr_(new data(ptr))
{
}
my_shared_ptr(my_shared_ptr const& copy) : dataPtr_(copy.dataPtr_)
{
++(dataPtr_->use_count_);
}
~my_shared_ptr()
{
delete dataPtr_->ptr_;
--(dataPtr_->use_count_);
if ( dataPtr_->use_count_ == 0 )
{
delete dataPtr_;
}
else
{
dataPtr_->ptr_ = nullptr;
}
}
// Overloaded operator functions to use objects of
// the class as pointers.
T& operator*()
{
assert(dataPtr_->ptr_ != nullptr);
return *(dataPtr_->ptr_);
}
const T& operator*() const
{
assert(dataPtr_->ptr_ != nullptr);
return *(dataPtr_->ptr_);
}
T* operator->()
{
assert(dataPtr_->ptr_ != nullptr);
return dataPtr_->ptr_;
}
const T* operator->() const
{
assert(dataPtr_->ptr_ != nullptr);
return dataPtr_->ptr_;
}
struct data
{
data(T* ptr) : ptr_(ptr), use_count_(1) {}
T* ptr_;
size_t use_count_;
};
data* dataPtr_;
};
int main()
{
my_shared_ptr<int> ptr1{new int(10)};
std::cout << *ptr1 << std::endl;
my_shared_ptr<int> ptr2{ptr1};
std::cout << *ptr2 << std::endl;
{
my_shared_ptr<int> ptr3{ptr1};
std::cout << *ptr3 << std::endl;
}
// Problem. The int got deleted when ptr3's life ended
// in the above block.
std::cout << *ptr1 << std::endl;
return 1;
}
Output of the above program built with g++:
10
10
10
socc: socc.cc:35: T& my_shared_ptr<T>::operator*() [with T = int]: Assertion `dataPtr_->ptr_ != nullptr' failed.
Aborted
Please note that you will need to implement at least the copy assignment operator to make the class conform to the Rule of Three. You will need further improvements to deal with pointers to base classes and derived classes.
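For completeness, here is a sketch of what that copy assignment operator might look like, keeping the same (unusual) release policy as the destructor above; treat it as a starting point rather than a finished implementation:
my_shared_ptr& operator=(my_shared_ptr const& rhs)
{
    if (this != &rhs)
    {
        // Release our current share, mirroring the destructor's policy.
        delete dataPtr_->ptr_;
        dataPtr_->ptr_ = nullptr;
        --(dataPtr_->use_count_);
        if (dataPtr_->use_count_ == 0)
        {
            delete dataPtr_;
        }
        // Share the right-hand side's data.
        dataPtr_ = rhs.dataPtr_;
        ++(dataPtr_->use_count_);
    }
    return *this;
}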
I have a custom, generalized serialization system written in C++ where I've handled the built-in types, std::string and structures containing those. However, for a memory stream class containing a std::vector<byte>, I'd like to make it possible to store and retrieve a std::shared_ptr<T> inside of it (where T is any class that derives from Abstract). Of course, I'd like a solution without using Boost, as that would defeat my intent.
As stated on http://en.cppreference.com/w/cpp/memory/shared_ptr :
Constructing a new shared_ptr using the raw underlying pointer owned by another shared_ptr leads to undefined behavior.
The only (hacky) solution I have come up with so far is for the binary memory stream class to have a small lookup table of std::shared_ptr<Abstract> referenced by the raw pointer itself, making it fairly trivial to read and write them out, with ownership/reference count staying reliable. Then it becomes possible/useful to serialize the raw pointer.
However, ownership/reference count is not of concern as it's guaranteed for the use case. If there is a solution that would only use the std::vector<byte>, I would consider it a more elegant approach as it could provide other use cases.
Since your serialization/deserialization process happens in the same process (i.e. the same memory space), you can store the raw memory pointers as binary data in your stream. Consider the idea below, written as a trivial demo.
Unfortunately, std::enable_shared_from_this does not allow you to manually increment/decrement the reference counter, because it only stores a weak reference, which cannot destroy the object internally when the count reaches 0. That is why we have to do manual reference management, specifically for the instances in the byte stream.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <memory>
#include <mutex>
#include <vector>
using std::cout;
using std::endl;
class Abstract : public std::enable_shared_from_this<Abstract> {
public:
Abstract() : _count(0) {}
~Abstract() { cout << "I am destroyed" << endl; }
void incrementStreamRef() {
std::lock_guard<std::mutex> lock(_mutex);
if (!_count) {
_guard = this->shared_from_this();
}
++_count;
};
void decrementStreamRef() {
std::lock_guard<std::mutex> lock(_mutex);
if (_count == 0)
return;
if (_count == 1) {
if (_guard.use_count() == 1) {
// After this call `this` will be destroyed
_guard.reset();
return;
}
_guard.reset();
}
--_count;
};
private:
std::mutex _mutex;
std::shared_ptr<Abstract> _guard;
std::size_t _count;
};
void addAbstractToStream(std::vector<uint8_t>& byteStream, Abstract* abstract) {
abstract->incrementStreamRef();
auto offset = byteStream.size();
try {
// 1 byte for type identification
byteStream.resize(offset + sizeof(abstract) + 1);
byteStream[offset]
= 0xEE; // Means the next bytes are the raw pointer to an Abstract instance
++offset;
// Add the raw pointer to the stream
// preallocate memory here
// byteStream.push_back(....;
// ....
} catch (...) {
abstract->decrementStreamRef();
return;
}
std::memcpy(byteStream.data() + static_cast<std::ptrdiff_t>(offset),
(void*)&abstract,
sizeof(abstract));
}
void removeAbstractFromStream(std::vector<uint8_t>& byteStream, std::size_t offset) {
Abstract* abstract;
std::memcpy((void*)&abstract,
byteStream.data() + static_cast<std::ptrdiff_t>(offset),
sizeof(abstract));
abstract->decrementStreamRef();
}
void tryMe(std::vector<uint8_t>& byteStream) {
// Must not be destroyed when we leave the scope
auto abstract = std::make_shared<Abstract>();
addAbstractToStream(byteStream, abstract.get());
cout << "Scope is about to be left" << endl;
}
int main() {
// Always walk over the stream and use `removeAbstractFromStream`
std::vector<uint8_t> byteStream;
// `try` to always clean the byte stream
// Of course RAII is much better
try {
// Do some work with the stream
} catch (...) {
removeAbstractFromStream(byteStream, 1);
throw;
}
tryMe(byteStream);
cout << "Main is about to be left" << endl;
removeAbstractFromStream(byteStream, 1);
cout << "Main is even closer to be left" << endl;
return 0;
}
Of course, more elaborate locking could be used, or locking could be dropped entirely if thread safety is not a concern. Please review the code for corner cases before using it in production.
I have a class like this :
Header:
class CurlAsio {
public:
std::string id;
boost::shared_ptr<boost::asio::io_service> io_ptr;
boost::shared_ptr<curl::multi> multi_ptr;
CurlAsio(int i);
virtual ~CurlAsio();
void deleteSelf();
void someEvent();
};
Cpp:
CurlAsio::CurlAsio(int i) {
id = boost::lexical_cast<std::string>(i);
io_ptr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service());
multi_ptr = boost::shared_ptr<curl::multi>(new curl::multi(*io_ptr));
}
CurlAsio::~CurlAsio() {
}
void CurlAsio::someEvent() {
deleteSelf();
}
void CurlAsio::deleteSelf() {
if (io_ptr) {
io_ptr.reset();
}
if (multi_ptr)
multi_ptr.reset();
if (this)
delete this;
}
During run time, many instances of the CurlAsio class are created and deleted.
So my questions are:
Even though I am calling shared_ptr.reset(), is it necessary to do so?
I monitor the virtual memory usage of the program during run time, and I would expect the memory usage to go down after deleteSelf() has been called, but it does not. Why is that?
If I modify deleteSelf() like this:
void CurlAsio::deleteSelf() {
delete this;
}
What happens to the two shared pointers? Do they get deleted as well?
The shared_ptr members have their own destructor to decrement the reference count on the pointee object, and delete it if the count reaches 0. You do not need to call .reset() explicitly given your destructor is about to run anyway.
That said, why are you even using shared_ptr? Are those members really shared with other objects? If not, consider unique_ptr or storing by value.
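For illustration, if nothing else shares those members, they could simply be stored by value; a rough sketch of that direction, assuming (as in your code) that curl::multi can be constructed from an io_service&:
#include <boost/asio.hpp>
#include <boost/lexical_cast.hpp>
#include <string>
// plus whatever header declares curl::multi
// Sketch only: members stored by value, constructed in declaration order, so the
// io_service exists before curl::multi needs it. No manual delete anywhere.
class CurlAsio {
public:
    explicit CurlAsio(int i)
        : id(boost::lexical_cast<std::string>(i)), multi(io) {}
private:
    std::string id;
    boost::asio::io_service io;
    curl::multi multi;
};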
As for memory, it doesn't normally get returned to the operating system until your program terminates, but it will be available for your program to reuse. There are many other Stack Overflow questions about this.
If you're concerned about memory, using a leak detection tool is a good idea. On Linux for example, valgrind is excellent.
If I modify deleteSelf() like this:
void CurlAsio::deleteSelf() {
delete this;
}
Don't do this. This is an antipattern. If you find yourself "needing" this, shared_from_this is your solution:
Live On Coliru
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <iostream>
#include <vector>
struct X : boost::enable_shared_from_this<X> {
int i = rand()%100;
using Ptr = boost::shared_ptr<X>;
void hold() {
_hold = shared_from_this();
}
void unhold() { _hold.reset(); }
~X() {
std::cout << "~X: " << i << "\n";
}
private:
Ptr _hold;
};
int main() {
X* raw_pointer = nullptr; // we abuse this for demo
{
auto some_x = boost::make_shared<X>();
// now let's addref from inside X:
some_x->hold();
// now we can release some_x without destroying the X pointed to:
raw_pointer = some_x.get(); // we'll use this to demo `unhold()`
some_x.reset(); // redundant since `some_x` is going out of scope here
}
// only the internal `_hold` still "keeps" the X
std::cout << "X on hold\n";
// releasing the last one
raw_pointer->unhold(); // now it's gone ("self-delete")
// now `raw_pointer` is dangling (invalid)
}
Prints e.g.
X on hold
~X: 83
My program will create and delete a lot of objects (from a REST API). These objects will be referenced from multiple places. I'd like to have a "memory cache" and manage objects lifetime with reference counting so they can be released when they aren't used anymore.
All the objects inherit from a base class Ressource.
The Cache is mostly a std::map<_key_, std::shared_ptr<Ressource> >
Then I'm puzzled: how can the Cache know when a Ressource's ref count is decremented, i.e. when the std::shared_ptr destructor or operator= is called?
1/ I don't want to iterate over the std::map and check each ref.count().
2/ Can I reuse std::shared_ptr and implement a custom hook?
class RessourcePtr : public std::shared_ptr<Ressource>
...
3/ Should I implement my own ref count class? ex. https://stackoverflow.com/a/4910158/1058117
Thanks!
make shared_ptr not use delete shows how you can provide a custom delete function for a shared pointer.
You could also use intrusive pointers if you want custom functions for reference add and release.
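Here is a sketch of how a custom deleter can notify the cache when the last reference to a Ressource goes away; the Cache/Ressource names follow your question, but treat this as an illustration rather than a drop-in solution (in particular, it assumes the Cache outlives every handed-out pointer and ignores thread safety):
#include <iostream>
#include <map>
#include <memory>
#include <string>
struct Ressource { /* ... */ };
class Cache {
public:
    std::shared_ptr<Ressource> get(const std::string& key) {
        auto it = cache_.find(key);
        if (it != cache_.end())
            if (auto sp = it->second.lock())
                return sp;                       // still alive, reuse it
        // The custom deleter runs when the last shared_ptr dies: it frees the
        // Ressource and removes the now-expired entry from the cache.
        std::shared_ptr<Ressource> sp(new Ressource,
            [this, key](Ressource* p) {
                delete p;
                cache_.erase(key);
                std::cout << key << " released\n";
            });
        cache_[key] = sp;                        // the cache itself only holds weak_ptrs
        return sp;
    }
private:
    std::map<std::string, std::weak_ptr<Ressource>> cache_;
};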
You could use a map<Key, weak_ptr<Resource> > for your dictionary.
It would work approximately like this:
map<Key, weak_ptr<Resource> > _cache;
shared_ptr<Resource> Get(const Key& key)
{
auto& wp = _cache[key];
shared_ptr<Resource> sp; // need to be outside of the "if" scope to avoid
// releasing the resource
if (wp.expired()) {
sp = Load(key); // actually creates the resource
wp = sp;
}
return wp.lock();
}
When all the shared_ptrs returned by Get have been destroyed, the object will be freed. The drawback is that if you use an object and then immediately destroy the shared pointer, you are not really using a cache, as suggested by #pmr in his comment.
EDIT: this solution is not thread-safe; as you are probably aware, you'd need to lock accesses to the map object.
The problem is that in your scenario the pool is going to keep every resource alive. Here is a solution that removes resources with a reference count of one from the pool. The remaining question is when to prune the pool; this solution prunes on every call to get. That way, scenarios like "release-and-acquire-again" will be fast.
#include <memory>
#include <map>
#include <string>
#include <iostream>
struct resource {
};
class pool {
public:
std::shared_ptr<resource> get(const std::string& x)
{
auto it = cache_.find(x);
std::shared_ptr<resource> ret;
if(it == end(cache_))
ret = cache_[x] = std::make_shared<resource>();
else {
ret = it->second;
}
prune();
return ret;
}
std::size_t prune()
{
std::size_t count = 0;
for(auto it = begin(cache_); it != end(cache_);)
{
if(it->second.use_count() == 1) {
cache_.erase(it++);
++count;
} else {
++it;
}
}
return count;
}
std::size_t size() const { return cache_.size(); }
private:
std::map<std::string, std::shared_ptr<resource>> cache_;
};
int main()
{
pool c;
{
auto fb = c.get("foobar");
auto fb2 = c.get("foobar");
std::cout << fb.use_count() << std::endl;
std::cout << "pool size: " << c.size() << std::endl;
}
auto fb3 = c.get("bar");
std::cout << fb3.use_count() << std::endl;
std::cout << "pool size: " << c.size() << std::endl;
return 0;
}
You do not want a cache, you want a pool, specifically an object pool. Your main problem is not how to implement a ref count; shared_ptr already does that for you. When a resource is no longer needed, you just remove it from the cache. Your main problem will be memory fragmentation due to constant allocation/deletion, and slowness due to contention in the global memory allocator. Look at a thread-specific memory pool implementation for an answer.