I'm trying to decide what the best choice is for my homework.
I have a map (which I wrote myself) that is supposed to store integer IDs as keys and shared pointers to a class named Fan as values:
Map<Id, shared_ptr<Fan>> Online_list;
Which is better to store as the value: shared_ptr<Fan>& or a plain (non-reference) shared_ptr<Fan>?
My homework is about creating a Facebook-like server where fans can be online or offline, so I have two maps, one called Online_list and the other Offline_list. When a fan disconnects, I need to remove him from the online list and add him to the offline list.
A shared_ptr is itself a kind of reference: a pointer with memory management attached. You can store the plain shared_ptr, since every copy refers to the same underlying data anyway (the copy constructor increments the reference count, and so on).
Usually it's best to not store a pointer at all, but just store the Fan object by-value. Does it really make sense for two things to own this Fan object?
However, assuming that your design is correct, then you should simply store the shared_ptr by-value.
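For illustration, here is a minimal sketch of the disconnect step using std::map and std::shared_ptr stored by value; the Fan struct, the Id alias, and the disconnect function are placeholders invented for the example, not part of the original assignment.

#include <map>
#include <memory>

struct Fan { /* name, status, ... */ };
using Id = int;

std::map<Id, std::shared_ptr<Fan>> Online_list;
std::map<Id, std::shared_ptr<Fan>> Offline_list;

// Move a fan from the online list to the offline list.
void disconnect(Id id) {
    auto it = Online_list.find(id);
    if (it == Online_list.end()) return;       // not currently online
    Offline_list[id] = std::move(it->second);  // the shared_ptr is stored by value
    Online_list.erase(it);                     // the Fan object itself is not destroyed
}

Because the shared_ptr is copied or moved by value, the Fan object stays alive as long as either list (or any other code) still holds a shared_ptr to it.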
I have a basic design that consists of three classes: a Data class, a Holder class which holds and manages multiple Data objects, and a Wrapper, returned by the Holder, which contains a reference to a Data object.
The problem is that a Wrapper must not outlive its Holder, or it will contain a dangling reference, since the Holder is responsible for deleting the Data objects. Because a Wrapper is intended to have a very short lifetime (get it in a function, do some computation on its data, and let it go out of scope), this should not be a problem in practice, but I'm not sure this is a good design.
Here are some solutions I have thought about:
- Rely on the user reading the documentation; technically the same thing happens with STL iterators.
- Use shared_ptr to make sure the data lasts long enough, but that feels like overkill.
- Make Wrapper verify that its Holder still exists each time it is used.
- Any other ideas?
(I hope everyone can understand this, as English is not my native language.)
Edit: If you want a less theoretical example, this all comes from a little algorithm I'm trying to write to solve Sudokus: the Holder is the grid, the Data is the content of each box (either a result or a temporary supposition), and the Wrapper is a Box class which contains a reference to the Data plus additional information such as row and column.
I did not say this originally because I want to know what to do in the more general situation.
Only ever returning a Wrapper by value helps ensure the caller doesn't hold onto it beyond the calling scope. In conjunction with a comment or documentation stating that "Wrappers are only valid for the lifetime of the Holder that created them", that should be enough (a small sketch of this appears at the end of this answer).
Either that, or you decide that ownership of a Data is transferred to the Wrapper when the Wrapper is created, and the Data is destroyed along with the Wrapper - but what if you want to destroy a Wrapper without deleting the Data? You'd need a method to optionally relinquish ownership of the Data back to the Holder.
Whichever you choose, you need to decide what owns (i.e. is responsible for the lifetime of) a Data, and when. Once you've done that you can, if you want, use smart pointers to help with that management - but they won't make the design decision for you, and you can't simply say "oh, I'll use smart pointers instead of thinking about it".
Remember, if you can't manage heap memory without smart pointers - you've got no business managing heap memory with them either!
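As a concrete illustration of the return-by-value approach, here is a minimal sketch under assumed names (Holder, Wrapper, Data, and box_at are invented for the example, not the poster's actual interface):

#include <vector>

struct Data { int value = 0; };

// Returned by value; holds a reference that is only valid while the Holder lives.
class Wrapper {
public:
    Wrapper(Data& d, int row, int col) : data_(d), row_(row), col_(col) {}
    int value() const { return data_.value; }
private:
    Data& data_;
    int row_, col_;
};

class Holder {
public:
    Holder() : cells_(81) {}
    // Wrappers are only valid for the lifetime of the Holder that created them.
    Wrapper box_at(int row, int col) { return Wrapper(cells_[row * 9 + col], row, col); }
private:
    std::vector<Data> cells_;   // the Holder owns all the Data
};

The intended usage is exactly the short-lived pattern from the question: call box_at inside a function, work with the returned Wrapper, and let it go out of scope before the Holder does.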
To elaborate on what you have already listed as options:
As you suggested, shared_ptr<Data> is a good option. Unless performance is an issue, you should use it.
Alternatively, never hold a pointer to Data in Wrapper. Instead, store a handle that can be used to obtain a pointer to the appropriate Data object. Before Data is accessed through the Wrapper, use the handle to get a pointer to the Data object. If the pointer is not valid, throw an exception; if it is valid, proceed along the happy path.
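A minimal sketch of the handle idea, assuming the Holder hands out integer handles and the Wrapper re-validates on every access; all names here (Handle, Holder::lookup, Wrapper::data) are invented for the example:

#include <stdexcept>
#include <unordered_map>

struct Data { int value = 0; };
using Handle = int;

class Holder {
public:
    Handle add(Data d) { data_[next_] = d; return next_++; }
    void remove(Handle h) { data_.erase(h); }
    // Returns nullptr if the handle no longer refers to a live Data object.
    Data* lookup(Handle h) {
        auto it = data_.find(h);
        return it == data_.end() ? nullptr : &it->second;
    }
private:
    std::unordered_map<Handle, Data> data_;
    Handle next_ = 0;
};

class Wrapper {
public:
    Wrapper(Holder& h, Handle handle) : holder_(h), handle_(handle) {}
    Data& data() {
        Data* p = holder_.lookup(handle_);   // re-validate on every access
        if (!p) throw std::runtime_error("Data no longer exists");
        return *p;
    }
private:
    Holder& holder_;
    Handle handle_;
};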
I'm working on a game (and my own custom engine). I have quite a few assets (textures, skeletal animations, etc.) that are used by multiple models and therefore get loaded multiple times.
At first my ambitions were smaller and the game simpler, so I could live with a little duplication, and a shared_ptr that took care of resource cleanup once the last instance was gone seemed like a good idea. As the game grew, more and more resources got loaded multiple times, and all the resulting OpenGL state changes slowed performance to a crawl. To solve this problem, I decided to write an asset manager class.
I'm using an unordered_map that maps the path to the file (a std::string) to a C++11 shared_ptr pointing to the actual loaded asset. If the file is already loaded, I return the pointer; if not, I call the appropriate Loader class. Plain and simple.
Unfortunately, I can't say the same about removal. One copy of the pointer always remains in the unordered_map. Currently, I iterate through the entire map every frame and call .unique() on each entry. Pointers that turn out to be unique get removed from the map, destroying the last copy and forcing the destructor to run and do the cleanup.
Iterating through hundreds or thousands of objects is not the most efficient thing to do (this is not premature optimization; I am in the optimization stage now). Is it possible to somehow hook into the shared pointer's behavior, for example to add an "onLastRemains" event? Should I iterate through only part of the unordered_map every frame (bucket by bucket)? Some other way?
I know I could write my own reference-counted asset implementation, but all my current code assumes that assets are shared pointers. Besides, shared pointers are excellent at what they do, so why reinvent the wheel?
Instead of storing shared_ptrs in the asset manager's map (see the edit below: use a regular map), store weak_ptrs. When you construct a new asset, create a shared_ptr with a custom deleter that calls a function in the asset manager telling it to remove this pointer from its map. The custom deleter should hold the iterator into the map for this asset and supply that iterator when asking the asset manager to erase the element. The reason a weak_ptr is used in the map is that any subsequent request for this element can still be given a shared_ptr (because you can make one from a weak_ptr), yet the asset manager doesn't actually share ownership of the asset, so the custom-deleter strategy works.
Edit: It was noted below that the above technique only works if you use a std::map, not a std::unordered_map. My recommendation would be to still do this, and switch to a regular map.
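A minimal sketch of this arrangement; the Asset type and the load_from_disk helper are stand-ins invented for the example, and the deleter captures the map iterator, which is why a std::map (whose iterators stay valid) is required:

#include <map>
#include <memory>
#include <string>

struct Asset { /* texture, animation, ... */ };
Asset* load_from_disk(const std::string& path) { return new Asset(); }   // stand-in for the real loader

class AssetManager {
public:
    std::shared_ptr<Asset> get(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end())
            if (auto existing = it->second.lock())   // still alive somewhere: reuse it
                return existing;

        // (Re)load and register; insert first so the deleter can capture the iterator.
        auto pos = cache_.emplace(path, std::weak_ptr<Asset>()).first;
        std::shared_ptr<Asset> sp(load_from_disk(path),
                                  [this, pos](Asset* a) {
                                      cache_.erase(pos);   // last owner gone: drop the map entry
                                      delete a;
                                  });
        pos->second = sp;   // the map holds only a weak reference
        return sp;
    }
private:
    std::map<std::string, std::weak_ptr<Asset>> cache_;   // std::map: iterators are not invalidated by other insertions
};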
Use a std::unique_ptr in your unordered_map of assets.
Expose a std::shared_ptr with a custom deleter that looks up said pointer in the unordered_map and either deletes it or moves it to a second container of things "to be deleted later". Remember, a std::shared_ptr does not have to actually own the data in question! The deleter can perform any arbitrary action, and it can even be stateful.
This lets you keep O(1) lookups for your assets and batch the cleanup (if you want to) instead of doing it in the middle of other scenes.
You can even support a temporary reference count of zero without deleting, as follows:
Have std::make_shared create the asset, and store that owning std::shared_ptr in the unordered_map of assets.
Expose custom std::shared_ptrs. These hold a raw T* as the data, while their deleter holds a copy of the owning std::shared_ptr from the asset map. Such a handle "deletes" itself by storing the name (which it also holds) in a central "to be deleted" list.
Then go over said "to be deleted" list and check whether the corresponding map entries are indeed unique() -- if not, it means someone else has in the meantime spawned another of the "child" std::shared_ptr<T>s.
The only downside is that the exposed std::shared_ptr no longer owns the asset directly: it is a handle whose custom deleter carries the real ownership, even though its type is still a plain std::shared_ptr<T> (the deleter is type-erased).
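A minimal sketch of this deferred-deletion variant, with names (AssetCache, to_be_deleted_) invented for the example; the map keeps the owning shared_ptr, and the handles given out merely queue the key for cleanup when their own count drops to zero:

#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct Asset { /* ... */ };
std::shared_ptr<Asset> load_asset(const std::string&) { return std::make_shared<Asset>(); }   // stand-in loader

class AssetCache {
public:
    std::shared_ptr<Asset> get(const std::string& path) {
        auto& owner = assets_[path];
        if (!owner) owner = load_asset(path);   // the map holds the owning pointer
        // Hand out a non-owning handle; its "deleter" keeps a copy of the owner
        // alive and queues the key for cleanup instead of deleting anything.
        return std::shared_ptr<Asset>(owner.get(),
            [this, path, keep = owner](Asset*) mutable {
                to_be_deleted_.push_back(path);   // remember for the next collect() pass
                keep.reset();                     // release this handle's hold on the owner
            });
    }

    // Call once per frame (or whenever convenient): drop assets nobody re-acquired.
    void collect() {
        for (const auto& path : to_be_deleted_) {
            auto it = assets_.find(path);
            if (it != assets_.end() && it->second.unique())   // no outstanding handles
                assets_.erase(it);
        }
        to_be_deleted_.clear();
    }

private:
    std::unordered_map<std::string, std::shared_ptr<Asset>> assets_;
    std::vector<std::string> to_be_deleted_;
};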
Perhaps something like this?
shared_ptr<asset> get_asset(string path) {
    static map<string, weak_ptr<asset>> cache;    // non-owning cache of loaded assets
    auto ap = cache[path].lock();                 // still loaded somewhere? reuse it
    if (!ap) cache[path] = ap = load_asset(path); // otherwise (re)load and remember it
    return ap;
}
I have a bunch of structs in C++. I'd like to save them to a file and load them up again. The problem is that a few of my structs contain pointers to base classes (/structs), so I'd need a way to figure out the actual type and create it. They really are just PODs: they all have public members and no constructors.
What is the easiest way to save them to a file and load them back? I have a LOT of structs, and the only types I use are ints, pointers, and C strings. I'm thinking I could do some macro hacks, but really I have no idea what I should do.
Have you tried the Boost serialization library?
Don't roll your own here - use something well-developed and tested. One idea is Protocol Buffers
The pointers pose a specific issue: I suppose that multiple structs may actually refer to the same object through a pointer, and that you'd like a single object to be recreated (and pointed to from all of them) when deserializing.
The first idea, to avoid boilerplate code, is to use a compile-time reflection tool:
BOOST_FUSION_ADAPT_STRUCT
BOOST_FUSION_ADAPT_STRUCT_NAMED
These two macros generate compile-time information about your structs so that you can then use them with Fusion algorithms, which cross the gap between compile time and run time.
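For reference, adapting a struct looks roughly like this (using the classic sequence-of-pairs syntax; the employee struct is just an illustrative example, not taken from the question):

#include <string>
#include <boost/fusion/include/adapt_struct.hpp>

struct employee {
    std::string name;
    int age;
};

// Makes 'employee' behave as a Fusion sequence, so generic serialization code
// can iterate over its members at compile time.
BOOST_FUSION_ADAPT_STRUCT(
    employee,
    (std::string, name)
    (int, age)
)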
Now, you need something that will be able to serialize and deserialize your data. Deserialization is usually a bit more difficult, though here you have the advantage of no polymorphism (which always makes things difficult).
Normally, on a first pass you identify the graph of objects to serialize, assign each of them an ID, and use this ID in lieu of the pointer when serializing. For deserializing, you use a three-column map (a sketch follows the list):
- the map is ID -> (pointer to the allocated object, list of pointer locations that could not be set yet)
- allocate all objects, filling the ID map with a pointer to the allocated object each time
- when you need to deserialize an ID, look it up in the map; if it is absent, put the address of your pointer in the corresponding list
- when you put the pointer to the allocated object into the map, take the time to fill in all the 'not set' pointers (and remove the list at the same time)
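A minimal sketch of that bookkeeping during deserialization, with made-up types (Object, Slot, and the two helper functions) standing in for whatever the real format provides:

#include <cstdint>
#include <unordered_map>
#include <vector>

struct Object { Object* other = nullptr; /* ...serialized fields... */ };
using Id = std::uint32_t;

struct Slot {
    Object* object = nullptr;        // set once the object has been allocated
    std::vector<Object**> pending;   // pointer fields still waiting for it
};

std::unordered_map<Id, Slot> table;

// Called when a pointer field referring to 'id' is read from the stream.
void resolve_pointer(Id id, Object** where) {
    Slot& s = table[id];
    if (s.object) *where = s.object;           // already allocated: patch immediately
    else          s.pending.push_back(where);  // not yet: remember where to patch later
}

// Called when the object with this ID is itself deserialized.
void register_object(Id id, Object* obj) {
    Slot& s = table[id];
    s.object = obj;
    for (Object** where : s.pending) *where = obj;   // back-patch earlier references
    s.pending.clear();
}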
Of course, it's better to have a framework handle this for you. You may try out s11n; if I remember correctly, it handles cycles of references.
I just came across a situation where I need to store heap-allocated pointers (to a class B) in an STL container. The class that owns the privately-held container (class A) also creates the instances of B. Class A will be able to return const pointers to the B instances for clients of A.
Now, does it matter whether these pointers are stored in a set or a vector? I thought of using a set just to verify that no duplicates are stored, but since addresses are stored, two B pointers with the same data can still both be stored (unless I provide a comparison class that compares the data, I presume).
Any thoughts on this (admittedly vague) subject? What are the pros and cons of the alternatives? Are smart pointers something to look into?
Please ask me if anything important is unclear - thank you!
There's nothing wrong with storing pointers in a standard container - be it a vector, set, map, or whatever. You just have to be aware of who owns that memory and make sure it's released appropriately.
When choosing a container, pick the one that makes the most sense for your needs. A vector is great for random access and appending, but not so great for inserting elsewhere in the container. A list handles insertions extremely well, but it doesn't have random access. A set ensures that there are no duplicates and keeps its contents sorted (though the sorting isn't very useful if the set holds pointers and you don't supply a comparator function), whereas a map is a set of key-value pairs, so sorting and access are done by key. And so on. Every container has its pros and cons, and which is best for a particular situation depends entirely on that situation.
As for pointers: again, having pointers in containers is fine. The issue you need to worry about is who owns the memory and is therefore responsible for freeing it. If a particular object clearly owns what a pointer points to, then that object should probably be the one to free it. If it's essentially the container that owns the memory, then you need to make sure that you delete all of the pointers in the container before the container is destroyed.
If you are concerned that there are multiple pointers to the same data floating around, or there is no clear owner for a particular chunk of memory, then smart pointers are a good solution. Boost's shared_ptr would probably be a good one to use, and shared_ptr is part of C++0x. Many would suggest that you always use shared pointers, but there is some overhead involved, and whether they're best for your particular application depends entirely on your application.
Ultimately, you need to be aware of the strengths and weaknesses of the various container types and determine which container is best for whatever you're doing. The same goes for pointer management. You need to write your program in such a way that it's clear who owns a particular chunk of memory, and make sure that that owner frees it when appropriate. Shared pointers are just one solution for that (albeit an excellent one). What the best solution is depends on the particulars of your program.
Why would there be any duplicates in the first place? If class A is the sole entity responsible for creating the instances, and it holds the container privately, meaning there's no way for others to mutate it, it seems to me that there should be no duplicates at all. And if there are, won't that be remediable by a check before adding the pointer to the vector?
I don't think it matters much what kind of container you store the pointers in; containers don't really manipulate their data, they only provide access to it in different ways. So it's up to you :)
If you need to store pointers in STL containers, use shared_ptr.
Now, the set sounds completely wrong. What are you going to do with it?
If you need to add and remove, then list.
If you need to iterate over range, or all, then vector.
If you need to access specific, knowing a key, then map.
Take a look at others as well. One size doesn't fit all.
My answer is that any decision you make has to be made with your goal in mind. If you need the 'no duplicates allowed' rule that a set enforces, then use a set. If not, then you might want a vector, or almost any container might do the trick.
As for smart pointers: yes, they are really, really useful. Should they be used here? I don't know; once again, I don't know what your end goal is or what problem you are trying to solve with them.
Basically it comes down to this. If I said "I want to use a hammer. What do you think of that?" you would probably say "Well, what for? I know hammers are pretty good for nail-and-wood scenarios, but they could also be used as a tool to hurt people, or maybe as a book stand. Look, just wait a second - what is this for again?" The problem is that I have not really said why I want to use a hammer. I have not said what goal I am trying to achieve.
So if you have some sort of overall goal, why not let us know? Then it will be obvious whether you are using the right tools for the job, and we can help you more.
In my opinion, stick with vector unless you have a real reason not to. A set comes with some runtime overhead, as well as quite a large semantic overhead, compared to a vector.
My application problem is the following:
I have a large structure, foo. Because these structures are large, and for memory-management reasons, we do not wish to delete them when processing of the data is complete.
We are storing them in std::vector<boost::shared_ptr<foo>>.
My question is related to knowing when all processing is complete. Our first decision was that we do not want any of the other application code to mark a 'complete' flag in the structure, because there are multiple execution paths through the program and we cannot predict which one will be the last.
So in our implementation, once processing is complete, we delete all copies of the boost::shared_ptr<foo> except for the one in the vector. This drops the reference count in the shared_ptr to 1. Is it practical to check whether shared_ptr::use_count() is equal to 1 to know when all other parts of the app are done with the data?
One additional reason I'm asking is that the Boost documentation for shared_ptr recommends not using use_count() in production code.
Edit:
What I did not say is that when we need a new foo, we scan the vector of foo pointers looking for one that is not currently in use and reuse it for the next round of processing. This is why I was thinking that a reference count of 1 would be a safe way to ensure that a particular foo object is no longer in use.
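For concreteness, the scan described above might look roughly like this (using unique() rather than comparing use_count() to 1, as one of the answers below suggests; the foo contents and the pool name are placeholders):

#include <boost/shared_ptr.hpp>
#include <vector>

struct foo { /* large structure */ };

std::vector<boost::shared_ptr<foo>> pool;

// Find a foo that no other part of the application still references,
// i.e. the pool's copy is the only remaining shared_ptr to it.
// Note: this check is not reliable if other threads can copy these pointers concurrently.
boost::shared_ptr<foo> acquire_free_foo() {
    for (std::vector<boost::shared_ptr<foo>>::iterator it = pool.begin(); it != pool.end(); ++it)
        if (it->unique())                  // reference count is exactly 1
            return *it;                    // handing out this copy bumps the count to 2
    return boost::shared_ptr<foo>();       // none free; the caller must handle this
}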
My immediate reaction (and I'll admit it's no more than that) is that it sounds like you're trying to get the effect of a pool allocator of some sort. You might be better off overloading operator new and operator delete for the class to get the effect you want a bit more directly. With something like that, you can probably just use a shared_ptr as normal, and the other work you want delayed will be handled in operator delete for that class.
That leaves a more basic question: what are you really trying to accomplish with this? From a memory-management viewpoint, one common wish is to allocate memory for a large number of objects at once and, after the entire block is empty, release the whole block at once. If you're trying to do something along those lines, it's almost certainly easier to accomplish by overloading new and delete than by playing games with shared_ptr's use_count.
Edit: based on your comment, overloading new and delete for the class sounds like the right thing to do. If anything, integration into your existing code will probably be easier; in fact, you can often do it completely transparently.
The general idea for the allocator is pretty much what you've outlined in your edited question: have a structure (bitmaps and linked lists are both common) to keep track of your free objects. When new needs to allocate an object, it can scan the bit vector or look at the head of the linked list of free objects and return its address.
This is one case where linked lists work out quite well -- you (usually) don't have to worry about memory usage, because you store the links right inside the free objects, and you (virtually) never have to walk the list, because when you need to allocate an object you just grab the first item on the list.
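A minimal sketch of that free-list idea via class-level operator new/delete; the class name, payload size, and fallback policy are arbitrary choices made for the example:

#include <cstddef>
#include <new>

class foo {
public:
    // Class-level allocation: recycle freed objects instead of returning them to the heap.
    static void* operator new(std::size_t size) {
        if (free_list_) {                    // reuse the most recently freed slot
            void* slot = free_list_;
            free_list_ = free_list_->next;
            return slot;
        }
        return ::operator new(size);         // otherwise fall back to the global heap
    }

    static void operator delete(void* p) noexcept {
        if (!p) return;
        // Store the link inside the dead object itself, so the free list costs no extra memory.
        FreeNode* node = static_cast<FreeNode*>(p);
        node->next = free_list_;
        free_list_ = node;
    }

private:
    struct FreeNode { FreeNode* next; };
    static FreeNode* free_list_;

    char payload_[256];   // stand-in for the large members of the real structure
};

foo::FreeNode* foo::free_list_ = nullptr;

With this in place, code that uses shared_ptr<foo>(new foo) keeps working unchanged: when the last shared_ptr goes away, delete routes through foo::operator delete and the slot simply returns to the free list.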
This sort of thing is particularly common with small objects, so you might want to look at the chapter in Modern C++ Design on its small-object allocator (and an article or two since then by Andrei Alexandrescu about his newer ideas on how to do that sort of thing). There's also the Boost.Pool allocator, which is generally at least somewhat similar.
If you want to know whether or not the use count is 1, use the unique() member function.
I would say your application should have some method that eliminates all references to the Foo from other parts of the app, and that method should be used instead of checking use_count(). Besides, if use_count() is greater than 1, what would your program do? You shouldn't be relying on shared_ptr's features to eliminate all references; your application architecture should be able to do that. As a final check before removing it from the vector, you could assert(unique()) to verify it really is being released.
I think you can use shared_ptr's custom deleter functionality to call a particular function when the last copy is released. That way, you're not using use_count() at all.
You would need to hold something other than a copy of the shared_ptr in your vector, so that the shared_ptr only tracks the outstanding processing.
Boost has several examples of custom deleters in the shared_ptr documentation.
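A minimal sketch of that idea, assuming the pool keeps plain foo* entries (so its own bookkeeping never contributes to the count the deleter is waiting on); the on_all_processing_done callback is invented for the example:

#include <boost/shared_ptr.hpp>
#include <iostream>

struct foo { /* large structure */ };

// Called exactly once, when the last outstanding shared_ptr<foo> is released.
void on_all_processing_done(foo* f) {
    std::cout << "foo " << f << " is free for reuse\n";
    // Note: do NOT delete f here; the pool still owns the object.
}

// Hand a foo out for processing: every copy of the returned pointer shares one
// reference count, and the deleter fires when that count drops to zero.
boost::shared_ptr<foo> check_out(foo* pooled) {
    return boost::shared_ptr<foo>(pooled, &on_all_processing_done);
}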
I would suggest that instead of trying to use the shared_ptr's use_count to keep track, it might be better to implement your own usage counter. This way you will have full control over it, rather than relying on shared_ptr's count which, as you rightly note, is not recommended for this. You can also pre-set your own counter to account for the number of threads you know will need to act on the data, rather than relying on them all being initialized at the beginning to get their copies of the structure.