Container of Pointers vs Container of Objects - Performance - c++

I was wondering if there is any difference in performance when you compare/contrast
A) Allocating objects on the heap, putting pointers to those objects in a container, operating on the container elsewhere in the code
Ex:
std::list<SomeObject*> someList;
// Somewhere else in the code
SomeObject* foo = new SomeObject(param1, param2);
someList.push_back(foo);
// Somewhere else in the code
std::list<SomeObject*>::iterator itr = someList.begin();
while (itr != someList.end())
{
    (*itr)->DoStuff();
    ++itr;
}
B) Creating an object, putting it in a container, operating on that container elsewhere in the code
Ex:
std::list<SomeObject> someList;
// Somewhere else in the code
SomeObject newObject(param1, param2);
someList.push_back(newObject);
// Somewhere else in the code
std::list<SomeObject>::iterator itr = someList.begin();
while (itr != someList.end())
{
    itr->DoStuff();
    ++itr;
}
Assuming the pointers are all deallocated correctly and everything works fine, my question is...
If there is a difference, what would yield better performance, and how great would the difference be?

There is a performance hit when inserting objects instead of pointers to objects.
std::list, like the other standard containers, makes a copy of the parameter that you store (for std::map both key and value are copied).
As your someList is a std::list the following line copies your object:
Foo foo;
someList.push_back(foo); // copy foo object
It will get copied again when you retrieve it from the list. So you are making copies of the whole object, compared to merely copying a pointer when using:
Foo * foo = new Foo();
someList.push_back(foo); // copy of foo*
You can double check by inserting print statements into Foo's constructor, destructor, copy constructor.
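For instance, a minimal sketch of that check (a hypothetical Foo, just for counting):
#include <iostream>
#include <list>

struct Foo {
    Foo() { std::cout << "ctor\n"; }
    Foo(const Foo&) { std::cout << "copy ctor\n"; }
    ~Foo() { std::cout << "dtor\n"; }
};

int main() {
    std::list<Foo> byValue;
    Foo foo;
    byValue.push_back(foo);        // prints "copy ctor"

    std::list<Foo*> byPointer;
    byPointer.push_back(new Foo);  // prints "ctor"; only the pointer is copied
    delete byPointer.front();
    return 0;
}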
EDIT: As mentioned in the comments, pop_front does not return anything. You usually get a reference to the front element with front, then call pop_front to remove the element from the list:
Foo * fooB = someList.front(); // copy of foo*
someList.pop_front();
OR
Foo fooB = someList.front(); // front() returns a reference to the element, but if you
someList.pop_front();        // are going to pop it from the list you need to keep a
                             // copy, so Foo fooB = someList.front() makes a copy

Like most performance questions, this doesn't have one clear cut answer.
For one thing, it depends on what exactly you're doing with the list. Pointers might make it easier to do various operations (like sorting). That's because comparing pointers and swapping pointers is probably going to be faster than comparing/swapping SomeObject (of course, it depends on the implementation of SomeObject).
On the other hand, dynamic memory allocation tends to be worse than allocating on the stack. So, assuming you have enough memory on the stack for all the objects, that's another thing to consider.
In the end, I would personally recommend the best piece of advice I've ever gotten: It's pointless trying to guess what will perform better. Code it the way that makes the most sense (easiest to implement/maintain). If, and only if, you later discover there is a performance problem, run a profiler and figure out why. Chances are, most programs won't need all these optimizations, and this will turn out to be a moot point.

It depends on how you use the list. Do you just fill it with stuff and do lookups, or do you insert and remove data regularly? Lookups may be marginally faster without pointers, while adding and removing elements will be faster with pointers.

With objects it is going to be a memberwise copy (thus new object creation and copying of members), assuming there aren't any copy constructors or operator= overloads. Therefore, using pointers is more efficient; std::auto_ptr or Boost's smart pointers are better still, but that is beyond the scope of this question.
If you still want object syntax, use a reference.

Some additional things to consider (You have already been made aware of the copy semantics of STL containers):
Are your objects really smaller than pointers to them? This becomes more relevant if you use any kind of smart pointer as those have a tendency to be larger.
Copy operations are (often?) optimized to use memcpy() by the compiler. This is probably not true for smart pointers in particular.
Additional dereferencing caused by pointers
All the things I have mentioned are micro-optimization considerations, and I'd discourage worrying about them up front. On the other hand: a lot of my claims would need verification and would make for interesting test cases. Feel free to benchmark them.
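If you do benchmark, a rough sketch of such a test might look like the following (a hypothetical Big type; the numbers will vary wildly with compiler, allocator, and element size):
#include <chrono>
#include <iostream>
#include <list>

struct Big { char data[256]; };

template <typename F>
long long microseconds(F f) {
    auto start = std::chrono::steady_clock::now();
    f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}

int main() {
    const int N = 100000;
    std::cout << "by value:   " << microseconds([&] {
        std::list<Big> l;
        for (int i = 0; i < N; ++i) l.push_back(Big());
    }) << " us\n";
    std::cout << "by pointer: " << microseconds([&] {
        std::list<Big*> l;
        for (int i = 0; i < N; ++i) l.push_back(new Big());
        for (Big* p : l) delete p;  // don't forget cleanup
    }) << " us\n";
    return 0;
}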


Why have move semantics?

Let me preface by saying that I have read some of the many questions already asked regarding move semantics. This question is not about how to use move semantics, it is asking what the purpose of it is - if I am not mistaken, I do not see why move semantics is needed.
Background
I was implementing a heavy class, which, for the purposes of this question, looked something like this:
class B;
class A
{
private:
std::array<B, 1000> b;
public:
// ...
}
When it came time to make a move assignment operator, I realized that I could significantly optimize the process by changing the b member to std::array<B, 1000> *b; - then movement could just be a deletion and pointer swap.
This led me to the following thought: now, shouldn't all non-primitive type members be pointers, to speed up movement (corrected below [1] [2])? (There is a case to be made for situations where memory should not be dynamically allocated, but in those cases optimizing movement is not an issue since there is no way to do so.)
Here is where I had the following realization - why create a class A which really just houses a pointer b so swapping later is easier when I can simply make a pointer to the entire A class itself. Clearly, if a client expects movement to be significantly faster than copying, the client should be OK with dynamic memory allocation. But in this case, why does the client not just dynamically allocate the whole A class?
The Question
Can't the client already take advantage of pointers to do everything move semantics gives us? If so, then what is the purpose of move semantics?
Move semantics:
std::string f()
{
std::string s("some long string");
return s;
}
int main()
{
// super-fast pointer swap!
std::string a = f();
return 0;
}
Pointers:
std::string *f()
{
std::string *s = new std::string("some long string");
return s;
}
int main()
{
// still super-fast pointer swap!
std::string *a = f();
delete a;
return 0;
}
And here's the strong assignment that everyone says is so great:
template<typename T>
T& strong_assign(T *&t1, T *&t2)
{
delete t1;
// super-fast pointer swap!
t1 = t2;
t2 = nullptr;
return *t1;
}
#define rvalue_strong_assign(a, b) (auto ___##b = b, strong_assign(a, &___##b))
Fine - the latter in both examples may be considered "bad style" - whatever that means - but is it really worth all the trouble with the double ampersands? If an exception might be thrown before delete a is called, that's still not a real problem - just make a guard or use unique_ptr.
Edit [1] I just realized this wouldn't be necessary with classes such as std::vector which use dynamic memory allocation themselves and have efficient move methods. This just invalidates a thought I had - the question below still stands.
Edit [2] As mentioned in the discussion in the comments and answers below this whole point is pretty much moot. One should use value semantics as much as possible to avoid allocation overhead since the client can always move the whole thing to the heap if needed.
I thoroughly enjoyed all the answers and comments! And I agree with all of them. I just wanted to stick in one more motivation that no one has yet mentioned. This comes from N1377:
Move semantics is mostly about performance optimization: the ability
to move an expensive object from one address in memory to another,
while pilfering resources of the source in order to construct the
target with minimum expense.
Move semantics already exists in the current language and library to a
certain extent:
copy constructor elision in some contexts
auto_ptr "copy"
list::splice
swap on containers
All of these operations involve transferring resources from one object
(location) to another (at least conceptually). What is lacking is
uniform syntax and semantics to enable generic code to move arbitrary
objects (just as generic code today can copy arbitrary objects). There
are several places in the standard library that would greatly benefit
from the ability to move objects instead of copy them (to be discussed
in depth below).
I.e. in generic code such as vector::erase, one needs a single unified syntax to move values to plug the hole left by the erased value. One can't use swap because that would be too expensive when the value_type is int. And one can't use copy assignment as that would be too expensive when value_type is A (the OP's A). Well, one could use copy assignment, after all that's what we did in C++98/03, but it is ridiculously expensive.
shouldn't all non-primitive type members be pointers to speed up movement
This would be horribly expensive when the member type is complex<double>. Might as well color it Java.
Your example gives it away: your code is not exception-safe, and it makes use of the free-store (twice), which can be nontrivial. To use pointers, in many/most situations you have to allocate stuff on the free store, which is much slower than automatic storage, and does not allow for RAII.
They also let you more efficiently represent non-copyable resources, like sockets.
Move semantics aren't strictly necessary, as you can see: C++ existed for a long while without them. They are simply a better way to represent certain concepts, and an optimization.
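For example, a minimal sketch of a move-only resource wrapper (hypothetical; the actual close() call is elided):
#include <utility>

// A hypothetical move-only wrapper around a socket descriptor.
class Socket {
    int fd;
public:
    explicit Socket(int fd) : fd(fd) {}
    Socket(const Socket&) = delete;            // copying a socket makes no sense
    Socket& operator=(const Socket&) = delete;
    Socket(Socket&& other) : fd(other.fd) { other.fd = -1; }  // transfer ownership
    Socket& operator=(Socket&& other) { std::swap(fd, other.fd); return *this; }
    ~Socket() { /* close(fd) if fd != -1 */ }
};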
Can't the client already take advantage of pointers to do everything move semantics gives us? If so, then what is the purpose of move semantics?
Your second example gives one very good reason why move semantics is a good thing:
std::string *f()
{
std::string *s = new std::string("some long string");
return s;
}
int main()
{
// still super-fast pointer swap!
std::string *a = f();
delete a;
return 0;
}
Here, the client has to examine the implementation to figure out who is responsible for deleting the pointer. With move semantics, this ownership issue won't even come up.
If an exception might be thrown before delete a is called, that's still not a real problem - just make a guard or use unique_ptr.
Again, the ugly ownership issue shows up if you don't use move semantics. By the way, how would you implement unique_ptr without move semantics?
I know about auto_ptr and there are good reasons why it is now deprecated.
is it really worth all the trouble with the double ampersands?
True, it takes some time to get used to it. After you are familiar and comfortable with it, you will be wondering how you could live without move semantics.
Your string example is great. The short string optimization means that short std::strings do not exist in the free store: instead they exist in automatic storage.
The new/delete version means that you force every std::string into the free store. The move version only puts large strings into the free store, and small strings stay (and are possibly copied) in automatic storage.
On top of that, your pointer version lacks exception safety, as it has non-RAII resource handles. Even if you do not use exceptions, naked pointer resource handles basically force single-exit-point control flow to manage cleanup. Worse, the use of naked pointer ownership leads to resource leaks and dangling pointers.
So the naked pointer version is worse in piles of ways.
Move semantics means you can treat complex objects as normal values. You move when you do not want duplicate state, and copy otherwise. Nearly-normal types that cannot be copied can expose move only (unique_ptr); others can optimize for it (shared_ptr). Data stored in containers, like std::vector, can now include such abnormal types, because std::vector is move-aware. A std::vector of std::vector goes from ridiculously inefficient and hard to use to easy and fast at the stroke of a standard version.
Pointers place the resource-management burden on the clients, while good C++11 classes handle that problem for you. Move semantics makes this both easier to maintain and far less error-prone.
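As a small illustration of the std::vector-of-std::vector point (a sketch; the sizes are arbitrary):
#include <utility>
#include <vector>

int main() {
    std::vector< std::vector<int> > outer;
    std::vector<int> row(1000000, 42);
    outer.push_back(std::move(row)); // C++11: a few pointer copies, no element copies
    // In C++98/03 the same push_back would have copied all one million ints.
    return 0;
}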

C++: pointers and abstract array classes

I am relatively new to pointers and have written this merge function. Is this effective use of pointers? And secondly, the *two variable should not be deleted when they are merged, right? That would be the client's task, not the implementer's?
VectorPQueue *VectorPQueue::merge(VectorPQueue *one, VectorPQueue *two) {
int twoSize = two->size();
if (one->size() != 0) {
for (int i = 0; i < twoSize;i++)
{
one->enqueue(two->extractMin());
}
}
return one;
}
The merge function is called like this:
one->merge(one, two);
passing it these two objects to merge:
PQueue *one = PQueue::createPQueue(PQueue::UnsortedVector);
PQueue *two = PQueue::createPQueue(PQueue::UnsortedVector);
In your case pointers are completely unnecessary. You can simply use references.
It is also unnecessary to pass in the argument on which the member function is called. You can get the object on which a member function is called with the this pointer.
/// Merge this with other.
void VectorPQueue::merge(VectorPQueue& other) {
// impl
}
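For completeness, one possible body for that sketch, reusing the enqueue/extractMin/size interface from the question:
void VectorPQueue::merge(VectorPQueue& other) {
    int otherSize = other.size();
    for (int i = 0; i < otherSize; i++)
        enqueue(other.extractMin());
}
The call site then changes from one->merge(one, two) to one->merge(*two).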
In general: Implementing containers with inheritance is not really the preferred style. Have a look at the standard library and how it implements abstractions over sequences (iterators).
At first sight, I cannot see any pointer-related problems. Although I'd prefer to use references instead, and make merge a member function of VectorPQueue so I don't have to pass the first argument (as others already pointed out). One more thing which confuses me is the check for one->size() != 0 - what would be the problem if one is empty? The code below would still correctly insert two into one, as it depends only on two's size.
Regarding deletion of two:
that would be the client's task, not the implementer's
Well, it's up to you how you want do design your interface. But since the function only adds two's elements to one, I'd say it should not delete it. Btw, I think a better name for this method would be addAllFrom() or something like this.
Regarding pointers in general:
I strongly suggest you take a look into smart pointers. These are a common technique in C++ to reduce memory management effort. Using bare pointers and managing them manually via new/delete is very error-prone, hard to make strongly exception-safe, will almost guarantee you memory leaks etc. Smart pointers on the other hand automatically delete their contained pointers as soon as they are not needed any more. For illustrative purposes, the C++ std lib has auto_ptr (unique_ptr and shared_ptr if your compiler supports C++ 11). It's used like this:
{ // Beginning of scope
std::auto_ptr<PQueue> one(PQueue::createPQueue(PQueue::UnsortedVector));
// Do some work with one...:
one->someFunction();
// ...
} // End of scope - one will automatically be deleted
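For reference, a sketch of the same pattern with a C++11 compiler (std::unique_ptr instead of the now-deprecated std::auto_ptr, reusing the PQueue factory from the question):
{ // Beginning of scope
    std::unique_ptr<PQueue> one(PQueue::createPQueue(PQueue::UnsortedVector));
    one->someFunction();
} // End of scope - one will automatically be deleted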
My personal rules of thumb: Only use pointers wrapped in smart pointers. Only use heap allocated objects at all, if:
they have to live longer than the scope in which they are created, and a copy would be too expensive (C++ 11 luckily has move semantics, which eliminate a lot of such cases)
I have to call virtual functions on them
In all other cases, I try to use stack allocated objects and STL containers as much as possible.
All this might seem like a lot at first if you're starting with C++, and it's totally OK (maybe even necessary) to try to fully understand pointers before you venture into smart pointers etc., but it saves a lot of time spent debugging later on. I'd also recommend reading a few books on C++ - I was actually thinking I understood most of C++, until I read my first book :)

malloc & placement new vs. new

I've been looking into this for the past few days, and so far I haven't really found anything convincing other than dogmatic arguments or appeals to tradition (i.e. "it's the C++ way!").
If I'm creating an array of objects, what is the compelling reason (other than ease) for using:
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=new my_object [MY_ARRAY_SIZE];
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i]=my_object(i);
over
#define MEMORY_ERROR -1
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
if (my_array==NULL) throw MEMORY_ERROR;
for (int i=0;i<MY_ARRAY_SIZE;++i) new (my_array+i) my_object (i);
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
delete [] my_array;
and the other you clean up with:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
free(my_array);
I'm out for a compelling reason. Appeals to the fact that it's C++ (not C) and therefore malloc and free shouldn't be used isn't -- as far as I can tell -- compelling as much as it is dogmatic. Is there something I'm missing that makes new [] superior to malloc?
I mean, as best I can tell, you can't even use new [] -- at all -- to make an array of things that don't have a default, parameterless constructor, whereas the malloc method can be used that way.
I'm out for a compelling reason.
It depends on how you define "compelling". Many of the arguments you have thus far rejected are certainly compelling to most C++ programmers, as your suggestion is not the standard way to allocate naked arrays in C++.
The simple fact is this: yes, you absolutely can do things the way you describe. There is no reason that what you are describing will not function.
But then again, you can have virtual functions in C. You can implement classes and inheritance in plain C, if you put the time and effort into it. Those are entirely functional as well.
Therefore, what matters is not whether something can work. But more on what the costs are. It's much more error prone to implement inheritance and virtual functions in C than C++. There are multiple ways to implement it in C, which leads to incompatible implementations. Whereas, because they're first-class language features of C++, it's highly unlikely that someone would manually implement what the language offers. Thus, everyone's inheritance and virtual functions can cooperate with the rules of C++.
The same goes for this. So what are the gains and the losses from manual malloc/free array management?
I can't say that any of what I'm about to say constitutes a "compelling reason" for you. I rather doubt it will, since you seem to have made up your mind. But for the record:
Performance
You claim the following:
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
This statement suggests that the efficiency gain is primarily in the construction of the objects in question. That is, which constructors are called. The statement presupposes that you don't want to call the default constructor; that you use a default constructor just to create the array, then use the real initialization function to put the actual data into the object.
Well... what if that's not what you want to do? What if what you want to do is create an empty array, one that is default constructed? In this case, this advantage disappears entirely.
Fragility
Let's assume that each object in the array needs to have a specialized constructor or something called on it, such that initializing the array requires this sort of thing. But consider your destruction code:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
For a simple case, this is fine. You have a macro or const variable that says how many objects you have. And you loop over each element to destroy the data. That's great for a simple example.
Now consider a real application, not an example. How many different places will you be creating an array in? Dozens? Hundreds? Each and every one will need to have its own for loop for initializing the array. Each and every one will need to have its own for loop for destroying the array.
Mis-type this even once, and you can corrupt memory. Or not delete something. Or any number of other horrible things.
And here's an important question: for a given array, where do you keep the size? Do you know how many items you allocated for every array that you create? Each array will probably have its own way of knowing how many items it stores. So each destructor loop will need to fetch this data properly. If it gets it wrong... boom.
And then we have exception safety, which is a whole new can of worms. If one of the constructors throws an exception, the previously constructed objects need to be destructed. Your code doesn't do that; it's not exception-safe.
Now, consider the alternative:
delete[] my_array;
This can't fail. It will always destroy every element. It tracks the size of the array, and it's exception-safe. So it is guaranteed to work. It can't not work (as long as you allocated it with new[]).
Of course, you could say that you could wrap the array in an object. That makes sense. You might even template the object on the element type of the array. That way, all the destructor code is the same. The size is contained in the object. And maybe, just maybe, you realize that the user should have some control over the particular way the memory is allocated, so that it's not just malloc/free.
Congratulations: you just re-invented std::vector.
Which is why many C++ programmers don't even type new[] anymore.
Flexibility
Your code uses malloc/free. But let's say I'm doing some profiling. And I realize that malloc/free for certain frequently created types is just too expensive. I create a special memory manager for them. But how to hook all of the array allocations to them?
Well, I have to search the codebase for any location where you create/destroy arrays of these types. And then I have to change their memory allocators accordingly. And then I have to continuously watch the codebase so that someone else doesn't change those allocators back or introduce new array code that uses different allocators.
If I were instead using new[]/delete[], I could use operator overloading. I simply provide an overload for operators new[] and delete[] for those types. No code has to change. It's much more difficult for someone to circumvent these overloads; they have to actively try to. And so forth.
So I get greater flexibility and reasonable assurance that my allocators will be used where they should be used.
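A minimal sketch of that class-specific hook (pool_alloc/pool_free are hypothetical placeholders standing in for the custom memory manager):
#include <cstddef>
#include <cstdlib>

// Hypothetical pool hooks; a real memory manager would live here.
// (A production operator new[] should throw std::bad_alloc on failure.)
void* pool_alloc(std::size_t n) { return std::malloc(n); }
void  pool_free(void* p)        { std::free(p); }

struct my_object {
    int value;
    // Class-specific overloads: new my_object[N] / delete[] now route
    // through the pool without touching any call sites.
    static void* operator new[](std::size_t size) { return pool_alloc(size); }
    static void operator delete[](void* p) { pool_free(p); }
};

int main() {
    my_object* a = new my_object[10]; // uses pool_alloc
    delete[] a;                       // uses pool_free
    return 0;
}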
Readability
Consider this:
my_object *my_array = new my_object[MY_ARRAY_SIZE];
for (int i=0; i<MY_ARRAY_SIZE; ++i)
    my_array[i] = my_object(i);
//... Do stuff with the array
delete [] my_array;
Compare it to this:
my_object *my_array = (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE);
if(my_array==NULL)
throw MEMORY_ERROR;
int i;
try
{
for(i=0; i<MY_ARRAY_SIZE; ++i)
new(my_array+i) my_object(i);
}
catch(...) //Exception safety.
{
for(; i>0; --i) //The i-th object was not successfully constructed
    my_array[i-1].~my_object();
throw;
}
//... Do stuff with the array
for(int i=MY_ARRAY_SIZE; i>0; --i)
    my_array[i-1].~my_object();
free(my_array);
Objectively speaking, which one of these is easier to read and understand what's going on?
Just look at this statement: (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE). This is a very low level thing. You're not allocating an array of anything; you're allocating a hunk of memory. You have to manually compute the size of the hunk of memory to match the size of the object * the number of objects you want. It even features a cast.
By contrast, new my_object[10] tells the story. new is the C++ keyword for "create instances of types". my_object[10] is a 10 element array of my_object type. It's simple, obvious, and intuitive. There's no casting, no computing of byte sizes, nothing.
The malloc method requires learning how to use malloc idiomatically. The new method requires just understanding how new works. It's much less verbose and much more obvious what's going on.
Furthermore, after the malloc statement, you do not in fact have an array of objects. malloc simply returns a block of memory that you have told the C++ compiler to pretend is a pointer to an object (with a cast). It isn't an array of objects, because objects in C++ have lifetimes. And an object's lifetime does not begin until it is constructed. Nothing in that memory has had a constructor called on it yet, and therefore there are no living objects in it.
my_array at that point is not an array; it's just a block of memory. It doesn't become an array of my_objects until you construct them in the next step. This is incredibly unintuitive to a new programmer; it takes a seasoned C++ hand (one who probably learned from C) to know that those aren't live objects and should be treated with care. The pointer does not yet behave like a proper my_object*, because it doesn't point to any my_objects yet.
By contrast, you do have living objects in the new[] case. The objects have been constructed; they are live and fully-formed. You can use this pointer just like any other my_object*.
Fin
None of the above says that this mechanism isn't potentially useful in the right circumstances. But it's one thing to acknowledge the utility of something in certain circumstances. It's quite another to say that it should be the default way of doing things.
If you do not want to get your memory initialized by implicit constructor calls, and just need an assured memory allocation for placement new then it is perfectly fine to use malloc and free instead of new[] and delete[].
The compelling reasons for using new over malloc are that new provides implicit initialization through constructor calls, saving you additional memset or related function calls after a malloc, and that with new you do not need to check for NULL after every allocation; enclosing exception handlers will do the job, saving you the redundant error checking that malloc requires.
Neither of these compelling reasons applies to your usage.
Which one is more performance-efficient can only be determined by profiling; there is nothing wrong with the approach you have now. On a side note, I don't see a compelling reason to use malloc over new[] either.
I would say neither.
The best way to do it would be:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
for (int i=0; i<MY_ARRAY_SIZE; ++i)
{
    my_array.push_back(my_object(i));
}
This is because internally vector is probably doing the placement new for you. It also manages all the other memory-management problems that you are not taking into account.
You've reimplemented new[]/delete[] here, and what you have written is pretty common in developing specialized allocators.
The overhead of calling simple constructors will take little time compared to the allocation. It's not necessarily 'much more efficient' -- it depends on the complexity of the default constructor and of operator=.
One nice thing that has not been mentioned yet is that the array's size is known by new[]/delete[]. delete[] just does the right thing and destructs all elements when asked. Dragging an additional variable (or three) around so you know exactly how to destroy the array is a pain. A dedicated collection type would be a fine alternative, however.
new[]/delete[] are preferable for convenience. They introduce little overhead, and could save you from a lot of silly errors. Are you compelled enough to take away this functionality and use a collection/container everywhere to support your custom construction? I've implemented this kind of allocator -- the real mess is creating functors for all the construction variations you need in practice. At any rate, you often get more exact execution at the expense of a program that is more difficult to maintain than the idioms everybody knows.
IMHO they're both ugly; it's better to use vectors. Just make sure to allocate the space in advance for performance.
Either:
std::vector<my_object> my_array(MY_ARRAY_SIZE);
Or, if you want to initialize all entries with a copy of a given value:
my_object basic;
std::vector<my_object> my_array(MY_ARRAY_SIZE, basic);
Or if you don't want to construct the objects but do want to reserve the space:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
Then if you need to access it as a C-style pointer array (just make sure you don't add elements while keeping the old pointer around, but you couldn't do that with regular C-style arrays anyway):
my_object* carray = &my_array[0];
my_object* carray = &my_array.front(); // Or the C++ way
Access individual elements:
my_object value = my_array[i]; // The non-safe c-like faster way
my_object value = my_array.at(i); // With bounds checking, throws range exception
Typedef for pretty:
typedef std::vector<my_object> object_vect;
Pass them around functions with references:
void some_function(const object_vect& my_array);
EDIT:
In C++11 there is also std::array. The problem with it, though, is that its size is a template parameter, so you can't make different-sized ones at runtime and you can't pass it into functions unless they expect that exact same size (or are template functions themselves). But it can be useful for things like buffers.
std::array<int, 1024> my_array;
EDIT2:
Also in C++11 there is a new emplace_back as an alternative to push_back. This basically allows you to 'move' your object (or construct your object directly in the vector) and saves you a copy.
std::vector<SomeClass> v;
SomeClass bob {"Bob", "Ross", 10.34f};
v.emplace_back(std::move(bob)); // move bob in instead of copying
v.emplace_back("Another", "One", 111.0f); // <- Note this doesn't work with initialization lists ☹
Oh well, I was thinking that given the number of answers there would be no reason to step in... but I guess I am drawn in like the others. Let's go:
Why your solution is broken
C++11 new facilities for handling raw memory
Simpler way to get this done
Advice
1. Why your solution is broken
First, the two snippets you presented are not equivalent: new[] just works, while yours fails horribly in the presence of exceptions.
What new[] does under the cover is that it keeps track of the number of objects that were constructed, so that if an exception occurs during say the 3rd constructor call it properly calls the destructor for the 2 already constructed objects.
Your solution however fails horribly:
either you don't handle exceptions at all (and leak horribly)
or you just try to call the destructors on the whole array even though it's half built (likely crashing, but who knows with undefined behavior)
So the two are clearly not equivalent. Yours is broken.
2. C++11 new facilities for handling raw memory
In C++11, the committee members realized how much we like fiddling with raw memory, and they introduced facilities to help us do so more efficiently and more safely.
Check cppreference's <memory> brief. This example shows off the new goodies (*):
#include <iostream>
#include <string>
#include <memory>
#include <algorithm>
int main()
{
const std::string s[] = {"This", "is", "a", "test", "."};
std::string* p = std::get_temporary_buffer<std::string>(5).first;
std::copy(std::begin(s), std::end(s),
std::raw_storage_iterator<std::string*, std::string>(p));
for(std::string* i = p; i!=p+5; ++i) {
std::cout << *i << '\n';
i->~basic_string<char>();
}
std::return_temporary_buffer(p);
}
Note that get_temporary_buffer is no-throw; it returns the number of elements for which memory has actually been allocated as the second member of the pair (hence the .first to get the pointer).
(*) Or perhaps not so new as MooingDuck remarked.
3. Simpler way to get this done
As far as I am concerned, what you really seem to be asking for is a kind of typed memory pool, where some slots may not have been initialized.
Do you know about boost::optional?
It is basically an area of raw memory that can fit one item of a given type (its template parameter) but defaults to holding nothing instead. It has an interface similar to a pointer's and lets you query whether or not the memory is actually occupied. Finally, using the In-Place Factories you can safely use it without copying objects, if that is a concern.
Well, your use case really looks like a std::vector< boost::optional<T> > to me (or perhaps a deque?)
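A minimal sketch of that shape:
#include <boost/optional.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector< boost::optional<int> > slots(10); // ten slots, all empty
    slots[3] = 42;                                 // slot 3 is now occupied
    if (slots[3])                                  // query occupancy
        std::cout << *slots[3] << '\n';            // access the contained value
    return 0;
}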
4. Advice
Finally, in case you really want to do it on your own, whether for learning or because no STL container really suits you, I do suggest you wrap this up in an object to avoid the code sprawling all over the place.
Don't forget: Don't Repeat Yourself!
With an object (templated) you can capture the essence of your design in one single place, and then reuse it everywhere.
And of course, why not take advantage of the new C++11 facilities while doing so :) ?
You should use vectors.
Dogmatic or not, that is exactly what ALL the STL containers do to allocate and initialize.
They use an allocator that allocates uninitialized space, then initialize it by constructing the elements in place.
If this (as many people like to say) "is not C++", how can the standard library itself be implemented that way?
If you just don't want to use malloc / free, you can allocate "bytes" with plain new char[]:
myobject* pvect = reinterpret_cast<myobject*>(new char[sizeof(myobject)*vectsize]);
for(int i=0; i<vectsize; ++i) new(pvect+i) myobject(params);
...
for(int i=vectsize-1; i>=0; --i) (pvect+i)->~myobject();
delete[] reinterpret_cast<char*>(pvect);
This lets you take advantage of the separation between initialization and allocation, while still taking advantage of the exception mechanism of new's allocation.
Note that, putting my first and last lines into a myallocator<myobject> class and the second and second-to-last into a myvector<myobject> class, we have ... just reimplemented std::vector<myobject, std::allocator<myobject> >.
What you have shown here is actually the way to go when using a memory allocator different from the system's general allocator - in that case you would allocate your memory using the allocator (alloc->malloc(sizeof(my_object))) and then use the placement new operator to initialize it. This has many advantages for efficient memory management and is quite common in the standard template library.
If you are writing a class that mimics the functionality of std::vector or needs control over memory allocation and object creation (insertion into the array, deletion, etc.), that's the way to go. In this case, it's not a question of "not calling the default constructor". It becomes a question of being able to allocate raw memory, memmove old objects there and then create new objects at the old addresses; of being able to use some form of realloc; and so on. Unquestionably, custom allocation + placement new is way more flexible... I know, I'm a bit drunk, but std::vector is for sissies... About efficiency - one can write their own version of std::vector that will be AT LEAST as fast (and most likely smaller, in terms of sizeof()) with the most-used 80% of std::vector functionality in, probably, less than 3 hours.
my_object * my_array=new my_object [10];
This will be an array with objects.
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
This will be an array the size of your objects, but the objects in it may be "broken". If your class has virtual functions, for instance, then you won't be able to call those. Note that it's not just your member data that may be inconsistent; the entire object is actually "broken" (for lack of a better word).
I'm not saying it's wrong to do the second one, just as long as you know this.

Array: Storing Objects or References

As a Java developer I have the following C++ question.
If I have objects of type A and I want to store a collection of them in an array, should I just store pointers to the objects, or is it better to store the objects themselves?
In my opinion it is better to store pointers because:
1) One can easily remove an object, by setting its pointer to null
2) One saves space.
Pointers or just the objects?
You can't put references in an array in C++. You can make an array of pointers, but I'd still prefer a container of actual objects rather than pointers because:
No chance to leak, exception safety is easier to deal with.
It isn't less space - if you store an array of pointers you need the memory for the object plus the memory for a pointer.
The only times I'd advocate putting pointers (or, better, smart pointers) in a container (or an array if you must) are when your object isn't copy-constructible and assignable (a requirement for containers; pointers always meet this) or you need them to be polymorphic. E.g.
#include <vector>
struct foo {
virtual void it() {}
};
struct bar : public foo {
int a;
virtual void it() {}
};
int main() {
std::vector<foo> v;
v.push_back(bar()); // not doing what you expected! (the temporary bar gets "made into" a foo before storing as a foo and your vector doesn't get a bar added)
std::vector<foo*> v2;
v2.push_back(new bar()); // Fine
}
If you want to go down this road boost pointer containers might be of interest because they do all of the hard work for you.
Removing from arrays or containers.
Assigning NULL doesn't cause there to be any fewer pointers in your container/array (it doesn't handle the delete either); the size remains the same, but there are now pointers you can't legally dereference. This makes the rest of your code more complex in the form of extra if statements and prohibits things like:
// need to go out of our way to make sure there's no NULL here
std::for_each(v2.begin(),v2.end(), std::mem_fun(&foo::it));
I really dislike the idea of allowing NULLs in sequences of pointers in general because you quickly end up burying all the real work in a sequence of conditional statements. The alternative is that std::vector provides an erase method that takes an iterator so you can write:
v2.erase(v2.begin());
to remove the first or v2.begin()+1 for the second. There's no easy "erase the nth element" method though on std::vector because of the time complexity - if you're doing lots of erasing then there are other containers which might be more appropriate.
For an array you can simulate erasing with:
#include <utility>
#include <iterator>
#include <algorithm>
#include <iostream>
int main() {
int arr[] = {1,2,3,4};
int len = sizeof(arr)/sizeof(*arr);
std::copy(arr, arr+len, std::ostream_iterator<int>(std::cout, " "));
std::cout << std::endl;
// remove 2nd element, without preserving order:
std::swap(arr[1], arr[len-1]);
len -= 1;
std::copy(arr, arr+len, std::ostream_iterator<int>(std::cout, " "));
std::cout << std::endl;
// and again, first element:
std::swap(arr[0], arr[len-1]);
len -= 1;
std::copy(arr, arr+len, std::ostream_iterator<int>(std::cout, " "));
std::cout << std::endl;
}
Preserving the order requires a series of shuffles instead of a single swap, which nicely illustrates the complexity of erasing that std::vector faces. Of course, by doing this you've just reinvented a pretty big wheel, a whole lot less usefully and flexibly than a standard library container would do for you for free!
It sounds like you are confusing references with pointers. C++ has 3 common ways of representing object handles
References
Pointers
Values
Coming from Java the most analogous way is to do so with a pointer. This is likely what you are trying to do here.
How they are stored, though, has some pretty fundamental effects on their behavior. When you store values you are often dealing with copies of the values, whereas with pointers you are dealing with one object reachable through multiple references. Giving a flat answer that one is better than the other is not really possible without a bit more context on what these objects do.
It completely depends on what you want to do... but you're misguided in some ways.
Things you should know are:
You can't set a reference to NULL in C++, though you can set a pointer to NULL.
A reference can only be made to an existing object - it must start initialized as such.
A reference cannot be changed (though the referenced value can be).
You wouldn't save space; in fact you would use more, since you're storing an object plus a reference. If you need to reference the same object multiple times then you do save space, but you might as well use a pointer - it's more flexible in MOST (read: not all) scenarios.
A last important one: STL containers (vector, list, etc) have COPY semantics - they cannot work with references. They can work with pointers, but it gets complicated, so for now you should always use copyable objects in those containers and accept that they will be copied, like it or not. The STL is designed to be efficient and safe with copy semantics.
Hope that helps! :)
PS (EDIT): You can use some new features in BOOST/TR1 (google them), and make a container/array of shared_ptr (reference counting smart pointers) which will give you a similar feel to Java's references and garbage collection. There's a flurry of differences but you'll have to read about it yourself - they are a great feature of the new standard.
You should always store objects when possible; that way, the container will manage the objects' lifetimes for you.
Occasionally, you will need to store pointers; most commonly, pointers to a base class where the objects themselves will be of different types. In that case, you need to be careful to manage the lifetime of the objects yourself; ensuring that they are not destroyed while in the container, but that they are destroyed once they are no longer needed.
Unlike Java, setting a pointer to null does not deallocate the object pointed to; instead, you get a memory leak if there are no more pointers to the object. If the object was created using new, then delete must be called at some point. Your best options here are to store smart pointers (shared_ptr, or perhaps unique_ptr if available), or to use Boost's pointer containers.
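For instance, a sketch of the shared_ptr variant (with a hypothetical Shape hierarchy):
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const { return 3.14159 * r * r; }
};

int main() {
    std::vector< std::shared_ptr<Shape> > shapes;
    shapes.push_back(std::make_shared<Circle>(2.0));
    double a = shapes[0]->area(); // polymorphic call; deletion is automatic
    (void)a;
    return 0;
}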
You can't store references in a container. You could store (naked) pointers instead, but that's prone to errors and is therefore frowned upon.
Thus, the real choice is between storing objects and smart pointers to objects. Both have their uses. My recommendation would be to go with storing objects by value unless the particular situation demands otherwise. This could happen:
if you need to NULL out the object without removing it from the container;
if you need to store pointers to the same object in multiple containers;
if you need to treat elements of the container polymorphically.
One reason not to go with pointers is to save space, since storing elements by value is likely to be more space-efficient.
To add to the answer of aix:
If you want to store polymorphic objects, you must use smart pointers, because the containers make a copy, and for derived types that copy keeps only the base part (at least in the standard containers; I think Boost has some containers which work differently). You would therefore lose any polymorphic behaviour (and any derived-class state) of your objects.
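To make the slicing concrete, a minimal sketch:
#include <vector>

struct Base { int b; virtual ~Base() {} };
struct Derived : Base { int extra; };

int main() {
    std::vector<Base> v;
    Derived d;
    v.push_back(d); // sliced: only the Base subobject of d is copied
    // v[0] is a plain Base; d.extra and Derived's overrides are gone.
    return 0;
}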

c++: Excessive copying of large objects

While there are quite a few questions about copy constructors/assignment operators on SO already, I did not find an answer that fits my problem.
I have a class like
class Foo
{
// ...
private:
std::vector<int> vec1;
std::vector<int> vec2;
boost::bimap<unsigned int, unsigned int> bimap;
// And a couple more
};
Now it seems that there is some quite excessive copying going on (based on profile data). So my question is: how best to tackle this?
Should I implement custom copy constructor/assignment operator and use swap? Or should I define my own swap method and use that (where appropriate) instead of assignment?
As I am not a c++ expert, examples that show how to properly handle this situation are greatly appreciated.
UPDATE: It appears I was not terribly clear. Let me try to explain. The program is basically an on-the-fly breadth-first search, and for each step taken I need to store metadata about the step (which is the Foo class). The problem is that there are (usually) exponentially many steps, so you can imagine that a large number of these objects needs to be stored. I do always pass by (const) reference, as far as I know. Each time I calculate a successor from a node in the graph I need to create and store ONE Foo object (however, some of the data members will be added to this one Foo further on in the processing of this successor).
My profile data shows roughly something like this (I don't have the actual numbers on this machine):
SearchStrategy::Search 13s
FooStore::Save 10s
So you can see I spend nearly as much time saving this meta data as I do searching through the graph.. Oh, and FooStore saves Foo in a google::sparse_hash_map<long long, Foo, boost::hash<long long> >.
Compiler is g++4.4 or g++4.5 (I'm not at my dev. machine, so I cannot check at the moment)..
UPDATE 2: I assign some of the members to a Foo instance after construction, like:
void SetVec1(const std::vector<int>& vec1) { this->vec1 = vec1; };
I guess tomorrow I should change this to use the swap method, which should definitely improve this a bit.
I'm sorry if I'm not entirely clear about what semantics I'm trying to achieve, but the reason is that I am not quite sure.
Regards,
Morten
Everything depends on what copying this object means in your case:
1) it means copying its whole value
2) it means the copied object will refer to the same content
If it's 1, then this class seems correct. You're not very clear about the operations that you say make a lot of copies, so I'm assuming you try to copy the whole object.
If it's 2, then you need to use something like shared_ptr to share the containers between the objects. Just using shared_ptr instead of real objects as members will implicitly allow the buffers to be referred to by both objects (the copy and the copied).
That's the easier way (using boost::shared_ptr or std::shared_ptr if you have a C++0x enabled compiler providing it).
There are harder ways but they will certainly become a problem later.
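A sketch of what option 2 could look like for the Foo above (boost::shared_ptr shown; std::shared_ptr works the same way):
#include <boost/shared_ptr.hpp>
#include <vector>

class Foo {
private:
    boost::shared_ptr< std::vector<int> > vec1; // copies of Foo now share
    boost::shared_ptr< std::vector<int> > vec2; // the same underlying vectors
    // And a couple more
};
// Copying a Foo now copies a couple of smart pointers (plus
// reference-count bumps), not the vectors themselves.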
Of course, and everyone says this, don't optimize prematurely. Don't bother with this until and unless you prove a) that your program goes too slowly, and b) it would go faster if you didn't copy so much data.
If your program design requires you to hold multiple simultaneous copies of the data, there is nothing you can do. You just have to bite the bullet and copy the data. No, implementing a custom copy constructor and custom assignment operator won't make it go faster.
If your program doesn't require multiple simultaneous copies of this data, then you do have a couple of tricks to reduce the number of copies you perform.
Instrument your copy methods. If it were me, the first thing I would do, even before trying to improve anything, is count the number of times my copy methods were invoked.
class Foo {
private:
    static int numberOfConstructors;
    static int numberOfCopyConstructors;
    static int numberOfAssignments;
public:
    Foo() { ++numberOfConstructors; /* ... */ }
    Foo(const Foo& f) : vec1(f.vec1), vec2(f.vec2), bimap(f.bimap) {
        ++numberOfCopyConstructors;
        // ...
    }
    Foo& operator=(const Foo& f) {
        ++numberOfAssignments;
        // ...
        return *this;
    }
};
Run your program with and without your improvements. Print out the value of those static members to see if your changes had any effect.
Avoid assignments in function calls by using references. If you pass objects of type Foo to functions, consider whether you can do it by reference. If you don't change the passed copy, passing by const reference is a no-brainer.
// WAS:
extern void SomeFunction(Foo f);
// EASY change -- if this compiles, you know that it is correct
extern void SomeFunction(const Foo& f);
// HARD change -- you have to examine your code to see if this is safe
extern void SomeFunction(Foo& f);
Avoid copies by using Foo::swap. If you use the copy methods (either explicitly or implicitly) a lot, consider whether the assigned-from item could give up its data rather than copying it.
// Was:
vectorOfFoo.push_back(myFoo);
// maybe faster:
vectorOfFoo.push_back(Foo());
vectorOfFoo.back().swap(myFoo);
// Was:
newFoo = oldFoo;
// maybe faster
newfoo.swap(oldFoo);
Of course, this only works if myFoo and oldFoo no longer need access to their data. And you have to implement Foo::swap:
void Foo::swap(Foo& old) {
std::swap(this->vec1, old.vec1);
std::swap(this->vec2, old.vec2);
...
}
Whatever you do, measure your program before and after your change. Measure the number of times your copy methods are invoked, and the total time improvement in your program.
Your class doesn't seem that bad, but you do not show how you use it.
If there is lots of copying, then you need to pass objects of this class by reference (or, if possible, by const reference).
If that class has to be copied, then you can not do anything.
If it's really a problem, you might consider implementing the pimpl idiom. But I doubt it's a problem, though I'd have to see your use of the class to be sure.
Copying huge vectors is unlikely to be cheap. The most promising approach is to copy more rarely. While it's quite easy (maybe too easy) in C++ to invoke a copy without intending to, there are ways to avoid needless copying:
passing by const and non-const reference
move-constructors
smart pointers with ownership transfer
These techniques may leave only the copies that the algorithm actually requires.
Sometimes it's possible to avoid even some of those copies. For example, if you need two objects where the second is a reversed copy of the first, a wrapper object can be created which acts like the reversed copy but, instead of storing an entire copy, holds only a reference.
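For instance, a minimal sketch of such a reversed 'view' over a vector of ints:
#include <cstddef>
#include <vector>

// Acts like a reversed copy, but stores only a reference to the original.
class ReversedView {
    const std::vector<int>& data;
public:
    explicit ReversedView(const std::vector<int>& v) : data(v) {}
    int operator[](std::size_t i) const { return data[data.size() - 1 - i]; }
    std::size_t size() const { return data.size(); }
};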
The obvious way to reduce copying is to use something like a shared_ptr. With multithreading, however, this cure can be worse than the disease -- incrementing and decrementing reference counts needs to be done atomically, which can be quite expensive. If, however, you typically end up modifying the copies and need each copy to act unique (i.e., modifying a copy doesn't affect the original) you can end up with worse performance still, paying for the atomic increment/decrement for reference counting, and still doing lots of copies anyway.
There are a couple of obvious ways to avoid that. One is to move unique objects instead of copying at all -- this is great if you can make it work. Another is to use non-atomic reference counting most of the time, and do deep copies only when moving data between threads.
There is no one answer that's universal and really clean, though.