If I have a boost::shared_array<T> (or a boost::shared_ptr<T[]>), is there a way to obtain a boost::shared_ptr<T> which shares with the array?
So for example, I might want to write:
shared_array<int> array(new int[10]);
shared_ptr<int> element = &array[2];
I know that I can't use &array[2], because it just has type int *, and it would be dangerous for shared_ptr<int> to have an implicit constructor that will take that type. Ideally shared_array<int> would have an instance method on it, something like:
shared_ptr<int> element = array.shared_ptr_to(2);
Unfortunately I can't find anything like this. There is an aliasing constructor on shared_ptr<int> which will alias with another shared_ptr<T>, but it won't allow aliasing with shared_array<T>; so I can't write this either (it won't compile):
shared_ptr<int> element(array, &array[2]);
//Can't convert 'array' from shared_array<int> to shared_ptr<int>
Another option I played with was to use std::shared_ptr<T> (std instead of boost). The specialisation for T[] isn't standardised, so I thought about defining that myself. Unfortunately, I don't think that's actually possible in a way that doesn't break the internals of the aliasing constructor, as it tries to cast my std::shared_ptr<T[]> to its own implementation-specific supertype, which is no longer possible. (Mine currently just inherits from the boost one.) The nice thing about this idea would have been that I could implement my instance shared_ptr_to method.
Here's another idea I experimented with, but I don't think it's efficient enough to be acceptable as something we're potentially going to use throughout a large project.
template<typename T>
boost::shared_ptr<T> GetElementPtr(const boost::shared_array<T> &array, size_t index) {
    //This deleter works by holding on to the underlying array until the deleter itself is deleted.
    struct {
        boost::shared_array<T> array;
        void operator()(T *) {} //No action required here.
    } deleter = { array };
    return shared_ptr<T>(&array[index], deleter);
}
The next thing I'm going to try is upgrading to Boost 1.53.0 (we currently only have 1.50.0), using shared_ptr<T[]> instead of shared_array<T>, and also always using boost instead of std (even for non-arrays). I'm hoping this will then work, but I haven't had a chance to try it yet:
shared_ptr<int[]> array(new int[10]);
shared_ptr<int> element(array, &array[2]);
Of course I'd still prefer the instance method syntax, but I guess I'm out of luck with that one (short of modifying Boost):
shared_ptr<int> element = array.shared_ptr_to(2);
Anyone else have any ideas?
You are doing strange stuff.
Why do you need a shared_ptr to an element? Do you want an element of the array to be passed somewhere else and keep the whole array from being deleted?
If yes, then std::vector<shared_ptr<T>> is better suited for that. That solution is safe, standard, and has fine granularity for object removal.
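For illustration, here is a minimal sketch of that suggestion (written with std::shared_ptr; the boost version is analogous). Each element owns its own control block, which is where the fine granularity comes from:

#include <iostream>
#include <memory>
#include <vector>

int main() {
    std::vector<std::shared_ptr<int>> values;
    for (int i = 0; i < 10; ++i)
        values.push_back(std::make_shared<int>(i));

    std::shared_ptr<int> element = values[2]; // shares ownership of that one int only

    values.clear();                 // the other nine ints are destroyed here
    std::cout << *element << '\n';  // element 2 is still alive: prints 2
}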
boost::shared_ptr does not seem to support this natively. Maybe you can work around it with a custom deleter. But std::shared_ptr offers a special (aliasing) constructor to support what you want:
struct foo
{
    int a;
    double b;
};

int main()
{
    auto sp1 = std::make_shared<foo>();
    std::shared_ptr<int> sp2(sp1, &sp1->a);
}
Here, sp1 and sp2 share ownership of the foo object but sp2 points to a member of it. If sp1 is destroyed, the foo object will still be alive and sp2 will still be valid.
Here's what I did in the end.
I made my own implementation of shared_array<T>. It effectively extends shared_ptr<vector<T>>, except it actually extends my own wrapper for vector<T> so that the user can't get the vector out. This means I can guarantee it won't be resized. Then I implemented the instance methods I needed - including weak_ptr_to(size_t) and of course operator[].
My implementation uses std::make_shared to make the vector. So the vector allocates its internal array storage separately from the control block, but the vector itself becomes a member of the control block. It's therefore about equivalent to forgetting to use std::make_shared for a normal type - but because these are arrays, they're likely to be largeish and few, so it's less important.
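For the record, the shape of it is roughly as follows. This is a simplified sketch rather than my actual class: it uses composition and a plain std::vector instead of my wrapper, and the names are illustrative.

#include <cstddef>
#include <memory>
#include <vector>

template<typename T>
class SharedArray {
public:
    explicit SharedArray(std::size_t size)
        : storage(std::make_shared<std::vector<T>>(size)) {} // vector sits inside the control block

    T &operator[](std::size_t i) { return (*storage)[i]; }
    const T &operator[](std::size_t i) const { return (*storage)[i]; }
    std::size_t size() const { return storage->size(); }

    // Aliasing constructor: shares ownership of the whole vector,
    // but points at a single element.
    std::shared_ptr<T> shared_ptr_to(std::size_t i) const {
        return std::shared_ptr<T>(storage, &(*storage)[i]);
    }
    std::weak_ptr<T> weak_ptr_to(std::size_t i) const {
        return std::weak_ptr<T>(shared_ptr_to(i));
    }

private:
    std::shared_ptr<std::vector<T>> storage; // never exposed, so it can never be resized
};

The key point is that the aliasing constructor keeps the whole vector alive for as long as any element pointer exists.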
I could also create an implementation that's based on shared_ptr<T> but with default_delete<T[]> or whatever is required, but it would have to allocate the array separately from the control block (so there's not much saving versus vector). I don't think there's a portable way to embed a dynamically sized array in the control block.
Or my implementation could be based on boost::shared_array<T>, and use the custom deleter when taking element pointers (as per the example in the question). That's probably worse in most cases, because instead of a one-time hit allocating the array, we get a hit every time we take an aliased pointer (which could happen a lot with very short-lived ones).
I think the only reasonable way to get it even more optimal would be to use the latest boost (if it works; I didn't get as far as trying it before I changed my mind, mainly because of the desire for my own instance members). And of course this means using the boost ones everywhere, even for single objects.
But the main advantage of what I went with is that Visual Studio's debugger is (I'm told) good at displaying the contents of std::shared_ptr and std::vector, and (we expect) less good at analysing the contents of Boost or custom types.
So I think what I've done is basically optimal. :)
Related
How can I efficiently return a vector of derived pointers from a vector of base pointers?
std::vector<const Base*> getb();

std::vector<const Derived*> getd()
{
    auto vb = getb(); // I know for a fact all vb elements point to Derived
    return ...;
}
Derived does not inherit directly from Base
The objects exist in other containers that have process lifetime.
boost::ranges?
I know for a fact all vb elements point to Derived
The best course of action is to express that assertion with types. Why does getb() return a vector of base pointers in the first place, if you know a better type for the elements? Make it a vector of derived pointers from the start.
Failing that, you need to dynamic_cast each and every individual pointer in vb and put the result in another container. Other casts may or may not work.
First, I would say that if you run into this problem, you should examine why, in your design, you need to do this step. Possibly there is something you could do differently to avoid this problem. Personally, I find it fishy that you generate a container of Base* whose elements all point to Derived objects.
But if you want to do this, there are some possibilities for how to go about it. If Base is not a virtual base class of Derived, you can use a static_cast instead of a dynamic_cast everywhere, due to [expr.static.cast]/11 (in short, if you know that the dynamic_cast would succeed, you can also static_cast). This will save you the runtime check of the dynamic_cast.
Conversion with memory overhead
You basically create a second vector and copy all pointers over:
const auto vb = getb();
std::vector<const Derived*> ret;
ret.reserve(vb.size());
std::transform(cbegin(vb), cend(vb), std::back_inserter(ret),
               [](const Base* p) { return dynamic_cast<const Derived*>(p); });
return ret;
This is, in my opinion, the fastest and most concise way to do this with only the standard library in C++14. I am not well versed in the capabilities of Boost. If you can use some kind of transforming iterator, you could initialize ret directly from the two iterators.
No memory overhead, boilerplate code, slight access runtime overhead
Wrap your std::vector<const Base*> in a class of your own that works like a vector and returns a const Derived* on access. With dynamic_cast this will have a slight runtime overhead when accessing, since it will have to do a check. If you can use a static_cast (as discussed above), this will not be the case. (You may still have a very slight overhead due to the added level of indirection.)
I really like this solution and personally would use it. I am not sure whether Boost has some kind of container adaptor; otherwise, you will have to write a bit of boilerplate code to get a vector-like interface (you could inherit from std::vector and only redefine operator[] and at(), but this has problems of its own, since std::vector has no virtual methods, including its destructor!).
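A rough sketch of what I mean, with made-up names (it assumes, as above, that the cast is known to be valid, so static_cast is used):

#include <cstddef>
#include <utility>
#include <vector>

// DowncastView is an illustrative name; Base and Derived stand in for the real hierarchy.
template<typename Derived, typename Base>
class DowncastView {
public:
    explicit DowncastView(std::vector<const Base*> v) : v_(std::move(v)) {}

    const Derived* operator[](std::size_t i) const {
        // Safe per [expr.static.cast]/11 because every element is known to
        // point at a Derived; switch to dynamic_cast if you want the check.
        return static_cast<const Derived*>(v_[i]);
    }

    std::size_t size() const { return v_.size(); }

private:
    std::vector<const Base*> v_;
};

Usage would then be something like DowncastView<Derived, Base> view(getb());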
No memory and no runtime overhead
You would really like that, I guess^^. I don't think this is possible with a std::vector. For no runtime overhead, you would need to return an object that really is of type std::vector<const Derived*>. For no memory overhead you would have to recycle the memory of the object returned by getb().
But a std::vector<T> has no way to relinquish ownership of its memory (except to another std::vector<T> of the same T through swap or move construction/assignment). Maybe you can do some fishy stuff with a custom allocator such that the underlying storage is not deleted when the vector is destroyed and you obtain it before destruction through data(). But this seems like a perfect way to create a memory leak, especially since I don't really know how this would interact with capacity vs size.
Even if you get the underlying storage, you cannot use it, since you cannot construct a vector from an already allocated piece of memory. Of course here you could again do something evil with a custom allocator, but again this seems like a bad idea.
One could go around this problem by using std::unique_ptr<T[]> instead of std::vector. This is basically a compromise between std::array and std::vector. It holds an array of runtime size, but the size of this array is constant once it is allocated. Here you can obtain the storage with release() and construct a new std::unique_ptr from it without issue.
This brings us to the worst problem. The following code is not valid. You cannot even cast from a Derived** to a Base** (and let's not even talk about the other way round):
std::unique_ptr<const Base*[]> base(static_cast<const Base**>(new Derived*[5]));
std::unique_ptr<const Derived*[]> dev(dynamic_cast<const Derived**>(base.release()));
I have no idea whether there is some weird way of reinterpreting the whole chunk of memory as pointers of another type, or whether that would even be a sensible thing to do. So I see no way of doing this variant.
I currently have vectors such as:
vector<MyClass*> MyVector;
and I access using
MyVector[i]->MyClass_Function();
I would like to make use of shared_ptr. Does this mean all I have to do is change my vector to:
typedef shared_ptr<MyClass*> safe_myclass
vector<safe_myclass>
and I can continue using the rest of my code as it was before?
vector<shared_ptr<MyClass>> MyVector; should be OK.
But if the instances of MyClass are not shared outside the vector, and you use a modern C++11 compiler, vector<unique_ptr<MyClass>> is more efficient than shared_ptr (because unique_ptr doesn't have the ref count overhead of shared_ptr).
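A minimal sketch of both variants (assuming a MyClass with the member function from the question):

#include <memory>
#include <vector>

// MyClass is assumed from the question; only MyClass_Function() matters here.
struct MyClass { void MyClass_Function() {} };

int main() {
    std::vector<std::shared_ptr<MyClass>> shared_vec;
    shared_vec.push_back(std::make_shared<MyClass>());
    shared_vec[0]->MyClass_Function();   // call syntax is unchanged

    // If nothing outside the vector shares the objects, unique_ptr avoids
    // the reference-count overhead entirely (C++11).
    std::vector<std::unique_ptr<MyClass>> unique_vec;
    unique_vec.push_back(std::unique_ptr<MyClass>(new MyClass()));
    unique_vec[0]->MyClass_Function();
}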
Probably just std::vector<MyClass>. Are you working with polymorphic classes, or is there a reason you can't copy (or can't afford the copy constructors), and are you sure the copies don't get optimised away by the compiler?
If so, then shared pointers are the way to go, but often people use this paradigm when it doesn't benefit them at all.
To be complete: if you do change to std::vector<MyClass>, you may have some ugly maintenance to do if your code later becomes polymorphic, but ideally the only change you would need is to your typedef.
Along that point, it may make sense to wrap your entire std::vector.
class MyClassCollection {
private:
    std::vector<MyClass> collection;
public:
    MyClass& at(int idx);
    //...
};
So you can safely swap out not only the shared pointer but the entire vector. The trade-off is that it's harder to pass to APIs that expect a vector, but those are arguably ill-designed; they should work with iterators, which you can provide for your class.
Likely this is too much work for your app (although it would be prudent if it's going to be exposed in a library facing clients) but these are valid considerations.
Don't immediately jump to shared pointers. You might be better suited with a simple pointer container if you need to avoid copying objects.
I've been looking into this for the past few days, and so far I haven't really found anything convincing other than dogmatic arguments or appeals to tradition (i.e. "it's the C++ way!").
If I'm creating an array of objects, what is the compelling reason (other than ease) for using:
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=new my_object [MY_ARRAY_SIZE];
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i]=my_object(i);
over
#define MEMORY_ERROR -1
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
if (my_array==NULL) throw MEMORY_ERROR;
for (int i=0;i<MY_ARRAY_SIZE;++i) new (my_array+i) my_object (i);
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
delete [] my_array;
and the other you clean up with:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
free(my_array);
I'm out for a compelling reason. Appeals to the fact that it's C++ (not C) and therefore malloc and free shouldn't be used isn't -- as far as I can tell -- compelling as much as it is dogmatic. Is there something I'm missing that makes new [] superior to malloc?
I mean, as best I can tell, you can't even use new [] -- at all -- to make an array of things that don't have a default, parameterless constructor, whereas the malloc method can be used for that.
I'm out for a compelling reason.
It depends on how you define "compelling". Many of the arguments you have thus far rejected are certainly compelling to most C++ programmers, as your suggestion is not the standard way to allocate naked arrays in C++.
The simple fact is this: yes, you absolutely can do things the way you describe. There is no reason that what you are describing will not function.
But then again, you can have virtual functions in C. You can implement classes and inheritance in plain C, if you put the time and effort into it. Those are entirely functional as well.
Therefore, what matters is not whether something can work. But more on what the costs are. It's much more error prone to implement inheritance and virtual functions in C than C++. There are multiple ways to implement it in C, which leads to incompatible implementations. Whereas, because they're first-class language features of C++, it's highly unlikely that someone would manually implement what the language offers. Thus, everyone's inheritance and virtual functions can cooperate with the rules of C++.
The same goes for this. So what are the gains and the losses from manual malloc/free array management?
I can't say that any of what I'm about to say constitutes a "compelling reason" for you. I rather doubt it will, since you seem to have made up your mind. But for the record:
Performance
You claim the following:
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
This statement suggests that the efficiency gain is primarily in the construction of the objects in question. That is, which constructors are called. The statement presupposes that you don't want to call the default constructor; that you use a default constructor just to create the array, then use the real initialization function to put the actual data into the object.
Well... what if that's not what you want to do? What if what you want to do is create an empty array, one that is default constructed? In this case, this advantage disappears entirely.
Fragility
Let's assume that each object in the array needs to have a specialized constructor or something called on it, such that initializing the array requires this sort of thing. But consider your destruction code:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
For a simple case, this is fine. You have a macro or const variable that says how many objects you have. And you loop over each element to destroy the data. That's great for a simple example.
Now consider a real application, not an example. How many different places will you be creating an array in? Dozens? Hundreds? Each and every one will need to have its own for loop for initializing the array. Each and every one will need to have its own for loop for destroying the array.
Mis-type this even once, and you can corrupt memory. Or not delete something. Or any number of other horrible things.
And here's an important question: for a given array, where do you keep the size? Do you know how many items you allocated for every array that you create? Each array will probably have its own way of knowing how many items it stores. So each destructor loop will need to fetch this data properly. If it gets it wrong... boom.
And then we have exception safety, which is a whole new can of worms. If one of the constructors throws an exception, the previously constructed objects need to be destructed. Your code doesn't do that; it's not exception-safe.
Now, consider the alternative:
delete[] my_array;
This can't fail. It will always destroy every element. It tracks the size of the array, and it's exception-safe. So it is guaranteed to work. It can't not work (as long as you allocated it with new[]).
Of course, you could say that you could wrap the array in an object. That makes sense. You might even template the object on the element type of the array. That way, all the destructor code is the same. The size is contained in the object. And maybe, just maybe, you realize that the user should have some control over the particular way the memory is allocated, so that it's not just malloc/free.
Congratulations: you just re-invented std::vector.
Which is why many C++ programmers don't even type new[] anymore.
Flexibility
Your code uses malloc/free. But let's say I'm doing some profiling. And I realize that malloc/free for certain frequently created types is just too expensive. I create a special memory manager for them. But how to hook all of the array allocations to them?
Well, I have to search the codebase for any location where you create/destroy arrays of these types. And then I have to change their memory allocators accordingly. And then I have to continuously watch the codebase so that someone else doesn't change those allocators back or introduce new array code that uses different allocators.
If I were instead using new[]/delete[], I could use operator overloading. I simply provide an overload for operators new[] and delete[] for those types. No code has to change. It's much more difficult for someone to circumvent these overloads; they have to actively try to. And so forth.
So I get greater flexibility and reasonable assurance that my allocators will be used where they should be used.
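To make that concrete, here is a hedged sketch of the class-scoped overloads; my_pool_alloc and my_pool_free are hypothetical hooks into the custom memory manager (the stand-in definitions just forward to the global heap so the sketch compiles):

#include <cstddef>
#include <new>

// Hypothetical pool hooks; replace the bodies with the real memory manager.
void* my_pool_alloc(std::size_t bytes) { return ::operator new(bytes); }
void  my_pool_free(void* p)            { ::operator delete(p); }

class frequently_created {
public:
    // Every `new frequently_created[n]` / `delete[] p` anywhere in the
    // codebase now goes through the pool, without touching the call sites.
    static void* operator new[](std::size_t bytes) { return my_pool_alloc(bytes); }
    static void  operator delete[](void* p)        { my_pool_free(p); }

    int data;
};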
Readability
Consider this:
my_object *my_array = new my_object[10];
for (int i=0; i<MY_ARRAY_SIZE; ++i)
my_array[i]=my_object(i);
//... Do stuff with the array
delete [] my_array;
Compare it to this:
my_object *my_array = (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE);
if(my_array==NULL)
    throw MEMORY_ERROR;

int i;
try
{
    for(i=0; i<MY_ARRAY_SIZE; ++i)
        new(my_array+i) my_object(i);
}
catch(...) //Exception safety.
{
    for(; i>0; --i) //The i-th object was not successfully constructed
        my_array[i-1].~my_object();
    throw;
}

//... Do stuff with the array

for(int i=MY_ARRAY_SIZE-1; i>=0; --i)
    my_array[i].~my_object();
free(my_array);
Objectively speaking, which one of these is easier to read and understand what's going on?
Just look at this statement: (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE). This is a very low level thing. You're not allocating an array of anything; you're allocating a hunk of memory. You have to manually compute the size of the hunk of memory to match the size of the object * the number of objects you want. It even features a cast.
By contrast, new my_object[10] tells the story. new is the C++ keyword for "create instances of types". my_object[10] is a 10 element array of my_object type. It's simple, obvious, and intuitive. There's no casting, no computing of byte sizes, nothing.
The malloc method requires learning how to use malloc idiomatically. The new method requires just understanding how new works. It's much less verbose and much more obvious what's going on.
Furthermore, after the malloc statement, you do not in fact have an array of objects. malloc simply returns a block of memory that you have told the C++ compiler to pretend is a pointer to an object (with a cast). It isn't an array of objects, because objects in C++ have lifetimes. And an object's lifetime does not begin until it is constructed. Nothing in that memory has had a constructor called on it yet, and therefore there are no living objects in it.
my_array at that point is not an array; it's just a block of memory. It doesn't become an array of my_objects until you construct them in the next step. This is incredibly unintuitive to a new programmer; it takes a seasoned C++ hand (one who probably learned from C) to know that those aren't live objects and should be treated with care. The pointer does not yet behave like a proper my_object*, because it doesn't point to any my_objects yet.
By contrast, you do have living objects in the new[] case. The objects have been constructed; they are live and fully-formed. You can use this pointer just like any other my_object*.
Fin
None of the above says that this mechanism isn't potentially useful in the right circumstances. But it's one thing to acknowledge the utility of something in certain circumstances. It's quite another to say that it should be the default way of doing things.
If you do not want to get your memory initialized by implicit constructor calls, and just need an assured memory allocation for placement new, then it is perfectly fine to use malloc and free instead of new[] and delete[].
The compelling reasons for using new over malloc are that new provides implicit initialization through constructor calls, saving you additional memset or related calls after a malloc, and that with new you do not need to check for NULL after every allocation; an enclosing exception handler will do the job, saving you the redundant error checking that malloc requires.
Neither of these compelling reasons applies to your usage.
Which one is more efficient can only be determined by profiling; there is nothing wrong with the approach you have now. On a side note, I don't see a compelling reason to use malloc over new[] either.
I would say neither.
The best way to do it would be:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
for (int i = 0; i < MY_ARRAY_SIZE; ++i)
{
    my_array.push_back(my_object(i));
}
This is because internally the vector is probably doing the placement new for you. It also manages all the other memory-management problems that you are not taking into account.
You've reimplemented new[]/delete[] here, and what you have written is pretty common in developing specialized allocators.
The overhead of calling simple constructors will take little time compared to the allocation. It's not necessarily 'much more efficient' -- it depends on the complexity of the default constructor, and of operator=.
One nice thing that has not been mentioned yet is that the array's size is known by new[]/delete[]. delete[] just does the right thing and destructs all elements when asked. Dragging an additional variable (or three) around so you know exactly how to destroy the array is a pain. A dedicated collection type would be a fine alternative, however.
new[]/delete[] are preferable for convenience. They introduce little overhead, and could save you from a lot of silly errors. Are you compelled enough to take away this functionality and use a collection/container everywhere to support your custom construction? I've implemented this allocator -- the real mess is creating functors for all the construction variations you need in practice. At any rate, you often have a more exact execution at the expense of a program which is often more difficult to maintain than the idioms everybody knows.
IMHO they're both ugly; it's better to use vectors. Just make sure to allocate the space in advance for performance.
Either, if you want every entry default-constructed:
std::vector<my_object> my_array(MY_ARRAY_SIZE);
Or, if you want to initialize every entry with a copy of a given value:
my_object basic;
std::vector<my_object> my_array(MY_ARRAY_SIZE, basic);
Or if you don't want to construct the objects but do want to reserve the space:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
Then if you need to access it as a C-style pointer array (just make sure you don't add elements while holding on to the old pointer, but you couldn't do that with regular C-style arrays anyway):
my_object* carray = &my_array[0];
my_object* carray = &my_array.front(); // Or the C++ way
Access individual elements:
my_object value = my_array[i]; // The non-safe c-like faster way
my_object value = my_array.at(i); // With bounds checking, throws range exception
Typedef for pretty:
typedef std::vector<my_object> object_vect;
Pass them around functions with references:
void some_function(const object_vect& my_array);
EDIT:
In C++11 there is also std::array. The problem with it, though, is that its size is a template parameter, so you can't make differently sized ones at runtime, and you can't pass it into functions unless they expect that exact size (or are templates themselves). But it can be useful for things like buffers.
std::array<int, 1024> my_array;
EDIT2:
Also in C++11 there is a new emplace_back as an alternative to push_back. This basically allows you to 'move' your object (or construct your object directly in the vector) and saves you a copy.
std::vector<SomeClass> v;
SomeClass bob {"Bob", "Ross", 10.34f};
v.emplace_back(std::move(bob)); // moves bob into the vector
v.emplace_back("Another", "One", 111.0f); // <- Note this doesn't work with initialization lists ☹
Oh well, I was thinking that given the number of answers there would be no reason to step in... but I guess I am drawn in as the others. Let's go
Why your solution is broken
C++11 new facilities for handling raw memory
Simpler way to get this done
Advice
1. Why your solution is broken
First, the two snippets you presented are not equivalent. new[] just works; yours fails horribly in the presence of exceptions.
What new[] does under the cover is that it keeps track of the number of objects that were constructed, so that if an exception occurs during say the 3rd constructor call it properly calls the destructor for the 2 already constructed objects.
Your solution however fails horribly:
either you don't handle exceptions at all (and leak horribly)
or you just try to call the destructors on the whole array even though it's half built (likely crashing, but who knows with undefined behavior)
So the two are clearly not equivalent; yours is broken.
2. C++11 new facilities for handling raw memory
In C++11, the committee members realized how much we like fiddling with raw memory, and they introduced facilities to help us do so more efficiently and more safely.
Check cppreference's <memory> brief. This example shows off the new goodies (*):
#include <iostream>
#include <string>
#include <memory>
#include <algorithm>
int main()
{
    const std::string s[] = {"This", "is", "a", "test", "."};
    std::string* p = std::get_temporary_buffer<std::string>(5).first;

    std::copy(std::begin(s), std::end(s),
              std::raw_storage_iterator<std::string*, std::string>(p));

    for (std::string* i = p; i != p + 5; ++i) {
        std::cout << *i << '\n';
        i->~basic_string<char>();
    }

    std::return_temporary_buffer(p);
}
Note that get_temporary_buffer is no-throw; it returns the number of elements for which memory has actually been allocated as the second member of the pair (hence the .first to get the pointer).
(*) Or perhaps not so new as MooingDuck remarked.
3. Simpler way to get this done
As far as I am concerned, what you really seem to be asking for is a kind of typed memory pool, where some slots may not have been initialized.
Do you know about boost::optional ?
It is basically an area of raw memory that can fit one item of a given type (its template parameter) but contains nothing by default. It has a pointer-like interface and lets you query whether or not the memory is actually occupied. Finally, using the In-Place Factories you can use it without copying objects, if that is a concern.
Well, your use case really looks like a std::vector< boost::optional<T> > to me (or perhaps a deque?)
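For illustration, a minimal sketch of that idea (int stands in for T):

#include <boost/optional.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<boost::optional<int>> pool(10); // 10 slots, all empty; no ints constructed yet

    pool[3] = 42; // construct an int in slot 3 only

    for (std::size_t i = 0; i < pool.size(); ++i)
        if (pool[i]) // query: is the slot occupied?
            std::cout << i << " -> " << *pool[i] << '\n';

    pool[3] = boost::none; // destroy just that element
}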
4. Advice
Finally, in case you really want to do it on your own, whether for learning or because no STL container really suits you, I do suggest you wrap this up in an object to avoid the code sprawling all over the place.
Don't forget: Don't Repeat Yourself!
With an object (templated) you can capture the essence of your design in one single place, and then reuse it everywhere.
And of course, why not take advantage of the new C++11 facilities while doing so :) ?
You should use vectors.
Dogmatic or not, that is exactly what ALL the STL containers do to allocate and initialize.
They use an allocator that allocates uninitialized space, then initialize elements in it by calling the contained type's constructors.
If this (as many people like to say) "is not C++", how can the standard library itself be implemented like that?
If you just don't want to use malloc / free, you can allocate the "bytes" with plain new char[]:
myobject* pvect = reinterpret_cast<myobject*>(new char[sizeof(myobject)*vectsize]);
for(int i=0; i<vectsize; ++i) new(pvect+i) myobject(params);
...
for(int i=vectsize-1; i>=0; --i) (pvect+i)->~myobject();
delete[] reinterpret_cast<char*>(pvect);
This lets you take advantage of the separation between initialization and allocation, while still taking advantage of the new allocation exception mechanism.
Note that, by putting my first and last lines into a myallocator<myobject> class and the second and second-to-last into a myvector<myobject> class, we have ... just reimplemented std::vector<myobject, std::allocator<myobject>>.
What you have shown here is actually the way to go when using a memory allocator different from the system's general allocator - in that case you would allocate your memory using the allocator (alloc->malloc(sizeof(my_object))) and then use the placement new operator to initialize it. This has many advantages for efficient memory management and is quite common in the standard template library.
If you are writing a class that mimics the functionality of std::vector or needs control over memory allocation / object creation (insertion into the array, deletion, etc.) - that's the way to go. In this case, it's not a question of "not calling the default constructor". It becomes a question of being able to "allocate raw memory, memmove old objects there and then create new objects at the old addresses", of being able to use some form of realloc, and so on. Unquestionably, custom allocation + placement new are way more flexible... I know, I'm a bit drunk, but std::vector is for sissies... About efficiency - one can write their own version of std::vector that will be AT LEAST as fast (and most likely smaller, in terms of sizeof()) with the most-used 80% of std::vector functionality in, probably, less than 3 hours.
my_object * my_array=new my_object [10];
This will be an array of constructed objects.
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
This will be a block of memory the size of your objects, but the "objects" in it may be "broken". If your class has virtual functions, for instance, then you won't be able to call them. Note that it's not just your member data that may be inconsistent; the entire object is actually "broken" (for lack of a better word).
I'm not saying it's wrong to do the second one, just as long as you know this.
I'd like to use a std::vector to control a given piece of memory. First of all I'm pretty sure this isn't good practice, but curiosity has got the better of me and I'd like to know how to do this anyway.
The problem I have is a method like this:
vector<float> getRow(unsigned long rowIndex)
{
    float* row = _m->getRow(rowIndex); // row is now a piece of memory (of a known size) that I control
    vector<float> returnValue(row, row + _m->cols()); // construct a new vec from this data
    delete [] row; // delete the original memory
    return returnValue; // return the new vector
}
_m is a DLL interface class which returns an array of float that is the caller's responsibility to delete. So I'd like to wrap this in a vector and return that to the user... but this implementation allocates new memory for the vector, copies the data into it, deletes the returned memory, and then returns the vector.
What I'd like to do is to straight up tell the new vector that it has full control over this block of memory so when it gets deleted that memory gets cleaned up.
UPDATE: The original motivation for this (memory returned from a DLL) has been fairly firmly squashed by a number of responders :) However, I'd love to know the answer to the question anyway... Is there a way to construct a std::vector using a given chunk of pre-allocated memory T* array, and the size of this memory?
The obvious answer is to use a custom allocator; however, you might find that is really quite a heavyweight solution for what you need. If you want to do it, the simplest way is to take the allocator defined by the implementation (as the default second template argument to vector<>), copy that, and make it work as required.
Another solution might be to define a template specialisation of vector, define as much of the interface as you actually need and implement the memory customisation.
Finally, how about defining your own container with a conforming STL interface, defining random access iterators etc. This might be quite easy given that underlying array will map nicely to vector<>, and pointers into it will map to iterators.
Comment on UPDATE: "Is there a way to construct a std::vector using a given chunk of pre-allocated memory T* array, and the size of this memory?"
Surely the simple answer here is "No". Provided you want the result to be a vector<>, then it has to support growing as required, such as through the reserve() method, and that will not be possible for a given fixed allocation. So the real question is really: what exactly do you want to achieve? Something that can be used like vector<>, or something that really does have to in some sense be a vector, and if so, what is that sense?
Vector's default allocator doesn't provide this type of access to its internals. You could do it with your own allocator (vector's second template parameter), but that would change the type of the vector.
It would be much easier if you could write directly into the vector:
vector<float> getRow(unsigned long rowIndex) {
    vector<float> row (_m->cols());
    _m->getRow(rowIndex, &row[0]); // writes _m->cols() values into &row[0]
    return row;
}
Note that &row[0] is a float* and it is guaranteed for vector to store items contiguously.
The most important thing to know here is that different DLL/Modules have different Heaps. This means that any memory that is allocated from a DLL needs to be deleted from that DLL (it's not just a matter of compiler version or delete vs delete[] or whatever). DO NOT PASS MEMORY MANAGEMENT RESPONSIBILITY ACROSS A DLL BOUNDARY. This includes creating a std::vector in a dll and returning it. But it also includes passing a std::vector to the DLL to be filled by the DLL; such an operation is unsafe since you don't know for sure that the std::vector will not try a resize of some kind while it is being filled with values.
There are two options:
Define your own allocator for the std::vector class that uses an allocation function that is guaranteed to reside in the DLL/Module from which the vector was created. This can easily be done with dynamic binding (that is, make the allocator class call some virtual function). Since dynamic binding looks the function call up in the vtable, it is guaranteed to land in the code of the DLL/Module that originally created it.
Don't pass the vector object to or from the DLL. You can use, for example, a function getRowBegin() and getRowEnd() that return iterators (i.e. pointers) in the row array (if it is contiguous), and let the user std::copy that into its own, local std::vector object. You could also do it the other way around, pass the iterators begin() and end() to a function like fillRowInto(begin, end).
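A sketch of the second option, assuming getRowBegin()/getRowEnd() are exported by the DLL and return pointers into a row the DLL itself owns and will free:

#include <algorithm>
#include <iterator>
#include <vector>

// Exported by the DLL (assumed signatures).
const float* getRowBegin(unsigned long rowIndex);
const float* getRowEnd(unsigned long rowIndex);

std::vector<float> getRowCopy(unsigned long rowIndex)
{
    // The local vector is allocated and freed entirely in the caller's
    // module, so no memory management crosses the DLL boundary.
    std::vector<float> row;
    std::copy(getRowBegin(rowIndex), getRowEnd(rowIndex), std::back_inserter(row));
    return row;
}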
This problem is very real, although many people neglect it without knowing. Don't underestimate it. I have personally suffered silent bugs related to this issue and it wasn't pretty! It took me months to resolve it.
I have checked the source code, and boost::shared_ptr and boost::shared_array use dynamic binding (the first option above) to deal with this; however, they are not guaranteed to be binary compatible. Still, this could be a slightly better option (usually binary compatibility is a much lesser problem than memory management across modules).
Your best bet is probably a std::vector<shared_ptr<MatrixCelType>>.
Lots more details in this thread.
If you're trying to change where/how the vector allocates/reallocates/deallocates memory, the allocator template parameter of the vector class is what you're looking for.
If you're simply trying to avoid the overhead of construction, copy construction, assignment, and destruction, then allow the user to instantiate the vector, then pass it to your function by reference. The user is then responsible for construction and destruction.
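A minimal sketch of that variant, with hypothetical names (fillRow() is assumed to write colCount() floats into the buffer it is given):

#include <vector>

// Assumed interface, standing in for the class from the question.
unsigned long colCount();
void fillRow(unsigned long rowIndex, float* dest);

void getRow(unsigned long rowIndex, std::vector<float>& out)
{
    out.resize(colCount()); // the caller's vector does all the allocating
    fillRow(rowIndex, &out[0]);
}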
It sounds like what you're looking for is a form of smart pointer. One that deletes what it points to when it's destroyed. Look into the Boost libraries or roll your own in that case.
The Boost.SmartPtr library contains a whole lot of interesting classes, some of which are dedicated to handle arrays.
For example, behold scoped_array:
int main(int argc, char* argv[])
{
    boost::scoped_array<float> array(_m->getRow(atoi(argv[1])));
    return 0;
}
The issue, of course, is that scoped_array cannot be copied, so if you really want a std::vector<float>, @Fred Nurk's answer is probably the best you can get.
In the ideal case you'd want the equivalent to unique_ptr but in array form, however I don't think it's part of the standard.
I was wondering if there is any difference in performance when you compare/contrast
A) Allocating objects on the heap, putting pointers to those objects in a container, operating on the container elsewhere in the code
Ex:
std::list<SomeObject*> someList;
// Somewhere else in the code
SomeObject* foo = new SomeObject(param1, param2);
someList.push_back(foo);
// Somewhere else in the code
while (itr != someList.end())
{
    (*itr)->DoStuff();
    //...
}
B) Creating an object, putting it in a container, operating on that container elsewhere in the code
Ex:
std::list<SomeObject> someList;
// Somewhere else in the code
SomeObject newObject(param1, param2);
someList.push_back(newObject);
// Somewhere else in the code
while (itr != someList.end())
{
    itr->DoStuff();
    //...
}
Assuming the pointers are all deallocated correctly and everything works fine, my question is...
If there is a difference, what would yield better performance, and how great would the difference be?
There is a performance hit when inserting objects instead of pointers to objects.
std::list as well as other std containers make a copy of the parameter that you store (for std::map both key and value are copied).
As your someList is a std::list the following line copies your object:
Foo foo;
someList.push_back(foo); // copy foo object
It will get copied again when you retrieve it from the list. So you are making copies of the whole object, compared to copies of just the pointer when using:
Foo * foo = new Foo();
someList.push_back(foo); // copy of foo*
You can double check by inserting print statements into Foo's constructor, destructor, copy constructor.
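For example, a tiny test along those lines (the comments describe what this minimal Foo prints):

#include <iostream>
#include <list>

struct Foo {
    Foo()           { std::cout << "Foo()\n"; }
    Foo(const Foo&) { std::cout << "Foo(const Foo&)\n"; }
    ~Foo()          { std::cout << "~Foo()\n"; }
};

int main() {
    std::list<Foo> byValue;
    Foo foo;
    byValue.push_back(foo);   // prints Foo(const Foo&): the list stores a copy

    std::list<Foo*> byPointer;
    Foo* p = new Foo();
    byPointer.push_back(p);   // only the pointer is copied; no Foo output here
    delete p;
}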
EDIT: As mentioned in the comments, pop_front does not return anything. You usually get a reference to the front element with front(), then call pop_front() to remove the element from the list:
Foo * fooB = someList.front(); // copy of foo*
someList.pop_front();
OR
Foo fooB = someList.front(); // front() returns a reference to the element, but if you
someList.pop_front();        // are going to pop it from the list you need to keep a
                             // copy, so Foo fooB = someList.front() makes a copy
Like most performance questions, this doesn't have one clear cut answer.
For one thing, it depends on what exactly you're doing with the list. Pointers might make it easier to do various operations (like sorting). That's because comparing pointers and swapping pointers is probably going to be faster than comparing/swapping SomeObject (of course, it depends on the implementation of SomeObject).
On the other hand, dynamic memory allocation tends to be worse than allocating on the stack. So, assuming you have enough memory on the stack for all the objects, that's another thing to consider.
In the end, I would personally recommend the best piece of advice I've ever gotten: It's pointless trying to guess what will perform better. Code it the way that makes the most sense (easiest to implement/maintain). If, and only if, you later discover there is a performance problem, run a profiler and figure out why. Chances are, most programs won't need all these optimizations, and this will turn out to be a moot point.
It depends how you use the list. Do you just fill it with stuff, and do lookups, or do you insert and remove data regularly. Lookups may be marginally faster without pointers, while adding and removing elements will be faster with pointers.
With objects it is going to be a memberwise copy (thus new object creation and copying of members), assuming there aren't any copy constructors or operator= overloads. Therefore, using pointers is more efficient; std::auto_ptr or Boost's smart pointers are better still, but that is beyond the scope of this question.
If you still have to use object syntax, use a reference.
Some additional things to consider (You have already been made aware of the copy semantics of STL containers):
Are your objects really smaller than pointers to them? This becomes more relevant if you use any kind of smart pointer as those have a tendency to be larger.
Copy operations are (often?) optimized to use memcpy() by the compiler. This is probably not true for smart pointers, though.
Additional dereferencing caused by pointers
All the things I have mentioned are micro-optimization considerations, and I'd discourage even thinking about them, let alone acting on them. On the other hand: a lot of my claims would need verification and would make for interesting test cases. Feel free to benchmark them.
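As a starting point, here is a very rough benchmark skeleton for the list-of-objects vs list-of-pointers comparison; SomeObject here is a tiny stand-in, so the numbers will not transfer to your real type:

#include <chrono>
#include <iostream>
#include <list>

struct SomeObject { int a = 0, b = 0; void DoStuff() { ++a; } };

template <typename Fn>
long long time_ms(Fn fn) {
    auto start = std::chrono::steady_clock::now();
    fn();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    const int N = 1000000;

    auto byValue = time_ms([&] {
        std::list<SomeObject> l;
        for (int i = 0; i < N; ++i) l.push_back(SomeObject());
        for (auto& o : l) o.DoStuff();
    });

    auto byPointer = time_ms([&] {
        std::list<SomeObject*> l;
        for (int i = 0; i < N; ++i) l.push_back(new SomeObject());
        for (auto* o : l) o->DoStuff();
        for (auto* o : l) delete o; // manual cleanup, as in variant A
    });

    std::cout << "by value:   " << byValue << " ms\n"
              << "by pointer: " << byPointer << " ms\n";
}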