I hope this question is not too subjective; all I want is a good design that would prevent memory leaks from happening later in my code. I couldn't find a matching question on SO - the ones I found are about what to do after allocating data inside a function, which is not my case.
Also for some reason I am not using newer C++ standards like C++11 (shared pointers).
Let me demonstrate by example:
I have logic that buffers data which is later sent. The buffering is done in one class and the sending in another.
At one point in the code I take some data from the buffer, process it (check the type of data, etc.) and then send it with the function send_data:
bool send_data(char *data, size_t data_length) {
The data is consumed and no longer needed afterwards. Shall I free it inside send_data, or shall I leave that to the caller?
Free it inside:
bool send_data(char *data, size_t data_length) {
//... process data ...
send(data, data_length, ...);
delete[] data;
}
Leave it and let the caller free it:
send_data(data,data_length);
delete[] data;
Or is there a design flaw and I should do something totally different?
The reason for not using C++11 is that the code is big and old - should I rewrite it completely? I am considering rewriting just some parts of it, because something is better than nothing.
Also, the usage of some pointers spans many places in the code, and I would have to change them all. Sometimes it's not easy to find them all, because the usage may be hidden by casts, the buffering, etc.
The data buffering is important. I have a lot of data in buffers, not just one char array. I am not sure whether the data can be made static, as some of the answers suggest; I will think about it.
If you don't want to use C++11, you can use std::auto_ptr, or, if you actually can, use std::unique_ptr.
But as far as I can see, you are using char * for what is probably an array. If so, don't use these smart pointers (at least not without custom deleters), and call delete[] instead of delete.
Also, if you need to pass a char *, you can use
std::vector<char> vec;
vec.push_back(...);
//...
send_data(&vec[0], vec.size());
If you are sure you really want to use char *, delete it in the caller; that is far better practice. Whoever allocates the memory is its owner, and the owner deletes it. It also removes a side effect from the callee and makes the callee easier to use for other developers, who won't expect that send_data will also delete their data.
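To make the convention concrete, here is a minimal sketch of that caller-owns pattern (the caller function, the buffer size and the transmission step are placeholders, not your actual code):

#include <cstddef>

bool send_data(const char *data, std::size_t data_length) {
    // ... check the type, then transmit as before, e.g. send(data, data_length, ...) ...
    return true;                  // no delete[] here - the buffer is not ours
}

void flush_buffer() {             // hypothetical caller
    char *data = new char[128];   // hypothetical size
    // ... fill the buffer from the buffering class ...
    send_data(data, 128);
    delete[] data;                // the code that allocated the buffer frees it
}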
If you want good design to prevent memory leaks, the answer is to use a container, and there are plenty of containers to choose from that don't require C++11. If you want it to be as "freeform raw data" as possible, then yes, you should use a newer standard and use unique or shared pointers - is there any particular reason you're still stuck in the last decade compiler-wise?
If you want to handle it the C way (which is what you're insisting on doing above), then it's really application dependent. If you meet the following constraints:
1) Only one thread will use the data at a time
2) The data size is never prohibitive
3) There's nothing else that would make it unreasonable to leave it sticking around
... then I recommend storing it in a static pointer, so that nobody ever needs to free it. That's what a lot of the standard C library functions do when they deal with strings.
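Under those constraints, the idea could look roughly like this (get_scratch_buffer is a hypothetical name; the buffer grows on demand, is reused on every call, and is deliberately never freed):

#include <cstddef>

char *get_scratch_buffer(std::size_t needed) {
    static char        *buffer   = 0;
    static std::size_t  capacity = 0;
    if (needed > capacity) {
        delete[] buffer;               // deleting a null pointer is a no-op
        buffer   = 0;                  // stay consistent if new throws below
        capacity = 0;
        buffer   = new char[needed];
        capacity = needed;
    }
    return buffer;                     // one owner (this function), nothing to track elsewhere
}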
C++ style would be to use the safe ptr wrappers.
C style, as here, means definitely leave it to the caller.
The call could be:
char data[256];
...
send(data, sizeof(data));
So no delete[] data is needed.
To be a bit safer, you could hide the original send and manage the data and its deletion separately - in C++ as a class, in C style as a couple of functions.
struct Data {
char* data;
size_t size;
};
void send(struct Data* data) {
if (!data->data) {
throw std::logic_error("..."); // from <stdexcept>; throw by value, not by pointer
}
_send(data->data, data->size);
delete[] data->data;
data->data = NULL;
}
I am trying to write a simple game using C++ and SDL. My question is: what is the best practice for storing class member variables?
MyObject obj;
MyObject* obj;
I have read a lot in similar questions about eliminating pointers as much as possible, but I remember that a few years back, the books I read used them a lot (for all non-trivial objects). Another thing is that SDL returns pointers from many of its functions, and therefore I would have to use "*" a lot when working with SDL objects.
Also, am I right in thinking that the only way to initialize the first one with something other than the default constructor is through an initializer list?
Generally, using value members is preferred over pointer members. However, there are some exceptions, e.g. (this list is probably incomplete and only contains the reasons I could come up with immediately):
1. When the members are huge (use sizeof(MyObject) to find out): the difference often doesn't matter for access, and stack size may become a concern.
2. When the objects come from another source, e.g. when there are factory functions creating pointers, there is often no alternative way to store the objects.
3. If the dynamic type of the object isn't known, using a pointer is generally the only alternative. However, this shouldn't be as common as it often is.
4. When there are more complicated relations than direct ownership, e.g. if an object is shared between different objects, using a pointer is the most reasonable approach.
In all of these cases you wouldn't use a pointer directly but rather a suitable smart pointer. For example, for 1. you might want to use a std::unique_ptr<MyObject>, and for 4. a std::shared_ptr<MyObject> is the best alternative. For 2. you might need to combine one of these smart pointer templates with a suitable deleter function to deal with the appropriate clean-up (e.g. for a FILE* obtained from fopen() you'd use fclose() as the deleter function; of course, this is a made-up example, as in C++ you would use I/O streams anyway).
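For instance, a minimal sketch of that FILE*/fclose example (read_config and its argument are made-up names):

#include <cstdio>
#include <memory>

void read_config(const char *path) {
    // fclose is the deleter, so the handle is closed even if an exception is thrown
    std::unique_ptr<FILE, int (*)(FILE *)> file(std::fopen(path, "r"), &std::fclose);
    if (!file)
        return;                          // fopen failed, nothing to clean up
    // ... use file.get() with the usual C stdio calls ...
}                                        // fclose runs here automatically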
In general, I normally initialize my objects entirely in the member initializer list, independently of how the members are represented exactly. However, yes, if your member objects require constructor arguments, these need to be passed from the member initializer list.
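For illustration, a small hypothetical sketch (Texture and Sprite are made-up names): Texture has no default constructor, so the only place the value member can be initialized is the member initializer list.

#include <string>

class Texture {
public:
    explicit Texture(const std::string &file) { /* load from file */ (void)file; }
    // note: no default constructor
};

class Sprite {
public:
    explicit Sprite(const std::string &file)
        : texture_(file)      // must happen here, not in the constructor body
        , visible_(true)
    {}
private:
    Texture texture_;         // value member, no pointer needed
    bool    visible_;
};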
First I would like to say that I completely agree with Dietmar Kühl's and Mats Petersson's answers. However, you also have to take into account that SDL is a pure C library where the majority of the API functions expect C pointers to structs that can own big chunks of data. So you should not allocate them on the stack (you should use the new operator to allocate them on the heap). Furthermore, because the C language does not have smart pointers, you need to use std::unique_ptr::get() to recover the C pointer that the std::unique_ptr owns before passing it to SDL API functions. This can be quite dangerous, because you have to make sure that the std::unique_ptr does not go out of scope while SDL is using the C pointer (a similar problem exists with std::shared_ptr). Otherwise you will get a segfault, because the std::unique_ptr will delete the C pointer while SDL is still using it.
Whenever you need to call pure C libraries inside a C++ program, I recommend the use of RAII. The main idea is that you create a small wrapper class that owns the C pointer and also calls the SDL API functions for you. Then you use the class destructor to free all your C pointers.
Example:
class SDLAudioWrap {
public:
SDLAudioWrap() { // constructor
// allocate SDL_AudioSpec
}
~SDLAudioWrap() { // destructor
// free SDL_AudioSpec
}
// here you wrap all SDL API functions that involve
// SDL_AudioSpec and that you will use in your program
// It is quite simple
void SDL_do_some_stuff() {
::SDL_do_some_stuff(ptr); // call the original C function (global scope, not this wrapper)
// SDL_do_some_stuff(SDL_AudioSpec* ptr)
}
private:
SDL_AudioSpec* ptr;
};
Now your program is exception safe and you don't have the possible issue of having smart pointers deleting your C pointer while SDL is using it.
UPDATE 1: I forgot to mention that, because SDL is a C library, you will need a custom deleter in order to properly manage its C structs with smart pointers.
Concrete example: the GNU Scientific Library (GSL). Its integration routines require the allocation of a struct called "gsl_integration_workspace". In this case, you can use the following code to ensure that your code is exception safe:
auto deleter= [](gsl_integration_workspace* ptr) {
gsl_integration_workspace_free(ptr);
};
std::unique_ptr<gsl_integration_workspace, decltype(deleter)> ptr4 (
gsl_integration_workspace_alloc (2000), deleter);
This is another reason why I prefer wrapper classes.
In case of initialization, it depends on what the options are, but yes, a common way is to use an initializer list.
The "don't use pointers unless you have to" is good advice in general. Of course, there are times when you have to - for example when an object is being returned by an API!
Also, using new will waste quite a bit of memory and CPU time if MyObject is small. Each object created with new has an overhead of around 16-48 bytes in a typical modern OS, so if your object is only a couple of simple types, then you may well have more overhead than actual storage. In a larger application, this can easily add up to a huge amount. And of course, a call to new or delete will most likely take some hundreds or thousands of cycles (above and beyond the time spent in the constructor). So you end up with code that runs slower and takes more memory - and of course, there's always some risk that you mess up and have memory leaks, causing your program to potentially crash due to running out of memory when it's not REALLY out of memory.
And as the famous Murphy's law states, these things just have to happen at the worst possible and most annoying times - when you have just done some really good work, or when you've just beaten a level in a game, or something. So avoiding those risks whenever possible is definitely a good idea.
Well, creating the object by value is a lot better than using pointers, because it's less error-prone. Your code doesn't really show the difference, so compare:
MyObj* foo;
foo = new MyObj;
foo->CanDoStuff(stuff);
//Later when foo is not needed
delete foo;
The other way is
MyObj foo;
foo.CanDoStuff(stuff);
There is less memory management to worry about, but really it's up to you.
As the previous answers state, "don't use pointers unless you have to" is good advice for general programming, but there are many issues that could ultimately make you choose pointers. Furthermore, in your initial question you are not considering the option of using references. So you can face three kinds of member variables in a class:
MyObject obj;
MyObject* obj;
MyObject& obj;
I tend to always consider the reference option rather than the pointer one, because you don't need to worry about whether the pointer is NULL or not.
Also, as Dietmar Kühl pointed out, a good reason for selecting pointers is:
If the dynamic type of the object isn't known, using a pointer is
generally the only alternative. However, this shouldn't be as common
as it often is.
I think this point is of particular importance when you are working on a big project. If you have many of your own classes, spread over many source files, and you use them in many parts of your code, you will end up with long compilation times. If you use plain class instances (instead of pointers or references), a simple change in one of your class headers will trigger the recompilation of every file that includes the modified class. One possible solution for this issue is the concept of forward declaration, which relies on pointers or references (you can find more info here).
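As a rough illustration of that forward-declaration technique (Widget, Engine and the file names are hypothetical):

// widget.h - only widget.cpp needs to include engine.h, so edits to engine.h
// do not force every user of Widget to recompile.
class Engine;                  // forward declaration, no #include needed here

class Widget {
public:
    explicit Widget(Engine &engine);
private:
    Engine *engine_;           // a pointer (or reference) member is enough here
};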
I've been looking into this for the past few days, and so far I haven't really found anything convincing other than dogmatic arguments or appeals to tradition (i.e. "it's the C++ way!").
If I'm creating an array of objects, what is the compelling reason (other than ease) for using:
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=new my_object [MY_ARRAY_SIZE];
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i]=my_object(i);
over
#define MEMORY_ERROR -1
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
if (my_array==NULL) throw MEMORY_ERROR;
for (int i=0;i<MY_ARRAY_SIZE;++i) new (my_array+i) my_object (i);
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
delete [] my_array;
and the other you clean up with:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
free(my_array);
I'm out for a compelling reason. Appeals to the fact that it's C++ (not C) and therefore malloc and free shouldn't be used isn't -- as far as I can tell -- compelling as much as it is dogmatic. Is there something I'm missing that makes new [] superior to malloc?
I mean, as best I can tell, you can't even use new [] -- at all -- to make an array of things that don't have a default, parameterless constructor, whereas the malloc method can thusly be used.
I'm out for a compelling reason.
It depends on how you define "compelling". Many of the arguments you have thus far rejected are certainly compelling to most C++ programmers, as your suggestion is not the standard way to allocate naked arrays in C++.
The simple fact is this: yes, you absolutely can do things the way you describe. There is no reason that what you are describing will not function.
But then again, you can have virtual functions in C. You can implement classes and inheritance in plain C, if you put the time and effort into it. Those are entirely functional as well.
Therefore, what matters is not whether something can work. But more on what the costs are. It's much more error prone to implement inheritance and virtual functions in C than C++. There are multiple ways to implement it in C, which leads to incompatible implementations. Whereas, because they're first-class language features of C++, it's highly unlikely that someone would manually implement what the language offers. Thus, everyone's inheritance and virtual functions can cooperate with the rules of C++.
The same goes for this. So what are the gains and the losses from manual malloc/free array management?
I can't say that any of what I'm about to say constitutes a "compelling reason" for you. I rather doubt it will, since you seem to have made up your mind. But for the record:
Performance
You claim the following:
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
This statement suggests that the efficiency gain is primarily in the construction of the objects in question. That is, which constructors are called. The statement presupposes that you don't want to call the default constructor; that you use a default constructor just to create the array, then use the real initialization function to put the actual data into the object.
Well... what if that's not what you want to do? What if what you want to do is create an empty array, one that is default constructed? In this case, this advantage disappears entirely.
Fragility
Let's assume that each object in the array needs to have a specialized constructor or something called on it, such that initializing the array requires this sort of thing. But consider your destruction code:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
For a simple case, this is fine. You have a macro or const variable that says how many objects you have. And you loop over each element to destroy the data. That's great for a simple example.
Now consider a real application, not an example. How many different places will you be creating an array in? Dozens? Hundreds? Each and every one will need to have its own for loop for initializing the array. Each and every one will need to have its own for loop for destroying the array.
Mis-type this even once, and you can corrupt memory. Or not delete something. Or any number of other horrible things.
And here's an important question: for a given array, where do you keep the size? Do you know how many items you allocated for every array that you create? Each array will probably have its own way of knowing how many items it stores. So each destructor loop will need to fetch this data properly. If it gets it wrong... boom.
And then we have exception safety, which is a whole new can of worms. If one of the constructors throws an exception, the previously constructed objects need to be destructed. Your code doesn't do that; it's not exception-safe.
Now, consider the alternative:
delete[] my_array;
This can't fail. It will always destroy every element. It tracks the size of the array, and it's exception-safe. So it is guaranteed to work. It can't not work (as long as you allocated it with new[]).
Of course, you could say that you could wrap the array in an object. That makes sense. You might even template the object on the element type of the array. That way, all the destructor code is the same. The size is contained in the object. And maybe, just maybe, you realize that the user should have some control over the particular way the memory is allocated, so that it's not just malloc/free.
Congratulations: you just re-invented std::vector.
Which is why many C++ programmers don't even type new[] anymore.
Flexibility
Your code uses malloc/free. But let's say I'm doing some profiling. And I realize that malloc/free for certain frequently created types is just too expensive. I create a special memory manager for them. But how do I hook all of the array allocations up to it?
Well, I have to search the codebase for any location where you create/destroy arrays of these types. And then I have to change their memory allocators accordingly. And then I have to continuously watch the codebase so that someone else doesn't change those allocators back or introduce new array code that uses different allocators.
If I were instead using new[]/delete[], I could use operator overloading. I simply provide an overload for operators new[] and delete[] for those types. No code has to change. It's much more difficult for someone to circumvent these overloads; they have to actively try to. And so forth.
So I get greater flexibility and reasonable assurance that my allocators will be used where they should be used.
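For concreteness, a sketch of that hook might look like this (Particle, pool_alloc and pool_free are hypothetical names standing in for your type and your memory manager):

#include <cstddef>

void *pool_alloc(std::size_t bytes);   // your custom memory manager
void  pool_free(void *p);

class Particle {
public:
    // every `new Particle[n]` / `delete[] p` in the codebase now goes through
    // the custom allocator without touching a single call site
    static void *operator new[](std::size_t bytes) { return pool_alloc(bytes); }
    static void  operator delete[](void *p)        { pool_free(p); }
    // ... data members and the rest of the class stay unchanged ...
};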
Readability
Consider this:
my_object *my_array = new my_object[MY_ARRAY_SIZE];
for (int i=0; i<MY_ARRAY_SIZE; ++i)
my_array[i]=my_object(i);
//... Do stuff with the array
delete [] my_array;
Compare it to this:
my_object *my_array = (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE);
if(my_array==NULL)
throw MEMORY_ERROR;
int i;
try
{
for(i=0; i<MY_ARRAY_SIZE; ++i)
new(my_array+i) my_object(i);
}
catch(...) //Exception safety.
{
for(; i>0; --i) //The i-th object was not successfully constructed
my_array[i-1].~my_object();
throw;
}
//... Do stuff with the array
for(int i=MY_ARRAY_SIZE-1; i>=0; --i)
my_array[i].~my_object();
free(my_array);
Objectively speaking, which one of these is easier to read and understand what's going on?
Just look at this statement: (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE). This is a very low level thing. You're not allocating an array of anything; you're allocating a hunk of memory. You have to manually compute the size of the hunk of memory to match the size of the object * the number of objects you want. It even features a cast.
By contrast, new my_object[10] tells the story. new is the C++ keyword for "create instances of types". my_object[10] is a 10 element array of my_object type. It's simple, obvious, and intuitive. There's no casting, no computing of byte sizes, nothing.
The malloc method requires learning how to use malloc idiomatically. The new method requires just understanding how new works. It's much less verbose and much more obvious what's going on.
Furthermore, after the malloc statement, you do not in fact have an array of objects. malloc simply returns a block of memory that you have told the C++ compiler to pretend is a pointer to an object (with a cast). It isn't an array of objects, because objects in C++ have lifetimes. And an object's lifetime does not begin until it is constructed. Nothing in that memory has had a constructor called on it yet, and therefore there are no living objects in it.
my_array at that point is not an array; it's just a block of memory. It doesn't become an array of my_objects until you construct them in the next step. This is incredibly unintuitive to a new programmer; it takes a seasoned C++ hand (one who probably learned from C) to know that those aren't live objects and should be treated with care. The pointer does not yet behave like a proper my_object*, because it doesn't point to any my_objects yet.
By contrast, you do have living objects in the new[] case. The objects have been constructed; they are live and fully-formed. You can use this pointer just like any other my_object*.
Fin
None of the above says that this mechanism isn't potentially useful in the right circumstances. But it's one thing to acknowledge the utility of something in certain circumstances. It's quite another to say that it should be the default way of doing things.
If you do not want your memory initialized by implicit constructor calls, and just need assured memory allocation for placement new, then it is perfectly fine to use malloc and free instead of new[] and delete[].
The compelling reasons for using new over malloc are that new provides implicit initialization through constructor calls, saving you additional memset or related function calls after a malloc, and that with new you do not need to check for NULL after every allocation; enclosing exception handlers will do the job, saving you the redundant error checking that malloc requires.
Neither of these compelling reasons applies to your usage.
Which one is more efficient can only be determined by profiling; there is nothing wrong with the approach you have now. On a side note, I don't see a compelling reason to use malloc over new[] either.
I would say neither.
The best way to do it would be:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
for (int i=0;i<MY_ARRAY_SIZE;++i)
{
my_array.push_back(my_object(i));
}
This is because internally vector is probably doing the placement new for you. It also manages all the other memory-management problems that you are not taking into account.
You've reimplemented new[]/delete[] here, and what you have written is pretty common in developing specialized allocators.
The overhead of calling simple constructors will take little time compared to the allocation. It's not necessarily 'much more efficient' -- it depends on the complexity of the default constructor, and of operator=.
One nice thing that has not been mentioned yet is that the array's size is known to new[]/delete[]. delete[] just does the right thing and destructs all elements when asked. Dragging an additional variable (or three) around so you know exactly how to destroy the array is a pain. A dedicated collection type would be a fine alternative, however.
new[]/delete[] are preferable for convenience. They introduce little overhead, and could save you from a lot of silly errors. Are you compelled enough to take away this functionality and use a collection/container everywhere to support your custom construction? I've implemented this allocator -- the real mess is creating functors for all the construction variations you need in practice. At any rate, you often have a more exact execution at the expense of a program which is often more difficult to maintain than the idioms everybody knows.
IMHO they're both ugly; it's better to use vectors. Just make sure to allocate the space in advance for performance.
Either:
std::vector<my_object> my_array(MY_ARRAY_SIZE);
Or, if you want to initialize all entries with a given value:
my_object basic;
std::vector<my_object> my_array(MY_ARRAY_SIZE, basic);
Or if you don't want to construct the objects but do want to reserve the space:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
Then, if you need to access it as a C-style pointer array (just make sure you don't add elements while keeping the old pointer around, but you couldn't do that with regular C-style arrays anyway):
my_object* carray = &my_array[0];
my_object* carray = &my_array.front(); // Or the C++ way
Access individual elements:
my_object value = my_array[i]; // The non-safe c-like faster way
my_object value = my_array.at(i); // With bounds checking, throws range exception
Typedef for pretty:
typedef std::vector<my_object> object_vect;
Pass them around functions with references:
void some_function(const object_vect& my_array);
EDIT:
In C++11 there is also std::array. The problem with it, though, is that its size is a template parameter, so you can't make differently sized ones at runtime, and you can't pass it into functions unless they expect that exact size (or are template functions themselves). But it can be useful for things like buffers.
std::array<int, 1024> my_array;
EDIT2:
Also in C++11 there is a new emplace_back as an alternative to push_back. This basically allows you to 'move' your object (or construct your object directly in the vector) and saves you a copy.
std::vector<SomeClass> v;
SomeClass bob {"Bob", "Ross", 10.34f};
v.emplace_back(bob);
v.emplace_back("Another", "One", 111.0f); // <- Note this doesn't work with initialization lists ☹
Oh well, I was thinking that given the number of answers there would be no reason to step in... but I guess I am drawn in like the others. Let's go:
1. Why your solution is broken
2. C++11 new facilities for handling raw memory
3. Simpler way to get this done
4. Advice
1. Why your solution is broken
First, the two snippets you presented are not equivalent: new[] just works, while yours fails horribly in the presence of exceptions.
What new[] does under the covers is keep track of the number of objects that were constructed, so that if an exception occurs during, say, the 3rd constructor call, it properly calls the destructors for the 2 already-constructed objects.
Your solution however fails horribly:
either you don't handle exceptions at all (and leak horribly)
or you just try to call the destructors on the whole array even though it's half built (likely crashing, but who knows with undefined behavior)
So the two are clearly not equivalent. Yours is broken.
2. C++11 new facilities for handling raw memory
In C++11, the committee members realized how much we like fiddling with raw memory, and they introduced facilities to help us do so more efficiently and more safely.
Check cppreference's <memory> brief. This example shows off the new goodies (*):
#include <iostream>
#include <string>
#include <memory>
#include <algorithm>
int main()
{
const std::string s[] = {"This", "is", "a", "test", "."};
std::string* p = std::get_temporary_buffer<std::string>(5).first;
std::copy(std::begin(s), std::end(s),
std::raw_storage_iterator<std::string*, std::string>(p));
for(std::string* i = p; i!=p+5; ++i) {
std::cout << *i << '\n';
i->~basic_string<char>();
}
std::return_temporary_buffer(p);
}
Note that get_temporary_buffer is no-throw; it returns the number of elements for which memory has actually been allocated as the second member of the pair (hence the .first to get the pointer).
(*) Or perhaps not so new as MooingDuck remarked.
3. Simpler way to get this done
As far as I am concerned, what you really seem to be asking for is a kind of typed memory pool, where some slots may not have been initialized.
Do you know about boost::optional ?
It is basically an area of raw memory that can fit one item of a given type (the template parameter) but starts out holding nothing. It has an interface similar to a pointer and lets you query whether or not the memory is actually occupied. Finally, using the in-place factories, you can safely use it without copying objects, if that is a concern.
Well, your use case really looks like a std::vector< boost::optional<T> > to me (or perhaps a deque?)
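A rough sketch of how that reads in practice (using int as a stand-in for T):

#include <boost/optional.hpp>
#include <vector>

int example() {
    // every slot has storage for a T, but only the slots you assign to actually construct one
    std::vector< boost::optional<int> > slots(10);  // 10 empty slots, no ints constructed yet
    slots[3] = 42;                                  // constructs an int in slot 3
    int value = slots[3] ? *slots[3] : 0;           // query occupancy, then access
    slots[3] = boost::none;                         // destroys the contained int again
    return value;
}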
4. Advice
Finally, in case you really want to do it on your own, whether for learning or because no STL container really suits you, I do suggest you wrap this up in an object to avoid the code sprawling all over the place.
Don't forget: Don't Repeat Yourself!
With an object (templated) you can capture the essence of your design in one single place, and then reuse it everywhere.
And of course, why not take advantage of the new C++11 facilities while doing so :) ?
You should use vectors.
Dogmatic or not, that is exactly what ALL the STL containers do to allocate and initialize.
They use an allocator that allocates uninitialized space, and then they construct the elements in that space.
If this (as many people like to say) "is not C++", how can the standard library itself be implemented that way?
If you just don't want to use malloc / free, you can allocate "bytes" with just new char[]
myobject* pvect = reinterpret_cast<myobject*>(new char[sizeof(myobject)*vectsize]);
for(int i=0; i<vectsize; ++i) new(pvect+i) myobject(params);
...
for(int i=vectsize-1; i>=0; --i) (pvect+i)->~myobject();
delete[] reinterpret_cast<char*>(pvect);
This lets you take advantage of the separation between allocation and initialization, while still benefiting from new's allocation exception mechanism.
Note that, by putting my first and last lines into a myallocator<myobject> class and the second and second-to-last into a myvector<myobject> class, we have ... just reimplemented std::vector<myobject, std::allocator<myobject> >.
What you have shown here is actually the way to go when using a memory allocator different from the system's general allocator - in that case you would allocate your memory using the allocator (alloc->malloc(sizeof(my_object))) and then use the placement new operator to initialize it. This has many advantages for efficient memory management and is quite common in the standard template library.
If you are writing a class that mimics the functionality of std::vector or needs control over memory allocation and object creation (insertion into the array, deletion, etc.) - that's the way to go. In this case, it's not a question of "not calling the default constructor". It becomes a question of being able to "allocate raw memory, memmove old objects there and then create new objects at the old addresses", of being able to use some form of realloc, and so on. Unquestionably, custom allocation + placement new are way more flexible... I know, I'm a bit drunk, but std::vector is for sissies... As for efficiency - one can write their own version of std::vector that will be AT LEAST as fast (and most likely smaller, in terms of sizeof()) with the most-used 80% of std::vector's functionality in, probably, less than 3 hours.
my_object * my_array=new my_object [10];
This will be an array of fully constructed objects.
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
This will be a block of memory the size of your objects, but the objects themselves may be "broken". If your class has virtual functions, for instance, then you won't be able to call them. Note that it's not just your member data that may be inconsistent; the entire object is actually "broken" (for lack of a better word).
I'm not saying it's wrong to do the second one, just as long as you know this.
I'm writing code and until now I was using structures like this:
struct s{
enum Types { zero = 0, one, two };
unsigned int type;
void* data;
};
I needed some generic structure to store data from different classes, and I wanted to use it in a std::vector, which is the reason why I can't use templates. What's the better option: unions or void pointers?
A void pointer lets me allocate only as much space as I need, but C++ is a strongly typed language for a reason, and casting everywhere I need to use the data is not the way C++ code should be designed. From what I have read, void pointers shouldn't be used unless there is no alternative.
That alternative could be unions. They come with C++ and use the same memory space for every member, very much like void pointers. However, they come at a price - the allocated space is the size of the largest element in the union, and in my case the differences between sizes are big.
This is rather a stylistic, "using the language correctly" problem, as both ways accomplish what I need, but I can't decide whether nicely styled C++ code is worth that wasted memory (even though memory these days isn't a big concern).
Consider boost::any or boost::variant if you want to store objects of heterogeneous types.
And before deciding which one to use, have a look at the comparison:
Boost.Variant vs. Boost.Any
Hopefully, it will help you to make the correct decision. Choose one, and use any container from the standard library to store the objects: std::vector<boost::any>, std::vector<boost::variant<...>>, or any other.
boost::variant.
Basically, it is a type-safe union, and in this case, it seems like unions are by far the most appropriate answer. A void* could be used, but that would mean dynamic allocation, and you would have to maintain the Types enum, and the table for casting.
Memory constraints could make void* an acceptable choice, but it's not the 'neat' answer, and I wouldn't go for it until both boost::variant and a plain union have been shown to be unacceptable.
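For illustration, a minimal sketch of boost::variant in a vector (the particular types in the variant are just examples):

#include <boost/variant.hpp>
#include <string>
#include <vector>

// the variant stores exactly one of the listed types at a time and always knows which one it holds
typedef boost::variant<int, double, std::string> Value;

void example() {
    std::vector<Value> values;
    values.push_back(42);                          // holds an int
    values.push_back(std::string("hello"));        // holds a std::string

    int  i = boost::get<int>(values[0]);           // throws boost::bad_get on a type mismatch
    int *p = boost::get<int>(&values[1]);          // pointer form returns NULL instead of throwing
    (void)i; (void)p;
}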
If your classes have enough in common to be put in the same container, give them a base class with a virtual destructor, and possibly a virtual member function to retrieve your type code - although at that point not only would dynamic_cast be more appropriate, it would also be reasonable to explore whether your classes have enough in common to give them a more complete common interface (a sketch of this approach follows below).
Otherwise consider providing a custom container class with appropriately typed data members to hold instances of all the different classes you need to put into it.
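A minimal sketch of the base-class approach mentioned above (Shape, Circle and Square are hypothetical stand-ins for your classes):

#include <cstddef>
#include <vector>

struct Shape {
    virtual ~Shape() {}                    // virtual destructor, so delete through Shape* is safe
    virtual int type() const = 0;          // optional type code, if you still want one
};

struct Circle : Shape { int type() const { return 1; } };
struct Square : Shape { int type() const { return 2; } };

void example() {
    std::vector<Shape*> shapes;
    shapes.push_back(new Circle);
    shapes.push_back(new Square);
    // dynamic_cast<Circle*>(shapes[0]) recovers the concrete type when needed
    for (std::size_t i = 0; i < shapes.size(); ++i)
        delete shapes[i];                  // correct thanks to the virtual destructor
}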
I've got a lightweight templated class that contains a couple of member objects that are very rarely used, and so I'd like to avoid calling their constructors and destructors except in the rare cases when I actually use them.
To do that, I "declare" them in my class like this:
template <class K, class V> class MyClass
{
public:
MyClass() : wereConstructorsCalled(false) {/* empty */}
~MyClass() {if (wereConstructorsCalled) MyCallPlacementDestructorsFunc();}
[...]
private:
bool wereConstructorsCalled;
mutable char keyBuf[sizeof(K)];
mutable char valBuf[sizeof(V)];
};
... and then I use placement new and placement delete to set up and tear down the objects only when I actually need to do so.
Reading the C++ FAQ, it says that when using placement new, I need to be careful that the memory is properly aligned, or I will run into trouble.
My question is, will the keyBuf and valBuf arrays be properly aligned in all cases, or is there some extra step I need to take to make sure they will be aligned properly? (if so, a non-platform-dependent step would be preferable)
There's no guarantee that you'll get the appropriate alignment. Arrays are in general only guaranteed to be aligned for the member type. A char array is aligned for storage of char.
The one exception is that char and unsigned char arrays allocated with new are given maximum alignment, so that you can store arbitrary types into them. But this guarantee doesn't apply in your case as you're avoiding heap allocation.
TR1 and C++0x add some very helpful types though:
std::alignment_of and std::aligned_storage together give you a portable (and functioning) answer.
std::alignment_of<T>::value gives you the alignment required for a type T. std::aligned_storage<A, S>::type gives you a POD type with alignment A and size S. That means that you can safely write your object into a variable of type std::aligned_storage<A, S>::type.
(In TR1, the namespace is std::tr1, rather than just std)
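As a rough sketch, applying this to the class from the question might look like the following (the key()/val() accessor helpers are illustrative additions, not part of the original class):

#include <new>
#include <type_traits>

template <class K, class V>
class MyClass {
    // ... constructor, destructor and wereConstructorsCalled flag as before ...
private:
    // the storage members now carry the alignment of K and V, so placement new into them is safe
    typename std::aligned_storage<sizeof(K), std::alignment_of<K>::value>::type keyBuf;
    typename std::aligned_storage<sizeof(V), std::alignment_of<V>::value>::type valBuf;

    K *key() { return reinterpret_cast<K*>(&keyBuf); }   // target for placement new
    V *val() { return reinterpret_cast<V*>(&valBuf); }
};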
May I ask why you want to place them into a char buffer? Why not just keep pointers to K and V and instantiate them when you need them?
Maybe I didn't understand your question, but can't you just make keyBuf a pointer, set it initially to NULL (not allocated), and allocate it the first time you need it?
What you're trying to do with placement new seems risky business and bad coding style.
Anyway, data alignment is implementation-dependent.
If you want to change the structure packing, use pragma pack:
#pragma pack(push,x)
// class code here
#pragma pack(pop) // to restore original pack value
If x is 1, there will be no padding between your members.
Here's a link to read:
http://www.cplusplus.com/forum/general/14659/
I found this answer posted by SiCrane at http://www.gamedev.net/community/forums/topic.asp?topic_id=455233 :
However, for static allocations, it's less wasteful to declare the memory block in a union with other types. Then the memory block will be guaranteed to be aligned to the alignment of the most restrictive type in the union. It's still pretty ugly either way.
Sounds like a union might do the trick!
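For example, a minimal sketch of that union trick (RawSlot and the particular "donor" members are assumptions, not a complete portable solution - pick donors that match whatever you plan to construct in the buffer):

template <class K>
union RawSlot {
    char   bytes[sizeof(K)];   // the storage you actually placement-new into
    double d;                  // alignment "donors": the buffer is aligned at least
    long   l;                  // as strictly as the most demanding member here
    void  *p;
};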
I recommend that you look at the boost::optional template. It does what you need; even if you can't use it, you should probably look at its implementation.
It uses alignment_of and type_with_alignment for its alignment calculations and guarantees.
To make a very, very long story very, very short: this isn't going to help your performance any, it will cause lots of headaches, and it won't be long before you get sucked into writing your own memory manager.
Placement new is fine for a POD (but won't save you anything) but if you have a constructor at all then it's not going to work at all.
You also can't depend on the value of your boolean variable if you use placement new.
Placement new has uses but not really for this.