There was an article I found long ago (I can't find it at the moment) which gives reasons why the new keyword in C++ is bad. I can't remember all of the reasons, but the two I remember most are that you must match new with delete and new[] with delete[], and that you cannot wrap new in a #define the way you could with malloc.
I am designing a language, so I'd like to ask: how would you change C++ so that new is friendlier? Feel free to point out problems with new and to cite articles. I wish I could find the link to that article, but I remember it was long and was written by a professor at (IIRC) a well-known school.
I cannot see any reason to replace the new keyword with something else (and it seems the C++ committee agrees with me). It is clear and does what it should. You can override operator new in your class, so there is no need for #define tricks.
To eliminate the new[]/delete[] problem you can use std::vector.
If you want to use a smart pointer you can, but I want to control when a smart pointer is used. That's why I like how it works in C++: high-level behavior with the ability to control low-level details.
Problem: matching new with delete and new[] with delete[]
Not really a big deal.
You should be wrapping memory allocation inside a class, so this does not really affect normal users. A single object can be wrapped in a smart pointer, while an array can be represented by std::vector<>.
Problem: you cannot use #define with new as you could with malloc
The reason to mess with malloc like this was to introduce your own memory-management layer between your app and the standard one, because in C you were not allowed to write your own version of malloc. In C++ it is quite legal to write your own version of operator new, which makes this trick unnecessary.
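For illustration, a minimal sketch of that idea: replacing the global operator new and operator delete puts your own layer under every new/delete in the program, which is what the #define malloc trick was trying to achieve (the logging here just stands in for a real memory-management layer):

#include <cstdio>
#include <cstdlib>
#include <new>

// Replacing the global allocation functions routes every new/delete
// through your own layer; no macro tricks required.
void* operator new(std::size_t size)
{
    void* p = std::malloc(size);                  // hand off to your real allocator here
    std::printf("allocating %zu bytes\n", size);  // stand-in for bookkeeping/profiling
    if (!p) throw std::bad_alloc();
    return p;
}

void operator delete(void* p) noexcept
{
    std::free(p);
}

int main()
{
    int* x = new int(42);   // goes through the replacement above
    delete x;               // and so does the matching delete
}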
I'd give it the semantics of new in C# (more or less):
Allocates memory for the object.
Initializes the memory by setting the member variables to their default values (generally 0 for values, null for references).
Initializes the object's dynamic binding mechanism (vtables in C++, type def tables for managed VMs).
Calls the constructor, at which point virtual calls work as expected.
For a language without garbage collection (eww for a new language at this point), return a smart_ptr or similar from the call.
Also, make all objects either value types or reference types, so you don't have to keep an explicit smart_ptr. Only allow new to heap-allocate for reference types, and make sure it contains information to properly call the destructor. For value types, new calls the constructor on memory from the stack.
Use Garbage Collection so that you never need to match new with anything.
By using the STL container classes and the various boost::smart_ptr types, there's little need to ever explicitly call new or delete in your C++ code.
In the few places where you might need to call new (e.g., to initialize a smart pointer), use the Named Constructor Idiom to return a pointer to your class wrapped in, e.g., a boost::shared_ptr.
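For illustration, a minimal sketch of the Named Constructor Idiom (shown here with std::shared_ptr; boost::shared_ptr works the same way, and the class name is made up):

#include <memory>

class Connection {
public:
    // Named constructor: the only way to obtain a Connection is already wrapped.
    static std::shared_ptr<Connection> create(int port)
    {
        return std::shared_ptr<Connection>(new Connection(port));
    }

private:
    explicit Connection(int port) : port_(port) {}  // not publicly constructible
    int port_;
};

int main()
{
    std::shared_ptr<Connection> conn = Connection::create(8080);  // no naked new at the call site
}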
But C++ and the STL work very very hard to allow you to treat most objects as value objects, so you can construct objects rather than pointers and just use them.
Given all this, there's little need to replace the new operator -- and doing so would introduce a host of problems, whether by requiring a garbage collector, or by reducing the fine low-level control C++ offers programmers.
If your new language is garbage collected, you can avoid the new keyword. That's what Python did (and Lisp did almost five decades ago!). Also see the answer Peter Norvig provided to a similar question here. (Is no "news" good news?)
Sometimes you want to replace the constructor with a factory. This is a well-known refactoring: Replace Constructor with Factory Method. So perhaps this is what the article meant?
Incidentally, you will often see straight calls to new replaced with calls to a Factory Method.
DI frameworks such as Unity take this concept to another level. As you can see in the following C# code, there is no "new" used to create the IMyClass implementation:
IUnityContainer myContainer = new UnityContainer();
myContainer.RegisterType<IMyClass, SomeClass>();
IMyClass thing = myContainer.Resolve<IMyClass>();
The reason that C++ has a separate new operator (or C's malloc) is primarily so that objects can be created whose lifetimes exceed the scope of the function that creates them.
If you had tail-call elimination and continuations, you wouldn't care: the objects could all be created on the stack and have unlimited extent, since an object can exist until you call the continuation that corresponds to it going out of scope and being destructed. You might then need something to garbage collect or otherwise compact the stack so it doesn't fill up with no-longer-required objects (Chicken Scheme and TinyOS 2 are two different examples of giving the effect of dynamic memory without dynamic memory at either runtime or compile time; Chicken Scheme doesn't allow for RAII and TinyOS doesn't allow for true dynamic allocation). For a large amount of code, though, such a scheme wouldn't be vastly different from RAII with the facility to change the order in which the objects are destructed.
Related
While reading about the libssh library, I saw that they specifically say:
libssh follows the allocate-it-deallocate-it pattern. Each object that you allocate using xxxxx_new() must be deallocated using xxxxx_free()
Is this something that comes from it being a C library (where new and delete don't exist) rather than a C++ library, or is it common practice to forgo new and delete and manually create and destroy objects using the xxxx_new and xxxx_free pattern? If it is common practice, what are its benefits over new and delete and the constructors and destructors they call?
[EDIT] Added a link to where I read this on "libssh library" above, for those asking.
A first glance at the link you provided reveals that libssh uses the xxxx_new() functions as combined allocator/constructor calls; it's really just a standard naming convention for factory functions. Likewise, xxxx_free() acts as a combined destructor/deallocator.
Combining allocation and construction into a single function call is a good idea whenever a library wants to provide typesafe opaque pointers to its user code: To compile the user code, the compiler only needs to know that the type exists and that it's distinct from any other type. There is no need to have the full class/struct declaration in a public header.
This approach is not very popular with C++ libraries, because they generally want their objects to behave like any normal C++ object (which means that the pointers/references must not be opaque to the compiler). But if a library provides a C interface, such factory functions make it unlikely that you get weird errors due to users passing in pointers to uninitialized objects (forgotten constructor call), or screwing up the allocation of your objects.
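A sketch of what such a C-style interface looks like (widget and its functions are made up): user code only ever sees an incomplete type, while the factory functions combine allocation with initialization:

// ---- public header (all that user code sees) ----
struct widget;                        // incomplete type: layout stays hidden
widget* widget_new(int size);         // allocates and fully initializes
void    widget_free(widget* w);       // tears down and deallocates

// ---- library implementation (private) ----
#include <cstdlib>

struct widget {
    int size;
    // ... other fields the user never needs to know about
};

widget* widget_new(int size)
{
    widget* w = static_cast<widget*>(std::malloc(sizeof(widget)));
    if (w) w->size = size;            // the "constructor" part of the factory
    return w;
}

void widget_free(widget* w)
{
    std::free(w);                     // the "destructor" part
}

int main()
{
    widget* w = widget_new(16);
    widget_free(w);
}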
Is this something that comes from it being a C library rather than a C++ library where new and delete didn't exist
Most probably yes. There is often a need for more initialization than the plain memory allocation that malloc() provides in C code, which is similar to the way new calls the constructors of the class/struct instances it creates.
or is it a common practice to forget about new and delete and manually create and delete objects using the xxxx_new and xxxx_free pattern?
No, that's not common practice in C++.
The way to deal with dynamically allocated instances and ownership is to use the functions and smart-pointer classes from the standard C++ dynamic memory management utilities.
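For example, a hypothetical xxxxx_new()/xxxxx_free() pair (placeholders, not the real libssh API) can be handed to std::unique_ptr with a custom deleter so the free function is called automatically:

#include <memory>

// Placeholder C-style API in the xxxxx_new()/xxxxx_free() style.
struct session { int id; };
session* session_new()            { return new session{1}; }
void     session_free(session* s) { delete s; }

int main()
{
    // unique_ptr owns the session and calls session_free() automatically.
    std::unique_ptr<session, void(*)(session*)> s(session_new(), &session_free);
    // use s.get() wherever the C API expects a raw session*
}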
I am trying to write a simple game using C++ and SDL. My question is: what is the best practice for storing class member variables?
MyObject obj;
MyObject* obj;
I read a lot in similar questions about eliminating pointers as much as possible, but I remember that a few years back the books I read used them a lot (for all non-trivial objects). Another thing is that SDL returns pointers from many of its functions, and therefore I would have to use "*" a lot when working with SDL objects.
Also, am I right in thinking that the only way to initialize the first one with something other than the default constructor is through the initializer list?
Generally, using value members is preferred over pointer members. However, there are some exceptions, e.g. (this list is probably incomplete and only contains the reasons I could come up with immediately):

1. When the members are huge (use sizeof(MyObject) to find out): the difference in access cost often doesn't matter, but stack size may be a concern.
2. When the objects come from another source, e.g., when there are factory functions creating pointers, there is often no alternative way to store the objects.
3. If the dynamic type of the object isn't known, using a pointer is generally the only alternative. However, this shouldn't be as common as it often is.
4. When there are more complicated relationships than direct ownership, e.g., if an object is shared between different objects, using a pointer is the most reasonable approach.

In all of these cases you wouldn't use a pointer directly but rather a suitable smart pointer. For example, for 1. you might want to use a std::unique_ptr<MyObject>, and for 4. a std::shared_ptr<MyObject> is the best alternative. For 2. you might need to use one of these smart-pointer templates combined with a suitable deleter function to deal with the appropriate clean-up (e.g., for a FILE* obtained from fopen() you'd use fclose() as the deleter function; of course, this is a made-up example, as in C++ you would use I/O streams anyway).
In general, I normally initialize my objects entirely in the member initializer list, independent of how the members are represented exactly. However, yes, if your member objects require constructor arguments, these need to be passed from the member initializer list.
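For illustration, a small sketch (the types are made up) showing a value member and a smart-pointer member both initialized in the member initializer list:

#include <memory>
#include <string>

struct HugeState { char buffer[1 << 20]; };   // placeholder for a large member (case 1)

class Player {
public:
    explicit Player(const std::string& name)
        : name_(name),              // value member that needs a constructor argument
          state_(new HugeState())   // huge member kept on the heap behind unique_ptr
    {}

private:
    std::string name_;                  // small member stored by value
    std::unique_ptr<HugeState> state_;  // case 1: huge member behind a smart pointer
};

int main()
{
    Player p("Alice");
}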
First I would like to say that I completely agree with Dietmar Kühl's and Mats Petersson's answers. However, you also have to take into account that SDL is a pure C library where the majority of the API functions expect C pointers to structs that can own big chunks of data. So you should not allocate them on the stack (you should use the new operator to allocate them on the heap). Furthermore, because the C language does not have smart pointers, you need to use std::unique_ptr::get() to recover the C pointer that the std::unique_ptr owns before passing it to SDL API functions. This can be quite dangerous, because you have to make sure the std::unique_ptr does not go out of scope while SDL is using the C pointer (the same problem arises with std::shared_ptr). Otherwise you will get a segfault, because the std::unique_ptr will delete the C pointer while SDL is still using it.
Whenever you need to call pure C libraries inside a C++ program, I recommend the use of RAII. The main idea is that you create a small wrapper class that owns the C pointer and also calls the SDL API functions for you. Then you use the class destructor to delete all your C pointers.
Example:
class SDLAudioWrap {
public:
    SDLAudioWrap() {   // constructor
        // allocate SDL_AudioSpec here
    }
    ~SDLAudioWrap() {  // destructor
        // free SDL_AudioSpec here
    }

    // Here you wrap all the SDL API functions that involve SDL_AudioSpec
    // and that you will use in your program. It is quite simple.
    void do_some_stuff() {
        SDL_do_some_stuff(ptr);  // original C function:
                                 // SDL_do_some_stuff(SDL_AudioSpec* ptr)
    }

private:
    SDL_AudioSpec* ptr;
};
Now your program is exception safe and you don't have the possible issue of having smart pointers deleting your C pointer while SDL is using it.
UPDATE 1: I forgot to mention that because SDL is a C library, you will need a custom deleter in order to properly manage its C structs with smart pointers.
Concrete example: the GNU Scientific Library (GSL). Its integration routine requires the allocation of a struct called "gsl_integration_workspace". In this case, you can use the following code to ensure that your code is exception safe:
auto deleter = [](gsl_integration_workspace* ptr) {
    gsl_integration_workspace_free(ptr);
};
std::unique_ptr<gsl_integration_workspace, decltype(deleter)> ptr4(
    gsl_integration_workspace_alloc(2000), deleter);
This is another reason why I prefer wrapper classes.
As for initialization, it depends on what the options are, but yes, a common way is to use an initializer list.
The "don't use pointers unless you have to" is good advice in general. Of course, there are times when you have to - for example when an object is being returned by an API!
Also, using new will waste quite a bit of memory and CPU time if MyObject is small. Each object created with new has an overhead of around 16-48 bytes in a typical modern OS, so if your object is only a couple of simple types, then you may well have more overhead than actual storage. In a larger application, this can easily add up to a huge amount. And of course, a call to new or delete will most likely take some hundreds or thousands of cycles (above and beyond the time used in the constructor). So you end up with code that runs slower and takes more memory, and of course there's always some risk that you mess up and have memory leaks, causing your program to potentially crash due to being out of memory when it's not REALLY out of memory.
And as the famous Murphy's law states, these things just have to happen at the worst possible and most annoying times: when you have just done some really good work, or when you've just succeeded at a level in a game, or something. So avoiding those risks whenever possible is definitely a good idea.
Well, creating the object directly is a lot better than using pointers because it's less error-prone. Your snippet doesn't really show the difference, so here it is:
MyObj* foo;
foo = new MyObj;
foo->CanDoStuff(stuff);
//Later when foo is not needed
delete foo;
The other way is
MyObj foo;
foo.CanDoStuff(stuff);
Less memory management, but really it's up to you.
As the previous answers said, "don't use pointers unless you have to" is good advice for general programming, but there are many issues that could ultimately make you choose pointers. Furthermore, in your initial question you are not considering the option of using references. So you can face three types of member variables in a class:
MyObject obj;
MyObject* obj;
MyObject& obj;
I tend to always consider the reference option rather than the pointer one, because you don't need to worry about whether the pointer is NULL or not.
Also, as Dietmar Kühl pointed, a good reason for selecting pointers is:
If the dynamic type of the object isn't known, using a pointer is generally the only alternative. However, this shouldn't be as common as it often is.
I think this point is particularly important when you are working on a big project. If you have many of your own classes, spread across many source files, and you use them in many parts of your code, you will end up with long compilation times. If you use normal class instances (instead of pointers or references), a simple change in one of your class headers will force the recompilation of everything that includes the modified class. One possible solution for this issue is the concept of forward declaration, which makes use of pointers or references (you can find more info here).
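A small sketch of the idea (Engine and Game are made-up names): the header only forward-declares the class it refers to through a pointer, so its users do not need to be recompiled when that class changes.

// --- game.h: users of Game only ever see this part ---
class Engine;                 // forward declaration: enough for a pointer member

class Game {
public:
    Game();
    ~Game();
private:
    Engine* engine_;          // pointer member, so Engine's full definition
};                            // is not needed in this header

// --- game.cpp: the only place that needs Engine's full definition ---
class Engine { /* heavy header in real code */ };

Game::Game() : engine_(new Engine()) {}
Game::~Game() { delete engine_; }

int main()
{
    Game g;                   // editing Engine no longer recompiles Game's users
}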
I've been looking into this for the past few days, and so far I haven't really found anything convincing other than dogmatic arguments or appeals to tradition (i.e. "it's the C++ way!").
If I'm creating an array of objects, what is the compelling reason (other than ease) for using:
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=new my_object [MY_ARRAY_SIZE];
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i]=my_object(i);
over
#define MEMORY_ERROR -1
#define MY_ARRAY_SIZE 10
// ...
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
if (my_array==NULL) throw MEMORY_ERROR;
for (int i=0;i<MY_ARRAY_SIZE;++i) new (my_array+i) my_object (i);
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
delete [] my_array;
and the other you clean up with:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
free(my_array);
I'm out for a compelling reason. Appeals to the fact that it's C++ (not C) and therefore malloc and free shouldn't be used isn't -- as far as I can tell -- compelling as much as it is dogmatic. Is there something I'm missing that makes new [] superior to malloc?
I mean, as best I can tell, you can't even use new [] -- at all -- to make an array of things that don't have a default, parameterless constructor, whereas the malloc method can thusly be used.
I'm out for a compelling reason.
It depends on how you define "compelling". Many of the arguments you have thus far rejected are certainly compelling to most C++ programmers, as your suggestion is not the standard way to allocate naked arrays in C++.
The simple fact is this: yes, you absolutely can do things the way you describe. There is no reason that what you are describing will not function.
But then again, you can have virtual functions in C. You can implement classes and inheritance in plain C, if you put the time and effort into it. Those are entirely functional as well.
Therefore, what matters is not whether something can work. But more on what the costs are. It's much more error prone to implement inheritance and virtual functions in C than C++. There are multiple ways to implement it in C, which leads to incompatible implementations. Whereas, because they're first-class language features of C++, it's highly unlikely that someone would manually implement what the language offers. Thus, everyone's inheritance and virtual functions can cooperate with the rules of C++.
The same goes for this. So what are the gains and the losses from manual malloc/free array management?
I can't say that any of what I'm about to say constitutes a "compelling reason" for you. I rather doubt it will, since you seem to have made up your mind. But for the record:
Performance
You claim the following:
As far as I can tell the latter is much more efficient than the former (since you don't initialize memory to some non-random value/call default constructors unnecessarily), and the only difference really is the fact that one you clean up with:
This statement suggests that the efficiency gain is primarily in the construction of the objects in question. That is, which constructors are called. The statement presupposes that you don't want to call the default constructor; that you use a default constructor just to create the array, then use the real initialization function to put the actual data into the object.
Well... what if that's not what you want to do? What if what you want to do is create an empty array, one that is default constructed? In this case, this advantage disappears entirely.
Fragility
Let's assume that each object in the array needs to have a specialized constructor or something called on it, such that initializing the array requires this sort of thing. But consider your destruction code:
for (int i=0;i<MY_ARRAY_SIZE;++i) my_array[i].~my_object();
For a simple case, this is fine. You have a macro or const variable that says how many objects you have. And you loop over each element to destroy the data. That's great for a simple example.
Now consider a real application, not an example. How many different places will you be creating an array in? Dozens? Hundreds? Each and every one will need to have its own for loop for initializing the array. Each and every one will need to have its own for loop for destroying the array.
Mis-type this even once, and you can corrupt memory. Or not delete something. Or any number of other horrible things.
And here's an important question: for a given array, where do you keep the size? Do you know how many items you allocated for every array that you create? Each array will probably have its own way of knowing how many items it stores. So each destructor loop will need to fetch this data properly. If it gets it wrong... boom.
And then we have exception safety, which is a whole new can of worms. If one of the constructors throws an exception, the previously constructed objects need to be destructed. Your code doesn't do that; it's not exception-safe.
Now, consider the alternative:
delete[] my_array;
This can't fail. It will always destroy every element. It tracks the size of the array, and it's exception-safe. So it is guaranteed to work. It can't not work (as long as you allocated it with new[]).
Of course, you could say that you could wrap the array in an object. That makes sense. You might even template the object on the type of the array's elements. That way, all the destructor code is the same. The size is contained in the object. And maybe, just maybe, you realize that the user should have some control over the particular way the memory is allocated, so that it's not just malloc/free.
Congratulations: you just re-invented std::vector.
Which is why many C++ programmers don't even type new[] anymore.
Flexibility
Your code uses malloc/free. But let's say I'm doing some profiling. And I realize that malloc/free for certain frequently created types is just too expensive. I create a special memory manager for them. But how to hook all of the array allocations to them?
Well, I have to search the codebase for any location where you create/destroy arrays of these types. And then I have to change their memory allocators accordingly. And then I have to continuously watch the codebase so that someone else doesn't change those allocators back or introduce new array code that uses different allocators.
If I were instead using new[]/delete[], I could use operator overloading. I simply provide an overload for operators new[] and delete[] for those types. No code has to change. It's much more difficult for someone to circumvent these overloads; they have to actively try to. And so forth.
So I get greater flexibility and reasonable assurance that my allocators will be used where they should be used.
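For illustration, a sketch of such a hook (Particle and the logging stand-in for a real pool are made up): overloading operator new[] and operator delete[] for one type routes every array allocation of that type through your allocator without touching the call sites.

#include <cstdio>
#include <cstdlib>
#include <new>

struct Particle {
    float x, y, z;

    // Per-type array allocation hooks: every `new Particle[n]` / `delete[]`
    // in the whole codebase goes through these, with no call sites changed.
    static void* operator new[](std::size_t size)
    {
        std::printf("Particle pool: %zu bytes\n", size);  // stand-in for a real pool
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc();
    }
    static void operator delete[](void* p) noexcept
    {
        std::free(p);
    }
};

int main()
{
    Particle* ps = new Particle[64];   // routed through the overloads above
    delete[] ps;
}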
Readability
Consider this:
my_object *my_array = new my_object[10];
for (int i=0; i<MY_ARRAY_SIZE; ++i)
my_array[i]=my_object(i);
//... Do stuff with the array
delete [] my_array;
Compare it to this:
my_object *my_array = (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE);
if(my_array==NULL)
throw MEMORY_ERROR;
int i;
try
{
for(i=0; i<MY_ARRAY_SIZE; ++i)
new(my_array+i) my_object(i);
}
catch(...) //Exception safety.
{
for(; i>0; --i) //The i-th object was not successfully constructed
my_array[i-1].~my_object();
throw;
}
//... Do stuff with the array
for(int i=MY_ARRAY_SIZE-1; i>=0; --i)
my_array[i].~my_object();
free(my_array);
Objectively speaking, which one of these is easier to read and understand what's going on?
Just look at this statement: (my_object *)malloc(sizeof(my_object) * MY_ARRAY_SIZE). This is a very low level thing. You're not allocating an array of anything; you're allocating a hunk of memory. You have to manually compute the size of the hunk of memory to match the size of the object * the number of objects you want. It even features a cast.
By contrast, new my_object[10] tells the story. new is the C++ keyword for "create instances of types". my_object[10] is a 10 element array of my_object type. It's simple, obvious, and intuitive. There's no casting, no computing of byte sizes, nothing.
The malloc method requires learning how to use malloc idiomatically. The new method requires just understanding how new works. It's much less verbose and much more obvious what's going on.
Furthermore, after the malloc statement, you do not in fact have an array of objects. malloc simply returns a block of memory that you have told the C++ compiler to pretend is a pointer to an object (with a cast). It isn't an array of objects, because objects in C++ have lifetimes. And an object's lifetime does not begin until it is constructed. Nothing in that memory has had a constructor called on it yet, and therefore there are no living objects in it.
my_array at that point is not an array; it's just a block of memory. It doesn't become an array of my_objects until you construct them in the next step. This is incredibly unintuitive to a new programmer; it takes a seasoned C++ hand (one who probably learned from C) to know that those aren't live objects and should be treated with care. The pointer does not yet behave like a proper my_object*, because it doesn't point to any my_objects yet.
By contrast, you do have living objects in the new[] case. The objects have been constructed; they are live and fully-formed. You can use this pointer just like any other my_object*.
Fin
None of the above says that this mechanism isn't potentially useful in the right circumstances. But it's one thing to acknowledge the utility of something in certain circumstances. It's quite another to say that it should be the default way of doing things.
If you do not want your memory initialized by implicit constructor calls, and just need assured memory allocation for placement new, then it is perfectly fine to use malloc and free instead of new[] and delete[].
The compelling reasons for using new over malloc are that new provides implicit initialization through constructor calls, saving you additional memset or related function calls after a malloc, and that with new you do not need to check for NULL after every allocation; enclosing exception handlers will do the job, saving you the redundant error checking that malloc requires.
Neither of these compelling reasons applies to your usage.
Which one is more efficient can only be determined by profiling; there is nothing wrong with the approach you have now. On a side note, I don't see a compelling reason to use malloc over new[] either.
I would say neither.
The best way to do it would be:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
for (int i=0; i<MY_ARRAY_SIZE; ++i)
{
    my_array.push_back(my_object(i));
}
This is because internally vector is probably doing the placement new for you. It also manages all the other memory-management problems that you are not taking into account.
You've reimplemented new[]/delete[] here, and what you have written is pretty common in developing specialized allocators.
The overhead of calling simple constructors is small compared to the allocation itself. It's not necessarily 'much more efficient': it depends on the complexity of the default constructor and of operator=.
One nice thing that has not been mentioned yet is that the array's size is known to new[]/delete[]: delete[] just does the right thing and destructs all elements when asked. Dragging an additional variable (or three) around so you know exactly how to destroy the array is a pain. A dedicated collection type would be a fine alternative, however.
new[]/delete[] are preferable for convenience. They introduce little overhead, and could save you from a lot of silly errors. Are you compelled enough to take away this functionality and use a collection/container everywhere to support your custom construction? I've implemented this allocator -- the real mess is creating functors for all the construction variations you need in practice. At any rate, you often have a more exact execution at the expense of a program which is often more difficult to maintain than the idioms everybody knows.
IMHO they're both ugly; it's better to use vectors. Just make sure to allocate the space in advance for performance.
Either:
std::vector<my_object> my_array(MY_ARRAY_SIZE);
Or, if you want to initialize all entries with a particular value:
my_object basic;
std::vector<my_object> my_array(MY_ARRAY_SIZE, basic);
Or if you don't want to construct the objects but do want to reserve the space:
std::vector<my_object> my_array;
my_array.reserve(MY_ARRAY_SIZE);
Then, if you need to access it as a C-style pointer array (just make sure you don't add elements while keeping the old pointer around, but you couldn't do that with regular C-style arrays anyway):
my_object* carray = &my_array[0];
my_object* carray = &my_array.front(); // Or the C++ way
Access individual elements:
my_object value = my_array[i]; // The non-safe c-like faster way
my_object value = my_array.at(i); // With bounds checking, throws range exception
Typedef for pretty:
typedef std::vector<my_object> object_vect;
Pass them around functions with references:
void some_function(const object_vect& my_array);
EDIT:
In C++11 there is also std::array. The problem with it, though, is that its size is a template parameter, so you can't choose the size at runtime and you can't pass it into functions unless they expect that exact size (or are template functions themselves). But it can be useful for things like buffers.
std::array<int, 1024> my_array;
EDIT2:
Also in C++11 there is a new emplace_back as an alternative to push_back. This basically allows you to 'move' your object (or construct your object directly in the vector) and saves you a copy.
std::vector<SomeClass> v;
SomeClass bob {"Bob", "Ross", 10.34f};
v.emplace_back(bob);
v.emplace_back("Another", "One", 111.0f); // <- Note this doesn't work with initialization lists ☹
Oh well, I was thinking that given the number of answers there would be no reason to step in... but I guess I am drawn in like the others. Let's go:
Why your solution is broken
C++11 new facilities for handling raw memory
Simpler way to get this done
Advice
1. Why your solution is broken
First, the two snippets you presented are not equivalent. new[] just works; yours fails horribly in the presence of exceptions.
What new[] does under the covers is keep track of the number of objects that have been constructed, so that if an exception occurs during, say, the 3rd constructor call, it properly calls the destructors of the 2 already-constructed objects.
Your solution however fails horribly:
either you don't handle exceptions at all (and leak horribly)
or you just try to call the destructors on the whole array even though it's half built (likely crashing, but who knows with undefined behavior)
So the two are clearly not equivalent; yours is broken.
2. C++11 new facilities for handling raw memory
In C++11, the committee members realized how much we like fiddling with raw memory, and they introduced facilities to help us do so more efficiently and more safely.
Check cppreference's <memory> brief. This example shows off the new goodies (*):
#include <iostream>
#include <string>
#include <memory>
#include <algorithm>

int main()
{
    const std::string s[] = {"This", "is", "a", "test", "."};
    std::string* p = std::get_temporary_buffer<std::string>(5).first;

    std::copy(std::begin(s), std::end(s),
              std::raw_storage_iterator<std::string*, std::string>(p));

    for (std::string* i = p; i != p + 5; ++i) {
        std::cout << *i << '\n';
        i->~basic_string<char>();
    }

    std::return_temporary_buffer(p);
}
Note that get_temporary_buffer is no-throw; it returns the number of elements for which memory has actually been allocated as the second member of the pair (hence the .first to get the pointer).
(*) Or perhaps not so new as MooingDuck remarked.
3. Simpler way to get this done
As far as I am concerned, what you really seem to be asking for is a kind of typed memory pool, where some slots may not have been initialized.
Do you know about boost::optional ?
It is basically an area of raw memory that can fit one item of a given type (the template parameter) but is empty by default. It has an interface similar to a pointer and lets you query whether or not the memory is actually occupied. Finally, using the In-Place Factories you can safely use it without copying objects, if that is a concern.
Well, your use case really looks like a std::vector< boost::optional<T> > to me (or perhaps a deque?)
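Roughly what that looks like (a sketch assuming Boost is available; my_object and the sizes are made up, and std::optional can play the same role in C++17):

#include <boost/optional.hpp>
#include <vector>

struct my_object { int value; explicit my_object(int v) : value(v) {} };

int main()
{
    // Ten slots of storage; none of them holds a constructed my_object yet.
    std::vector<boost::optional<my_object> > slots(10);

    slots[3] = my_object(42);                    // construct only the elements you need
    bool filled = static_cast<bool>(slots[3]);   // query whether a slot is occupied
    (void)filled;
}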
4. Advice
Finally, in case you really want to do it on your own, whether for learning or because no STL container really suits you, I do suggest you wrap this up in an object to avoid the code sprawling all over the place.
Don't forget: Don't Repeat Yourself!
With an object (templated) you can capture the essence of your design in one single place, and then reuse it everywhere.
And of course, why not take advantage of the new C++11 facilities while doing so :) ?
You should use vectors.
Dogmatic or not, that is exactly what ALL the STL containers do to allocate and initialize.
They use an allocator that allocates uninitialized space, and then they initialize that space by constructing the contained objects in place.
If this (as many people like to say) "is not C++", then how can the standard library itself be implemented that way?
If you just don't want to use malloc/free, you can allocate the "bytes" with new char[]:
myobject* pvect = reinterpret_cast<myobject*>(new char[sizeof(myobject)*vectsize]);
for(int i=0; i<vectsize; ++i) new(pvect+i) myobject(params);
...
for(int i=vectsize-1; i>=0; --i) (pvect+i)->~myobject();
delete[] reinterpret_cast<char*>(pvect);
This lets you take advantage of the separation between initialization and allocation, while still taking advantage of the exception mechanism of the new allocation.
Note that, by putting the first and last lines into a myallocator<myobject> class and the second and second-to-last into a myvector<myobject> class, we have... just reimplemented std::vector<myobject, std::allocator<myobject> >.
What you have shown here is actually the way to go when using a memory allocator different from the system's general allocator: in that case you would allocate your memory using the allocator (alloc->malloc(sizeof(my_object))) and then use the placement new operator to initialize it. This has many advantages for efficient memory management and is quite common in the standard template library.
If you are writing a class that mimics the functionality of std::vector or needs control over memory allocation and object creation (insertion into the array, deletion, etc.), that's the way to go. In this case, it's not a question of "not calling the default constructor"; it becomes a question of being able to allocate raw memory, memmove the old objects there and then create new objects at the old addresses, of being able to use some form of realloc, and so on. Unquestionably, custom allocation plus placement new is way more flexible... I know, I'm a bit drunk, but std::vector is for sissies... As for efficiency, one can write their own version of std::vector that will be AT LEAST as fast (and most likely smaller, in terms of sizeof()) with the most-used 80% of std::vector functionality in, probably, less than 3 hours.
my_object * my_array=new my_object [10];
This will be an array with objects.
my_object * my_array=(my_object *)malloc(sizeof(my_object)*MY_ARRAY_SIZE);
This will be memory the size of an array of your objects, but the objects in it may be "broken". If your class has virtual functions, for instance, then you won't be able to call them. Note that it's not just your member data that may be inconsistent; the entire object is actually "broken" (for lack of a better word).
I'm not saying it's wrong to do the second one, just as long as you know this.
Just out of curiosity: why did C++ choose a = new A instead of a = A.new as the way to instantiate an object? Doesn't the latter seem more object-oriented?
Just out of curiosity: why did C++ choose a = new A instead of a = A.new as the way to instantiate an object? Doesn't the latter seem more object-oriented?
Does it?
That depends on how you define "object-oriented".
If you define it the way Java did, as "everything must have syntax of the form X.Y, where X is an object and Y is whatever you want to do with that object", then yes, you're right: this isn't object-oriented, and Java is the pinnacle of OOP.
But luckily, there are also a few people who feel that "object-oriented" should relate to the behavior of your objects, rather than to which syntax is used on them. Essentially, it boils down to what the Wikipedia page says:
Object-oriented programming is a programming paradigm that uses "objects" – data structures consisting of datafields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance
Note that it says nothing about the syntax. It doesn't say "and you must call every function by specifying an object name followed by a dot followed by the function name".
And given that definition, foo(x) is exactly as object-oriented as x.foo().
All that matters is that x is an object, that is, it consists of data fields and a set of methods by which it can be manipulated. In this case, foo is obviously one of those methods, regardless of where it is defined and regardless of which syntax is used to call it.
C++ gurus realized this long ago, and have written articles such as this one.
An object's interface is not just the set of member methods (which can be called with the dot syntax). It is the set of functions which can manipulate the object. Whether they are members or friends doesn't really matter. It is object-oriented as long as the object is able to stay consistent, that is, it is able to prevent arbitrary functions from messing with it.
So, why would A.new be more object-oriented? How would this form give you "better" objects?
One of the key goals behind OOP was to allow more reusable code.
If new had been a member of each and every class, that would mean every class had to define its own new operation. Whereas when it is a non-member, every class can reuse the same one. Since the functionality is the same (allocate memory, call constructor), why not put it out in the open where all classes can reuse it? (Preemptive nitpick: of course, the same new implementation could have been reused in this case as well, by inheriting from some common base class, or just by a bit of compiler magic. But ultimately, why bother, when we can just put the mechanism outside the class in the first place?)
The . in C++ is only used for member access so the right hand side of the dot is always an object and not a type. If anything it would be more logical to do A::new() than A.new().
In any case, dynamic object allocation is special as the compiler allocates memory and constructs an object in two steps and adds code to deal with exceptions in either step ensuring that memory is never leaked. Making it look like a member function call rather than a special operation could be considered as obscuring the special nature of the operation.
I think the biggest confusion here is that new has two meanings: there's the built-in new-expression (which combines memory allocation and object creation) and then there's the overloadable operator new (which deals only with memory allocation). The first, as far as I can see, is something whose behavior you cannot change, and hence it wouldn't make sense to masquerade it as a member function. (Or it would have to be - or look like - a member function that no class can implement / override!!)
This would also lead to another inconsistency:
int* p = int.new;
C++ is not a pure OOP language in that not everything is an object.
C++ also allows the use of free functions (which is encouraged by some authors and by the example set by the SC++L design), and a C++ programmer should be comfortable with that. Of course, the new-expression isn't a function, but I don't see how syntax vaguely reminiscent of a free-function call can put anybody off in a language where free-function calls are very common.
Please read the code (it works), and then you'll have different ideas:
CObject *p = (CObject*)malloc(sizeof *p);
...
p = new(p) CObject;
p->DoSomething();
...
A.new would be a static function of A, while a = new A allocates memory and then calls the object's constructor.
Actually, you can instantiate an object with something like A.new, if you add the proper method:
class A {
public:
    static A* instance()
    { return new A(); }
};
A *a = A::instance();
But that's not the reason. Syntax is not the reason either: you can distinguish the :: and . "operations" by examining what is on their right-hand side.
I think the reason is memory management. In C++, unlike in many other object-oriented languages, memory management is done by the user. There's no default garbage collector, although standard and non-standard libraries offer various techniques to manage memory. Therefore the programmer must see the new operator to understand that memory allocation is involved!
Unless it has been overloaded, the new operator first allocates raw memory and then calls the object's constructor, which builds the object within the allocated memory. Since a "raw" low-level operation is involved here, it should be a separate language operator and not just one of the class's methods.
I reckon there is no reason. It's a = new A just because it was first drafted that way. In hindsight, it should probably be a = A.new();
Why should each class have its own separate new?
I don't think it's needed at all, because the objective of new is to allocate the appropriate memory and construct the object by calling the constructor. Thus the behaviour of new is the same irrespective of the class, so why not make it reusable?
You can override new when you want to do memory management yourself (i.e., by allocating a memory pool once and returning memory from it on demand).
I learned in college that you always have to free your unused objects, but not how you actually do it, for example by structuring your code right, and so on.
Are there any general rules on how to handle pointers in C++?
I'm currently not allowed to use Boost. I have to stick to pure C++, because the framework I'm using forbids any use of generics.
I have worked with the embedded Symbian OS, which had an excellent system in place for this, based entirely on developer conventions.
Only one object will ever own a pointer. By default this is the creator.
Ownership can be passed on. To indicate passing of ownership, the object is passed as a pointer in the method signature (e.g. void Foo(Bar *zonk);).
The owner will decide when to delete the object.
To pass an object to a method just for use, the object is passed as a reference in the method signature (e.g. void Foo(Bar &zonk);).
Non-owner classes may store references (never pointers) to objects they are given only when they can be certain that the owner will not destroy it during use.
Basically, if a class simply uses something, it uses a reference. If a class owns something, it uses a pointer.
This worked beautifully and was a pleasure to use. Memory issues were very rare.
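For illustration, a tiny sketch of those conventions in ordinary C++ (all names are invented): users receive references, owners hold and delete the pointer, and a pointer parameter signals a transfer of ownership.

class Bar { /* ... */ };

class Owner {
public:
    Owner() : bar_(new Bar()) {}     // the creator owns the object
    ~Owner() { delete bar_; }        // and decides when it dies

    Bar& bar() { return *bar_; }     // hand out a reference for use only

private:
    Bar* bar_;
};

void Use(Bar& zonk) { /* uses zonk, never deletes it */ }
void TakeOwnership(Bar* zonk) { delete zonk; }  // pointer parameter signals transfer

int main()
{
    Owner owner;
    Use(owner.bar());          // borrowing: pass a reference
    TakeOwnership(new Bar());  // transferring: pass a pointer
}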
Rules:
1. Wherever possible, use a smart pointer. Boost has some good ones.
2. If you can't use a smart pointer, null out your pointer after deleting it.
3. Never work anywhere that won't let you use rule 1.
4. If someone disallows rule 1, remember that if you grab someone else's code, change the variable names and delete the copyright notices, no-one will ever notice. Unless it's a school project, where they actually check for that kind of shenanigans with quite sophisticated tools. See also, this question.
I would add another rule here:
Don't new/delete an object when an automatic object will do just fine.
We have found that programmers who are new to C++, or programmers coming over from languages like Java, seem to learn about new and then obsessively use it whenever they want to create any object, regardless of the context. This is especially pernicious when an object is created locally within a function purely to do something useful. Using new in this way can be detrimental to performance and can make it all too easy to introduce silly memory leaks when the corresponding delete is forgotten. Yes, smart pointers can help with the latter, but they won't solve the performance issues (assuming that new/delete or an equivalent is used behind the scenes). Interestingly (well, maybe), we have found that delete often tends to be more expensive than new when using Visual C++.
Some of this confusion also comes from the fact that functions they call might take pointers, or even smart pointers, as arguments (when references would perhaps be better/clearer). This makes them think that they need to "create" a pointer (a lot of people seem to think that this is what new does) to be able to pass a pointer to a function. Clearly, this requires some rules about how APIs are written to make calling conventions as unambiguous as possible, which are reinforced with clear comments supplied with the function prototype.
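To illustrate the rule, a small sketch (the functions and names are made up) contrasting a needless heap allocation with a plain automatic object:

#include <string>

std::string MakeGreetingBad(const std::string& name)
{
    // Anti-pattern: heap allocation for a purely local helper object.
    std::string* s = new std::string("Hello, " + name);
    std::string result = *s;
    delete s;                           // easy to forget, and slower than the stack
    return result;
}

std::string MakeGreetingGood(const std::string& name)
{
    std::string s = "Hello, " + name;   // automatic object: no new, no delete
    return s;
}

int main()
{
    MakeGreetingBad("Ada");
    MakeGreetingGood("Ada");
}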
In the general case (resource management, where resource is not necessarily memory), you need to be familiar with the RAII pattern. This is one of the most important pieces of information for C++ developers.
In general, avoid allocating from the heap unless you have to. If you have to, use reference counting for objects that are long-lived and need to be shared between diverse parts of your code.
Sometimes you need to allocate objects dynamically, but they will only be used within a certain span of time. For example, in a previous project I needed to create a complex in-memory representation of a database schema -- basically a complex cyclic graph of objects. However, the graph was only needed for the duration of a database connection, after which all the nodes could be freed in one shot. In this kind of scenario, a good pattern to use is something I call the "local GC idiom." I'm not sure if it has an "official" name, as it's something I've only seen in my own code, and in Cocoa (see NSAutoreleasePool in Apple's Cocoa reference).
In a nutshell, you create a "collector" object that keeps pointers to the temporary objects that you allocate using new. It is usually tied to some scope in your program, either a static scope (e.g. -- as a stack-allocated object that implements the RAII idiom) or a dynamic one (e.g. -- tied to the lifetime of a database connection, as in my previous project). When the "collector" object is freed, its destructor frees all of the objects that it points to.
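A minimal sketch of such a collector, assuming templates are available (all names are invented); when it goes out of scope, every tracked object is freed in one shot, mirroring the database-connection scenario above:

#include <cstddef>
#include <vector>

// Collects heap objects of one type and frees them all in its destructor.
template <typename T>
class LocalCollector {
public:
    ~LocalCollector()
    {
        for (std::size_t i = 0; i < owned_.size(); ++i)
            delete owned_[i];
    }

    T* track(T* p)            // register a new'ed object with the collector
    {
        owned_.push_back(p);
        return p;
    }

private:
    std::vector<T*> owned_;
};

struct Node { Node* next; };

int main()
{
    LocalCollector<Node> gc;               // tied to this scope via RAII
    Node* a = gc.track(new Node());
    Node* b = gc.track(new Node());
    a->next = b;                            // build an arbitrary graph
    b->next = a;                            // cycles are fine
}                                           // everything freed here in one shot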
Also, like DrPizza I think the restriction to not use templates is too harsh. However, having done a lot of development on ancient versions of Solaris, AIX, and HP-UX (just recently - yes, these platforms are still alive in the Fortune 50), I can tell you that if you really care about portability, you should use templates as little as possible. Using them for containers and smart pointers ought to be ok, though (it worked for me). Without templates the technique I described is more painful to implement. It would require that all objects managed by the "collector" derive from a common base class.
G'day,
I'd suggest reading the relevant sections of "Effective C++" by Scott Meyers. Easy to read and he covers some interesting gotchas to trap the unwary.
I'm also intrigued by the lack of templates. So no STL or Boost. Wow.
BTW, getting people to agree on conventions is an excellent idea, as is getting everyone to agree on conventions for OOD. Unfortunately, the latest edition of Effective C++ doesn't have the excellent chapter about OOD conventions that the first edition had, which is a pity; e.g., conventions such as "public virtual inheritance always models an 'is-a' relationship".
Rob
When you have to manage memory manually, make sure you call delete in the same scope/function/class/module, whichever applies first, e.g.:
Let the caller of a function allocate the memory that is filled by it; do not return new'ed pointers.
Always call delete in the same exe/dll in which you called new, because otherwise you may have problems with heap corruption (different, incompatible runtime libraries).
You could derive everything from some base class that implements smart-pointer-like functionality (using ref()/unref() methods and a counter).
All the points highlighted by @Timbo are important when designing that base class.
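A minimal sketch of such a base class (all names invented; this is intrusive reference counting, not a drop-in smart pointer):

#include <cassert>

// Intrusive reference-counted base class: no templates needed.
class RefCounted {
public:
    RefCounted() : count_(0) {}

    void ref() { ++count_; }
    void unref()
    {
        assert(count_ > 0);
        if (--count_ == 0)
            delete this;           // the object frees itself when the last user lets go
    }

protected:
    virtual ~RefCounted() {}       // virtual: deleted through the base class

private:
    int count_;
};

class Texture : public RefCounted { /* ... */ };

int main()
{
    Texture* t = new Texture();
    t->ref();      // first user
    t->ref();      // second user
    t->unref();
    t->unref();    // count hits zero, Texture deletes itself
}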