Allocating memory without initializing it in C++

I'm getting acquainted with C++, and I'm having a problem with memory management. In C, whenever I'd want to reserve memory for any number of elements, regardless of type, I would just call malloc() and then initialize by hand (through a loop), to whichever value I wanted. With C++'s new, everything is automagically initialized.
Problem is, I've got a BattlePoint class which goes a little something like this:
class BattlePoint {
public:
BattlePoint(int x, int y) : x(x), y(y) { };
bool operator==(const BattlePoint &right);
virtual ~BattlePoint();
private:
int x, y;
};
As you can see, it takes x and y values through the constructor's initializer list and sets its own x and y from them. The problem is that this constructor will be called from a function which allocates an array of them:
BattleShip::BattleShip(BattlePoint start, enum shipTypeSize size, enum shipOrientation orientation) : size(size), orientation(orientation) {
points = new BattlePoint[size]; // Here be doubts.
}
So, I need my BattleShip's points member to hold an array of BattlePoints, each one with different initialization values (such as 0,1; 0,2; 0,3, etcetera).
Question is: how could I allocate my memory uninitialized?
Julian,
P.S.: I haven't done any testing regarding the way new works; I simply read Wikipedia's article on it, which says:
In the C++ programming language, as well as in many C++-based
languages, new is a language construct that dynamically allocates
memory on the heap and initialises the memory using the
constructor. Except for a form called the "placement new", new
attempts to allocate enough memory on the heap for the new data. If
successful, it initialises the memory and returns the address to the
newly allocated and initialised memory. However if new cannot allocate
memory on the heap it will throw an exception of type std::bad_alloc.
This removes the need to explicitly check the result of an allocation.
A call to delete, which calls the destructor and returns the memory
allocated by new back to the heap, must be made for every call to new
to avoid a memory leak.
Placement new sounds like it should be the solution, yet the article makes no mention of how to do it.
P.S. 2: I know this can be done through stdlib's vector class, but I'm avoiding it on purpose.

You need to use a std::vector. In this case you can push_back whatever you want, e.g.
std::vector<BattlePoint> x;
x.push_back(BattlePoint(1, 2));
If you ever find yourself using new[], delete, or delete[], refactor your program immediately to remove them. They are hideously unsafe in virtually every way imaginable. Instead, use resource-managing classes, such as std::unique_ptr, std::vector, and std::shared_ptr.
Regular new can be useful in some situations involving unique_ptr, but otherwise avoid it. In addition, placement new is usually not worth it. Of course, if you're writing a resource-managing class yourself, then you may have to use these as underlying primitives, but such cases are few and far between.
Edit: My mistake, I didn't see the very last line of your question. Addressing it:
P.S. 2: I know this can be done through stdlib's vector class, but I'm
avoiding it on purpose.
If you have some campaign against the Standard Library, then roll your own vector replacement. But do not go without a vector class. There's a reason that it must be provided by all conforming compilers.
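For illustration, here is a minimal sketch of what the constructor from the question could look like with a std::vector member (the points member type, the loop, and the per-point values are assumptions made for the sake of the example):
// assumes the class declares: std::vector<BattlePoint> points;
BattleShip::BattleShip(BattlePoint start, enum shipTypeSize size, enum shipOrientation orientation)
    : size(size), orientation(orientation)
{
    points.reserve(size);                     // one allocation, no elements constructed yet
    for (int i = 0; i < size; ++i)
        points.push_back(BattlePoint(0, i));  // construct each point with its own values
}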

points = new BattlePoint[size]; // Here be doubts.
P.S. 2: I know this can be done through stdlib's vector class, but I'm avoiding it on purpose.
Most certainly there will be doubts! Use std::vector. Why wouldn't you? There is no reason not to use std::vector, especially if it solves your problem.
std::vector<BattlePoint> bpoints;
bpoints.reserve(size); // there, only alloc'd memory, not initialized it.
bpoints.push_back(some_point); // still need to use push_back to initialize it
I'm sure the question will come - how does std::vector only alloc the memory?!
operator new is the answer. It's the operator that gets called for memory allocation when you use new. new is for construction and initialization, while operator new is for allocation (that's why you can overload it).
BattlePoint* bpoints = static_cast<BattlePoint*>(::operator new(size * sizeof(BattlePoint))); // happens in reserve
new (&bpoints[index]) BattlePoint(some_x, some_y); // happens in push_back
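If you really want to do this by hand, roughly what std::vector does under the hood looks like the following sketch (the element count and the coordinate values are placeholders; every element constructed with placement new must later be destroyed explicitly):
#include <cstddef> // std::size_t
#include <new>     // ::operator new, ::operator delete, placement new

// allocate raw, uninitialized storage for size elements
BattlePoint* bpoints = static_cast<BattlePoint*>(::operator new(size * sizeof(BattlePoint)));

// construct each element in place with its own arguments
for (std::size_t index = 0; index < size; ++index)
    new (&bpoints[index]) BattlePoint(0, static_cast<int>(index));

// ... use the array ...

// destroy in reverse order, then release the raw storage
for (std::size_t index = size; index > 0; --index)
    bpoints[index - 1].~BattlePoint();
::operator delete(bpoints);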

The comp.lang.c++ FAQ has useful things to say on the matter, including attempting to dissuade you from using placement new - but if you really insist, it does have a useful section on placement new and all its pitfalls.

To echo the above answers, I would most certainly point you towards std::vector as it is the best possible solution. Managing your own dynamic arrays in C++ is almost never a good idea, and is almost never necessary.
However, to answer the direct question -- in this situation you can create a default constructor and some mutators to get the desired effect:
class BattlePoint {
public:
// default constructor, default initialize to 0,0
BattlePoint() : x(0), y(0) { };
BattlePoint(int x, int y) : x(x), y(y) { };
bool operator==(const BattlePoint &right);
virtual ~BattlePoint();
// mutator functions allow you to modify the classes member values
void set_x(int x_) {x = x_;}
void set_y(int y_) {y = y_;}
private:
int x, y;
};
Then you can initialize this as you are used to in C:
BattlePoint* points = new BattlePoint[100];
for(int i = 0; i < 100; ++i)
{
    points[i].set_x(i);
    points[i].set_y(i * 2);
}
If you're bothered by basically making the BattlePoint class publicly mutable, you can keep the mutators private and introduce a friend function specifically for initializing the values. This is a slightly more involved concept, so I'll forgo further explanation on this for now, unless it is needed.
Since you asked :)
Create your BattlePoint class again with a default constructor and mutators, however this time leave the mutators private, and declare a friend function to use them:
class BattlePoint {
public:
// default constructor, default initialize to 0,0
BattlePoint() : x(0), y(0) { };
BattlePoint(int x, int y) : x(x), y(y) { };
bool operator==(const BattlePoint &right);
virtual ~BattlePoint();
private:
// mutator functions allow you to modify the classes member values
void set_x(int x_) {x = x_;}
void set_y(int y_) {y = y_;}
int x, y;
friend void do_initialize_x_y(BattlePoint*, int, int);
};
Create a header file that will contain a local function for creating the array of BattlePoint objects. This function will be available to anyone that includes the header, but if named properly then "everyone" should know not to use it.
// BattlePoint_Initialize.h
BattlePoint* create_battle_point_array(size_t count, int* x, int* y);
This function gets defined in the implementation file, along with our friend function that we will "hide" from the outside world:
// BattlePoint_Initialize.cpp
#include "BattlePoint_Initialize.h"
namespace
{
// by putting this function in an anonymous namespace it is only available
// to this compilation unit. This function can only be called from within
// this particular file.
//
// technically, the symbols are still exported, but they are mangled badly
// so someone could call this, but they would have to really try to do it
// not something that could be done "by accident"
void do_initialize_x_y(BattlePoint* bp, int x, int y)
{
bp->set_x(x);
bp->set_y(y);
}
}
// caution, relies on the assumption that count indicates the number of
// BattlePoint objects to be created, as well as the number of valid entries
// in the x and y arrays
BattlePoint* create_battle_point_array(size_t count, int* x, int* y)
{
BattlePoint* bp_array = new BattlePoint[count];
for(size_t curr = 0; curr < count; ++curr)
{
do_initialize_x_y(&bp_array[curr], x[curr], y[curr]);
}
return bp_array;
}
So there you have it. A very convoluted way to meet your basic requirements.
While create_battle_point_array() could in theory be called anywhere, it is not actually capable of modifying an already created BattlePoint object. The do_initialize_x_y() function, by virtue of being hidden in an anonymous namespace tucked away behind the initialization code, cannot easily be called from anywhere else in your program. In effect, once a BattlePoint object has been created (and initialized in two steps), it cannot be modified further.
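For completeness, a hedged sketch of how a caller might use it (the coordinate arrays are made up for illustration):
int xs[] = { 0, 0, 0 };
int ys[] = { 1, 2, 3 };
BattlePoint* ship_points = create_battle_point_array(3, xs, ys);
// ... use ship_points ...
delete[] ship_points; // still the caller's job, which is one more reason to prefer std::vector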

Related

vector and primitive type initialisation

I've learned that if you declare for example an int in the global scope,
int x; //defaults to 0;
and in the local scope,
void f() {
int x; //undefined
}
However if we use a vector either in the global or local scope:
vector<int> v(3); //initialise v to {0,0,0} using int's default constructor.
We can default initialise int like vector's elements in the local scope by doing this:
int x = int(); //defaults to 0
I think if we use int's default constructor it's allocated in the heap.
Why can't a primitive type be default initialised in the local scope like T x;? Or
In the local scope, why do vectors (I don't know about other containers) use the element's default constructor instead of leaving the elements uninitialised, just like an int declaration?
What are the benefits of the current approach for these two types? Why are they initialised in different ways? Is this about performance?
It's like this for "performance" reasons, because the C++ folks wanted the C folks back in the 1980's to not have any reason to complain about "paying for what we don't need." That's one of the tenets of C++, to not pay (run-time) costs for things you don't use. So the old-style POD types are uninitialized by default, though classes and structs with constructors always have one of those constructors called.
If I were specifying it today, I'd say that int x; in local scope would be default-initialized (to 0), and if you wanted to avoid that you could say something like int x = std::noinit;. It's far too late for this now, but I have actually done it in some class types when performance mattered a lot:
class SuperFast
{
    struct no_init_t {};
public:
    static no_init_t no_init; // needs one out-of-class definition: SuperFast::no_init_t SuperFast::no_init;
    SuperFast() : x(0), y(0) {}
    SuperFast(no_init_t) {}   // deliberately leaves x and y uninitialized
private:
    int x, y;
};
This way, default construction will give a valid object, but if you have a serious reason to need to avoid this, you can. You might use this technique if you know you will soon overwrite a whole bunch of these objects anyway--no need to default-construct them:
SuperFast sf(SuperFast::no_init); // look ma, I saved two nanoseconds!

C++: delete object or delete members?

I would like to ask about the functional difference; maybe ask for an example scenario, where I should choose from one of the options in the main method below:
#include <iostream>
using namespace std;
class A{
private:
int x, y;
public:
A(int, int);
};
class B{
private:
int *x, *y;
public:
B(int, int);
~B();
};
A:: A(int x, int y){
this->x = x; this->y = y;
}
B:: B(int x, int y){
this->x = new int(x);
this->y = new int(y);
}
B:: ~B(){
delete this->x;
delete this->y;
}
int main(){
int x = 0, y = 0;
A* objA = new A(x, y); // line 1
B objB1(x, y); // line 2
B* objB2 = new B(x, y); // line 3
delete objA;
delete objB2;
return 0;
}
I understand that the second declaration in the main method B objB1(x, y) is obviously different from the other 2, but can someone please explain the functional difference between the constructors in lines labelled 1 and 3? Is there any bad practice in either of the declarations?
Thanks
NAX
UPDATE
First of all, I appreciate all of the answers that everyone is giving, I am really getting some good insight. I have edited the code above as a few of the answers pointed out that I haven't deleted the objects that I used, which is fair and all, but that was not the purpose of my question. I just wanted to gain some insight on the functional difference between the different approaches to creating the classes. And thanks to all that targeted that point. I am reading through the answers still.
"The functional difference..."
On Line 1 you allocate an object of type A on the heap through use of the new keyword. On the heap, space is allocated for the object to which objA points which means 2 ints are created on the heap, contiguously, in line with your ivar definitions.
On line 2 you create a new object of class B on the stack. It will have its destructor called automatically when it goes out of scope. However, B contains two int pointers (not ints), and the ints they point to are allocated on the heap, as you have specified in B's constructor. When objB1 goes out of scope, the destructor correctly deletes those two heap-allocated ints.
On line 3 you create a new object of class B on the heap. Therefore, space is allocated on the heap for two int pointers (not ints), and then those ints, in turn, are allocated elsewhere on the heap through use of the new keyword. When you delete objB2, the destructor is called and therefore the two "elsewhere integers" are deallocated, and then your original object at objB2 is also deallocated from the heap.
In line with WhozCraig's comment, class A is most definitely the preferred class definition of the two you have shown in your example.
EDIT (Comment response):
WhozCraig's link basically strongly discourages use of raw pointers. In light of this actually, yes, I agree, Line 2 would be preferred purely on the basis of memory management as B technically manages its own memory (though it still uses raw pointers).
However, I generally dislike (excessive) use of new inside classes as new is much slower than the equivalent stack (or non-new) allocation. I therefore prefer to new the entire class rather than the individual components as it only requires a single new call and all ivars are allocated in the heap anyway. (Better yet, placement new, but that is well beyond the scope of this question).
So in summing up:
Line 2 (class B) would be preferred on the basis of memory management, however even better than that would be:
A objAOnStack(x, y); // Avoids heap altogether
Line 1 is equal-best provided you wrap it in a smart pointer such as std::shared_ptr or std::unique_ptr or something similar (see the sketch below).
Line 3 should not really be considered without a smart pointer wrapper (and it's generally better for performance to shy away from nested new anyway).
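A minimal sketch of that wrapping (assuming C++11; std::make_unique would additionally need C++14):
#include <memory>

auto objA = std::make_shared<A>(x, y);  // line 1, but cleaned up automatically
std::unique_ptr<B> objB2(new B(x, y));  // line 3, likewise; no explicit delete required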
I usually prefer A-style objects unless there are compelling reasons to use the B pattern, merely because A-style objects are more efficient.
For example, when A objects are allocated, memory for 2 ints (probably 8 bytes on your machine) will be reserved and then initialized by the arguments passed to the constructor. When B objects are allocated, memory for 2 pointers to int will be reserved (also probably 8 bytes on your machine), but then when the B object is initialized in your constructor, each value that is passed will be copied to a newly created int (on the heap), thus using up 8 more bytes of memory total. So in this simple example, your B objects are taking up twice the memory as the A objects.
Furthermore, each time you want to access the values referred to by the x and y your B objects, it will require dereferencing the pointers, which adds a level of indirection and inefficiency (and, in many use cases, should also probably involve a NULL-check for safety, which adds a branch). And of course, there's the extra heap "cleanup" that has to be done whenever B objects are destroyed. (Which can gradually lead to heap fragmentation if lots of them are created and destroyed very frequently.)
Generally speaking, the way of class A is much preferable to class B. Unless you have a good reason, you should stick with designs similar to A. In simple cases and for simple data structures like these, the way class B is implemented can even be considered bad practice.
There are several reasons for this, and here they are in no particular order:
Class B does two more dynamic memory allocations than A. Allocating memory at runtime can be slow, and allocating and freeing a lot of blocks with various sizes can lead to what's called "memory fragmentation" (which is a bad thing).
Instances of class B are larger than instances of class A. Instances of A are the size of two integers, which are commonly 32 bits each, making the whole instance 8 bytes. Instances of B require two pointers (which can be 32 or 64 bits each, depending on whether your code is compiled for a 32- or 64-bit architecture) plus two actual integers (4 bytes each) plus some metadata that the heap allocator stores for each allocation, which might be anywhere from 0 to 32 bytes or more per allocation. So each instance of B is 8, 16 or (much) more bytes larger than each instance of A, while basically doing the same job.
Accessing the fields (x and y) inside instances of B is slower than accessing the fields inside instances of A. When accessing members of an instance of B, all you have is the location of the pointers. So the CPU fetches the pointers, and only then does it know the addresses of the actual integers that hold the values of x and y, at which point it can read or write their values.
In instances of A, you are sure that x and y are stored in consecutive memory addresses. This is the best case scenario to gain the most from CPU caches. In an instance of B, the addresses where the actual x and y are located can be far from each other and you'll get less benefit from the CPU cache.
In A, the lifetime of the members is exactly the same as the lifetime of the object containing them. For B, there is no such inherent guarantee. This is not an issue in this simple example, but in more complex cases, especially in the presence of exceptions, this point becomes a clear and present danger. Programming errors (e.g. forgetting to delete one member in some rarely-executed path of the destructor) are also a problem in the case of B.
Note that sometimes, decoupling the lifetime of objects from the member data are what you actually want, but this is not generally considered good design. Look up RAII pattern in C++ if you want to know more.
By the way, as is pointed out in other comments, you must implement (or declare private) a copy constructor and an assignment operator for class B.
For the same reasons outlined above, you should try to avoid new'ing your data if you can, which means that among the lines labeled 1, 2 and 3, line 2 is actually the better way of making instances.
You should define a copy constructor and an assignment operator for your class B. Otherwise you will have serious problems with those pointers. Apart from this, there is no functional difference between lines 1 and 3. The only difference is in the implementation.
Having said this, there is no reason for using pointers inside B. If you need a fixed number of integers, use plain integers or plain arrays. If you need a variable number of integers, use std::vector. And if you really need to allocate dynamic memory, be very careful and consider using smart pointers.
If your class B contained only one [pointer to] integer, it could be something like:
class B
{
private:
int * x;
public:
B (int i) { x = new int(i); }
B (const B & b) { x = new int(*b.x); }
~B() { delete x; }
B & operator= (const B & b) // Corner cases:
{ //
int * p = x; // 1) b and *this might
x = new int(*b.x); // be the same object
delete p; //
return *this; // 2) new might throw
} // an exception
};
This code will do "The Right Thing (TM)" even in the corner cases commented.
Another option is:
#include <utility> // std::swap
class B
{
private:
int * x;
public:
B (int i) { x = new int(i); }
B (const B & b) { x = new int(*b.x); }
~B() { delete x; }
void swap (B & b)
{
using std::swap;
swap (x, b.x);
}
B & operator= (const B & b) // Corner cases:
{ //
B tmp(b); // 1) b and *this might
swap (tmp); // be the same object
return *this; //
} // 2) new might throw
}; // an exception
Though, if there are two pointers, like in your example, you have to call new twice. If the second new fails by throwing an exception, you will want to automatically delete the memory reserved by the first new...
#include <utility> // std::swap
class B
{
private:
int * x;
int * y;
void init (int i, int j)
{
x = new int(i);
try
{
y = new int(j);
}
catch (...) // first new was OK but
{ // second failed, so undo
delete x; // first allocation and
throw; // continue the exception
}
}
public:
B (int i, int j) { init (i, j); }
B (const B & b) { init (*b.x, *b.y); }
~B() { delete x; delete y; }
void swap (B & b)
{
using std::swap;
swap (x, b.x);
swap (y, b.y);
}
B & operator= (const B & b) // Corner cases:
{ //
B tmp(b); // 1) b and *this might
swap (tmp); // be the same object
return *this; //
} // 2) new might throw
}; // an exception
If you had three or four [pointers to] ints... the code would get even uglier! That's where smart pointers and RAII (Resource Acquisition Is Initialization) really help:
#include <utility> // std::swap
#include <memory> // std::unique_ptr (or std::auto_ptr)
class B
{
private:
std::auto_ptr<int> x; // If your compiler supports
std::auto_ptr<int> y; // C++11, use unique_ptr instead
public:
B (int i, int j) : x(new int(i)), // If 2nd new
y(new int(j)) {} // fails, 1st is
// undone
B (const B & b) : x(new int(*b.x)),
y(new int(*b.y)) {}
// No destructor is required
void swap (B & b)
{
using std::swap;
swap (x, b.x);
swap (y, b.y);
}
B & operator= (const B & b) // Corner cases:
{ //
B tmp(b); // 1) b and *this might
swap (tmp); // be the same object
return *this; //
} // 2) new might throw
}; // an exception
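And if the extra indirection is not actually needed, the simplest option, as suggested above, is to hold plain integers; a minimal sketch:
class B
{
private:
    int x;
    int y;
public:
    B (int i, int j) : x(i), y(j) {}
    // the compiler-generated copy constructor, assignment operator
    // and destructor are all correct for plain value members
};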
Line 1 creates objA on the heap; it would leak memory if objA were never deleted, and when it is deleted, members x and y go away with it. Class A also works correctly with the compiler-generated copy constructor and assignment operator, so there will be no issues with calls like these:
func1(*objA);
A objB = *objA;
If you do the same with objB2, you will get a memory access violation, because the same memory pointed to by x and y will be deleted twice. You need to create a private copy constructor and assignment operator (or define correct ones) to prevent that.
Regarding scenarios:
Lines 1 and 3 are good for returning the object to a calling function; the calling function then takes responsibility for deleting it. In class B, x and y could be pointers to a base class, so they can be polymorphic.
Line 2 is good for passing the object to a called function further down the call stack. The object will be deleted automatically when the current function exits.

How to allocate an object with a complex constructor?

I think I know C++ reasonably well and I am thinking about implementing something a bit bigger than a "toy" program. I know the difference between stack- and heap-memory and the RAII-idiom.
Lets assume I have a simple class point:
class point {
public:
int x;
int y;
point(int x, int y) : x(x), y(y) {}
};
I would allocate points always on the stack, since the objects are small. Since on 64-bit machines sizeof(point) == sizeof(void*), if I am not wrong, I would go even further and pass points by value by default.
Now lets assume a more complex class battlefield, that I want to use in the class game:
class battlefield {
public:
battlefield(int w, int h, int start_x, int start_y, istream &in) {
// Complex generation of a battlefield from a file/network stream/whatever.
}
};
Since I really like RAII and the automatic cleanup when an object leaves the scope I am tempted to allocate the battlefield on the stack.
game::game(const settings &s) :
battlefield(s.read("w"), s.read("h"), gen_random_int(), gen_random_int(), gen_istream(s.read("level_number"))) {
// ...
}
But I have several problems now:
Since this class does not have a zero-argument constructor, I have to initialize it in the initialisation list of whichever class uses battlefield. This is cumbersome, since I need an istream from somewhere. This leads to the next problem.
The complex constructors "snowball" at some point. When I use battlefield in the game class and initialize it in the initialisation list of the game constructor, the constructor of game becomes fairly complex too, and the initialisation of game itself might become cumbersome as well. (When I decide to take the istream as an argument of the game constructor.)
I need auxiliary functions to fill in complex parameters.
I see two solutions to this problem:
Either I create a simple constructor for battlefield that does not initialize the object. But this approach has the problem that I end up with a half-initialized object, i.e. an object that violates the RAII idiom. Strange things might happen when calling methods on such an object.
game::game(const settings &s) {
random_gen r;
int x = r.random_int();
int y = r.random_int();
ifstream in(s.read("level_number"));
in.open();
this->battlefield.init(s.read("w"), s.read("h"), x, y, in);
// ...
}
Or I allocate battlefield on the heap in the game constructor. But I have to beware of exceptions in the constructor and I have to take care that the destructor deletes the battlefield.
game::game(const settings &s) {
random_gen r;
int x = r.random_int();
int y = r.random_int();
ifstream in(s.read("level_number"));
in.open();
this->battlefield = new battlefield(s.read("w"), s.read("h"), x, y, in);
// ...
}
I hope you can see the problem I am thinking of. Some questions that arise for me are:
Is there a design pattern for this situations I do not know?
What is the best practice in bigger C++ projects? Which objects are allocated on the heap, which ones are allocated on the stack? Why?
What is the general advice regarding the complexity of constructors? Is reading from a file too much for a constructor? (Since this problem mostly arises from the complex constructor.)
You could let your battlefield be constructed from settings:
explicit battlefield(const settings& s);
or alternatively, why not create a factory function for your battlefield?
E.g.
battlefield CreateBattlefield(const settings& s)
{
int w = s.read("w");
int h = s.read("h");
std::istream& in = s.genistream();
return battlefield(w, h, gen_random_int(), gen_random_int(), in);
}
game::game(const settings &s) :
battlefield(CreateBattlefield(s)) {
// ...
}
But this approach has the problem that I have a half-initialized object, aka an object that violates the RAII-idiom.
That is not RAII. The concept is that you use objects to manage resources. When you acquire a resource like heap memory, a semaphore, or a file handle, you have to transfer its ownership to a resource-managing class. This is what smart pointers in C++ are meant for. Use unique_ptr if you want sole ownership of the object, or shared_ptr if you want multiple pointers to share ownership.
Or I allocate battlefield on the heap in the game constructor. But I have to beware of exceptions in the constructor and I have to take care that the destructor deletes the battlefield.
If your constructor throws an exception, the destructor of the object will not be called and you might end up with a half-constructed object. In that case, you have to keep track of which allocations the constructor performed before the exception was thrown and deallocate all of those. Again, smart pointers help by cleaning up resources automatically. See this FAQ.
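For example, here is a hedged sketch of the heap-allocating variant from the question rewritten with a smart pointer member (the member name field is an assumption; the settings helpers are taken from the question as-is):
#include <memory>

class game {
    std::unique_ptr<battlefield> field; // assumed member name
public:
    game(const settings &s)
        : field(new battlefield(s.read("w"), s.read("h"),
                                gen_random_int(), gen_random_int(),
                                gen_istream(s.read("level_number"))))
    {
        // if the battlefield constructor throws, the allocated memory is
        // released automatically, and ~game() needs no explicit delete
    }
};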
Which objects are allocated on the heap, which ones are allocated on the stack? Why?
Try to allocate objects on the stack whenever possible; they then live only for the scope of that block. If that is not possible, go for heap allocation, e.g. when you only know the size at runtime, or when the object is too big to sit on the stack.
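A tiny sketch of that rule of thumb, reusing the point class from the question:
#include <cstddef>
#include <vector>

void example(std::size_t n)
{
    point p(1, 2);             // small, size known at compile time: stack
    std::vector<int> cells(n); // size only known at run time: the vector owns
                               // heap storage but is itself a scoped object
}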

Parameter-passing of C++ objects with dynamically allocated memory

I'm new to the C++ world, but I have some experience with C and read some tutorials about C++.
Now, creating objects in C++ seems quite easy and works well for me as long as the class has only attributes that are values (not pointers).
Now, when I try to create objects which allocate memory in the constructor for some of their attributes, I can't figure out how exactly such objects are passed between functions.
A simple example of such class would be:
class A {
    int *a;
public:
    A(int value) {
        this->a = new int;
        *(this->a) = value;
    }
    ~A() {
        delete this->a;
    }
    int getValue() const { return *(this->a); }
};
I want to use the class and pass it by value to other functions, etc. At least these examples must work without creating memory leaks or double free errors.
A f1() {
// some function that returns A
A value(5);
// ...
return value;
}
void f2(A a) {
// takes A as a parameter
// ...
}
A a = f1();
A b = a;
f2(a);
f2(f1());
The class A is incomplete because I should also define operator= and a copy constructor A(const A &oldValue) to solve some of these problems.
As I understand it, the default implementations of these just copy the member values, which causes the destructor to be called twice on the same pointer value.
Am I right and what else am I missing?
In addition, do you know any good tutorial that explains this issue?
Use containers and smart pointers.
E.g. std::vector for dynamic length array, or boost::shared_ptr for dynamically allocated single object.
Don't deal directly with object lifetime management.
Cheers & hth.,
When you pass an object like that, you will create a copy of the object. To avoid doing that, you should pass a const reference...
void f2(A const & a)
{
}
This does mean that you are not allowed to change 'a' in your function - but, to be honest, you shouldn't be doing that anyway, as any changes won't be reflected back to the original argument that was passed in. So here the compiler is helping you out by refusing to compile code where you would otherwise have made a hard-to-find error.
Specifically, you must implement a copy constructor that properly copies the memory pointed to by the a member (a sketch follows at the end of this answer). The compiler-generated copy constructor would simply copy the pointer value, which would obviously be subject to a double delete.
Even doing this:
A value(5);
// ...
return value;
won't work either, because when value falls out of scope (at the end of the function) the destructor of A will be called, deleting the a member and leaving the returned copy pointing at invalid memory.
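For reference, a hedged sketch of what the missing copy operations for class A could look like, following the usual Rule of Three (these member definitions would go inside the class from the question; they are not part of the original answer):
A(const A &other) {               // deep-copy the pointed-to int
    this->a = new int(*other.a);
}
A &operator=(const A &other) {    // allocate first so self-assignment and
    int *copy = new int(*other.a); // a throwing new leave the object intact
    delete this->a;
    this->a = copy;
    return *this;
}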

RAII and uninitialized values

Just a simple question:
if I had a simple vector class:
class Vector
{
public:
float x;
float y;
float z;
};
Doesn't the RAII concept apply here as well? I.e. should I provide a constructor to initialize all values to something (to prevent an uninitialized value being used)?
EDIT: or to provide a constructor that explicitly requires the user to initialize the member variables before the object can be instantiated,
i.e.
class Vector
{
public:
float x;
float y;
float z;
public:
Vector( float x_, float y_, float z_ )
    : x( x_ ), y( y_ ), z( z_ )
{ /* code to check pre-conditions */ }
};
Should RAII be used to guard against the programmer forgetting to initialize a value before it's used, or is that the developer's responsibility?
Or is that the wrong way of looking at RAII?
I intentionally made this example ridiculously simple. My real question concerns, for example, a composite class such as:
class VectorField
{
public:
Vector top;
Vector bottom;
Vector back;
// a lot more!
};
As you can see... if I had to write a constructor to initialize every single member, it would be quite tedious.
Thoughts?
The "R" in RAII stands for Resource. Not everything is a resource.
Many classes, such as std::vector, are self-initializing. You don't need to worry about those.
POD types are not self initializing, so it makes sense to initialize them to some useful value.
Since the fields in your Vector class are built-in types, in order to ensure that they are initialized you'll have to do that in a constructor:
class Vector
{
public:
float x;
float y;
float z;
Vector() : x(0.0), y( 0.0), z( 0.0) {}
};
Now, if your fields were classes that were properly written, they should automatically initialize (and clean up, if necessary) by themselves.
In a way this is similar and related to RAII in that RAII means that resources (memory, handles, whatever) are acquired and cleaned up automatically by the object.
I wouldn't exactly say RAII applies here. Remember what the letters stand for: resource acquisition is initialization. You have no resources being acquired here, so RAII doesn't apply.
You could provide a default constructor to Vector; that would remove the need for you to explicitly initialize all the members of VectorField. The compiler would insert code to do that for you.
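In other words, something like this sketch would suffice:
class Vector
{
public:
    float x;
    float y;
    float z;
    Vector() : x(0.0f), y(0.0f), z(0.0f) {}
};

class VectorField
{
public:
    Vector top;    // each of these is now default-constructed to (0, 0, 0)
    Vector bottom;
    Vector back;
    // no VectorField constructor is needed just for initialization
};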
You use the RAII pattern when you need to do explicit cleanup, and want that cleanup to occur at the same time as another object is implicitly cleaned up. This can occur for memory allocation/deallocation, critical section entry/exit, database connections, etc. In your example, the "floats" are cleaned up automatically so you don't need to worry about them. However, say you had the following function that you called to obtain vectors:
Vector* getMeAVector() {
Vector *v = new Vector();
// do something
return v;
}
And say it was the caller's responsibility to delete the returned vector. If you called this code the following way:
Vector *v = getMeAVector();
// do some stuff with v
delete v;
You'd have to remember to free the vector. If the "stuff" is a long bit of code, which may throw an exception, or have a bunch of return statements in there, you'd have to free the vector with every exit point. Even if you do it, the person who maintains the code by adding another "return" statement or calling some library that throws an exception may not. Instead, you could write a class like this:
class AutoVector
{
Vector *v_;
public:
AutoVector(Vector *v) : v_(v) {}
~AutoVector() { delete v_; }
};
Then, you could obtain the vector like so:
Vector *v = getMeAVector();
AutoVector av(v);
// do lots of complicated stuff including throwing exceptions, multiple returns, etc.
Then you don't have to worry about deleting the vector any more because when av goes out of scope it will be deleted automatically. You can write a little macro to make the "AutoVector av(v)" syntax a little nicer too, if you want.
This is a bit of a contrived example, but if the surrounding code is complicated, or if it can throw exceptions, or someone comes along and adds a "return" statement in the middle, it's nice that the "AutoVector" will free the memory automatically.
You can do the same thing with an "auto" class that enters a critical section in its ctor and exits in its dtor, etc.
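For example, a sketch of such an "auto" class; the CriticalSection type and its enter()/leave() functions are placeholders for whatever locking API you actually use:
class AutoLock
{
    CriticalSection &cs_;
public:
    AutoLock(CriticalSection &cs) : cs_(cs) { cs_.enter(); }
    ~AutoLock() { cs_.leave(); } // always runs, even on exception or early return
};

void doWork(CriticalSection &cs)
{
    AutoLock lock(cs);
    // ... any return or throw in here still releases the critical section
}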
If you don't write a constructor, the compiler will generate a default constructor for you, but it leaves built-in members like these uninitialized. Providing a default constructor yourself and initializing the values there is the best way to do this. I don't think it's too complicated to do that. Don't be too lazy :-)