Limit the number of instances with the new keyword - c++

I've been reading a lot of debates about whether the Singleton pattern is good/bad/ugly, and what should be used instead of it.
The common implementation requires an Instance() method that invokes a private constructor if the object has not yet been created.
My question doesn't really fit the Singleton pattern, but would it be possible to limit the number of instances of a class by overriding new? And if say we only want one instance, return the already created instance?
If this is possible is it even a good idea?
The aim would be that any class needing access to MyClass would simply declare a private member of that type, which would be initialized the first time and then referenced from then on.
class ClassA {
    MyClass classRef;
};
class ClassB {
    MyClass classRef;
};
So if MyClass is limited to one instance, then depending on the order of instantiation, one of these objects will actually create a new MyClass and the other will just obtain its reference.

Objects can be allocated statically, on the stack, and within other objects. If you want just one instance, you need to disallow all of these somehow. Overloading operator new won't help you with this. Making the constructors private or protected will, but this will disable operator new for the users of the class as well.
Moreover, what operator new returns is not an object but a block of memory in which the object will be created. If you return an already allocated block, the constructor will be run over it each time operator new is called.
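For reference, the conventional route the question alludes to is a private constructor plus a static accessor; a minimal sketch (not hardened for thread safety, and the class name is just illustrative):
class MyClass {
public:
    // hands back the one shared instance, constructing it on first use
    static MyClass& Instance() {
        static MyClass instance; // function-local static: constructed on first call
        return instance;
    }
private:
    MyClass() {}                 // private ctor: no stack, static or heap instances elsewhere
    MyClass(const MyClass&);     // non-copyable (declared only; C++11 could use = delete)
    MyClass& operator=(const MyClass&);
};
Code that needs it then calls MyClass::Instance() (or stores a MyClass& obtained from it) instead of holding its own MyClass member.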

This sounds like a non-concurrent worker pool of some sort.
This can be a good idea when a large number of jobs are going to be executed by more than one service/driver and you want to implement throttling, or perhaps queue jobs to prevent swap file thrashing, or some other resource constraint.
Overriding new is probably not the right way to do it. Have the task farm be an object itself, and "allocate" tasks from there. The raw allocation of the task handle wrapper object should be free from such considerations.
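As a sketch of that shape, here is a hypothetical TaskFarm that hands out task handles and throttles by a fixed capacity (all names are invented for illustration):
#include <cstddef>
struct Task {
    // ... job data ...
};
class TaskFarm {
public:
    explicit TaskFarm(std::size_t maxTasks) : maxTasks_(maxTasks), active_(0) {}
    // "allocate" a task from the farm; throttle by capping concurrent tasks
    Task* acquire() {
        if (active_ >= maxTasks_)
            return 0; // farm is full: caller queues the job or retries later
        ++active_;
        return new Task();
    }
    // hand the task back to the farm when the job is done
    void release(Task* t) { delete t; --active_; }
private:
    std::size_t maxTasks_;
    std::size_t active_;
};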
And yes, singletons are ugly (or at least an ugly implementation of a good idea).

Overriding new won't work. First, it won't prevent additional instances on the stack or as static variables. Second, the operator new that you define only allocates memory; the constructor will still be called (with possibly disastrous effects if the singleton has mutable state).

You can limit the number of instantiations much more straightforwardly by keeping a counter as a static member variable:
#include <cassert>
template<unsigned int N>
class N_gleton {
private:
    static int number_of_instances_;
public:
    enum { MAX_NUMBER_OF_INSTANCES = N };
    N_gleton() {
        assert(number_of_instances_ < MAX_NUMBER_OF_INSTANCES);
        ++number_of_instances_;
    }
};
template<unsigned int N>
int N_gleton<N>::number_of_instances_ = 0; // initial value
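Usage is then ordinary construction, for example:
typedef N_gleton<1> Unique; // at most one instance allowed
Unique a;                   // fine
Unique b;                   // assert fires: second instance exceeds the limit
Note that the counter is never decremented, so destroyed instances still count against the limit, and with NDEBUG defined the assert (and hence the check) disappears.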

Related

Instantiate an object in method vs. make a class member

What are some reasons to instantiate an object needed in a method, vs. making the object a class member?
For example, in the code below I have a class ClassA that I want to use from other classes: User1 holds a pointer to a ClassA object as a member variable and instantiates it in its constructor, while User2 instantiates a ClassA object in a method just before using it. What are some reasons to do it one way vs. the other?
class ClassA
{
public:
    void doStuff(void) { }
};
//
// this class has ClassA as a member
//
class User1
{
public:
    User1()
    {
        classA = new ClassA();
    }
    ~User1()
    {
        delete classA;
    }
    void use(void)
    {
        classA->doStuff();
    }
private:
    ClassA *classA;
};
//
// this class uses ClassA only in a method
//
class User2
{
public:
    void use(void)
    {
        ClassA *classA = new ClassA();
        classA->doStuff();
        delete classA;
    }
};
int main(void)
{
    User1 user1;
    user1.use();
    User2 user2;
    user2.use();
    return 0;
}
The advantages of making it a class member are:
You don't have to allocate the instance every time, which depending on the class could be very slow.
The member can store state (though some people would say that this is a bad idea)
Less code.
As a side note, if you are just instantiating and deleting with new and delete in the constructor and destructor, it should really not be a pointer but a plain member instance; then you can get rid of the new and delete.
I.e.
class User1
{
public:
    void use(void)
    {
        classA.doStuff();
    }
private:
    ClassA classA;
};
There are times when this isn't the case, for instance when the class being allocated on the stack is large, or when you want the footprint of the holding class to be as small as possible. But these are the exception rather than the rule.
There are other things to consider, like memory fragmentation, the advantages of accessing contiguous memory blocks, and how memory is allocated on the target system. There are no silver bullets, only general advice, and for any particular program you need to measure and adjust to get the best performance or overcome the limitations of the particular program.
Memory fragmentation is when, even though you have a lot of memory free, the individual free blocks are quite small, so you get memory errors when you try to allocate a large block. It is usually caused by creating and destroying a lot of objects of various sizes, with some of them staying alive. If you have a system that suffers from memory fragmentation, I would suggest a thorough analysis of how objects are created rather than worrying about how having a member or not will affect the system. However, here is a breakdown of how the four different scenarios play out when you are suffering from memory fragmentation:
Instantiating the class on the stack is very helpful as it won't contribute to overall memory fragmentation.
Creating it as a value member might cause problems as it might increase the overall size of the object, so when you get to the fragmentation scenario, the object may be too large to be created.
Creating the object and storing a pointer to it may increase memory fragmentation
Allocating on the heap and deleting at the end of use may increase memory fragmentation if something else is allocated after it was.
The advantage of accessing contiguous memory is that cache misses are minimised, so my feeling is that having the object as a value member would be faster; but as with so many things, depending on lots of other variables, this could be completely wrong. As always when it comes to performance, measure.
Memory is often aligned to a particular boundary, for instance 4-byte alignment or power-of-2 block sizes, so depending on the size of your object it might take up more memory than you expect when allocated. Holding the object as a value member may significantly change the memory footprint of the holding class if the object has members of its own (if it doesn't, it probably won't increase the footprint at all), while holding a pointer to it will definitely increase the footprint by the size of a pointer, and that may still be a significant increase. Creating the object on the heap or the stack will not affect the size of the using class at all. As always, if this is going to affect your program you need to measure on the target system to see what the effects are going to be.
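As a rough illustration of the footprint point, a small sketch comparing a value member with a pointer member (Payload is a made-up type; the exact numbers depend on the compiler, padding and platform):
#include <iostream>
struct Payload { char data[64]; };
struct ByValue   { Payload  p; }; // roughly sizeof(Payload), plus any padding
struct ByPointer { Payload* p; }; // roughly sizeof(Payload*); the Payload lives elsewhere
int main() {
    std::cout << sizeof(ByValue) << " vs " << sizeof(ByPointer) << '\n';
    return 0;
}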
If the constructor/destructor does something significant (for instance managing a file handle: opening the file in the constructor and closing it in the destructor), then you might want to use it only in the function. But yet again, the pointer isn't usually necessary.
void use(void)
{
    ClassA classA;
    classA.doStuff();
} // classA will be destructed at end of scope
First off, there is no reason to have a pointer in either class. If we use value semantics in User1, then there is no need for a constructor or destructor, as the compiler-generated ones will be sufficient. That changes User1 to:
class User1
{
public:
    void use(void)
    {
        classA.doStuff();
    }
private:
    ClassA classA;
};
Likewise if we use value semantics in User2 then it would become:
class User2
{
public:
    void use(void)
    {
        ClassA classA;
        classA.doStuff();
    }
};
Now, whether you want to have ClassA as a member or should just use it in the function is a matter of design. If the class is going to be using and updating the ClassA, then it should be a member. If you just need it to do something in a function, then the second approach is okay.
If you are going to be calling the function that creates a ClassA a lot, it might be beneficial to have it be a member, as you only need to construct it once and you get to use it in the function. Conversely, if you are going to have a lot of objects but you hardly ever call that function, it might be better to create the ClassA when you need it, as you will save space.
Really, though, this is something that you would have to profile to determine which way is better. We programmers are bad judges of what is faster and should let the profiler tell us if we need to change something. Some things, like using value semantics over a pointer with heap allocation, are generally faster. One example where we get this wrong is sorting: if N is small, a bubble sort, which is O(n^2), can be faster than a quicksort, which is O(n log n). Another example is presented in this Herb Sutter talk starting at 46:00, where he shows that a std::vector is faster than a std::list at inserting and removing from the middle, because a std::vector is very cache friendly where a std::list is not.

C++ - Prevent global instantiation?

Is there a way to force a class to be instantiated on the stack or at least prevent it to be global in C++?
I want to prevent global instantiation because the constructor calls C APIs that need previous initialization. AFAIK there is no way to control the construction order of global objects.
Edit: The application targets an embedded device for which dynamic memory allocation is also prohibited. The only possible way for the user to instantiate the class is either on the stack or through a placement new operator.
Edit2: My class is part of a library which depends on other external libraries (from which come the C APIs). I can't modify those libraries and I can't control the way libraries are initialized in the final application, that's why I am looking for a way to restrict how the class could be used.
Instead of placing somewhat arbitrary restrictions on objects of your class I'd rather make the calls to the C API safe by wrapping them into a class. The constructor of that class would do the initialization and the destructor would release acquired resources.
Then you can require this class as an argument to your class and initialization is always going to work out.
The technique used for the wrapper is called RAII and you can read more about it in this SO question and this wiki page. It was originally meant to encapsulate resource initialization and release into objects, but can also be used for a variety of other things.
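A minimal sketch of such a wrapper, with hypothetical c_api_init()/c_api_shutdown() functions standing in for the real C calls:
extern "C" void c_api_init();     // placeholders for the real C API
extern "C" void c_api_shutdown();
class CApiSession {
public:
    CApiSession()  { c_api_init(); }      // acquire in the constructor
    ~CApiSession() { c_api_shutdown(); }  // release in the destructor
private:
    CApiSession(const CApiSession&);             // non-copyable
    CApiSession& operator=(const CApiSession&);
};
class MyClass {
public:
    explicit MyClass(CApiSession& session) : session_(session) {
        // by the time we get here, the C API has been initialized
    }
private:
    CApiSession& session_;
};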
Half an answer:
To prevent heap allocation (leaving only stack and static allocation), declare operator new private (or, in C++11, delete it):
void* operator new( size_t size );
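For example, a minimal sketch (C++03 style: declare it private and leave it undefined; in C++11 you would write = delete instead):
#include <cstddef>
class StackOnly {
public:
    StackOnly() {}
private:
    // declared but never defined: `new StackOnly` outside the class
    // fails to compile, and inside the class it fails to link
    void* operator new(std::size_t size);
    void* operator new[](std::size_t size);
};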
EDIT: Others have said to just document the limitations, and I kind of agree. Nevertheless, just for the hell of it: no heap allocation, no global allocation, APIs initialised (not quite in the constructor, but I would argue still good enough):
class Boogy
{
public:
    static Boogy* GetBoogy()
    {
        // here, we initialise the APIs before calling
        // DoAPIStuffThatRequiresInitialisationFirst()
        InitAPIs();
        Boogy* ptr = new Boogy();
        ptr->DoAPIStuffThatRequiresInitialisationFirst();
        return ptr;
    }
    // a public operator delete, so people can "delete" what we give them
    void operator delete( void* ptr )
    {
        // this function needs to manage marking pool slots as allocated
        // or not
    }
private:
    // operator new hands out slots from the statically allocated pool below.
    void* operator new( size_t size )
    {
        Boogy* ptr = &(m_Memory[0]);
        // (this function also needs to manage marking slots as allocated
        // or not)
        return ptr;
    }
    void DoAPIStuffThatRequiresInitialisationFirst()
    {
        // move the stuff that requires initialisation first
        // from the ctor into HERE.
    }
    // Declare ALL ctors private so there is no uncontrolled allocation,
    // on the stack or the heap, global or otherwise.
    Boogy(){}
    // All Boogys live in this statically allocated pool (not on the stack).
    static Boogy m_Memory[10];
};
Boogy Boogy::m_Memory[10]; // the pool itself needs a definition at namespace scope
I don't know if I'm proud or ashamed! :-)
You cannot, per se, prevent objects from being created as globals. And I would argue you should not try: after all, why not build an object that initializes those libraries, instantiate it globally, and then instantiate your object globally?
So, let me rephrase the question to drill down to its core:
How can I prevent my object from being constructed before some initialization work has been done?
The response, in general, is: it depends.
It all boils down to what the initialization work is, specifically:
is there a way to detect it has not been called yet?
are there drawbacks to calling the initialization functions several times?
For example, I can create the following initializer:
class Initializer {
public:
    Initializer() { static bool _ = Init(); (void)_; }
protected:
    // boilerplate to prevent slicing
    Initializer(Initializer&&) = default;
    Initializer(Initializer const&) = default;
    Initializer& operator=(Initializer const&) = default;
private:
    static bool Init();
}; // class Initializer
The first time this class is instantiated, it calls Init, and afterwards this is ignored (at the cost of a trivial comparison). Now, it's trivial to inherit (privately) from this class to ensure that by the time your constructor's initializer list or body is called the initialization required has been performed.
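A usage sketch, assuming MyObject is a (hypothetical) class that must not be constructed before the C API is ready:
class MyObject : private Initializer {
public:
    MyObject() {
        // base subobjects are constructed first, so Initializer has already
        // run Init() (once, program-wide) before this body executes
    }
};
MyObject global_instance; // even as a global, Init() is guaranteed to run first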
How should Init be implemented?
It depends on what's possible and cheaper: either detecting that the initialization has already been done, or calling the initialization regardless.
And if the C API is so crappy you cannot actually do either?
You're toast. Welcome, documentation.
You can try using the Singleton pattern
Is there a way to force a class to be instantiated on the stack or at least prevent it to be global in C++?
Not really. You could make the constructor private and create the object only through a factory method, but nothing would really prevent you from using said method to create a global variable.
If global variables were guaranteed to be initialized before the application enters main, then you could set a flag at the start of main and throw an exception from the constructor whenever that flag has not been set yet. However, it is up to the implementation to decide when to initialize global variables, so they could be initialized after the application enters main; relying on that ordering is relying on implementation-dependent behavior, which isn't a good idea.
You could, in theory, attempt to walk the call stack and see from where the constructor is called. However, the compiler could inline the constructor or several functions, this would be non-portable, and walking the call stack in C++ is painful.
You could also manually check the "this" pointer and attempt to guess where it is located. However, this would be a non-portable hack specific to a particular compiler, OS and architecture.
So there is no good solution I can think of.
As a result, the best idea would be to change your program's behavior, as others have already suggested: make a singleton class that initializes your C API in its constructor, deinitializes it in its destructor, and request this class when necessary via a factory method. This will be the most elegant solution to your problem.
Alternatively, you could attempt to document program behavior.
To allocate a class on the stack, you simply say
FooClass foo; // NOTE no parenthesis because it'd be parsed
// as a function declaration. It's a famous gotcha.
To allocate it on the heap, you say
std::unique_ptr<FooClass> foo(new FooClass()); //or
FooClass* foop = new FooClass(); // less safe
Your object will only be global if you declare it at program scope.

delete this & private destructor

I've been thinking about the possible uses of delete this in C++, and I've seen one use.
Because you can say delete this only when an object is on the heap, I can make the destructor private and stop objects from being created on the stack altogether. In the end I can just delete the object on the heap by saying delete this in a public member function that acts as a destructor. My questions:
1) Why would I want to force the object to be made on the heap instead of on the stack?
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
Any scheme that uses delete this is somewhat dangerous, since whoever called the function that does that is left with a dangling pointer. (Of course, that's also the case when you delete an object normally, but in that case, it's clear that the object has been deleted). Nevertheless, there are somewhat legitimate cases for wanting an object to manage its own lifetime.
It could be used to implement a nasty, intrusive reference-counting scheme. You would have functions to "acquire" a reference to the object, preventing it from being deleted, and then "release" it once you've finished, deleting it if no one else has acquired it, along the lines of:
#include <cstddef> // for size_t
class Nasty {
public:
    Nasty() : references(1) {}
    void acquire() {
        ++references;
    }
    void release() {
        if (--references == 0) {
            delete this;
        }
    }
private:
    ~Nasty() {}
    size_t references;
};
// Usage
Nasty * nasty = new Nasty; // 1 reference
nasty->acquire();          // get a second reference
nasty->release();          // back to one
nasty->release();          // deleted
nasty->acquire();          // BOOM!
I would prefer to use std::shared_ptr for this purpose, since it's thread-safe, exception-safe, works for any type without needing any explicit support, and prevents access after deleting.
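For comparison, a sketch of the same acquire/release sequence expressed with std::shared_ptr; the class itself needs no special support:
#include <memory>
class Friendly {
public:
    void work() {}
};
int main() {
    std::shared_ptr<Friendly> p = std::make_shared<Friendly>(); // 1 reference
    std::shared_ptr<Friendly> q = p;                            // 2 references
    q.reset();                                                  // back to one
    p.reset();                                                  // deleted here
    // no "acquire after delete" is possible: there is no pointer left to copy
    return 0;
}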
More usefully, it could be used in an event-driven system, where objects are created, and then manage themselves until they receive an event that tells them that they're no longer needed:
class Worker : EventReceiver {
public:
    Worker() {
        start_receiving_events(this);
    }
    virtual void on(WorkEvent) {
        do_work();
    }
    virtual void on(DeleteEvent) {
        stop_receiving_events(this);
        delete this;
    }
private:
    ~Worker() {}
    void do_work();
};
1) Why would I want to force the object to be made on the heap instead of on the stack?
1) Because the object's lifetime is not logically tied to a scope (e.g., function body, etc.). Either because it must manage its own lifespan, or because it is inherently a shared object (and thus, its lifespan must be attached to those of its co-dependent objects). Some people here have pointed out some examples like event handlers, task objects (in a scheduler), and just general objects in a complex object hierarchy.
2) Because you want to control the exact location where code is executed for the allocation / deallocation and construction / destruction. The typical use-case here is that of cross-module code (spread across executables and DLLs (or .so files)). Because of issues of binary compatibility and separate heaps between modules, it is often a requirement that you strictly control in what module these allocation-construction operations happen. And that implies the use of heap-based objects only.
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
Well, your use-case is really just a "how-to", not a "why". Of course, if you are going to use a delete this; statement within a member function, then you must have controls in place to force all creations to occur with new (and in the same translation unit as the one in which the delete this; statement occurs). Not doing this would just be very poor style and dangerous. But that doesn't address the "why" you would use this.
1) As others have pointed out, one legitimate use-case is where you have an object that can determine when its job is over and consequently destroy itself. For example, an event handler deleting itself when the event has been handled, a network communication object that deletes itself once the transaction it was appointed to do is over, or a task object in a scheduler deleting itself when the task is done. However, this leaves a big problem: signaling to the outside world that it no longer exists. That's why many have mentioned the "intrusive reference counting" scheme, which is one way to ensure that the object is only deleted when there are no more references to it. Another solution is to use a global (singleton-like) repository of "valid" objects, in which case any accesses to the object must go through a check in the repository and the object must also add/remove itself from the repository at the same time as it makes the new and delete this; calls (either as part of an overloaded new/delete, or alongside every new/delete calls).
However, there is a much simpler and less intrusive way to achieve the same behavior, albeit less economical. One can use a self-referencing shared_ptr scheme. As so:
#include <memory>  // std::shared_ptr, std::weak_ptr
#include <utility> // std::forward
class AutonomousObject {
private:
    std::shared_ptr<AutonomousObject> m_shared_this;
protected:
    AutonomousObject(/* some params */);
public:
    virtual ~AutonomousObject() { }
    template <typename... Args>
    static std::weak_ptr<AutonomousObject> Create(Args&&... args) {
        std::shared_ptr<AutonomousObject> result(new AutonomousObject(std::forward<Args>(args)...));
        result->m_shared_this = result; // link the self-reference.
        return result;                  // return a weak-pointer.
    }
    // this is the function called when the life-time should be terminated:
    void OnTerminate() {
        m_shared_this.reset(); // drop the self-reference; if nobody else holds
                               // a shared_ptr, the object is destroyed here.
    }
};
With the above (or some variations upon this crude example, depending on your needs), the object will be alive for as long as it deems necessary and that no-one else is using it. The weak-pointer mechanism serves as the proxy to query for the existence of the object, by possible outside users of the object. This scheme makes the object a bit heavier (has a shared-pointer in it) but it is easier and safer to implement. Of course, you have to make sure that the object eventually deletes itself, but that's a given in this kind of scenario.
2) The second use-case I can think of ties in to the second motivation for restricting an object to be heap-only (see above), however, it applies also for when you don't restrict it as such. If you want to make sure that both the deallocation and the destruction are dispatched to the correct module (the module from which the object was allocated and constructed), you must use a dynamic dispatching method. And for that, the easiest is to just use a virtual function. However, a virtual destructor is not going to cut it because it only dispatches the destruction, not the deallocation. The solution is to use a virtual "destroy" function that calls delete this; on the object in question. Here is a simple scheme to achieve this:
struct CrossModuleDeleter; // forward-declare.
class CrossModuleObject {
private:
    virtual void Destroy() /* final */;
public:
    CrossModuleObject(/* some params */); // constructor can be public.
    virtual ~CrossModuleObject() { }      // destructor can be public.
    //.... whatever...
    friend struct CrossModuleDeleter;
    template <typename... Args>
    static std::shared_ptr< CrossModuleObject > Create(Args&&... args);
};
struct CrossModuleDeleter {
    void operator()(CrossModuleObject* p) const {
        p->Destroy(); // do a virtual dispatch to reach the correct deallocator.
    }
};
// In the cpp file:
// Note: This function should not be inlined, so stash it into a cpp file.
void CrossModuleObject::Destroy() {
    delete this;
}
template <typename... Args>
std::shared_ptr< CrossModuleObject > CrossModuleObject::Create(Args&&... args) {
    return std::shared_ptr< CrossModuleObject >( new CrossModuleObject(std::forward<Args>(args)...), CrossModuleDeleter() );
}
The above kind of scheme works well in practice, and it has the nice advantage that the class can act as a base-class with no additional intrusion by this virtual-destroy mechanism in the derived classes. And, you can also modify it for the purpose of allowing only heap-based objects (as usually, making constructors-destructors private or protected). Without the heap-based restriction, the advantage is that you can still use the object as a local variable or data member (by value) if you want, but, of course, there will be loop-holes left to avoid by whoever uses the class.
As far as I know, these are the only legitimate use-cases I have ever seen anywhere or heard of (and the first one is easily avoidable, as I have shown, and often should be).
The general reason is that the lifetime of the object is determined by some factor internal to the class, at least from an application viewpoint. Hence, it may very well be a private method which calls delete this;.
Obviously, when the object is the only one to know how long it's needed, you can't put it on a random thread stack. It's necessary to create such objects on the heap.
It's generally an exceptionally bad idea. There are very few cases: for example, COM objects have enforced intrusive reference counting. You'd only ever do this for a very specific situational reason, never for a general-purpose class.
1) Why would I want to force the object to be made on the heap instead of on the stack?
Because its life span isn't determined by the scoping rule.
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
You use delete this when the object is the best-placed one to be responsible for its own life span. One of the simplest examples I know of is a window in a GUI. The window reacts to events, a subset of which means that the window has to be closed and thus deleted. In the event handler the window does a delete this. (You may delegate the handling to a controller class, but the situation "window forwards event to controller class which decides to delete the window" isn't much different from delete this: the window event handler will still be left with the window deleted. You may also need to decouple the close from the delete, but your rationale won't be related to the desirability of delete this.)
delete this; can be useful at times and is usually used for a control class that also controls the lifetime of another object. With intrusive reference counting, the class it is controlling is one that derives from it.
The outcome of using such a class should be to make lifetime handling easier for users or creators of your class. If it doesn't achieve this, it is bad practice.
A legitimate example may be where you need a class to clean up all references to itself before it is destructed. In such a case, you "tell" the class whenever you are storing a reference to it (in your model, presumably) and then on exit, your class goes around nulling out these references or whatever before it calls delete this on itself.
This should all happen "behind the scenes" for users of your class.
"Why would I want to force the object to be made on the heap instead of on the stack?"
Generally when you force that it's not because you want to as such, it's because the class is part of some polymorphic hierarchy, and the only legitimate way to get one is from a factory function that returns an instance of a different derived class according to the parameters you pass it, or according to some configuration that it knows about. Then it's easy to arrange that the factory function creates them with new. There's no way that users of those classes could have them on the stack even if they wanted to, because they don't know in advance the derived type of the object they're using, only the base type.
Once you have objects like that, you know that they're destroyed with delete, and you can consider managing their lifecycle in a way that ultimately ends in delete this. You'd only do this if the object is somehow capable of knowing when it's no longer needed, which usually would be (as Mike says) because it's part of some framework that doesn't manage object lifetime explicitly, but does tell its components that they've been detached/deregistered/whatever[*].
If I remember correctly, James Kanze is your man for this. I may have misremembered, but I think he occasionally mentions that in his designs delete this isn't just used but is common. Such designs avoid shared ownership and external lifecycle management, in favour of networks of entity objects managing their own lifecycles. And where necessary, deregistering themselves from anything that knows about them prior to destroying themselves. So if you have several "tools" in a "toolbelt" then you wouldn't construe that as the toolbelt "owning" references to each of the tools, you think of the tools putting themselves in and out of the belt.
[*] Otherwise you'd have your factory return a unique_ptr or auto_ptr to encourage callers to stuff the object straight into the memory management type of their choice, or you'd return a raw pointer but provide the same encouragement via documentation. All the stuff you're used to seeing.
A good rule of thumb is not to use delete this.
Simply put, the code that uses new should be responsible for using delete when done with the object. This also avoids the problem of whether the object is on the stack or the heap.
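One way to keep that responsibility explicit is for the creating code to hold the object in a smart pointer instead of ever reaching for delete this; a sketch (Widget is a placeholder name):
#include <memory>
class Widget {
public:
    void work() {}
};
void caller() {
    std::unique_ptr<Widget> w(new Widget()); // the creator owns it...
    w->work();
}                                            // ...and it is deleted here, not by Widget itself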
Once upon a time I was writing some plugin code. I believe I mixed builds (debug for the plugin, release for the main code, or maybe the other way around) because one part had to be fast. Or maybe it was another situation, such as the main code already being released and built on gcc while the plugin was being debugged/tested on VC. When the main code deleted something from the plugin, or the plugin deleted something, a memory issue would occur, because they each used different memory pools or malloc implementations. So I had a private dtor and a virtual function called deleteThis().
-edit- Now I might consider overloading the delete operator, using a smart pointer, or simply stating that the caller should never delete the object. It will depend; usually overloading new/delete should never be done unless you really know what you're doing (don't do it). I decided to use deleteThis() because I found it easier than the C-like way of thing_alloc and thing_free; deleteThis() felt like the more OOP way of doing it.

Get (pointer to) calling object

I have an object, a scheduler class. This scheduler class is given member function pointers, times, and a pointer to the object which created the scheduler.
This means I can do something like (pObject->*h.function)(h.param); where pObject is the pointer to the original object and h is a class which contains the function pointer plus a void pointer parameter, so I can pass arguments to the original function.
For initializing this object I have the explicit Scheduler(pObjType o); constructor (where pObjType is a template parameter).
When I create an object which should have this alarm I type:
struct A {
    typedef void (A::*A_FN2)(void*);
    typedef Scheduler<A*, A_FN2> AlarmType;
    A(int _x) : alarm(NULL)
    {
        alarm.SetObject(this);
    }
    AlarmType alarm;
};
However, this alarm type puts quite a big limitation on the object: if I forget to add a copy constructor (to A), the class gets undefined behaviour. The scheduler would keep pointing to the original object, and that original object might go out of scope, or even worse, might not.
Is there a method so that when I copy my alarm (by default, i.e. in the scheduler's copy constructor) I can get the calling object (and a pointer to it)?
Or if that isn't possible, is it possible to produce a (compile-time) error if I forget to implement a copy constructor for my structure and then try to copy it somewhere?
As I see it, you have an opportunity to improve your design here, which may help you get rid of your worry.
It is usually a bad idea to pass around member function pointers. It is better to make your structs inherit from an abstract base class, making the functions you want to customize pure virtual.
If you don't need copying, it is best to disallow it in the base class, either by declaring the copy constructor and copy assignment operator private and leaving them undefined, or by inheriting from boost::noncopyable.
If you want any kind of automatic copy-construction semantics, then you're going to need the CRTP: no other pattern provides a pointer to the owning object.
The other thing is that you should really use a boost::function<> or std::function<>; they're far more generic, and you're going to need that if you want to be able to use Lua functions.
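For illustration, a sketch of what the std::function route can look like (Scheduler2, schedule and arm are invented names; C++11 lambdas assumed):
#include <cstddef>
#include <functional>
#include <vector>
class Scheduler2 {
public:
    void schedule(std::function<void()> job) { jobs_.push_back(job); }
    void runAll() {
        for (std::size_t i = 0; i < jobs_.size(); ++i) jobs_[i]();
        jobs_.clear();
    }
private:
    std::vector<std::function<void()> > jobs_;
};
struct A2 {
    void tick() { /* ... */ }
    void arm(Scheduler2& s) {
        A2* self = this;
        s.schedule([self]() { self->tick(); }); // binds the calling object explicitly
    }
};
The same lifetime caveat from the question still applies: the captured pointer dangles if the A2 instance is destroyed (or copied and the original destroyed) while a job is still queued.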
The simplest way to prevent the specific issue you're asking about is to make Scheduler noncopyable (e.g. with boost::noncopyable). This means that any client class incorporating a value member of type Scheduler will fail to be copyable. The hope is that this provides a hint for the programmer to check the docs and figure out the copy semantics of Scheduler (i.e. construct a new Scheduler for every new A), but it's possible for someone to get this wrong if they work around the problem by just holding the Scheduler by pointer. Aliasing the pointer gives exactly the same problem as default-copy-constructing the Scheduler instance that holds a pointer.
Any time you have raw pointers you have to have a policy on object lifetime. You want to ensure that the lifetime of any class A is at least as long as the corresponding instance of Scheduler, and as I see it there are three ways to ensure this:
Use composition - not possible in this case because A contains a Scheduler, so Scheduler can't contain an A
Use inheritance, e.g. in the form of the Curiously Recurring Template Pattern (CRTP)
Have a global policy on enforcing lifetimes of A instances, e.g. requiring that they are always held by smart pointer, or that cleaning them up is the responsibility of some class that also knows to clean up the Schedulers that depend on them
The CRTP could work like this:
#include <iostream>
using namespace std;
template<typename T>
struct Scheduler {
    typedef void (T::*MemFuncPtr)(void);
    Scheduler(MemFuncPtr action) :
        action(action)
    {
    }
private:
    void doAction()
    {
        // CRTP: the scheduler knows the owner's type, so it can downcast itself
        (static_cast<T*>(this)->*action)();
    }
    MemFuncPtr action;
};
struct Alarm : private Scheduler<Alarm> {
    friend struct Scheduler<Alarm>; // lets the private base downcast to Alarm
    Alarm() : Scheduler<Alarm>(&Alarm::doStuff)
    {
    }
    void doStuff()
    {
        cout << "Doing stuff" << endl;
    }
};
Note that private inheritance ensures that clients of the Alarm class can't treat it as a raw Scheduler.

Static Pointer to Dynamically allocated array

So the question is relatively straightforward: I have several semi-large lookup tables, ~500 KB apiece. These exact same tables are used by several class instantiations (maybe lots), and with this in mind I don't want to store the same tables in each instance. So I can either dump the entire tables onto the stack as 'static' members, or I can have 'static' pointers to these tables. In either case the constructor for the class will check whether they are initialized and do so if not. However, my question is: if I choose the static pointers to the tables (so as not to abuse the stack space), what is a good method for appropriately cleaning them up?
Also note that I have considered using boost::shared_ptr but opted not to; this is a very small project and I am not looking to add any dependencies.
Thanks
Static members are never allocated on the stack. When you define them (which, of course, you do explicitly), they're assigned space somewhere else (typically a data segment).
If it makes sense that the lookup tables are members of the class, then make them static members!
When a class is instanced on the stack, the static member variables don't form part of the stack cost.
If, for instance, you want:
class MyClass {
    ...
    static int LookUpTable[LARGENUM];
};
int MyClass::LookUpTable[LARGENUM];
When you instance MyClass on the stack, MyClass::LookUpTable refers to the object that you've explicitly defined on the last line of the code sample above. Best of all, there's no need to deallocate it, since it's essentially a global variable; it can't leak, since it's not on the heap.
If you don't free the memory for the tables at all, then when your program exits the OS will automatically throw away all memory allocated by your application. This is an appropriate strategy for handling memory that is allocated only once by your application.
Leaving the memory alone can actually improve performance too, because you won't waste time on shutdown trying to explicitly free everything and therefore possibly force a page in for all the memory you allocated. Just let the OS do it when you exit.
If these are lookup tables, the easiest solution is just to use std::vector:
class SomeClass {
    /* ... */
    static std::vector<element_type> static_data;
};
// the static member also needs a definition in one .cpp file:
std::vector<element_type> SomeClass::static_data;
To initialize, you can do:
static_data.resize(numberOfElements);
// now initialize the contents
With this you can still do array-like access, as in:
SomeClass::static_data[42].foo();
And with any decent compiler, this should be as fast as a pointer to a native array.
Why don't you create a singleton class that manages the lookup tables? Since they need to be accessed by a number of classes, make the singleton the manager of the lookup tables, accessible at global scope. Then all the classes can use the singleton's getters/setters to manipulate the lookup tables. There are three advantages to this approach:
If the static container for the lookup tables becomes large, the default stack size (1 MB on Windows) could lead to a stack overflow at application start-up; use a container that allocates dynamically.
If you plan to access the tables from multiple threads, the singleton class can be extended to provide locked access.
You can also clean up in the dtor of the singleton during application exit.
I can think of several ways to approach this, depending upon what you are trying to accomplish.
If the data is static and fixed, using a static array which is global and initialized within the code would be a good approach. Everything is contained in the code and loaded when the program is started, so it is available. Then all of the classes which need it can access the information.
If the data is not static and needs to be read in, a static STL container such as a vector, list or map would be good, as it can grow as you add elements to it. Some of these classes provide lookup methods as well. Depending upon the data you are looking up, you may have to provide a structure and some comparison operator for the STL containers to work correctly.
In either of the two cases, you might want to make a static global class to read and contain the data. It can take care of managing initialization and access to the data. You can use private members to indicate whether the data has been read in and is available for use. If it has not, the class might be able to do the initialization by itself if it has enough information. The other classes can call static functions of the static global class to access the data. This provides encapsulation of the data, and it can then be used by several different classes without those classes needing to incorporate the large lookup table.
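A sketch of that "static global class" idea, with a hypothetical TableManager holding one shared table behind static accessors (the size and element type are placeholders):
#include <vector>
class TableManager {
public:
    // lazy, one-time initialization on first access
    // (not thread-safe as written; add locking if accessed from multiple threads)
    static const std::vector<int>& table() {
        if (!initialized_) {
            table_.resize(500 * 1024 / sizeof(int)); // fill in the real data here
            initialized_ = true;
        }
        return table_;
    }
private:
    static bool initialized_;
    static std::vector<int> table_;
};
bool TableManager::initialized_ = false;
std::vector<int> TableManager::table_;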
There are several possibilities with various advantages and disadvantages. I don't know what the table contains, so I'll call it an Entry.
If you just want the memory to be sure to go away when the program exits, use a global smart pointer; since the table is an array, std::unique_ptr<Entry[]> (which calls delete[]) is the right choice rather than auto_ptr (which would call delete):
std::unique_ptr<Entry[]> pTable;
You can initialize it whenever you like, and it will automatically be deleted when the program exits. Unfortunately, it will pollute the global namespace.
It sounds like you are using the same table within multiple instances of the same class. In this case, it is usual to make it a static pointer of that class:
class MyClass {
    ...
protected:
    static std::unique_ptr<Entry[]> pTable;
};
std::unique_ptr<Entry[]> MyClass::pTable; // definition goes in a .cpp file
If you want it to be accessible in instances of different classes, then you might make it a static variable inside a function; this will also be deleted when the program exits, but the really nice thing is that it won't be initialized until the function is first entered. I.e., the resource won't need to be allocated if the function is never called upon:
Entry* getTable() {
    static std::unique_ptr<Entry[]> pTable( new Entry[ gNumEntries ] );
    return pTable.get();
}
You can do any of these with std::vector<Entry> rather than a smart pointer, if you prefer, but the main advantage of that is that it can more easily be dynamically resized. That might not be something you value.