Recently I read some C++ code that makes extensive use of the following getInstance() method:
class S
{
private:
int some_int = 0;
public:
static S& getInstance()
{
static S instance; // (*)
return instance;
}
};
From how this code fragment is used, I gathered that getInstance() works something like return this, returning the address (or a reference) of the instance of class S. But I got confused.
1) Where is the static variable defined at line (*) allocated in memory? And why can it work like return this?
2) What if more than one instance of class S exists; whose reference will be returned?
This is the so-called Singleton design pattern. Its distinguishing feature is that there can only ever be exactly one instance of that class, and the pattern ensures that. The class has a private constructor and a statically-created instance that is returned with the getInstance method. You cannot create an instance from the outside and thus can get the object only through said method.
Since instance is static in the getInstance method, it retains its value between multiple invocations. It is allocated and constructed at some point before it is first used. E.g. in this answer it seems like GCC initializes the static variable at the time the function is first used. This answer has some excerpts from the C++ standard related to that.
A static variable with function scope is initialized the first time the function is called. The compiler keeps track of the fact that it has already been initialized and avoids re-creating it on subsequent visits to the function. This property is ideal for implementing the singleton pattern, since it ensures we maintain just a single copy of the object. In addition, newer compilers make sure this initialization is also thread safe, giving you a thread-safe singleton implementation as a bonus, without using any dynamic memory allocation. [As pointed out in the comments, it is the C++11 standard which guarantees the thread safety, so do check whether your compiler provides it if you are using an older one.]
1) Where is the static variable defined at line (*) allocated in memory? And why can it work like return this?
There are specific regions in memory where static variables are stored, just as there is the heap for dynamically allocated variables and the stack for ordinary automatic variables. It can vary with compilers, but with GCC there are specific sections, the DATA and BSS segments, within the executable generated by the compiler where initialized and uninitialized static variables are stored.
2) What if more than one instance of class S exists; whose reference will be returned?
As mentioned at the top, since it is a static variable the compiler ensures there can only be one instance, created the first time the function is visited. Also, since it is scoped within the function, it cannot clash with any other instances existing elsewhere, and getInstance ensures you always see that same single instance.
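As an illustration, here is a minimal sketch (assuming C++11 or later) that also makes the constructor private and deletes copying, so the single-instance property is actually enforced; both threads end up with the very same object:
#include <iostream>
#include <thread>

class S
{
private:
    S() = default;                      // nobody outside the class can construct an S
    int some_int = 0;
public:
    S(const S&) = delete;               // copying/assigning the instance is forbidden
    S& operator=(const S&) = delete;

    static S& getInstance()
    {
        static S instance;              // constructed exactly once, thread-safely (C++11)
        return instance;
    }
};

int main()
{
    const S* a = nullptr;
    const S* b = nullptr;
    std::thread t1([&a] { a = &S::getInstance(); });
    std::thread t2([&b] { b = &S::getInstance(); });
    t1.join();
    t2.join();
    std::cout << std::boolalpha << (a == b) << '\n';  // prints: true
}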
Is there a way to force a class to be instantiated on the stack, or at least to prevent it from being global, in C++?
I want to prevent global instantiation because the constructor calls C APIs that need prior initialization. AFAIK there is no way to control the construction order of global objects.
Edit: The application targets an embedded device for which dynamic memory allocation is also prohibited. The only possible solution is for the user to instantiate the class either on the stack or through a placement new operator.
Edit2: My class is part of a library which depends on other external libraries (from which the C APIs come). I can't modify those libraries, and I can't control the way the libraries are initialized in the final application; that's why I am looking for a way to restrict how the class can be used.
Instead of placing somewhat arbitrary restrictions on objects of your class I'd rather make the calls to the C API safe by wrapping them into a class. The constructor of that class would do the initialization and the destructor would release acquired resources.
Then you can require this class as an argument to your class and initialization is always going to work out.
The technique used for the wrapper is called RAII and you can read more about it in this SO question and this wiki page. It was originally meant to encapsulate resource initialization and release into objects, but it can also be used for a variety of other things.
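As a rough sketch of that idea (the names ApiSession and Device and the C functions c_api_init()/c_api_shutdown() are made-up placeholders for whatever the real library provides):
// Hypothetical C API, standing in for the real library calls:
extern "C" void c_api_init();
extern "C" void c_api_shutdown();

// RAII wrapper: constructing it initializes the C API, destroying it shuts it down.
class ApiSession
{
public:
    ApiSession()  { c_api_init(); }
    ~ApiSession() { c_api_shutdown(); }
private:
    ApiSession(const ApiSession&);            // non-copyable (declared, never defined)
    ApiSession& operator=(const ApiSession&);
};

// The class under discussion requires a live session as an argument, so it simply
// cannot be constructed before the C API has been initialized.
class Device
{
public:
    explicit Device(ApiSession& session) : session_(session) {}
private:
    ApiSession& session_;
};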
Half an answer:
To prevent heap allocation (so only stack allocation is allowed), declare operator new and make it private:
void* operator new( size_t size );
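For instance, a minimal sketch of that approach (hypothetical class name; in C++11 you could also write = delete, which gives a clearer diagnostic):
#include <cstddef>

class StackOnly
{
public:
    StackOnly() {}
private:
    // Declared private and left undefined: 'new StackOnly' no longer compiles
    // outside the class.
    static void* operator new(std::size_t size);
    static void* operator new[](std::size_t size);
};

int main()
{
    StackOnly ok;                         // fine: automatic (stack) storage
    // StackOnly* bad = new StackOnly();  // error: 'operator new' is private
    (void)ok;
    return 0;
}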
EDIT: Others have said to just document the limitations, and I kind of agree; nevertheless, and just for the hell of it: no heap allocation, no global allocation, and APIs initialised (not quite in the constructor, but I would argue still good enough):
class Boogy
{
public:
static Boogy* GetBoogy()
{
// here, we initialise the APIs before calling
// DoAPIStuffThatRequiresInitialisationFirst()
InitAPIs();
Boogy* ptr = new Boogy();
ptr->DoAPIStuffThatRequiresInitialisationFirst();
return ptr;
}
// a public operator delete, so people can "delete" what we give them
void operator delete( void* ptr )
{
// this function needs to manage marking array objects as allocated
// or not
}
private:
// operator new hands out objects from a fixed, statically allocated pool (no heap).
void* operator new( size_t size )
{
Boogy* ptr = &(m_Memory[0]);
// (this function also needs to manage marking objects as allocated
// or not)
return ptr;
}
void DoAPIStuffThatRequiresInitialisationFirst()
{
// move the stuff that requires initialisation first
// from the ctor into HERE.
}
// Declare ALL ctors private so no uncontrolled allocation,
// on stack or HEAP, GLOBAL or otherwise,
Boogy(){}
// All Boogys live in this statically allocated pool.
static Boogy m_Memory[10];
};
I don't know if I'm proud or ashamed! :-)
You cannot, per se, prevent putting objects at global scope. And I would argue you should not try: after all, why not build an object that initializes those libraries, instantiate it globally, and then instantiate your object globally?
So, let me rephrase the question to drill down to its core:
How can I prevent my object from being constructed before some initialization work has been done ?
The response, in general, is: it depends.
It all boils down to what the initialization work is, specifically:
is there a way to detect it has not been called yet?
are there drawbacks to calling the initialization functions several times?
For example, I can create the following initializer:
class Initializer {
public:
Initializer() { static bool _ = Init(); (void)_; }
protected:
// boilerplate to prevent slicing
Initializer(Initializer&&) = default;
Initializer(Initializer const&) = default;
Initializer& operator=(Initializer const&) = default;
private:
static bool Init();
}; // class Initializer
The first time this class is instantiated, it calls Init, and afterwards this is ignored (at the cost of a trivial comparison). Now, it's trivial to inherit (privately) from this class to ensure that by the time your constructor's initializer list or body is called the initialization required has been performed.
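For instance, a hypothetical user class could look like this; base-class subobjects are constructed before the members and the constructor body, so Init has already run by then:
class MyObject : private Initializer {
public:
    MyObject()
    {
        // Initializer's constructor has already run at this point, so Init() is
        // guaranteed to have been called exactly once; the C API is ready to use.
    }
};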
How should Init be implemented?
That depends on what's possible and cheaper: either detect that the initialization has already been done, or call the initialization regardless.
And if the C API is so crappy you cannot actually do either?
You're toast. Documentation is your only recourse.
You can try using the Singleton pattern
Is there a way to force a class to be instantiated on the stack, or at least to prevent it from being global, in C++?
Not really. You could make the constructor private and create the object only through a factory method, but nothing would really prevent you from using that method to create a global variable.
If global variables were always initialized before the application enters main, then you could throw an exception from the constructor unless main had already set some flag. However, it is up to the implementation to decide when to initialize global variables, so they could be initialized after the application enters main. I.e. that would be relying on unspecified behavior, which isn't a good idea.
You could, in theory, attempt to walk the call stack and see from where the constructor is called. However, the compiler could inline the constructor or several functions, this will be non-portable, and walking the call stack in C++ is painful.
You could also manually check the this pointer and attempt to guess where it is located. However, this would be a non-portable hack specific to a particular compiler, OS and architecture.
So there is no good solution I can think of.
As a result, the best idea would be to change your program's behavior, as others have already suggested: make a singleton class that initializes your C API in its constructor, deinitializes it in its destructor, and request that class when necessary via a factory method. This will be the most elegant solution to your problem.
Alternatively, you could attempt to document program behavior.
To allocate a class on the stack, you simply say
FooClass foo; // NOTE: no parentheses, because 'FooClass foo();' would be parsed
              // as a function declaration. It's a famous gotcha (the most vexing parse).
To allocate it on the heap, you say
std::unique_ptr<FooClass> foo(new FooClass()); //or
FooClass* foop = new FooClass(); // less safe
Your object will only be global if you declare it at program scope.
I've been thinking about the possible uses of delete this in C++, and I've seen one use.
Because you can say delete this only when an object is on the heap, I can make the destructor private and stop objects from being created on the stack altogether. In the end I can just delete the object on the heap by saying delete this in a public member function that acts as a destructor. My questions:
1) Why would I want to force the object to be made on the heap instead of on the stack?
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
Any scheme that uses delete this is somewhat dangerous, since whoever called the function that does that is left with a dangling pointer. (Of course, that's also the case when you delete an object normally, but in that case, it's clear that the object has been deleted). Nevertheless, there are somewhat legitimate cases for wanting an object to manage its own lifetime.
It could be used to implement a nasty, intrusive reference-counting scheme. You would have functions to "acquire" a reference to the object, preventing it from being deleted, and to "release" it once you've finished, deleting it if no one else has acquired it, along the lines of:
class Nasty {
public:
Nasty() : references(1) {}
void acquire() {
++references;
}
void release() {
if (--references == 0) {
delete this;
}
}
private:
~Nasty() {}
size_t references;
};
// Usage
Nasty * nasty = new Nasty; // 1 reference
nasty->acquire(); // get a second reference
nasty->release(); // back to one
nasty->release(); // deleted
nasty->acquire(); // BOOM!
I would prefer to use std::shared_ptr for this purpose, since it's thread-safe, exception-safe, works for any type without needing any explicit support, and prevents access after deleting.
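For comparison, a minimal sketch of the non-intrusive equivalent with std::shared_ptr (assuming C++11):
#include <memory>

class Friendly {};   // no reference-counting code needed inside the class itself

void example()
{
    std::shared_ptr<Friendly> p = std::make_shared<Friendly>();  // 1 reference
    std::shared_ptr<Friendly> q = p;                             // 2 references ("acquire")
    q.reset();                                                   // back to 1 ("release")
    p.reset();                                                   // 0 references: object deleted
    // p and q are now empty; unlike calling acquire() on an already-deleted Nasty,
    // that state is detectable (operator bool) instead of being undefined behaviour.
}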
More usefully, it could be used in an event-driven system, where objects are created, and then manage themselves until they receive an event that tells them that they're no longer needed:
class Worker : EventReceiver {
public:
Worker() {
start_receiving_events(this);
}
virtual void on(WorkEvent) {
do_work();
}
virtual void on(DeleteEvent) {
stop_receiving_events(this);
delete this;
}
private:
~Worker() {}
void do_work();
};
1) Why would I want to force the object to be made on the heap instead of on the stack?
1) Because the object's lifetime is not logically tied to a scope (e.g., function body, etc.). Either because it must manage its own lifespan, or because it is inherently a shared object (and thus, its lifespan must be attached to those of its co-dependent objects). Some people here have pointed out some examples like event handlers, task objects (in a scheduler), and just general objects in a complex object hierarchy.
2) Because you want to control the exact location where code is executed for the allocation / deallocation and construction / destruction. The typical use-case here is that of cross-module code (spread across executables and DLLs (or .so files)). Because of issues of binary compatibility and separate heaps between modules, it is often a requirement that you strictly control in what module these allocation-construction operations happen. And that implies the use of heap-based objects only.
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
Well, your use-case is really just a "how-to" not a "why". Of course, if you are going to use a delete this; statement within a member function, then you must have controls in place to force all creations to occur with new (and in the same translation unit as the delete this; statement occurs). Not doing this would just be very very poor style and dangerous. But that doesn't address the "why" you would use this.
1) As others have pointed out, one legitimate use-case is where you have an object that can determine when its job is over and consequently destroy itself. For example, an event handler deleting itself when the event has been handled, a network communication object that deletes itself once the transaction it was appointed to do is over, or a task object in a scheduler deleting itself when the task is done. However, this leaves a big problem: signaling to the outside world that it no longer exists. That's why many have mentioned the "intrusive reference counting" scheme, which is one way to ensure that the object is only deleted when there are no more references to it. Another solution is to use a global (singleton-like) repository of "valid" objects, in which case any accesses to the object must go through a check in the repository and the object must also add/remove itself from the repository at the same time as it makes the new and delete this; calls (either as part of an overloaded new/delete, or alongside every new/delete calls).
However, there is a much simpler and less intrusive way to achieve the same behavior, albeit less economical. One can use a self-referencing shared_ptr scheme, like so:
class AutonomousObject {
private:
std::shared_ptr<AutonomousObject> m_shared_this;
protected:
AutonomousObject(/* some params */);
public:
virtual ~AutonomousObject() { };
template <typename... Args>
static std::weak_ptr<AutonomousObject> Create(Args&&... args) {
std::shared_ptr<AutonomousObject> result(new AutonomousObject(std::forward<Args>(args)...));
result->m_shared_this = result; // link the self-reference.
return result; // return a weak-pointer.
};
// this is the function called when the life-time should be terminated:
void OnTerminate() {
m_shared_this.reset(); // drop the self-reference; the object is destroyed once no other (locked) shared_ptr refers to it.
};
};
With the above (or some variations upon this crude example, depending on your needs), the object will be alive for as long as it deems necessary and that no-one else is using it. The weak-pointer mechanism serves as the proxy to query for the existence of the object, by possible outside users of the object. This scheme makes the object a bit heavier (has a shared-pointer in it) but it is easier and safer to implement. Of course, you have to make sure that the object eventually deletes itself, but that's a given in this kind of scenario.
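A usage sketch, relying on the class above (and assuming a suitable constructor):
void example()
{
    // The creator only ever holds a weak_ptr; it cannot keep the object alive on its own.
    std::weak_ptr<AutonomousObject> handle = AutonomousObject::Create(/* some params */);

    // To use the object, promote the weak_ptr and check whether it is still alive:
    if (std::shared_ptr<AutonomousObject> obj = handle.lock()) {
        // ... safe to use obj here; the object cannot self-destruct while this
        // locked shared_ptr exists ...
    }

    // When the object itself calls OnTerminate(), the self-reference is dropped and the
    // object is destroyed as soon as no locked shared_ptr is still outstanding.
}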
2) The second use-case I can think of ties in to the second motivation for restricting an object to be heap-only (see above), however, it applies also for when you don't restrict it as such. If you want to make sure that both the deallocation and the destruction are dispatched to the correct module (the module from which the object was allocated and constructed), you must use a dynamic dispatching method. And for that, the easiest is to just use a virtual function. However, a virtual destructor is not going to cut it because it only dispatches the destruction, not the deallocation. The solution is to use a virtual "destroy" function that calls delete this; on the object in question. Here is a simple scheme to achieve this:
struct CrossModuleDeleter; //forward-declare.
class CrossModuleObject {
private:
virtual void Destroy() /* final */;
public:
CrossModuleObject(/* some params */); //constructor can be public.
virtual ~CrossModuleObject() { }; //destructor can be public.
//.... whatever...
friend struct CrossModuleDeleter;
template <typename... Args>
static std::shared_ptr< CrossModuleObject > Create(Args&&... args);
};
struct CrossModuleDeleter {
void operator()(CrossModuleObject* p) const {
p->Destroy(); // do a virtual dispatch to reach the correct deallocator.
};
};
// In the cpp file:
// Note: This function should not be inlined, so stash it into a cpp file.
void CrossModuleObject::Destroy() {
delete this;
};
template <typename... Args>
std::shared_ptr< CrossModuleObject > CrossModuleObject::Create(Args&&... args) {
return std::shared_ptr< CrossModuleObject >( new CrossModuleObject(std::forward<Args>(args)...), CrossModuleDeleter() );
};
The above kind of scheme works well in practice, and it has the nice advantage that the class can act as a base class with no additional intrusion by this virtual-destroy mechanism in the derived classes. And you can also modify it to allow only heap-based objects (as usual, by making constructors/destructors private or protected). Without the heap-based restriction, the advantage is that you can still use the object as a local variable or data member (by value) if you want, but, of course, there will be loopholes left that whoever uses the class has to avoid.
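A usage sketch, relying on the class as defined above: the shared_ptr returned by Create carries the CrossModuleDeleter, so destruction is always dispatched back to the module that built the object.
int main()
{
    // Allocation and construction happen inside the module that defines Create().
    std::shared_ptr<CrossModuleObject> obj = CrossModuleObject::Create();

    // ... use obj through its public interface ...

    return 0;
    // When the last shared_ptr is released, CrossModuleDeleter::operator() runs and
    // virtually dispatches to Destroy(), so 'delete this' executes in the defining module.
}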
As far as I know, these are the only legitimate use-cases I have ever seen anywhere or heard of (and the first one is easily avoidable, as I have shown, and often should be).
The general reason is that the lifetime of the object is determined by some factor internal to the class, at least from an application viewpoint. Hence, it may very well be a private method which calls delete this;.
Obviously, when the object is the only one to know how long it's needed, you can't put it on a random thread stack. It's necessary to create such objects on the heap.
It's generally an exceptionally bad idea. There are a very few cases- for example, COM objects have enforced intrusive reference counting. You'd only ever do this with a very specific situational reason- never for a general-purpose class.
1) Why would I want to force the object to be made on the heap instead of on the stack?
Because its life span isn't determined by the scoping rule.
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
You use delete this when the object is the best-placed one to be responsible for its own life span. One of the simplest examples I know of is a window in a GUI. The window reacts to events, a subset of which mean that the window has to be closed and thus deleted. In the event handler the window does a delete this. (You may delegate the handling to a controller class, but the situation "window forwards event to controller class which decides to delete the window" isn't much different from delete this; the window event handler will be left with the window deleted either way. You may also need to decouple the close from the delete, but your rationale won't be related to the desirability of delete this.)
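A minimal sketch of that situation (the event name and the framework hook unregister_window are made up):
class Window;
void unregister_window(Window*);   // hypothetical framework hook

class Window
{
public:
    Window() { /* register with the framework, e.g. register_window(this) */ }

    void onCloseEvent()            // called by the framework when the user closes the window
    {
        unregister_window(this);   // stop receiving events first
        delete this;               // the window manages its own lifetime
        // no member access allowed past this point
    }

private:
    ~Window() {}                   // private dtor: no stack instances, no external 'delete'
};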
delete this;
can be useful at times and is usually used for a control class that also controls the lifetime of another object. With intrusive reference counting, the class it is controlling is one that derives from it.
The outcome of using such a class should be to make lifetime handling easier for users or creators of your class. If it doesn't achieve this, it is bad practice.
A legitimate example may be where you need a class to clean up all references to itself before it is destructed. In such a case, you "tell" the class whenever you are storing a reference to it (in your model, presumably) and then on exit, your class goes around nulling out these references or whatever before it calls delete this on itself.
This should all happen "behind the scenes" for users of your class.
"Why would I want to force the object to be made on the heap instead of on the stack?"
Generally, when you force that, it's not because you want to as such; it's because the class is part of some polymorphic hierarchy, and the only legitimate way to get one is from a factory function that returns an instance of a different derived class according to the parameters you pass it, or according to some configuration that it knows about. Then it's easy to arrange for the factory function to create them with new. There's no way that users of those classes could have them on the stack even if they wanted to, because they don't know in advance the derived type of the object they're using, only the base type.
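A sketch of that arrangement, using a hypothetical Shape hierarchy:
#include <memory>
#include <string>

class Shape {                                   // polymorphic base: users only ever see this type
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
private:
    double r_;
};

class Square : public Shape {
public:
    explicit Square(double s) : s_(s) {}
    double area() const override { return s_ * s_; }
private:
    double s_;
};

// The factory decides the derived type; callers cannot know it in advance, so they
// could not create these objects on the stack even if they wanted to.
std::unique_ptr<Shape> makeShape(const std::string& kind, double size)
{
    if (kind == "circle")
        return std::unique_ptr<Shape>(new Circle(size));
    return std::unique_ptr<Shape>(new Square(size));
}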
Once you have objects like that, you know that they're destroyed with delete, and you can consider managing their lifecycle in a way that ultimately ends in delete this. You'd only do this if the object is somehow capable of knowing when it's no longer needed, which usually would be (as Mike says) because it's part of some framework that doesn't manage object lifetime explicitly, but does tell its components that they've been detached/deregistered/whatever[*].
If I remember correctly, James Kanze is your man for this. I may have misremembered, but I think he occasionally mentions that in his designs delete this isn't just used but is common. Such designs avoid shared ownership and external lifecycle management, in favour of networks of entity objects managing their own lifecycles. And where necessary, deregistering themselves from anything that knows about them prior to destroying themselves. So if you have several "tools" in a "toolbelt" then you wouldn't construe that as the toolbelt "owning" references to each of the tools, you think of the tools putting themselves in and out of the belt.
[*] Otherwise you'd have your factory return a unique_ptr or auto_ptr to encourage callers to stuff the object straight into the memory management type of their choice, or you'd return a raw pointer but provide the same encouragement via documentation. All the stuff you're used to seeing.
A good rule of thumb is not to use delete this.
Simply put, the code that uses new should be responsible enough to use delete when done with the object. This also avoids the problem of whether the object is on the stack or the heap.
Once upon a time I was writing some plugin code. I believe I mixed builds (debug for the plugin, release for the main code, or maybe the other way around) because one part needed to be fast. Or maybe the situation was that the main code was already a release build on GCC and the plugin was being debugged/tested on VC. When the main code deleted something from the plugin, or the plugin deleted something from the main code, a memory issue would occur, because they both used different memory pools or malloc implementations. So I gave the class a private dtor and a virtual function called deleteThis().
-edit- Now I might consider overloading the delete operator, using a smart pointer, or simply stating that the user should never delete the object. It depends; in general, overloading new/delete should never be done unless you really know what you are doing (don't do it). I decided to use deleteThis() because I found it easier than the C-like way of thing_alloc and thing_free, and deleteThis() felt like the more OOP way of doing it.
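A sketch of that workaround, so both the new and the delete always run inside the module that compiled the class:
class PluginThing
{
public:
    static PluginThing* create() { return new PluginThing(); }  // allocation happens in this module

    virtual void deleteThis() { delete this; }  // deallocation happens here too, no matter
                                                // which module calls it
private:
    virtual ~PluginThing() {}                   // private dtor: callers cannot 'delete' directly
};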
I'm coming from C# and trying to translate some of my practices into C++. I've used dependency injection in various places throughout my code using raw pointers. Then I decided to replace the raw pointers with std::shared_ptrs. As part of that process it was suggested that I consider using stack-allocated automatic variables rather than dynamically allocating them (see this question, although that question was in the context of unique_ptr, so maybe that is different).
I believe the below example shows the use of automatic variables.
class MyClass
{
public:
MyClass(ApplicationService& app): appService_(app)
{
}
~MyClass()
{
appService_.Destroy(something);
}
private:
ApplicationService& appService_;
};
class ConsumerClass
{
void DoSomething()
{
CustomApplicationService customAppService;
MyClass myclass(customAppService);
myclass...
}
};
In the above example, when customAppService and myclass go out of scope, how do I know which will be destroyed first? If customAppService is destroyed first, then the MyClass destructor will fail. Is this a good reason to use shared_ptr instead in this scenario, or is there a clean way around this?
UPDATE
ApplicationService is a class that is a wrapper around global functions needed to interact with a 3rd-party library that my code uses. I have this class because I believe it's the standard way to support unit testing and stubbing/mocking of free-standing functions. This class simply delegates calls to the corresponding global functions. The call appService_.Destroy(something); is actually destroying an object used by each specific instance of MyClass, not destroying anything to do with the ApplicationService class itself.
The answer is: you don't need to know, as your design is broken, anyway.
First, a Destroy method sounds like a bad idea, even more so when called from an object that is not responsible for the destruction of the other object. The code from the Destroy method belongs in ApplicationService's destructor (which is hopefully virtual, although in this case it doesn't actually need to be), which, in contrast to C#, gets called at a perfectly determined point in time.
Once you've done this, you will (hopefully) realize that it is not the responsibility of MyClass to destroy the appService_, as it does not own it. It is the responsibility of the ConsumerClass (or rather the DoSomething method), which really manages the actual service and which does actually destroy it automatically once you've moved Destroy's code into the destructor. Isn't it nice how RAII makes everything happen in a clean and automatic way?
class MyClass
{
public:
MyClass(ApplicationService& app): appService_(app)
{
}
private:
ApplicationService& appService_;
};
class ConsumerClass
{
void DoSomething()
{
CustomApplicationService customAppService;
MyClass myclass(customAppService);
myclass...
}
};
class ApplicationService
{
public:
virtual ~ApplicationService()
{
//code from former Destroy method
}
};
class CustomApplicationService : public ApplicationService
{
public:
virtual ~CustomApplicationService()
{
//code from former Destroy method
}
};
This is IMHO the perfect clean C++ way around it and the problem is definitely not a reason to spam shared_ptrs. Even if you really need a dedicated Destroy method and cannot move the code into the destructor (which I would take as a motivation for overthinking the design), then you would still call Destroy from DoSomething as again, MyClass is not responsible for destroying the appService_.
EDIT: According to your update (and my stupid overlooking of the something argument), your design seems indeed quite correct (at least if you cannot mess with changing the ApplicationService), sorry.
Although class members should get destroyed in reverse order of construction, I'm not sure this also holds for local automatic variables. What you could do to make sure the destructors get called in a defined order is to introduce nested scopes using simple blocks:
void DoSomething()
{
CustomApplicationService customAppService;
{
MyClass myclass(customAppService);
myclass...
} // myclass destroyed
} // customAppService destroyed
Of course there is still absolutely no need to use dynamic allocation, let alone shared_ptrs. Although the nested blocks bloat the code a bit, that is nothing against the ugliness of dynamic allocation applied in a non-dynamic way and without reason, and it at least "looks nice in a semantic way", with customAppService's declaration at the top of its block ;)
In C++, objects, in general, are destroyed in the order that is exact opposite of the order they were created in.
Based on your example, MyClass will be destroyed before CustomApplicationService
The exception is when a destructor is called explicitly. However, I don't think you should concern yourself with this exception at this stage.
Another subtlety is called static initialization order fiasco. However, this does not apply to automatic (stack) variables.
Edit:
From C++ 2003 (searched for 'reverse order'):
6.6.0.2
On exit from a scope (however accomplished), destructors (12.4) are called for all constructed objects with automatic storage duration (3.7.2) (named objects or temporaries) that are declared in that scope, in the reverse order of their declaration. ... [Note: However, the program can be terminated (by calling exit() or abort() (18.3), for example) without destroying class objects with automatic storage duration. ]
I have an object, a scheduler class. This scheduler class is given member function pointers, times, and a pointer to the object which created the scheduler.
This means I can do something such as: (pObject->*h.function)(*h.param); where pObject is the pointer to the original object and h is a class which contains the function plus a void pointer parameter, so I can pass arguments to the original function.
When I want to initialize this object I have the explicit Scheduler(pObjType o); constructor (where pObjType is a template parameter).
When I create an object which should have this alarm I type:
struct A {
typedef void (A::*A_FN2)(void*);
typedef Scheduler<A*,A_FN2> AlarmType;
A(int _x) : alarm(NULL)
{
alarm.SetObject(this);
}
AlarmType alarm;
};
However, this alarm type puts quite a big limitation on the object: if I forget to add a copy constructor (to A), the class exhibits undefined behaviour. The scheduler would keep pointing to the original object, and that original object might go out of scope, or even worse, might not.
Is there a way so that, when I copy my alarm (by default, i.e. in the scheduler's copy constructor), I can get the copying object (and a pointer to it)?
Or, if that isn't possible, is it possible to raise a (compile-time) error if I forget to implement a copy constructor for my structure and try to copy it somewhere?
As I see it, you have an opportunity to improve your design here, that may help you get rid of your worry.
It is usually a bad idea to pass around member function pointers. It is better to make your structs inherit from an abstract base class, making the functions you want to customize pure virtual (a sketch follows below).
If you don't need copying, it is best to disallow it in the base class, either by declaring the copy constructor and copy assignment operator private and leaving them undefined, or by inheriting from boost::noncopyable.
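A minimal sketch of that advice (hypothetical names; the scheduler would then store base-class pointers instead of member function pointers):
#include <boost/noncopyable.hpp>   // or hand-roll the private/deleted copy operations

// Abstract base: the scheduler only ever deals with Schedulable*.
class Schedulable : private boost::noncopyable
{
public:
    virtual ~Schedulable() {}
    virtual void onAlarm(void* param) = 0;   // customize by overriding, not via function pointer
};

class A : public Schedulable
{
public:
    virtual void onAlarm(void* /*param*/) { /* react to the alarm */ }
};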
If you want any kind of automatic copy-construction semantics, then you're going to need the CRTP; no other pattern provides a pointer to the owning object.
The other thing is that you should really use a boost:: or std::function<>; they're far more generic, and you're going to need that if you want to be able to use Lua functions.
The simplest way to prevent the specific issue you're asking about is to make Scheduler noncopyable (e.g. with boost::noncopyable). This means that any client class incorporating a value member of type Scheduler will fail to be copyable. The hope is that this provides a hint for the programmer to check the docs and figure out the copy semantics of Scheduler (i.e. construct a new Scheduler for every new A), but it's possible for someone to get this wrong if they work around the problem by just holding the Scheduler by pointer. Aliasing the pointer gives exactly the same problem as default-copy-constructing the Scheduler instance that holds a pointer.
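A sketch of how that propagates, assuming Scheduler is roughly the two-parameter template from the question and using C++11 deleted copies in place of boost::noncopyable:
template <typename ObjPtr, typename FnPtr>
class Scheduler
{
public:
    explicit Scheduler(ObjPtr obj) : obj_(obj), fn_(nullptr) {}
    Scheduler(const Scheduler&) = delete;             // Scheduler itself is non-copyable...
    Scheduler& operator=(const Scheduler&) = delete;
private:
    ObjPtr obj_;
    FnPtr fn_;
};

struct A
{
    typedef void (A::*A_FN2)(void*);
    A() : alarm(this) {}
    Scheduler<A*, A_FN2> alarm;
};

// A a1;
// A a2 = a1;   // ...so this fails to compile: A's implicit copy constructor is deleted,
//              // forcing the author of A to decide on the copy semantics explicitly.
Holding the Scheduler by pointer instead would bypass this check, which is the aliasing caveat mentioned above.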
Any time you have raw pointers you have to have a policy on object lifetime. You want to ensure that the lifetime of any class A is at least as long as the corresponding instance of Scheduler, and as I see it there are three ways to ensure this:
Use composition - not possible in this case because A contains a Scheduler, so Scheduler can't contain an A
Use inheritance, e.g. in the form of the Curiously Recurring Template Pattern (CRTP)
Have a global policy on enforcing lifetimes of A instances, e.g. requiring that they are always held by smart pointer, or that cleaning them up is the responsibility of some class that also knows to clean up the Schedulers that depend on them
The CRTP could work like this:
#include <iostream>
using namespace std;
template<typename T>
struct Scheduler {
typedef void (T::* MemFuncPtr)(void);
Scheduler(MemFuncPtr action) :
action(action)
{
}
private:
void doAction()
{
(this->*action)();
}
MemFuncPtr action;
};
struct Alarm : private Scheduler<Alarm> {
Alarm() : Scheduler<Alarm>(&Alarm::doStuff)
{
}
void doStuff()
{
cout << "Doing stuff" << endl;
}
};
Note that private inheritance ensures that clients of the Alarm class can't treat it as a raw Scheduler.