RAII state management - c++

I need to change a state. Then do stuff. Then reset the state back to what it was - e.g:
auto oldActivationOrder = mdiArea->activationOrder();
mdiArea->setActivationOrder( QMdiArea::StackingOrder );
mdiArea->cascadeSubWindows();
mdiArea->setActivationOrder( oldActivationOrder );
How do I do this in a RAII way?
(c++ 11 and/or 14)
Edit: Thanks for all the answers.
There are several suggestions to create a custom class for handling the state change (BoBTFish, mindriot, Mattias Johansson). This solution seems good and clear. However I think it is a drawback that it increases the line count from 4 to 20+. If used a lot this would bloat the code. Also it seems that some locality is lost by having a separate class.
Ami Tavory suggests using std::unique_ptr. This does not have the code bloat issue and maintains locality. However, as Ami also indicates, it may not be the most readable solution.
sp2danny suggests a generalized state-change class that can be reused. This avoids code bloat provided that it can replace several custom classes. I'm going to accept this answer - but I guess the right approach really depends on the context.

RAII: Resource Acquisition Is Initialisation.
Which also implies that Resource Release Is Destruction, although I've never seen people talk about RRID, even though that's the more useful side of it. (Perhaps that should be Termination, or Finalisation?)
The point is, you do some work in the constructor of an object, and effectively reverse it in the destructor. This means that the cleanup is carried out no matter how you exit the scope: multiple returns, multiple breaks, throw an exception, ... (even goto!)
class ScopedActivationOrderChange {
QMdiArea& area_; // the object to operate on
QMdiArea::WindowOrder oldOrder_; // save the old state
public:
ScopedActivationOrderChange(QMdiArea& area, QMdiArea::WindowOrder newOrder)
: area_(area)
, oldOrder_(area_.activationOrder()) // save old state
{
area_.setActivationOrder(newOrder); // set new state
}
~ScopedActivationOrderChange()
{
area_.setActivationOrder(oldOrder_); // reset to old state
}
};
// ...
{ // <-- new scope, just to establish lifetime of the change
ScopedActivationOrderChange orderChange{*mdiArea, QMdiArea::StackingOrder};
mdiArea->cascadeSubWindows();
} // <-- end of scope, change is reversed
The Standard Library doesn't provide any general facility for this. It does provide some for more specific uses, such as std::unique_ptr for deleting dynamically allocated objects, which can in some cases be used for other things, though it's a bit ugly. std::vector can be seen as a RAII class for dynamic arrays, providing some other management facilities also, but this one is less easily abused for other purposes.

Perhaps the most succinct way (albeit possibly not the most readable) of implementing the scoped guard pattern is to use a std::unique_ptr with a custom deleter:
#include <memory>
#include <utility>
int main()
{
void *p = nullptr, *q = nullptr;
auto reverser = [&p, &q](char *){std::swap(p, q);};
/* This guard doesn't really release memory -
it just calls the lambda at exit. Note that the deleter only
runs for a non-null pointer, hence the dummy address. */
char dummy;
auto guard = std::unique_ptr<char, decltype(reverser)>{&dummy, reverser};
std::swap(p, q);
}

You can do it like this:
class SetActivationOrder
{
public:
SetActivationOrder(QMdiArea *mdiArea, QMdiArea::WindowOrder order)
: m_mdiArea(mdiArea),
m_oldActivationOrder(mdiArea->activationOrder())
{
m_mdiArea->setActivationOrder(order);
}
~SetActivationOrder()
{
m_mdiArea->setActivationOrder(m_oldActivationOrder);
}
private:
QMdiArea *m_mdiArea;
QMdiArea::WindowOrder m_oldActivationOrder;
};
And then use it like this:
{
// This sets the order:
SetActivationOrder sao(mdiArea, QMdiArea::StackingOrder);
mdiArea->cascadeSubWindows();
// Destructor is called at end of scope and sets the old order
}

With RAII (Resource Acquisition Is Initialization) you create an instance of a storage class in the local scope (i.e. on the stack). You pass the state you want to store into the constructor of the storage object and make sure that its destructor restores the state again. Because C++ guarantees that the destructor of an object in local scope is called automatically when the object goes out of scope, even if an exception is thrown, you don't have to worry about remembering to restore the state.
I would write the class like this:
class ActivationOrderState
{
public:
ActivationOrderState(QMdiArea& area)
: m_area(area)
{
// Get the old value
m_oldOrder = area.activationOrder();
}
~ActivationOrderState()
{
// Restore the old value
m_area.setActivationOrder( m_oldOrder );
}
private:
QMdiArea& m_area;
QMdiArea::WindowOrder m_oldOrder;
};
This object is then used like this
{
ActivationOrderState state(*mdiArea); // saves the state
mdiArea->setActivationOrder( QMdiArea::StackingOrder ); // set the new state
// do other things here...
} // end of scope, destructor is called and state is restored again
To be sure that no other user misuses this code by allocating it on the free store/heap instead of in local scope, you can delete operator new:
class ActivationOrderState
{
public:
ActivationOrderState(QMdiArea& area)
: m_area(area)
{
// Get the old value
m_oldOrder = area.activationOrder();
}
~ActivationOrderState()
{
// Restore the old value
m_area.setActivationOrder( m_oldOrder );
}
// Remove the possibility to create this object on the free store.
template<typename... Args> void* operator new(std::size_t,Args...) = delete;
private:
QMdiArea& m_area;
QMdiArea::WindowOrder m_oldOrder;
};
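For illustration, with operator new deleted the guard can only be given automatic storage; trying to heap-allocate it is rejected at compile time (a small sketch using the class above):
void example(QMdiArea& mdiArea)
{
    ActivationOrderState state(mdiArea);            // fine: automatic storage
    // auto* p = new ActivationOrderState(mdiArea); // error: operator new is deleted
}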
See also
Using RAII to raise thread priority temporarily

You can do a generic template:
template< typename Obj, typename Getter, typename Setter , typename StateType >
class ScopedStateChangeType
{
public:
ScopedStateChangeType( Obj& o, Getter g, Setter s, const StateType& state )
: o(o), s(s)
{
oldstate = (o.*g)();
(o.*s)(state);
}
Obj* operator -> () { return &o; }
~ScopedStateChangeType()
{
(o.*s)(oldstate);
}
private:
Obj& o;
Setter s;
StateType oldstate;
};
template< typename Obj, typename Getter, typename Setter , typename StateType >
auto MakeScopedStateChanger( Obj& o, Getter g, Setter s, StateType state )
-> ScopedStateChangeType<Obj,Getter,Setter,StateType>
{
return { o, g, s, state };
}
use it like:
QMdiArea mdiArea;
{
auto ref = MakeScopedStateChanger(
mdiArea, &QMdiArea::activationOrder, &QMdiArea::setActivationOrder,
QMdiArea::StackingOrder );
ref->cascadeSubWindows();
}
maybe it's worth it if you use this pattern often

Related

Is it better to declare the static singleton object outside of the static instance getter method? [duplicate]

In this thread, the following is said about singleton instances:
The static variable can be static to the GetInstance() function, or it can be static in the Singleton class. There's interesting tradeoffs there.
What are these trade-offs? I am aware that, if declared as a static function variable, the singleton won't be constructed until the function is first called. I've also read something about thread-safety, but am unaware of what exactly that entails, or how the two approaches differ in that regard.
Are there any other major differences between the two? Which approach is better?
In my concrete example, I have a factory class set up as a singleton, and I'm storing the instance as a static const field in the class. I don't have a getInstance() method, but rather expect the user to access the instance directly, like so: ItemFactory::factory. The default constructor is private, and the instance is allocated statically.
Addendum: how good of an idea is it to overload operator() to call the createItem() method for the singleton, such that Items can be created like so: ItemFactory::factory("id")?
What are these trade-offs?
This is the most important consideration:
The static data member is initialized during the static initialization at the start of the program. If any static object depends on the singleton, then there will be a static initialization order fiasco.
The function local static object is initialized when the function is first called. Since whoever depends on the singleton will call the function, the singleton will be appropriately initialized and is not susceptible to the fiasco. There is still a - very subtle - problem with the destruction. If a destructor of a static object depends on the singleton, but the constructor of that object does not, then you'll end up with undefined behaviour.
Also, being initialized on the first time the function is called, means that the function may be called after the static initialization is done and main has been called. And therefore, the program may have spawned multiple threads. There could be a race condition on the initialization of the static local, resulting in multiple instances being constructed. Luckily, since C++11, the standard guarantees that the initialization is thread safe and this tradeoff no longer exists in conforming compilers.
Thread safety is not an issue with the static data member.
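To make the two options concrete, here is a minimal sketch (the class names are purely illustrative):
#include <iostream>

// Function-local static: constructed on first use; since C++11 the
// initialization is thread safe and immune to the initialization-order fiasco.
class LazySingleton {
public:
    static LazySingleton& instance() {
        static LazySingleton s;   // created the first time instance() runs
        return s;
    }
    void hello() { std::cout << "lazy\n"; }
private:
    LazySingleton() = default;
};

// Static data member: constructed during static initialization, before main();
// another static object that uses it from its own constructor may observe it
// before it has been initialized (the fiasco described above).
class EagerSingleton {
public:
    static EagerSingleton& instance() { return s_instance; }
    void hello() { std::cout << "eager\n"; }
private:
    EagerSingleton() = default;
    static EagerSingleton s_instance;
};
EagerSingleton EagerSingleton::s_instance; // defined in exactly one translation unit

int main() {
    LazySingleton::instance().hello();
    EagerSingleton::instance().hello();
}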
Which approach is better?
That depends on what your requirements are and what version of the standard you support.
I vote for the static function variable. The newer C++ standards require automatic thread safety for the initialization of such variables. This has been implemented in GNU C++ for about ten years already, and Visual Studio 2015 also supports it. If you make a static pointer variable holding a reference to your singleton object, you'll have to deal with threading issues manually.
On the other hand, if you make a static member pointer field as shown in the snippet below, you will be able to change it from other static methods - maybe re-initializing the field with another instance when handling a request to change the program configuration. However, the snippet below contains a bug, just to remind you how difficult multithreading is.
#include <atomic>
#include <memory>
class ItemFactory {
static std::atomic_flag initialized;
static std::unique_ptr<ItemFactory> theFactoryInstance;
public:
static ItemFactory& getInstance() {
if (!initialized.test_and_set(std::memory_order_acquire)) {
theFactoryInstance = std::make_unique<ItemFactory>();
}
// Intentional bug: another thread can reach this line before the
// first thread has finished constructing the instance.
return *theFactoryInstance;
}
};
// Out-of-class definitions required for the static members:
std::atomic_flag ItemFactory::initialized = ATOMIC_FLAG_INIT;
std::unique_ptr<ItemFactory> ItemFactory::theFactoryInstance;
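For comparison, here is a sketch of the same getter with the race closed by std::call_once, which blocks other callers until the one-time initialization has completed:
#include <memory>
#include <mutex>

class ItemFactory {
    static std::once_flag initFlag;
    static std::unique_ptr<ItemFactory> theFactoryInstance;
public:
    static ItemFactory& getInstance() {
        // call_once guarantees the lambda has finished before any caller
        // proceeds, so no thread can see a half-initialized instance.
        std::call_once(initFlag, [] {
            theFactoryInstance = std::make_unique<ItemFactory>();
        });
        return *theFactoryInstance;
    }
};

std::once_flag ItemFactory::initFlag;
std::unique_ptr<ItemFactory> ItemFactory::theFactoryInstance;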
I wouldn't advise you to implement your singleton as a global non-pointer variable initialized before entry to the main() function. Thread safety issues will go away along with implicit cache coherency overhead, but you're not able to control the initialization order of your global variables in any precise or portable way.
Anyway, this choice doesn't force any permanent design implications. Since this instance will reside in the private section of your class you may always change it.
I don't think overloading operator() for a factory is a good idea. operator() has "execute" semantics, while for a factory it would stand for "create".
What is the best approach to a singleton in c++?
Hide the fact that it's a singleton and give it value semantics.
How?
All singleton-ness ought to be an implementation detail. In this way, consumers of your class need not refactor their programs if you need to change the way you implement your singleton (or indeed if you decide that it should not really be a singleton after all).
Why?
Because now your program never has to worry itself with references, pointers, lifetimes and whatnot. It just uses an instance of the object as if it were a value. Safe in the knowledge that the singleton will take care of whatever lifetime/resource requirements it has.
What about a singleton that releases resources when not in use?
No problem.
Here's an example of the two approaches hidden behind the facade of an object with value semantics.
imagine this use case:
auto j1 = jobbie();
auto j2 = jobbie();
auto j3 = jobbie();
j1.log("doh");
j2.log("ray");
j3.log("me");
{
shared_file f;
f.log("hello");
}
{
shared_file().log("goodbye");
}
shared_file().log("here's another");
shared_file f2;
{
shared_file().log("no need to reopen");
shared_file().log("or here");
shared_file().log("or even here");
}
f2.log("all done");
where a jobbie object is just a facade for a singleton, but the shared_file object wants to flush/close itself when not in use.
so the output should look like this:
doh
ray
me
opening file
logging to file: hello
closing file
opening file
logging to file: goodbye
closing file
opening file
logging to file: here's another
closing file
opening file
logging to file: no need to reopen
logging to file: or here
logging to file: or even here
logging to file: all done
closing file
We can achieve this using an idiom which I'll call 'value-semantics-is-a-facade-for-singleton':
#include <condition_variable>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
// interface
struct jobbie
{
void log(const std::string& s);
private:
// if we decide to make jobbie less singleton-like in future
// then as far as the interface is concerned the only change is here
// and since these items are private, it won't matter to consumers of the class
struct impl;
static impl& get();
};
// implementation
struct jobbie::impl
{
void log(const std::string& s) {
std::cout << s << std::endl;
}
};
auto jobbie::get() -> impl& {
//
// NOTE
// now you can change the singleton storage strategy simply by changing this code
// alternative 1:
static impl _;
return _;
// for example, we could use a weak_ptr which we lock and store the shared_ptr in the outer
// jobbie class. This would give us a shared singleton which releases resources when not in use
}
// implement non-singleton interface
void jobbie::log(const std::string& s)
{
get().log(s);
}
struct shared_file
{
shared_file();
void log(const std::string& s);
private:
struct impl;
static std::shared_ptr<impl> get();
std::shared_ptr<impl> _impl;
};
// private implementation
struct shared_file::impl {
// in a multithreaded program
// we require a condition variable to ensure that the shared resource is closed
// when we try to re-open it (race condition)
struct statics {
std::mutex m;
std::condition_variable cv;
bool still_open = false;
std::weak_ptr<impl> cache;
};
static statics& get_statics() {
static statics _;
return _;
}
impl() {
std::cout << "opening file\n";
}
~impl() {
std::cout << "closing file\n";
// close file here
// and now that it's closed, we can signal the singleton state that it can be
// reopened
auto& stats = get_statics();
// we *must* use a lock otherwise the compiler may re-order memory access
// across the memory fence
auto lock = std::unique_lock<std::mutex>(stats.m);
stats.still_open = false;
lock.unlock();
stats.cv.notify_one();
}
void log(const std::string& s) {
std::cout << "logging to file: " << s << std::endl;
}
};
auto shared_file::get() -> std::shared_ptr<impl>
{
auto& statics = impl::get_statics();
auto lock = std::unique_lock<std::mutex>(statics.m);
std::shared_ptr<impl> candidate;
statics.cv.wait(lock, [&statics, &candidate] {
return bool(candidate = statics.cache.lock())
or not statics.still_open;
});
if (candidate)
return candidate;
statics.cache = candidate = std::make_shared<impl>();
statics.still_open = true;
return candidate;
}
// interface implementation
shared_file::shared_file() : _impl(get()) {}
void shared_file::log(const std::string& s) { _impl->log(s); }
// test our class
auto main() -> int
{
using namespace std;
auto j1 = jobbie();
auto j2 = jobbie();
auto j3 = jobbie();
j1.log("doh");
j2.log("ray");
j3.log("me");
{
shared_file f;
f.log("hello");
}
{
shared_file().log("goodbye");
}
shared_file().log("here's another");
shared_file f2;
{
shared_file().log("no need to reopen");
shared_file().log("or here");
shared_file().log("or even here");
}
f2.log("all done");
return 0;
}

Remove related object from list C++

I have some code:
class LowLevelObject {
public:
void* variable;
};
// internal, can't get access, erase, push. just exists somewhere
std::list<LowLevelObject*> low_level_objects_list;
class HighLevelObject {
public:
LowLevelObject* low_level_object;
};
// my list of objects
std::list<HighLevelObject*> high_level_objects_list;
// some callback which notifies that LowLevelObject* added to low_level_objects_list.
void CallbackAttachLowLevelObject(LowLevelObject* low_level_object) {
HighLevelObject* high_level_object = new HighLevelObject;
high_level_object->low_level_object = low_level_object;
low_level_object->variable = high_level_object;
high_level_objects_list.push_back(high_level_object);
}
void CallbackDetachLowLevelObject(LowLevelObject* low_level_object) {
// how to delete my HighLevelObject* from high_level_objects_list?
// HighLevelObject* address in field `variable` of LowLevelObject.
}
I have a low-level object which is defined in a library; it contains the field variable for use by the user.
I set this variable to a pointer to my HighLevelObject from my code.
I can set callbacks on adding and removing a LowLevelObject from the list in the library.
But how can I remove my HighLevelObject from my list of objects?
Of course, I know that I can iterate the whole list, find the object by its pointer and remove it, but that is the long way round.
The list may contain a lot of objects.
Thanks in advance!
The setup lends itself to finding a solution where converting a pointer to an iterator is a constant-time operation. Boost.Intrusive offers this feature. This will require changes to your code though; if you were not careful about encapsulation, these changes might be significant. A boost::intrusive::list is functionally similar to a std::list, but requires some changes to your data structure. This option might not be for everyone.
Another feature of Boost.Intrusive is that sometimes you do not need to explicitly convert a pointer to an iterator. If you enable auto-unlinking, then the actual deletion from the list happens behind the scenes in a destructor. This is not a good option if you need to get the size of your list in constant time, though. (Nothing in the question indicates that getting the size of the list is needed, so I'll go ahead with this approach.)
If you had a container of objects, I might let you work through the documentation for the intrusive list. However, your use of pointers makes the conversion potentially confusing, so I'll walk through the setup. The setup begins with the following.
#include <boost/intrusive/list.hpp>
// Shorten the needed boost namespace.
namespace bi = boost::intrusive;
Since the list of high-level objects contains pointers, an auxiliary structure is needed. We need what amounts to a pointer that derives from a class provided by Boost. (I will proceed assuming that the objects created in CallbackAttachLowLevelObject() must be destroyed in CallbackDetachLowLevelObject(). Hence, I've changed the raw pointer to a smart pointer.)
#include <memory>
#include <utility>
// The auxiliary structure that will be stored in the high level list:
// The hook supplies the intrusive infrastructure.
// The link_mode enables auto-unlinking.
class ListEntry : public bi::list_base_hook< bi::link_mode<bi::auto_unlink> >
{
public:
// The expected way to construct this.
explicit ListEntry(std::unique_ptr<HighLevelObject> && p) : ptr(std::move(p)) {}
// Another option would be to forward parameters for constructing HighLevelObject,
// and have the constructor call make_unique. I'll leave that as an exercise.
// Make this class look like a pointer to HighLevelObject.
const std::unique_ptr<HighLevelObject> & operator->() const { return ptr; }
HighLevelObject& operator*() const { return *ptr; }
private:
std::unique_ptr<HighLevelObject> ptr;
};
The definition of the list becomes the following. We need to specify non-constant time size() to allow auto-unlinking.
bi::list<ListEntry, bi::constant_time_size<false>> high_level_objects_list;
These changes require some changes to the "attach" callback. I'll present them before going on to the "detach" callback.
// Callback that notifies when LowLevelObject* is added to low_level_objects_list.
void CallbackAttachLowLevelObject(LowLevelObject* low_level_object) {
// Dynamically allocate the entry, in addition to allocating the high level object.
ListEntry * entry = new ListEntry(std::make_unique<HighLevelObject>());
(*entry)->low_level_object = low_level_object; // Double indirection needed here.
low_level_object->variable = entry;
high_level_objects_list.push_back(*entry); // Intentional indirection here!
}
With this prep work, the cleanup is in your destructors, as is appropriate for RAII. Your "detach" just has to initiate the process. One line suffices.
void CallbackDetachLowLevelObject(LowLevelObject* low_level_object) {
delete static_cast<ListEntry *>(low_level_object->variable);
}
There (appropriately) is not enough context in the question to explain why the high level list is of pointers instead of being of objects. One potential reason is that the high-level object is polymorphic, and the use of pointers avoids slicing. If this is the case (or if there is not a good reason for using pointers), an intrusive list could be designed with less impact on existing code. The caveat here is that changes to HighLevelObject are required.
The initial setup is the same as before.
#include <boost/intrusive/list.hpp>
// Shorten the needed boost namespace.
namespace bi = boost::intrusive;
Next, have HighLevelObject derive from the hook.
class HighLevelObject : public bi::list_base_hook< bi::link_mode<bi::auto_unlink> > {
public:
LowLevelObject* low_level_object;
};
In this situation, the list is of HighLevelObjects, not of pointers, nor of pointer stand-ins.
bi::list<HighLevelObject, bi::constant_time_size<false>> high_level_objects_list;
The "attach" callback reverts to almost what is in the question. The one change to this function is that the object itself is pushed into the list, not a pointer. This is why slicing is not a problem; it's not a copy that is added to the list, but the object itself.
high_level_objects_list.push_back(*high_level_object); // Intentional indirection!
The rest of your code might work as-is. We just need the "detach" callback, which again is a one-liner.
void CallbackDetachLowLevelObject(LowLevelObject* low_level_object) {
delete static_cast<HighLevelObject *>(low_level_object->variable);
}
This answer is for those who do not want to use – or cannot use – Boost.Intrusive.
As long as modifying HighLevelObject is an option, the object could be told how to remove itself from the list. Add a callback to HighLevelObject and invoke it in its destructor.
#include <functional>
#include <iterator>
#include <utility>
class HighLevelObject {
public:
LowLevelObject* low_level_object;
// ****** The above is from the question. The below is new. ******
// Have the destructor invoke the callback.
~HighLevelObject() { if ( on_delete ) on_delete(); }
// Provide a way to set the callback.
void set_deleter(std::function<void()> && deleter)
{ on_delete = std::move(deleter); }
private:
// Storage for the callback:
std::function<void()> on_delete;
};
Set the callback when an object is added to the high level list.
Caution: This setup supports only one callback. Don't overwrite the callback somewhere else in your code!
Caution: Additional precautions are needed if multiple threads might add elements to high_level_objects_list.
// Callback that notifies when LowLevelObject* is added to low_level_objects_list.
void CallbackAttachLowLevelObject(LowLevelObject* low_level_object) {
HighLevelObject* high_level_object = new HighLevelObject;
high_level_object->low_level_object = low_level_object;
low_level_object->variable = high_level_object;
high_level_objects_list.push_back(high_level_object);
// ****** The above is from the question. The below is new. ******
// Arrange cleanup.
auto iter = std::prev(high_level_objects_list.end()); // iterator to the element just added; not thread-safe
high_level_object->set_deleter([iter]() { high_level_objects_list.erase(iter); });
}
With this prep work, the cleanup is in your destructor, as is appropriate for RAII. Your "detach" just has to initiate the process. One line suffices.
void CallbackDetachLowLevelObject(LowLevelObject* low_level_object) {
delete static_cast<HighLevelObject *>(low_level_object->variable);
}
I was thinking of storing an iterator (specifically, iter in the above) in HighLevelObject and having the destructor use that to call erase() instead of going through a lambda. However, I ran into trouble with the declarations, since members of std::list cannot be instantiated with an incomplete element type. It could be done with type erasure, but at that point I preferred using a function object.

Conditionally create an object in c++

I am writing a program that has the option to visualize the output of an algorithm I am working on - this is done by changing a const bool VISUALIZE_OUTPUT variable defined in a header file. In the main file, I want to have this kind of pattern:
if(VISUALIZE_OUTPUT) {
VisualizerObject vis_object;
}
...
if(VISUALIZE_OUTPUT) {
vis_object.initscene(objects_here);
}
...
if(VISUALIZE_OUTPUT) {
vis_object.drawScene(objects_here);
}
However, this clearly won't compile since vis_object goes out of scope. I don't want to declare the object before the condition since it is a big object, and it needs to be available at multiple points in the code (I can't just have one conditional statement where everything is done).
What is the preferred way of doing this?
Declare the object on the heap and refer to it by using a pointer (or
unique_ptr)?
Declare the object on the heap and make a reference to it
since it won't ever change?
Some other alternative?
A reference will not be usable here, because at its declaration it must refer to an already existing object and live in a scope enclosing all of your if(VISUALIZE_OUTPUT) blocks. Long story short, the object would have to be created unconditionally.
So IMHO a simple way would be to create it on the heap and use it through a pointer - do not forget to delete it when done. The good point is that the pointer can be initialized to nullptr, so it can be unconditionally deleted.
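A quick sketch of that variant, reusing the names from the question (VisualizerObject, VISUALIZE_OUTPUT, objects_here) and a std::unique_ptr so the delete cannot be forgotten:
#include <memory>

std::unique_ptr<VisualizerObject> vis_object;       // starts out as nullptr
if (VISUALIZE_OUTPUT) {
    vis_object = std::make_unique<VisualizerObject>();
}
// ...
if (VISUALIZE_OUTPUT) {
    vis_object->initscene(objects_here);
}
// ...
if (VISUALIZE_OUTPUT) {
    vis_object->drawScene(objects_here);
}
// destroying or resetting a null unique_ptr is a no-op, so no special case is needed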
But I think that the best way would be to encapsulate everything in an object created in the highest scope. This object would then contain methods to create, use internally and finally destroy the actual vis_object. That way, if you do not need it, nothing is actually instantiated, and the main procedure is not cluttered with raw pointer handling.
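A minimal sketch of that encapsulation idea; VisualizerFacade is an illustrative name, and the stub VisualizerObject merely stands in for the real, expensive class:
#include <iostream>
#include <memory>

struct Objects {};                         // stand-in for the question's objects
struct VisualizerObject {                  // stand-in for the real, heavy class
    void initscene(const Objects&) { std::cout << "init scene\n"; }
    void drawScene(const Objects&) { std::cout << "draw scene\n"; }
};

// The facade owns the (possibly absent) visualizer and hides the pointer handling.
class VisualizerFacade {
public:
    explicit VisualizerFacade(bool enabled) : enabled_(enabled) {}
    void initScene(const Objects& objs) {
        if (!enabled_) return;
        if (!vis_) vis_ = std::make_unique<VisualizerObject>();  // created on demand
        vis_->initscene(objs);
    }
    void drawScene(const Objects& objs) {
        if (enabled_ && vis_) vis_->drawScene(objs);
    }
private:
    bool enabled_;
    std::unique_ptr<VisualizerObject> vis_;  // nothing is allocated unless enabled
};

int main() {
    Objects objects_here;
    VisualizerFacade vis_object(true);       // pass false and every call is a no-op
    vis_object.initScene(objects_here);
    vis_object.drawScene(objects_here);
}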
I would use the Null Object pattern:
struct IVisualizerObject
{
virtual ~IVisualizerObject() = default;
virtual void initscene(Object&) = 0;
virtual void drawScene(Object&) = 0;
// ...
};
struct NullVisualizerObject : IVisualizerObject
{
void initscene(Object&) override { /* Empty */ }
void drawScene(Object&) override { /* Empty */}
// ...
};
struct VisualizerObject : IVisualizerObject
{
void initscene(Object& o) override { /*Implementation*/}
void drawScene(Object& o) override { /*Implementation*/}
// ...
};
And finally:
std::unique_ptr<IVisualizerObject> vis_object;
if (VISUALIZE_OUTPUT) {
vis_object = std::make_unique<VisualizerObject>();
} else {
vis_object = std::make_unique<NullVisualizerObject>();
}
// ...
vis_object->initscene(objects_here);
//...
vis_object->drawScene(objects_here);
I'll give a few options. All have upsides and downsides.
If it is NOT possible to modify VisualizerObject, as I noted in comments, the effect could be achieved by using the preprocessor, since the preprocessor does not respect scope, and the question specifically seeks controlling lifetime of an object in a manner that crosses scope boundaries.
#ifdef VISUALIZE_OUTPUT
VisualizerObject vis_object;
#endif
#ifdef VISUALIZE_OUTPUT
vis_object.initscene(objects_here);
#endif
The compiler will diagnose any usage of vis_object that is not inside #ifdef/#endif.
The big criticism, of course, is that use of the preprocessor is considered poor practice in C++. The advantage is that the approach can be used even if it is not possible to modify the VisualizerObject class (e.g. because it is in a third-party library without source code provided).
However, this is the only option that has the feature requested by the OP of object lifetime crossing scope boundaries.
If it is possible to modify the VisualizerObject class, make it a class template: a do-nothing primary template plus a heavyweight specialisation for the true case.
template<bool visualise> struct VisualizerObject
{
// implement all member functions required to do nothing and have no members
VisualizerObject() {};
void initscene(types_here) {};
};
template<> struct VisualizerObject<true> // heavyweight implementation with lots of members
{
VisualizerObject(): heavy1(), heavy2() {};
void initscene(types_here) { expensive_operations_here();};
HeavyWeight1 heavy1;
HeavyWeight2 heavy2;
};
int main()
{
VisualizerObject<VISUALIZE_OUTPUT> vis_object;
...
vis_object.initscene(objects_here);
...
vis_object.drawScene(objects_here);
}
The above will work in all C++ versions. Essentially, it works by either instantiating a lightweight object with member functions that do nothing, or instantiating the heavyweight version.
It would also be possible to use the above approach to wrap a VisualizerObject.
template<bool visualise> struct VisualizerWrapper
{
// implement all required member functions to do nothing
// don't supply any members either
};
template<> struct VisualizerWrapper<true>
{
VisualizerWrapper() : object() {}
// implement all member functions as forwarders
void initscene(types_here) { object.initscene(types_here); }
VisualizerObject object;
};
int main()
{
VisualizerWrapper<VISUALIZE_OUTPUT> vis_object;
...
vis_object.initscene(objects_here);
...
vis_object.drawScene(objects_here);
}
The disadvantage of both of the template approaches is maintenance - when adding a member function to one class (template specialisation) it is necessary to add a function with the same signature to the other. In large team settings, it is likely that testing/building will be mostly done with one setting of VISUALIZE_OUTPUT or the other - so it is easy to get one version out of alignment (different interface) to the other. Problems of that (e.g. a failed build on changing the setting) are likely to emerge at inconvenient times - such as when there is a tight deadline to deliver a different version of the product.
Pedantically, the other downside of the template options is that they don't comply with the desired "kind of pattern" i.e. the if is not required in
if(VISUALIZE_OUTPUT)
{
vis_object.initscene(objects_here);
}
and object lifetimes do not cross scope boundaries.

Preventing users from creating unnamed instances of a class [duplicate]

This question already has answers here:
How to avoid C++ anonymous objects
For many RAII "guard" classes, being instantiated as anonymous variables does not make sense at all:
{
std::lock_guard<std::mutex>{some_mutex};
// Does not protect the scope!
// The unnamed instance is immediately destroyed.
}
{
scope_guard{[]{ cleanup(); }};
// `cleanup()` is executed immediately!
// The unnamed instance is immediately destroyed.
}
From this article:
Anonymous variables in C++ have “expression scope”, meaning they are destroyed at the end of the expression in which they are created.
Is there any way to prevent the user from instantiating them without a name? ("Prevent" may be too strong - "making it very difficult" is also acceptable).
I can think of two possible workarounds, but they introduce syntactical overhead in the use of the class:
Hide the class in a detail namespace and provide a macro.
namespace detail
{
class my_guard { /* ... */ };
};
#define SOME_LIB_MY_GUARD(...) \
detail::my_guard MY_GUARD_UNIQUE_NAME(__LINE__) {__VA_ARGS__}
This works, but is hackish.
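For completeness, MY_GUARD_UNIQUE_NAME is left undefined above; one conventional way to build it (an assumption on my part, not part of the original macro) is token pasting through an extra level of indirection so that __LINE__ is expanded first:
#define MY_GUARD_CONCAT_IMPL(a, b) a##b
#define MY_GUARD_CONCAT(a, b) MY_GUARD_CONCAT_IMPL(a, b)
#define MY_GUARD_UNIQUE_NAME(line) MY_GUARD_CONCAT(my_guard_, line)
// SOME_LIB_MY_GUARD(some_mutex); then expands to something like
//     detail::my_guard my_guard_42{some_mutex};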
Only allow the user to use the guard through a higher-order function.
template <typename TArgTuple, typename TF>
decltype(auto) with_guard(TArgTuple&& guardCtorArgs, TF&& f)
{
auto guard = std::make_from_tuple<detail::my_guard>(std::forward<TArgTuple>(guardCtorArgs)); // keep the guard alive while f() runs
f();
}
Usage:
with_guard(std::forward_as_tuple(some_mutex), [&]
{
// ...
});
This workaround does not work when the initialization of the guard class has "fluent" syntax:
{
auto _ = guard_creator()
.some_setting(1)
.some_setting(2)
.create();
}
Is there any better alternative? I have access to C++17 features.
The only sensible way I can think of is to make the user pass the result of guard_creator::create to some guard_activator which takes an lvalue reference as a parameter.
This way, the user of the class has no choice but to either create the object with a name (the sane option that most developers will choose) or new it and then dereference it (the insane option).
For example, you said in the comments that you work on a non-allocating asynchronous chain creator. I can think of an API which looks like this:
auto token = monad_creator().then([]{...}).then([]{...}).then([]{...}).create();
launch_async_monad(token); //gets token as Token&, the user has no way BUT create this object with a name
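A brief sketch of why that works (Token is an illustrative name for whatever create() returns): because the activator takes a non-const lvalue reference, a temporary cannot be passed to it.
struct Token { /* ...chain state... */ };

void launch_async_monad(Token& token) { (void)token; }  // binds only to named (lvalue) objects

int main() {
    Token token;                     // in the real API this would come from create()
    launch_async_monad(token);       // fine: the object has a name
    // launch_async_monad(Token{});  // error: cannot bind a temporary to Token&
}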
If you have access to the full potential of C++17, you can expand the idea of using a static factory function into something useful: guaranteed copy elision makes the static factory function possible even for non-movable classes, and the [[nodiscard]] attribute prompts the compiler to issue a warning if the return value is ignored.
class [[nodiscard]] Guard {
public:
Guard(Guard& other) = delete;
~Guard() { /* do sth. with _ptr */ }
static Guard create(void* ptr) { return Guard(ptr); }
private:
Guard(void* ptr) : _ptr(ptr) {}
void* _ptr;
};
int main(int, char**) {
Guard::create(nullptr);
//auto g = Guard::create(nullptr);
}
Compile in Compiler Explorer
You could use an extensible lint tool such as Vera++ (https://bitbucket.org/verateam/vera/wiki/Home). It lets you lint your code and create new rules using Python or Tcl (I prefer Python).
A possible flow: after each commit, your CI system (e.g. Jenkins) runs a job that executes Vera++ and catches such oversights; on failure, a mail is sent to the committer.
The canonical way to prevent a class from being instantiated directly is by making its constructor private. To actually get one of the desired instances, you call a static method, which returns a constructed object.
class Me {
public:
static Me MakeMe() { return Me(); }
private:
Me();
}; // Me
This doesn't help of course - but it'd probably make the programmer pause!
int main() {
Me(); // Invalid
Me m; // Invalid
Me::MakeMe(); // Valid - but who'd write that?
Me m = Me::MakeMe();
} // main()
I know this isn't a direct analog to the Guard instances that you describe - but maybe you could adapt the concept?

C Memory Management

The following is a sketch of how I might have some form of automated memory management in C++:
template<class T>
class Ptr{
public:
/* Some memory management stuff (ref counting etc.)
as Ptr object is initialized */
Ptr( ... ) { .. }
/* Manage reference counts etc.
as Ptr object is copied -- might be necessary
when Ptr is passed to or returned from functions */
Ptr<T>& operator=( .. ) { .. };
/* Do memory management stuff
when this "Pointer" object is destroyed. */
~Ptr() { .. }
private:
/* Pointer to main object */
T* object;
};
class Obj{
public:
static Ptr<Obj> newObj( .. ) { return Ptr<Obj>( new Obj( .. ) ); }
private:
/* Hide constructor so it can only be created by newObj */
Obj( .. ) { .. }
/* some variables for memory management routines */
int refcnt;
..
};
This way, the end-user never has to call new or malloc, and can instead call Obj::newObj( .. ).
However, I'm pretty stumped on how I might do something similar for C.
It doesn't have to be exactly like above, but I don't want to have to care about memory management when it isn't important.
The biggest issue I feel I have is that when a variable in C goes out of scope, I don't really have a 'destructor' that can be signaled to let me know that the variable has gone out of scope.
Yes, that is the primary benefit of C++. You can create classes, which encapsulate functionality. And this functionality can include constructors and destructors to ensure that data is created, managed, and destroyed in a controlled manner.
There is no such mechanism in C unless you implement an entire framework that supports it.
For a complete solution and answer to your question see GObject.