OO Approach For Hardware Communication... Possible Singleton? - c++

I am working on a project where I need to talk to a particular box over UDP. There will only ever be one box connected to the system at any given time. The connection should last the entire duration of the program.
I have written a class that works (yay!) in providing the necessary data to the hardware. However, my main problem is that now I have to account for the fact that someone (a programmer down the road who will more than likely just ignore all my very neat comments ;) ) may create more than one instance of this class. This will more than likely result in some hilarious and rather amusing crash where the hardware in question is wondering why it is receiving data from two sockets on the same machine. More troublesome is the fact that creating the object actually spawns a thread that periodically sends updates. So you can imagine if my imaginary future programmer does something like create a linked list of these objects (after all, this is C++ and we have the ability to do such things) the CPU might not be very happy after a while.
As a result, I turn to you... the more experienced people of SO who have seen such issues in the past. I have debated creating a singleton to handle all of this, but some of my readings lead me to believe that this might not be the way to go. There is a TON of information regarding them on the internet, and it's almost like asking a highly sensitive political question based on the responses I've seen.
An alternative I've developed that will preserve as much code as possible is to just use a static bool to keep track if there is an active thread passing data to the hardware. However, I suspect my approach can lead to race conditions in the case where I have competing threads attempting to access the class at the same time. Here's what I have thus far:
// in MyClass.cpp:
bool MyClass::running_ = false; // declared static in the class in the .h, but defined here
MyClass::MyClass() {
    // various initialization stuff you don't care about goes here
    if (pthread_create(mythread_, NULL, MyThreadFunc, this) != 0) {
        // error
    }
    else {
        // no error
    }
}
void* MyClass::MyThreadFunc(void* args) { // declared static in the .h
    MyClass* myclass = static_cast<MyClass*>(args);
    // now I have access to all the stuff in MyClass via myclass->
    // do various checks here to make sure I can talk to the box
    if (!running_) {
        running_ = true;
        // open a connection
        while (!terminate) { // terminate is a flag set to true in the destructor
            // update the hardware via UDP
        }
        // close the socket
        running_ = false;
    }
    return NULL;
}
While I certainly note that this will check for only one instance being active, there is still the possibility that two concurrent threads will access the !running_ check at the same time and therefore both open the connection.
As a result, I'm wondering what my options are here? Do I implement a singleton? Is there a way I can get the static variable to work? Alternatively, do I just comment about this issue and hope that the next programmer understands to not open two instances to talk to the hardware?
As always, thanks for the help!
Edited to add:
I just had another idea pop into my mind... what if the static bool was a static lock instead? That way, I could set the lock and then just have subsequent instances attempt to get the lock and if they failed, just return a zombie class... Just a thought...

You're right, asking about singletons is likely to start a flame war, and that will not make you any wiser. You'd better make up your mind yourself. It's not that hard, really, if you are aware of the primary principles.
For your case I'd skip that whole branch as irrelevant, as your post is motivated by FEAR. Fear of a speculative issue. So let me just advise you on that: relax. You can't fight idiots. As soon as you invent some fool-proof scheme, the universe evolves and produces a better idiot who will get around it. Not worth the effort. Leave the idiot problem to management and HR, to keep them employed elsewhere.
Your task is to provide a working solution and proper documentation on how to use it (ideally with tests and examples too). If you document that usage means creating just a single instance of your stuff and doing the listed init and teardown steps, you can just expect that to be followed -- and if it isn't, it's the next guy's problem.
Most of the real-life grief comes NOT from dismissing docs, but from docs that are missing or inaccurate. So just do that part properly.
Once that's done, certainly nothing forbids you to add a few static or runtime asserts on preconditions: it's not hard to count your class' instances and assert that the count never goes over 1.
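For illustration, a minimal sketch of such a runtime check, assuming C++11 and reusing the MyClass name from the question (instance_count_ is just a placeholder name):

// MyClass.h
#include <atomic>
#include <cassert>

class MyClass {
public:
    MyClass();
    ~MyClass();
private:
    static std::atomic<int> instance_count_; // number of live instances
};

// MyClass.cpp
std::atomic<int> MyClass::instance_count_{0};

MyClass::MyClass() {
    // complain loudly if a second instance ever shows up
    assert(instance_count_.fetch_add(1) == 0 && "only one MyClass instance is supported");
}

MyClass::~MyClass() {
    instance_count_.fetch_sub(1);
}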

What if you have two instances of the hardware itself? [I know you say it will only be one - but I've been there, done that on the aspect of "It's only ever going to be one!! Oh, <swearword>, now we need to use two..."].
Of course, your if (!running_) check is a race condition. You really should use some sort of atomic type, so that you don't get two attempts to start the class at once. That also won't stop someone from trying to start two instances of the overall program.
Returning a zombie class seems like a BAD solution - throwing an exception, returning an error value, or some such would be a much better choice.
Would it be possible to have "the other side" control the number of connections? In other words, if a second instance tries to communicate, it gets an error back from the hardware that receives the message "Sorry, already have a connection"?
Sorry if this isn't really "an answer".

First, I do not think you can really protect anything from this imaginary future developer if he's so intent on breaking your code. Comments/docs should do the trick. If he misses them, the hardware (or the code) will likely crash, and he will notice. Moreover, if he has a good reason to reuse your class (like connecting to other hardware of the same kind), you do not want to block him with nasty hidden tricks.
That said, for your example I would consider using a std::atomic<bool> to avoid any concurrency issue, and using the compare_exchange member function instead of if (!running_) running_ = true:
static std::atomic<bool> running;
...
bool expected = false;
if(running.compare_exchange_strong(expected, true)) {
...
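Applied to the constructor from the question, that could look roughly like this (just a sketch; it assumes the static atomic running from above, and throwing on a second instance is only one of several reasonable choices):

MyClass::MyClass() {
    bool expected = false;
    if (!running.compare_exchange_strong(expected, true)) {
        // another instance already owns the hardware connection (needs <stdexcept>)
        throw std::runtime_error("hardware connection already in use");
    }
    if (pthread_create(mythread_, NULL, MyThreadFunc, this) != 0) {
        running = false; // give up the claim if the thread could not start
        // handle the error
    }
}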


What should a function return that can fail?

My question is a bit vague, and that is at least partially intentional. I'm looking for early advice about interface design, future expansion and easy-to-test interfaces, not so much about exact code. Although, if it's a solution/advice that applies especially to C++(17), then that's perfect.
I have a simulation that solves a hard combinatorial problem. The simulation proceeds step by step towards a solution. But sometimes a step is a dead end (or appears to be one). For example, in the pseudo code below, the while loop propagates the simulation by feeding it pieces. There can be a case where a Piece cannot be used to propagate the simulation, e.g. it's too expensive, it's a piece that simply does not fit, or whatever other reason. Failure is normal behaviour that can arise and is no reason to abort.
while (!allNotHandledPieces.empty())
{
    auto nextPiece = allNotHandledPieces.back();
    allNotHandledPieces.pop_back();
    auto status = simulation.propagate(nextPiece);
    if (status == 0)
        continue; // ok
    else if (status == 1)
        dropPieceDueToTooBadStatus();
}
Actually, for the 'simulation' it doesn't matter why it failed. The state is still valid, there is no problem with using another piece, it can just continue. But later, I might see the need to know why the 'propagate' method failed. Then it might make sense to have a meaningful return value on failure.
Is there a good way to return 'failure'? Is there a best practice for situations like this?
From what I can gather, it seems like neither exceptions nor "returning" are appropriate for your situation. In your question you talk about "returning a result", but your particular code example/use case suggests that the algorithm should not halt or be interrupted when a "bad piece" is used. Both exceptions and "returning" from a function result in more complex control flow and, in some cases, more complex state handling depending on how you deal with exceptions.
If you choose the exception route, I suggest modeling your "reporting" classes after system_error.
If you choose a non-control flow route, I suggest pushing an error code and other meta data about the piece (if the piece itself is too expensive to copy around) into another data structure where it can be consumed.
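For example, a rough sketch of that non-control-flow approach (FailureReason, PieceFailure, failureLog and recordFailure are made-up names, not from your code):

#include <vector>

enum class FailureReason { TooExpensive, DoesNotFit, Other };

struct PieceFailure {
    int pieceId;          // or a cheap copy/handle of the rejected piece
    FailureReason reason;
};

std::vector<PieceFailure> failureLog;

void recordFailure(int pieceId, FailureReason reason) {
    failureLog.push_back({pieceId, reason});
}

// In the loop: instead of dropPieceDueToTooBadStatus(), call
// recordFailure(...) and simply continue with the next piece.

The loop keeps running, and the log can be inspected later if you ever need to know why pieces were dropped.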
What you choose to do depends on whether or not the bad piece requires immediate action. If it doesn't, exceptions are not the way to go since that implies it needs to be handled at some point.

Functional vs Safety / Static vs Dynamic instantiations

I'm in a situation where I think that two implementations are correct, and I don't know which one to choose.
I have an application simulating card readers. It has a GUI where you choose which serial port and speed to use, plus a play and a stop button.
I'm looking for the best implementation for reader construction.
I have a SimulatorCore class that lives as long as my application.
SimulatorCore instantiates the Reader class, and it will be possible to simulate multiple readers on multiple serial ports.
Two possibilities:
My Reader is held through a pointer (dynamic instantiation): I instantiate it when the play button is hit and delete it when the stop button is hit.
My Reader is a member object (static instantiation): I instantiate it in the SimulatorCore constructor, add Reader.init() and Reader.cleanup() methods to my Reader class, and call these when play and stop are hit.
I personally see the functional side and clearly want to use a pointer, so that no Reader is instantiated when no reader is being simulated.
Someone told me that I should use static instantiation (reason: for safety, and because "it's bad to use pointers when you have the choice not to use them").
I'm not familiar with them, but I think I could also use smart pointers.
Code samples: 1st solution:
class SimulatorCore
{
public:
    void play() { reader = new Reader(); }
    void stop() { delete reader; reader = nullptr; }
private:
    Reader *reader;
};
Code samples: 2nd solution:
class SimulatorCore
{
public:
    void play() { reader.init(); }
    void stop() { reader.cleanup(); }
private:
    Reader reader;
};
The code is untested; I've just written it for illustration.
What is the best solution? Why?
You can easily use shared_ptr/unique_ptr:
class SimulatorCore
{
public:
    void play() { _reader = std::make_shared<Reader>(); }
    void stop() { _reader = nullptr; }
private:
    std::shared_ptr<Reader> _reader;
};
That will solve your problem the right way, I guess.
Dynamic allocation brings some problems. For example, if an exception is thrown between play() and stop(), stop() will never be called and the memory is leaked. Or you can simply forget to call stop() somewhere before the destruction of SimulatorCore, which is easy to do in a large program.
If you have never tried smart pointers, this is a good chance to start.
You should generally avoid performing dynamic allocation with new yourself, so if you were going to go with the 1st solution, you should use smart pointers instead.
However, the main question here is a question of logic. A real card reader exists in an idle state until it is being used. In the 2nd solution, what do init and cleanup do? Do they simply set the card reader up in an idle state, or do they start simulating actually having a card being read? If it's the first case, I suggest that this behaviour should be in the constructor and destructor of Reader, so that creating a Reader object denotes bringing a card reader into existence. If it's the second case, then the 2nd solution is pretty much correct, just with badly named functions.
What seems most logical to me is something more like this:
class SimulatorCore
{
public:
    void play() { reader.start(); }
    void stop() { reader.stop(); }
private:
    Reader reader;
};
Yes, all I've done is change the function names for Reader. However, the functions now are not responsible for initialising or cleaning up the reader - that responsibility is in the hands of Reader's constructor and destructor. Instead, start and stop begin and end simulation of the Reader. A single Reader instance can then enter and exit this simulation mode multiple times in its lifetime.
If you later want to extend this idea to multiple Readers, you can just change the member to:
std::vector<Reader> readers;
However, I cannot know for certain that this is what you want because I don't know the logic of your program. Hopefully this will give you some ideas though.
Again, whatever you decide to do, you should avoid using new to allocate your Readers and then also avoid using raw pointers to refer to those Readers. Use smart pointers and their corresponding make_... functions to dynamically allocate those objects.
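For the 1st solution, that would look roughly like this (just a sketch, assuming C++14 for std::make_unique):

#include <memory>

class SimulatorCore
{
public:
    void play() { reader = std::make_unique<Reader>(); }  // allocate on play
    void stop() { reader.reset(); }                       // destroy on stop, no manual delete
private:
    std::unique_ptr<Reader> reader;
};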
It clearly depends on how your whole program is organized, but in general I think I would prefer the static approach, because of responsibility considerations:
Suppose you have a separate class that handles serial communication. That class will send and receive messages and dispatch them to the reader class. A message may arrive at any time. The difference between the dynamic and static approaches is:
With the dynamic approach, the serial class must test if the reader actually exists before dispatching a message. Or the reader has to register and unregister itself in the serial class.
With the static approach, the reader class can decide for itself, if it is able to process the message at the moment, or not.
So I think the static approach is a bit easier and straight-forward.
However, if there is a chance that you will have to implement other, different reader classes in the future, the dynamic approach will make this extension easier, because the appropriate class can easily be instantiated at runtime.
So the dynamic approach offers more flexibility.

C++ communication between threads

I have a couple of classes that each open a different program in a different thread and do/hold information about it, using CreateProcess (if there's a more C++-oriented way to do this, let me know -- I looked).
Some of the classes are dependent on one of the other programs running, i.e. B must stop if A stopped. I wrote this code a while ago, and my solution then was to have a class with static functions that run the various programs and static member variables that hold their "state". I was also using CreateThread.
Looking back, this method seems... fragile and awkward.
I have no idea whether using such a "static class" is good practice or not (especially recalling how awkward initializing the state member variables was). I'd like to have each class contain its own run function. However, the issue I am considering is how to let class B know that A has stopped unexpectedly. They'd still need a way to be aware of each other's state. Note that I'd like to use std::thread in this rework and that I have little to no experience with multithreading. Thanks for any help.
Well, in a multi-process application you would be using pipes/files to transmit information from one process to another (or maybe even the return value of a child process). You could also try shared memory, though it can be somewhat challenging (look into Boost.Interprocess if you wish to).
In a multi-threaded application you basically have the same options available:
you can use memory that is shared (provided you synchronize access)
you can use queues to pass information from one thread to another (such as with pipes)
so really the two are quite similar.
Following Tony Hoare's precept, you should generally share by communicating, and not communicate by sharing, which means preferring queues/pipes over shared memory; however, for just a boolean flag, shared memory can be easier to put in place:
#include <atomic>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void do_a(std::atomic<bool>& done) {
    // do things
    done = true;
}

int main() {
    std::atomic<bool> done(false);
    auto a = std::async(std::launch::async, do_a, std::ref(done));
    auto b = std::async(std::launch::async, [&done] {
        while (!done) {
            std::cout << "still not done" << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    });
    // other stuff in parallel.
}
And of course, you can even put this flag in a std::shared_ptr to avoid the risk of dangling references.
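If you go the queue route instead, a minimal mutex-plus-condition-variable queue could look like this (just a sketch, not production code; MessageQueue is a made-up name):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class MessageQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();
    }
    T pop() { // blocks until a value is available
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};

Thread A pushes status messages (e.g. "A stopped") and thread B blocks on pop(), so B learns about A's state without the two sharing any other data.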

Risking the exception anti-pattern.. with some modifications

Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in place for events like this. One approach would be to write wrapper functions that encapsulate each API:
returnCode = DEFAULT;
try
{
    returnCode = libraryAPI1();
}
catch(...)
{
    returnCode = BAD;
}
return returnCode;
The caller of the library then restarts the whole thread and reinitializes the module if the return code is bad.
Things CAN still go horribly wrong. E.g.
if the try block (or libraryAPI1()) had:
func1();
char *x=malloc(1000);
func2();
if func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome.
Could you please tell me what other things can possibly go wrong in this scenario?
This code:
func1();
char *x=malloc(1000);
func2();
is not C++ code. It is what people refer to as "C with classes": a style of programming that looks like C++ but does not match how C++ is used in real life. The reason is that good exception-safe C++ code practically never requires the direct use of pointers, because pointers are always contained inside a class specifically designed to manage their lifespan in an exception-safe manner (usually smart pointers or containers).
The C++ equivalent of that code is:
func1();
std::vector<char> x(1000);
func2();
A hardware failure may not lead to a C++ exception. On some systems, hardware exceptions are a completely different mechanism than C++ exceptions. On others, C++ exceptions are built on top of the hardware exception mechanism. So this isn't really a general design question.
If you want to be able to recover, you need to be transactional--each state change needs to run to completion or be backed out completely. RAII is one part of that. As Chris Becke points out in another answer, there's more to state than resource acquisition.
There's a copy-modify-swap idiom that's used a lot for transactions, but that might be way too heavy if you're trying to adapt working code to handle this one-in-a-million case.
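For reference, a minimal sketch of the copy-modify-swap shape (Settings, applyChangeA and applyChangeB are made-up names):

#include <utility>

void update(Settings& current) {
    Settings working = current;   // copy
    working.applyChangeA();       // modify the copy; may throw
    working.applyChangeB();       // current is still untouched if this throws
    using std::swap;
    swap(current, working);       // commit; swap should be non-throwing
}

Either every change lands or none does, which is exactly the transactional behaviour described above.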
If you truly need robustness, then isolate the code into a process. If a hardware fault kills the process, you can have a watchdog restart it. The OS will reclaim the lost resources. Your code would only need to worry about being transactional with persistent state, like stuff saved to files.
Do you have control over the libraryAPI implementation?
If it can fit into an OO model, you can design it using the RAII pattern, which guarantees that the destructor (which releases the acquired resources) is invoked on an exception.
Using a resource-management helper such as a smart pointer helps too:
try
{
    someNormalFunction();
    cSmartPtr<BYTE> pBuf = malloc(1000);
    someExceptionThrowingFunction();
}
catch(...)
{
    // Do logging and other necessary actions
    // but no cleaning required for <pBuf>
}
The problem with exceptions is that even if you re-engineer with RAII, it's still easy to write code that becomes desynchronized:
void SomeClass::SomeMethod()
{
    this->stateA++;
    SomeOtherMethod();
    this->stateB++;
}
Now, the example might look artificial, but if you substitute stateA++ and stateB++ with operations that change the state of the class in some way, the expected outcome of this class is for the states to remain in sync. RAII might solve some of the problems associated with state when using exceptions, but all it does is provide a false sense of security: if SomeOtherMethod() throws an exception, ALL the surrounding code needs to be analyzed to ensure that the post-conditions (stateA.delta == stateB.delta) are met.

Does ScopeGuard use really lead to better code?

I came across this article written by Andrei Alexandrescu and Petru Marginean many years ago, which presents and discusses a utility class called ScopeGuard for writing exception-safe code. I'd like to know if coding with these objects truly leads to better code or if it obfuscates error handling, in that perhaps the guard's callback would be better presented in a catch block? Does anyone have any experience using these in actual production code?
It definitely improves your code. Your tentatively formulated claim that it's obscure and that the code would benefit from a catch block is simply not true in C++, because RAII is an established idiom. Resource handling in C++ is done by resource acquisition, and cleanup is done by implicit destructor calls.
On the other hand, explicit catch blocks would bloat the code and introduce subtle errors, because the code flow gets much more complex and resource handling has to be done explicitly.
RAII (including ScopeGuards) isn't an obscure technique in C++ but firmly established best practice.
Yes.
If there is one single piece of C++ code that I could recommend every C++ programmer spend 10 minutes learning, it is ScopeGuard (now part of the freely available Loki library).
I decided to try using a (slightly modified) version of ScopeGuard for a smallish Win32 GUI program I was working on. Win32 as you may know has many different types of resources that need to be closed in different ways (e.g. kernel handles are usually closed with CloseHandle(), GDI BeginPaint() needs to be paired with EndPaint(), etc.) I used ScopeGuard with all these resources, and also for allocating working buffers with new (e.g. for character set conversions to/from Unicode).
What amazed me was how much shorter the program was. Basically, it's a win-win: your code gets shorter and more robust at the same time. Future code changes can't leak anything. They just can't. How cool is that?
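For readers who haven't seen the technique, a minimal hand-rolled sketch of the general shape looks something like this (this is not the Loki implementation; ScopeGuard and makeScopeGuard here are just illustrative names):

#include <utility>

template <typename F>
class ScopeGuard {
public:
    explicit ScopeGuard(F f) : f_(std::move(f)), active_(true) {}
    ~ScopeGuard() { if (active_) f_(); }    // run the cleanup on scope exit
    void dismiss() { active_ = false; }     // cancel the cleanup, e.g. after a successful commit
    ScopeGuard(ScopeGuard&& other) : f_(std::move(other.f_)), active_(other.active_) { other.dismiss(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    F f_;
    bool active_;
};

template <typename F>
ScopeGuard<F> makeScopeGuard(F f) { return ScopeGuard<F>(std::move(f)); }

// e.g. HANDLE h = CreateFileW(...);
// auto guard = makeScopeGuard([&] { CloseHandle(h); }); // closed however we leave the scope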
I often use it for guarding memory usage, things that need to be freed that were returned from the OS. For example:
DATA_BLOB blobIn, blobOut;
blobIn.pbData=const_cast<BYTE*>(data);
blobIn.cbData=length;
CryptUnprotectData(&blobIn, NULL, NULL, NULL, NULL, CRYPTPROTECT_UI_FORBIDDEN, &blobOut);
Guard guardBlob=guardFn(::LocalFree, blobOut.pbData);
// do stuff with blobOut.pbData
I think the above answers lack one important note. As others have pointed out, you can use ScopeGuard in order to free allocated resources independent of failure (exception). But that might not be the only thing you want to use a scope guard for. In fact, the examples in the linked article use ScopeGuard for a different purpose: transactions. In short, it might be useful if you have multiple objects (even if those objects properly use RAII) that you need to keep in a state that's somehow correlated. If a change of state of any of those objects results in an exception (which, I presume, usually means that its state didn't change), then all changes already applied need to be rolled back. This creates its own set of problems (what if a rollback fails as well?). You could try to roll your own class that manages such correlated objects, but as their number increases it would get messy, and you would probably fall back to using ScopeGuard internally anyway.
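For instance, a transactional sketch using a dismissable guard like the one sketched above (Account, withdraw and deposit are made-up names):

void transfer(Account& from, Account& to, int amount) {
    from.withdraw(amount);                                         // first state change
    auto rollback = makeScopeGuard([&] { from.deposit(amount); }); // undo it if we don't reach dismiss()
    to.deposit(amount);                                            // may throw
    rollback.dismiss();                                            // both changes succeeded: keep them
}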
Yes.
It was considered so important that D even has dedicated syntax for it:
void somefunction() {
writeln("function enter");
// C++ has similar constructs, but not at the syntax level
scope(exit) writeln("function exit");
// do what ever you do, you never miss the function exit output
}
I haven't used this particular template but I've used something similar before. Yes, it does lead to clearer code when compared to equally robust code implemented in different ways.
I have to say, no, no it does not. The answers here help to demonstrate why it's a genuinely awful idea. Resource handling should be done through re-usable classes. The only thing they've achieved by using a scope guard is to violate DRY up the wazoo and duplicate their resource freeing code all over their codebase, instead of writing one class to handle the resource and then that's it, for the whole lot.
If scope guards have any actual uses, resource handling is not one of them. They're massively inferior to plain RAII in that case, since RAII is deduplicated and automatic and scope guards are manual code duplication or bust.
My experience shows that usage of scoped_guard is far inferior to any of the short reusable RAII classes that you can write by hand.
Before trying scoped_guard, I had written RAII classes to:
set GLcolor or GLwidth back to the original once I've drawn a shape
make sure a file is fclose()d once I had fopen()ed it
reset a mouse pointer to its initial state after I've changed it to gears/hourglass during the execution of a slow function
reset the sorting state of a QListView back to its previous state, once I've temporarily finished altering its QListViewItems -- I did not want the list to reorder itself every time I changed the text of a single item...
using a simple RAII class
Here's how my code looked with my hand-crafted RAII classes:
class scoped_width {
    int m_old_width;
public:
    scoped_width(int w) {
        m_old_width = getGLwidth();
        setGLwidth(w);
    }
    ~scoped_width() {
        setGLwidth(m_old_width);
    }
};
void DrawTriangle(Tria *t)
{
    // GLwidth=1 here
    auto guard = scoped_width(2); // sets GLwidth=2
    draw_line(t->a, t->b);
    draw_line(t->b, t->c);
    draw_line(t->c, t->a);
    setGLwidth(5);
    draw_point(t->a);
    draw_point(t->b);
    draw_point(t->c);
} // scoped_width sets GLwidth back to 1 here
Very simple implementation for scoped_width, and quite reusable.
Very simple and readable from the consumer side, also.
using scoped_guard (C++14)
Now, with scoped_guard, I have to capture the existing value in the lambda introducer ([]) in order to pass it to the guard's callback:
void DrawTriangle(Tria *t)
{
    // GLwidth=1 here
    auto guard = sg::make_scoped_guard([w = getGLwidth()]() { setGLwidth(w); }); // capture current GLwidth in order to set it back
    setGLwidth(2); // sets GLwidth=2
    draw_line(t->a, t->b);
    draw_line(t->b, t->c);
    draw_line(t->c, t->a);
    setGLwidth(5);
    draw_point(t->a);
    draw_point(t->b);
    draw_point(t->c);
} // scoped_guard sets GLwidth back to 1 here
The above doesn't even work in C++11.
Not to mention that trying to introduce the state to the lambda this way hurts my eyes.
using scoped_guard (C++11)
In C++11 you have to do this:
void DrawTriangle(Tria *t)
{
    // GLwidth=1 here
    int previous_width = getGLwidth(); // explicitly capture current width
    auto guard = sg::make_scoped_guard([=]() { setGLwidth(previous_width); }); // pass it to the lambda in order to set it back
    setGLwidth(2); // sets GLwidth=2
    draw_line(t->a, t->b);
    draw_line(t->b, t->c);
    draw_line(t->c, t->a);
    setGLwidth(5);
    draw_point(t->a);
    draw_point(t->b);
    draw_point(t->c);
} // scoped_guard sets GLwidth back to 1 here
As you can see,
the scoped_guard snippet requires
    3 lines to keep the previous value (state) and set a new one, and
    2 stack variables (previous_width and guard) to hold the previous state;
the hand-crafted RAII class requires
    1 readable line to set the new state and keep the previous one, and
    1 stack variable (guard) to hold the previous state.
Conclusion
I think that examples such as
void some_function() {
    auto guard = sg::make_scoped_guard([]() { cout << "this is printed last"; });
    cout << "this is printed first";
}
are no proof of the usefulness of scoped_guard.
I hope that somebody can show me why I don't get the expected gain from scoped_guard.
I am convinced that RAII can be exploited better by writing short hand-crafted classes than by using the more generic but hard-to-use scoped_guard.