Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 6 years ago.
So I have started learning Qt 4.5 and found the signal/slot mechanism to be of help. However, now I find myself considering two types of architecture.
This is the one I would use
class IDataBlock
{
public:
    virtual ~IDataBlock() {}
    virtual void updateBlock(std::string& someData) = 0;
};

class Updater
{
private:
    void updateData(IDataBlock& someblock)
    {
        // ...
        someblock.updateBlock(data);
        // ...
    }
};
Note: code inlined for brevity.
Now with signals I could just
void Updater::updateData()
{
...
emit updatedData(data);
}
This is cleaner and reduces the need for an interface, but should I do it just because I can? The first block of code requires more typing and more classes, but it shows a relationship. With the second block, everything is more "formless". Which one is more desirable, and if it is a case-by-case basis, what are the guidelines?
Emitting a signal costs a few switches and some additional function calls (depending on what is connected and how), but the overhead should be minimal.
The provider of a signal has no control over who its clients are, or even whether they have all actually received the signal by the time emit returns.
This is very convenient and allows complete decoupling, but it can also lead to problems when the order of execution matters or when you want to return something.
Never pass pointers to temporary data (unless you know exactly what you are doing, and even then...). If you must, pass the address of a member variable -- Qt provides a way to delay destruction of an object until after all events for it are processed.
Signals also might require an event loop to be running (unless the connection is direct, I think).
Overall they make a lot of sense in event-driven applications (in fact, it quickly becomes very annoying without them).
If you are already using Qt in a project, definitely use them. If a dependency on Qt is unacceptable, Boost has a similar mechanism.
There is another difference. #1 is hard-coupled to the IDataBlock interface, and the Updater class needs to know about "someblock". #2 can be late-coupled via a connect call (or several, including disconnects), which leads to a more dynamic approach. #2 acts like a message (think Smalltalk/ObjC) and not a call (think C/C++). Messages can also be subject to multiple dispatch, which would have to be hand-implemented in #1.
My preference would be to use signals/slots because of their flexibility, unless performance or the need for immediately returned data rules them out (or a dependency on Qt is undesirable).
The two forms may appear similar, and functionally they are. In practice, though, you are solving a larger problem, and external circumstances will cause these two solutions to not be equivalent.
A common case is figuring out the relation between source and sink. Do they even know each other? In your first example, updateData() needs to have the sink passed in. But what if the trigger is a GUI button [Update Data]? Pushbuttons are generic components and shouldn't know about IDataBlock.
A solution is of course to add a m_someblock member to Updater. The pushbutton would now update whatever member is in Updater. But is this really what you intended?
Closed 2 years ago.
Recently I have been doing a project for a school course. I always declare all variables and methods inside every class public, because it helps me access those variables more easily while developing and means less coding for the get() and set() functions. However, I think this is the wrong way of doing OOP. Any ideas?
Getters/Setters are useful sometimes, but aggregates are also useful.
If you have an aggregate, you should be willing to accept any data that matches the types of your data fields. If you want to maintain invariants (width>height) and assume it elsewhere in your code, you'll want accessors.
But code that doesn't assume invariants is often easier to work with and can even be less bug prone; manually maintaining invariants can get extremely hard, as messing up or compromising even once makes the invariant false.
Honestly, the biggest advantage of getters/setters is mocking (making test harnesses) and putting a breakpoint at access/modification. The costs in terms of code bulk and the like are real, and having more of the code you write not be boilerplate has value.
So a width/height field on a non-"live" rendered rect? Default to public data. A buffer used to store the data in a hand written optional<T>? Private data, and accessors.
Accessors should be used to reduce your own (or the code reader's) cognitive load. Write code with a purpose, and don't write code that doesn't have a purpose.
Now you'll still want to know how to write getters/setters, so practicing on stupid "rect width/height" cases has value. And learning the LSP problem that while a ReadOnly square is a kind of ReadOnly rect, a ReadWrite square is not a kind of ReadWrite rectangle might be best done via experience (or maybe not, as so many people experience it but don't learn the lesson).
This pertains to the principle of encapsulation where exposing internals means, from the perspective of the class in question ("you"):
You have no control over what is written to these fields
You are not notified if these fields are accessed
You are not notified if these fields are changed
You can never trust that the values are valid
You cannot change the types of these values without impacting any code that uses them
When you encapsulate you control access to these properties meaning:
You can prevent alterations
You can validate before writing, and reject invalid values
You can change the internal representation without consequence, provided the get/set functions still behave the same way
You can clean up the values before they are written
You have confidence that at all times the values are valid since you are the gatekeeper
You can layer additional behaviour on before or after changes have been made, such as the observer pattern
This is not to say you must use encapsulation all the time. There are many cases when you want "dumb data" that doesn't do anything fancy, it's just a container for passing things around.
In C++ this often leads to the use of struct as "dumb data" since all fields are public by default, and class as "smart data" as the fields are private by default, even though apart from the access defaults these two things are largely interchangeable.
Closed 5 years ago.
I'm writing a program for a microcontroller in C++, and I need to write a function to input some numbers through a computer connected to it.
This function should perform many different and well-defined tasks, e.g.: obtain data (characters) from the computer, check whether the characters are valid, transform the characters into actual numbers, and many others. Written as a single function it would be at least 500 lines long. So I'll write a group of shorter functions and one "main" function that calls the others in the only meaningful order. Those functions will never be called in the rest of the code (except, of course, by the main function). One last thing: the functions need to pass each other quite a lot of variables.
What is the best way to organize those functions? My first thought was to create a class with only the "main" function in the public section, and the other functions and the variables shared by different functions as private members. But I was wondering if this is good practice: I think it doesn't respect the C++ concept of a "class"... for example, to use this "group of functions" I would need to do something like this:
class GetNumbers {
public:
    // using the constructor as what I called the "main" function
    GetNumbers(int arg1, char arg2) {
        performFirstAction();
        performSecondAction();
        // ...
    }
private:
    void performFirstAction() { /* ... */ }
    void performSecondAction() { /* ... */ }
    // ...
    bool aSharedVariable;
    int anotherVariable;
    // ...
};
And where I actually need to input those numbers from the computer:
GetNumbers thisMakesNoSenseInMyOpinion (x,y);
Making the "main" function a normal class method (and not the constructor) seems to be even worse:
GetNumbers howCanICallThis;
howCanICallThis.getNumbers(x,y);
...
//somewhere else in the same scope
howCanICallThis.getNumbers(r,s);
This is really a software design question. To be honest, unless you're sharing the component with a bunch of other people, I wouldn't really worry too much about how it's encapsulated.
Many libraries (new and old) make a family of functions that are to be used in a specific way. Sometimes they have a built-in "state machine," but no specific way of enforcing that the functions are used in a specific order. This is okay, as long as it's well documented. A group of functions might be prefixed by the same word and packaged as a library if it's to be reusable; that way someone could link to your DLL and include the appropriate headers.
https://en.wikipedia.org/wiki/Finite-state_machine
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation.
Another way of organizing a set of computations is to package them up as a class, allow the constructor to accept the required input, and then:
Kick off the required computation in the constructor
Kick off the computation after calling a public function like ->compute()
Make it into a functor (a class with an overloaded set of () operators)
Along with basically countless other options...
This has some benefit because if later you have multiple ways to compute results, you can replace it on the fly using something called the Strategy Pattern.
https://en.wikipedia.org/wiki/Strategy_pattern
In computer programming, the strategy pattern (also known as the policy pattern) is a behavioural software design pattern that enables selecting an algorithm at runtime.
In the end, it doesn't matter which you use as long as you are consistent. If you are looking to make something more "idiomatic" in a given language, you really have to go out there and look at some code on the web to get a feel for how things are done. Different communities prefer different styles, which are all in the end subjective.
Closed 8 years ago.
I need to make a state machine for a hardware device. It will have more than 25 states and I am not sure what design to apply.
Because I am using C++11 I thought of using OOP and implementing it using the State Pattern, but I don't think it is appropriate for the embedded system.
Should it be more like a C style design ? I haven't coded one before. Can someone give me some pointers on what the best-suited design is?
System information:
ARM Cortex-M4
1 MB Flash
196 KB Ram
I also saw this question; the accepted answer points to a table design, the other answer to a State pattern design.
The State pattern is not very efficient, because every function call goes through at least a pointer dereference and vtable lookup, but as long as you don't update your state every 2 or 3 clock cycles or call a state-machine function inside a time-critical loop, you should be fine. After all, the M4 is quite a powerful microcontroller.
The question is whether you need it or not. In my opinion, the State pattern only makes sense if, in each state, the behavior of an object differs significantly (with the need for different internal variables in each state) and if you don't want to carry over variable values during state transitions.
If your state machine is only about taking the transition from A to B when reading event alpha and emitting signal beta in the process, then the classic table- or switch-based approach is much more sensible.
EDIT:
I just want to clarify that my answer wasn't meant as a statement against C++ or OOP, which I would definitely use here (mostly out of personal preference). I only wanted to point out that the State pattern might be overkill, and just because one is using C++ doesn't mean he/she has to use class hierarchies, polymorphism and special design patterns everywhere.
Consider the QP active object framework, a framework for implementing hierarchical state machines in embedded systems. It's described in the book, Practical UML Statecharts in C/C++: Event Driven Programming for Embedded Systems by Miro Samek. Also, Chapter 3 of the book describes more traditional ways of implementing state machines in C and C++.
Nothing wrong with a class. You could define a 'State' enum and pass in, or queue, events, using a case switch on State to access the correct action code/function. I prefer that for simpler hardware-control state engines over the classic 'State-Machine 101' table-driven approach. Table-driven engines are awesomely flexible, but can get a bit convoluted for complex functionality and somewhat more difficult to debug.
Should it be more like a C style design ?
Gawd, NO!
Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 6 years ago.
I parse/process data coming from many different streams (with different formats), and the number of data sources in my system keeps growing. I have a factory class which, based on a config file specifying the source, gives me the appropriate parser/processor pair (abiding by a small common interface), in something like this:
Foo* FooFactory::createFoo(source c, /* a couple of flags */)
{
    switch (c)
    {
    case SOURCE_A:
        // 3 or 4 lines to put together a parser for A, and
        // something to process stuff from the parser
        return new FooA(/*args*/);
    // too many more cases, which has started to worry me
    default:
        return NULL;
    }
}
The problem is, as the number of sources has grown, I am facing two issues. First, when I build, I find myself pulling in all the FooA, FooB, FooC, FooD, FooE... relevant code, even if I am only interested in building a binary in which I'll only request FooA, let's say. So how do I go about modularizing that? A secondary issue: right now in the case of SOURCE_A I return FooA, but what if I am interested in SOURCE_A but have different ways of parsing it, and perhaps I want FooA_simple and FooA_careful, with the ability to plug and play as well?
For some reason, one thing that came to mind was the -u option to the linker when building a binary... it somehow suggests to me the notion of plug and play, but I'm not sure what a good approach to the problem would be.
Well, you just create a factory interface and divide the logic among subtypes of that factory. So there might be a sub-factory (type/instance) for libFooA, and another for libFooB. Then you can simply create a composite factory depending on the subfactories/libraries you want to support in a particular scenario/program. Then you could even further subdivide the factories. You could also create factory enumerators for your composite types and do away with all that switch logic. Then you might say to your libFooA factory instance to enable careful mode or simple mode at that higher level. So your graph of FooFactory instances and subtypes could easily vary, and the class structure could be like a tree. Libraries are one way to approach it to minimize dependencies, but there may be more logical ways to divide the specialized sub-factories.
I'm not sure you can get around importing FooA, FooB... because at any given moment any one of them might be instantiated. As for modularizing it, I'd recommend creating helper functions that get called inside the switch statement.
Closed 6 years ago.
I have a task to replace a library of classes, all associated with the same thing. However, this thing permeates the rest of the code to a huge degree. I have been trying to simply comment it all out, but it is taking forever!
Is there a better way? The new system is somewhat similar but not nearly similar enough to just replace the old one.
What's the best plan of attack?
edit - My main concern is this:
What if I comment out every reference to the old code, and then find that, because of the complexity of the system, it still doesn't run? Have I then wasted all that time?
If you're worried that the code won't run after all this surgery, then the goal must be to modify the system gradually and reversibly, verifying that it's still working at every step. Primum non nocere.
If you have a good set of unit tests (which I doubt very much, from the sound of this project), you should be in the habit of running it every few minutes. Otherwise you can at least cobble up a regression test of your own: run the code on a typical set of input data, and take the checksum of the output-- if the checksum changes, then you broke something since the last time you ran the test, so rewind to that time (you do use version control, don't you?) and proceed with care. The longer the test takes to run, the less often you can afford to run it, but it should be nightly at least.
The old Thing has not remained encapsulated (if it ever was to begin with). The rest of the code knows too much about the implementation of oldThing, making a simple swapout with newThing impossible. So clean up the interface. Look over the public declarations of oldThing (including whatever base classes are exposed) and consider whether each one is something the world really needs to know about-- if not, put in an accessor/mutator, or revise the class tree, or whatever. Isolate the implementation from the interface.
While you're doing that, look at the public interface of newThing; it should be clean and abstract, like what you're trying to achieve with oldThing (if it's a mess, then you have a whole other set of problems). With some effort you can guide the changes in the oldThing interface to match what newThing has.
As that starts to come together, the task of swapping out old for new will start to look feasible. In the end you'll be able to do it by changing a single #include statement and a single word in the makefile, if you want to go that far.
You could throw away all the base libraries you don't want to use anymore, and walk through the resulting error list.
The unresolved references will lead you to where you must get active. If you have replacement patterns for the calling patterns, that helps.
If you don't want to shuffle through your 60k files, I'd suggest implementing a dummy version of the existing classes: remove all the code in the original classes and replace all class members with this kind of macro:
#include <cstdio>

#define DEPRECATED( function, file, line ) printf("Unsupported %s call in %s line %d\n", function, file, line )
#define DEPRECATED_METHOD_WRAPPER(type, X) type X { DEPRECATED( #X, __FILE__, __LINE__ ); return (type)0; }

class OldClass
{
public:
    OldClass() { DEPRECATED( "OldClass", __FILE__, __LINE__ ); }
    // original method:
    // int doSomeStuff(int a, void *b);
    // deprecated:
    DEPRECATED_METHOD_WRAPPER(int, doSomeStuff(int a, void *b) );
};
Now, when your big program calls your deprecated library, you'll see in the traces:
the place where it's called
the method that is called
AND, you don't have to touch the original files for the moment.
Your program will run without failing, and now your work will be to remove the references to your old classes... but at least you'll get a nice reminder of where they are, without perturbing the flow too much.