I have been working with Qt, and I noticed that it takes OOP to another level by adding new section labels to classes, such as private slots:, public slots:, signals:, etc. What are they doing to declare such categories of a class? Is it compiler specific, or is it simply a sort of typedef? I am guessing it's portable to the major OSes, since Qt runs on several systems. I ask out of curiosity, and so I can create my own subclasses to help organize and write more OOP programs. For example:
class Main
{
handles:
    HANDLE hWin;
threads:
    HANDLE hThread;
};
And then clean inheritance would be easy, by simply doing:
class Dialog : handles Main
{
};
To me, it looks like people are not answering your question; they're answering something a little different. You seem to be talking about the "section tags" that just happen to be used for slots, and you would like section tags for your own class sections, like handles and threads. That is something Qt does by preprocessing the code before it gets sent to the compiler. It's not something you could do yourself without adding another compilation stage.
That said, it really doesn't have much of a use, except to tell the Qt preprocessor (moc) which sections to find its slots in. You can't inherit them, as you appear to want to do. It just marks an area on which to generate introspective code.
And you really wouldn't want to do it like that anyway. If you have separate components that you'd like to inherit, separate them into separate classes, and then make those separate classes members of your larger class that controls their interaction. If you tried to split apart a class that had different sections, the compiler couldn't ensure that the pieces didn't interact in some way that depended on each other's invariants, because the compiler doesn't put up those kinds of boundaries between the different members.
It is called signals and slots, and it is implemented by the Qt meta-object compiler, moc, which also adds an introspection system to the pre-processed classes. There also exist signal/slot implementations that don't require a special preprocessing step, such as Boost.Signals and Boost.Signals2. I wouldn't consider signals and slots any more or less OOP than normal message passing through function calls, but that is argumentative and not really relevant.
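For concreteness, here is a minimal sketch of the idiom, modelled on the classic Counter example from the Qt documentation. To the C++ compiler these section labels are nothing special: Qt defines signals as public and slots as nothing at all, and moc generates the supporting code from them.

#include <QObject>

class Counter : public QObject
{
    Q_OBJECT // moc scans for this and for the section labels below

public:
    int value() const { return m_value; }

public slots:
    void setValue(int value)
    {
        if (value != m_value) {
            m_value = value;
            emit valueChanged(value);
        }
    }

signals:
    void valueChanged(int newValue); // body generated by moc

private:
    int m_value = 0;
};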
Related
I am working on a custom lightweight UI library for my C++ project, and I have a question about implementing a control's behavior.
Currently I have a system that creates instances of controls from provided files, so the graphical representation, text, dimensions, and other parameters are defined in an external file. Regarding buttons and other interactive controls: I have a base button class, and for each button I create a derived class with its own implementation of the behavior-related methods (onClick(), etc.); see this pseudo snippet:
class Button
{
public:
    Button();
    virtual void onClick();
};

class newButton : public Button
{
public:
    void onClick() override
    {
        // specific implementation
    }
};
I am looking for a way to describe the behavior externally (possibly in a scripting language) and attach it to a specific instance of the base button class, so that I won't need to create subclasses of Button and buttons/components can be described completely in an external file. Any thoughts or recommendations will be appreciated.
Thanks
Well, I think you've got another XY problem: run-time object polymorphism alone cannot give you what you want. Although it lets C++ code not care which specific implementation it uses, and lets it accept new, unknown implementations at run time, those new implementations still have to be compiled with a C++ compiler and be compliant with the base class definition and with the runtime-polymorphism details specific to your C++ implementation (your compiler, like gcc or MSVC).
Since I assume you don't want that, which also makes sense and is a pattern that is becoming popular in the C++ world (see QML or JavaScript/C++ hybrids), you have some options.
Either you implement all the basic behaviour building blocks in C++ (where you can of course use runtime polymorphism, or even compile-time polymorphism), and then simply combine them according to your UI description language at run time. To parse your DSL, i.e. the UI description language, you can use libraries like Boost.Spirit.
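As a hedged sketch of that first option (all names below are hypothetical, not from any real library): implement the building blocks in C++, register them by name, and let the external file choose among them.

#include <functional>
#include <iostream>
#include <map>
#include <string>

using Action = std::function<void()>;

// Behaviour building blocks implemented in C++, looked up by the
// names your UI description file uses.
std::map<std::string, Action> actionRegistry = {
    { "close_dialog",  [] { std::cout << "closing dialog\n"; } },
    { "save_document", [] { std::cout << "saving\n"; } },
};

struct Button
{
    Action onClick; // bound when the UI file is loaded; no subclassing

    void click() const { if (onClick) onClick(); }
};

int main()
{
    // Pretend the name "close_dialog" came from the external UI file.
    Button b;
    b.onClick = actionRegistry["close_dialog"];
    b.click();
}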
However, if you really want to script the UI logic and not simply describe it, you would have to design and implement a whole programming language that is also compatible with your C++ implementation's runtime object polymorphism, which I would consider a very big and long task.
But don't despair: there are ready-made solutions, like JavaScript/C++ in the form of React Native and Electron with native Node modules, or QML/Qt, or Lua, a scripting language specifically designed to be embedded in other languages.
I was wondering: is it possible to change the logic of an application at runtime? Maybe we could replace the implementation of an abstract class with another implementation? Or maybe we could replace a shared library at runtime...
Update: Suppose that I've got two implementations of a function foo(x, y) and can use either of them based on the strategy pattern. Now I want to know if it's possible to add a third implementation of foo(x, y) without restarting the application.
You can use a plugin (a library that you load at runtime) that exposes a new foo function.
I remember we implemented something similar at school, a calculator in which we could add new operations at runtime, without having to restart the program. See dlsym and dlopen.
Addenda
Be very careful when dlclose-ing a plugin that it is not still used in some active call stack frame. On Linux you can call dlopen many thousands of times, so you could accept never dlclose-ing plugins, at the cost of some address space leak.
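For reference, here is a minimal sketch of the plugin approach; the library name libfoo_v3.so and the extern "C" symbol foo are hypothetical, and on Linux you would link with -ldl:

#include <dlfcn.h>
#include <iostream>

int main()
{
    // Load a hypothetical plugin providing: extern "C" int foo(int, int);
    void* handle = dlopen("./libfoo_v3.so", RTLD_NOW);
    if (!handle) {
        std::cerr << dlerror() << '\n';
        return 1;
    }

    using foo_fn = int (*)(int, int);
    auto foo = reinterpret_cast<foo_fn>(dlsym(handle, "foo"));
    if (!foo) {
        std::cerr << dlerror() << '\n';
        return 1;
    }

    std::cout << foo(2, 3) << '\n';

    dlclose(handle); // only safe once no frame still uses the plugin
    return 0;
}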
Exactly. If by "replace the implementation of an abstract class with another implementation" you mean swapping which concrete class is instantiated, then yes: you can use runtime polymorphism and replace instances of one set of concrete classes with instances of another.
More specifically, there is a well-known pattern, the Strategy pattern, for exactly this purpose. Have a look at the wiki page, as it explains this very nicely, even with a code example and a diagram.
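As a minimal sketch of the pattern applied to the foo(x, y) from the question (the class names are mine, purely for illustration):

#include <iostream>
#include <memory>

struct FooStrategy
{
    virtual ~FooStrategy() = default;
    virtual int foo(int x, int y) const = 0;
};

struct AddFoo : FooStrategy
{
    int foo(int x, int y) const override { return x + y; }
};

struct MulFoo : FooStrategy
{
    int foo(int x, int y) const override { return x * y; }
};

int main()
{
    std::unique_ptr<FooStrategy> strategy = std::make_unique<AddFoo>();
    std::cout << strategy->foo(2, 3) << '\n'; // 5

    strategy = std::make_unique<MulFoo>();    // swapped at runtime
    std::cout << strategy->foo(2, 3) << '\n'; // 6
}

A third implementation would then be just one more concrete strategy, possibly obtained from a plugin as described in the other answer.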
The C++ mechanism of virtual functions does not allow you to change an existing object's implementation at run time.
However, you can implement whatever implementation change you want at runtime with function pointers.
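For instance, a minimal sketch (the function names are illustrative); note that a pointer obtained from dlsym can be retargeted in exactly the same way:

#include <iostream>

int foo_add(int x, int y) { return x + y; }
int foo_mul(int x, int y) { return x * y; }

int main()
{
    int (*foo)(int, int) = foo_add; // current implementation
    std::cout << foo(2, 3) << '\n'; // 5

    foo = foo_mul;                  // implementation changed at runtime
    std::cout << foo(2, 3) << '\n'; // 6
}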
Here is an article on self-modifying code that I read recently: http://mainisusuallyafunction.blogspot.com/2011/11/self-modifying-code-for-debug-tracing.html
I have been using two libraries, SFML and Box2D, while at the same time taking great pains to ensure that none of their functions or classes are exposed in the main body of my code, hiding them behind classes that serve as little more than mediators between my code and the library itself. My mediators take the following form:
class MyWindow {
public:
    // could be 10 or so functions like below
    int doSomething(int arg) {
        return library_window->doSomething(arg);
    }

private:
    library::window* library_window;
};
The benefit of this, at least as I've been told, is that my main code body is not reliant upon the library, so that if it changes, or if I choose to use a different one (say SDL or OpenGL in place of SFML), I can switch by merely amending the mediator classes. But the pain of having to code an access point for every feature I want to use is real, and it's repetitive...
Is this really how professional programmers are supposed to treat external libraries? And is it worth it?
Am I even doing this right?
Not worth it. Just use the libraries. If you end up wanting to change to a different third-party library, you'll end up needing to change your application code anyway; otherwise, what was the point of changing in the first place, if everything works the same in both versions?
Friends don't let friends over-engineer. Just say no.
The problem with the wrapper technique you're describing is that your wrapper is transparent (in the true sense of that word): every method in the library is still visible to you through the wrapper, with the same semantics, the same preconditions, etc. You can "see right through" your wrapper.
A transparent wrapper like that is useful to you only if you someday switch the underlying library to something that has identical, or at least very nearly identical, semantics. Consider this example. Let's say the library was std::fstream, your application needed to read and write files, and you diligently wrote a wrapper:
class MyFile {
    std::fstream* fst;
public:
    void writeData(void* data, size_t count) {
        fst->write((const char*) data, count);
    }
    void readData(void* buffer, size_t count) {
        fst->read((char*) buffer, count);
    }
    // etc, etc.
};
Now let's say you want (or need) to switch to asynchronous I/O with non-blocking reads and writes. There's simply no way that your transparent wrapper is going to help you make that transition. The asynchronous read requires two methods, one to start the read operation and one to confirm that the read has completed. It also requires a commitment from the application that the buffer won't be used in between those two method calls.
When all is said and done, a library interface wrapper is useful only when it is very carefully designed not to be transparent (good interfaces are opaque). Furthermore, to be useful, the library you are wrapping must be something you are intimately familiar with. So boost::filesystem can "wrap" pathnames for both DOS and Unix because the authors know POSIX, UNIX, and DOS pathnames intimately and designed the "wrapper" to effectively encapsulate those implementations.
From what you've described, it seems to me that your effort is going to end up wasted. Simple is better than complicated, and unless the wrapper is really encapsulating something (i.e., hiding the underlying library), direct is better than indirect.
That's not a license to write spaghetti -- your application still needs structure and isolation of the major components (e.g., isolate the UI from the actual calculations/simulations/document that your application provides). If you do that right, swapping the library some day will be a manageable task without any wrapper code.
You should wrap something under two circumstances:
You have reason to believe you might change it. And I don't mean "well, one day, maybe, kinda." I mean you have some real belief that you might switch libraries. Alternatively, you may need to support more than one library: maybe you allow a choice between using SDL and SFML. Or whatever.
You are providing an abstraction of that functionality. That is, you're not just making a thin wrapper; you're improving on that functionality. Simplifying the interface, adding features, etc.
That depends.
If you are using a very mature library and you probably won't migrate to another implementation, the mediator class is not necessary. For example, have you ever encapsulated the STL or a Boost library?
On the other hand, if the library you are using is new or there are many alternatives around, then an abstraction might be useful.
Yes, that's correct: you should code the basic functionality your program needs and write a wrapper that wraps (redundant...) the libraries to do it. Then, when adding a new library, you just write a new wrapper, and you can swap out the underlying wrappers from underneath your program without it having to care. If you separate your concerns this way, it is much easier to add functionality later, because you don't have to hunt down every place you use the library's functions or litter the code with complicated #ifdef statements to switch libraries; you would just use a simple #ifdef to define something like
#ifdef LIB_A
typedef MyGfxClassA MyGfxClass;
#endif
etc...
It's not a bad idea if you want to provide a simpler interface (convention over configuration). But if you are simply going to provide a 1-to-1 translation of all the utilities in the library, then it's not really worth it.
To follow on from my previous question about virtual and multiple inheritance (in a cross-platform scenario): after reading some answers, it has occurred to me that I could simplify my model by keeping the server and client classes and replacing the platform-specific classes with #ifdefs (which is what I was going to do originally).
Would using this code be simpler? It'd mean fewer files, at least! The downside is that it creates a somewhat "ugly" and slightly harder to read Foobar class, since there are #ifdefs all over the place. Note that our Unix Foobar source code will never be passed to the other platform's compiler, so this has the same effect as #ifdef (since we'd also use #ifdef to decide which platform-specific class to call).
class Foobar {
public:
    int someData;

#if WINDOWS
    void someWinFunc1();
    void someWinFunc2();
#elif UNIX
    void someUnixFunc1();
    void someUnixFunc2();
#endif

    void crossPlatformFunc();
};
class FoobarClient : public Foobar { /* ... */ };
class FoobarServer : public Foobar { /* ... */ };
Note: Some stuff (ctor, etc) left out for a simpler example.
Update:
For those who want to read more into the background of this issue, I really suggest skimming over the appropriate mailing list thread. Things start to get interesting around the 3rd post. Also, there is a related code commit where you can see the real-life code in question.
Preferably, contain the platform-dependent nature of the operations within the methods, so the class declaration remains the same across platforms (i.e., use #ifdefs in the implementations, as sketched after this answer).
If you can't do this, then your class ought to be two completely separate classes, one for each platform.
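A rough sketch of the preferred option, assuming one shared header and per-platform code confined to the implementation (_WIN32 is the usual predefined macro on Windows compilers; the names are otherwise illustrative):

// foobar.h -- the declaration is identical on every platform
class Foobar {
public:
    void crossPlatformFunc();
};

// foobar.cpp -- the platform dependence stays in the implementation
void Foobar::crossPlatformFunc()
{
#ifdef _WIN32
    // Windows-specific implementation
#else
    // POSIX implementation
#endif
}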
My personal preference is to push the ifdef magic into the make files, so the source code stays as clean as possible. Then have one implementation file per platform. This of course implies you can come up with an interface common to all your supported systems.
Edit:
One common way of getting around such a lowest-common-denominator design in cross-platform development is the opaque handle idiom. It is the same idea as the ioctl(2) escape route: have a method returning an opaque, forward-declared structure that is defined differently for each platform (preferably in the implementation file), and only use it when the common abstraction doesn't hold.
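A rough sketch of the idiom (all names illustrative):

// foobar.h -- the handle type is only forward-declared here
struct PlatformHandle;

class Foobar {
public:
    PlatformHandle* nativeHandle(); // escape hatch, use sparingly
    void crossPlatformFunc();       // the common abstraction
private:
    PlatformHandle* handle;
};

// foobar_win32.cpp -- each platform file defines the real layout:
// struct PlatformHandle { HWND hwnd; };

// foobar_unix.cpp:
// struct PlatformHandle { Window window; };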
If you're quite sure that you won't use functions from the other OS on the one being compiled for, then using ifdefs has a lot of advantages:
Unused code and variables won't be compiled into the executable (although smart linking already helps a bit there)
It will be easy to see what code is live
You will be able to easily include platform-dependent files.
However, classing based on OS can still have its benefits:
You'll be able to be sure that the code compiles on all platforms when making changes for one
The code and design will be cleaner
The latter is achieved by ifdefing the platform-specific code in the class bodies themselves, or just ifdefing out the non-supported OS variants at compile time.
My preference is to push platform-specific issues to the leaf-most modules and try to wrap them in a common interface. Put the specific methods, classes, and functions into separate translation units, and let the linker and build process determine which specific translation units to combine. This makes for much cleaner code and easier debugging.
From experience: I had a project that used #ifdef VERSION2, and I spent a week debugging because one module used #ifdef VERSION_2 instead. A subtlety like that would be easier to catch if all the version-2-specific code lived in version 2 modules.
Having #ifdefs for platform-specific code is idiomatic, especially since code for one platform won't even compile if it's enabled on another. It sounds like a good approach to me.
Most mature C++ projects seem to have their own reflection and attribute system, i.e., one for defining attributes that can be accessed by string and are automatically serializable. At least, many C++ projects I have participated in seemed to reinvent the wheel.
Do you know of any good open-source libraries for C++ that support reflection and attribute containers? Specifically:
Defining RTTI and attributes via macros
Accessing RTTI and attributes via code
Automatic serialisation of attributes
Listening to attribute modifications (e.g. OnValueChanged)
There is a new project providing reflection in C++ using a totally different approach: CAMP.
https://github.com/tegesoft/camp
CAMP doesn't use a precompiler; the classes/properties/functions/... are declared manually, using a syntax similar to Boost.Python or luabind. Of course, people can use a precompiler like gccxml or OpenC++ to generate these declarations if they prefer.
It's based on pure C++ and Boost headers only, and thanks to the power of template metaprogramming it supports any kind of bindable entity (inheritance and strange constructors are not a problem, for example).
It is distributed under the MIT licence (previously LGPL).
This is what you get when C++ meets Reflection:
Whatever you choose, it'll probably have horrible macros, hard-to-debug code, or weird build steps. I've seen one system automatically generate the serialisation code from DevStudio's PDB file.
Seriously though, for small projects it'll be easier to write save/load functions (or use streaming operators). In fact, that might hold for big projects too: it's obvious what's going on, and you'd usually need to change the code anyway if the structure changes.
You could have a look at the two tools below. I've never used either of them, so I can't tell you how (im)practical they are.
XRTTI:
Xrtti is a tool and accompanying C++ library which extends the standard runtime type system of C++ to provide a much richer set of reflection information about classes and methods to manipulate these classes and their members.
OpenC++:
OpenC++ is C++ frontend library (lexer+parser+DOM/MOP) and source-to-source translator. OpenC++ enables development of C++ language tools, extensions, domain specific compiler optimizations and runtime metaobject protocols.
I looked at these things for quite a while, but they tend to be very heavy-handed. They might prevent you from using inheritance, or from having strange constructors, and so on. In the end they were too much of a burden instead of a convenience.
The approach for exposing members that I now use is quite lightweight and lets you explore a class, e.g. for serialization or for setting all fields called "x" to 0. It's also statically determined, so it is very, very fast: there are no layers of library code or code generation to worry about messing with the build process, and it generalises to hierarchies of nested types.
Set your editor up with some macros to automate writing some of these things.
#include <iostream>

struct point
{
    int x;
    int y;

    // add this to your classes
    template <typename Visitor>
    void visit(Visitor v) const
    {
        v->visit(x, "x");
        v->visit(y, "y");
    }
};

/** Outputs any type to standard output in key=value format */
struct stdout_visitor
{
    // recurse into a nested type that itself provides visit()
    template <typename T>
    void visit(const T& rhs)
    {
        rhs.visit(this);
    }

    // print a scalar field
    template <typename Scalar>
    void visit(const Scalar& s, const char* name)
    {
        std::cout << name << " = " << s << " ";
    }
};
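Driving it then looks like this (a short usage sketch):

int main()
{
    point p{1, 2};
    stdout_visitor v;
    p.visit(&v); // prints: x = 1 y = 2
}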
This is a notorious weakness of the C++ language in general because the things that would need to be standardized to make reflection implementations portable and worthwhile aren't standard. Calling conventions, object layouts, and symbol mangling come to mind, but there are others as well.
The lack of direction from the standard means that compiler implementers will do some things differently, which means that very few people have the motivation to write a portable reflection library, which means that people who need reflection re-invent the wheel, but only just enough for what they need. This happens ad infinitum, and here we are.
I looked at this for a while too. The currently easiest solution seems to be BOOST_FUSION_ADAPT_STRUCT. Practically, once you have the library and headers, you only need to list your struct's fields in the BOOST_FUSION_ADAPT_STRUCT() macro, as the sketch below shows. Yes, it has the restrictions many other people have mentioned, and it does not support listeners directly.
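A hedged sketch of how this looks (the struct and field names are mine; the adapted struct becomes a Fusion sequence you can iterate over generically):

#include <boost/fusion/include/adapt_struct.hpp>
#include <boost/fusion/include/for_each.hpp>
#include <iostream>
#include <string>

struct employee
{
    std::string name;
    int age;
};

// One macro invocation is all the per-struct boilerplate required.
BOOST_FUSION_ADAPT_STRUCT(
    employee,
    (std::string, name)
    (int, age)
)

struct printer
{
    template <typename T>
    void operator()(const T& field) const { std::cout << field << ' '; }
};

int main()
{
    employee e{"Alice", 30};
    boost::fusion::for_each(e, printer{}); // prints: Alice 30
}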
The other promising solutions I looked into are CAMP and XRTTI/gccxml; however, both bring an external tool dependency into your project.
Years ago I used Perl's c2ph/pstruct to dump the meta info from the output of gcc -gstabs; that is less intrusive but needs more work, though it worked perfectly for me.
Regarding the Boost/__cxa approach: once you figure out all the small details, adding or changing structs or fields is simple to maintain. We currently use it to build a custom type-binding layer on top of D-Bus, to serialize the API and hide the transport/RPC details for a managed object service subsystem.
Not a general solution, but Qt supports this via its meta-object compiler, and it is GPL.
My understanding from talking to the Qt people was that this isn't possible with pure C++, hence the need for the moc.
An automatic introspection/reflection toolkit: use a meta compiler like Qt's and add the meta information directly into the object files. Intuitive and easy to use, with no external dependencies. It even allows you to automatically reflect std::string and then use it in scripts. Please visit IDK.