In relation to this topic: What is the copy-and-swap idiom?
It states that one class should handle at most one resource. What is meant by a resource?
EDIT: For example, I have a class that handles the information for each monitor and contains an array of the desktop pixels. Would the array, and only the array, be considered a resource? Would the array of monitors that holds the monitor information, together with the array of desktop pixels, be another resource that would require another class? It is in another class already, but is that what is meant?
I have searched here and Google but only find more information relating to the rule of three or resource files (*.rc and MSDN). Nothing that relates to a definition.
That question is referring to concepts in C++ in general. In this case, it is also using the general concept of a "resource": an object, usually external, provided by an outside source, that a block of code uses. It's the same as a "resource" in real life.
In the case of copy-and-swap, it is referring to what your C++ class represents. The idea is simple: each instance of your C++ class will provide access to a given resource. For instance, let's take a file on disk. The resource is the file; the operating system provides access to that file. Your C++ class would wrap around the operating system's API calls to read and write to that file.
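To make that concrete, here is a minimal sketch (an illustrative class of my own, not anything from the linked question) where the single resource is a file handle and the class wraps the C runtime's fopen/fread/fclose calls:

#include <cstddef>
#include <cstdio>
#include <stdexcept>

class File
{
public:
    explicit File(const char* path) : m_handle(std::fopen(path, "rb"))
    {
        if (!m_handle)
            throw std::runtime_error("could not open file");
    }
    ~File() { std::fclose(m_handle); }   // releasing the resource is this class's job

    File(const File&) = delete;          // exactly one object is responsible for this one resource
    File& operator=(const File&) = delete;

    std::size_t read(char* buffer, std::size_t size)
    {
        return std::fread(buffer, 1, size, m_handle);
    }

private:
    std::FILE* m_handle;                 // the resource: an open file provided by the OS
};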
The question that you linked is asking about how to mix these classes with C++'s initialization rules. How you follow these rules is going to be shaped by what kinds of access the outside source lets you use and what you personally need to do.
Of course, your program can also have its own resources. For example, a global array that maps one type of enumeration to another type of enumeration could be a resource in your program. So the answer to your question of "what should be a resource in my program" is "it really depends on how you wrote your program".
Windows programs also have their own "resources", such as bitmaps, icons, dialog box control layouts, menu layouts, localized strings, and other stuff. To allow these things to be embedded in the output binary, Windows provides its own system, and also calls all these "resources". A .rc file is merely how you would list all of these Windows resources; you build that into your Windows program.
GLib (which GTK+ is built on top of), Qt, and the various Apple APIs have their own analogous systems, each of which also have "resource" in their names.
(You can write C++ classes that provide access to all these specific resources too; in that case, the resource your class manages happens to be a Windows resource.)
But it is very important when reading not to confuse the general term ("resource") with the specific technology (Windows resources) and vice versa, especially for such an abstract concept as a resource. Keep that in mind as you continue your programming journey.
Resources are things like: raw pointers that need to be deleted (that's why we have smart pointers like std::unique_ptr and std::shared_ptr), files that need to be properly closed (that's why we have std::fstream), raw thread handles that you would otherwise have to manage yourself on different operating systems (that's why we have std::thread), references (std::reference_wrapper)... and so on. As you may have noticed, each of them manages a single resource, not more.
The idea is that you don't want a single unique_ptr to manage 2 pointers for you. You would create a separate unique_ptr object for each raw pointer.
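For example (a trivial sketch with made-up types):

#include <memory>

struct Foo {};
struct Bar {};

// One owner per resource: two separate unique_ptr objects,
// rather than one object juggling two raw pointers.
std::unique_ptr<Foo> foo = std::make_unique<Foo>();
std::unique_ptr<Bar> bar = std::make_unique<Bar>();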
For your specific question, which you have edited in, generally what you want to do is:
create a class MonitorsManager that manages all the monitors (that's its task)
create a class MonitorInfo that manages a single monitor's information and its pixels (sketched below)
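A rough sketch of what that split could look like, with copy-and-swap on the class that owns the pixel buffer (the names and layout here are assumptions, not your actual code):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

class MonitorInfo
{
public:
    MonitorInfo(int width, int height)
        : m_width(width), m_height(height),
          m_pixels(new std::uint32_t[static_cast<std::size_t>(width) * height]) {}

    MonitorInfo(const MonitorInfo& other)
        : m_width(other.m_width), m_height(other.m_height),
          m_pixels(new std::uint32_t[static_cast<std::size_t>(other.m_width) * other.m_height])
    {
        std::copy(other.m_pixels,
                  other.m_pixels + static_cast<std::size_t>(m_width) * m_height,
                  m_pixels);
    }

    ~MonitorInfo() { delete[] m_pixels; }

    friend void swap(MonitorInfo& a, MonitorInfo& b) noexcept
    {
        using std::swap;
        swap(a.m_width,  b.m_width);
        swap(a.m_height, b.m_height);
        swap(a.m_pixels, b.m_pixels);
    }

    // Copy-and-swap: take the argument by value (that's the copy), then swap with it.
    MonitorInfo& operator=(MonitorInfo other) noexcept
    {
        swap(*this, other);
        return *this;
    }

private:
    int            m_width;
    int            m_height;
    std::uint32_t* m_pixels;   // the single resource this class manages
};

// MonitorsManager owns no raw memory itself; it just holds MonitorInfo objects.
class MonitorsManager
{
public:
    void add(const MonitorInfo& monitor) { m_monitors.push_back(monitor); }
private:
    std::vector<MonitorInfo> m_monitors;
};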
You will notice things get easier if you have a good design, because in the future you will want to be able to easily edit and update those classes, and if you mix everything, it's gonna be really hard, trust me.
Also, your MonitorInfo class should not have any dependency on MonitorsManager, because if people have to include the class "MonitorsManager" in their project in order to use your "MonitorInfo" class, then you have failed. Not everyone has multiple monitors; some may have a single one.
Related
I would like to develop relatively simple dialog-based GUIs for the Raspberry Pi 2 (Linux/Raspbian), which is why I would like to study Qt, beginning with the basic/core features. RTFM, I know. So I read one of the manuals and, to be honest, understood very little.
The biggest question in my head: Why would it have been worth reading in the first place? What is the purpose of the Qt properties? What are they good for?
How do they work, internally and from the point of view of writing the code?
What is the memory and performance cost? Is it necessary to use them? If not, is it worth using them considering my use case?
Properties are used to read/write values in objects through a standard, generic interface. They are needed for interfacing with script engines (QtScript or QtQml), the widget designer, and remote object interfaces (QtDBus, QtRemoteObjects, QtWebChannel).
Most properties get implemented through normal getter/setter functions which are then tied to a property name and registered with the property system using the Q_PROPERTY macro. Alternatively, the property name can be bound to a member variable. Read/write access with the generic property() and setProperty() API is then rerouted either to calls to the registered getter/setter or to the registered member variable.
The property information is stored as a QMetaProperty in the class' staticMetaObject; access through the property API will incur a lookup based on the property name.
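A minimal sketch of how that looks in code (illustrative class, not something from the Qt docs):

#include <QObject>
#include <QVariant>

class Circle : public QObject
{
    Q_OBJECT
    // Ties the property name "radius" to the getter/setter below.
    Q_PROPERTY(double radius READ radius WRITE setRadius)
public:
    double radius() const { return m_radius; }
    void setRadius(double r) { m_radius = r; }
private:
    double m_radius = 0.0;
};

// Generic access by name, rerouted to the registered getter/setter:
//   Circle c;
//   c.setProperty("radius", 2.5);        // calls setRadius(2.5)
//   QVariant r = c.property("radius");   // calls radius()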
Your use case doesn't seem to require usage of properties.
Another use case, as mentioned by Kuba in the comment, is to attach data to QObject based objects without modifying them.
These kinds of properties, so-called "dynamic properties", are handled slightly differently. Instead of getter/setter functions or a member variable, they are stored in generic internal storage, currently a QVector<QVariant>.
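For example, setting a name that was never declared with Q_PROPERTY simply creates a dynamic property on that object (the "userTag" name here is made up):

#include <QObject>
#include <QVariant>

void attachTag(QObject* obj)
{
    obj->setProperty("userTag", 42);           // no declared Q_PROPERTY named "userTag", so a dynamic property is created
    QVariant tag = obj->property("userTag");   // retrieved by name from the generic internal storage
    Q_UNUSED(tag);
}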
The DYNAMIC FACTORY pattern describes how to create a factory that allows the creation of unanticipated products derived from the same abstraction by storing the information about their concrete type in external metadata.
from: http://www.wirfs-brock.com/PDFs/TheDynamicFactoryPattern.pdf
The PDF says:
Configurability. We can change the behavior of an application by just changing its configuration information. This can be done without the need to change any source code (just change the descriptive information about the type in the metadata repository) or to restart the application (if caching is not used – if caching is used the cache will need to be flushed).
It is not possible to introduce new types to a running C++ program without modifying source code. At the very least, you'd need to write a shared library containing a factory to generate instances of the new type: but doing so is expressly ruled out by the PDF:
Extensibility / Evolvability. New product types should be easily added without requiring neither a new factory class nor modifying any existing one.
This is not practical in C++.
Still, the functionality can be achieved by using metadata to guide some code writing function, then invoking the compiler (whether as a subprocess or a library) to create a shared library. This is pretty much what the languages mentioned in the PDF are doing when they use reflection and metadata to ask the virtual machine to create new class instances: it's just more normal in those language environments to need bits of the compiler/interpreter hanging around in memory, so it doesn't seem such a big step.
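As a sketch of the loading half of that approach (assuming a POSIX system and a freshly compiled plugin that exports a create_product function with C linkage; all names here are made up):

#include <dlfcn.h>
#include <memory>
#include <stdexcept>
#include <string>

struct Product
{
    virtual ~Product() = default;
    virtual void run() = 0;
};

// The generated shared library is expected to export:
//   extern "C" Product* create_product();
using CreateFn = Product* (*)();

std::unique_ptr<Product> load_product(const std::string& library_path)
{
    void* handle = dlopen(library_path.c_str(), RTLD_NOW);
    if (!handle)
        throw std::runtime_error(dlerror());

    auto create = reinterpret_cast<CreateFn>(dlsym(handle, "create_product"));
    if (!create)
        throw std::runtime_error(dlerror());

    // The handle is deliberately left open for the lifetime of the process.
    return std::unique_ptr<Product>(create());
}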
Yes...
Look at the Factories classes in the Qtilities Qt library.
@TonyD, regarding:
We can change the behavior of an application by just changing its configuration information.
It is 100% possible if you interpret the sentence in another way. What I read and understand is that you change a configuration file (XML in the doc) that gets loaded, in order to change the behaviour of the application. So perhaps your application has two loggers, one to a file and one to a GUI, and the config file can be edited to choose one or both to be used. Thus the application is unchanged, but its behaviour is changed. The requirement is that anything you can configure in the file is already available in the code, so asking it to, say, log over the network will not work since that is not implemented.
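A minimal sketch of that idea (all names are illustrative): a registry maps the strings found in the config file to factory functions, so the set of loggers is decided at load time without touching the code.

#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Logger
{
    virtual ~Logger() = default;
    virtual void log(const std::string& message) = 0;
};
struct FileLogger : Logger { void log(const std::string&) override { /* write to file */ } };
struct GuiLogger  : Logger { void log(const std::string&) override { /* append to a widget */ } };

// Names the config file may mention, mapped to factories.
const std::map<std::string, std::function<std::unique_ptr<Logger>()>>& registry()
{
    static const std::map<std::string, std::function<std::unique_ptr<Logger>()>> r = {
        { "file", [] { return std::unique_ptr<Logger>(new FileLogger); } },
        { "gui",  [] { return std::unique_ptr<Logger>(new GuiLogger);  } },
    };
    return r;
}

// Anything configured but not implemented (e.g. "network") is simply ignored.
std::vector<std::unique_ptr<Logger>> make_loggers(const std::vector<std::string>& configured)
{
    std::vector<std::unique_ptr<Logger>> loggers;
    for (const auto& name : configured)
    {
        auto it = registry().find(name);
        if (it != registry().end())
            loggers.push_back(it->second());
    }
    return loggers;
}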
New product types should be easily added without requiring neither a new factory class nor modifying any existing one.
Yes, that sounds a bit impossible. I will accept the ability to add new ones without having to change the original application. Thus one should be able to add types using plugins or another method and leave the application/factory/existing classes intact and unchanged.
All of the above is supported by the example provided. Although Qtilities is a Qt library, the factories are not Qt specific.
I am developing an interface that can be loaded dynamically. It should also be compiler independent, so I wanted to export interfaces.
I am now facing the following problems:
Problem 1: The interface functions take some custom data types (basically classes or structures) as in/out parameters. I want to initialise members of these classes with default values using constructors. If I do this, it is not possible to load my library dynamically and it becomes compiler dependent. How do I solve this?
Problem 2: Some interfaces return lists (or maps) of elements to the client. I am using std containers for this purpose. But this is also compiler dependent (and sometimes compiler-version dependent as well).
Thanks.
Code compiled differently can only work together if it adopts the same Application Binary Interface (ABI) for the set of types used for parameters and return values. ABIs are significant at a much deeper level - name mangling, virtual dispatch tables etc. - but my point is that if there's an ABI your compilers support that allows calling functions with simple types, you can at least think about hacking together some support for more complex types like compiler-specific implementations of Standard containers and user-defined types.
You'll have to research what ABI support your compilers provide, and infer what you can about what they'll continue to provide.
If you want to support other types beyond what the relevant ABI standardises, options include:
use simpler types to expose internals of more complex types
pass [const] char* and size_t extracted by my_std_string.data() or &my_std_string[0] and my_std_string.size(), and similarly for std::vector (see the sketch after this list)
serialise the data and deserialise it using the data structures of the receiver (can be slow)
provide a set of function pointers to simple accessor/mutator functions implemented by the object that created the data type
e.g. the way the classic C qsort function accepts a pointer to an element comparison function
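A quick sketch of that second bullet, with made-up exported function names:

#include <cstddef>
#include <string>
#include <vector>

// Exported with C linkage: only builtin types cross the module boundary.
extern "C" void consume_bytes(const char* data, std::size_t size);
extern "C" void consume_doubles(const double* values, std::size_t count);

void caller()
{
    std::string s = "hello";
    consume_bytes(s.data(), s.size());     // hand over the string's characters as pointer + length

    std::vector<double> v{1.0, 2.0, 3.0};
    consume_doubles(v.data(), v.size());   // same idea for a vector
}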
As I usually have a multithreading focus, I'm mostly going to bark about your second problem.
You already realized that passing elements of a container over an API seems to be compiler dependent. It's actually worse: it's header file & C++-library dependent, so at least for Linux you're already stuck with two different sets: libstdc++ (originating from gcc) and libc++ (originating from clang).
Because part of the containers is header files and part is library code, getting things ABI-independent is close to impossible.
My bigger worry is that you actually thought of passing container elements around. This is a huge threadsafety issue: the STL containers are not threadsafe - by design.
By passing references over the interface, you are passing "pointers to encapsulated knowledge" around - the users of your API could make assumptions of your internal structures and start modifying the data pointed to. That is usually already really bad in a singlethreaded environment, but gets worse in a multithreaded environment.
Secondly, pointers you provided could get stale, not good either.
Make sure to return copies of your inner knowledge to prevent user modification of your structures.
Passing things const is not enough: const can be cast away and you still expose your innards.
So my suggestion: hide the data types, only pass simple types and/or structs that you fully control (i.e. are not dependent on STL or boost).
Designing an API with the widest ABI compatibility is an extremely complex subject, even more so when C++ is involved instead of C.
Yet some of the more theoretical-sounding issues aren't really quite as bad as they sound in practice. For example, in theory, calling conventions and structure padding/alignment sound like they could be major headaches. In practice they aren't so much, and you can even resolve such issues in hindsight by specifying additional build instructions to third parties or decorating your SDK functions with macros indicating the appropriate calling convention. By "not so bad" here, I mean that they can trip you up but they won't have you going back to the drawing board and redesigning your entire SDK in response.
The "practical" issues I want to focus on are issues that can have you revisiting the drawing board and redoing the entire SDK. My list is also not exhaustive, but are some of the ones I think you should really keep in mind first.
You can also treat your SDK as consisting of two parts: a dynamically-linked part that actually exports functionality whose implementation is hidden from clients, and a statically (internally) linked convenience library part that adds C++ wrappers on top. If you treat your SDK as having these two distinct parts, you're allowed a lot more liberty in the statically-linked library to use a lot more C++ mechanisms.
So, let's get started with those practical headache inducers:
1. The binary layout of a vtable is not necessarily consistent across compilers.
This is, in my opinion, one of the biggest gotchas. We're usually looking at 2 main ways to access functionality from one module to another at runtime: function pointers (including those provided by dylib symbol lookup) and interfaces containing virtual functions. The latter can be so much more convenient in C++ (both for implementor and client using the interface), yet unfortunately using virtual functions in an API that aims to be binary compatible with the widest range of compilers is like playing minesweeper through a land of gotchas.
I would recommend avoiding virtual functions outright for this purpose unless your team consists of minesweeper experts who know all of these gotchas. It's useful to try to fall in love with C again for those public interface parts and start building a fondness for these kinds of interfaces consisting of function pointers:
struct Interface
{
    void* opaque_private_data;                    /* implementation state hidden behind an opaque pointer */
    void (*func1)(struct Interface* self, ...);   /* each "method" takes the interface itself, like an explicit this */
    void (*func2)(struct Interface* self, ...);
    void (*func3)(struct Interface* self, ...);
};
These present far fewer gotchas and are nowhere near as fragile against changes (ex: you're perfectly allowed to do things like add more function pointers to the bottom of the structure without affecting ABI).
2. Stub libs for dylib symbol lookup are linker-specific (as are all static libs in general).
This might not seem like a big deal until combined with #1. When you toss out virtual functions for the purpose of exporting interfaces, then the next big temptation is to often export whole classes or select methods through a dylib.
Unfortunately doing this with manual symbol lookup can become very unwieldy very quickly, so the temptation is to often do this automatically by simply linking to the appropriate stub.
Yet this too can become unwieldy when your goal is to support as many compilers/linkers as possible. In such a case, you may have to possess many compilers and build and distribute different stubs for each possibility.
So this can kind of push you into a corner where it's no longer very practical to export class definitions. At this point you might simply export free-standing functions with C linkage (to avoid C++ name mangling, which is another potential source of headaches).
One of the things that should be obvious already is that we're getting nudged more and more towards favoring a C or C-like API if our goal is universal binary compatibility without opening up too many cans of worms.
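That usually ends up looking something like the following (the macro and function names here are illustrative; a real header would also switch to dllimport on the client side):

// sdk_exports.h -- what clients see: free functions with C linkage, no classes.
#if defined(_WIN32)
  #define SDK_API __declspec(dllexport)
#else
  #define SDK_API __attribute__((visibility("default")))
#endif

extern "C"
{
    SDK_API int  sdk_initialize(void);
    SDK_API void sdk_shutdown(void);
}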
3. Different modules have 'different heaps'.
If you allocate memory in one module and try to deallocate it in another, then you're trying to free memory from a mismatching heap and will invoke undefined behavior.
Even in plain old C, it's easy to forget this rule and malloc in one exported function only to return a pointer to it with the expectation that the client accessing the memory from a different module will free it when done. This once again invokes undefined behavior, and we have to export a second function to indirectly free the memory from the same module that allocated it.
This can become a much bigger gotcha in C++ where we often have class templates that have internal linkage that implicitly do memory management. For example, even if we roll our own std::vector-like sequence like List<T>, we can run into a scenario where a client creates a list, passes it to our API by reference where we use functions that can allocate/deallocate memory (like push_back or insert) and butt heads with this mismatching heap/free store issue. So even this hand-rolled container should ensure that it allocates and deallocates memory from the same central location if it's going to be passed around across modules, and placement new will become your friend when implementing such containers.
4. Passing/returning C++ standard objects is not ABI-compatible.
This includes C++ standard containers, as you have already guessed. There's no really practical way to ensure that one compiler, when including <vector>, will use a representation of something like std::vector that is compatible with another compiler's. So passing/returning such standard objects whose representation is outside of your control is generally out of the question if you're targeting wide binary compatibility.
These don't even necessarily have compatible representations within two projects built by the same compiler, as their representations can vary in incompatible ways based on build settings.
This might make you think that you should now roll all kinds of containers by hand, but I would suggest a KISS approach here. If you're returning a variable number of elements as a result from a function, then we don't need a wide range of container types. We only need one dynamic array kind of container, and it doesn't even have to be a growable sequence, just something with proper copy, move, and destruction semantics.
It might seem nicer and could save some cycles if you just returned a set or a map in a function that computes one, but I'd suggest forgetting about returning these more sophisticated structures and convert to/from this basic dynamic array kind of representation. It's rarely the bottleneck you might think it would be to transfer to/from contiguous representations, and if you actually do run into a hotspot as a result of this which you actually gained from a legit profiling session of a real world use case, then you can always add more to your SDK in a very discrete and selective fashion.
You can also always wrap those more sophisticated containers like map into a C-like function pointer interface that treats the handle to the map as opaque, hidden away from clients. For heftier data structures like a binary search tree, paying the cost of one level of indirection is generally very negligible (for simpler structures like a random-access contiguous sequence, it generally isn't quite as negligible, especially if your read operations like operator[] involve indirect calls).
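As a sketch of that opaque-handle style (names are made up; the concrete map type never appears in the public header):

// Clients only ever see a pointer to an incomplete type.
typedef struct SdkMap SdkMap;

extern "C"
{
    SdkMap* sdk_map_create(void);
    void    sdk_map_destroy(SdkMap* map);                          /* freed by the module that allocated it */
    void    sdk_map_insert(SdkMap* map, const char* key, double value);
    int     sdk_map_find(const SdkMap* map, const char* key, double* out_value);   /* nonzero if found */
}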
Another thing worth noting is that everything I've discussed so far relates to the exported, dynamically-linked side of your SDK. The static convenience library that is internally linked is free to receive and return standard objects to make things convenient for the third party using your library, provided that you're not actually passing/returning them in your exported interfaces. You can even avoid rolling your own containers outright and just take a C-style mindset to your exported interfaces, returning raw T* pointers that need to be freed, while your convenience library does that automatically and transfers the contents to std::vector<T>, e.g.
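For instance (hypothetical names), the exported side can stay C-like while the internally-linked wrapper hands the client a std::vector:

#include <cstddef>
#include <vector>

// Exported from the SDK module; the same module later frees the memory (see point 3).
extern "C" double* sdk_compute_samples(std::size_t* out_count);
extern "C" void    sdk_free_samples(double* samples);

// Statically-linked convenience wrapper that clients actually call.
inline std::vector<double> compute_samples()
{
    std::size_t count = 0;
    double* raw = sdk_compute_samples(&count);
    std::vector<double> result(raw, raw + count);   // copy into a vector built with the client's own runtime
    sdk_free_samples(raw);                          // give the memory back to the heap it came from
    return result;
}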
5. Throwing exceptions across module boundaries is undefined.
We should generally not be throwing exceptions from one module to be caught in another when we cannot ensure compatible build settings in the two modules, let alone the same compiler. So throwing exceptions from your API to indicate input errors is generally out of the question in this case.
Instead we should catch all possible exceptions at the entry points to our module to avoid leaking them into the outside world, and translate all such exceptions into error codes.
The statically-linked convenience library can still call one of your exported functions, check the error code, and in the case of failure, throw an exception. This is perfectly fine here since that convenience library is internally linked to the module of the third party using this library, so it's effectively throwing the exception from the third party module to be caught by the same third party module.
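A rough sketch of that boundary (the function and error names are made up):

#include <stdexcept>
#include <string>

void do_work_impl(int input);   // internal C++ code somewhere in the SDK module; may throw

// Error codes that are allowed to cross the module boundary.
enum SdkError { SDK_OK = 0, SDK_INVALID_ARGUMENT = 1, SDK_UNKNOWN_ERROR = 2 };

// Exported entry point: no exception escapes the module.
extern "C" int sdk_do_work(int input)
{
    try
    {
        do_work_impl(input);
        return SDK_OK;
    }
    catch (const std::invalid_argument&) { return SDK_INVALID_ARGUMENT; }
    catch (...)                          { return SDK_UNKNOWN_ERROR; }
}

// Statically-linked convenience wrapper: the exception it throws is caught
// in the same third-party module that threw it.
inline void do_work(int input)
{
    const int error = sdk_do_work(input);
    if (error != SDK_OK)
        throw std::runtime_error("sdk_do_work failed with code " + std::to_string(error));
}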
Conclusion
While this is, by no means, an exhaustive list, these are some caveats that can, when unheeded, cause some of the biggest issues at the broadest level of your API design. These kinds of design-level issues can be exponentially more expensive to fix in hindsight than implementation-type issues, so they should generally have the highest priority.
If you're new to these subjects, you can't go too far wrong favoring a C or very C-like API. You can still use a lot of C++ implementing it and can also build a C++ convenience library back on top (your clients don't even have to use anything but the C++ interfaces provided by that internally-linked convenience library).
With C, you're typically looking at more work at the baseline level, but potentially far fewer of those disastrous design-level gotchas. With C++, you're looking at less work at the baseline level, but far more potentially disastrous surprise scenarios. If you favor the latter route, you generally want to ensure that your team's expertise with ABI issues is higher with a larger coding standards document dedicating large sections to these potential ABI gotchas.
For your specific questions:
Problem 1: The interface functions take some custom data types (basically classes or structures) as in/out parameters. I want to initialise members of these classes with default values using constructors. If I do this, it is not possible to load my library dynamically and it becomes compiler dependent. How do I solve this?
This is where that statically-linked convenience library can come in handy. You can statically link all that convenient code like a class with constructors and still pass in its data in a more raw, primitive kind of form to the exported interfaces. Another option is to selectively inline or statically link the constructor so that its code is not exported as with the rest of the class, but you probably don't want to be exporting classes as indicated above if your goal is max binary compatibility and don't want too many gotchas.
Problem 2: Some interfaces return lists (or maps) of elements to the client. I am using std containers for this purpose. But this is also compiler dependent (and sometimes compiler-version dependent as well).
Here we have to forgo those standard container goodies at least at the exported API level. You can still utilize them at the convenience library level which has internal linkage.
DirectX 9 / C++
Do you declare the d3ddevice global to your entire app/game, or do you pass it into the classes that require the d3ddevice?
What is the usual way?
My understanding (and this may be wrong) is that if you declare it globally, all classes and functions will be burdened by the header that declares that global variable once everything is compiled?
I could be more specific about my question and my application, but I'm just looking for the typical way.
I know how to set up the d3ddevice etc.; it's just a question of what is best.
I would recommend you wrap everything within a class and never put anything at global scope, because global variables can be accessed from anywhere and that can make it very hard to keep track of the variable and who is and isn't using it.
Little bit late to the party here, but I also just recently stumbled into this same design question. It's a little surprising to me that there isn't more talk about it on the internet. I even tried perusing random github libraries to see if I could glean how others did it. Didn't find much. I also have an example of when you can't declare the d3d device as a global/static object. For my game/engine, I have a dll which serves as my engine framework. This dll is consumed by both an editor, and a game client (what people will use to play my game). This allows both things (and other applications in the future if desired) to access all of the world objects, math code, collections, etc.
With that system, I can't think of a way to make the Device object static and still use it in both the editor and the client. If this were just a game client, or just an editor, then sure, it would be possible. Instead, I've pretty much decided to bite the bullet and pass the Device to whatever needs it. One example is a class that generates vertices at runtime; I need a pointer to the Device for rebuilding it.
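For what it's worth, the passing itself can be as simple as constructor injection (the class below is illustrative, not my actual code):

#include <d3d9.h>

class RuntimeVertexBuffer
{
public:
    explicit RuntimeVertexBuffer(IDirect3DDevice9* device)
        : m_device(device), m_vertexBuffer(NULL) {}

    ~RuntimeVertexBuffer()
    {
        if (m_vertexBuffer)
            m_vertexBuffer->Release();
    }

    // The device is needed again every time the buffer has to be rebuilt.
    void rebuild(UINT vertexCount, UINT vertexSize)
    {
        if (m_vertexBuffer)
        {
            m_vertexBuffer->Release();
            m_vertexBuffer = NULL;
        }
        // Error handling omitted for brevity.
        m_device->CreateVertexBuffer(vertexCount * vertexSize, D3DUSAGE_WRITEONLY,
                                     0, D3DPOOL_DEFAULT, &m_vertexBuffer, NULL);
    }

private:
    IDirect3DDevice9*       m_device;         // not owned; just borrowed
    IDirect3DVertexBuffer9* m_vertexBuffer;
};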
I really just wanted to post this here, because I've been thinking about it for most of the day, and it really seems like this is the best way to handle it. Yeah, it sucks to have to pass the Device to nearly everything. But there's not really anything you can do about it.
Is there any way in C++ or Java or Python that would allow me to save the state of my program, no questions asked? For example, I've spent an hour learning how to save a tree-like structure into a file. Very educational, but I feel I could just do:
saveState(file);
And the "file" would contain whole memory my program uses. Just like operating system's "hibernate" or "suspend-to-disk" feature. I know about boost serialization, this is probably not what I'm looking for.
What you most likely want is what we call serialization or object marshalling. There are a whole butt load of academic problems with data/object serialization that you can easily google.
That being said, given the right library (probably something very native), you could take a true snapshot of your running program, similar to what an OS-specific hibernate does. Here is an SO answer for doing that on Linux: https://stackoverflow.com/a/12190830/318174
To do the above snapshotting, though, you will most likely need a process external to the process you want to save. I highly recommend you don't do that. Instead, read up on how to do serialization or object marshalling in your language of choice (by the way, welcome to SO; don't tag every language... that pisses people off)... hint: most people these days pick JSON.
I think that what you describe would be a feature that few people would actually want to use for a real system. Usually you want to save something so it can be transmitted, or so you can stop running the program, or guard against the possibility that the program quits (or power fails).
In most production systems one wants to make the writes to disk small and incremental so that the system can remain responsive, and writing inconsistent data can be avoided. Writing ALL memory to disk on a regular basis would probably result in lots of non-responsive time. You would need to lock the entire system to avoid inconsistent state.
Writing your own persistence is tedious and error prone, however, so you may find this SO question of interest: Persisting graph data (Java)
There are a couple of frameworks for this. Check out Google Protocol Buffers if you need support for Java, Python, and C++: https://developers.google.com/protocol-buffers/ I've used it in some projects and it works well.
There's also Thrift (originally from Facebook): http://thrift.apache.org/ I don't have any experience with it though.
Another option is what @QuentinUK suggests. Use a class that inherits from something streamable and/or write streamable operators/functions.
I'd use a framework.
Here's your problem:
http://en.wikipedia.org/wiki/Address_space_layout_randomization
Back in ancient history (16-bit DOS programs with extenders), compilers used to support "based" pointers, which stored relative addresses. These were safe to serialize en masse, and applications did so, saving both code and data; the serialized modules were called "overlays".
Today, you'd need based pointer support in your toolchain (resulting in every pointer access requiring an extra adjustment), or else to go through all the data, distinguishing the pointers from the other data (how?) and adjusting them to their new storage location, in case the OS already loaded some library at the same address your old program had used for its heap. In modern "managed" environments, where pointers already have to be identified for the garbage collector, this is feasible even if not commonly done. In native code, it's very difficult, although that metadata is created to enable relocation of shared libraries.
So instead people end up walking their entire data structures manually, and converting object links (pointers) into something that can be restored on the other end, even though the object has a new address (again, because the old address may have been used for a shared library).
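For example, a binary tree can be written out by replacing each child pointer with the index of the child's slot in a flat array (made-up types, just to illustrate the pointer-to-link conversion):

#include <cstddef>
#include <fstream>
#include <vector>

struct Node { int value; Node* left; Node* right; };                        // in memory: links are pointers

struct FlatNode { int value; std::ptrdiff_t left; std::ptrdiff_t right; };  // on disk: links are indices (-1 = none)

void flatten(const Node* node, std::vector<FlatNode>& out)
{
    if (!node) return;
    const std::size_t index = out.size();
    out.push_back(FlatNode{node->value, -1, -1});
    if (node->left)
    {
        out[index].left = static_cast<std::ptrdiff_t>(out.size());
        flatten(node->left, out);
    }
    if (node->right)
    {
        out[index].right = static_cast<std::ptrdiff_t>(out.size());
        flatten(node->right, out);
    }
}

void saveTree(const Node* root, const char* path)
{
    std::vector<FlatNode> flat;
    flatten(root, flat);
    std::ofstream file(path, std::ios::binary);
    file.write(reinterpret_cast<const char*>(flat.data()),
               static_cast<std::streamsize>(flat.size() * sizeof(FlatNode)));
}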
Note that many processors have features to support based addressing... and that since based addressing is no longer common, compilers went ahead and used those pointer arithmetic features to speed up user code.
Yes, derive objects from a streamable class and add the streaming functions. Then you can stream everything to disk. You will need a library for this such as MFC.
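Without MFC, the same shape of solution looks roughly like this with standard streams (a generic sketch, not the MFC mechanism itself):

#include <iostream>

struct Point
{
    double x;
    double y;
};

// Streaming functions: once every type has these, "stream everything to disk"
// is just a chain of << calls on an std::ofstream.
std::ostream& operator<<(std::ostream& os, const Point& p)
{
    return os << p.x << ' ' << p.y;
}

std::istream& operator>>(std::istream& is, Point& p)
{
    return is >> p.x >> p.y;
}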