From some browsing on the net, I understand that a framework is a set of libraries, and that we can simply use those library functions to develop an application.
I would like to know more about:
What is a framework with respect to C++?
How are C++ frameworks designed?
How can we use them to develop applications?
Can someone provide some links to help me understand the concept of a "framework" in C++?
A "framework" is something designed to provide the structure of a solution - much as the steel frame of a skyscraper gives it structure, but needs to be fleshed out with use-specific customisations. Both assume some particular problem space - whether it's multi-threaded client/server transactions, or a need for air-conditioned office space, and if your needs are substantively different - e.g. image manipulation or a government art gallery - then trying to use a poorly suited framework is often worse than using none. Indeed, if the evolving needs of your system pass beyond what the framework supports, you may find your options for customising the framework itself are insufficient, or the design you adopted to use it just doesn't suit the re-architected solution you later need. For example, a single-threaded framework encourages you to program in a non-threadsafe fashion, which may be a nightmare to make efficiently multi-threaded post-facto.
They're designed by observing that a large number of programs require a similar solution architecture, and abstracting that into a canned solution framework with facilities for those app-specific customisations.
How they're used depends on the problems they're trying to solve. A framework for transaction dispatch/handling will typically define a way to list IP ports to listen on, nominate functions to be called when connections are made and new data arrives, and register timer events that call back to arbitrary functions. XML document, image manipulation, A.I., etc. frameworks would be totally different. The whole idea is that each provides a style of use that is simple and intuitive for the applications that might wish to use them.
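To make that concrete, the registration surface of such a transaction framework might look something like this (a hypothetical sketch - the Dispatcher class and all its methods are invented for illustration; run() here just simulates one connection delivering data so the sketch compiles and runs):

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

// Hypothetical transaction-dispatch framework. A real one would drive these
// callbacks from a select/epoll event loop inside run().
class Dispatcher {
public:
    using DataHandler  = std::function<void(int connectionId, const std::string& data)>;
    using TimerHandler = std::function<void()>;

    void listenOn(std::uint16_t port) { port_ = port; }         // port to accept connections on
    void onData(DataHandler h) { onData_ = std::move(h); }      // called when new data arrives
    void every(int /*ms*/, TimerHandler h) { timer_ = std::move(h); }  // periodic callback

    void run() {                                  // the framework owns the main loop
        if (timer_)  timer_();
        if (onData_) onData_(1, "BUY 100 ACME");  // pretend connection 1 sent a message
    }

private:
    std::uint16_t port_ = 0;
    DataHandler   onData_;
    TimerHandler  timer_;
};

int main() {
    Dispatcher d;
    d.listenOn(8080);
    d.every(1000, [] { std::cout << "tick\n"; });
    d.onData([](int id, const std::string& data) {
        std::cout << "connection " << id << " sent: " << data << '\n';  // app-specific logic
    });
    d.run();  // hand control to the framework
}
```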
A big hassle with many frameworks is that they assume ownership of the applications that use them, and relegate the application to a secondary role of filling in some callbacks. If the application needs to use several frameworks, or even one framework with some extra libraries doing e.g. asynchronous communications, then the frameworks may make that very difficult. A good framework is designed more like a set of libraries that the client can control, but need not be confined by. Good frameworks are rare.
More often than not, a framework (as opposed to "just" a library or set of libraries) in OOP languages (including C++) implies a software subsystem that, among other things, supplies classes you're supposed to inherit from in your application code, overriding certain methods to specialize the class's functionality for your application's needs. If it were just a collection of functions and typedefs, it should more properly be called a library rather than a framework.
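A minimal sketch of that inheritance style (the Application base class below is invented for illustration - it is not any particular framework's API):

```cpp
#include <iostream>
#include <string>

// Invented framework base class: it owns the control flow and calls the
// virtual "hook" methods at the appropriate moments (inversion of control).
class Application {
public:
    virtual ~Application() = default;

    int exec() {                  // the framework's main loop, trivially simulated here
        onStart();
        onEvent("user clicked OK");
        onStop();
        return 0;
    }

protected:
    virtual void onStart() {}                    // hooks with default (empty) behaviour;
    virtual void onEvent(const std::string&) {}  // your application overrides only the
    virtual void onStop() {}                     // ones it cares about
};

// Application code: specialize the framework by overriding its hooks.
class MyApp : public Application {
protected:
    void onStart() override { std::cout << "starting up\n"; }
    void onEvent(const std::string& e) override { std::cout << "handling: " << e << '\n'; }
};

int main() {
    MyApp app;
    return app.exec();  // the framework calls *you*, not the other way around
}
```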
I hope this addresses your points 1 and 3. Regarding point 2, ideally, the designers of a framework have a lot of experience designing applications in a certain area, and they "distill" their experience and skill into a framework that lets (possibly less-experienced) developers build their own applications in that area more easily and expeditiously. In the real world, of course, such ideals are not always followed.
With a tool like CppDepend you can analyze any C++ framework and reverse-engineer its design in minutes, and also get an accurate idea of the framework's overall code quality.
An application framework (regardless of language) is a library that attempts to provide a complete framework within which you plug in functionality for your specific application.
The idea is that things like web applications and GUI applications typically require quite a bit of boilerplate to get working at all. The application framework provides all that boilerplate code, plus some sort of organization (typically some variation of model-view-controller) where you can plug in the logic specific to your particular application, and it handles most of the other stuff, like automatically routing messages as needed.
For my next project I would like to try UML modeling. There are several reasons - mainly documentation, plus laying the groundwork for development so I can avoid re-coding everything over and over again.
I've tried it several times in the past, but I had the feeling that without deep knowledge of the underlying libraries my work would depend on, it's not a trivial task: at the very beginning I don't know what kinds of member variables and functions I will need.
Usually I would code to get familiar with the libraries and APIs my app interfaced with, and I would get into a state where the work was almost done - or, let's say, at least 50% ready - where it made no sense to me to start modeling anything.
Am I right that you really need to understand the underlying libraries well, or are there ways/techniques to overcome this?
Another point: do you build up the model from bottom to top or from top to bottom, or does it depend on the use case?
Thank you for any recommendations on how to proceed.
If I understand correctly, your main challenge is to get an understanding of the libraries and APIs that you are using.
If you intend to create a UML diagram to reverse-engineer the library and understand it, you might be wasting your time: you'd be able to make a meaningful model only once you've understood how the pieces fit together. And for this discovery and knowledge acquisition, you already use the most effective approach:
Usually I would code to get familiar with the libraries and APIs my app interfaced with.
Now, if the library or the API is delivered with a UML model, it's another story: an existing design model (not all the details of the implementation, but the core elements of the design, and the interaction scenarios that are difficult to grasp from the code) could help you grasp more quickly how the library works, which will get you through the exploratory phase faster.
It's also a different story when you are reverse-engineering an undocumented app: there you don't have a tutorial, and it's difficult to write code that uses the existing elements in a meaningful way. There it could make sense to document the system post-mortem. But again, do not lose yourself in a detailed implementation model with all the details: focus on the core elements, whose understanding will really matter to your fellow maintainers.
The three main purposes of making UML class models when developing an app are:
1. Describing the entity types of the app's problem domain for analyzing and better understanding the requirements for the app in a conceptual (domain) model.
2. Designing the schema of the app's underlying database (this is typically an RDB schema defined with a bunch of CREATE TABLE statements).
3. Designing the model classes of the data model of your app, which will be coded, e.g., as Java Entity classes or C# classes with EF annotations.
For 1 and 2, you may take a look at my book An introduction to information modeling and databases, while for 3 you may check out a book on model-based development, e.g. for Java Backend Apps or JavaScript Frontend Apps.
If your goal is to model the dependencies of your app, this may indeed be another purpose. However, as argued by @Christophe, reverse-engineering a library is itself a big project that may easily consume more time than you have for developing your app.
I want to write a C++ application framework which will be completely view-agnostic. Ideally, I want to be able to use either of the following as the "frontend":
Qt
Web front end
I am aware of developments like Wt (web toolkit), but I want to avoid these because of at least one of the following reasons:
They use a CGI/FastCGI approach (when using Apache)
AFAIK, they impose a "frontend" framework on you - for example, I cannot use CakePHP, Symfony, Django, etc. to create the web page and have only "widgets" in the page binding to the server-side C++ application. I would like to be free to use whichever web framework I want, so I can benefit from the many popular and established templating frameworks out there (e.g. Smarty).
I think some variation of the MVC pattern (not sure which variation) could work well in this instance.
This is how I intend to proceed:
The model and controller layers are implemented in C++
A plugin sits between the controller and the view
The view is implemented using either Qt or a third-party web framework
Communication between the view (frontend) and the plugin is done using either:
i. events for a Qt frontend
ii. an AJAX/push mechanism for a web frontend (maybe Backbone.js can be used here?)
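To make the plugin boundary concrete, here is roughly the kind of interface I'm picturing (just a sketch - all the names are placeholders):

```cpp
#include <functional>
#include <string>

// Placeholder event the controller pushes toward whichever view is plugged in.
struct ViewEvent {
    std::string name;     // e.g. "order.updated"
    std::string payload;  // serialized model state
};

// The view plugin: the controller only ever talks to this interface.
// A Qt plugin would translate ViewEvents into signals/slots; a web plugin
// would push them to the browser over AJAX/push.
class ViewPlugin {
public:
    virtual ~ViewPlugin() = default;

    // Controller -> view: render/update.
    virtual void present(const ViewEvent& event) = 0;

    // View -> controller: user actions come back through a callback.
    virtual void onUserAction(std::function<void(const ViewEvent&)> handler) = 0;
};
```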
Is there a name for the pattern I describe above? And (before I start coding), are there any gotchas/performance issues (other than network latency) that I should be aware of?
From the sounds of it, it is an MVC, with the plugin implementing a Bridge between the controller and the view. I could not locate a variant of MVC that specifically has a bridge as a participant in the design; however, none of them preclude a bridge, or other patterns, from collaborating with or implementing the MVC.
The difficulty in implementing this will likely come from the bridge abstraction. It can be difficult to:
Prevent implementation details from affecting the abstraction. For example, if implementation A has an error code that is only meaningful to implementation A and implementation B has an error code that is similar but occurs under different conditions, then how will the errors pass through the abstraction without losing too much meaning?
Account for behavioral differences between implementations. This generally requires a solid understanding of the implementation being abstracted so that pre-conditions and post-conditions can be met for the abstraction. For example, if implementation A supports asynchronous reads, and implementation B only supports synchronous reads, then some work will need to be done in the abstraction layer to account for the threading (see the sketch after this list).
Find an acceptable compromise between decoupling and performance. It will be a balancing act. As always, try to avoid premature optimizations. Oftentimes, it is easier to introduce a little coupling for the sake of performance than it is to decouple highly performant code.
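For the synchronous/asynchronous mismatch mentioned above, one common approach is to wrap the synchronous implementation so that it fulfils the asynchronous contract the abstraction promises. A rough sketch, with invented interface names:

```cpp
#include <future>
#include <string>

// The abstraction promises asynchronous reads.
class AsyncReader {
public:
    virtual ~AsyncReader() = default;
    virtual std::future<std::string> readAsync() = 0;
};

// Implementation B only offers a blocking read...
class SyncImpl {
public:
    std::string read() { return "payload"; }  // blocks until data is available
};

// ...so the adapter runs the blocking call on another thread to satisfy the
// asynchronous contract. Note the hidden cost: one thread per outstanding call.
class SyncToAsyncAdapter : public AsyncReader {
public:
    std::future<std::string> readAsync() override {
        return std::async(std::launch::async, [this] { return impl_.read(); });
    }
private:
    SyncImpl impl_;
};

int main() {
    SyncToAsyncAdapter reader;
    auto result = reader.readAsync();
    // the caller can do other work here, blocking only when the value is needed
    std::string payload = result.get();
}
```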
Also, consider leveraging other patterns to help with the decoupling. For example, if concrete type Foo needs to be passed through the abstraction layer, and implementation A will convert it to Foo_A while implementation B will convert it to Foo_B, then consider having the plugin provide an Abstract Factory. Foo would become an abstract base class for Foo_A and Foo_B, and the plugin would provide a factory to create objects that implement Foo, allowing the controller to allocate the exact type the plugin is expecting.
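A minimal sketch of that factory arrangement, with types invented to match the Foo example:

```cpp
#include <memory>
#include <string>

// Abstract product: the controller only ever sees Foo.
class Foo {
public:
    virtual ~Foo() = default;
    virtual std::string describe() const = 0;
};

// Abstract factory exposed by each plugin.
class PluginFactory {
public:
    virtual ~PluginFactory() = default;
    virtual std::unique_ptr<Foo> makeFoo() = 0;
};

// Plugin A's concrete product and factory (plugin B would mirror this with Foo_B).
class Foo_A : public Foo {
public:
    std::string describe() const override { return "Foo_A"; }
};

class PluginAFactory : public PluginFactory {
public:
    std::unique_ptr<Foo> makeFoo() override { return std::make_unique<Foo_A>(); }
};

// The controller allocates through the factory, so it always gets exactly the
// concrete type the active plugin expects - without ever naming that type.
void controllerWork(PluginFactory& factory) {
    std::unique_ptr<Foo> foo = factory.makeFoo();
    // ... pass foo through the bridge to the view ...
}

int main() {
    PluginAFactory pluginA;
    controllerWork(pluginA);  // swap in a PluginBFactory and nothing else changes
}
```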
My clients have used MFC applications for years. The main reason was that their applications were real-time apps interacting with various sensors, and performance was key to their success.
I used MFC about 10 years ago and moved to .NET. But I am willing to go back to MFC if necessary. The question, though, is whether it is worth it, and whether there is anything better than MFC right now.
I understand that C++ is necessary to optimize our applications, and that MFC is an OOP wrapper for the Win32 API and might be the fastest OOP UI API on Windows.
But I am mainly worried about its testability and its complex API. So MFC might slow us down in the long term.
What do you think? Is there any framework with which you can achieve better performance than with MFC?
UPDATE: As for the needed performance, I don't have exact numbers, but I saw one app in operation. It was getting various types of signals from each of the moving objects. My guess at the time was that it took less than half a second to get and display all the signals from each one. But I could be wrong.
You probably want to look at Qt.
The internet is full of comparisons of MFC and Qt; here is a particularly recent one: https://softwareengineering.stackexchange.com/questions/17490/comparing-qt-vs-mfc
Assuming (although it wasn't specified in the question) that your application is another sensor-control system, it doesn't matter as much as you think it does.
Basically, your architecture should keep the sensor communications in their own thread, which communicates asynchronously with the rest of the app. So you're mostly checking whether your potential replacement libraries do something pathological with their multi-threading implementation.
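A minimal sketch of that shape, using a standard producer/consumer queue between the sensor thread and the display thread (no particular UI framework implied; the Reading type and values are invented):

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct Reading { int sensorId; double value; };

// Thread-safe queue: the sensor thread produces, the UI thread consumes.
class ReadingQueue {
public:
    void push(Reading r) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(r); }
        cv_.notify_one();
    }
    Reading pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        Reading r = q_.front();
        q_.pop();
        return r;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Reading> q_;
};

int main() {
    ReadingQueue queue;

    // Sensor thread: polls the hardware (simulated) and never touches the UI.
    std::thread sensor([&] {
        for (int i = 1; i <= 5; ++i) {
            queue.push({1, i * 0.5});  // pretend this came from the device
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        queue.push({-1, 0.0});         // sentinel: tells the consumer to stop
    });

    // "UI" thread: displays readings as they arrive.
    for (;;) {
        Reading r = queue.pop();
        if (r.sensorId < 0) break;     // sentinel reached
        std::cout << "sensor " << r.sensorId << ": " << r.value << '\n';
    }
    sensor.join();
}
```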
To give particulars, we would need particulars: required response times, interrupt frequencies, these sorts of things. But even in that case, we'd mostly just be guessing (or campaigning for our favorite API).
My real recommendation is that you look into the performance numbers you get with .NET in a "prototype control". Your recent familiarity with the API should enable you to do this relatively quickly.
If the performance seems unacceptable, do a similar prototype in Qt or WTL or whatever else looks reasonable. I would consider MFC a last resort simply due to age UNLESS you can leverage significant amounts of existing control code from the client.
There are some good alternatives to MFC.
Qt is the first choice to go with, but for a commercial release it becomes a little costlier.
wxWidgets is also a good choice for a cross-platform open-source library.
Ultralight - this one is totally different, as it is an HTML-based UI engine for creating nice applications with the help of HTML, CSS, and JavaScript.
In my workplace (and a lot of other places), there is a lot of emphasis on building the architecture around services. (I am working at an e-commerce startup.) However, I think services are implicitly considered to be distributed. I am a believer in the first law of distribution - "don't distribute" - so I believe that we should not unnecessarily complicate the architecture. It should be an architecture that can evolve.

So, one way to approach the problem would be to create well-defined namespaces and build the code around them, but keep the communication via Java APIs (this keeps the monitoring requirements low and the reliability/availability problems small). This can easily be evolved into a distributed architecture by wrapping modules into web services as and when the scale requirements kick in.

So, the question is: what are the cons of writing the code as a single application and evolving it into distributed services, rather than jumping straight into a web-services-based architecture? Am I right in assuming that services should imply the basic principles of design (abstraction, encapsulation, etc.), rather than distribution over a network?
Distribution requires modularity. However, it requires more than just modularity: it also requires coarse-grained interaction between the modules.
For example, in a single-process ecommerce system, you might have separate modules for managing the user's shopping cart and calculating prices. They might interact by the cart asking the calculator to price an item, then another item, etc. That would be perfectly fine.
However, in a distributed system, that would require a torrent of small method calls, which is inefficient; you might get away with it if you used CORBA for distribution, but with SOAP, you'd be in trouble. Rather, you would want to have the cart ask the calculator to price the whole order in one go. That might be worse from a separation of concerns point of view (why should the calculator have to know about the idea of carts?), but it would be required to make the system perform adequately.
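In code, the contrast looks roughly like this (a C++ sketch; the PriceCalculator interface and the flat-rate implementation are invented for illustration):

```cpp
#include <vector>

struct Item { int productId; int quantity; };
using Money = long long;  // cents, for the sake of the sketch

class PriceCalculator {
public:
    virtual ~PriceCalculator() = default;

    // Fine-grained: fine in-process, but one round trip per item once the
    // calculator lives in another service.
    virtual Money priceItem(const Item& item) = 0;

    // Coarse-grained: one round trip for the whole order, at the cost of the
    // calculator now knowing about whole orders.
    virtual Money priceOrder(const std::vector<Item>& items) = 0;
};

Money cartTotalFineGrained(PriceCalculator& calc, const std::vector<Item>& cart) {
    Money total = 0;
    for (const Item& item : cart) total += calc.priceItem(item);  // N calls
    return total;
}

Money cartTotalCoarseGrained(PriceCalculator& calc, const std::vector<Item>& cart) {
    return calc.priceOrder(cart);  // 1 call
}

// Trivial implementation so the sketch runs.
class FlatRateCalculator : public PriceCalculator {
public:
    Money priceItem(const Item& item) override { return 100 * item.quantity; }
    Money priceOrder(const std::vector<Item>& items) override {
        Money total = 0;
        for (const Item& i : items) total += priceItem(i);
        return total;
    }
};

int main() {
    FlatRateCalculator calc;
    std::vector<Item> cart{{1, 2}, {2, 1}};
    return cartTotalFineGrained(calc, cart) == cartTotalCoarseGrained(calc, cart) ? 0 : 1;
}
```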
Related to granularity, there's also the problem of modules interacting via interfaces or implementations. With a single process, you can define a set of interfaces through which modules will interact; modules can pass each other objects implementing those interfaces without having to tell each other about the implementations (e.g. a scheduler module could be passed anything implementing interface Job { void run(); }). Across a network, the requirement for coarse grain means that any objects passed must be passed by value (because passing by reference would entail fine-grained calls back to the passing module - unless you were using mobile code, which you aren't, because nobody is), which means that both modules must know about and agree on the implementations of the objects.
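A C++ rendering of that Job example, in-process only (a sketch):

```cpp
#include <iostream>
#include <vector>

// In-process: the scheduler accepts *any* implementation of this interface,
// without knowing anything about the concrete types behind it.
class Job {
public:
    virtual ~Job() = default;
    virtual void run() = 0;
};

class Scheduler {
public:
    void submit(Job& job) { jobs_.push_back(&job); }  // by reference: cheap and polymorphic
    void runAll() { for (Job* j : jobs_) j->run(); }
private:
    std::vector<Job*> jobs_;
};

class PrintJob : public Job {
public:
    void run() override { std::cout << "printing\n"; }
};

int main() {
    Scheduler s;
    PrintJob p;
    s.submit(p);  // the scheduler never needs to know PrintJob exists
    s.runAll();
}
```

Across a network, submit() could no longer take a Job&: the scheduler would instead need some serializable, by-value job description whose layout both modules know about and agree on - which is exactly the loss of abstraction described above.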
So, while building a single-process system in a modular way makes it easier to implement SOA later, it doesn't make it as simple as wrapping each module in a SOAP interface. At least, not unless you build your system in a coarse-grained manner from the start, which means throwing away a number of sound and helpful software engineering practices.
We currently have a number of C++/MFC applications that communicate with each other via DCOM. Now we are going to update the applications, and we also want to replace DCOM with something more modern, something that is easier to work with. But we do not know with what. What do you think?
Edit
The data exchanged is not something that may be of interest to others. It is only status information between the different parts of the program running on different computers.
There are many C++ messaging libraries, from the old ACE to newer ones like Google's Protocol Buffers, Facebook's (now Apache's) Thrift, or Cisco's Etch.
Currently I'm hearing good things about ZeroMQ, which might give you more than you are used to.
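For a flavour of what that looks like, here is a minimal request/reply exchange using the stable libzmq C API (the endpoint and message contents are made up; the matching server would bind a ZMQ_REP socket on the same endpoint):

```cpp
#include <cstring>
#include <zmq.h>

int main() {
    void* ctx  = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(sock, "tcp://localhost:5555");            // made-up endpoint

    const char* request = "STATUS?";
    zmq_send(sock, request, std::strlen(request), 0);     // fire off the status request

    char reply[256];
    int n = zmq_recv(sock, reply, sizeof(reply) - 1, 0);  // blocks until the peer answers
    if (n >= 0) reply[n] = '\0';                          // reply now holds the status text

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}
```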
DCOM is nothing more than sugar-coating over a messaging system.
Any proper messaging system would do, and would allow you to actually spot where messages are exchanged (which may be important for localizing points of failure and performance bottlenecks in waiting).
There are two typical ways to do so, nowadays:
A pure messaging system, for example using Google Protocol Buffers as the exchange format (see the sketch after this list)
A web service (either a full web service in JSON or a REST API)
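To illustrate the first option: with Protocol Buffers you describe the status message once in a .proto file, generate C++ classes with protoc, and serialize/parse on either end of whatever transport you choose. A sketch - the StatusUpdate message and its fields are invented:

```cpp
#include <string>
#include "status.pb.h"  // generated by: protoc --cpp_out=. status.proto

// status.proto (hypothetical schema for the status information exchanged):
//   syntax = "proto3";
//   message StatusUpdate {
//     string node  = 1;
//     int32  state = 2;
//   }

std::string encodeStatus(const std::string& node, int state) {
    StatusUpdate update;              // class generated by protoc
    update.set_node(node);
    update.set_state(state);
    std::string wire;
    update.SerializeToString(&wire);  // compact binary form, ready for the wire
    return wire;
}

bool decodeStatus(const std::string& wire, StatusUpdate& out) {
    return out.ParseFromString(wire); // false if the bytes don't parse
}
```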
I've been doing lots of apps in both C++ and Java using REST, and I'm pretty satisfied. Far from the complexity of CORBA and SOAP, REST is easy to implement and flexible. There was a bit of a learning curve to get used to modeling things as CRUD, but now it seems even more intuitive that way.
Now, for the C++ side I don't use a specific REST library, just cURL and an XML parser (in my case, CPPDOM), because the C++ apps are only clients, and the servers are Java (using the Restlet framework). If you need one, there's another question here at SO that recommends:
Can anyone recommend a good C/C++ RESTful framework
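For reference, the bare-bones cURL-as-client approach mentioned above looks something like this (the URL is a placeholder, and error handling is trimmed to the essentials):

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl calls this as response data arrives; we append it to a std::string.
static size_t appendToString(char* data, size_t size, size_t nmemb, void* userdata) {
    auto* body = static_cast<std::string*>(userdata);
    body->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/api/items/42");  // placeholder URL
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode rc = curl_easy_perform(curl);  // blocking GET
    if (rc == CURLE_OK)
        std::cout << body << '\n';          // hand the payload to your XML/JSON parser here
    else
        std::cerr << curl_easy_strerror(rc) << '\n';

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```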
I'd also mention that my decision to use XML was arbitrary, and I'm seriously considering replacing it with JSON. Unless you have a specific need for XML, JSON is simpler and more lightweight. And the beauty of REST is that you could even support both, along with other representations, if you want to.