Application architecture that is completely view-layer agnostic - C++

I want to write a C++ application framework which will be completely view agnostic. Ideally, I want to be able to use either of the following as the "frontend"
Qt
Web front end
I am aware of developments like Wt (the web toolkit), etc., but I want to avoid these because of at least one of the following reasons:
They use a CGI/FastCGI approach (when using Apache)
AFAIK, they impose a "frontend" framework on you - for example, I cannot use CakePHP, Symfony, Django, etc. to create the web page and only have "widgets" in the page binding to the server-side C++ application. I would like to be free to use whichever web framework I want, so I can benefit from the many popular and established templating frameworks out there (e.g. Smarty).
I think some variation of the MVC pattern (not sure which variation) could work well in this instance.
This is how I intend to proceed:
The model and controller layers are implemented in C++
A plugin sits between the controller and the view
The view is implemented using either Qt or a third-party web framework
Communication between the view (frontend) and the plugin is done using either:
i. events for a Qt frontend
ii. an AJAX/push mechanism for a web frontend (maybe backbone.js can be used here?)
Is there a name for the pattern I describe above? And (before I start coding), what gotchas/performance issues (other than network latency), if any, should I be aware of?

From the sound of it, this is MVC, with the plugin implementing a Bridge between the controller and the view. I could not locate a variant of MVC that specifically has a bridge as a participant in the design; however, none of them precludes a bridge, or other patterns, from collaborating in or implementing the MVC.
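As a rough sketch of that shape (all names here are invented for illustration, not taken from any real framework), the controller would program against an abstract view interface, and each frontend plugin would implement it:

```cpp
// Minimal sketch of the plugin-as-bridge idea. The controller depends only
// on IViewBridge; a Qt plugin or a web plugin supplies the implementation.
#include <functional>
#include <iostream>
#include <string>
#include <utility>

class IViewBridge {
public:
    virtual ~IViewBridge() = default;
    // controller -> view: push model state to whatever frontend is attached
    virtual void render(const std::string& modelState) = 0;
    // view -> controller: register a handler for user actions coming back
    virtual void setActionHandler(std::function<void(const std::string&)> h) = 0;
};

// Stand-in for the web plugin: a real one would serialize state to JSON and
// push it over AJAX/WebSockets instead of printing to stdout.
class WebViewBridge : public IViewBridge {
public:
    void render(const std::string& modelState) override {
        std::cout << "push to browser: " << modelState << "\n";
    }
    void setActionHandler(std::function<void(const std::string&)> h) override {
        handler_ = std::move(h);
    }
    void simulateUserAction(const std::string& action) {  // demo-only hook
        if (handler_) handler_(action);
    }
private:
    std::function<void(const std::string&)> handler_;
};

class Controller {
public:
    explicit Controller(IViewBridge& view) : view_(view) {
        view_.setActionHandler([this](const std::string& action) {
            view_.render("state after " + action);  // update model, re-render
        });
    }
private:
    IViewBridge& view_;
};

int main() {
    WebViewBridge web;
    Controller controller(web);   // the same Controller would accept a QtViewBridge
    web.simulateUserAction("click");
}
```

A Qt plugin would implement the same interface on top of signals/slots; the controller code would not change.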
The difficulty in implementing this will likely come from the bridge abstraction. It can be difficult to:
Prevent implementation details from affecting the abstraction. For example, if implementation A has an error code that is only meaningful to implementation A and implementation B has an error code that is similar but occurs under different conditions, then how will the errors pass through the abstraction without losing too much meaning?
Account for behavioral differences between implementations. This generally requires a solid understanding of the implementation being abstracted so that pre-conditions and post-conditions can be met for the abstraction. For example, if implementation A supports asynchronous reads, and implementation B only supports synchronous reads, then some work will need to be done in the abstraction layer to account for the threading (a sketch of such an adapter follows this list).
Find an acceptable compromise between decoupling and performance. It will be a balancing act. As always, try to avoid premature optimizations. Oftentimes, it is easier to introduce a little coupling for the sake of performance than it is to decouple highly performant code.
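For the asynchronous/synchronous mismatch in the second point, here is a minimal sketch (invented names) of the kind of adapter the abstraction layer might need, using std::async purely for illustration:

```cpp
#include <future>
#include <string>

class IAsyncReader {                       // the abstraction's promise
public:
    virtual ~IAsyncReader() = default;
    virtual std::future<std::string> readAsync() = 0;
};

class SyncOnlyBackend {                    // implementation B: blocking API only
public:
    std::string read() { return "data"; }
};

class SyncToAsyncAdapter : public IAsyncReader {
public:
    std::future<std::string> readAsync() override {
        // the abstraction layer supplies the threading the backend lacks
        return std::async(std::launch::async, [this] { return backend_.read(); });
    }
private:
    SyncOnlyBackend backend_;
};

int main() {
    SyncToAsyncAdapter reader;
    auto f = reader.readAsync();   // the caller sees the asynchronous contract
    f.get();
}
```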
Also, consider leveraging other patterns to help in the decoupling. For example, if concrete type Foo needs to be passed through the abstraction layer, and implementation A will convert it to Foo_A, while implementation B will convert it to Foo_B, then consider having the plugin provide an Abstract Factory. Foo would become an abstract base class for Foo_A and Foo_B, and the plugin would provide a factory to create objects that implement Foo, allowing the controller to allocate the exact type the plugin is expecting.
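A minimal sketch of that Abstract Factory arrangement, using the Foo/Foo_A/Foo_B names from the paragraph above (the factory interface itself is an invented name):

```cpp
#include <memory>

class Foo {
public:
    virtual ~Foo() = default;
    virtual void apply() = 0;
};

class Foo_A : public Foo { public: void apply() override { /* A-specific */ } };
class Foo_B : public Foo { public: void apply() override { /* B-specific */ } };

class IFooFactory {
public:
    virtual ~IFooFactory() = default;
    virtual std::unique_ptr<Foo> makeFoo() = 0;
};

class PluginA_Factory : public IFooFactory {
public:
    std::unique_ptr<Foo> makeFoo() override { return std::make_unique<Foo_A>(); }
};

// The controller allocates through whichever factory the active plugin
// provides, so it always creates exactly the concrete type that plugin expects.
void controllerCode(IFooFactory& factory) {
    auto foo = factory.makeFoo();
    foo->apply();
}

int main() {
    PluginA_Factory pluginA;
    controllerCode(pluginA);   // the controller never names Foo_A directly
}
```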

Related

What is the value of separating interface from implementation in internet-based service-oriented computing?

Are the reasons the same as in normal multi-module application programming - so that a client can just use the interface without having to worry about implementation details?
Note that I am talking about WSDL/UDDI/SOAP and not normal application interfaces.
A WSDL has an abstract part and a concrete part, and they are separated so as to allow the reuse of these definitions. The same contract can be bound to many concrete network protocols and message formats.
This reuse of definitions, in the context of UDDI means one interface, multiple implementations.
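As a rough C++ analogy (not actual web-service code, and all names invented), the WSDL contract plays the role of an abstract base class, and each vendor's service is a derived class the same client code can drive:

```cpp
#include <string>

class WeatherService {                    // plays the role of the WSDL contract
public:
    virtual ~WeatherService() = default;
    virtual double temperature(const std::string& city) = 0;
};

class VendorAWeather : public WeatherService {   // one company's implementation
public:
    double temperature(const std::string&) override { return 21.0; }
};

class VendorBWeather : public WeatherService {   // another company's implementation
public:
    double temperature(const std::string&) override { return 20.5; }
};

// Client code is written once, against the contract only.
double report(WeatherService& svc) { return svc.temperature("Oslo"); }

int main() {
    VendorAWeather a;
    VendorBWeather b;
    report(a);   // the same client code drives either implementation
    report(b);
}
```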
One of the ideas behind UDDI was that needed web services could be discovered at runtime: you can go into the registry and look for implementations of a certain WSDL contract:
Beyond the Cookbook: Interfaces and Implementations
[...]
If three different companies have implemented the same WSDL file and a piece of client software has created the proxy/stub code for that WSDL interface, then the client software can communicate with all three of those implementations with the same codebase
[...]
http://www2.sys-con.com/itsg/virtualcd/webservices/archives/0103/januszewski/index.html
At least that was the theory. In practice, it turned out otherwise.
The short answer is none. When you publish a Web service via a WSDL, it doesn't matter how you have implemented it. The client application consuming your service will generate the appropriate code from the WSDL, whether you have defined an interface for your backend Web service or not.
That said, adding an interface in front of a Web service is rather a waste of time.
The pointy-haired boss decides he'd like the application to work a different way, in a different sequence of screens, because:
His wife's friend at the tennis club thinks it would work better that way.
Rigorous user testing indicates a higher customer conversion rate based on a different application flow or sequence of usage steps.
You want to provide white-label versions of your website (similar to a franchise).
In the above cases, one would only need to rewrite the graphical elements; the person doing so would not need to know anything about databases or complex back-end data processing.
Separating interface and implementation helps you keep your design loosely coupled. You can change the implementation independently from the interface as the requirements change.

SOA: Is it preferable to implement a service instead of just writing service-ready code, when no external access is needed?

I'm working on the initial architecture for a solution for which an SOA approach has been recommended by a previous consultant. From reading the Erl book(s) and applying them to previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional needs for implementing web services -- there are no external consumers, and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, an example. Let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1 and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can just point that reference at the #2/#3 logic above in a wrapper object (so no caller objects need updating) and implement the same number of objects, without any penalty to the amount of development you have to do -- no extra objects or code have to be created compared to doing it all up front.
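A minimal sketch of that "service-ready" wiring for the Person example, with invented names (the service-backed variant stands in for the #2/#3 translator logic):

```cpp
#include <memory>
#include <string>

class IPersonRepository {                 // the loose reference callers use
public:
    virtual ~IPersonRepository() = default;
    virtual std::string fullName(int personId) = 0;
};

class LocalPersonRepository : public IPersonRepository {   // item #1 only
public:
    std::string fullName(int personId) override {
        return "local lookup for " + std::to_string(personId);
    }
};

// Added later, only if a real service becomes necessary: wraps the
// translators (#2/#3) and the wire format behind the same interface.
class ServicePersonRepository : public IPersonRepository {
public:
    std::string fullName(int personId) override {
        // serialize the request, call the service, deserialize the response...
        return "remote lookup for " + std::to_string(personId);
    }
};

// Callers receive the dependency by injection and never know which one they got.
class GreetingUseCase {
public:
    explicit GreetingUseCase(std::shared_ptr<IPersonRepository> people)
        : people_(std::move(people)) {}
    std::string greet(int id) { return "Hello, " + people_->fullName(id); }
private:
    std::shared_ptr<IPersonRepository> people_;
};

int main() {
    GreetingUseCase today(std::make_shared<LocalPersonRepository>());
    // later, without touching GreetingUseCase or its callers:
    GreetingUseCase later(std::make_shared<ServicePersonRepository>());
}
```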
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?
Generally speaking you'd be better to wait.
You could design and implement a web service which was simply a technical facade exposing the underlying functionality - the question is, would you just do a straight one-for-one 'reflection' of that underlying functionality? If yes - did you design that underlying thing in such a way that it's fit for external callers? Does the API make sense, does it expose members that should be private, etc.?
Another factor to consider is: do you really know what the callers of the service want or need? The risk you run with building a service is that (as you're basically only guessing) you might need to re-write it when the first customers/callers come along. This could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service - get people thinking - a more agile, principled approach.
If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there could be security, maintainability, or serviceability aspects where using web services is a must, e.g. some clients need access to the data outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don’t need to re-deploy the app every time some business rule changes.
Enterprise applications require many components interacting with each other and many developers working on them. In this type of scenario, using an SOA-type architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise applications usually depend on a lot of external components (logic or data), and you don’t want to integrate these components by sharing assemblies.
Imagine that you share a component that implements some specific calculation: would you deploy this component to all the dependent applications? What will happen if you want to change some calculation logic? Would you ask all teams to upgrade their references, recompile, and redeploy their apps?
I recently posted on my blog a story where the former architect had also chosen not to use web services and thought that sharing assemblies was fine. The result was chaos. Read more here.
As I mentioned, it depends on your requirements. If it’s a monolithic application and you’re sure you’ll never integrate this app and that you’ll never reuse the business logic/data access, a 2-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions, it’s costly to change. Of course you can still factor in a web service model later on, but it’s not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task to accomplish, even when using a good design based on interfaces. Examples of things that could go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on some internal behaviors…
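As a small illustration of the serialization pitfalls just mentioned (with invented types), object graphs like the following are harmless in-process but awkward to push through a service boundary:

```cpp
#include <memory>
#include <vector>

struct Order;                                     // forward declaration

struct Customer {
    std::vector<std::shared_ptr<Order>> orders;   // parent -> children
};

struct Order {
    std::shared_ptr<Customer> owner;              // child -> parent: a cycle
};

int main() {
    auto customer = std::make_shared<Customer>();
    auto order = std::make_shared<Order>();
    customer->orders.push_back(order);
    order->owner = customer;   // a naive serializer recurses forever here
}
```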

understand design of C++ framework

From some browsing on the net, I understand that a framework is a set of libraries, and that we can simply use those library functions to develop an application.
I would like to know more about
What is a framework with respect to C++?
How are C++ frameworks designed?
How can we use them to develop applications?
Can someone provide me some links to understand the concept of a "framework" in C++?
A "framework" is something designed to provide the structure of a solution - much as the steel frame of a skyscraper gives it structure, but needs to be fleshed out with use-specific customisations. Both assume some particular problem space - whether it's multi-threaded client/server transactions, or a need for air-conditioned office space, and if your needs are substantively different - e.g. image manipulation or a government art gallery - then trying to use a poorly suited framework is often worse than using none. Indeed, if the evolving needs of your system pass beyond what the framework supports, you may find your options for customising the framework itself are insufficient, or the design you adopted to use it just doesn't suit the re-architected solution you later need. For example, a single-threaded framework encourages you to program in a non-threadsafe fashion, which may be a nightmare to make efficiently multi-threaded post-facto.
They're designed by observing that a large number of programs require a similar solution architecture, and abstracting that into a canned solution framework with facilities for those app-specific customisations.
How they're used depends on the problems they're trying to solve. A framework for transaction dispatch/handling will typically define a way to list IP ports to listen on, nominate functions to be called when connections are made and new data arrives, register timer events that call back to arbitrary functions. XML document, image manipulation, A.I. etc. frameworks would be totally different.... The whole idea is that they each provide a style of use that is simple and intuitive for the applications that might wish to use them.
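As an illustration of that style of use, here is an invented, stubbed-out facade for a hypothetical transaction-dispatch framework; nothing below is a real library, it only shows the register-callbacks-then-hand-over-control shape:

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <utility>

class DispatchFramework {      // hypothetical framework facade, stub bodies
public:
    void listenOn(int port) { port_ = port; }
    void onData(std::function<void(int, const std::string&)> h) { onData_ = std::move(h); }
    void every(std::chrono::milliseconds, std::function<void()> t) { timer_ = std::move(t); }
    void run() { /* a real framework would run a select/epoll loop here,
                    invoking the registered callbacks as events arrive */ }
private:
    int port_ = 0;
    std::function<void(int, const std::string&)> onData_;
    std::function<void()> timer_;
};

int main() {
    DispatchFramework fw;
    fw.listenOn(8080);
    fw.onData([](int /*clientFd*/, const std::string& /*bytes*/) {
        // application-specific handling plugged into the framework's structure
    });
    fw.every(std::chrono::seconds(1), [] { /* periodic housekeeping */ });
    fw.run();   // inversion of control: from here on, the framework calls us
}
```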
A big hassle with many frameworks is that they assume ownership of the applications that use them, and relegate the application to a secondary role of filling in some callbacks. If the application needs to use several frameworks, or even one framework with some extra libraries doing e.g. asynchronous communications, then the frameworks may make that very difficult. A good framework is designed more like a set of libraries that the client can control, but need not be confined by. Good frameworks are rare.
More often than not, a framework (as opposed to "just" a library or set of libraries), in OOP languages (including C++), implies a software subsystem that, among other things, supplies classes you're supposed to inherit from, overriding certain methods to specialize the class's functionality for your application's needs, in your application code. If it was just some collection of functions and typedefs it should more properly be called a library, rather than a framework.
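A minimal sketch of that inherit-and-override shape (Application and its hooks are invented names, not a real framework):

```cpp
#include <iostream>

class Application {                       // supplied by the framework
public:
    virtual ~Application() = default;
    int exec() {                          // framework-owned control flow
        onStartup();
        // ... the framework's event loop would run here ...
        onShutdown();
        return 0;
    }
protected:
    virtual void onStartup() {}           // hooks you override
    virtual void onShutdown() {}
};

class MyApp : public Application {        // written in your application code
protected:
    void onStartup() override { std::cout << "app-specific setup\n"; }
};

int main() { return MyApp{}.exec(); }
```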
I hope this addresses your points 1 and 3. Regarding point 2, ideally, the designers of a framework have a lot of experience designing applications in a certain area, and they "distill" their experience and skill into a framework that lets (possibly less-experienced) developers build their own applications in that area more easily and expeditiously. In the real world, of course, such ideals are not always followed.
With a tool like CppDepend you can analyze any C++ framework, reverse-engineer its design in a minute, and also get an accurate idea of the overall code quality of the framework.
An application framework (regardless of language) is a library that attempts to provide a complete framework within which you plug in functionality for your specific application.
The idea is that things like web applications and GUI applications typically require quite a bit of boilerplate to get working at all. The application framework provides all that boilerplate code, and some sort of organization (typically some variation of model-view-controller) where you can plug in the logic specific to your particular application, and it handles most of the other stuff like automatically routing messages and such as needed.

Testing system where App-level and Request-level IoC containers exist

My team is in the process of developing a system where we're using Unity as our IoC container; and to provide NHibernate ISessions (Units of work) over each HTTP Request, we're using Unity's ChildContainer feature to create a child container for each request, and sticking the ISession in there.
We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy.
Right now, the application-level container itself is living in the HttpApplication, and the Request container lives in HttpContext.Current. Obviously, neither exists during testing.
The pain increased when we decided to use Service Location from our Domain layer to "lazily" resolve dependencies from the container. So now we have more components wanting to talk to the container.
We are also using MSTest, which presents some concurrency dilemmas during testing as well.
So we're wondering, what do the bright folks out there in the SO community do to tackle this predicament?
How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during testing has the flexibility to build up and tear down the containers consistently, and have the Service Location bits get to those precise containers?
I hope the question is clear, thanks!
Thanks for the replies. I agree that using Service Location is not the optimal approach - but it does seem necessary for this situation. The scenario is that we need our Entities to resolve dependencies, on-demand, only when needed - for business rule validation. Forcing all our entities, on being materialized by NHibernate, to undergo constructor injection, doesn't seem appropriate, at a minimum for performance reasons.
We're considering a solution where the containers are stored either in the HttpApplication/HttpContext at real runtime, and in static/ThreadStatic fields during test. StructureMap has a similar approach baked-in. Any thoughts on this kind of solution? Thanks!
Also, this isn't necessarily integration testing (although it may play into that too). For example, we want to unit-test a particular entity's business rule behavior--during which this scenario will unfold.
I am definitely open to the Http object abstractions - I've used them and loved them in MVC; how can one get them going outside of MVC?
DI Containers should not be necessary during unit testing. Rather, a DI Container is used at application startup time to resolve the application's dependency graph, and then get out of the way.
However, it sounds like you have applied the Service Locator anti-pattern, and you are now feeling the pain of that. Unfortunately, there's no easy way out of this.
You obviously can't rely on the real HTTP Context during unit testing, as it will not be available to you in that environment, so you will need to hide it away behind interfaces. If you are using .NET 3.5 SP1, you might be able to use the abstractions introduced in System.Web.Abstractions, but otherwise, you can extract such abstractions yourself.
Once you have introduced these Seams into your system, you can use proper Dependency Injection (preferably Constructor Injection) to inject them into your consuming classes.
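Although this question is about .NET and Unity, the advice is language-agnostic. Here is a minimal C++-flavored sketch of the same idea (all names invented): the ambient context hides behind an interface and is constructor-injected, so tests never need a container or an HTTP pipeline:

```cpp
#include <memory>
#include <string>

class IRequestContext {                   // the Seam over HttpContext-like state
public:
    virtual ~IRequestContext() = default;
    virtual std::string currentUser() const = 0;
};

class BusinessRule {
public:
    explicit BusinessRule(std::shared_ptr<IRequestContext> ctx)
        : ctx_(std::move(ctx)) {}         // the dependency is explicit and visible
    bool allowed() const { return ctx_->currentUser() != "anonymous"; }
private:
    std::shared_ptr<IRequestContext> ctx_;
};

// In a unit test, just hand the rule a fake instead of asking a locator:
class FakeContext : public IRequestContext {
public:
    std::string currentUser() const override { return "test-user"; }
};

int main() {
    BusinessRule rule(std::make_shared<FakeContext>());
    return rule.allowed() ? 0 : 1;
}
```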
In any case, following Test-Driven Development can very effectively prevent this type of tight coupling from being introduced in the first place.

Avoiding Inheritance Madness

So, I have an API that I need to implement in to an existing framework. This API manages interactions with an external server. I've been charged with coming up with a way to create an easily repeatable "pattern," so that if people are working on new projects in the given framework they have a simple solution for integrating the API.
My first idea was to create a class for your "main" class of the framework to extend, one that would provide all the virtual functions necessary to interact with the API. However, my boss vetoed this, since the existing framework is "inheritance heavy" and he wants to avoid adding to the madness. I obviously can't encapsulate my API, because that is what the API itself is supposed to be doing, and doing so might hide functionality.
Short of asking futures developers to copy and paste my example, what do I do?
If your boss is hostile to inheritance, try aggregation (has-a relationships rather than inheritance's is-a relationship). Assuming you interface with the API in question via an object, maybe you can just keep that object in a property of your framework's "main" class, so you'd interact with it like main->whateverapi->doWhatever(). If the API isn't object-implemented, or you need to load a lot of functionality specific to your environment onto it, that points toward making your own class that goes into that role and relates to the third-party API however it needs to. Yeah, this basically means you're building an API to the API. Aggregation allows you to avoid the masking-functionality problem, though; even if you do have to add an intermediary layer, you can expose the original API as main->yourobject->originalapi and not have to worry about inheritance mucking things up.
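A minimal sketch of that has-a arrangement (all names invented): the framework's "main" class owns the API object and exposes it directly, so nothing is masked and no inheritance is needed:

```cpp
#include <memory>

class ExternalServerApi {      // stand-in for the third-party API object
public:
    void doWhatever() {}
};

class Main {                   // the framework's "main" class
public:
    ExternalServerApi& api() { return *api_; }    // has-a, exposed as-is
private:
    std::unique_ptr<ExternalServerApi> api_ = std::make_unique<ExternalServerApi>();
};

int main() {
    Main m;
    m.api().doWhatever();      // the original API stays fully visible
}
```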
Sounds to me like what your boss is having a problem with is the framework part of this. There is an important distinction between a framework and an API: in order to code to a framework, you must have a good understanding of it and how it fits within your overall development - much more of a holistic view - and adding to frameworks should never be taken lightly.
APIs, on the other hand, are just an interface to your application/framework, usually just a library of utility calls. I can't see that he would have a problem with inheritance or aggregation in a library; it seems to me that the issue would be creating additional complexity in the framework itself, i.e. requiring developers to extend the main class of the framework is much more onerous than creating a stand-alone API library that people can just call into (if they choose). I would be willing to bet that your boss would not care about (in fact, would probably support) the library itself containing inheritance.
Like the answer from chaos above, I was going to suggest aggregation as an alternative to inheritance. You can wrap the API and make it configurable either via properties or via dependency injection.
Also for a related topic see my answer to "How do the Proxy, Decorator, Adaptor, and Bridge Patterns differ?" for a run-down on other "wrapper" design patterns.