Designing VS2012 project interaction: should I use an interface or a composite class across different projects?

I have the following design problem. For example, I have 3 Visual Studio projects. They are called "TheGame", "OpenGLDevice" and "DirectXDevice". "OpenGLDevice" and "DirectXDevice" are also the names of classes inside those projects. As you may have guessed, TheGame handles the logic of the application and uses one of the other two projects to draw things on the screen. My question is about how TheGame should interact with the two projects. Here is what I could think of:
Create a "DeviceContainer" base class, and a "OpenGLDeviceContainer" and "DirectXDeviceContainer" derived private inner classes in TheGame. The derived classes each contains a private member field of type OpenGLDevice or DirectXDevice. DeviceContainer would basically serve as wrappers.
Create a "IGraphicsDevice" interface in a fourth project and have all three projects reference it. Make OpenGLDevice and DirectXDevice both implement that interface.
I like approach 1 because I wouldn't need to create a fourth project and both OpenGLDevice and DirectXDevice would have no dependencies.
First of all, which of these two would be the best approach, and is there a third solution which would be even better?
Second, if I do choose to use the interface approach, is there a better way than to declare the interface in a fourth project?
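To make approach 2 concrete: a C++ "interface" is just a class with only pure virtual functions, so a minimal sketch might look like the following (clear() and drawFrame() are hypothetical operations, since the question doesn't name any):

// IGraphicsDevice.h -- the shared header (fourth project or common include directory)
class IGraphicsDevice
{
public:
    virtual ~IGraphicsDevice() {}
    virtual void clear() = 0;     // hypothetical operation
    virtual void drawFrame() = 0; // hypothetical operation
};

// In the OpenGLDevice project:
class OpenGLDevice : public IGraphicsDevice
{
public:
    virtual void clear() { /* glClear(...) */ }
    virtual void drawFrame() { /* GL draw calls */ }
};

// TheGame codes against the interface only:
void renderLoop(IGraphicsDevice & device)
{
    device.clear();
    device.drawFrame();
}

Note that since such an interface is header-only, a shared include directory that all three projects use can stand in for the fourth project.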

Editing T4 POCO template to implement custom interface

I am using the POCO generator with EF4 and I am wondering if it is possible to edit the T4 template to force all of my entity classes to implement a custom interface. Since the POCOs get blown away and recreated each time the custom tool is run, I would have to add this upon each update - I would sure like to avoid that.
I realize I could create partial classes for each POCO and implement the interface there, but I was hoping to avoid all that boilerplate code.
Any suggestions would be welcome.
I think I am getting closer to a solution. I am editing the .tt template by adding the implementation to the signature that is generated.
<#=Accessibility.ForType(entity)#> <#=code.SpaceAfter(code.AbstractOption(entity))#>partial class <#=code.Escape(entity)#> : IEntity<#=code.StringBefore(" , ", code.Escape(entity.BaseType))#>
But I have hit a bit of a snag. Some of my entities have base classes (table inheritance) that I designated in the edmx design. I need to force all the entities to implement an interface called IEntity. The IEntity contract has no methods, so there really is nothing to implement; I just need to rely on all of the entities having a common base. This is due to a completely separate implementation of a custom validation framework. I am getting the proper signatures for most of the entities; however, the entities that already have a base class are throwing a wobbly, because you can't implement an interface before you inherit a base class: ": IEntity, BaseClass" is not allowed. I need to swap those but am not sure how I would pull that off in the template.
On perusing the code in the CodeGenerationTools class that the T4 template uses (found in the include file EF.Utility.CS.ttinclude), I came across the function StringAfter(string value, string append). The answer is therefore quite simple: since you state all your entities have to implement IEntity, the following should do the trick:
<#=Accessibility.ForType(entity)#> <#=code.SpaceAfter(code.AbstractOption(entity))#>partial class <#=code.Escape(entity)#> : <#=code.StringAfter(code.Escape(entity.BaseType), "," )#> IEntity
In fact, I know it does because I've tested it :-)
After the T4 template is added to your application, it becomes part of your app, and as with any other part of the app you can do whatever you want with it. If for some reason you don't want to modify the VS-added template, make a copy of it and update the copy to include only the interface implementation. This second approach would produce another set of partial files with the custom interface implemented.
Don't know if this is near what you need, but...
I've created a NuGet package that scaffolds tiers from T4 templates.
There are default templates for all interfaces (Repository Pattern and UnitOfWork), but you can edit these templates yourself and re-scaffold your system.
To keep it short: you just install the package (Install-Package CodePlanner), define your domain model, and then run "Scaffold CodePlanner.ScaffoldAll".
It's open source (codeplanner.codeplex.com)
Demo: http://average-uffe.blogspot.com/2011/11/codeplanner-011-released-on-nuget-and.html
Edit: The CodePlanner package is built for MVC3!
Regards
Uffe

How should I create classes in an ATL project?

I'm writing an ATL project and I wonder how I should create classes in it.
Right now I have one class created by Add/Class/ATL Simple Object. I want to divide it into smaller classes, but methods from those classes should use CComPtr and take CComPtr as arguments. I can't create a 'simple' C++ class because I don't have CComPtr there.
Should I create ATL classes with the ATL Simple Object Wizard and then use the interface of each class to call methods? Like here:
CComPtr<ITestAtlClass> tptr;
tptr.CoCreateInstance(CLSID_TestAtlClass); // create the COM object
tptr->test();                              // call through the interface
And should I add all public methods via Class View/ITestAtlClass/Add/Add Method?
What about constructors? Must I initialize my class only through properties (added via Class View/ITestAtlClass/Add/Add Property)? And pass every COM object via the IUnknown interface?
Can somebody tell me how this should be done in an ATL project? I will use these smaller classes internally (nobody will create them outside my DLL), just to make my code more readable.
I don't understand your comment that you can't use CComPtr from a simple C++ class. Can you please clarify?
I see two strategies:
Build a clean C++ object model that solves the problem, and then wrap it in a thin facade layer of one or more COM objects.
Use ATL classes throughout, and use CComObject<> and derivatives to instantiate and maintain these without the overhead of CoCreateInstance and the limitations of only using public interfaces.
The first one is usually much nicer, but if you're building a data-heavy object model, the second can be a useful technique.
If you have an ATL COM class called CVehicle, that derives from CComObjectRootEx<> and friends, you can instantiate it like so:
CComObject<CVehicle>* vehicle = NULL;
CComObject<CVehicle>::CreateInstance(&vehicle);
vehicle->AddRef();
// To get at any of its interfaces, use:
CComPtr<ICar> car = 0;
vehicle->QueryInterface(&car);
// And to delete object, use:
vehicle->Release();
There are also variations on CComObject<>, e.g. CComObjectStack<>, that use different allocation and reference counting strategies.
As you can see, this is pretty messy. If you can explain what you mean by your comment on not being able to use CComPtr, maybe I can expand on that.
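For what it's worth, one way to tame the manual AddRef/Release pairing is to hand the reference to a CComPtr as soon as the object exists; a minimal sketch reusing the CVehicle/ICar names from above:

CComObject<CVehicle>* raw = NULL;
HRESULT hr = CComObject<CVehicle>::CreateInstance(&raw); // refcount starts at 0
if (SUCCEEDED(hr))
{
    raw->AddRef();                  // temporary reference so a failed QI can't leak
    CComPtr<ICar> car;
    hr = raw->QueryInterface(&car); // car now co-owns the object
    raw->Release();                 // drop the temporary reference
    if (SUCCEEDED(hr))
    {
        // use car; it releases automatically when it goes out of scope
    }
}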

Putting all code of a module behind 1 interface. Good idea or not?

I have several modules (mainly C) that need to be redesigned (using C++). Currently, the main problems are:
many parts of the application rely on the functions of the module
some parts of the application might want to overrule the behavior of the module
I was thinking about the following approach:
redesign the module so that it has a clear, modern class structure (using interfaces, inheritance, STL containers, ...)
writing a global module interface class that can be used to access any functionality of the module
writing an implementation of this interface that simply maps the interface methods to the correct methods of the correct classes in the module
Other modules in the application that currently use the C functions of the module directly should be passed [an implementation of] this interface. That way, if the application wants to alter the behavior of one of the functions of the module, it simply inherits from this default implementation and overrules any function that it wants.
An example:
Suppose I completely redesign my module so that I have classes like: Book, Page, Cover, Author, ... All these classes have lots of different methods.
I make a global interface, called ILibraryAccessor, with lots of pure virtual methods
I make a default implementation, called DefaultLibraryAccessor, that simply forwards all methods to the correct method of the correct class (see the code sketch after this example), e.g.
DefaultLibraryAccessor::printBook(book) calls book->print()
DefaultLibraryAccessor::getPage(book,10) calls book->getPage(10)
DefaultLibraryAccessor::printPage(page) calls page->print()
Suppose my application has 3 kinds of windows
The first one allows all functionality and as an application I want to allow that
The second one also allows all functionality (internally), but from the application I want to prevent printing separate pages
The third one also allows all functionality (internally), but from the application I want to prevent printing certain kinds of books
When constructing the window, the application passes an implementation of ILibraryAccessor to the window
The first window will get the DefaultLibraryAccessor, allowing everything
I will pass a special MyLibraryAccessor to the second window, and in MyLibraryAccessor, I will overrule the printPage method and let it fail
I will pass a special AnotherLibraryAccessor to the third window, and in AnotherLibraryAccessor, I will overrule the printBook method and check the type of book before I will call book->print().
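A minimal code sketch of this setup, with placeholder bodies standing in for the module classes:

class Page
{
public:
    void print() { /* render this page */ }
};

class Book
{
public:
    void print() { /* render the whole book */ }
    Page * getPage(int number) { /* look up page 'number' */ return 0; }
};

// The single global interface to the module.
class ILibraryAccessor
{
public:
    virtual ~ILibraryAccessor() {}
    virtual void printBook(Book * book) = 0;
    virtual Page * getPage(Book * book, int number) = 0;
    virtual void printPage(Page * page) = 0;
};

// Default implementation: forwards everything to the right class.
class DefaultLibraryAccessor : public ILibraryAccessor
{
public:
    virtual void printBook(Book * book) { book->print(); }
    virtual Page * getPage(Book * book, int number) { return book->getPage(number); }
    virtual void printPage(Page * page) { page->print(); }
};

// The second window's accessor: everything works except printing single pages.
class MyLibraryAccessor : public DefaultLibraryAccessor
{
public:
    virtual void printPage(Page *) { /* disallowed: fail, throw or ignore */ }
};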
The advantage of this approach is that, as shown in the example, an application can overrule any method it wants to overrule. The disadvantage is that I get a rather big interface, and the class structure is completely lost for all modules that want to access this other module.
Good idea or not?
You could represent the class structure with nested interfaces. E.g. instead of DefaultLibraryAccessor::printBook(book), have DefaultLibraryAccessor::Book::print(book). Otherwise it looks like a good design to me.
Maybe look at the design pattern called "Facade". Use one facade per module. Your approach seems good.
ILibraryAccessor sounds like a known anti-pattern, the "god class".
Your individual windows are probably better off inheriting and overriding at Book/Page/Cover/Author level.
The only thing I'd worry about is a loss of granularity, partly addressed by suszterpatt previously. Your implementations might end up being rather heavyweight and inflexible. If you're sure that you can predict the future use of the module at this point then the design is probably ok.
It occurs to me that you might want to keep the interface fine-grained, but find some way of injecting this kind of display-specific behaviour rather than trying to incorporate it at top level.
If you have n methods in your interface class, and there are m behaviors per method, you get m*(nC1 + nC2 + nC3 + ... + nCn) implementations of your interface (I hope I got my math right :) ). Compare this with the m*n implementations you need if you were to have a single interface per function. And this method has added flexibility, which is more important. So, no - I don't think a single interface would do. But you don't have to be extreme about it.
EDIT: I am sure the math is wrong. :(

Flexible application configuration in C++

I am developing a C++ application used to simulate a real-world scenario. Based on this simulation, our team is going to develop, test and evaluate different algorithms working within such a real-world scenario.
We need the possibility to define several scenarios (they might differ in a few parameters, but a future scenario might also require creating objects of new classes) and the possibility to maintain a set of algorithms (which is, again, a set of parameters, but also the definition of which classes are to be created). Parameters are passed to the classes in the constructor.
I am wondering which is the best way to manage all the scenario and algorithm configurations. It should be easily possible to have one developer work on one scenario with "his" algorithm and another developer working on another scenario with "his" different algorithm. Still, the parameter sets might be huge and should be "sharable" (if I defined a set of parameters for a certain algorithm in Scenario A, it should be possible to use the algorithm in Scenario B without copy&paste).
It seems like there are two main ways to accomplish my task:
Define a configuration file format that can handle my requirements. This format might be XML based or custom. As there is no C#-like reflection in C++, it seems like I have to update the config-file parser each time a new algorithm class is added to the project (in order to convert a string like "MyClass" into a new instance of MyClass). I could create a name for every setup and pass this name as a command line argument.
Pros: no compilation required to change a parameter and re-run; I can easily store the whole config file with the simulation results.
Cons: seems like a lot of effort, especially because I am using a lot of template classes that have to be instantiated with given template arguments; no IDE support for writing the file (at least without creating a whole XSD, which I would have to update every time a parameter/class is added).
Wire everything up in C++ code. I am not completely sure how I would do this to separate all the different creation logic but still be able to reuse parameters across scenarios. I think I'd also try to give every setup a (string) name and use this name to select the setup via command line arg.
Pros: type safety, IDE support, no parser needed.
Cons: how can I easily store the setup with the results (maybe some serialization?), and compilation is needed after every parameter change.
Now here are my questions:
- What is your opinion? Did I miss important pros/cons?
- Did I miss a third option?
- Is there a simple way to implement the config file approach that gives me enough flexibility?
- How would you organize all the factory code in the second approach? Are there any good C++ examples for something like this out there?
Thanks a lot!
There is a way to do this without templates or reflection.
First, you make sure that all the classes you want to create from the configuration file have a common base class. Let's call this MyBaseClass and assume that MyClass1, MyClass2 and MyClass3 all inherit from it.
Second, you implement a factory function for each of MyClass1, MyClass2 and MyClass3. The signatures of all these factory functions must be identical. An example factory function is as follows.
MyBaseClass * create_MyClass1(Configuration & cfg)
{
    // Retrieve config variables and pass them as parameters
    // to the constructor
    int age = cfg.lookupInt("age");
    std::string address = cfg.lookupString("address");
    return new MyClass1(age, address);
}
Third, you register all the factory functions in a map.
typedef MyBaseClass* (*FactoryFunc)(Configuration &);
std::map<std::string, FactoryFunc> nameToFactoryFunc;
nameToFactoryFunc["MyClass1"] = &create_MyClass1;
nameToFactoryFunc["MyClass2"] = &create_MyClass2;
nameToFactoryFunc["MyClass3"] = &create_MyClass3;
Finally, you parse the configuration file and iterate over it to find all the entries that specify the name of a class. When you find such an entry, you look up its factory function in the nameToFactoryFunc table and invoke the function to create the corresponding object.
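That last lookup step might look something like this (a sketch assuming the Configuration type above, with the "class" key name being just an assumption):

std::string className = cfg.lookupString("class"); // e.g. "MyClass2"
std::map<std::string, FactoryFunc>::const_iterator it = nameToFactoryFunc.find(className);
if (it != nameToFactoryFunc.end())
{
    MyBaseClass * obj = it->second(cfg); // invoke the registered factory
    // ... use obj through the MyBaseClass interface ...
    delete obj;
}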
If you don't use XML, it's possible that boost::spirit could short-circuit at least some of the problems you are facing. Here's a simple example of how config data could be parsed directly into a class instance.
I found this website with a nice template supporting factory which I think will be used in my code.
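For the curious, such template-supported factories usually boil down to something like this minimal sketch (all names here are my own illustration, not taken from that site):

#include <map>
#include <string>

template <typename Base>
class GenericFactory
{
public:
    typedef Base* (*Creator)();

    static GenericFactory & instance()
    {
        static GenericFactory factory; // one registry per base class
        return factory;
    }
    void registerCreator(const std::string & name, Creator creator)
    {
        creators_[name] = creator;
    }
    Base * create(const std::string & name) const
    {
        typename std::map<std::string, Creator>::const_iterator it = creators_.find(name);
        return it == creators_.end() ? 0 : it->second();
    }
private:
    std::map<std::string, Creator> creators_;
};

// A trivial creator that works for any derived class:
template <typename Base, typename Derived>
Base * newInstance() { return new Derived(); }

Registration is then a one-liner per class, e.g. GenericFactory<MyBaseClass>::instance().registerCreator("MyClass1", &newInstance<MyBaseClass, MyClass1>);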

How Do You Create Test Objects For Third Party Legacy Code

I have a code base where many of the classes I implement derive from classes that are provided by other divisions of my company. Working with these other divisions is often like working with third-party middleware vendors.
I'm trying to write test code without modifying these base classes. However, there are issues with creating meaningful test objects due to the lack of interfaces:
//ACommonClass.h
#include "globalthermonuclearwar.h" //which contains deep #include dependencies...
#include "tictactoe.h" //...and need to exist at compile time to get into test...
class Something //which may or may not inherit from another class similar to this...
{
public:
virtual void fxn1(void); //which often calls into many other classes, similar to this
//...
int data1; //will be the only thing I can test against, but is often meaningless without fxn1 implemented
//...
};
I'd normally extract an interface and work from there, but as these are "Third Party", I can't commit these changes.
Currently, I've created a separate file that holds fake implementations for functions that are defined in the third-party supplied base class headers, on a need-to-know basis, as described in the book "Working Effectively with Legacy Code".
My plan was to continue to use these definitions and provide alternative test implementations for each third party class that I needed:
//SomethingRequiredImplementations.cpp
#include "ACommonClass.h"
void CGlobalThermoNuclearWar::Simulate(void) {} // fake this and all other required functions...
// fake implementations for otherwise undefined functions in globalthermonuclearwar.h's #include files...
void Something::fxn1(void) { data1 = blah(); } //test specific functionality.
But before I start doing that, I was wondering if anyone has tried providing actual objects on a code base similar to mine, which would allow creating new test-specific classes to use in place of actual third-party classes.
Note all code bases in question are written in C++.
Mock objects are suitable for this kind of task. They allow you to simulate the existence of other components without needing them to be present. You simply define the expected input and output in your tests.
Google has a good mocking framework for C++ (Google Mock).
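For instance, since the question's Something declares fxn1 as virtual, a Google Mock double can be tiny (a sketch using the classic MOCK_METHOD0 syntax, and assuming the header can be made to compile in the test build):

#include <gmock/gmock.h>
#include "ACommonClass.h"

class MockSomething : public Something
{
public:
    MOCK_METHOD0(fxn1, void()); // replaces virtual void fxn1(void)
};

// In a test body:
//   MockSomething mock;
//   EXPECT_CALL(mock, fxn1()).Times(1);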
I'm running into a very similar problem at the moment. I don't want to add a bunch of interfaces that are only there for the purpose of testing, so I can't use any of the existing mock object libraries. To get around this I do the same thing: I create a different file with fake implementations, and have my tests link the fake behaviour while the production code links the real behaviour.
What I wish I could do at this point, is take the internals of another mock framework, and use it inside my fake objects. It would look a little something like this:
Production.h
class ConcreteProductionClass { // regular everyday class
protected:
    ConcreteProductionClass(); // I've found the 0-arg constructor useful
public:
    void regularFunction(); // regular function that I want to mock
};
Mock.h
class MockProductionClass
    : public ConcreteProductionClass
    , public ClassThatLetsMeSetExpectations
{
    friend class ConcreteProductionClass;
    MockTypes membersNeededToSetExpectations;
public:
    MockProductionClass() : ConcreteProductionClass() {}
};

// Link-time substitution: in the test build, this definition stands in
// for the production one.
void ConcreteProductionClass::regularFunction() {
    membersNeededToSetExpectations.PassOrFailTheTest();
}
ProductionCode.cpp
void doSomething(ConcreteProductionClass& c) { // by reference, so a mock isn't sliced
    c.regularFunction();
}
Test.cpp
TEST(myTest) {
    MockProductionClass m;
    m.SetExpectationsAndReturnValues();
    doSomething(m);
    ASSERT(m.verify());
}
The most painful part of all this is that the other mock frameworks are so close to this, but don't do it exactly, and the macros are so convoluted that it's not trivial to adapt them. I've begun looking into this on my spare time, but it's not moving along very quickly. Even if I got my method working the way I want, and had the expectation setting code in place, this method still has a couple drawbacks, one of them being that your build commands can get to be kind of long if you have to link against a lot of .o files rather than one .a, but that's manageable. It's also impossible to fall through to the default implementation, since we're not linking it. Anyway, I know this doesn't answer the question, or really even tell you anything you don't already know, but it shows how close the C++ community is to being able to mock classes that don't have a pure virtual interface.
You might want to consider mocking instead of faking as a potential solution. In some cases you may need to write wrapper classes that are mockable if the original classes aren't. I've done this with framework classes in C#/.Net, but not C++ so YMMV.
If I have a class that I need under test that derives from something I can't (or don't want to) run under test I'll:
Make a new logic-only class.
Move the code-i-wanna-test to the logic class.
Use an interface to talk back to the real class to interact with the base class and/or things I can't or won't put in the logic.
Define a test class using that same interface. This test class could have nothing but noops or fancy code that simulates the real classes.
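A bare-bones sketch of that arrangement (every name below is invented for illustration):

// The interface the logic-only class uses to talk back to the real class.
class IHardwareAccess
{
public:
    virtual ~IHardwareAccess() {}
    virtual int readSensor() = 0;
};

// The extracted logic: no third-party base class, so it tests cleanly.
class WidgetLogic
{
public:
    explicit WidgetLogic(IHardwareAccess & hw) : hw_(hw) {}
    bool sensorIsHot() { return hw_.readSensor() > 100; }
private:
    IHardwareAccess & hw_;
};

// The test-side implementation: nothing but canned values.
class StubHardware : public IHardwareAccess
{
public:
    StubHardware() : value(0) {}
    int value;
    virtual int readSensor() { return value; }
};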
If I have a class that I just need to use in testing, but using the real class is a problem (dependencies or unwanted behaviors):
I'll define a new interface that looks like all of the public methods I need to call.
I'll create a mock version of the object that supports that interface for testing.
I'll create another class that is constructed with a "real" version of that class. It also supports that interface. All interface calls are forwarded to the real object's methods.
I'll only do this for methods I actually call - not ALL the public methods. I'll add to these classes as I write more tests.
For example, I wrap MFC's GDI classes like this to test Windows GDI drawing code. Templates can make some of this easier - but we often end up not doing that for various technical reasons (stuff with Windows DLL class exporting...).
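To illustrate the wrapping idea with a GDI-flavoured example (a sketch only: IDrawSurface and its single method are invented, and a real wrapper would grow one forwarding method per call the tests exercise):

// Only the calls the tests actually make go on the interface.
class IDrawSurface
{
public:
    virtual ~IDrawSurface() {}
    virtual void lineTo(int x, int y) = 0;
};

// Production side: forwards to the real MFC device context.
class RealDrawSurface : public IDrawSurface
{
public:
    explicit RealDrawSurface(CDC & dc) : dc_(dc) {}
    virtual void lineTo(int x, int y) { dc_.LineTo(x, y); }
private:
    CDC & dc_;
};

// Test side: records calls so tests can assert on drawing behaviour.
class MockDrawSurface : public IDrawSurface
{
public:
    MockDrawSurface() : lineCalls(0) {}
    int lineCalls;
    virtual void lineTo(int, int) { ++lineCalls; }
};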
I'm sure all this is in Feathers' Working Effectively with Legacy Code book - and what I'm describing has actual terms. Just don't make me pull the book off the shelf...
One thing you did not indicate in your question is the reason why your classes derive from base classes from the other division. Is the relationship really an IS-A relationship?
Unless your classes need to be used by a framework, you could consider favoring delegation over inheritance. Then you can use dependency injection to provide your class with a mock of their class in the unit tests.
Otherwise, an idea would be to write a script to extract and create the interface you need from the header they provide, and integrate this into the compilation process so your unit test can be checked in.