Editing the T4 POCO template to implement a custom interface

I am using the POCO generator with EF4 and I am wondering whether it is possible to edit the T4 template to force all of my entity classes to implement a custom interface. Since the POCOs get blown away and recreated each time the custom tool is run, I would have to re-add the interface after every update, which I would really like to avoid.
I realize I could create partial classes for each poco and implement the interface there, but I was hoping to avoid all that boilerplate code.
Any suggestions would be welcome.
I think I am getting closer to a solution. I am editing the .tt template by adding the implementation to the signature that is generated.
<#=Accessibility.ForType(entity)#> <#=code.SpaceAfter(code.AbstractOption(entity))#>partial class <#=code.Escape(entity)#> : IEntity<#=code.StringBefore(" , ", code.Escape(entity.BaseType))#>
But I have hit a bit of a snag. Some of my entities have base classes (table inheritance) that I designated in the edmx design. I need to force all the entities to implement an interface called IEntity. The IEntity contract has no methods, so there really is nothing to implement; I just need to rely on all of the entities having a common base. This is due to a completely separate implementation of a custom validation framework. I am getting the proper signatures for most of the entities; however, the entities that already have a base class are throwing a wobbly because you can't list an interface before the base class: ": IEntity, BaseClass" is not allowed. I need to swap those but am not sure how I would pull that off in the template.

On perusing the code in the CodeGenerationTools class that the T4 template uses (found in the include file EF.Utility.CS.ttinclude), I came across the function StringAfter(string value, string append). The answer is therefore quite simple: since you state all your entities have to implement IEntity, the following should do the trick:
<#=Accessibility.ForType(entity)#> <#=code.SpaceAfter(code.AbstractOption(entity))#>partial class <#=code.Escape(entity)#> : <#=code.StringAfter(code.Escape(entity.BaseType), "," )#> IEntity
In fact, I know it does because I've tested it :-)

After the T4 template is added to your application, it becomes part of your app, and like any other part of the app you can do whatever you want with it. If for some reason you don't want to modify the VS-added template, make a copy of it and strip the copy down so it generates only the interface implementation. This second approach would produce another set of partial class files in which the custom interface is implemented.

Don't know if this is near what you need, but...
I've created a NuGet package that scaffolds tiers from T4 templates.
There are default templates for all interfaces (Repository pattern and UnitOfWork), but you can edit these templates yourself and re-scaffold your system.
To keep it short: you just install the package (Install-Package CodePlanner), define your domain model, and then run "Scaffold CodePlanner.ScaffoldAll".
It's open source (codeplanner.codeplex.com)
Demo: http://average-uffe.blogspot.com/2011/11/codeplanner-011-released-on-nuget-and.html
Edit: The CodePlanner package is built for MVC3!
Regards
Uffe

Related

Testing a custom PolicyViolationHandler in FluentSecurity

I have written a custom policy in FluentSecurity (implement ISecurityPolicy) and a corresponding PolicyViolationHandler by implementing IPolicyViolationHandler. Everything is working perfectly with the policy and the handler, however I'm doing some back-filling by writing some unit tests to test my implementation of IPolicyViolationHandler.Handle(PolicyViolationException exception).
I know I'm doing it backwards writing the test after the implementation (admission to avoid flames).
My question is: Is there a way to generate a PolicyViolationException object as a mock that I can pass in for my test? PolicyViolationException doesn't have any public constructors (so I can't new an object), nor an abstract base, or interface to mock against (using Moq).
I took a look through the API but didn't see anything to generate one. I know I could do some reflection magic to get one, but wanted to check if I was missing something.
In releases up to and including version 2.0-alpha4 this is not possible. However, this issue will be resolved in the upcoming 2.0-beta1 release of FluentSecurity where the constructor will be made public.
https://github.com/kristofferahl/FluentSecurity/commit/09e9b69ef5a297d242f8a813babbeebd47b54818
Thanks for bringing this to my attention!

Flexible application configuration in C++

I am developing a C++ application used to simulate a real-world scenario. Based on this simulation, our team is going to develop, test and evaluate different algorithms working within such a real-world scenario.
We need the ability to define several scenarios (they might differ in a few parameters, but a future scenario might also require creating objects of new classes) and the ability to maintain a set of algorithms (which is, again, a set of parameters but also the definition of which classes are to be created). Parameters are passed to the classes in their constructors.
I am wondering which is the best way to manage all the scenario and algorithm configurations. It should be easily possible to have one developer work on one scenario with "his" algorithm and another developer working on another scenario with "his" different algorithm. Still, the parameter sets might be huge and should be "sharable" (if I defined a set of parameters for a certain algorithm in Scenario A, it should be possible to use the algorithm in Scenario B without copy&paste).
It seems like there are two main ways to accomplish my task:
Define a configuration file format that can handle my requirements. This format might be XML-based or custom. As there is no C#-like reflection in C++, it seems I would have to update the config-file parser each time a new algorithm class is added to the project (in order to convert a string like "MyClass" into a new instance of MyClass). I could create a name for every setup and pass this name as a command-line argument.
Pros: no compilation required to change a parameter and re-run; I can easily store the whole config file with the simulation results.
Cons: seems like a lot of effort, especially because I am using a lot of template classes that have to be instantiated with given template arguments, and there is no IDE support for writing the file (at least not without creating a whole XSD, which I would have to update every time a parameter/class is added).
Wire everything up in C++ code. I am not completely sure how I would do this in a way that separates all the different creation logic but still lets me reuse parameters across scenarios. I think I'd also give every setup a (string) name and use this name to select the setup via a command-line argument.
Pros: type safety, IDE support, no parser needed.
Cons: it is not obvious how to store the setup with the results (maybe some serialization?), and compilation is needed after every parameter change.
Now here are my questions:
- What is your opinion? Did I miss important pros/cons?
- Did I miss a third option?
- Is there a simple way to implement the config-file approach that gives me enough flexibility?
- How would you organize all the factory code in the second approach? Are there any good C++ examples for something like this out there?
Thanks a lot!
There is a way to do this without templates or reflection.
First, you make sure that all the classes you want to create from the configuration file have a common base class. Let's call this MyBaseClass and assume that MyClass1, MyClass2 and MyClass3 all inherit from it.
Second, you implement a factory function for each of MyClass1, MyClass2 and MyClass3. The signatures of all these factory functions must be identical. An example factory function is as follows.
MyBaseClass * create_MyClass1(Configuration & cfg)
{
    // Retrieve config variables and pass them as parameters
    // to the constructor.
    int age = cfg.lookupInt("age");
    std::string address = cfg.lookupString("address");
    return new MyClass1(age, address);
}
Third, you register all the factory functions in a map.
typedef MyBaseClass * (*FactoryFunc)(Configuration &);

std::map<std::string, FactoryFunc> nameToFactoryFunc;
nameToFactoryFunc["MyClass1"] = &create_MyClass1;
nameToFactoryFunc["MyClass2"] = &create_MyClass2;
nameToFactoryFunc["MyClass3"] = &create_MyClass3;
Finally, you parse the configuration file and iterate over it to find all the entries that specify the name of a class. When you find such an entry, you look up its factory function in the nameToFactoryFunc table and invoke the function to create the corresponding object.
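As a rough sketch of that last step (how the class names come out of the parsed configuration is an assumption here; the Configuration, MyBaseClass and FactoryFunc types are the ones from the code above):
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

std::vector<MyBaseClass *> createObjects(
    Configuration & cfg,
    const std::vector<std::string> & classNames,   // names found while parsing
    const std::map<std::string, FactoryFunc> & nameToFactoryFunc)
{
    std::vector<MyBaseClass *> objects;
    for (std::vector<std::string>::const_iterator it = classNames.begin();
         it != classNames.end(); ++it)
    {
        std::map<std::string, FactoryFunc>::const_iterator f =
            nameToFactoryFunc.find(*it);
        if (f == nameToFactoryFunc.end())
            throw std::runtime_error("unknown class in config: " + *it);
        objects.push_back(f->second(cfg));   // invoke the registered factory
    }
    return objects;
}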
If you don't use XML, it's possible that boost::spirit could short-circuit at least some of the problems you are facing. Here's a simple example of how config data could be parsed directly into a class instance.
I found this website with a nice template-based factory which I think I will use in my code.

Mocking non-virtual methods in C++ without editing production code?

I am a fairly new software developer currently working on adding unit tests to an existing C++ project that started years ago. Due to a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules.
Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. E.g. for a method Ping(), which checks if another module is active, I want to have it return true or false based on what kind of test I'm doing. I've been looking into Google Test and Google Mock, and Google Mock does support mocking non-virtual methods. However, the approach described (https://google.github.io/googletest/gmock_cook_book.html#MockingNonVirtualMethods) requires me to "templatize" the original methods to take in either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods.
Basically, the methods I want to mock are in some base class, the modules I want to unit test and create mocks of are derived classes of that base class. There are intermediate modules in between my base Module class and the modules that I want to test.
I would appreciate any advice!
Thanks,
JW
EDIT: A more concrete examples
My base class is, let's say, rootModule; the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module.
In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?
There are some different ways of replacing non-virtual functions. One is to re-declare them and compile a new test executable for each different set of non-virtual functions you'd like to test. That's hardly scalable.
A second option is to make them virtual for test. Most compilers allow you to define a macro on the command line, so you can compile the test build with -DTEST_VIRTUAL=virtual (and let the macro expand to nothing otherwise) to make the functions virtual only when the code is under test.
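A minimal sketch of that compile-time switch (Module and Ping are placeholder names borrowed from the question, not from any real code base):
// module.h
// A test build compiles with -DTEST_VIRTUAL=virtual so the functions become
// virtual and can be overridden by a mock subclass; a normal build leaves
// the macro empty and they stay non-virtual.
#ifndef TEST_VIRTUAL
#define TEST_VIRTUAL
#endif

class Module
{
public:
    TEST_VIRTUAL ~Module() {}   // virtual destructor in test builds only
    TEST_VIRTUAL bool Ping();   // real implementation lives in module.cpp;
                                // a test-only subclass can override it
};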
A third option which may be usable is to use a mocking framework that lets you mock non-virtual functions. I'm the author of HippoMocks (disclaimer with regard to neutrality and so on) and we've recently added the ability to mock plain C functions on x86 platforms. This can be extended to non-virtual member functions with a bit of work and would be what you're looking for. Keep in mind that if your compiler can see both the use and the definition of a function at the same time, it may inline it, and the mocking may fail. That holds in particular for functions that are defined in headers.
If regular C function mocking is sufficient for you, you can use it as it is now.
I would write a Perl/Ruby/Python script to read in the original source tree and write out a mocked source tree in a different directory. You don't have to fully parse C++ in order to replace a function definition.
One approach would be to specify different sources for testing. Say your production target uses rootModule.h and rootModule.cpp. Use different sources for your testing target. You can specify a different header by changing your include path, so that #include "rootModule.h" actually loads unittest/rootModule.h. Then mock rootModule to your heart's content.
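A sketch of what such a test-only header might contain (the class and method names come from the question above; the return type and the canned value are assumptions):
// unittest/rootModule.h -- picked up instead of the real header because
// the test target lists unittest/ first on its include path.
#pragma once
#include <string>

class rootModule
{
public:
    // Canned stand-in for the real, non-virtual GetStatus().
    int GetStatus(const std::string & moduleName)
    {
        (void)moduleName;   // unused by the fake
        return 1;           // whatever "module is healthy" should be for the test
    }
};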

How Do You Create Test Objects For Third Party Legacy Code

I have a code base where many of the classes I implement derive from classes that are provided by other divisions of my company. Working with these other divisions is often like working with third-party middleware vendors.
I'm trying to write test code without modifying these base classes. However, there are issues with creating meaningful test objects due to the lack of interfaces:
//ACommonClass.h
#include "globalthermonuclearwar.h" //which contains deep #include dependencies...
#include "tictactoe.h" //...and need to exist at compile time to get into test...
class Something //which may or may not inherit from another class similar to this...
{
public:
virtual void fxn1(void); //which often calls into many other classes, similar to this
//...
int data1; //will be the only thing I can test against, but is often meaningless without fxn1 implemented
//...
};
I'd normally extract an interface and work from there, but as these are "Third Party", I can't commit these changes.
Currently, I've created a separate file that holds fake implementations for functions that are defined in the third-party supplied base class headers, added on an as-needed basis, as described in the book "Working Effectively with Legacy Code".
My plan was to continue to use these definitions and provide alternative test implementations for each third party class that I needed:
//SomethingRequiredImplementations.cpp
#include "ACommonClass.h"
void CGlobalThermoNuclearWar::Simulate(void) {} // fake this and all other required functions...
// fake implementations for otherwise undefined functions in globalthermonuclearwar.h's #include files...
void Something::fxn1(void) { data1 = blah(); } //test specific functionality.
But before I start doing that, I was wondering if anyone has tried providing actual objects on a code base similar to mine, which would allow creating new test-specific classes to use in place of actual third-party classes.
Note all code bases in question are written in C++.
Mock objects are suitable for this kind of task. They allow you to simulate the existence of other components without needing them to be present. You simply define the expected input and output in your tests.
Google have a good mocking framework for C++.
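If you can introduce an interface (or a thin wrapper) for the collaborator, a Google Mock test might look roughly like this -- the names are invented for illustration, and MOCK_METHOD0 is the older-style macro syntax:
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Hand-written interface over the piece of the third-party class we call.
class PeerModule
{
public:
    virtual ~PeerModule() {}
    virtual bool Ping() = 0;
};

class MockPeerModule : public PeerModule
{
public:
    MOCK_METHOD0(Ping, bool());
};

TEST(ModuleTest, ReportsPeerDown)
{
    MockPeerModule peer;
    EXPECT_CALL(peer, Ping()).WillOnce(::testing::Return(false));
    // ...pass `peer` to the code under test and assert on its behaviour...
    EXPECT_FALSE(peer.Ping());
}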
I'm running into a very similar problem at the moment. I don't want to add a bunch of interfaces that are only there for the purpose of testing, so I can't use any of the existing mock object libraries. To get around this I do the same thing: I create a different file with fake implementations, have my tests link against the fake behaviour, and have the production code link against the real behaviour.
What I wish I could do at this point, is take the internals of another mock framework, and use it inside my fake objects. It would look a little something like this:
Production.h
class ConcreteProductionClass { // regular everyday class
protected:
    ConcreteProductionClass(); // I've found the 0-arg constructor useful
public:
    void regularFunction(); // regular function that I want to mock
};
Mock.h
class MockProductionClass
    : public ConcreteProductionClass
    , public ClassThatLetsMeSetExpectations
{
    friend class ConcreteProductionClass;
    MockTypes membersNeededToSetExpectations;
public:
    MockProductionClass() : ConcreteProductionClass() {}
};

// The wish: the mock-linked definition of the production method reaches
// into the expectation members (this is the part existing frameworks
// don't quite support out of the box).
void ConcreteProductionClass::regularFunction() {
    membersNeededToSetExpectations.PassOrFailTheTest();
}
ProductionCode.cpp
void doSomething(ConcreteProductionClass & c) { // pass by reference so the mock isn't sliced
    c.regularFunction();
}
Test.cpp
TEST(myTest) {
    MockProductionClass m;
    m.SetExpectationsAndReturnValues();
    doSomething(m);
    ASSERT(m.verify());
}
The most painful part of all this is that the other mock frameworks are so close to this, but don't do it exactly, and the macros are so convoluted that it's not trivial to adapt them. I've begun looking into this on my spare time, but it's not moving along very quickly. Even if I got my method working the way I want, and had the expectation setting code in place, this method still has a couple drawbacks, one of them being that your build commands can get to be kind of long if you have to link against a lot of .o files rather than one .a, but that's manageable. It's also impossible to fall through to the default implementation, since we're not linking it. Anyway, I know this doesn't answer the question, or really even tell you anything you don't already know, but it shows how close the C++ community is to being able to mock classes that don't have a pure virtual interface.
You might want to consider mocking instead of faking as a potential solution. In some cases you may need to write wrapper classes that are mockable if the original classes aren't. I've done this with framework classes in C#/.Net, but not C++ so YMMV.
If I have a class that I need under test that derives from something I can't (or don't want to) run under test I'll:
Make a new logic-only class.
Move the code-i-wanna-test to the logic class.
Use an interface to talk back to the real class to interact with the base class and/or things I can't or won't put in the logic.
Define a test class using that same interface. This test class could have nothing but noops or fancy code that simulates the real classes.
If I have a class that I just need to use in testing, but using the real class is a problem (dependencies or unwanted behaviors):
I'll define a new interface that looks like all of the public methods I need to call.
I'll create a mock version of the object that supports that interface for testing.
I'll create another class that is constructed with a "real" version of that class. It also supports that interface; all interface calls are forwarded to the real object's methods.
I'll only do this for methods I actually call - not ALL the public methods. I'll add to these classes as I write more tests.
For example, I wrap MFC's GDI classes like this to test Windows GDI drawing code. Templates can make some of this easier - but we often end up not doing that for various technical reasons (stuff with Windows DLL class exporting...).
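A rough sketch of that interface-plus-forwarding-wrapper arrangement (all names here, including the third-party class, are invented placeholders rather than real MFC/GDI signatures):
// Stand-in for the real third-party class (hypothetical).
class ThirdPartyCanvas
{
public:
    void Line(int, int, int, int) { /* real drawing happens here */ }
};

// Interface exposing only the calls the code under test actually makes.
class IDrawSurface
{
public:
    virtual ~IDrawSurface() {}
    virtual void drawLine(int x1, int y1, int x2, int y2) = 0;
};

// Forwarding wrapper: holds the real object and delegates every call to it.
class RealDrawSurface : public IDrawSurface
{
public:
    explicit RealDrawSurface(ThirdPartyCanvas & canvas) : m_canvas(canvas) {}
    virtual void drawLine(int x1, int y1, int x2, int y2)
    {
        m_canvas.Line(x1, y1, x2, y2);
    }
private:
    ThirdPartyCanvas & m_canvas;
};

// Test double: records calls instead of drawing anything.
class FakeDrawSurface : public IDrawSurface
{
public:
    int linesDrawn;
    FakeDrawSurface() : linesDrawn(0) {}
    virtual void drawLine(int, int, int, int) { ++linesDrawn; }
};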
I'm sure all this is in Feathers' Working Effectively with Legacy Code book, and what I'm describing has actual terms. Just don't make me pull the book off the shelf...
One thing you did not indicate in your question is why your classes derive from base classes belonging to the other division. Is the relationship really an IS-A relationship?
Unless your classes need to be used by a framework, you could consider favoring delegation over inheritance. Then you can use dependency injection to provide your class with a mock of the other division's class in the unit tests.
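A compressed sketch of that delegation-plus-injection idea (every name here is invented for illustration):
#include <string>

// Hand-written interface over the part of the other division's class we use.
class IStatusProvider
{
public:
    virtual ~IStatusProvider() {}
    virtual int GetStatus(const std::string & name) = 0;
};

// Class under test: holds the dependency by reference instead of inheriting it.
class ReportBuilder
{
public:
    explicit ReportBuilder(IStatusProvider & provider) : m_provider(provider) {}
    bool isHealthy() { return m_provider.GetStatus("core") == 0; }
private:
    IStatusProvider & m_provider;
};

// Test double: returns whatever status the test requires.
class StubStatusProvider : public IStatusProvider
{
public:
    int status;
    StubStatusProvider() : status(0) {}
    virtual int GetStatus(const std::string &) { return status; }
};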
Otherwise, an idea would be to write a script to extract and create the interface you need from the header they provide, and to integrate this into the compilation process so your unit tests can be checked in.

Should I use nested classes in this case?

I am working on a collection of classes used for video playback and recording. I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes which do the video decoding and video encoding.
I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine.
The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so that there are no separate files for classes the client is not meant to use, and so that any possible naming conflicts are avoided. I'm not sure about these justifications, though. Nested classes are a new concept to me; I just want to see what programmers think about the issue.
I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure.
My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem.
I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation.
You would use a nested class to create a (small) helper class that's required to implement the main class. Or for example, to define an interface (a class with abstract methods).
In this case, the main disadvantage of nested classes is that this makes it harder to re-use them. Perhaps you'd like to use your VideoDecoder class in another project. If you make it a nested class of VideoPlayer, you can't do this in an elegant way.
Instead, put the other classes in separate .h/.cpp files, which you can then use in your VideoPlayer class. The client of VideoPlayer now only needs to include the file that declares VideoPlayer, and still doesn't need to know about how you implemented it.
One way of deciding whether or not to use nested classes is to think about whether the class plays a supporting role or has a part of its own.
If it exists solely for the purpose of helping another class then I generally make it a nested class. There are a whole load of caveats to that, some of which seem contradictory but it all comes down to experience and gut-feeling.
Sounds like a case where you could use the Strategy pattern.
Sometimes it's appropriate to hide the implementation classes from the user -- in these cases it's better to put them in a foo_internal.h than inside the public class definition. That way, readers of your foo.h will not see what you'd prefer they not be troubled with, but you can still write tests against each of the concrete implementations of your interface.
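A tiny illustration of that split (file, class and function names are invented):
// foo.h -- what clients include; only the abstract interface is visible.
class Foo
{
public:
    virtual ~Foo() {}
    virtual void run() = 0;
};

Foo * createDefaultFoo();   // factory keeps the concrete type out of foo.h

// foo_internal.h -- included only by foo.cpp and by the unit tests,
// so tests can still target each concrete implementation directly.
class FastFoo : public Foo
{
public:
    virtual void run() { /* ... */ }
};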
We hit an issue with a semi-old Sun C++ compiler and the visibility of nested classes, behavior that later changed in the standard. This is not a reason to avoid nested classes, of course, just something to be aware of if you plan on compiling your software on lots of platforms, including old compilers.
Well, if you use pointers to your workhorse classes in your interface class and don't expose them as parameters or return types in your interface methods, you will not need to include the definitions of those workhorses in your interface header file (you just forward-declare them instead). That way, users of your interface will not need to know about the classes in the background.
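A minimal sketch of that forward-declaration arrangement (class and method names are invented for illustration):
// VideoPlayer.h -- the only header clients need to include.
class VideoDecoder;   // forward declarations are enough, because only
class VideoEncoder;   // pointers to the workhorses appear below

class VideoPlayer
{
public:
    VideoPlayer();
    ~VideoPlayer();
    void play();
    void record();
private:
    VideoDecoder * m_decoder;   // full definitions are included only
    VideoEncoder * m_encoder;   // by VideoPlayer.cpp
};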
You definitely don't need to nest classes for this. In fact, separate class files will actually make your code a lot more readable and easier to manage as your project grows. It will also help you later on if you need to subclass (say, for different content/codec types).
Here's more information on the PIMPL pattern (section 3.1.1).
You should use an inner class only when you cannot implement it as a separate class using the would-be outer class' public interface. Inner classes increase the size, complexity, and responsibility of a class so they should be used sparingly.
Your encoder/decoder class sounds like it better fits the Strategy pattern.
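For what that might look like (hypothetical names; a sketch of the pattern rather than the asker's actual classes):
// Abstract strategy: the player only knows this interface.
class Codec
{
public:
    virtual ~Codec() {}
    virtual void decodeFrame() = 0;
    virtual void encodeFrame() = 0;
};

// One concrete strategy; further codecs would be added alongside it.
class TheoraCodec : public Codec
{
public:
    virtual void decodeFrame() { /* codec-specific decoding */ }
    virtual void encodeFrame() { /* codec-specific encoding */ }
};

// The public-facing player delegates the heavy lifting to its strategy.
class Player
{
public:
    explicit Player(Codec * codec) : m_codec(codec) {}
    void play()   { m_codec->decodeFrame(); }
    void record() { m_codec->encodeFrame(); }
private:
    Codec * m_codec;
};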
One reason to avoid nested classes is if you ever intend to wrap the code with swig (http://www.swig.org) for use with other languages. Swig currently has problems with nested classes, so interfacing with libraries that expose any nested classes becomes a real pain.
Another thing to keep in mind is whether you ever envision different implementations of your work functions (such as decoding and encoding). In that case, you would definitely want an abstract base class with different concrete classes which implement the functions. It would not really be appropriate to nest a separate subclass for each type of implementation.