The balance of single responsibility/unit testability and practicality

I'm still confused about unit testing. Suppose I have something as trivial as this:
class x {
    zzz someMethod(some input...) {
        BufferedImage image = getter.getImageFromFile(...);
        // determine resize mode:
        int width = image.getWidth();
        int height = image.getHeight();
        Scalr.Mode resizeMode = (width > height) ? Scalr.Mode.FIT_TO_WIDTH : Scalr.Mode.FIT_TO_HEIGHT;
        return ScalrWrapper.resize(image, resizeMode);
    }
}
Going by the rules, Scalr.Mode resizeMode = should probably be in a separate class for better unit testability of the aforementioned method, like so:
class xxx {
    mode getResizeMode(int width, int height)
    {
        return (width > height) ? Scalr.Mode.FIT_TO_WIDTH : Scalr.Mode.FIT_TO_HEIGHT;
    }
}

class x {
    zzz someMethod(some input...) {
        BufferedImage image = getter.getImageFromFile(...);
        // determine resize mode:
        int width = image.getWidth();
        int height = image.getHeight();
        Scalr.Mode resizeMode = xxx.getResizeMode(width, height);
        return ScalrWrapper.resize(image, resizeMode);
    }
}
But it looks like such overkill... I'm not sure which one is better, but I guess this way is. Suppose I go this route; would it be even better to do it this way?
class xxx {
    mode getResizeMode(Image image)
    {
        return (image.getWidth() > image.getHeight()) ? Scalr.Mode.FIT_TO_WIDTH : Scalr.Mode.FIT_TO_HEIGHT;
    }
}

class x {
    zzz someMethod(some input...) {
        BufferedImage image = getter.getImageFromFile(...);
        // determine resize mode:
        Scalr.Mode resizeMode = xxx.getResizeMode(image);
        return ScalrWrapper.resize(image, resizeMode);
    }
}
From what I understand, the correct way is the one where getResizeMode accepts integers, as it is decoupled from the type of data whose properties are width and height. However, to me personally, the use of getResizeMode(BufferedImage) actually justifies the creation of a separate class better, as more work is removed from the main method. And since I am not going to be using getResizeMode for any sort of data other than BufferedImage in my application anyway, there is no problem with reusability. Also, I don't think I should be writing getResizeMode(int, int) simply for reusability if I see no need for it, per the YAGNI principle.
So my question is: would getResizeMode(BufferedImage) be a good approach according to OOD in the real world? I understand it's textbook good OOD, but I have been led to believe that 100% textbook OOD is impractical in the real world. So, as I am trying to learn OOD, I just want to know which path I should follow.
...Or maybe I should just leave everything in one method, like in the very first code snippet?

I don't think the resize mode calculation influences testability a lot.
As to Single Responsibility:
"A class should have only one reason to change" (https://en.wikipedia.org/wiki/Single_responsibility_principle).
Do you think the resize mode calculation is going to change?
If not, then just put it in the class where this mode is needed.
This won't add any reasons to change for that class.
If the calculation is likely to change (and/or may have several versions),
then move it to a separate class (make it a strategy).
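If you do go the strategy route, a minimal sketch of the idea could look like this (shown in C++ since the idea is language-agnostic; all type and function names here are made up for illustration, not taken from your code):

#include <memory>

// Hypothetical image type exposing just what the strategy needs.
struct ImageInfo { int width; int height; };

enum class ResizeMode { FitToWidth, FitToHeight };

// The strategy interface: one reason to change (how the mode is chosen).
class ResizeModePolicy {
public:
    virtual ~ResizeModePolicy() = default;
    virtual ResizeMode modeFor(const ImageInfo& image) const = 0;
};

// The current rule, as one interchangeable implementation.
class LargestDimensionPolicy : public ResizeModePolicy {
public:
    ResizeMode modeFor(const ImageInfo& image) const override {
        return image.width > image.height ? ResizeMode::FitToWidth
                                          : ResizeMode::FitToHeight;
    }
};

// The resizing code receives the policy instead of hard-coding the rule.
class Resizer {
public:
    explicit Resizer(std::unique_ptr<ResizeModePolicy> policy)
        : policy_(std::move(policy)) {}

    ResizeMode chooseMode(const ImageInfo& image) const {
        return policy_->modeFor(image);
    }

private:
    std::unique_ptr<ResizeModePolicy> policy_;
};

int main() {
    Resizer resizer(std::make_unique<LargestDimensionPolicy>());
    return resizer.chooseMode({800, 600}) == ResizeMode::FitToWidth ? 0 : 1;
}

Swapping in a different policy (or a test double) then requires no change to the resizing code.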

Achieving the Single Responsibility Principle (SRP) is not about creating a new class every time you extract a method. Moreover, the SRP depends on the context.
A module should conform to the SRP.
A class should conform to the SRP.
A method should conform to the SRP.
The message from Uncle Bob is: Extract till you Drop. He goes on to say:
Perhaps you think this is taking things too far. I used to think so too. But after programming for over 40+ years, I’m beginning to come to the conclusion that this level of extraction is not taking things too far at all.
When it comes to the decision to create new classes, keep the metric of high cohesion in mind. Cohesion is the degree to which the elements of a module belong together. If all methods work in one specific context and on the same set of variables, they belong in one class.
Back to your case: I would extract all the methods and put them in one class. And this one class is also nicely testable.

A little bit late to the party, but here's my 2c.
To my mind, class x is not adhering to the SRP for a different reason.
It's currently responsible for
Getting an image from a file (getter.getImageFromFile)
Resizing that image
TL;DR
The TL;DR on this is that both of your approaches are fine and both do in fact stick - with varying degrees of stickiness - to the SRP. However if you want to adhere very tightly to the SRP (which tends to lead to very testable code), you could split this into three classes first:
Orchestrator
class imageResizeService
{
    ImageGetter _getter;
    ImageResizer _resizer;

    zzz ResizeImage(imageName)
    {
        image = _getter.GetImage(imageName);
        resizedImage = _resizer.ResizeImage(image);
        return resizedImage;
    }
}
This class has a single responsibility; namely, given an image name, return a resized version of it based on some criteria.
To do so, it orchestrates two dependencies. But it only has a single reason to change, which is that the process used to get and resize an image, in general, has changed.
You can easily unit test this by mocking the getter and resizer and testing that they are called in order, that the resizer is called with the data given by the getter, and that the final return value equals that returned by the resizer, and so on (i.e. "White Box" testing)
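For illustration, a white-box test of a C++-flavoured version of that orchestrator using Google Mock might look roughly like the following; the interfaces and names are assumptions for the sketch, not part of the pseudocode above:

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

struct Image { int id = 0; };  // stand-in for BufferedImage

// Hypothetical interfaces the orchestrator depends on.
class ImageGetter {
public:
    virtual ~ImageGetter() = default;
    virtual Image GetImage(const std::string& name) = 0;
};

class ImageResizer {
public:
    virtual ~ImageResizer() = default;
    virtual Image ResizeImage(const Image& image) = 0;
};

class ImageResizeService {
public:
    ImageResizeService(ImageGetter& getter, ImageResizer& resizer)
        : getter_(getter), resizer_(resizer) {}
    Image ResizeImage(const std::string& name) {
        return resizer_.ResizeImage(getter_.GetImage(name));
    }
private:
    ImageGetter& getter_;
    ImageResizer& resizer_;
};

class MockGetter : public ImageGetter {
public:
    MOCK_METHOD(Image, GetImage, (const std::string&), (override));
};

class MockResizer : public ImageResizer {
public:
    MOCK_METHOD(Image, ResizeImage, (const Image&), (override));
};

TEST(ImageResizeServiceTest, GetsThenResizesAndReturnsResizerResult) {
    using ::testing::Field;
    using ::testing::InSequence;
    using ::testing::Return;

    MockGetter getter;
    MockResizer resizer;
    const Image loaded{1}, resized{2};

    InSequence order;  // getter must be called before resizer
    EXPECT_CALL(getter, GetImage("cat.png")).WillOnce(Return(loaded));
    EXPECT_CALL(resizer, ResizeImage(Field(&Image::id, 1))).WillOnce(Return(resized));

    ImageResizeService service(getter, resizer);
    EXPECT_EQ(service.ResizeImage("cat.png").id, 2);
}

The test never touches a file or a real resizing library; it only verifies the orchestration.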
ImageGetter
class ImageGetter
{
    BufferedImage GetImage(imageName)
    {
        image = io.LoadFromDisk(imageName) or explode;
        return image;
    }
}
Again, we have a single responsibility (load an image from disk, and return it).
The only reason to change this class would be if the mechanics of loading the image were to change - e.g. you are loading from a Database, not a Disk.
An interesting note here is that this class is ripe for further generalisation - for example to be able to compose it using a BufferedImageBuilder and a RawImageDataGetter abstraction which could have multiple implementations for Disk, Database, Http, etc. But that's YAGNI right now and a conversation for another day :)
Note on testability
In terms of unit testing this, you may run into a small problem, namely that you can't quite "unit test" it - unless your framework has a mock for the file system. If it doesn't, you can either further abstract the loading of the raw data (as per the previous paragraph) or accept it and just perform an integration test off a known good file. Both approaches are perfectly valid and you should not worry about which you choose - whatever is easier for you.
ImageResizer
class ImageResizer
{
    zzz ResizeImage(image)
    {
        int width = image.getWidth();
        int height = image.getHeight();
        Scalr.Mode resizeMode = getResizeMode(width, height);
        return ScalrWrapper.resize(image, resizeMode);
    }

    private mode getResizeMode(width, height)
    {
        return (width > height) ? Scalr.Mode.FIT_TO_WIDTH : Scalr.Mode.FIT_TO_HEIGHT;
    }
}
This class also has but a single job, to resize an image.
The question of whether or not the getResizeMode method - currently just a private method to keep the code clean - should be a separate responsibility has to be answered in the context of whether or not that operation is somehow independent of the image resizing.
Even if it's not, then the SRP is still being followed, because it's part of the single responsibility "Resize an Image".
This is also really easy to test, and because it doesn't even cross any boundaries (you can create and supply the sole dependency - the image - at test runtime) you probably won't even need mocks.
Personally I would extract it to a separate class, just so that I could, in isolation, verify that given a width larger than a height, I was returned a Scalr.Mode.FIT_TO_WIDTH and vice-versa; it would also mean I could adhere to the Open Closed Principle whereby new scaling modes could be introduced without having to modify the ImageResizer class.
But really
The answer here has to be that it depends; for example, if you have a simple way to verify that, given a width of 100 and a height of 99, the resized image is indeed scaled to "Fit to Width", then you really don't need to.
That being said I suspect you'll have an easier time testing this if you do extract that to a separate method.
Just bear in mind that if you're using a decent IDE with good refactoring tools, that should really not take you more than a couple of keystrokes, so don't worry about the overhead.

Related

c++ best way to realise global switches/flags to control program behaviour without tying the classes to a common point

Let me elaborate on the title:
I want to implement a system that would allow me to enable/disable/modify the general behavior of my program. Here are some examples:
I could switch logging off and on
I could change if my graphing program should use floating or pixel coordinates
I could change if my calculations should be based upon some method or some other method
I could enable/disable certain aspects, like maybe an extension API
I could enable/disable some basic integrated profiler (if I had one)
These are some made-up examples.
Now I want to know what the most common solution for this sort of thing is.
I could imagine this working with some sort of singleton class that gets instantiated globally or in some other globally available object. Another possibility would be just constexpr or other variables floating around in a namespace, again globally.
However, doing something like that globally feels like bad practice.
Second part of the question
This might sound like I can't decide what I want, but I want a way to modify all these switches/flags or whatever they are actually called in a single location, without tying any of my classes to it. I don't know if this is possible, however.
Why don't I want to do that? Well, I like to make my classes somewhat reusable and I don't like tying classes together unless it's required by the DRY principle and/or inheritance. I basically couldn't get rid of the flags without modifying the possibly hundreds of classes that used them.
What I have tried in the past
Having it all as compiler defines. This worked reasonably well; however, I didn't like that I couldn't arrange for default settings when the flag file was gone, so that the classes themselves would still be operational and changeable (through those default values).
Having it as a class and instancing it globally (system class). Worked OK; however, I didn't like instancing anything globally. Also, same problem as above.
Instancing the system class locally and passing it to the classes on construction. This was kind of cool, since I could make multiple configuration sets. However, at the same time that kind of ruined the point, since objects that needed a flag set the same way could end up with it set differently and therefore fail to work together properly. Also, passing it on every construction was a pain.
A static class. This one worked OK for the longest time; however, there is still a problem when there are missing dependencies.
Summary
Basically, I am looking for a single "place" where I can tweak some values (bools, floats, etc.) that will change the behaviour of all classes using them, where said values either override default values or get replaced by default values if said "place" isn't defined.
If a singleton class does not work for you, maybe using a DI container would fit your third approach? It may help with the construction and make the code more testable.
There are some DI frameworks for C++, like https://github.com/google/fruit/wiki or https://github.com/boost-experimental/di, which you can use.
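Even without adopting one of those frameworks, the underlying idea is just a composition root: build the configuration once, in a single place, and hand it to the classes that need it through their constructors. A minimal sketch, with made-up names:

#include <iostream>
#include <string>

// The only type your classes depend on: plain configuration data.
struct Config {
    bool loggingEnabled = true;     // defaults live here, so a missing
    bool useFloatCoords = false;    // settings source still yields a
    bool extensionsApi = false;     // working program
};

class Logger {
public:
    explicit Logger(const Config& cfg) : enabled_(cfg.loggingEnabled) {}
    void log(const std::string& msg) { if (enabled_) std::cout << msg << "\n"; }
private:
    bool enabled_;
};

class Renderer {
public:
    explicit Renderer(const Config& cfg) : useFloatCoords_(cfg.useFloatCoords) {}
    // ... rendering code branches on useFloatCoords_ ...
private:
    bool useFloatCoords_;
};

// Composition root: the single place that knows where settings come from
// (file, environment, command line) and wires everything together.
int main() {
    Config cfg;                       // start from defaults
    // cfg = loadConfigFromFile(...); // hypothetical override step
    Logger logger(cfg);
    Renderer renderer(cfg);
    logger.log("wired up");
    // run the program with the wired-up objects...
}

A container such as Fruit or [Boost].DI essentially automates that wiring step, so the "passing it on every construction" pain is confined to one place.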
If you decide to use switches/flags, pay attention to "cyclomatic complexity".
If you do not change the skeleton of your algorithm but only its behaviour according to the objects passed as parameters, have a look at the "template method design pattern". This pattern allows you to define a generic algorithm and specify particular steps for a particular situation.
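For reference, a minimal sketch of the template method pattern, with illustrative names only:

#include <iostream>

// The skeleton of the algorithm is fixed; subclasses fill in the steps.
class GraphCalculation {
public:
    virtual ~GraphCalculation() = default;

    // The template method: never overridden, defines the overall flow.
    double run(double input) {
        double prepared = prepare(input);
        return compute(prepared);
    }

protected:
    virtual double prepare(double x) { return x; }   // default step
    virtual double compute(double x) = 0;            // mandatory step
};

class PixelBasedCalculation : public GraphCalculation {
protected:
    double compute(double x) override { return static_cast<int>(x); }
};

class FloatBasedCalculation : public GraphCalculation {
protected:
    double compute(double x) override { return x; }
};

int main() {
    PixelBasedCalculation pixel;
    FloatBasedCalculation floating;
    std::cout << pixel.run(3.7) << " " << floating.run(3.7) << "\n"; // prints "3 3.7"
}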
Here's an approach I found useful; I don't know if it's what you're looking for, but maybe it will give you some ideas.
First, I created a BehaviorFlags.h file that declares the following function:
// Returns true iff the given feature/behavior flag was specified for us to use
bool IsBehaviorFlagEnabled(const char * flagName);
The idea being that any code in any of your classes could call this function to find out if a particular behavior should be enabled or not. For example, you might put this code at the top of your ExtensionsAPI.cpp file:
#include "BehaviorFlags.h"
static const enableExtensionAPI = IsBehaviorFlagEnabled("enable_extensions_api");
[...]
void DoTheExtensionsAPIStuff()
{
if (enableExtensionsAPI == false) return;
[... otherwise do the extensions API stuff ...]
}
Note that the IsBehaviorFlagEnabled() call is only executed once at program startup, for best run-time efficiency; but you also have the option of calling IsBehaviorFlagEnabled() on every call to DoTheExtensionsAPIStuff(), if run-time efficiency is less important than being able to change your program's behavior without having to restart your program.
As far as how the IsBehaviorFlagEnabled() function itself is implemented, it looks something like this (simplified version for demonstration purposes):
bool IsBehaviorFlagEnabled(const char * flagName)
{
    // Note: a real implementation would find the user's home directory
    // using the proper API and not just rely on ~ to expand to the home-dir path
    std::string filePath = "~/MyProgram_Settings/";
    filePath += flagName;
    FILE * fpIn = fopen(filePath.c_str(), "r");  // i.e. does the file exist?
    const bool ret = (fpIn != NULL);
    if (fpIn) fclose(fpIn);
    return ret;
}
The idea being that if you want to change your program's behavior, you can do so by creating a file (or folder) in the ~/MyProgram_Settings directory with the appropriate name. E.g. if you want to enable your Extensions API, you could just do a
touch ~/MyProgram_Settings/enable_extensions_api
... and then re-start your program, and now IsBehaviorFlagEnabled("enable_extensions_api") returns true and so your Extensions API is enabled.
The benefits I see of doing it this way (as opposed to parsing a .ini file at startup or something like that) are:
There's no need to modify any "central header file" or "registry file" every time you add a new behavior-flag.
You don't have to put a ParseINIFile() function at the top of main() in order for your flags-functionality to work correctly.
You don't have to use a text editor or memorize a .ini syntax to change the program's behavior
In a pinch (e.g. no shell access) you can create/remove settings simply using the "New Folder" and "Delete" functionality of the desktop's window manager.
The settings are persistent across runs of the program (i.e. no need to specify the same command line arguments every time)
The settings are persistent across reboots of the computer
The flags can be easily modified by a script (via e.g. touch ~/MyProgram_Settings/blah or rm -f ~/MyProgram_Settings/blah) -- much easier than getting a shell script to correctly modify a .ini file
If you have code in multiple different .cpp files that needs to be controlled by the same flag-file, you can just call IsBehaviorFlagEnabled("that_file") from each of them; no need to have every call site refer to the same global boolean variable if you don't want them to.
Extra credit: If you're using a bug-tracker and therefore have bug/feature ticket numbers assigned to various issues, you can creep the elegance a little bit further by also adding a class like this one:
/** This class encapsulates a feature that can be selectively disabled/enabled by putting an
 * "enable_behavior_xxxx" or "disable_behavior_xxxx" file into the ~/MyProgram_Settings folder.
 */
class ConditionalBehavior
{
public:
    /** Constructor.
     * @param bugNumber Bug-Tracker ID number associated with this bug/feature.
     * @param defaultState If true, this behavior will be enabled by default (i.e. if no corresponding
     *                     file exists in ~/MyProgram_Settings). If false, it will be disabled by default.
     * @param switchAtVersion If specified, this feature's default-enabled state will be inverted if
     *                        GetMyProgramVersion() returns any version number greater than or equal to this.
     */
    ConditionalBehavior(int bugNumber, bool defaultState, int switchAtVersion = -1)
        : _enabled(defaultState)
    {
        if ((switchAtVersion >= 0)&&(GetMyProgramVersion() >= switchAtVersion)) _enabled = !_enabled;

        std::string fn = defaultState ? "disable" : "enable";
        fn += "_behavior_";
        fn += std::to_string(bugNumber);
        if ((IsBehaviorFlagEnabled(fn.c_str()))
          ||(IsBehaviorFlagEnabled("enable_everything")))
        {
            _enabled = !_enabled;
            printf("Note: %s Behavior #%i\n", _enabled?"Enabling":"Disabling", bugNumber);
        }
    }

    /** Returns true iff this feature should be enabled. */
    bool IsEnabled() const {return _enabled;}

private:
    bool _enabled;
};
Then, in your ExtensionsAPI.cpp file, you might have something like this:
// Extensions API feature is tracker #4321; disabled by default for now
// but you can try it out via "touch ~/MyProgram_Settings/enable_behavior_4321"
static const ConditionalBehavior _feature4321(4321, false);

// Also tracker #4222 is now enabled-by-default, but you can disable
// it manually via "touch ~/MyProgram_Settings/disable_behavior_4222"
static const ConditionalBehavior _feature4222(4222, true);

[...]

void DoTheExtensionsAPIStuff()
{
    if (_feature4321.IsEnabled() == false) return;
    [... otherwise do the extensions API stuff ...]
}
... or if you know that you are planning to make your Extensions API enabled-by-default starting with version 4500 of your program, you can set it so that Extensions API will be enabled-by-default only if GetMyProgramVersion() returns 4500 or greater:
static ConditionalBehavior _feature4321(4321, false, 4500);
[...]
... also, if you wanted to get more elaborate, the API could be extended so that IsBehaviorFlagEnabled() can optionally return a string to the caller containing the contents of the file it found (if any), so that you could use shell commands like:
echo "opengl" > ~/MyProgram_Settings/graphics_renderer
... to tell your program to use OpenGL for its 3D graphics, or etc:
// In Renderer.cpp
std::string rendererType;
if (IsBehaviorFlagEnabled("graphics_renderer", &rendererType))
{
    printf("The user wants me to use [%s] for rendering 3D graphics!\n", rendererType.c_str());
}
else printf("The user didn't specify what renderer to use.\n");

Should I write a separate method for every possible parameter value

I am very new to the concept of unit testing and I'm stuck writing my first one.
I have a method that normalizes an ID value. It should return the passed value for any positive number (even if it is a string with a number inside) and zero (0) for any other passed value.
function normalizeId($val) {
    // if $val is good positive number return $val;
    // else return 0;
}
I want to write a unit test for this function and have assertions for all possible kinds of arguments. For example:
5, -5, 0, "5", "-5", 3.14, "fff", new StdClass() etc.
Should I write a separate method in my TestCase class for each of these conditions, or have one method with all the conditions on separate lines?
I.e.
public function testNormalizeId() {
    $this->assertEquals(5, MyClass::normalizeId(5));
    $this->assertEquals(0, MyClass::normalizeId(-5));
    $this->assertEquals(0, MyClass::normalizeId("fff"));
}
or
public function testNormalizeId_IfPositiveInt_GetPositiveInt() {
    $this->assertEquals(5, MyClass::normalizeId(5));
}

public function testNormalizeId_IfNegativeInt_GetZeroInt() {
    $this->assertEquals(0, MyClass::normalizeId(-5));
}

public function testNormalizeId_IfNotIntAsString_GetZeroInt() {
    $this->assertEquals(0, MyClass::normalizeId("fff"));
}
What about best practices? I hear that the second choice is good, but I'm worried about having very many methods for very many possible parameter values. It can be a positive number, a negative number, zero, a string with a positive number inside, a string with a negative number inside, a string with a float inside, etc.
Edit
Or maybe the third approach, with a data provider?
public function testNormalizeIdProvider()
{
    return array(
        array(5, 5),
        array(-5, 0),
        array(0, 0),
        array(3.14, 0),
        array(-3.14, 0),
        array("5", 5),
        array("-5", 0),
        array("0", 0),
        array("3.14", 0),
        array("-3.14", 0),
        array("fff", 0),
        array("-fff", 0),
        array(true, 0),
        array(array(), 0),
        array(new stdClass(), 0),
    );
}
/**
 * @dataProvider testNormalizeIdProvider
 */
public function testNormalizeId($provided, $expected)
{
    $this->assertEquals($expected, MyClass::normalizeId($provided));
}
I'm not very knowledgeable about PHP nor the unit testing frameworks that you can use therein, but in the general sphere of Unit Testing I'd recommend the second approach for these reasons
Gives a specific test-case failure for a particular type of input, rather than having to trawl through the actual assert failure message to figure out which one failed.
Makes it much easier to parametrize these tests if you decide that you need to perform tests on a specific type of conversion with more than one input (e.g if you decided to have a text file containing 1,000 random strings and wanted to load these up in a test driver and run the test case for converting strings for each entry by way of functional or acceptance testing later on)
Makes it easier to change out the individual test cases for when you need some special logic to setup
Makes it easier to spot when you've missed a type of conversion because the method names read off easier against a checklist :)
(Dubious) Will maybe make it easier to spot where your "god class" might be in need of internal refactoring to use separate sub-classes to perform specific types of conversions (not saying your approach is wrong but you might find the logic for one type of conversion very nasty; when you review your 20 or 30 individual test cases that could provide the impetus to bite the bullet and develop more specialized converter classes)
Hope that helps.
Use the data provider, as you discovered yourself. There is no benefit in duplicating the exact testcase in multiple methods with only having parameters and expectations change.
Personally, I really do start with the tests all in one method for such simple cases. I'd start with a simple good case, and then gradually add more cases. I may not feel the need to change this into a data provider instantly, because it won't pay off instantly - but on the other hand things can change, and this test structure can be a short-term solution that needs refactoring.
So whenever you observe yourself adding more lines of test into such a multi test case method, stop and make it using a data provider instead.

How do you control a player character in Bullet Physics?

I am not sure how you are supposed to control a player character in Bullet. The method I read about was to use the provided btKinematicCharacterController. I also saw approaches that use btDynamicCharacterController from the demos. However, the manual states that the kinematic controller has several outstanding issues. Is this still the preferred path? If so, are there any tutorials or documentation for it? All I found are snippets of code from the demo, and usage of the controllers with Ogre, which I do not use.
If this is not the path that should be trodden, then please point me to the correct solution. I am new to Bullet and would like a straightforward, easy solution. What I currently have is hacked-together bits of a btKinematicCharacterController.
This is the code I used to set up the controller:
playerShape = new btCapsuleShape(0.25, 1);
ghostObject= new btPairCachingGhostObject();
ghostObject->setWorldTransform(btTransform(btQuaternion(0,0,0,1),btVector3(0,20,0)));
physics.getWorld()->getPairCache()->setInternalGhostPairCallback(new btGhostPairCallback());
ghostObject->setCollisionShape(playerShape);
ghostObject->setCollisionFlags(btCollisionObject::CF_CHARACTER_OBJECT);
controller = new btKinematicCharacterController(ghostObject,playerShape,0.5);
physics.getWorld()->addCollisionObject(ghostObject,btBroadphaseProxy::CharacterFilter, btBroadphaseProxy::StaticFilter|btBroadphaseProxy::DefaultFilter);
physics.getWorld()->addAction(controller);
This is the code I use to access the controller's position:
trans = controller->getGhostObject()->getWorldTransform();
camPosition.z = trans.getOrigin().z();
camPosition.y = trans.getOrigin().y()+0.5;
camPosition.x = trans.getOrigin().x();
The way I control it is through setWalkDirection() and jump() (if canJump() is true).
The issue right now is that the character spazzes out a little, then drops through the static floor. Clearly this is not intended. Is this due to the lack of a rigid body? How does one integrate that?
Actually, now it just falls as it should, but then slowly sinks through the floor.
I have moved this line to be right after the dynamic world is created
physics.getWorld()->getPairCache()->setInternalGhostPairCallback(new btGhostPairCallback());
It is now this:
broadphase->getOverlappingPairCache()->setInternalGhostPairCallback(new btGhostPairCallback());
I am also using a .bullet file imported from blender, if that is relevant.
The issue was with the bullet file, which has since been fixed (the collision boxes weren't working). However, I still experience jitteriness, an occasional inability to step up, instant step-down from too high a height, and other issues.
My answer to this question here tells you what worked well for me and apparently also for the person who asked.
Avoid ground collision with Bullet
The character controller implementations in Bullet are very "basic", unfortunately.
To get a good character controller, you'll need to invest a fair amount of work yourself.

How to test asynchronous code

I've written my own access layer to a game engine. There is a GameLoop which gets called every frame which lets me process my own code. I'm able to do specific things and to check if these things happened. In a very basic way it could look like this:
void cycle()
{
    //set a specific value
    Engine::setText("Hello World");

    //read the value
    std::string text = Engine::getText();
}
I want to test if my Engine-layer is working by writing automated tests. I have some experience in using the Boost Unittest Framework for simple comparison tests like this.
The problem is that some things I want the engine to do are only processed after the call to cycle(). So calling Engine::getText() directly after Engine::setText(...) would return an empty string. If I waited until the next call to cycle(), the right value would be returned.
I am now wondering how I should write my tests if it is not possible to process them in the same cycle. Are there any best practices? Is it possible to use the "traditional testing" approach of the Boost Unittest Framework in such an environment? Are there perhaps other frameworks aimed at such a specialised case?
I'm using C++ for everything here, but I could imagine that there are answers unrelated to the programming language.
UPDATE:
It is not possible to access the Engine outside of cycle()
In your example above, std::string text = Engine::getText(); is the code you want to remember from one cycle but execute in the next. You can save it for later execution; for example, using C++11 you could wrap the check in a lambda, a simple function specified inline.
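A minimal sketch of that idea, assuming a simple queue of deferred checks (the queue is an assumption; Engine is the layer from your question):

#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Checks queued in one cycle, executed at the start of the next one.
static std::vector<std::function<void()>> pendingChecks;

void cycle()
{
    // run checks scheduled during the previous cycle
    for (auto& check : pendingChecks) check();
    pendingChecks.clear();

    // act now...
    Engine::setText("Hello World");

    // ...verify next cycle, once the engine has processed the call
    pendingChecks.push_back([] {
        assert(Engine::getText() == "Hello World");
    });
}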
You have two options:
If the library you have can be used synchronously, or offers a C++11-futures-like facility (something that can indicate the readiness of the result), then in your test case you can do something like the following:
void testcycle()
{
    //set a specific value
    Engine::setText("Hello World");

    while (!Engine::isResultReady());

    //read the value
    assert(Engine::getText() == "WHATEVERVALUEYOUEXPECT");
}
If you don't have the above, the best you can do is use a timeout (this is not a good option, though, because you may get spurious failures):
void testcycle()
{
    using namespace std::chrono;   // plus <thread> and <cassert>

    //set a specific value
    Engine::setText("Hello World");

    const auto start = steady_clock::now();
    while (Engine::getText() != "WHATEVERVALUEYOUEXPECT") {
        std::this_thread::sleep_for(milliseconds(1));
        if (steady_clock::now() - start > seconds(1)) // you can put whatever max time
            assert(0);
    }
}

Displaying polymorphic classes

I have an existing app with a command-line interface that I'm adding a GUI to. One situation that often comes up is that I have a list of objects that inherit from one class, and need to be displayed in a list, but each subclass has a slightly different way of being displayed.
To avoid having giant switch statements everywhere that use reflection/RTTI to do the displaying, each class knows how to return its own summary string, which then gets displayed in the list:
int position = 0;
for (vector<DisplayableObject>::const_iterator iDisp = listToDisplay.begin(); iDisp != listToDisplay.end(); ++iDisp)
    cout << ++position << ". " << iDisp->GetSummary();
Similar functions are there to display different information in different contexts. This was all fine and good until we needed to add a GUI. A string is no longer sufficient - I need to create graphical controls.
I don't want to have to modify every single class to be able to display it in a GUI - especially since there is at least one more GUI platform we will want to move this to.
Is there some kind of technique I can use to separate this GUI code out of the data objects without resorting to RTTI and switch statements? It would be nice to be able to take out the GetSummary functions as well.
Ideally I'd be able to have a hierarchy of display classes that could take a data class and display it based on the runtime type instead of the compile-time type:
shared_ptr<Displayer> displayer = make_shared<ConsoleDisplayer>();
// or make_shared<GUIDisplayer>()
for (vector<DisplayableObject>::const_iterator iDisp = listToDisplay.begin(); iDisp != listToDisplay.end(); ++iDisp)
    displayer->Display(*iDisp);
I don't think this will solve your problem of not needing to write the code, but you should be able to abstract the GUI logic from the data objects.
Look at the Visitor pattern (http://en.wikipedia.org/wiki/Visitor_pattern); it will allow you to add code to an existing object without changing the object itself. You can also change the visitor based on the platform.
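As a rough sketch of how that could look for this case (class names here are purely illustrative; your real hierarchy will differ):

#include <iostream>
#include <memory>
#include <vector>

class Circle;
class Square;

// One visit overload per concrete data class.
class Displayer {
public:
    virtual ~Displayer() = default;
    virtual void visit(const Circle& c) = 0;
    virtual void visit(const Square& s) = 0;
};

class DisplayableObject {
public:
    virtual ~DisplayableObject() = default;
    virtual void accept(Displayer& d) const = 0;  // double-dispatch hook
};

class Circle : public DisplayableObject {
public:
    void accept(Displayer& d) const override { d.visit(*this); }
    double radius = 1.0;
};

class Square : public DisplayableObject {
public:
    void accept(Displayer& d) const override { d.visit(*this); }
    double side = 2.0;
};

// Console rendering lives here; a GUIDisplayer would build widgets instead.
class ConsoleDisplayer : public Displayer {
public:
    void visit(const Circle& c) override { std::cout << "Circle r=" << c.radius << "\n"; }
    void visit(const Square& s) override { std::cout << "Square side=" << s.side << "\n"; }
};

int main() {
    std::vector<std::unique_ptr<DisplayableObject>> listToDisplay;
    listToDisplay.push_back(std::make_unique<Circle>());
    listToDisplay.push_back(std::make_unique<Square>());

    ConsoleDisplayer displayer;
    for (const auto& obj : listToDisplay) obj->accept(displayer);
}

The trade-off is that adding a new data subclass means adding a visit overload to every displayer, which is usually acceptable when the set of data classes is more stable than the set of display targets.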