I've built myself a basic EEPROM burner using a Teensy++ 2.0 as my PC bridge, and it's working great, but as I look to expand its compatibility, my code is getting rather hacky. I'm looking for advice on a proper design for making this code expandable. I've taken a class in software design patterns, but it was a while ago, and I'm currently drawing a blank. Basically, here's the use case:
I have several methods, such as ReadByte(), WriteByte(), ProgramByte() (for flash ROMs that require a multi-byte write sequence in order to program), EraseChip(), etc., so basically I have an EEPROM pure virtual base class that gets implemented by concrete classes for each chip type I want to support. The tricky part is determining which chip type object to generate. I'm currently using a pseudo-terminal front-end on the Teensy++ serial input (a basic command-line interface with parameters) to send options like the chip type to the Teensy++. The question is: is there a design pattern (in C/C++), something like a Factory Pattern, that would take a string input of the chip type (because that's what I'm getting from the user) and return an EEPROM object of the correct derived type, without my having to maintain some big switch statement or something ugly like that, where I'd have to add the new chip to a list every time I create a new chip derived class? So, something like:
public const EEPROM & GetEEPROM(const std::string & id)
and if I pass it the string "am29f032b" it returns a reference to an AM29F032B object, or if I pass it the string "sst39sf040" it returns a reference to an SST39SF040 object, which I could then call the previously mentioned functions on, and it would work for the specified chip.
This code will be run on an AVR microcontroller, so I can't have anything with huge OOP overhead, but the particular microcontroller I'm using does have a relatively large amount of program flash and work RAM. So it's not like I'm trying to operate in 2 KB, but I do have to keep limited resources in mind.
What you're looking for is a Pluggable Factory. There's a good description here. It's attributed to John Vlissides (one of the Gang of Four) and takes the Abstract Factory pattern a few steps further. This pattern also happens to be the architectural foundation of COM.
The usual way of implementing one in C++ is to maintain a static registry of abstract factories. With judicious use of a few templates and static initialisers, you can wrap the whole lot up in a few lines of boilerplate which you include in each concrete product (e.g. chip type).
The use of static initialisers allows a complete decoupling of concrete products from both the registry and the code wanting to create products, and has the possibility of implementing each as a plug-in.
You could have a singleton factory manager that keeps a map of string->factory object. Then each factory class would have a global instance that registers itself with the manager on startup.
I try to avoid globals in general, and singletons in particular, but any other approach would require some form of explicit list which you're trying to avoid. You will have to be careful with timing issues (you can't really assume anything about the order in which the various factories will be created).
Related
Following my reading of the article Programmers Are People Too by Ken Arnold, I have been trying to implement the idea of progressive disclosure in a minimal C++ API, to understand how it could be done at a larger scale.
Progressive disclosure refers to the idea of "splitting" an API into categories that are disclosed to the user of the API only upon request. For example, an API can be split into two categories: a base category (accessible to the user by default) for methods which are often needed and easy to use, and an extended category for expert-level services.
I have found only one example on the web of such an implementation: the db4o library (in Java), but I do not really understand their strategy. For example, if we take a look at ObjectServer, it is declared as an interface, just like its extended class ExtObjectServer. Then an implementing ObjectServerImpl class, inheriting from both these interfaces is defined and all methods from both interfaces are implemented there.
This supposedly allows code such as:
public void test() throws IOException {
    final String user = "hohohi";
    final String password = "hohoho";

    ObjectServer server = clientServerFixture().server();
    server.grantAccess(user, password);

    ObjectContainer con = openClient(user, password);
    Assert.isNotNull(con);
    con.close();

    server.ext().revokeAccess(user); // How does this limit the scope to
                                     // expert-level methods only, since it
                                     // inherits from ObjectServer?
    // ...
}
My knowledge of Java is not that good, but it seems my misunderstanding of how this works is at a higher level.
Thanks for your help!
Java and C++ are both statically typed, so what you can do with an object depends not so much on its actual dynamic type, but on the type through which you're accessing it.
In the example you've shown, you'll notice that the variable server is of type ObjectServer. This means that when going through server, you can only access ObjectServer methods. Even if the object happens to be of a type which has other methods (which is the case here, with its ObjectServerImpl type), you have no way of directly accessing methods other than the ObjectServer ones.
To access other methods, you need to get hold of the object through a different type. This could be done with a cast, or with an explicit accessor such as your ext(): a.ext() returns a, but as a different type (ExtObjectServer), giving you access to different methods of a.
Your question also asks how server.ext() is limited to expert methods when ExtObjectServer extends ObjectServer. The answer is: it is not, and that is correct. It should not be limited like this. The goal is not to provide only the expert functions. If that were the case, then client code which needs both normal and expert functions would have to hold two references to the object, just differently typed. There's no advantage to be gained from that.
The goal of progressive disclosure is to hide the expert stuff until it's explicitly requested. Once you ask for it, you've already seen the basic stuff, so why hide it from you?
I am migrating a tile based 2D game to C++, because I am really not a fan of Java (some features are nice, but I just can't get used to it). I am using TMX tiled maps. This question is concerning how to translate the object definitions into actual game entities. In Java, I used reflection to allocate an object of the specified type (given that it derived from the basic game entity).
This worked fine, but this feature is not available in C++ (I understand why, and I'm not complaining. I find reflection messy, and I was hesitant to use it in Java, haha). I was just wondering what the best way was to translate this data. My idea was a base class from which all entities could derive from (this seems pretty standard), then have the loader allocate the derived types based on the 'type' value from the TMX map. I have thought of two ways to do this.
A giant switch-case block. Lengthy and disgusting. I'm doubtful that anyone would suggest this (but it is the obvious option).
Use a std::map, which would map arbitrary type names to a function to allocate said classes corresponding to said type names.
Lastly, I had thought of making entities of one base class, and using scripting for different entity types. The scripts themselves would register their entity type with the system, although the game would need to load said entity type scripts upon loading (this could be done via one main entity type declaration script, which would bring the number of edits per entity down to 2: entity creation, and entity registration).
While option two looks pretty good, I don't like having to change three pieces of code for each type (defining the entity class, defining an allocate function, and adding the function to the std::map). Option three sounds great except for two things in my mind: I'm afraid of the speed of purely script-driven entities, and I know that adding scripting to my engine is going to be a big project in itself (adding all the helper functions for interfacing with the library will be interesting).
Does anyone know of a better solution? Maybe not better, but just cleaner. With less code edits per entity type.
You can reduce the number of code changes in solution two if you use self-registration with a factory. The drawback is that the entities know about this factory (self-registration), and the factory has to be a global (e.g. a singleton) instance. If this is no problem for you, this pattern can be very nice. Each new type requires only compilation and linking of one new file.
You can implement self registration like this:
// foo.cpp
namespace
{
    bool dummy = FactoryInstance().Register("FooKey", FooCreator);
}
Abstract Factory, Template Style, by Jim Hyslop and Herb Sutter
I'm creating a C++ object serialization library. This is more for self-learning and enhancement, and I don't want to use an off-the-shelf library like Boost or Google protocol buffers.
Please share your experience or comments on good ways to go about it (like creating some encoding with tag-value etc).
I would like to start by supporting PODs, followed by support for non-linear data structures.
Thanks
PS: HNY2012
If you need serialization for inter-process communication, then I suggest using an interface definition language (IDL or ASN.1) for defining interfaces.
That way it will be easier to add support for other languages (besides C++), and also easier to implement a code/stub generator.
I have been working on something similar for the last few months. I couldn't use Boost because the task was to serialize a bunch of existing classes (huge existing codebase) and it was inappropriate to have the classes inherit from the interface which had the serialize() virtual function (we did not want multiple inheritance).
The approach taken had the following salient features:
Create a helper class for each existing class, designated with the task of serializing that particular class, and make the helper class a friend of the class being serialized. This avoids introduction of inheritance in the class being serialized, and also allows the helper class access to private variables.
Have each of the helper classes (let's call them 'serializers') register themselves into a global map. Each serializer class implements a clone() virtual function ('prototype' pattern), which allows one to retrieve a pointer to a serializer, given the name of the class, from this map. The name is obtained by using compiler-specific RTTI information. The registration into the global map is taken care of by instantiating static pointers and 'new'ing them, since static variables get created before the program starts.
A special stream object was created (derived from std::fstream), that contained template functions to serialize non-pointer, pointer, and STL data types. The stream object could only be opened in read-only or write-only modes (by design), so the same serialize() function could be used to either read from the file or write into the file, depending on the mode in which the stream was opened. Thus, there is no chance of any mismatch in the order of reading versus writing of the class members.
For every object being saved or restored, a unique tag (integer) was created based on the address of the variable and stored in a map. If the same address occurred again, only the tag was saved, not the deep-copied object itself. Thus, each object was deep copied only once into the file.
A page on the web captures some of these ideas shared above: http://www.cs.sjsu.edu/~pearce/modules/lectures/cpp/Serialization.htm. Hope that helps.
I wrote an article some years ago. The code and tools may be obsolete, but the concepts remain the same.
Maybe this can help you.
I was looking into using the Factory Pattern to implement the style and rendering in a little GUI toolkit of mine, and then this crazy idea struck me: why not use the Factory Pattern for all widgets?
Right now I'm thinking something in style of:
Widget label = Widget::create("label", "Hello, World");
Widget byebtn = Widget::create("button", "Good Bye!");
byebtn.onEvent("click", &goodByeHandler);
New widget types would be registered with a function like Widget::register(std::string ref, factfunc(*fact))
What would you say are the pros and cons of this method?
I would bet that there are almost no performance issues in using a std::map keyed by std::string, as it is only used when setting things up.
And yes, forgive me for being damaged by JavaScript and any possible syntax errors in my C++ as it was quite a while since I used that language.
Summary of pros and cons
Pros
It would be easy to load new widgets dynamically from a file
Widgets could easily be replaced on run-time (it could also assist with unit testing)
Appeal to Dynamic People
Cons (and possible solutions)
Not compile-time checked (could write my own build-time check; JS/Python/you name it isn't compile-time checked either, and I believe those developers do quite fine without it)
Perhaps less performant (widgets are not created all the time; the string maps would only be used when creating a widget and registering event handlers)
Up/downcasting hell (provide the usual class instantiation where needed)
Possibly makes it harder to configure widgets (make widgets have the same options)
My GUI toolkit is in its early stages and lacks quite a lot on the planning side. As I'm not used to making GUI frameworks, there are probably a lot of questions/design decisions that I have not even thought of yet.
Right now I can't see that widgets would be too different in their sets of methods, but that might just be because I've not gone through the more advanced widgets in my mind yet.
Thank you all for your input!
Pros
It would be easy to read widgets from file or some other external source and create them by their names.
Cons
It would be easy to pass an incorrect type name into Widget::create, and you'll get an error only at run time.
You will have no access to widget-specific API without downcasting - i.e. one more source of run-time errors.
It might be not that easy to find all places where you create a widget of particular type.
Why not get best of both worlds by allowing widget creation both with regular constructors and with a factory?
I think this is not very intuitive. I would rather allow direct access to the classes. The question here is why you want to have it this way: is there any technical reason? Do you have a higher abstraction layer above the "real classes" that contains the widgets?
For example, you have pure virtual classes for the outside API and then an implementation-class?
But it is difficult to say whether this is any good without knowing how your library is set up; I am not a big fan of factories myself unless you really need them.
If you have this higher abstraction layer and want that factory, take a look at irrlicht. It uses pure virtual classes for the API and then "implementation classes". The factory chooses the correct implementation depending on the chosen renderer (directx, opengl, software).
But they don't identify the wanted class by a string. That's something I really wouldn't do in C++.
Just my 2 cents.
I would advise against using a string as a key, and promote a real method for each subtype.
class Factory
{
public:
    std::unique_ptr<Label> createLabel(std::string const& v);
    std::unique_ptr<Button> createButton(std::string const& v, Action const* a);
};
This has several advantages:
Compiler-checked: you could misspell a string, but you cannot get a running program with a misspelled method
More precise return type, and thus no need for down-casting (and all its woes)
More precise parameters, for not all elements require the same set of parameters
Generally, this is done to allow customization. If Factory is an abstract interface, then you can have several implementations (Windows, GTK, Mac) that create them differently under the hood.
Then, instead of setting up one creator function per type, you would switch the whole factory at once.
Note: performance-wise, virtual functions are much faster than map-lookup by string key
The more I get into writing unit tests the more often I find myself writing smaller and smaller classes. The classes are so small now that many of them have only one public method on them that is tied to an interface. The tests then go directly against that public method and are fairly small (sometimes that public method will call out to internal private methods within the class). I then use an IOC container to manage the instantiation of these lightweight classes because there are so many of them.
Is this typical of trying to do things in more of a TDD manner? I fear that I have now refactored a legacy 3,000-line class that had one method in it into something that is also difficult to maintain on the other side of the spectrum, because there are now literally about 100 different class files.
Is what I am doing going too far? I am trying to follow the single responsibility principle with this approach but I may be treading into something that is an anemic class structure where I do not have very intelligent "business objects".
This multitude of small classes would drive me nuts. With this design style it becomes really hard to figure out where the real work gets done. I am not a fan of having a ton of interfaces each with a corresponding implementation class, either. Having lots of "IWidget" and "WidgetImpl" pairings is a code smell in my book.
Breaking up a 3,000 line class into smaller pieces is great and commendable. Remember the goal, though: it's to make the code easier to read and easier to work with. If you end up with 30 classes and interfaces you've likely just created a different type of monster. Now you have a really complicated class design. It takes a lot of mental effort to keep that many classes straight in your head. And with lots of small classes you lose the very useful ability to open up a couple of key files, pick out the most important methods, and get an idea of what the heck is going on.
For what it's worth, though, I'm not really sold on test-driven design. Writing tests early, that's sensible. But reorganizing and restructuring your class design so it can be more easily unit tested? No thanks. I'll make interfaces only if they make architectural sense, not because I need to be able to mock up some objects so I can test my classes. That's putting the cart before the horse.
You might have gone a bit too far if you are asking this question. Having only one public method in a class isn't bad as such, if that class has a clear responsibility/function and encapsulates all logic concerning that function, even if most of it is in private methods.
When refactoring such legacy code, I usually try to identify the components at play at a high level that can be assigned distinct roles/responsibilities, and separate them into their own classes. I think about which functions should be which component's responsibility and move the methods into that class.
You write a class so that instances of the class maintain state. You put this state in a class because all the state in the class is related. You have functions to manage this state so that invalid permutations of state can't be set (the infamous square that has members width and height, but if width doesn't equal height it's not really a square).
If you don't have state, you don't need a class, you could just use free functions (or in Java, static functions).
So, the question isn't "should I have one function?" but rather "what state-ful entity does my class encapsulate?"
Maybe you have one function that sets all state -- and you should make it more granular, so that, e.g., instead of having void Rectangle::setWidthAndHeight( int x, int y) you should have a setWidth and a separate setHeight.
Perhaps you have a ctor that sets things up, and a single function that doesIt, whatever "it" is. Then you have a functor, and a single doIt might make sense. E.g., class Add implements Operation { Add( int howmuch); Operand doIt(Operand rhs);}
(But then you may find that you really want something like the Visitor Pattern -- a pure functor is more likely if you have purely value objects, Visitor if they're arranged in a tree and are related to each other.)
Even if these many small, single-function objects are the correct level of granularity, you may want something like a Facade pattern, to compose often-used complex operations out of the primitive ones.
There's no one answer. If you really have a bunch of functors, it's cool. If you're really just making each free function a class, it's foolish.
The real answer lies in answering the question, "what state am I managing, and how well do my classes model my problem domain?"
I'd be speculating if I gave a definite answer without looking at the code.
However it sounds like you're concerned and that is a definite flag for reviewing the code. The short answer to your question comes back to the definition of Simple Design. Minimal number of classes and methods is one of them. If you feel like you can take away some elements without losing the other desirable attributes, go ahead and collapse/inline them.
Some pointers to help you decide:
Do you have a good check for "Single Responsibility"? It's deceptively difficult to get right but is a key skill (I still don't see it like the masters do). It doesn't necessarily translate to one-method classes. A good yardstick is 5-7 public methods per class, and each class could have 0-5 collaborators. Also, to validate against the SRP, ask the question: what can drive a change into this class? If there are multiple unrelated answers (e.g. a change in the packet structure (parsing) plus a change in the packet-contents-to-action map (command dispatcher)), maybe the class needs to be split. On the other hand, if you feel that a change in the packet structure can affect four different classes, you've run off the other cliff; maybe you need to combine them into one cohesive class.
If you have trouble naming the concrete implementations, maybe you don't need the interface; e.g. XXXImpl classes implementing XXX need to be looked at. I recently learned of a naming convention where the interface describes a role and the implementation is named after the technology used to implement that role (or, failing that, after what it does), e.g. XmppAuction implements Auction (or SniperNotifier implements AuctionEventListener).
Lastly, are you finding it difficult to add, modify, or test existing code (e.g. test setup is long or painful)? Those can be signs that you need to refactor.