In a couple of recent projects I took part in, I was almost addicted to the following coding pattern (I'm not sure if there is a proper name for it, but anyway):
Let's say some object is in some determined state and we want to change this state from outside. These changes could mean any behaviour and could invoke any algorithms, but the fact is that they focus on changing the state (member state, data state, etc.) of some object.
Let's call one discrete way of changing such an object a Mutator. Mutators are (generally) applied once, and they have some internal method like apply(Target& target, ...), which instantly provokes a change in the state of the object (in fact, they're a sort of function object).
They can also easily be assembled into chains and applied one by one (Mutator m1, m2, ...); they can also derive from some basic BasicMutator with a virtual void apply(...) method.
I have introduced classes called InnerMutator and ExplicitMutator, which differ in terms of access: the first of them can also change the internal state of the object and must be declared as a friend (friend InnerMutator::access;).
In those projects my logic ended up working the following way:
Prepare available mutators, choose which to apply
Create and set the object to some determined state
foreach (mutator) mutator.apply(object);
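A minimal sketch of the idea (reconstructed from the description above; everything beyond the names already mentioned is illustrative):

#include <cstddef>
#include <vector>

class Target;  // the object whose state is being changed

// Base class as described: one discrete way of changing the target.
class BasicMutator {
public:
    virtual ~BasicMutator() {}
    virtual void apply(Target& target) = 0;
};

// Mutators assembled into a chain and applied one by one.
void applyAll(const std::vector<BasicMutator*>& chain, Target& target) {
    for (std::size_t i = 0; i < chain.size(); ++i)
        chain[i]->apply(target);
}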
Now the question.
This scheme turns out to work well and (to me) seems like an example of a non-standard but useful design pattern.
What makes me feel uncomfortable is that InnerMutator stuff. I don't think declaring a mutator a friend of every object whose state could be changed is a good idea, and I want to find an appropriate alternative.
Could this situation be solved in terms of Mutators, or could you advise some alternative pattern with the same result?
Thanks.
I hope this isn't taken offensively, but I can't help but think this design is an overly fancy solution for a rather simple problem.
Why do you need to aggregate mutators, for instance? One can merely write:
template <class T>
void mutate(T& t)
{
    t.mutate1(...);
    t.mutate2(...);
    t.mutate3(...);
}
There's your aggregate mutator, only it's a simple function template relying on static polymorphism rather than a combined list of mutators making one super-mutator.
I once did something that might have been rather similar. I made function objects which could be combined by operator && into super function objects that applied both operands (in the order from left to right) so that one could write code like:
foreach(v.begin(), v.end(), attack && burn && drain);
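A rough sketch of how such a combinator can be built (this is a reconstruction, not the original code; a real version would constrain the operator&& overload to its own function-object types so it doesn't hijack the built-in operator):

// Combines two function objects; applies the left operand, then the right.
template <typename F, typename G>
struct AndThen {
    F f;
    G g;
    AndThen(F f_, G g_) : f(f_), g(g_) {}
    template <typename T>
    void operator()(T& t) { f(t); g(t); }
};

// Builds the combined object on the fly -- no runtime dispatch involved.
template <typename F, typename G>
AndThen<F, G> operator&&(F f, G g) { return AndThen<F, G>(f, g); }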
It was really neat and seemed really clever to me at the time, and it was pretty efficient because it generated new function objects on the fly (no runtime mechanisms involved). But when my co-workers saw it, they thought I was insane. In retrospect, I was trying to cram everything into functional-style programming, which has its uses but probably shouldn't be over-relied on in C++. It would have been easy enough to write:
BOOST_FOREACH(Something& something, v)
{
    attack(something);
    burn(something);
    drain(something);
}
Granted, it is more lines of code, but it has the benefit of being centralized and easy to debug, and it doesn't introduce alien concepts to other people working with my code. I've found it much more beneficial to focus on tight logic rather than trying to forcefully reduce lines of code.
I recommend you think deeply about this and really consider whether it's worth it to continue with this mutator-based design, no matter how clever and tight it is. If other people can't quickly understand it then, unless you have the authority of the Boost authors, it's going to be hard to convince people to like it.
As for InnerMutator, I think you should avoid it like the plague. Having external mutator classes which can modify a class's internals as directly as they can here defeats a lot of the purpose of having internals at all.
Would it not be better simply to make those Mutators methods of the classes they're changing? If there's common functionality you want several classes to have, then why not make them inherit from the same base class? Doing this would deal with all the InnerMutator discomfort, I would imagine.
If you want to queue up changes you could store them as sets of function calls to the methods within the classes.
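For instance, with C++11 std::function (the Target class and its setHealth method here are hypothetical):

#include <functional>
#include <vector>

struct Target {                        // stand-in for the class being changed
    int hp = 100;
    void setHealth(int v) { hp = v; }
};

int main() {
    Target target;

    // Queue up changes as deferred calls to the target's own methods...
    std::vector<std::function<void(Target&)>> pending;
    pending.push_back([](Target& t) { t.setHealth(50); });

    // ...and apply them later, in order.
    for (auto& change : pending)
        change(target);
}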
Related
I am trying to design an architecture for an MMO game, and I can't figure out how I can store as many variables as I need in GameObjects without also having a lot of calls to send them on the wire every time I update them.
What I have now is:
void Game::ChangePosition(Vector3 newPos) {
    gameobject.ChangePosition(newPos);
    SendOnWireNEWPOSITION(gameobject.id, newPos);
}
This makes the code rubbish: hard to maintain, understand and extend. Think of a Champion example: I would have to write a lot of functions, one for each variable. And this is just the generalisation for this Champion; I might have one or two other member variables for each Champion type/"class".
It would be perfect if I could have something like OnPropertyChange from .NET. The architecture I imagine would work nicely would be something like this:
For HP: when I update it, automatically call SendFloatOnWire("HP", hp);
For Position: when I update it, automatically call SendVector3OnWire("Position", Position)
For Name: when I update it, automatically call SendSOnWire("Name", Name);
What exactly are SendFloatOnWire, SendVector3OnWire, SendSOnWire? Functions that serialize those types into a char buffer.
OR METHOD 2 (preferred), but it might be expensive:
Update HP, Position, etc. normally, and then on every network-thread tick scan all GameObject instances on the server for changed variables and send those.
How would that be implemented on a large-scale game server, and what are my options? Any useful books for such cases?
Would macros turn out to be useful? I think I was exposed to the source code of something similar, and I think it used macros.
Thank you in advance.
EDIT: I think I've found a solution, but I don't know how robust it actually is. I am going to have a go at it and see where I stand afterwards. https://developer.valvesoftware.com/wiki/Networking_Entities
On method 1:
Such an approach could be relatively "easy" to implement using maps that are accessed via getters/setters. The general idea would be something like:
#include <map>
#include <string>

class GameCharacter {
    std::map<std::string, int> myints;
    // same for doubles, floats, strings
public:
    GameCharacter() {
        myints["HP"] = 100;
        myints["FP"] = 50;
    }
    int getInt(const std::string& fld) { return myints[fld]; }
    void setInt(const std::string& fld, int val) { myints[fld] = val; sendIntOnWire(fld, val); }
};
If you prefer to keep the properties as real members of your class, you'd go for a map of pointers or pointers-to-member instead of values. At construction you'd then initialize the map with the relevant pointers. If you decide to change a member variable, you should however always go via the setter.
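A sketch of the pointer-to-member variant (keeping the HP/FP fields from the example above; sendIntOnWire is the same hypothetical hook):

#include <map>
#include <string>

class GameCharacter {
    int hp;
    int fp;
    // Field name -> pointer-to-member; the real data stays in named members.
    std::map<std::string, int GameCharacter::*> intFields;
public:
    GameCharacter() : hp(100), fp(50) {
        intFields["HP"] = &GameCharacter::hp;
        intFields["FP"] = &GameCharacter::fp;
    }
    int getInt(const std::string& fld) { return this->*intFields[fld]; }
    void setInt(const std::string& fld, int val) {
        this->*intFields[fld] = val;
        // sendIntOnWire(fld, val);  // network forwarding hook, as above
    }
};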
You could even go further and abstract your Champion by making it just a collection of properties and behaviours that are accessed via the map. This component architecture is presented by Mike McShaffry in Game Coding Complete (a must-read book for any game developer). There's a community site for the book with some source code to download; you may have a look at the actor.h and actor.cpp files. Nevertheless, I really recommend reading the full explanations in the book.
The advantage of componentization is that you could embed your network forwarding logic in the base class of all properties: this could simplify your code by an order of magnitude.
On method 2:
I think the basic idea is perfectly suitable, except that a complete analysis (or worse, transmission) of all objects would be overkill.
A nice alternative would be to have a marker that is set when a change is made and reset when the change is transmitted. If you transmit only marked objects (and perhaps only the marked properties of those), you would minimize the workload of your synchronization thread and reduce network overhead by pooling the transmission of several changes affecting the same object.
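A sketch of such a marker (all names hypothetical):

struct Vector3 { float x, y, z; };

class GameObject {
    Vector3 position;
    bool dirty;                 // set on change, cleared after transmission
public:
    GameObject() : position(), dirty(false) {}
    void setPosition(const Vector3& p) { position = p; dirty = true; }
    bool isDirty() const { return dirty; }
    void clearDirty() { dirty = false; }
};

// In the network thread's tick, only marked objects are sent:
// for each GameObject* obj in allObjects:
//     if (obj->isDirty()) { sendOnWire(*obj); obj->clearDirty(); }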
Overall conclusion I arrived at: having another call after I update the position is not that bad. It is a line of code longer, but it is better for several reasons:
It is explicit. You know exactly what's happening.
You don't slow down the code by making all kinds of hacks to get it working.
You don't use extra memory.
Methods I've tried:
Having maps for each type, as suggested by @Christophe. The major drawback was that it was error-prone: you could have had HP and Hp declared in the same map, and it added another layer of problems and frustrations, such as declaring maps for each type and then prefixing every variable access with the map name.
Using something SIMILAR to Valve's engine: it created a separate class for each networked variable you wanted. Then it used a template to wrap up the basic types you declared (int, float, bool) and also extended operators for that template. It used way too much memory and too many extra calls for basic functionality.
Using a data mapper that added pointers for each variable in the constructor and then sent them with an offset. I left this approach prematurely when I realised the code was starting to become confusing and hard to maintain.
Using a struct that is sent every time something changes, manually. This is easily done by using protobuf. Extending structs is also easy.
Every tick, generate a new struct with the data for the classes and send it. This keeps very important stuff always up to date, but eats a lot of bandwidth.
Use reflection with help from boost. It wasn't a great solution.
In the end, I went with a mix of 4 and 5, and now I am implementing it in my game. One huge advantage of protobuf is its capability of generating structs from a .proto file, while also providing serialisation for the struct. It is blazingly fast.
For those specially named variables that appear in subclasses, I have another struct. Alternatively, with help from protobuf I could have an array of properties as simple as ENUM_KEY_BYTE VALUE, where ENUM_KEY_BYTE is just a byte that references an enum of properties such as IS_FLYING, IS_UP, IS_POISONED, and VALUE is a string.
The most important thing I've learned from this is to serialize as much as possible. It is better to use more CPU on both ends than to generate more input/output.
If anyone has any questions, comment and I will do my best helping you out.
I'm studying for an exam and am trying to figure this question out. The specific question is "Inheritance and object composition both promote code reuse. (T/F)", but I believe I understand the inheritance portion of the question.
I believe inheritance promotes code reuse because similar methods can be placed in an abstract base class, such that the similar methods do not have to be identically implemented within multiple child classes. For example, if you have three kinds of shapes, and each shape's method "getName" simply returns a data member '_name', then why re-implement this method in each of the child classes when it can be implemented once in the abstract base class "Shape"?
However, my best understanding of object composition is the "has-a" relationship between objects/classes. For example, a student has a school, and a school has a number of students. This can be seen as object composition since they can't really exist without each other (a school without any students isn't exactly a school, is it? etc). But I see no way that these two objects "having" each other as a data member will promote code reuse.
Any help? Thanks!
Object composition can promote code reuse because you can delegate implementation to a different class, and include that class as a member.
Instead of putting all your code in your outermost classes' methods, you can create smaller classes with smaller scopes, and smaller methods, and reuse those classes/methods throughout your code.
class Inner
{
public:
    void DoSomething();
};

class Outer1
{
public:
    void DoSomethingBig()
    {
        // We delegate part of the call to inner, allowing us to reuse its code
        inner.DoSomething();
        // Todo: Do something else interesting here
    }
private:
    Inner inner;
};

class Outer2
{
public:
    void DoSomethingElseThatIsBig()
    {
        // We used the implementation twice in different classes,
        // without copying and pasting the code.
        // This is the most basic possible case of reuse.
        inner.DoSomething();
        // Todo: Do something else interesting here
    }
private:
    Inner inner;
};
As you mentioned in your question, this is one of the two most basic object-oriented programming relationships, called a "has-a relationship". Inheritance is the other, and is called an "is-a relationship".
You can also combine inheritance and composition in quite useful ways that will often multiply your code (and design) reuse potential. Any real-world, well-architected application will constantly combine both of these concepts to gain as much reuse as possible. You'll find out about this when you learn about design patterns.
Edit:
Per Mike's request in the comments, a less abstract example:
// Assume the Person class exists
#include <list>

class Bus
{
public:
    void Board(Person newOccupant);
    std::list<Person>& GetOccupants();
private:
    std::list<Person> occupants;
};
In this example, instead of re-implementing a linked list structure, you've delegated it to a list class. Every time you use that list class, you're re-using the code that implements the list.
In fact, since list semantics are so common, the C++ standard library gave you std::list, and you just had to reuse it.
1) The student knows about a school, but this is not really a HAS-A relationship; while you would want to keep track of what school the student attends, it would not be logical to describe the school as being part of the student.
2) More people occupy the school than just students. That's where the reuse comes in. You don't have to re-define the things that make up a school each time you describe a new type of school-attendee.
I have to agree with #Karl Knechtel -- this is a pretty poor question. As he said, it's hard to explain why, but I'll give it a shot.
The first problem is that it uses a term without defining it -- and "code reuse" means a lot of different things to different people. To some people, cutting and pasting qualifies as code reuse. As little as I like it, I have to agree with them to at least some degree. Other people define code reuse in ways that rule out cutting and pasting (classing another copy of the same code as separate code, not reuse of the same code). I can see that viewpoint too, though I tend to think their definition is intended more to serve a specific end than to be really meaningful (i.e., "code reuse" -> good, "cut-n-paste" -> bad, therefore "cut-n-paste" != "code reuse"). Unfortunately, what we're looking at here is right on the border, where you need a very specific definition of what code reuse means before you can answer the question.
The definition used by your professor is likely to depend heavily upon the degree of enthusiasm he has for OOP -- especially during the '90s (or so) when OOP was just becoming mainstream, many people chose to define it in ways that only included the cool new OOP "stuff". To achieve the nirvana of code reuse, you had to not only sign up for their OOP religion, but really believe in it! Something as mundane as composition couldn't possibly qualify -- no matter how strangely they had to twist the language for that to be true.
As a second major point, after decades of use of OOP, a few people have done some fairly careful studies of what code got reused and what didn't. Most that I've seen have reached a fairly simple conclusion: it's quite difficult (i.e., essentially impossible) to correlate coding style with reuse. Nearly any rule you attempt to make about what will or won't result in code reuse can and will be violated on a regular basis.
Third, and what I suspect tends to be foremost in many people's minds, is the fact that asking the question at all makes it sound as if this is something that can/will affect a typical coder -- that you might want to choose between composition and inheritance (for example) based on which "promotes code reuse" more, or something of that order. The reality is that (just for example) you should choose between composition and inheritance primarily based upon which more accurately models the problem you're trying to solve and which does more to help you solve it.
Though I don't have any serious studies to support the contention, I would posit that the chances of code being reused depend heavily upon a couple of factors that are rarely even considered in most studies: 1) how similar a problem somebody else needs to solve, and 2) whether they believe it will be easier to adapt your code to their problem than to write new code.
I should add that in some of the studies I've seen, there were factors found that seemed to affect code reuse. To the best of my recollection, the one that stuck out as the most important/telling was not the code itself at all, but the documentation available for that code. Being able to use the code without basically reverse-engineering it contributes a great deal toward its being reused. The second factor was simply the quality of the code -- a number of the studies were done in places/situations where they were trying to promote code reuse. In a fair number of cases, people tried to reuse quite a bit more code than they actually did, but had to give up on it simply because the code wasn't good enough -- everything from bugs to clumsy interfaces to poor portability prevented reuse.
Summary: I'll go on record as saying that code reuse has probably been the most overhyped, under-delivered promise in software engineering over at least the last couple of decades. Even at best, code reuse remains a fairly elusive goal. Trying to simplify it to the point of treating it as a true/false question based on two factors is oversimplifying the question to the point that it's not only meaningless, but utterly ridiculous. It appears to trivialize and demean nearly the entire practice of software engineering.
I have an object Car and an object Engine:
#include <string>

class Engine {
    int horsepower;
};

class Car {
    std::string make;
    Engine cars_engine;
};
A Car has an Engine; this is composition. However, I don't need to redefine Engine to put an engine in a car -- I simply say that a Car has an Engine. Thus, composition does indeed promote code reuse.
Object composition does promote code re-use. Without object composition, if I understand your definition of it properly, every class could have only primitive data members, which would be beyond awful.
Consider the classes
class Vector3
{
    double x, y, z;
    double vectorNorm;
};

class Object
{
    Vector3 position;
    Vector3 velocity;
    Vector3 acceleration;
};
Without object composition, you would be forced to have something like
class Object
{
    double positionX, positionY, positionZ, positionVectorNorm;
    double velocityX, velocityY, velocityZ, velocityVectorNorm;
    double accelerationX, accelerationY, accelerationZ, accelerationVectorNorm;
};
This is just a very simple example, but I hope you can see how even the most basic object composition promotes code reuse. Now think about what would happen if Vector3 contained 30 data members. Does this answer your question?
In a project I am working on, we have several "disposable" classes. What I mean by disposable is that they are classes where you call some methods to set up the info, and then you call what equates to a doit function. You doit once and throw the instance away. If you want to doit again, you have to create another instance of the class. The reason they're not reduced to single functions is that they must store state after they doit, so the user can get information about what happened, and it doesn't seem very clean to return a bunch of things through reference parameters. It's not a singleton, but it's not a normal class either.
Is this a bad way to do things? Is there a better design pattern for this sort of thing? Or should I just give in and make the user pass in a boatload of reference parameters to return a bunch of things through?
What you describe is not a class (state + methods to alter it), but an algorithm (map input data to output data):
result_t do_it(parameters_t);
Why do you think you need a class for that?
Sounds like your class is basically a parameter block in a thin disguise.
There's nothing wrong with that IMO, and it's certainly better than a function with so many parameters it's hard to keep track of which is which.
It can also be a good idea when there are a lot of input parameters - several setup methods can set up a few of them at a time, so that the names of the setup functions give more of a clue as to which parameter is which. Also, you can cover different ways of setting up the same parameters using alternative setter functions - either overloads or with different names. You might even use a simple state machine or flag system to ensure the correct setups are done.
However, it should really be possible to recycle your instances without having to delete and recreate. A "reset" method, perhaps.
As Konrad suggests, this is perhaps misleading. The reset method shouldn't be seen as a replacement for the constructor - it's the constructor's job to put the object into a self-consistent initialised state, not the reset method's. Objects should be self-consistent at all times.
Unless there's a reason for making cumulative-running-total-style do-it calls, the caller should never have to call reset explicitly - it should be built into the do-it call as the first step.
I still decided, on reflection, to strike that out - not so much because of Jalf's comment, but because of the hairs I had to split to argue the point ;-) - Basically, I figure I almost always have a reset method for this style of class, partly because my "tools" usually have multiple related kinds of "do it" (e.g. "insert", "search" and "delete" for a tree tool), and shared mode. The mode is just some input fields, in parameter-block terms, but that doesn't mean I want to keep re-initializing them. But just because this pattern happens a lot for me doesn't mean it should be a point of principle.
I even have a name for these things (not limited to the single-operation case) - "tool" classes. A "tree_searching_tool" will be a class that searches (but doesn't contain) a tree, for example, though in practice I'd have a "tree_tool" that implements several tree-related operations.
Basically, even parameter blocks in C should ideally provide a kind of abstraction that gives it some order beyond being just a bunch of parameters. "Tool" is a (vague) abstraction. Classes are a major means of handling abstraction in C++.
I have used a similar design and wondered about this too. A fictitious, simplified example could look like this:
FileDownloader downloader(url);
downloader.download();
downloader.result(); // get the path to the downloaded file
To make it reusable I store it in a boost::scoped_ptr:
boost::scoped_ptr<FileDownloader> downloader;
// Download first file
downloader.reset(new FileDownloader(url1));
downloader->download();
// Download second file
downloader.reset(new FileDownloader(url2));
downloader->download();
To answer your question: I think it's ok. I have not found any problems with this design.
As far as I can tell you are describing a class that represents an algorithm. You configure the algorithm, then you run the algorithm and then you get the result of the algorithm. I see nothing wrong with putting those steps together in a class if the alternative is a function that takes 7 configuration parameters and 5 output references.
This structuring of code also has the advantage that you can split your algorithm into several steps and put them in separate private member functions. You can do that without a class too, but that can lead to the sub-functions having many parameters if the algorithm has a lot of state. In a class you can conveniently represent that state through member variables.
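A minimal sketch of that configure/run/inspect shape (all names hypothetical):

#include <vector>

class PathFinder {                 // configure it, run it, read the results
public:
    void setStart(int s) { start = s; }
    void setGoal(int g)  { goal = g; }
    void run() {
        expandNodes();             // private steps share intermediate state
        reconstructPath();         // through members instead of parameters
    }
    const std::vector<int>& path() const { return result; }
private:
    void expandNodes() { /* ... */ }
    void reconstructPath() { result.push_back(start); result.push_back(goal); }
    int start = 0, goal = 0;
    std::vector<int> result;
};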
One thing you might want to look out for is that structuring your code like this could easily tempt you to use inheritance to share code among similar algorithms. If algorithm A defines a private helper function that algorithm B needs, it's easy to make that member function protected and then access that helper function by having class B derive from class A. It could also feel natural to define a third class C that contains the common code and then have A and B derive from C. As a rule of thumb, inheritance used only to share code in non-virtual methods is not the best way - it's inflexible, you end up having to take on the data members of the super class and you break the encapsulation of the super class. As a rule of thumb for that situation, prefer factoring the common code out of both classes without using inheritance. You can factor that code into a non-member function or you might factor it into a utility class that you then use without deriving from it.
YMMV - what is best depends on the specific situation. Factoring code into a common super class is the basis for the template method pattern, so when using virtual methods inheritance might be what you want.
Nothing especially wrong with the concept. You should try to set it up so that the objects in question can generally be auto-allocated vs having to be newed -- significant performance savings in most cases. And you probably shouldn't use the technique for highly performance-sensitive code unless you know your compiler generates it efficiently.
I disagree that the class you're describing "is not a normal class". It has state and it has behavior. You've pointed out that it has a relatively short lifespan, but that doesn't make it any less of a class.
Short-lived classes vs. functions with out-params:
I agree that your short-lived classes are probably a little more intuitive and easier to maintain than a function which takes many out-params (or 1 complex out-param). However, I suspect a function will perform slightly better, because you won't be taking the time to instantiate a new short-lived object. If it's a simple class, that performance difference is probably negligible. However, if you're talking about an extremely performance-intensive environment, it might be a consideration for you.
Short-lived classes: creating new vs. re-using instances:
There are plenty of examples where instances of classes are re-used: thread pools, DB-connection pools (probably darn near any software construct ending in 'pool' :). In my experience, they seem to be used when instantiating the object is an expensive operation. Your small, short-lived classes don't sound like they're expensive to instantiate, so I wouldn't bother trying to re-use them. You may find that whatever pooling mechanism you implement actually costs MORE (performance-wise) than simply instantiating new objects whenever needed.
Over time I have come to appreciate the mindset of many small functions, and I really do like it a lot, but I'm having a hard time losing my shyness about applying it to classes, especially ones with more than a handful of non-public member variables.
Every additional helper function clutters up the interface, since often the code is class specific and I can't just use some generic piece of code.
(To my limited knowledge, anyway, still a beginner, don't know every library out there, etc.)
So in extreme cases, I usually create a helper class which becomes the friend of the class that needs to be operated on, so it has access to all the nonpublic guts.
An alternative are free functions that need parameters, but even though premature optimization is evil, and I haven't actually profiled or disassembled it...
I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference, even though that should be a simple address per argument.
Is all this a matter of preference, or is there a widely used way of dealing with that kind of stuff?
I know that trying to force stuff into patterns is a kind of anti-pattern, but I am concerned about code sharing and standards, and I want to make things at least fairly painless for other people to read.
So, how do you guys deal with that?
Edit:
Some examples that motivated me to ask this question:
About the free functions:
DeadMG was confused about making free functions work...without arguments.
My issue with those functions is that, unlike member functions, free functions only know about data if you give it to them, unless global variables and the like are used.
Sometimes, however, I have a huge, complicated procedure that I want to break down for readability's and understanding's sake, but there are so many different variables used all over the place that passing all the data to free functions, which are agnostic of every bit of member data, looks simply nightmarish.
Take, for example, a snippet of a function of mine that converts data into a format that my mesh class accepts.
It would take all of those parameters to refactor this into a "finalizeMesh" function, for example. At this point it's part of a huge compute-mesh-data function, and bits of dimension info, sizes and scaling info are used all over the place, interwoven.
That's what I mean with "free functions need too many parameters sometimes".
I think it shows bad style, and is not necessarily a symptom of being irrational per se, I hope :P.
I'll try to clear things up more along the way, if necessary.
Every additional helper function clutters up the interface
A private helper function doesn't.
I usually create a helper class which becomes the friend of the class that needs to be operated on
Don't do this unless it's absolutely unavoidable. You might want to break up your class's data into smaller nested classes (or plain old structs), then pass those around between methods.
I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference
That's not premature optimization; that's a perfectly acceptable way of preventing/reducing cognitive load. You don't want functions taking more than three parameters. If there are more than three, consider packaging your data in a struct or class.
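For example, a sketch reusing the mesh example from the question (the names are hypothetical):

class Mesh;  // whatever the mesh class is

// Related arguments gathered into one named bundle...
struct MeshParams {
    int width = 0, height = 0, depth = 0;
    float scale = 1.0f;
};

// ...so the function signature stays short and readable.
void finalizeMesh(Mesh& mesh, const MeshParams& params);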
I sometimes have the same problems as you describe: increasingly large classes that need too many helper functions to be accessed in a civilized manner.
When this occurs, I try to separate the class into multiple smaller classes, if that is possible and convenient.
Scott Meyers states in Effective C++ that friend classes and functions are mostly not the best option, since the client code might do anything with the object.
Maybe you can try nested classes that deal with the internals of your object. Another option is helper functions that use the public interface of your class; put them into a namespace related to your class.
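A sketch of the namespace option (names hypothetical):

class Widget {
public:
    int value() const { return value_; }
private:
    int value_ = 0;
};

// Helpers live beside the class, not inside it, and use only its
// public interface -- so they gain no special access privileges.
namespace widget_util {
    inline bool isPositive(const Widget& w) { return w.value() > 0; }
}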
Another way to keep your classes free of cruft is to use the pimpl idiom. Hide your private implementation behind a pointer to a class that actually implements whatever it is that you're doing, and then expose a limited subset of features to whoever is the consumer of your class.
// Your public API in foo.h (note: only foo.cpp should #include foo_impl.h)
class FooImpl;  // forward declaration; the definition lives in foo_impl.h

class Foo {
public:
    bool func(int i);   // defined in foo.cpp, where FooImpl is complete
private:
    FooImpl* impl_;
};
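And a sketch of the matching foo.cpp, assuming FooImpl is defined in foo_impl.h:

// foo.cpp -- the only translation unit that sees FooImpl's definition
#include "foo.h"
#include "foo_impl.h"

// (Foo's constructor and destructor also belong here, where FooImpl
// is a complete type.)
bool Foo::func(int i) {
    return impl_->func(i);   // delegate to the hidden implementation
}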
There are many ways to implement this. The Boost pimpl template in the Vault is pretty good. Using smart pointers is another useful way of handling this, too.
http://www.boost.org/doc/libs/1_46_1/libs/smart_ptr/sp_techniques.html#pimpl
An alternative are free functions that need parameters, but even though premature optimization is evil, and I haven't actually profiled or disassembled it... I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference, even though that should be a simple address per argument.
So, let me get this entirely straight. You haven't profiled or disassembled. But somehow, you intend on ... making functions work ... without arguments? How, exactly, do you propose to program without using function arguments? Member functions are no more or less efficient than free functions.
More importantly, you come up with lots of logical reasons why you know you're wrong. I think the problem here is in your head, which possibly stems from you being completely irrational, and nothing that any answer from any of us can help you with.
Generic algorithms that take parameters are the basis of modern object-oriented programming - that's the entire point of both templates and inheritance.
The more I get into writing unit tests the more often I find myself writing smaller and smaller classes. The classes are so small now that many of them have only one public method on them that is tied to an interface. The tests then go directly against that public method and are fairly small (sometimes that public method will call out to internal private methods within the class). I then use an IOC container to manage the instantiation of these lightweight classes because there are so many of them.
Is this typical when trying to do things in more of a TDD manner? I fear that I have now refactored a legacy 3,000-line class that had one method into something that is also difficult to maintain, on the other side of the spectrum, because there are now literally about 100 different class files.
Is what I am doing going too far? I am trying to follow the single responsibility principle with this approach, but I may be treading into an anemic class structure where I do not have very intelligent "business objects".
This multitude of small classes would drive me nuts. With this design style it becomes really hard to figure out where the real work gets done. I am not a fan of having a ton of interfaces each with a corresponding implementation class, either. Having lots of "IWidget" and "WidgetImpl" pairings is a code smell in my book.
Breaking up a 3,000 line class into smaller pieces is great and commendable. Remember the goal, though: it's to make the code easier to read and easier to work with. If you end up with 30 classes and interfaces you've likely just created a different type of monster. Now you have a really complicated class design. It takes a lot of mental effort to keep that many classes straight in your head. And with lots of small classes you lose the very useful ability to open up a couple of key files, pick out the most important methods, and get an idea of what the heck is going on.
For what it's worth, though, I'm not really sold on test-driven design. Writing tests early, that's sensible. But reorganizing and restructuring your class design so it can be more easily unit tested? No thanks. I'll make interfaces only if they make architectural sense, not because I need to be able to mock up some objects so I can test my classes. That's putting the cart before the horse.
You might have gone a bit too far if you are asking this question. Having only one public method in a class isn't bad as such, if that class has a clear responsibility/function and encapsulates all logic concerning that function, even if most of it is in private methods.
When refactoring such legacy code, I usually try to identify the components in play at a high level that can be assigned distinct roles/responsibilities, and separate them into their own classes. I think about which functions should be which component's responsibility and move the methods into that class.
You write a class so that instances of the class maintain state. You put this state in a class because all the state in the class is related. You have functions to manage this state so that invalid permutations of state can't be set (the infamous square that has members width and height, but if width doesn't equal height, it's not really a square).
If you don't have state, you don't need a class, you could just use free functions (or in Java, static functions).
So, the question isn't "should I have one function?" but rather "what state-ful entity does my class encapsulate?"
Maybe you have one function that sets all state -- then you should make it more granular, so that, e.g., instead of having void Rectangle::setWidthAndHeight(int x, int y) you have a setWidth and a separate setHeight.
Perhaps you have a ctor that sets things up and a single function that doesIt, whatever "it" is. Then you have a functor, and a single doIt might make sense. E.g., class Add implements Operation { Add(int howmuch); Operand doIt(Operand rhs); }
(But then you may find that you really want something like the Visitor Pattern -- a pure functor is more likely if you have purely value objects, Visitor if they're arranged in a tree and are related to each other.)
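In C++ terms, the Add functor from that pseudocode might look like this (a sketch, with Operand simplified to int):

class Add {
public:
    explicit Add(int howmuch) : howmuch_(howmuch) {}
    int operator()(int rhs) const { return rhs + howmuch_; }  // the one "do it"
private:
    int howmuch_;
};

// Usage: Add add5(5); int result = add5(10);  // result == 15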
Even if these many small, single-function objects are the correct level of granularity, you may want something like a Facade pattern to compose often-used complex operations out of the primitive operations.
There's no one answer. If you really have a bunch of functors, it's cool. If you're really just making each free function a class, it's foolish.
The real answer lies in answering the question, "what state am I managing, and how well do my classes model my problem domain?"
I'd be speculating if I gave a definite answer without looking at the code.
However, it sounds like you're concerned, and that is a definite flag for reviewing the code. The short answer to your question comes back to the definition of Simple Design: a minimal number of classes and methods is one of its criteria. If you feel you can take away some elements without losing the other desirable attributes, go ahead and collapse/inline them.
Some pointers to help you decide:
Do you have a good check for "Single Responsibility"? It's deceptively difficult to get right, but it is a key skill (I still don't see it like the masters do). It doesn't necessarily translate into one-method classes. A good yardstick is 5-7 public methods per class; each class could have 0-5 collaborators. Also, to validate against SRP, ask the question: what can drive a change into this class? If there are multiple unrelated answers (e.g. a change in the packet structure (parsing) + a change in the packet-contents-to-action map (command dispatcher)), maybe the class needs to be split. At the other extreme, if you feel that a change in the packet structure can affect 4 different classes - you've run off the other cliff; maybe you need to combine them into one cohesive class.
If you have trouble naming the concrete implementations, maybe you don't need the interface. e.g. XXXImpl classes implementing XXX need to be looked at. I recently learned of a naming convention where the interface describes a role and the implementation is named after the technology used to implement the role (or, failing that, after what it does), e.g. XmppAuction implements Auction (or SniperNotifier implements AuctionEventListener).
Lastly, are you finding it difficult to add / modify / test existing code (e.g. test setup is long or painful)? Those can be signs that you need to do some refactoring.