How should I decide whether to build a "protected interface"? - c++

From: http://www.parashift.com/c++-faq-lite/basics-of-inheritance.html#faq-19.9
Three keys: ROI, ROI and ROI.
Every interface you build has a cost and a benefit. Every reusable
component you build has a cost and a benefit. Every test case, every
cleanly structured thing-a-ma-bob, every investment of any sort. You
should never invest any time or any money in any thing if there is not
a positive return on that investment. If it costs your company more
than it saves, don't do it!
Not everyone agrees with me on this; they have a right to be wrong.
For example, people who live sufficiently far from the real world act
like every investment is good. After all, they reason, if you wait
long enough, it might someday save somebody some time. Maybe. We hope.
That whole line of reasoning is unprofessional and irresponsible. You
don't have infinite time, so invest it wisely. Sure, if you live in an
ivory tower, you don't have to worry about those pesky things called
"schedules" or "customers." But in the real world, you work within a
schedule, and you must therefore invest your time only where you'll
get good pay-back.
Back to the original question: when should you invest time in building
a protected interface? Answer: when you get a good return on that
investment. If it's going to cost you an hour, make sure it saves
somebody more than an hour, and make sure the savings isn't "someday
over the rainbow." If you can save an hour within the current project,
it's a no-brainer: go for it. If it's going to save some other project
an hour someday maybe we hope, then don't do it. And if it's in
between, your answer will depend on exactly how your company trades
off the future against the present.
The point is simple: do not do something that could damage your
schedule. (Or if you do, make sure you never work with me; I'll have
your head on a platter.) Investing is good if there's a pay-back for
that investment. Don't be naive and childish; grow up and realize that
some investments are bad because they, in balance, cost more than they
return.
Well, I didn't understand how to relate this to a C++ protected interface.
Please give some real C++ examples to show what this FAQ is talking about.

First off, do not ever treat any programming reference as definitive. Ever. Everything is somebody's opinion, and in the end you should do what works best for you.
So, that said, what this text is basically trying to say is "don't use techniques that cost you more time than they save". One example of the "protected interface" they're describing is the following:
class C {
public:
int x;
};
Now, every Java EE programming book will tell you to always implement that class like this instead (shown here in C++ syntax):
class C {
public:
int getX() { return x; }
void setX(int x) { this->x = x; }
private:
int x;
};
... that's an implementation of proper encapsulation (technical term: simplifying a little, it means minimizing sharing between discrete parts). The classes using your code are concerned that you have some way to get and set an integer, not that it's actually stored as an int inside the class. So if you use accessor methods, you're better able to change the underlying implementation later: maybe you want it to read that variable from the network?
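For instance, here's a hedged sketch of that kind of later change; the lazy caching/fetching detail below is invented purely for illustration, and callers keep calling getX()/setX() exactly as before:
class C {
public:
    int getX() {
        if (!m_cached) {
            m_x = fetchX();       // hypothetical helper: could read from the network, a file, etc.
            m_cached = true;
        }
        return m_x;
    }
    void setX(int x) { m_x = x; m_cached = true; }

private:
    int fetchX() { return 42; }   // stand-in for the real data source
    int m_x = 0;
    bool m_cached = false;
};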
However, that was a large amount of extra code (in terms of characters) and some extra complexity to implement that. Doing things properly actually has a cost! It's not a cost in terms of correctness of the code - directly - but you spent some number of minutes doing it "better" that you could have spent doing something else, and there is a nonzero amount of work involved in maintaining everything you write, no matter how trivial.
So, what is being said in this passage is in my mind good advice: always double-check that when you go to do something, you're going to get more out of it than what you put in. Sanity check that you are not following an ideal to the detriment of your actual effectiveness as a programmer or a human being.
That's advice that will serve you well in any programming language, and in any walk of life.

From your quote above, the guy sounds like a pedantic jerk :)
Looking at the previous entries in his FAQ, he's really saying the following:
1) A class has two distinct interfaces for two distinct sets of clients:
It has a public interface that serves unrelated classes
It has a protected interface that serves derived classes
2) Should you always go to the trouble of creating two different interfaces for each class?
3) Answer: "no, not necessarily"
Sometimes it's worth the extra effort to create protected getter and setter methods, and make all data "private"
Other times - he says - it's "good enough" to make the data itself "protected". Without doing all the extra work of writing a bunch of extra code, and incurring the consequent size and performance penalties.
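To connect that back to C++: the "protected interface" he's weighing is roughly the choice between these two shapes (class and member names here are invented for illustration):
// The cheap option: derived classes touch the data directly.
class CounterBase1 {
protected:
    int m_count;              // protected data: almost no extra code,
                              // but every derived class now depends on the representation
};

// The invested option: data stays private; derived classes use a protected interface.
class CounterBase2 {
protected:
    int  count() const       { return m_count; }
    void setCount(int value) { m_count = value; }

private:
    int m_count = 0;          // the representation can change later
                              // without touching any derived class
};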
Sounds reasonable to me. Do what you need to do - but don't go overboard and do a bunch of unnecessary stuff in the name of "theory".
That's all he's saying - use good judgement, and don't go overboard.
You can't argue with that :)
PS:
FAQ's 19.5 through 19.9 in your link deal with "derived classes". None of this discussion is relevant outside of the question "how should I structure base classes for inheritance?" In other words, it's not a discussion about "classes" in general - only about "how should a super class best make things visible to its subclasses?".

Related

Does any language VM/compiler use single class instance property as god object array item optimization?

There are a lot of popular talks this year on C++ cache utilization optimizations (like this one). From those videos it seems like having god objects (pseudocode):
class apples {
vector<int> property_1_Values;
vector<float> property_2_Values;
};
instead of
class apple {
int property_1;
float property_2;
};
so that iterating from the N'th element to the M'th would be cache optimal (they also say that the CPU can predict not only ++/-- strides but also +const/-const sequences).
Well, I can see the point, and I can see how to reimplement my program's logic to fit such a model... yet it feels like a really architecturally bad idea - creating god objects, reinventing inheritance... So it seems to me that this should be a compiler optimization, not the programmer's headache.
So I wonder: which OO language VM/compiler has already implemented such object restructuring at compile/execution time (so that the OO programmer would not have to make such handmade optimizations)? .NET, JVM, Clang, anyone?
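For concreteness, here is a sketch of the two layouts and the loop those talks optimize for; the names come from the pseudocode above, and the rest is filled in as an assumption:
#include <cstddef>
#include <vector>

// AoS: one object per apple; scanning a single field strides over whole objects.
struct apple {
    int   property_1;
    float property_2;
};

// SoA ("god object"): each field lives in its own contiguous array.
struct apples {
    std::vector<int>   property_1_Values;
    std::vector<float> property_2_Values;
};

// Scanning property_1 for elements [n, m) touches one densely packed array,
// which is the cache- and prefetcher-friendly pattern the talks describe.
long long sum_property_1(const apples& a, std::size_t n, std::size_t m) {
    long long total = 0;
    for (std::size_t i = n; i < m; ++i)
        total += a.property_1_Values[i];
    return total;
}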
Update:
Using profiling as guidance for implementing such a thing is a really sad and bad answer - to implement such a god object optimally one would require tons of profiling, debugging, etc. (such a god object is in a sense a micro GC, because shrinking the vectors on each object addition or removal would be painful...), which is why I hoped existing VMs had already done it. I have not seen code generators/template classes that would provide a factory with a reasonable interface in C++... so the idea of such a thing seems scary when you have 100k+ lines of code and a new, not well tested god object...

Using getter/setter vs "tell, don't ask"?

The "tell, don't ask" principle is often pasted to me when I use getters or setters, and people tell me not to use them.
The site clearly explains what I should and what I shouldn't do, but it doesn't really explain WHY I should tell, instead of asking.
I find using getters and setters much more efficient, and I can do more with them.
Imagine a class Warrior with attributes health and armor:
class Warrior {
unsigned int m_health;
unsigned int m_armor;
};
Now someone attacks my warrior with a special attack that reduces his armor for 5 seconds. Using setters it would look like this:
void Attacker::attack(Warrior *target)
{
target->setHealth(target->getHealth() - m_damage);
target->setArmor(target->getArmor() - 20);
// wait 5 seconds
target->setArmor(target->getArmor() + 20);
}
And with the tell, don't ask principle it would look like this (correct me if I'm wrong):
void Attacker::attack(Warrior *target)
{
target->hurt(m_damage);
target->reduceArmor(20);
// wait 5 seconds
target->increaseArmor(20);
}
Now the second one obviously looks better, but I can't find the real benefits of this.
You still need the same amount of methods (increase/decrease vs set/get) and you lose the benefit of asking if you ever need to ask.
For example, how would you set warriors health to 100?
How do you figure out whether you should use heal or hurt, and how much health you need to heal or hurt?
Also, I see setters and getters being used by some of the best programmers in the world.
Most APIs use them, and they're used in the std lib all the time:
for (size_t i = 0; i < vector.size(); i++) {
my_func(i);
}
// vs.
vector.execForElements(my_func);
And if I have to decide whether to believe people here linking me one article about telling, not asking, or to believe 90% of the large companies (Apple, Microsoft, Android, most of the games, etc.) who have successfully made a lot of money and working programs, it's kind of hard for me to understand why tell, don't ask would be a good principle.
Why should I use it (should I?) when everything seems easier with getters and setters?
You still need the same amount of methods (increase/decrease vs set/get) and you lose the benefit of asking if you ever need to ask.
You got it wrong. The point is to replace the getVariable and setVariable pair with a meaningful operation: inflictDamage, for example. Replacing getVariable with increaseVariable just gives you a different, more obscure name for the getter and setter.
Where does this matter? For example, you don't need to provide a setter/getter to track the armor and health separately: a single inflictDamage can be handled by the class, which tries to block (damaging the shield in the process) and then takes damage on the character if the shield is not sufficient, or whatever your algorithm demands. At the same time you can add more complex logic in a single place.
Add a magic shield that temporarily increases the damage caused by your weapons when you take damage, for example. With getters/setters, every attacker needs to check whether you have such an item and then apply the same logic in multiple places, hopefully arriving at the same result. In the tell approach, attackers still only need to figure out how much damage they do and tell it to your character. The character can then figure out how the damage is spread across the items, and whether it affects the character in any other way.
Complicate the game and add fire weapons, then you can have inflictFireDamage (or pass the fire damage as a different argument to the inflictDamage function). The Warrior can figure out whether she is affected by a fire resistance spell and ignore the fire damage, rather than having all other objects in the program try to figure out how their action is going to affect the others.
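As a rough sketch of what such a "tell" interface might look like - the names and the damage rules below are made up for illustration, not taken from any particular design:
#include <algorithm>

class Warrior {
public:
    // The attacker only tells the warrior how much damage arrives;
    // the warrior decides how armor and resistances absorb it.
    void inflictDamage(unsigned int damage, bool fireDamage = false) {
        if (fireDamage && m_fireResistant)
            return;                                    // fire resistance handled in one place
        unsigned int absorbed  = std::min(damage, m_armor);
        unsigned int remaining = damage - absorbed;
        m_health = (remaining >= m_health) ? 0 : m_health - remaining;
    }

private:
    unsigned int m_health = 100;
    unsigned int m_armor = 20;
    bool m_fireResistant = false;
};
The attacker's code stays the same even when new items or resistances are added; only the Warrior changes.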
Well, if that's so, why bother with getters and setters after all? You can just have public fields.
void Attacker::attack(Warrior *target)
{
target->health -= m_damage;
target->armor -= 20;
// wait 5 seconds
target->armor += 20;
}
The reason is simple: encapsulation. Mind you, plain setters and getters are no better than a public field. You're not creating a struct here; you're creating a proper component of your program with defined semantics.
Quoting the article:
The biggest danger here is that by asking for data from an object, you
are only getting data. You’re not getting an object—not in the large
sense. Even if the thing you received from a query is an object
structurally (e.g., a String) it is no longer an object semantically.
It no longer has any association with its owner object. Just because
you got a string whose contents was “RED”, you can’t ask the string
what that means. Is it the owners last name? The color of the car? The
current condition of the tachometer? An object knows these things,
data does not.
The article suggests that "tell, don't ask" is better because it prevents you from doing things that make no sense, such as:
target->setHealth(target->getArmor() - m_damage);
This makes no sense, because armor has no relation to health.
Also, you got the std lib wrong here. Getters and setters are only really used in std::complex, and that's because the language lacked the needed functionality at the time (C++ didn't have references back then). It's the opposite, actually: the C++ standard library encourages the use of algorithms that tell containers what to do.
std::for_each(begin(v), end(v), my_func);
std::copy(begin(v), end(v), begin(u));
One reason that comes to mind is the ability to decide where you want the control to be.
For example, with your setter/getter example, the caller can change the Warrior's health arbitrarily. At best, your setter might enforce maximum and minimum values to ensure the health remains valid. But if you use the "tell" form you can enforce additional rules. You might not allow more than a certain amount of damage or healing at once, and so on.
Using this form gives you much greater control over the Warrior's interface: you can define the operations that are permitted, and you can change their implementation without having to rewrite all the code that calls them.
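A minimal sketch of that kind of rule enforcement; the limits here are invented:
#include <algorithm>

class Warrior {
public:
    void hurt(unsigned int damage) {
        damage = std::min(damage, kMaxSingleHit);            // cap any single hit
        m_health = (damage >= m_health) ? 0 : m_health - damage;
    }
    void heal(unsigned int amount) {
        m_health = std::min(m_health + amount, kMaxHealth);  // never exceed the maximum
    }

private:
    static constexpr unsigned int kMaxHealth = 100;
    static constexpr unsigned int kMaxSingleHit = 50;
    unsigned int m_health = kMaxHealth;
};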
From my point of view, both pieces of code do the same thing. The difference is in the expressiveness of each one. The first (setters and getters) can be more expressive than the second (tell, don't ask).
It's true that when you ask, you usually do so to make a decision. But that is not always the case. Sometimes you just want to know or set some value of the object, and that is not possible with tell, don't ask.
Of course, when you create a program, it's important to define the responsibilities of an object and make sure those responsibilities stay inside the object, keeping the application logic out of it. We already know this; but if you need to ask in order to make a decision that is not a responsibility of your object, how do you do that with tell, don't ask?
In practice, getters and setters prevail, but it's common to see the idea of tell, don't ask used alongside them. In other words, some APIs have getters and setters as well as methods in the tell, don't ask style.

Object composition promotes code reuse. (T/F, why)

I'm studying for an exam and am trying to figure this question out. The specific question is "Inheritance and object composition both promote code reuse. (T/F)", but I believe I understand the inheritance portion of the question.
I believe inheritance promotes code reuse because similar methods can be placed in an abstract base class such that the similar methods do not have to be identically implemented within multiple children classes. For example, if you have three kinds of shapes, and each shape's method "getName" simply returns a data member '_name', then why re-implement this method in each of the child classes when it can be implemented once in the abstract base class "shape".
However, my best understanding of object composition is the "has-a" relationship between objects/classes. For example, a student has a school, and a school has a number of students. This can be seen as object composition since they can't really exist without each other (a school without any students isn't exactly a school, is it? etc). But I see no way that these two objects "having" each other as a data member will promote code reuse.
Any help? Thanks!
Object composition can promote code reuse because you can delegate implementation to a different class, and include that class as a member.
Instead of putting all your code in your outermost classes' methods, you can create smaller classes with smaller scopes, and smaller methods, and reuse those classes/methods throughout your code.
class Inner
{
public:
void DoSomething();
};
class Outer1
{
public:
void DoSomethingBig()
{
// We delegate part of the call to inner, allowing us to reuse its code
inner.DoSomething();
// Todo: Do something else interesting here
}
private:
Inner inner;
};
class Outer2
{
public:
void DoSomethingElseThatIsBig()
{
// We used the implementation twice in different classes,
// without copying and pasting the code.
// This is the most basic possible case of reuse
inner.DoSomething();
// Todo: Do something else interesting here
}
private:
Inner inner;
};
As you mentioned in your question, this is one of the two most basic object-oriented programming principles, called a "has-a" relationship. Inheritance is the other relationship, and is called an "is-a" relationship.
You can also combine inheritance and composition in quite useful ways that will often multiply your code (and design) reuse potential. Any real world and well-architected application will constantly combine both of these concepts to gain as much reuse as possible. You'll find out about this when you learn about Design Patterns.
Edit:
Per Mike's request in the comments, a less abstract example:
// Assume the Person class exists
#include <list>
class Bus
{
public:
void Board(Person newOccupant);
std::list<Person>& GetOccupants();
private:
std::list<Person> occupants;
};
In this example, instead of re-implementing a linked list structure, you've delegated it to a list class. Every time you use that list class, you're re-using the code that implements the list.
In fact, since list semantics are so common, the C++ standard library gave you std::list, and you just had to reuse it.
1) The student knows about a school, but this is not really a HAS-A relationship; while you would want to keep track of what school the student attends, it would not be logical to describe the school as being part of the student.
2) More people occupy the school than just students. That's where the reuse comes in. You don't have to re-define the things that make up a school each time you describe a new type of school-attendee.
I have to agree with #Karl Knechtel -- this is a pretty poor question. As he said, it's hard to explain why, but I'll give it a shot.
The first problem is that it uses a term without defining it -- and "code reuse" means a lot of different things to different people. To some people, cutting and pasting qualifies as code reuse. As little as I like it, I have to agree with them, to at least some degree. Other people define code reuse in ways that rule out cutting and pasting (counting another copy of the same code as separate code, not reuse of the same code). I can see that viewpoint too, though I tend to think their definition is intended more to serve a specific end than to be really meaningful (i.e., "code reuse"->good, "cut-n-paste"->bad, therefore "cut-n-paste"!="code reuse"). Unfortunately, what we're looking at here is right on the border, where you need a very specific definition of what code reuse means before you can answer the question.
The definition used by your professor is likely to depend heavily upon the degree of enthusiasm he has for OOP -- especially during the '90s (or so) when OOP was just becoming mainstream, many people chose to define it in ways that only included the cool new OOP "stuff". To achieve the nirvana of code reuse, you had to not only sign up for their OOP religion, but really believe in it! Something as mundane as composition couldn't possibly qualify -- no matter how strangely they had to twist the language for that to be true.
As a second major point, after decades of use of OOP, a few people have done some fairly careful studies of what code got reused and what didn't. Most that I've seen have reached a fairly simple conclusion: it's quite difficult (i.e., essentially impossible) to correlate coding style with reuse. Nearly any rule you attempt to make about what will or won't result in code reuse can and will be violated on a regular basis.
Third, and what I suspect tends to be foremost in many people's minds is the fact that asking the question at all makes it sound as if this is something that can/will affect a typical coder -- that you might want to choose between composition and inheritance (for example) based on which "promotes code reuse" more, or something on that order. The reality is that (just for example) you should choose between composition and inheritance primarily based upon which more accurately models the problem you're trying to solve and which does more to help you solve that problem.
Though I don't have any serious studies to support the contention, I would posit that the chances of that code being reused will depend heavily upon a couple of factors that are rarely even considered in most studies: 1) how similar of a problem somebody else needs to solve, and 2) whether they believe it will be easier to adapt your code to their problem than to write new code.
I should add that in some of the studies I've seen, there were factors found that seemed to affect code reuse. To the best of my recollection, the one that stuck out as being the most important/telling was not the code itself at all, but the documentation available for that code. Being able to use the code without basically having to reverse engineer it contributes a great deal toward its being reused. The second point was simply the quality of the code -- a number of the studies were done in places/situations where they were trying to promote code reuse. In a fair number of cases, people tried to reuse quite a bit more code than they actually managed to, but had to give up on it simply because the code wasn't good enough -- everything from bugs to clumsy interfaces to poor portability prevented reuse.
Summary: I'll go on record as saying that code reuse has probably been the most overhyped, under-delivered promise in software engineering over at least the last couple of decades. Even at best, code reuse remains a fairly elusive goal. Trying to simplify it to the point of treating it as a true/false question based on two factors is oversimplifying the question to the point that it's not only meaningless, but utterly ridiculous. It appears to trivialize and demean nearly the entire practice of software engineering.
I have an object Car and an object Engine:
#include <string>

class Engine {
    int horsepower;
};

class Car {
    std::string make;
    Engine cars_engine;
};
A Car has an Engine; this is composition. However, I don't need to redefine Engine to put an engine in a car -- I simply say that a Car has an Engine. Thus, composition does indeed promote code reuse.
Object composition does promote code re-use. Without object composition, if I understand your definition of it properly, every class could have only primitive data members, which would be beyond awful.
Consider the classes
class Vector3
{
    double x, y, z;
    double vectorNorm;
};

class Object
{
    Vector3 position;
    Vector3 velocity;
    Vector3 acceleration;
};
Without object composition, you would be forced to have something like
class Object
{
double positionX, positionY, positionZ, positionVectorNorm;
double velocityX, velocityY, velocityZ, velocityVectorNorm;
double accelerationX, accelerationY, accelerationZ, accelerationVectorNorm;
};
This is just a very simple example, but I hope you can see how even the most basic object composition promotes code reuse. Now think about what would happen if Vector3 contained 30 data members. Does this answer your question?

Examples of why declaring data in a class as private is important?

I understand that only the class can access the data, so it is therefore "safer" and what not, but I don't really understand why it is such a big deal. Maybe it is because I haven't written any programs complex enough for data to be changed accidentally, but it is just a bit confusing, when learning classes, to be told that making things private is important because it is "safer", when the only time I have changed data in a program is when I explicitly meant to. Could anyone provide some examples where data would have been unintentionally changed had it not been private?
Depends what you mean by "unintentional changes". All code is written by someone so if he is changing a member variable of a class then the change is intentional (at least from his side). However the implementor of the class might not have expected this and it can break the functionality.
Imagine a very simple stack:
class Stack
{
public:
int Items[10];
int CurrentItemIndex;
};
Now CurrentItemIndex points to the index which represents the current item on top of the stack. If someone goes ahead and changes it then your stack is corrupted. Similarly someone can just write stuff into Items. If something is public then it is usually a sign that it is intended for public usage.
Also making members private provides encapsulation of the implementation details. Imagine someone iterates over stack on the above implementation by examining Items. Then it will break all code if the implementation of the stack gets changed to be a linked list to allow arbitrary number of items. In the end the maintenance will kill you.
The public interface of a class should always be as stable as possible because that's what people will be using. You do not want to touch x lines of code using a class just because you changed some little detail.
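For comparison, here is a sketch of an encapsulated version (illustrative only): the invariant lives behind push/pop, so the representation could later become a linked list or a std::vector without breaking any caller.
#include <stdexcept>

class Stack {
public:
    void push(int value) {
        if (m_count == kCapacity)
            throw std::overflow_error("stack is full");
        m_items[m_count++] = value;
    }
    int pop() {
        if (m_count == 0)
            throw std::underflow_error("stack is empty");
        return m_items[--m_count];
    }

private:
    static constexpr int kCapacity = 10;
    int m_items[kCapacity];   // nobody outside the class can corrupt these
    int m_count = 0;
};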
The moment you start collaborating with other people on code, you'll appreciate the clarity and security of keeping your privates private.
Say you've designed a class that rotates an image. The constructor takes an image object, and there's a "rotate" method that will rotate the image the requested number of degrees and return it.
During rotation, you keep member variables with the state of the image, say for example a map of the pixels in the image itself.
Your colleagues begin to use the class, and you're responsible for keeping it working. After a few months, someone points out to you a technique that performs the manipulations more efficiently without keeping a map of the pixels.
Did you minimize your exposed interface by keeping your privates private?
If you did, you can swap out the internal implementation to use on the other technique, and the people who've been depending on your code won't need to make any changes.
If you didn't, you have no idea what bits of your internal state your colleagues are depending on, and you can't safely make any changes without contacting all of your colleagues and potentially asking them to change their code, or changing their code for them.
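A bare-bones sketch of what "keeping your privates private" means here; the class and member names are invented for this answer:
#include <utility>
#include <vector>

class ImageRotator {
public:
    explicit ImageRotator(std::vector<unsigned char> pixels, int width, int height)
        : m_pixels(std::move(pixels)), m_width(width), m_height(height) {}

    // The only thing colleagues may depend on.
    std::vector<unsigned char> rotate(int degrees) {
        // Rotation algorithm elided; the point is that it can be swapped
        // for a more efficient technique without changing this signature.
        (void)degrees;
        return m_pixels;
    }

private:
    std::vector<unsigned char> m_pixels;  // internal state: free to change or remove
    int m_width;
    int m_height;
};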
Is this a problem when you're working alone? Maybe not. But it is a problem when you've got an employer, or when you want to open-source that cool new library you're so proud of.
When you make a library that other people use, you want to expose only the most basic subset of your code necessary for external code to interface with it. This is called information hiding. It would cause more issues if other developers were allowed to modify any field they wanted, perhaps in an attempt to perform some task - an attempt that could cause unspecified program behaviour.
Generally you want to hide "data" (make member variables private) so that people who aren't familiar with the class don't access the data directly. Instead, they use the public methods to access and change that data.
E.g. setting a name via a public setter can check for any problems and also make the first character upper case.
Accessing the data directly will not perform those checks and adjustments.
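A small sketch of that kind of checking setter (the Person class and its rules here are made up):
#include <cctype>
#include <stdexcept>
#include <string>

class Person {
public:
    // The setter is the one place where validation and normalization happen.
    void setName(const std::string& name) {
        if (name.empty())
            throw std::invalid_argument("name must not be empty");
        m_name = name;
        m_name[0] = static_cast<char>(
            std::toupper(static_cast<unsigned char>(m_name[0])));
    }
    const std::string& getName() const { return m_name; }

private:
    std::string m_name;   // direct access would bypass the checks above
};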
You don't want someone to suddenly fiddle with your internals, do you? Neither do C++ classes.
The problem is, if anyone can suddenly change the state of a variable that is yours, your class will screw up. It's as if someone suddenly filled your gut with something you don't want, or exchanged your lung for someone else's.
Let's say you have a BankAccount class where you store a person's NIP and cash amount. Let's put all the fields public and see what could go wrong:
class BankAccount
{
public:
std::string NIP;
int cash;
};
Now, let's pretend that you leave it this way and use it throughout your program. Later on, you find a nasty bug caused by a negative amount of cash (whether it is from calculations or simply an accident). So you spend a couple of hours finding where that negative amount came from and fix it.
You don't want this to happen again, so you decide to make the cash amount private and perform checks before setting it, to avoid any other bugs like the previous one. So you go like this:
class BankAccount
{
public:
int getCash() const { return cash; }
void setCash(int amount)
{
if (amount >= 0)
cash = amount;
else
throw std::runtime_error("Cash amount is negative.");
}
private:
int cash;
};
Now what? You have to find all the cash references and replace them. A quick and dirty Find and Replace won't fix it so easily: you must change every read to getCash() and every write to setCash(). All this time spent fixing something that could have been avoided by hiding the implementation details within your class and only exposing a general interface.
Sure, that's a pretty dumb example, but it has happened to me so many times with more complex cases (sometimes the bug is much harder to find) that I've really learned to encapsulate as much as I can. Do your future self and the readers of your code a favor and keep members private; you never know when your "implementation details" will change.
When you are on a project where two or more people work on the same code but on different days (say, two people work on Mondays, two on Tuesdays, two on Wednesdays, etc.), the people who pick up the project next won't have to go and bother the other coders just to have it explained what/when/why something was done that way. If you have used Tortoise you will see how helpful this is.

Is it a good practice to write classes that typically have only one public method exposed?

The more I get into writing unit tests the more often I find myself writing smaller and smaller classes. The classes are so small now that many of them have only one public method on them that is tied to an interface. The tests then go directly against that public method and are fairly small (sometimes that public method will call out to internal private methods within the class). I then use an IOC container to manage the instantiation of these lightweight classes because there are so many of them.
Is this typical of trying to do things in a more TDD manner? I fear that I have now refactored a legacy 3,000-line class that had one method in it into something that is also difficult to maintain, on the other side of the spectrum, because there are now literally about 100 different class files.
Is what I am doing going too far? I am trying to follow the single responsibility principle with this approach but I may be treading into something that is an anemic class structure where I do not have very intelligent "business objects".
This multitude of small classes would drive me nuts. With this design style it becomes really hard to figure out where the real work gets done. I am not a fan of having a ton of interfaces each with a corresponding implementation class, either. Having lots of "IWidget" and "WidgetImpl" pairings is a code smell in my book.
Breaking up a 3,000 line class into smaller pieces is great and commendable. Remember the goal, though: it's to make the code easier to read and easier to work with. If you end up with 30 classes and interfaces you've likely just created a different type of monster. Now you have a really complicated class design. It takes a lot of mental effort to keep that many classes straight in your head. And with lots of small classes you lose the very useful ability to open up a couple of key files, pick out the most important methods, and get an idea of what the heck is going on.
For what it's worth, though, I'm not really sold on test-driven design. Writing tests early, that's sensible. But reorganizing and restructuring your class design so it can be more easily unit tested? No thanks. I'll make interfaces only if they make architectural sense, not because I need to be able to mock up some objects so I can test my classes. That's putting the cart before the horse.
You might have gone a bit too far if you are asking this question. Having only one public method in a class isn't bad as such, if that class has a clear responsibility/function and encapsulates all logic concerning that function, even if most of it is in private methods.
When refactoring such legacy code, I usually try to identify the components in play at a high level that can be assigned distinct roles/responsibilities and separate them into their own classes. I think about which functions should be which component's responsibility and move the methods into that class.
You write a class so that instances of the class maintain state. You put this state in a class because all the state in the class is related. You have functions to manage this state so that invalid combinations of state can't be set (the infamous square that has width and height members, but if width doesn't equal height it's not really a square).
If you don't have state, you don't need a class, you could just use free functions (or in Java, static functions).
So, the question isn't "should I have one function?" but rather "what state-ful entity does my class encapsulate?"
Maybe you have one function that sets all state -- and you should make it more granular, so that, e.g., instead of having void Rectangle::setWidthAndHeight( int x, int y) you should have a setWidth and a separate setHeight.
Perhaps you have a ctor that sets things up, and a single function that doesIt, whatever "it" is. Then you have a functor, and a single doIt might make sense. E.g., class Add implements Operation { Add( int howmuch); Operand doIt(Operand rhs);}
(But then you may find that you really want something like the Visitor Pattern -- a pure functor is more likely if you have purely value objects, Visitor if they're arranged in a tree and are related to each other.)
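In C++ terms, the functor shape described above might look roughly like this (Operation, Add, and Operand are just the made-up names from the pseudocode, not a real API):
struct Operand { int value; };

class Operation {
public:
    virtual ~Operation() = default;
    virtual Operand doIt(Operand rhs) = 0;   // the single "do it" operation
};

class Add : public Operation {
public:
    explicit Add(int howMuch) : m_howMuch(howMuch) {}   // the ctor sets things up
    Operand doIt(Operand rhs) override { return Operand{rhs.value + m_howMuch}; }

private:
    int m_howMuch;
};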
Even if this many small, single-function objects is the correct level of granularity, you may want something like a Facade pattern to compose often-used complex operations out of the primitive ones.
There's no one answer. If you really have a bunch of functors, it's cool. If you're really just making each free function a class, it's foolish.
The real answer lies in answering the question, "what state am I managing, and how well do my classes model my problem domain?"
I'd be speculating if I gave a definite answer without looking at the code.
However it sounds like you're concerned and that is a definite flag for reviewing the code. The short answer to your question comes back to the definition of Simple Design. Minimal number of classes and methods is one of them. If you feel like you can take away some elements without losing the other desirable attributes, go ahead and collapse/inline them.
Some pointers to help you decide:
Do you have a good check for "Single Responsibility" ? It's deceptively difficult to get it right but is a key skill (I still don't see it like the masters). It doesn't necessarily translate to one method-classes. A good yardstick is 5-7 public methods per class. Each class could have 0-5 collaborators. Also to validate against SRP, ask the question what can drive a change into this class ? If there are multiple unrelated answers (e.g. change in the packet structure (parsing) + change in the packet contents to action map (command dispatcher) ) , maybe the class needs to be split. On the other end, if you feel that a change in the packet structure, can affect 4 different classes - you've run off the other cliff; maybe you need to combine them into a cohesive class.
If you have trouble naming the concrete implementations, maybe you don't need the interface. E.g. XXXImpl classes implementing XXX need to be looked at. I recently learned of a naming convention where the interface describes a Role and the implementation is named after the technology used to implement the role (or, failing that, after what it does). E.g. XmppAuction implements Auction (or SniperNotifier implements AuctionEventListener).
Lastly, are you finding it difficult to add/modify/test existing code (e.g. the test setup is long or painful)? Those can be signs that you need to refactor.