Polymorphism in C++ and Objective-C

I am new to Objective C, I wanted to understand protocol concept more clearly.
@protocol protocolName
@optional
// methods a conforming class may implement
@required
// methods a conforming class must implement
@end
Can I correlate the @optional part with virtual functions and the @required part with pure virtual functions of C++?
Is @protocol Objective-C's way of creating interfaces and abstract classes?

Is @protocol Objective-C's way of creating interfaces and abstract classes?
Exactly.
Can I correlate the @optional part with virtual functions and the @required part with pure virtual functions of C++?
Yes you can, but there is one difference: if classA does not implement OptionalProtocolMethodB, any attempt to call [classA OptionalProtocolMethodB] will cause a runtime exception (unrecognized selector). Calling a virtual function in C++ will not, because a non-pure virtual function always has at least the base-class implementation to fall back on.
You should check if the class implements the optional method before calling it. Example:
if ([_delegate respondsToSelector:@selector(didUploadedTotalBytes:totalBytesExpectedToWrite:)]) {
    [_delegate didUploadedTotalBytes:_uploadedBytes totalBytesExpectedToWrite:_totalBytes];
}
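For comparison, here is a minimal C++ sketch of the analogy (names invented for illustration): a @required method maps to a pure virtual function, while an @optional method loosely maps to a virtual function with a default implementation.
#include <iostream>
class UploadDelegate {
public:
    virtual ~UploadDelegate() {}
    // "Required": derived classes must override this.
    virtual void didFinishUpload() = 0;
    // "Optional": the default implementation does nothing.
    virtual void didUploadBytes(long bytes) {}
};
class Logger : public UploadDelegate {
public:
    void didFinishUpload() override { std::cout << "done\n"; }
    // didUploadBytes is not overridden, yet calling it is still safe.
};
The difference noted above stands: in C++ the optional call is always safe because a base implementation exists, whereas in Objective-C you must guard the call with respondsToSelector:.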

Forget about abstract classes in Objective-C (there are none). Forget about protocols in connection with class hierarchy.
A protocol describes a set of methods that an object needs to implement to be usable for some purpose. For example, if a protocol has two required methods "color" and "setColor", then any instance of any class implementing these two methods can be used. In addition, the class must claim that it supports the protocol - this prevents a class from being used by coincidence. On the other hand, all methods in a protocol could be optional, and a class could claim to support the protocol without implementing any of the methods.
There will usually be a description of what happens when optional methods are not implemented. For example, the documentation for an optional method returning BOOL might say "if not implemented, it is assumed that the method returns YES". In other cases the documentation might say under which circumstances an optional method will be called. In any case, the caller must check that an optional method is implemented before calling it, using "respondsToSelector:". (Of course, the documentation might say, for example, that if wantsComplexBehaviour returns YES, then doComplexBehaviour1 and doComplexThings2 must be implemented, and not implementing them would be a programmer error punished with an exception when the methods get called.)
This is usually all done in a very pragmatic way. Many classes that you use need delegate objects which implement some protocol, so you either add the protocol methods to your implementation and make yourself the delegate, or you create a class for the sole purpose of creating these delegates and implement all the protocol methods in the implementation of that class.


Virtual event handlers from several classes: multiple inheritance or composition?

My team has written several C++ classes which implement event handling via pure virtual callbacks - for example, when a message is received from another process, the base class which handles IPC messaging calls its own pure virtual function, and a derived class handles the event in an override of that function. The base class knows the event has occurred; the derived class knows what to do with it.
I now want to combine the features provided by these base classes in a higher-level class, so for example when a message arrives from another process, my new class can then forward it on over its network connection using a similar event-driven networking class. It looks like I have two options:
(1) composition: derive classes from each of the event-handling base classes and add objects of those derived classes to my new class as members, or:
(2) multiple inheritance: make my new class a derived class of all of the event-handling base classes.
I've tried both (1) and (2), and I'm not satisfied with my implementation of either.
There's an extra complication: some of the base classes have been written using initialisation and shutdown methods instead of using constructors and destructors, and of course these methods have the same names in each class. So multiple inheritance causes function name ambiguity. Solvable with using declarations and/or explicit scoping, but not the most maintainable-looking thing I've ever seen.
Even without that problem, using multiple inheritance and overriding every pure virtual function from each of several base classes is going to make my new class very big, bordering on "God Object"-ness. As requirements change (read: "as requirements are added") this isn't going to scale well.
On the other hand, using separate derived classes and adding them as members of my new class means I have to write lots of methods on each derived class to exchange information between them. This feels very much like "getters and setters" - not quite as bad, but there's a lot of "get this information from that class and hand it to this one", which has an inefficient feel to it - lots of extra methods, lots of extra reads and writes, and the classes have to know a lot about each other's logic, which feels wrong. I think a full-blown publish-and-subscribe model would be overkill, but I haven't yet found a simple alternative.
There's also a lot of duplication of data if I use composition. For example, if my class's state depends on whether its network connection is up and running, I have to either have a state flag in every class affected by this, or have every class query the networking class for its state every time a decision needs to be made. If I had just one multiply-inherited class, I could just use a flag which any code in my class could access.
So, multiple inheritance, composition, or perhaps something else entirely? Is there a general rule-of-thumb on how best to approach this kind of thing?
From your description I think you've gone for a "template method" style approach, where the base does the work and then calls a pure virtual that the derived class implements, rather than a "callback interface" approach, which is much the same except that the pure virtual method lives on a completely separate interface that is passed into the "base" as a constructor parameter. I personally prefer the latter, as I find it considerably more flexible when the time comes to plug objects together and build higher-level objects.
I tend to go for composition with the composing class implementing the callback interfaces that the composed objects require and then potentially composing again in a similar style at a higher level.
You can then decide whether it's appropriate to compose by having the composing object implement the callback interfaces and pass itself into the "composed" objects in their constructors, or to implement the callback interface in its own object, possibly exposing a simpler and more precise callback interface that your composing object implements, and compose both the "base object" and the "callback implementation object"...
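A minimal sketch of the callback-interface style described above (all names are hypothetical): the messaging component takes its callback interface as a constructor parameter, and a composing class implements that interface and owns the component.
#include <string>
// The messaging component calls out through this interface instead of
// through a pure virtual on its own base class.
class MessageEvents {
public:
    virtual ~MessageEvents() {}
    virtual void onMessage(const std::string& msg) = 0;
};
// The work-doing object receives the interface in its constructor.
class IpcChannel {
public:
    explicit IpcChannel(MessageEvents& events) : events_(events) {}
    void poll() { /* ... on receipt of a message: */ events_.onMessage("hello"); }
private:
    MessageEvents& events_;
};
// Composition: the higher-level class implements the callback interface,
// owns the channel, and forwards events to its other members.
class Bridge : public MessageEvents {
public:
    Bridge() : ipc_(*this) {}
    void onMessage(const std::string& msg) override { /* forward over the network */ }
private:
    IpcChannel ipc_;
};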
Personally I wouldn't go with an "abstract event handling" interface as I prefer my code to be explicit and clear even if that leads to it being slightly less generic.
I'm not totally clear on what your new class is trying to achieve, but it sounds like you're effectively having to provide a new implementation somewhere for all of these abstract event classes.
Personally I would plump for composition. Multiple inheritance quickly becomes a nightmare, especially when things have to change, and composition keeps the existing separation of concerns.
You state that each derived object will have to communicate with the network class, but can you try to reduce this to a minimum? For instance, each derived event object could be purely responsible for packaging up the event info into some kind of generic packet, which is then passed to the network class to do the guts of sending.
Without knowing exactly what your new class is doing it's hard to comment, or suggest better patterns, but the more I code, the more I am learning to agree with the old adage "favour composition over inheritance".
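That separation might look like this (a sketch; all names are invented): the event classes only know how to build a neutral packet, and the network class is the only component that knows how to send one.
#include <string>
struct Packet {
    std::string payload;  // serialized event data
};
class NetworkSender {
public:
    void send(const Packet& p) { /* socket write, retries, etc. */ }
};
// Each event handler is responsible only for packaging its event.
class FileChangedEvent {
public:
    Packet toPacket() const { return Packet{"file-changed"}; }
};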

C++: What is a class interface?

I know that in C++ there is no interface keyword or anything of the sort; an interface is more of a design pattern (or idiom) instead.
So, if I have an Apple class, which contains information and methods to work on apples (color, sourness, size, eat, throw)...
What would an interface to Apple look like?
What do you usually need interfaces for?
You just use pure virtual functions in a class.
// Placeholder types so the example is self-contained.
struct color {};
struct sourness {};
struct size {};
class IApple
{
public:
    virtual ~IApple() {}  // virtual destructor: safe to delete through an IApple*
    virtual color getColor() = 0;
    virtual sourness getSourness() = 0;
    virtual size getSize() = 0;
    virtual void eat() = 0;
};
Martin's illustrated an interface. Re your other question - what do you usually need them for:
they can be used as base classes by functions that provide this API
an interface may be a small part of the derived class's overall functionality; a derived class can implement many interfaces
pointers or references to interfaces (possibly in containers) can be used in code to decouple that code from any particular implementation (i.e. as a base for run-time polymorphic code using virtual functions / dispatch)
this can help reduce compile times and break cyclic dependencies
the implementation might be provided by a caller or a factory method
being able to vary the implementation often makes the system overall more flexible and reusable
implementations that facilitate testing can be slotted in
the interface itself may have value as a form of usage documentation (sometimes I even create interfaces as illustrations of expected template policy parameters, although there's no actual need to derive your policy classes from them)
some design patterns work by changing the implementation during the lifetime of the containing object/code
they can be used as a kind of annotation or trait for a class - even without providing any actual behaviour of their own - with other code checking whether the interface is a base when deciding on appropriate behaviour
An interface is a set of members, e.g. functions and variables, that is shared between different classes, so you can access the members of the interface without having to know which class the object was in the first place; as long as it implements the interface, you can be sure it has those members.
You can use it for example to iterate through different objects calling the same function on each.
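For instance, a sketch building on the IApple class above (GrannySmith is a hypothetical implementation):
#include <memory>
#include <vector>
class GrannySmith : public IApple {
public:
    color getColor() override { return color{}; }
    sourness getSourness() override { return sourness{}; }
    size getSize() override { return size{}; }
    void eat() override { /* crunch */ }
};
int main() {
    std::vector<std::unique_ptr<IApple>> apples;
    apples.push_back(std::make_unique<GrannySmith>());
    for (auto& a : apples)
        a->eat();  // same call, regardless of the concrete class
}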

If a class might be inherited, should every function be virtual?

In C++, a coder doesn't know whether other coders will inherit his class. Should he make every function in that class virtual? Are there any drawbacks? Or is it just not acceptable at all?
In C++, you should only design a class to be inherited from if you intend for it to be used polymorphically. The way that you treat polymorphic objects in C++ is very different from how you treat other objects. You don't tend to put polymorphic classes on the stack, or pass them to or return them from functions by value, since this can lead to slicing. Polymorphic objects tend to be heap-allocated and to be passed around and returned by pointer or by reference, etc.
If you design a class to not be inherited from and then inherit from it, you cause all sorts of problems. If the destructor isn't marked virtual, you can't delete the object through a base class pointer without causing undefined behavior. Without the member functions marked virtual, they can't be overridden in a derived class.
As a general rule in C++, when you design a class, determine whether you want it to be inherited from. If you do, mark the appropriate functions virtual and give it a virtual destructor. You might also disable the copy assignment operator to avoid slicing. Similarly, if you want the class not to be inheritable, don't give it any of these functions. In most cases it's a logic error to inherit from a class that wasn't designed to be inherited from, and in most of the cases where you'd want to do this, you can use composition instead of inheritance to achieve the same effect.
No, not usually.
A non-virtual function enforces class-invariant behavior. A virtual function doesn't. As such, the person writing the base class should think about whether the behavior of a particular function is/should be class invariant or not.
While it's possible for a design to allow all behaviors to vary in derived classes, it's fairly unusual. It's usually a pretty good clue that the person who wrote the class either didn't think much about its design, or lacked the resolve to make a decision.
In C++ you design your class to be used either as a value type or as a polymorphic type. See, for example, the C++ FAQ.
If you are making a class to be used by other people, you should put a lot of thought into your interface and try to work out how your class will be used. Then make the decisions like which functions should be virtual.
Or better yet, write a test case for your class, using it how you expect it to be used, and then make the interface work for that. You might be surprised what you find out doing it. Things you thought were absolutely necessary might turn out to be rarely needed, and things that you thought were not going to be used might turn out to be the most useful methods. Doing it this way round will save you from doing unnecessary work in the long run and will end up producing a more solid design.
Jerry Coffin and Dominic McDonnell have already covered the most important points.
I'll just add an observation: in the days of MFC (the mid-1990s) I was very annoyed with the lack of ways to hook into things. For example, the documentation suggested copying MFC's source code for printing and modifying it, instead of overriding behavior, because nothing was virtual there.
There are of course a zillion+1 ways to provide "hooks", but virtual methods are one easy way. They're needed in badly designed classes, so that the client code can fix things, but in those badly designed classes the methods are not virtual. For classes with better design there is less need to override behavior, and so for those classes making methods virtual by default (and non-virtual only as an active choice) can be counter-productive; as Jerry remarked, virtuals provide opportunities for derived classes to screw up.
There are design patterns that can be employed to minimize the possibilities of screw-ups: for example, wrapping internal virtuals in exposed non-virtual methods with sanity checks, or using decoupled event handling (where appropriate) instead of virtuals.
Cheers & hth.,
When you create a class, and you want that class to be used polymorphically you have to consider that the class has two different interfaces. The user interface is defined by the set of public functions that are available in your base class, and that should pretty much cover all operations that users want to perform on objects of your class. This interface is defined by the access qualifiers, and in particular the public qualifier.
There is a second interface, that defines how your class is to be extended. At that level you have to think on what behavior you want to be overridden by extending classes, and what elements of your object you want to provide to extending classes. You offer access to derived classes by means of the protected qualifier, and you offer extension points by means of virtual functions.
You should try to follow the Non-Virtual Interface idiom whenever possible. That idiom (google for it) basically tries to fully separate the two interfaces by not having public virtual functions. Users call non-virtual functions, and those in turn call on configurable functionalities by means of protected/private virtual functions. This clearly separates extension points from the class interface.
There is a single case, where virtual has to be part of the user interface: the destructor. If you want to offer your users the ability to destroy derived objects through pointers to the base, then you have to provide a virtual destructor. Else you just provide a protected non-virtual one.
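A minimal sketch of the Non-Virtual Interface idiom (all names invented for illustration):
#include <cassert>
class Document {
public:
    virtual ~Document() {}  // public virtual destructor: deletion via Document* is allowed
    // Public, non-virtual: the user interface. It can enforce
    // invariants and sanity checks around the customizable step.
    void save() {
        assert(isOpen());
        doSave();  // delegate to the extension point
    }
private:
    bool isOpen() const { return true; }  // stand-in invariant check
    // Private virtual: the extension interface for derived classes.
    virtual void doSave() = 0;
};
class TextDocument : public Document {
private:
    void doSave() override { /* write text to disk */ }
};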
He should code the functions as they are; he shouldn't make them virtual at all in the circumstances you describe.
The reasons being:
1. The class coder obviously has a particular use in mind for the functions he is writing.
2. The inheriting class may or may not make use of these functions, as its requirements dictate.
3. Any function can still be redefined (hidden) in a derived class without any errors.

Factory Pattern in C++ -- doing this correctly?

I am relatively new to "design patterns" as they are referred to in a formal sense. I've not been a professional for very long, so I'm pretty new to this.
We've got a pure virtual interface base class. This interface class obviously exists to define the functionality that its derived children are supposed to provide. The current use and situation in the software dictates what type of derived child we want to use, so I recommended creating a wrapper that will communicate which type of derived child we want and return a Base pointer that points to a new derived object. This wrapper, to my understanding, is a factory.
Well, a colleague of mine created a static function in the Base class to act as the factory. This causes me trouble for two reasons. First, it seems to break the interface nature of the Base class. It feels wrong to me that the interface would itself need to have knowledge of the children derived from it.
Secondly, it causes more problems when I try to re-use the Base class across two different Qt projects. One project is where I am implementing the first (and probably only) real derived class for this one class, though I want to use the same method for two other features that will have several different derived classes; the second is the actual application where my code will eventually be used. My colleague has created a derived class to act as a tester for the real application while I code my part. This means that I've got to add his headers and cpp files to my project, and that just seems wrong, since I'm not even using his code for the project while I implement my part (but he will use mine when it is finished).
Am I correct in thinking that the factory really needs to be a wrapper around the Base class rather than the Base acting as the factory?
You do NOT want to use your interface class as the factory class. For one, if it is a true interface class, there is no implementation. Second, if the interface class does have some implementation defined (in addition to the pure virtual functions), making a static factory method now forces the base class to be recompiled every time you add a child class implementation.
The best way to implement the factory pattern is to have your interface class separate from your factory.
A very simple (and incomplete) example is below:
class MyInterface
{
public:
    virtual ~MyInterface() {}  // virtual destructor, so deleting through the interface is safe
    virtual void MyFunc() = 0;
};
class MyImplementation : public MyInterface
{
public:
    virtual void MyFunc() {}
};
class MyFactory
{
public:
    static MyInterface* CreateImplementation(...);
};
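One way this might be completed and used (a self-contained sketch; the identifier parameter and all names here are invented, since the parameter list above was left open):
#include <memory>
#include <string>
class Widget {                       // plays the role of MyInterface
public:
    virtual ~Widget() {}
    virtual void MyFunc() = 0;
};
class BasicWidget : public Widget {  // plays the role of MyImplementation
public:
    void MyFunc() override {}
};
class WidgetFactory {
public:
    // The factory maps an identifier to a concrete implementation.
    static Widget* CreateImplementation(const std::string& kind) {
        if (kind == "basic")
            return new BasicWidget;
        return nullptr;              // unknown identifier
    }
};
int main() {
    std::unique_ptr<Widget> w(WidgetFactory::CreateImplementation("basic"));
    if (w)
        w->MyFunc();
}
Client code depends only on Widget and WidgetFactory; adding a new implementation means touching the factory, but no client code.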
I'd have to agree with you. Probably one of the most important principles of object oriented programming is to have a single responsibility for the scope of a piece of code (whether it's a method, class or namespace). In your case, your base class serves the purpose of defining an interface. Adding a factory method to that class, violates that principle, opening the door to a world of shi... trouble.
Yes, a static factory method in the interface (base class) requires it to have knowledge of all possible instantiations. That way, you don't get any of the flexibility the Factory Method pattern is intended to bring.
The Factory should be an independent piece of code, used by client code to create instances. You have to decide somewhere in your program what concrete instance to create. Factory Method allows you to avoid having the same decision spread out through your client code. If later you want to change the implementation (or e.g. for testing), you have just one place to edit: this may be e.g. a simple global change, through conditional compilation (usually for tests), or even via a dependency injection configuration file.
Be careful about how client code communicates what kind of implementation it wants: that's not an uncommon way of reintroducing the dependencies factories are meant to hide.
It's not uncommon to see factory member functions in a class, but it makes my eyes bleed. Often their use has been mixed up with the functionality of the named constructor idiom. Moving the creation function(s) to a separate factory class will buy you more flexibility, also to swap factories during testing.
When the interface is just for hiding the implementation details and there will be only one implementation of the Base interface ever, it could be ok to couple them. In that case, the factory function is just a new name for the constructor of the actual implementation.
However, that case is rare. Except when explicitly designed to have only one implementation ever, you are better off assuming that multiple implementations will exist at some point in time, if only for testing (as you discovered).
So usually it is better to split the Factory part into a separate class.

Parameterized Factory & product classes that cannot be instantiated without the Factory

I'm working on implementing a Factory class along the lines of what is proposed in this response to a previous question:
Factory method implementation - C++
It's a Factory that stores a map from strings to object creation functions so I can request different types of objects from the factory by a string identifier. All the classes this factory produces will inherit from an abstract class (Connection) providing a common interface for connections over different protocols (HTTPConnection, FTPConnection, etc...)
I have a good grasp of how the method linked to above works and have got that working.
Where I'm having problems is trying to figure out a mechanism to prevent instantiation of the Connection objects without using the Factory. In order for the Factory to do its work, I need to provide it an object creation function to store in its map. I can't provide it the constructor, because you can't make function pointers to constructors. So, as in the link above, there has to be a separate object creation function to return new objects. But to do this, I need to make this creation function either a static method of the class, which the client code would be able to access, or a separate function, which would require either a) that the constructor of the Connection classes be public, or b) making the constructor private and making a non-member creation function a friend, which isn't inherited and can't be enforced by the abstract base class.
Similarly, if I just made the Factory class a friend of the Connection classes it was supposed to produce, so that it could access their private constructors, that would work, but I couldn't enforce it through the abstract base class because friendship isn't inherited. Each subclass would have to explicitly be friends with the Factory.
Can anyone suggest a method of implementing what I've described above?
To reiterate the requirements:
1 - Factory that produces a variety of objects all derived from the same base class based on passed in identifier to the Factory's Create method.
2 - All the subclasses that the factory will need to produce will automatically register a creation function and identifier with the factory (see linked SO answer above)
3 - All the subclasses that the factory will produce should not be instantiable (instantiatable?) without going through the Factory
4 - Enforce #3 explicitly as part of the abstract base class using inheritance. Remove the possibility for someone to subclass from the abstract base class while also providing mechanisms to freely instantiate objects.
The overall goal of what I'm trying to achieve is to allow new Connection types to be added to the hierarchy without having to change the Factory class in any way, while also forcing all the subclasses of Connection to not be instantiable directly by client code.
I'm open to the possibility that this is not the best way to achieve what I want, and suggestions of other alternatives are welcome.
EDIT - Will add some code snippets to this when I get home to hopefully make this clearer.
If I understand you correctly, I think you can put some of what you want in the METADECL macro I mention in my answer you link to, i.e. define a static creator function that is a friend, or declare it as a static method. This will make it possible for you to restrict the constructor from public use etc.
Below I try to point out where the METADECL (and METAIMPL) should be. I leave it for you to implement what you need there (I believe in you).
Header file
class MySubClass : public FactoryObjectsRoot {
    METADECL(MySubClass)  // Declare the necessary factory construct
    // ... rest of the class ...
};
Source file
METAIMPL(MySubClass) // Implement and bootstrap factory construct
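The actual macro bodies live in the linked answer; purely as a sketch of the idea (this expansion is an assumption, not the real macro), METADECL can declare a private constructor plus a private static creator, and METAIMPL can register that creator with the factory during static initialization:
#include <functional>
#include <map>
#include <string>
class FactoryObjectsRoot {
public:
    virtual ~FactoryObjectsRoot() {}
};
class Factory {
public:
    using Creator = std::function<FactoryObjectsRoot*()>;
    static std::map<std::string, Creator>& registry() {
        static std::map<std::string, Creator> r;  // one shared creator map
        return r;
    }
    static FactoryObjectsRoot* create(const std::string& name) {
        auto it = registry().find(name);
        return it == registry().end() ? nullptr : it->second();
    }
};
// Hypothetical expansions: the constructor and creator are private, so
// client code cannot instantiate the class directly, only via the Factory.
#define METADECL(CLASS) \
    private: \
        CLASS() {} \
        static FactoryObjectsRoot* create() { return new CLASS(); } \
        static const bool registered_; \
    public:
#define METAIMPL(CLASS) \
    const bool CLASS::registered_ = \
        (Factory::registry()[#CLASS] = &CLASS::create, true);
With something along these lines, each new Connection subclass registers itself just by using the two macros, and client code can only obtain instances through Factory::create.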