I have a scenario in which I am wondering whether I can apply a design pattern. The scenario is this: a base class has 2 derived classes, and in the main function we need to perform the same operations on both derived classes. I need this in C++.
For example:
class Base
{
public:
virtual ~Base() {} // needed because the objects are later deleted through a Base*
virtual bool DoWeHaveToPerformOperation()=0;
virtual void PerformOperation()=0;
};
class Derived1 : public Base
{
public:
bool DoWeHaveToPerformOperation();
void PerformOperation();
};
class Derived2 : public Base
{
public:
bool DoWeHaveToPerformOperation();
void PerformOperation();
};
int main()
{
Derived1 d1;
if(d1.DoWeHaveToPerformOperation())
{
d1.PerformOperation();
}
Derived2 d2;
if(d2.DoWeHaveToPerformOperation())
{
d2.PerformOperation();
}
}
Instead of writing it like the above in main, I am wondering whether I can somehow optimize the code (or whether there is a pattern that could be used). I am thinking of at least moving the common code into a separate function and calling it for both objects, like:
void CheckAndOperate(Base* b)
{
if(b->DoWeHaveToPerformOperation())
{
b->PerformOperation();
}
}
and calling it for both derived objects. But I feel it could still be improved:
int main()
{
Base* b1 = new Derived1();
CheckAndOperate(b1);
Base* b2 = new Derived2();
CheckAndOperate(b2);
delete b1;
delete b2;
}
Any suggestions, please?
The Template Method pattern typically deals with this type of thing.
class Base
{
public:
void PerformOperation()
{
if(DoWeHaveToPerformOperation())
{
DoPerformOperation();
}
}
protected:
virtual bool DoWeHaveToPerformOperation()=0;
virtual void DoPerformOperation() = 0;
};
class Derived1 : public Base
{
bool DoWeHaveToPerformOperation();
void DoPerformOperation();
};
class Derived2 : public Base
{
bool DoWeHaveToPerformOperation();
void DoPerformOperation();
};
int main()
{
Derived1 d1;
d1.PerformOperation();
Derived2 d2;
d2.PerformOperation();
return 0;
}
Yes, putting the common code into a function is the right thing to do.
void CheckAndOperate(Base &b) {
if(b.DoWeHaveToPerformOperation()) {
b.PerformOperation();
}
}
Also your example doesn't really require dynamic allocation:
int main() {
Derived1 d1;
CheckAndOperate(d1);
Derived2 d2;
CheckAndOperate(d2);
}
Compilers may be able to perform inlining and devirtualization, but if you want to encourage it you can implement the shared code in a template:
template<typename CheckableAndOperatable>
void CheckAndOperate(CheckableAndOperatable &x) {
if(x.DoWeHaveToPerformOperation()) {
x.PerformOperation();
}
}
and in C++11 you can go further by making the derived implementation methods final; the compiler knows that if it has a derived type then it can always devirtualize calls to final methods:
class Derived1 : public Base {
public:
bool DoWeHaveToPerformOperation() final;
void PerformOperation() final;
};
Related
Given a base class which has some virtual functions, can anyone think of a way to force a derived class to override exactly one of a set of virtual functions, at compile time? Or an alternative formulation of a class hierarchy that achieves the same thing?
In code:
struct Base
{
// Some imaginary syntax to indicate the following are a "pure override set"
// [
virtual void function1(int) = 0;
virtual void function2(float) = 0;
// ...
// ]
};
struct Derived1 : Base {}; // ERROR not implemented
struct Derived2 : Base { void function1(int) override; }; // OK
struct Derived3 : Base { void function2(float) override; }; // OK
struct Derived4 : Base // ERROR too many implemented
{
void function1(int) override;
void function2(float) override;
};
I'm not sure I really have an actual use case for this, but it occurred to me as I was implementing something that loosely follows this pattern and thought it was an interesting question to ponder, if nothing else.
No, but you can fake it.
Base has non-virtual float and int methods that forward to a single pure virtual method taking a std::variant.
Two helper classes, one for int and one for float, implement that std::variant method, forwarding to either a pure virtual int implementation or a pure virtual float implementation. The helper is also in charge of dealing with the 'wrong type' case.
Derived classes inherit from one helper or the other and implement only the int or the float version.
#include <variant>

struct Base
{
void function1(int x) { vfunction(x); }
void function2(float x) { vfunction(x); }
virtual void vfunction(std::variant<int,float>) = 0;
};
struct Helper1:Base {
void vfunction(std::variant<int,float> v) final {
if (std::holds_alternative<int>(v))
function1_impl( std::get<int>(v) );
}
virtual void function1_impl(int x) = 0;
};
struct Helper2:Base {
void vfunction(std::variant<int,float> v) final {
if (std::holds_alternative<float>(v))
function2_impl( std::get<float>(v) );
}
virtual void function2_impl(float x) = 0;
};
struct Derived1 : Base {}; // ERROR not implemented
struct Derived2 : Helper1 { void function1_impl(int) override; }; // OK
struct Derived3 : Helper2 { void function2_impl(float) override; }; // OK
This uses https://en.wikipedia.org/wiki/Non-virtual_interface_pattern -- the interface contains non-virtual methods, whose details can be overridden to make them behave differently.
If you are afraid people will override vfunction, you can use a private "lock" technique, or just give it a name like private_implementation_detail_do_not_implement and trust your code review process.
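For illustration, here is a minimal sketch of that "private lock" idea as I read it (the Lock type, the friend declaration, and the main function are my assumptions, not code from the answer above): the pure virtual takes an extra parameter whose type is private to Base, so only befriended helpers can spell the signature needed to override it.
#include <variant>

class Base
{
    struct Lock { };          // private "key" type
    friend struct Helper1;    // only the helpers may name Base::Lock
public:
    void function1(int x)   { vfunction(x, Lock{}); }
    void function2(float x) { vfunction(x, Lock{}); }
private:
    virtual void vfunction(std::variant<int, float>, Lock) = 0;
};

struct Helper1 : Base
{
    void vfunction(std::variant<int, float> v, Lock) final
    {
        if (std::holds_alternative<int>(v))
            function1_impl(std::get<int>(v));
        // the float case is silently ignored: Helper1 only handles int
    }
    virtual void function1_impl(int) = 0;
};

// Classes deriving directly from Base cannot implement vfunction themselves,
// because they cannot name Base::Lock; they must go through a helper.
struct Derived2 : Helper1
{
    void function1_impl(int) override { }
};

int main()
{
    Derived2 d;
    d.function1(42);    // dispatches to Derived2::function1_impl
    d.function2(1.5f);  // ignored by Helper1
}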
Or an alternative formulation of a class hierarchy that achieves the same thing?
One option is to have an intermediate base class that implements one function.
struct Base
{
virtual ~Base() {};
virtual void function(int) = 0;
virtual void function(float) = 0;
};
template <typename T>
struct TBase : Base
{
virtual void function(T) override {}
};
struct Derived1 : Base {};
struct Derived2 : TBase<float> { void function(int) override {} };
struct Derived3 : TBase<int> { void function(float) override {} };
int main()
{
Derived1 d1; // ERROR. Virtual functions are not implemented
Derived2 d2; // OK.
Derived3 d3; // OK.
}
Note that the functions are named function in this approach, not function1 and function2.
Your classes will remain abstract if you don't override all the pure virtual methods; you have to override all of them if you want to instantiate the object.
My code structure is like below, where multiple classes implement Interface. In the Example class I store a pointer to the Interface and new it in the constructor appropriately (depending on constructor parameters not shown here). I'm looking for ways to avoid using new in this scenario but haven't found a solution yet. What's the best practice for something like this?
class Interface
{
public:
virtual ~Interface() {} // needed: Example deletes m_bar through an Interface*
virtual void Foo() = 0;
};
class A : public Interface
{
void Foo() { ... }
};
class B : public Interface
{
void Foo() { ... }
};
class Example
{
private:
Interface* m_bar;
public:
Example()
{
m_bar = new A(); // deleted in destructor
}
};
There are two ways this is typically done, each with its own merits.
If A is truly known at compile time, then a typical way to handle this is to simply use a template type:
template <typename T>
class TemplateExample
{
T m_bar;
public:
TemplateExample() : m_bar() {};
};
This has some downsides: TemplateExample<A> becomes unrelated to TemplateExample<B>, the error messages when T doesn't follow the correct interface are pretty obtuse, etc. The upside is that this uses duck typing rather than interface typing, and m_bar is a concrete instance.
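To make the trade-off concrete, here is a small self-contained sketch along those lines (the class bodies are placeholders, not the asker's real code):
#include <iostream>

class A { public: void Foo() { std::cout << "A\n"; } };
class B { public: void Foo() { std::cout << "B\n"; } };

template <typename T>
class TemplateExample
{
    T m_bar;
public:
    TemplateExample() : m_bar() {}
    void CallFoo() { m_bar.Foo(); }  // duck typing: T only needs some Foo()
};

int main()
{
    TemplateExample<A> ea;  // holds an A by value, no heap allocation
    TemplateExample<B> eb;
    ea.CallFoo();
    eb.CallFoo();
    // TemplateExample<A>* p = &eb;  // would not compile: unrelated types
}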
The other (arguably more common) way is to do the following:
#include <memory>

class UniquePtrExample
{
std::unique_ptr<Interface> m_bar;
public:
UniquePtrExample() : m_bar(new A()){}
};
This has the benefit of being runtime-configurable if you follow a clonable pattern:
#include <iostream>

class Interface
{
public:
virtual ~Interface() = default;
virtual void Foo() = 0;
virtual Interface* clone() const = 0;
};
template <typename T>
class CloneHelper : public Interface
{
public:
virtual Interface* clone() const { return new T(static_cast<const T&>(*this));}
};
class A : public CloneHelper<A>
{
virtual void Foo() { std::cout << 'A' << std::endl; }
};
class B : public CloneHelper<B>
{
virtual void Foo() { std::cout << 'B' << std::endl; }
};
class UniquePtrExample
{
std::unique_ptr<Interface> m_bar;
public:
UniquePtrExample() : m_bar(new A()){}
UniquePtrExample(const Interface& i) : m_bar(i.clone()) {}
};
Note you can further extend the above to have a move variant of the clone function.
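A possible shape for that move variant, as a sketch only (the move_clone name and signature are my assumptions, not part of the answer above):
#include <iostream>
#include <memory>

class Interface
{
public:
    virtual ~Interface() = default;
    virtual void Foo() = 0;
    virtual Interface* clone() const = 0;
    virtual Interface* move_clone() = 0;  // may steal the state of *this
};

template <typename T>
class CloneHelper : public Interface
{
public:
    Interface* clone() const override { return new T(static_cast<const T&>(*this)); }
    Interface* move_clone() override { return new T(static_cast<T&&>(*this)); }
};

class A : public CloneHelper<A>
{
public:
    void Foo() override { std::cout << "A\n"; }
};

int main()
{
    A a;
    std::unique_ptr<Interface> p(a.move_clone());
    p->Foo();
}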
My problem is the following:
int main()
{
Base* derivedobject = new Derived1();
derivedobject->GetProperties()-> ???
return 0;
}
//********************
// BaseClass.h
//********************
struct PropertyStruct
{
int x;
};
class Base
{
public:
Base();
~Base();
virtual PropertyStruct GetProperties() = 0;
private:
};
//********************
// DerivedClass1.h
//********************
struct PropertyStruct
{
int y;
};
class Derived1 : public Base
{
public:
Derived1();
~Derived1();
PropertyStruct GetProperties() { return myOwnDifferentProperties; };
private:
};
//********************
// DerivedClass2.h
//********************
struct PropertyStruct
{
float z;
};
class Derived2 : public Base
{
public:
Derived2();
~Derived2();
PropertyStruct GetProperties() { return myOwnDifferentProperties; };
private:
};
If I do it like that, I'm going to get an error saying that PropertyStruct is a redefinition. If I use a namespace or rename the struct inside the derived class, I then get an error telling me that the return type is not the same as the one defined by Base.
If I declare the virtual function's return type as a pointer it compiles, but the next problem is that when accessing GetProperties from the main function (in this example), the base object does not know what variables are inside the struct of the derived class.
Is there any way I can achieve this, i.e. get the different properties of each derived object through the base class object?
As others have mentioned, there are ways to achieve your goals here but ultimately you will find yourself writing code like the following:
Base * object = ...;
if object is Derived1 then
get Property1 and do something with it
else if object is Derived2 then
get Property2 and do something with it
This is an anti-pattern in object-oriented programming. You already have a class hierarchy to represent the differences between the various derived types. Rather than extracting the data from your objects and processing it externally, consider adding a virtual function to the base class and letting the derived classes do the processing.
class Base
{
public:
virtual void DoSomething() = 0;
};
class Derived1 : public Base
{
public:
void DoSomething()
{
// use myOwnDifferentProperties as necessary
}
private:
PropertyStruct myOwnDifferentProperties;
};
If it's not appropriate to put the required processing in the derived classes (i.e. if it would introduce unwanted responsibilities) then you may want to consider the Visitor Pattern as a way to extend the functionality of your hierarchy.
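For reference, here is a rough self-contained sketch of the Visitor idea (all names and the example properties below are illustrative, not taken from your code):
#include <iostream>

// The processing lives in visitor classes instead of in the property-owning hierarchy.
class Derived1;
class Derived2;

class Visitor
{
public:
    virtual ~Visitor() = default;
    virtual void Visit(Derived1&) = 0;
    virtual void Visit(Derived2&) = 0;
};

class Base
{
public:
    virtual ~Base() = default;
    virtual void Accept(Visitor& v) = 0;
};

class Derived1 : public Base
{
public:
    int x = 1; // stands in for this class's own properties
    void Accept(Visitor& v) override { v.Visit(*this); }
};

class Derived2 : public Base
{
public:
    float z = 2.5f;
    void Accept(Visitor& v) override { v.Visit(*this); }
};

class PrintVisitor : public Visitor
{
public:
    void Visit(Derived1& d) override { std::cout << d.x << '\n'; }
    void Visit(Derived2& d) override { std::cout << d.z << '\n'; }
};

int main()
{
    Derived1 d1;
    Derived2 d2;
    PrintVisitor printer;
    Base* objects[] = { &d1, &d2 };
    for (Base* b : objects)
        b->Accept(printer); // double dispatch picks the matching Visit overload
}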
Since template functions cannot be virtual, you can use a hierarchy for your properties; it is the only way. To get the elements of the derived properties, you should use virtual getter functions.
#include <boost/any.hpp>
#include <memory>

struct BaseProp
{
virtual ~BaseProp() { }
virtual boost::any getProperty() const = 0;
};
struct PropertyStruct : BaseProp
{
boost::any getProperty() const { return x; }
private:
int x;
};
struct PropertyStruct2 : BaseProp
{
boost::any getProperty() const { return y; }
private:
float y;
};
class Base
{
public:
virtual std::shared_ptr<BaseProp> GetProperties() const = 0;
virtual ~Base() { }
};
class Derived : public Base
{
public:
std::shared_ptr<BaseProp> GetProperties() const { return std::make_shared<PropertyStruct>(); }
};
class Derived2 : public Base
{
public:
std::shared_ptr<BaseProp> GetProperties() const { return std::make_shared<PropertyStruct2>(); }
};
You can use template class to do that:
struct PropertyStruct1 {
float f;
};
struct PropertyStruct2 {
int i;
};
template<class T>
class A{
public:
T GetProperties() {return mProps;}
private:
T mProps;
};
int main (int argc, const char * argv[]) {
A<PropertyStruct1> a1;
float f = a1.GetProperties().f;
A<PropertyStruct2> a2;
int i = a2.GetProperties().i;
return 0;
}
I am designing a framework in C++ which is supposed to provide basic functionality and act as an interface for the derived systems.
#include <stdio.h>
class Module
{
public:
virtual void print()
{
printf("Inside print of Module\n");
}
};
class ModuleAlpha : public Module
{
public:
void print()
{
printf("Inside print of ModuleAlpha\n");
}
void module_alpha_function() /* local function of this class */
{
printf("Inside module_alpha_function\n");
}
};
class System
{
public:
virtual void create_module(){}
protected:
class Module * module_obj;
};
class SystemAlpha: public System
{
public:
void create_module()
{
module_obj = new ModuleAlpha();
module_obj->print(); // virtual function, so its fine.
/* to call module_alpha_function, dynamic_cast is required,
* Is this a good practice or there is some better way to design such a system */
ModuleAlpha * module_alpha_obj = dynamic_cast<ModuleAlpha*>(module_obj);
module_alpha_obj->module_alpha_function();
}
};
int main()
{
System * system_obj = new SystemAlpha();
system_obj->create_module();
}
I have edited the code to be more logical, and it compiles straight away. The question is: is there a better way to design such a system, or is dynamic_cast the only solution? Also, if there are more derived modules, then some intelligence is required in the base Module class for the type-casting.
If Derived is the only concrete subclass of Base, you could use static_cast instead.
Personally, I define a function like MyCast for every specialized class. I define four overloaded variants so that I can down-cast const and non-const pointers and references. For example:
inline Derived * MyCast(Base * x) { return static_cast<Derived *> (x); }
inline Derived const * MyCast(Base const * x) { return static_cast<Derived const *>(x); }
inline Derived & MyCast(Base & x) { return static_cast<Derived &> (x); }
inline Derived const & MyCast(Base const & x) { return static_cast<Derived const &>(x); }
And likewise for Derived2 and Base2.
The big advantage of having all four is that you will not change constness by accident, and you can use the same construct regardless of whether you have a pointer or a reference.
Of course, you could replace static_cast with a macro, and use dynamic_cast in debug mode and static_cast in release mode.
Also, the code above can easily be wrapped into a macro, making it easy to batch-define the functions.
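For example, such a macro could look roughly like this (the CHECKED_CAST and DEFINE_MY_CAST names and the NDEBUG switch are my assumptions, not code from the answer):
#ifndef NDEBUG
  #define CHECKED_CAST dynamic_cast   // checked in debug builds
#else
  #define CHECKED_CAST static_cast    // cheap in release builds
#endif

// Batch-defines the four MyCast overloads for a base/derived pair.
#define DEFINE_MY_CAST(B, D)                                                      \
  inline D *       MyCast(B * x)       { return CHECKED_CAST<D *>(x); }           \
  inline D const * MyCast(B const * x) { return CHECKED_CAST<D const *>(x); }     \
  inline D &       MyCast(B & x)       { return CHECKED_CAST<D &>(x); }           \
  inline D const & MyCast(B const & x) { return CHECKED_CAST<D const &>(x); }

// Usage, matching the answer above:
//   DEFINE_MY_CAST(Base, Derived)
//   DEFINE_MY_CAST(Base2, Derived2)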
Using this pattern, you could then implement your code as:
class Derived : public Base
{
public:
virtual void func2()
{
base2_obj = new Derived2();
}
void DerivedFunc()
{
MyCast(base2_obj)->Derived2Func();
}
};
The design gets much cleaner if Base does not contain the base2_obj pointer, but rather gets a reference via a virtual method. Derived should contain a Derived2 object, like:
class Base2; // the separate Base2 hierarchy from the question; Derived2 derives from it

class Base
{
public:
virtual void func1();
private:
virtual Base2& get_base2() = 0;
};
class Derived : public Base
{
Derived2 derived2;
public:
Base2& get_base2() { return derived2; }
void DerivedFunc()
{
derived2.Derived2Func();
}
};
If you are worried about performance, pass the reference in the constructor of Base.
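A sketch of that constructor-injection variant, as I read the suggestion (names borrowed from the question, details assumed):
class Base2
{
public:
    virtual ~Base2() {}
    virtual void Derived2Func() {}
};

class Derived2 : public Base2
{
public:
    void Derived2Func() override {}
};

class Base
{
    Base2& base2;
protected:
    // Base must not call anything on base2 in its constructor: the referenced
    // member of the derived class is not constructed yet at this point.
    explicit Base(Base2& b) : base2(b) {}
public:
    virtual ~Base() {}
    void DerivedFunc() { base2.Derived2Func(); }  // only one virtual dispatch
};

class Derived : public Base
{
    Derived2 derived2;
public:
    Derived() : Base(derived2) {}
};

int main()
{
    Derived d;
    d.DerivedFunc();
}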
I took your code with its many compile errors and tried to simplify it. Is this what you are trying to achieve? It will compile.
class Base2 {
public:
virtual void Derived2Func(){
}
};
Base2* fnToInstantiateABase2();
class Base {
public:
Base() : base2_obj(fnToInstantiateABase2()) {
}
virtual void DerivedFunc() {
}
protected:
Base2* base2_obj;
};
class Derived : public Base {
public:
void DerivedFunc() {
base2_obj->Derived2Func(); // virtual call through a Base2*, dispatches to Derived2::Derived2Func
}
};
class Derived2 : public Base2 {
public:
void Derived2Func() {
}
};
void test() {
Base * base_obj = new Derived();
base_obj->DerivedFunc();
}
Base2* fnToInstantiateABase2() {
return new Derived2();
}
Scenario: I have the following defined classes.
#include <list>

class Baseclass { };
class DerivedTypeA : public Baseclass { };
class DerivedTypeB : public Baseclass { };
// ... and so on ...
class Container
{
std::list<Baseclass*> stuff;
std::list<DerivedTypeA*> specific_stuff;
// ... initializing constructors and so on ...
public:
void add(Baseclass * b)
{
stuff.push_back(b);
}
void add(DerivedTypeA * a)
{
stuff.push_back(a);
specific_stuff.push_back(a);
}
};
class ContainerOperator
{
Container c;
// ... initializing constructors and so on ...
public:
void operateOnStuff(Baseclass * b)
{
// This will always use "void add(Baseclass * b)" no matter what object b really is.
c.add(b);
}
};
// ...
containerOperator.operateOnStuff(new DerivedTypeA());
So, what I want to do is to handle a certain derived class in some special way in Container.
Problem: void add(DerivedTypeA * a) is never called. I'm obviously doing something wrong. What is the correct way of doing what I am trying to achieve here?
Overload resolution in C++ happens at compile time, not run time. The "usual" way to solve problems like this is to use the Visitor pattern.
You can reduce the amount of boilerplate copy-paste by implementing Visitor with CRTP.
If you use CRTP for Base::accept, you don't need to define it again in the derived classes.
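As a sketch of that CRTP variant (the AcceptHelper name is mine, not part of the program below):
#include <iostream>

struct Base;
struct Derived;

struct Operation {
    void add(Base *)    { std::cout << "Base\n"; }
    void add(Derived *) { std::cout << "Derived\n"; }
};

struct Base {
    virtual ~Base() {}
    virtual void accept(Operation &o) { o.add(this); }
};

// CRTP helper: a class inherits AcceptHelper<Itself> and gets the right
// accept() generated for it, instead of writing it by hand.
template <typename D>
struct AcceptHelper : Base {
    void accept(Operation &o) override {
        o.add(static_cast<D *>(this));  // static type D picks the right overload
    }
};

struct Derived : AcceptHelper<Derived> { };

int main() {
    Operation o;
    Derived d;
    Base *p = &d;
    p->accept(o);  // prints "Derived" without Derived defining accept() itself
}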
Here is a similar program to yours, but a little simpler:
#include <iostream>
struct Base; struct Derived;
struct Operation {
void add(Base *b) {
std::cout << "Base\n";
}
void add(Derived *b) {
std::cout << "Derived\n";
}
void visit(Base *b); // need to define this after Base class
};
struct Base {
virtual ~Base() {}
virtual void accept(Operation &o)
{
o.add(this);
}
};
void Operation::visit(Base *b) {
b->accept(*this);
}
struct Derived : public Base {
void accept(Operation &o)
{
o.add(this);
}
};
int main() {
Operation o;
Base b;
Derived d;
Base *ptrb = &b;
Base *ptrd = &d;
o.add(ptrb); // These two print "Base"
o.add(ptrd);
o.visit(ptrb); // "Base"
o.visit(ptrd); // "Derived"
}
You can use RTTI to determine whether the provided object is of the derived type, and if so, call the second add() function.
void add(Baseclass * b)
{
stuff.add(b);
DerivedTypeA * a = dynamic_cast<DerivedTypeA *>(b);
if ( a != 0 )
specific_stuff.add(a);
}
Unlike the Visitor pattern, this solution violates the Open-Closed Principle, but it's a lot simpler and easier to understand when the number of derived classes does not change or changes slowly over time.