Pros and cons for pure virtual C++ coding [closed]

I'm coming from a Java background. I had a conversation today with one of our C++ developers about converting existing code to use pure virtual methods (interfaces) and dependency injection throughout the code for better decoupling.
He tried to convince me that we should use them only when there is "logic" in the code, and that when the code is just collecting information from the PC they are not necessary.
Long story short, I'm looking for good reasons to refactor the code to use IoC and pure virtual methods instead of leaving the working, coupled code as is.

There are many reasons why you should or should not refactor existing code. Each situation is unique. If you are talking about a project with a large codebase and you want to refactor its core, which works well and is properly tested, then in 99% of cases I'd recommend you don't do that. You risk adding bugs to tested code without making genuinely needed improvements.
If the code is just collecting some information, you can still extract an interface so that the class which uses this object can be tested. If you don't use unit tests for some reason, then leave it as it is.
Overall your opponent is probably right: make interfaces when you really need them, and write clean code from which dependencies are easy to extract.

Why use pure-virtual methods?
I try to write base class functions to provide default behaviour for all the interface methods. I am surprised by how often these defaults simply generate some error handling (using the locally accepted mechanism).
For one example, I worked on code that received commands to set LED states. During development, the 'other software' sometimes would (mistakenly) request a colour explicitly disallowed by the requirements ('red' disallowed on status LED 5). My default functions generated the appropriate error message and identified which 'other software' sent the erroneous request.
There are also cases that in some way have no appropriate default behaviour. For these situations, I create pure virtual methods. Declaring the method pure virtual documents the idea that the base class will not provide the functionality, and therefore requires that all derived classes provide some code to support this concept.
"A pure virtual function or pure virtual method is a virtual function that is required to be implemented by a derived class that is not abstract" - Wikipedia
good reasons why to refactor the code?
Readability.

Is it a good practice for a class to have only public (no private or protected) methods and variables? [closed]

Recently I have been doing a project for a school course. I always declare all variables and methods inside every class as public because it lets me access those variables more easily while developing, and it means less coding for get() and set() functions. However, I think this is the wrong way of doing OOP. Any ideas?
Getters/Setters are useful sometimes, but aggregates are also useful.
If you have an aggregate, you should be willing to accept any data that matches the types of your data fields. If you want to maintain invariants (width>height) and assume it elsewhere in your code, you'll want accessors.
But code that doesn't assume invariants is often easier to work with and can even be less bug prone; manually maintaining invariants can get extremely hard, as messing up or compromising even once makes the invariant false.
Honestly, the biggest advantage of getters/setters is mocking (making test harnesses) and putting a breakpoint at access/modification. The costs in terms of code bulk and the like are real, and having more of the code you write not be boilerplate has value.
So a width/height field on a non-"live" rendered rect? Default to public data. A buffer used to store the data in a hand written optional<T>? Private data, and accessors.
Accessors should be used to reduce your own (or the code reader's) cognitive load. Write code with a purpose, and don't write code that doesn't have a purpose.
Now you'll still want to know how to write getters/setters, so practicing on toy "rect width/height" cases has value. And the LSP lesson that a read-only square is a kind of read-only rect, while a read-write square is not a kind of read-write rectangle, might be best learned via experience (or maybe not, as so many people experience it yet never learn the lesson).
This pertains to the principle of encapsulation where exposing internals means, from the perspective of the class in question ("you"):
You have no control over what is written to these fields
You are not notified if these fields are accessed
You are not notified if these fields are changed
You can never trust that the values are valid
You cannot change the types of these values without impacting any code that uses them
When you encapsulate you control access to these properties meaning:
You can prevent alterations
You can validate before writing, and reject invalid values
You can change the internal representation without consequence, provided the get/set functions still behave the same way
You can clean up the values before they are written
You have confidence that at all times the values are valid since you are the gatekeeper
You can layer additional behaviour on before or after changes have been made, such as the observer pattern
This is not to say you must use encapsulation all the time. There are many cases when you want "dumb data" that doesn't do anything fancy, it's just a container for passing things around.
In C++ this often leads to the use of struct as "dumb data" since all fields are public by default, and class as "smart data" as the fields are private by default, even though apart from the access defaults these two things are largely interchangeable.
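A short sketch contrasting the two, with made-up names (Point as dumb data, Temperature as smart data whose setter validates before writing):

    #include <stdexcept>

    // "Dumb data": no invariant to protect, so public fields are fine.
    struct Point {
        double x = 0.0;
        double y = 0.0;
    };

    // "Smart data": the invariant (celsius >= -273.15) is guarded by the setter.
    class Temperature {
    public:
        double celsius() const { return celsius_; }

        void setCelsius(double value) {
            if (value < -273.15)              // validate before writing
                throw std::invalid_argument("below absolute zero");
            celsius_ = value;                 // value is known valid here
        }

    private:
        double celsius_ = -273.15;
    };

    int main() {
        Point p;
        p.x = 3.0;               // direct access, nothing to enforce

        Temperature t;
        t.setCelsius(21.5);      // accepted
        // t.setCelsius(-500.0); // would throw: invariant preserved
        return 0;
    }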

Are interfaces a good practice in C++? [closed]

Coming from a Java/Python world with little or no C++ experience, I am used to working with interfaces to separate the contract that a class has from its implementation, for the sake of the Liskov substitution principle and dependency injection.
I am not going to go over all the benefits of interfaces in Java, or why they were introduced (lack of multiple inheritance) and are not needed in C++ (see here for example).
I also found out how to get the equivalent of a Java interface in C++.
My question is more about whether or not this is a good practice in a C++ environment.
As I understand it, there cannot be the equivalent of an interface without pure virtual methods. This means that bringing interfaces into C++ will introduce some overhead in the code (because virtual methods introduce an overhead).
Therefore, are interfaces based on pure virtual method a good thing? Is there maybe some other way to achieve the Liskov Substitution principle and dependency injection that I don't know of? using templates maybe?
For example, Google Test makes it easy to mock virtual methods, but also proposes a way of mocking non-virtual methods.
I am trying to figure out if my coding habits are still relevant in my new C++ environment, or if I should adapt and change my paradigms.
[EDIT based on answers and comments]
I got part of the answer I was looking for (i.e. "yes/no with arguments"), and I guess I should clarify a bit more what I am still trying to figure out:
Are there alternatives to using an interface-like design to do dependency injection?
Reversing the question: supposing one goes for an interface-based design by default, when (apart from when speed is absolutely crucial) would one NOT want an interface based on pure virtual methods?
Notes:
I guess I'm trying to figure out if I'm too narrow minded thinking in terms of interfaces (hence my edit looking for alternatives).
I work in a C++ 11 environment
I would say interfaces are still a fine practice in C++. The overhead that virtual methods introduce is minimal, and as you will hear time and time again, premature optimization is a big mistake. Abstract base classes are a well-known, well-understood concept in C++ and favoring readable, common concepts over convoluted template metaprogramming can help you immensely in the long run.
That being said, I would try to avoid multiple inheritance. There are certain tricky issues that come with it, which is why Java explicitly forbids it for regular inheritance. A simple Google search will give you more explanation.
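For reference, the abstract-base-class idiom looks like the sketch below (Shape and Circle are made-up names). Note the virtual destructor, which is needed for safe deletion through a base-class pointer.

    #include <iostream>
    #include <memory>

    // The C++ equivalent of a Java interface: only pure virtual methods
    // plus a virtual destructor.
    class Shape {
    public:
        virtual ~Shape() {}
        virtual double area() const = 0;
    };

    class Circle : public Shape {
    public:
        explicit Circle(double r) : r_(r) {}
        double area() const override { return 3.141592653589793 * r_ * r_; }
    private:
        double r_;
    };

    int main() {
        std::unique_ptr<Shape> s(new Circle(2.0));  // owned via the interface
        std::cout << s->area() << "\n";             // dynamic dispatch
        return 0;
    }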
If you have multiple unrelated classes and you'd like to call a method of the same name (let's say foo()) on each of them, then instead of an interface you can make a templatized function to do this.
class A {
public:
    void foo() const {
        // do something A-specific
    }
};

class B {
public:
    void foo() const {
        // do something B-specific
    }
};

// Works with any type T that provides a callable foo() const.
template <typename T>
void callFoo(const T& object) {
    object.foo();
}

int main() {
    A a;
    B b;
    callFoo(a);
    callFoo(b);
    return 0;
}
Even though there is no explicit "contract" in callFoo() stating that the type must support .foo(), any object that you pass to it must support it or there will be a compile error. This is a commonly used way to duck-type objects at compile time and an alternative to interfaces for certain scenarios.
At the end of the day, as you learn more C++, you will use your own judgement to decide how you will accomplish the polymorphic behavior you want. There is no single right answer how to do it, just as there is no wrong answer either. Both abstract base classes and template duck typing are good tools that serve slightly different purposes.
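To make the dependency-injection side of the question concrete, here is a minimal sketch (Service, FileLogger, and FakeLogger are hypothetical names) of injecting a dependency as a template parameter instead of through a pure virtual interface; the test double is substituted at compile time, with no virtual dispatch.

    #include <iostream>
    #include <string>

    class FileLogger {
    public:
        void log(const std::string& msg) { std::cout << "file: " << msg << "\n"; }
    };

    // Test double: records the last message, no virtual functions needed.
    class FakeLogger {
    public:
        void log(const std::string& msg) { last = msg; }
        std::string last;
    };

    // The dependency is a template parameter, resolved at compile time.
    template <typename Logger>
    class Service {
    public:
        explicit Service(Logger& logger) : logger_(logger) {}
        void run() { logger_.log("running"); }
    private:
        Logger& logger_;
    };

    int main() {
        FileLogger prod;
        Service<FileLogger> s1(prod);   // production wiring
        s1.run();

        FakeLogger fake;
        Service<FakeLogger> s2(fake);   // unit-test wiring
        s2.run();
        return 0;
    }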

Pattern to Implement an OO interface to a C program written in an imperative style [closed]

Imagine a legacy C program written in an imperative style: top-down, no objects, and peppered with goto statements. This program implements a dependent in an observer pattern; i.e., the subscriber in a pub-sub architecture.
The task is to implement an OO interface to said program.
What design pattern fits best here? At first, I jumped to an Adapter pattern:
Adapter
Convert the interface of a class into another interface clients
expect. Adapter lets classes work together that couldn't otherwise
because of incompatible interfaces [Go4, 8]
The problem here is that an adapter converts the interface of another class; in this case, there is no class to convert, because C doesn't have classes.
Next I thought proxy:
Proxy
Provide a surrogate or placeholder for another object to control
access to it. [Go4, 9]
This fits--sort of--but doesn't seem to capture the essence of what I'm trying to do.
Facade
Provide a unified interface to a set of interfaces in a subsystem.
Facade defines a higher-level interface that makes the subsystem
easier to use. [Go4, 9]
Perhaps... This may be the best option, but I'm not sure.
Which design pattern is most applicable in this scenario? Thx, Keith :^)
Patterns make it easier for you to explain what you are doing to somebody else, who is already familiar with the same patterns; they do not automatically make your job easier.
In your situation there does not appear to be a need for any pattern at all: you are writing C++ classes to define an interface for a pub-sub system, and calling C functions inside the member functions implementing that interface. This is basic encapsulation: users of your C++ classes have no idea what's going on inside the implementation, while the implementation hides all the spaghetti/gotos from them.
You could potentially describe it as an application of the adapter pattern, even though all you have is a collection of C functions, not classes. At a very general level, non-OO functions could be thought of as class functions of a single top-level class; your "adapter" puts a set of classes on top of them as a C++ interface.
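A minimal sketch of such a wrapper, where legacy_subscribe and legacy_poll are invented stand-ins for the legacy C API (given dummy bodies here so the example is self-contained):

    #include <cstring>
    #include <iostream>
    #include <string>

    // Stand-ins for the hypothetical legacy C API (imperative, no classes).
    int legacy_subscribe(const char* topic) {
        std::cout << "subscribed to " << topic << "\n";
        return 1;                                    // opaque handle
    }
    int legacy_poll(int /*handle*/, char* buffer, int size) {
        const char* msg = "event";
        std::strncpy(buffer, msg, size);
        return static_cast<int>(std::strlen(msg));
    }

    // C++ wrapper: member functions forward to the C functions, hiding the
    // imperative details behind an object-oriented interface.
    class Subscriber {
    public:
        explicit Subscriber(const std::string& topic)
            : handle_(legacy_subscribe(topic.c_str())) {}

        std::string poll() {
            char buffer[256];
            int n = legacy_poll(handle_, buffer, sizeof(buffer));
            return n > 0 ? std::string(buffer, n) : std::string();
        }

    private:
        int handle_;
    };

    int main() {
        Subscriber sub("temperature");
        std::cout << sub.poll() << "\n";
        return 0;
    }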

Should I create an interface for every class to make my code testable (unit testing) [closed]

I'm trying to learn how to create valuable unit tests.
In each tutorial I saw people create interfaces for every dependency to create a mock.
Does that mean I should always create an interface for every class I have in my project? I don't know whether it is a good or a bad idea, but every time I see a rule with "always" I get suspicious.
Should I always create an interface for every class I have in my project?
No.
There's no one single rule you can follow or thing you can do which would make all of your code automatically unit-testable. What you can do is write code with abstractable dependencies. If you want to test whether or not the code you've written is easily unit testable, try to write a unit test for it. If a dependency gets in your way, you have a coupled dependency. Abstract it.
How you abstract it is up to you. You have a variety of tools at your disposal:
Interfaces
Abstract classes
Concrete classes with lots of virtual members
Mockable pass-through wrapper classes (very useful for non-unit-testable 3rd party dependencies)
etc.
You also have a variety of ways to get the dependency into the code that uses it:
Constructor injection
Property injection
Passing it to the method as an argument
Factories
In some circumstances, service locators (useful when introducing dependency abstraction to a legacy codebase, for example)
etc.
How you structure your code really depends on what you're building and what makes sense for those objects. For a service class which integrates with an external system, an interface makes a lot of sense. For a domain model which has a variety of potential implementations that share functionality, an abstract class may make a lot of sense. There are many possibilities for many potential uses.
The real litmus test of whether or not your code is unit-testable isn't "do I use interfaces?", it's "can I write a meaningful unit test for this?" If the functionality is isolatable without relying on dependencies (either by not having them or by allowing them to be mocked for testing), then it seems pretty unit-testable to me.
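As a minimal sketch of constructor injection against an abstraction, with invented names (Clock, Greeter, FakeClock): the dependency is hidden behind an abstract class, so a unit test can substitute a deterministic fake.

    #include <cassert>
    #include <string>

    // Abstraction over the coupled dependency.
    class Clock {
    public:
        virtual ~Clock() {}
        virtual int hour() const = 0;
    };

    // A production implementation would read the real time; a fixed value
    // stands in here to keep the sketch self-contained.
    class SystemClock : public Clock {
    public:
        int hour() const override { return 9; }
    };

    // Test double: returns whatever the test needs.
    class FakeClock : public Clock {
    public:
        explicit FakeClock(int h) : h_(h) {}
        int hour() const override { return h_; }
    private:
        int h_;
    };

    // The code under test receives its dependency via constructor injection.
    class Greeter {
    public:
        explicit Greeter(const Clock& clock) : clock_(clock) {}
        std::string greet() const {
            return clock_.hour() < 12 ? "good morning" : "good afternoon";
        }
    private:
        const Clock& clock_;
    };

    int main() {
        FakeClock evening(18);
        Greeter greeter(evening);
        assert(greeter.greet() == "good afternoon");  // deterministic unit test
        return 0;
    }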

Does the inheritance of opencv functions make my program better? [closed]

I have a program that uses OpenCV functions such as calibrateCamera. Now I am working on the final version of my code, and I was wondering: if, instead of calling OpenCV's functions, I inherited them in my classes, would that make my program 'better'?
As pointed out in the comments, your question is very general and somewhat confused. However, there is a general answer to the question "is it better to inherit?". Of course, being a general answer, it is oversimplified and might not apply to your case.
Item 58 in "C++ Coding Standards" (Sutter, Alexandrescu), is titled
Prefer composition to inheritance
You can find similar advice in several other books too.
The reason they give for making their case is:
Avoid inheritance taxes: Inheritance is the second-tightest coupling relationship in
C++, second only to friendship. Tight coupling is undesirable and should be
avoided where possible. Therefore, prefer composition to inheritance unless you
know that the latter truly benefits your design.
So, the general advice is to try to avoid inheritance as much as possible, and to always be conservative in using it unless you have a very strong case for it. For instance, you have a case for using public inheritance if you are modelling the so-called "is-a" relationship. On the other hand, you have a case for using non-public inheritance if you are in one of the following situations:
If you need to override a virtual function
If you need access to a protected member
or in other less frequent cases.
Whatever your final choice is, be sure to only inherit from classes that have been designed in order to be base classes. For instance, be sure that the base class destructor is virtual. As the cited book poses it:
Using a standalone class as a base is a serious design error and
should be avoided. To add behavior, prefer to add non-member
functions instead of member functions (see Item 44). To add state,
prefer composition instead of inheritance (see Item 34). Avoid
inheriting from concrete base classes
OpenCV is a library with a well-defined API. If you have an existing application that uses functions bundled within this library, and you don't have a valid reason for adding functionality to them, there is no advantage to be gained by wrapping them.
If you want to change the interface because you think it will make your code cleaner, I would worry about maintenance in case the API changes in the future.
While changing the design of your applications, your decisions should be based on specific reasons. "I want to make my program better" is too abstract a reason.
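To illustrate the composition advice with a sketch (Calibrator is a made-up wrapper, and libraryCalibrate is a dummy stand-in for a library routine such as cv::calibrateCamera): the class uses the library inside a member function rather than inheriting from anything, so callers depend only on the wrapper.

    #include <iostream>
    #include <vector>

    // Dummy stand-in for a library routine; a real program would call the
    // OpenCV function here instead.
    double libraryCalibrate(const std::vector<double>& samples) {
        return samples.empty() ? 0.0 : 1.0 / samples.size();   // fake result
    }

    // Composition: the class *uses* the library inside a member function
    // instead of inheriting. Callers depend only on Calibrator, so the
    // library call can change without affecting them.
    class Calibrator {
    public:
        double calibrate(const std::vector<double>& samples) const {
            // Application-specific pre/post-processing could go here.
            return libraryCalibrate(samples);
        }
    };

    int main() {
        Calibrator c;
        std::vector<double> samples{1.0, 2.0, 3.0};
        std::cout << "calibration result: " << c.calibrate(samples) << "\n";
        return 0;
    }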