I have an application that I'm porting from C++ to Java. There is a section of C++ code that I find really strange.
typedef std::string ArgName;
typedef std::map< ArgName, AnyData > ArgumentMap;
class Arguments : public ArgumentMap
{
public:
// Very important note: When read finds a numeric/set argument,
// it sets anyData.kind to Int. But STILL, it fills anyData.dString,
// just in case. So if the ArgumentMap was built by Arguments::read,
// the dString fields are all filled.
bool read( int argc, char **argv );
// remains is filled with the arguments not starting with '-'.
bool read( int argc, char **argv, std::vector<const char*>& remains );
// Leaves the map unchanged if it fails, erases the argument if it succeeds.
bool getNumericParam( const ArgName& name, int& num );
// sw is true if the switch is present. The function
// returns false if the argument value is not empty.
bool getSwitch( const ArgName& name, bool& sw );
bool getSwitchConst( const ArgName& name, bool& sw ) const;
// Returns true if the switch is present. Throws an error message
// if the argument value is not empty.
bool getSwitchCompact( const ArgName& name );
void checkEmptyArgs() const;
};
It looks like in the original C++ the author is making their Arguments class inherit from a Map. This makes no sense to me. Map is an interface, which means you can't inherit from it, you can only implement it. Is this something that can be done in C++ that you can't do in Java?
Also, I don't understand why you would use a typedef. I read the definition from Wikipedia
typedef is a keyword in the C and C++ programming languages. The purpose of typedef is to
form complex types from more-basic machine types[1] and assign simpler names to such
combinations. They are most often used when a standard declaration is cumbersome,
potentially confusing, or likely to vary from one implementation to another
But I don't understand why the author would do that here. Are they trying to say that they want to inherit from the class AnyData and that ArgumentMap should have a Map as one of its fields?
This makes no sense to me. Map is an interface
In Java, it is. In C++, it's not even a class, it is a class template. C++ does not have a concept similar to Java's interfaces, although you can implement something similar with virtual inheritance.
As far as the collection classes are concerned, C++ solved with templates and generic programming what Java solved with interfaces and inheritance.
Instances of C++ map template are fully functioning classes that work similarly to Java's TreeMap. You can inherit them in the same way that you inherit from classes in Java, and because of multiple inheritance, you are not limited to a single class.
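For illustration, here is a minimal sketch of the pattern from the question (AnyData is simplified to std::string here, and the has() helper is an invented example, not from the original code):

```cpp
#include <cassert>
#include <map>
#include <string>

typedef std::string ArgName;
typedef std::map<ArgName, std::string> ArgumentMap; // AnyData simplified to std::string

// Arguments is-a ArgumentMap: it inherits find(), operator[], erase(), etc.
class Arguments : public ArgumentMap
{
public:
    bool has(const ArgName& name) const { return find(name) != end(); }
};
```

An Arguments object can be used exactly like a map, plus its own methods.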
Also, I don't understand why you would use a typedef.
You use typedef to give classes meaningful names. In addition to shortening your typing, it makes your program more readable, and gives you additional flexibility in redefining the class behind the typedef later on.
Note: the fact that you can inherit from standard containers does not mean that you should do it. Read answers to this question for more information.
Interfaces are a language construct specific to Java. They do not exist in C++. In C++ you only have classes. You can have abstract classes (classes with unimplemented methods), and eventually you can use them to enunciate interfaces.
C++ has multiple inheritance, there are no "interfaces". You can certainly inherit from any class. And C++ has templates, something completely absent in Java. Templates allow you to make the compiler write specially tailored functions or classes.
Map isn't an interface, it's a template. Moreover, the author doesn't derive his class from the map template as a whole, but from a particular instantiation of it (std::map<ArgName, AnyData>). Without the rest of the class's code, one can only guess why he's doing it and what he wants to achieve.
Note that deriving from STL templates is a bad idea in general (with few exceptional cases) and usually it's much better to make the template a member of your class. Templates don't have virtual members and thus - there's no real way to change their behaviour when deriving (and that's the real point of inheritance in many cases).
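A hedged sketch of the composition alternative this answer recommends (all names here are illustrative, not from the original code):

```cpp
#include <cassert>
#include <map>
#include <string>

// The map is a private member; the class exposes only the operations it needs.
class Arguments
{
public:
    void set(const std::string& name, const std::string& value) { args_[name] = value; }

    bool get(const std::string& name, std::string& value) const
    {
        std::map<std::string, std::string>::const_iterator it = args_.find(name);
        if (it == args_.end()) return false;
        value = it->second;
        return true;
    }

private:
    std::map<std::string, std::string> args_;
};
```

Callers can no longer call arbitrary map methods, so the class's invariants stay under its own control.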
And a bit about the typedefs: when you use a typedef like that, you make your code a bit easier to change in the future. When you (for any reason) decide that you want to implement your own string class, you only need to change the first line in this file (from typedef std::string ArgName; to typedef myNewStringClass ArgName;), instead of changing the code in all the places where ArgName occurs.
Related
I have looked high and low for answers to this question - here on this forum and on the general internet. While I have found posts discussing similar topics, I am at a point where I need to make some design choices and am wondering if I am going about it the right way, which is as follows:
In C++ I have created 3 data structures: A linked list, a binary tree and a grid. I want to be able to store different classes in these data structures - one may be a class to manipulate strings, another numbers, etc. Now, each of these classes, assigned to the nodes, has the ability to perform and handle comparison operations for the standard inequality operators.
I thought C++ inheritance would provide the perfect solution to the matter - it would allow for a base "data class" (the abstract class) and all the other data classes, such as JString, to inherit from it. So the data class would have the following inequality method:
virtual bool isGreaterThan(const dataStructure & otherData) const = 0;
Then, JString will inherit from dataStructure and the desire would be to override this method, since isGreaterThan will obviously have a different meaning depending on the class. However, what I need is this:
virtual bool isGreaterThan(const JString & otherData) const;
Which, I know, will not work, since the parameters are of different data types and C++ requires identical signatures when overriding virtual methods. The only solution I could see is doing something like this in JString:
virtual bool isGreaterThan(const dataStructure & otherData) const
{
    return this->isGreaterThanJString(dynamic_cast<const JString&>(otherData));
}
virtual bool isGreaterThanJString(const JString & otherData) const;
In other words, the overriding method just calls the JString equivalent, down-casting otherData to a JString object, since this will always be true and if not, it should fail regardless.
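A self-contained sketch of that double-dispatch idea, with JString simplified to wrap a std::string (the class internals are assumed here, not taken from the question):

```cpp
#include <cassert>
#include <string>

class dataStructure
{
public:
    virtual ~dataStructure() {}
    virtual bool isGreaterThan(const dataStructure& other) const = 0;
};

class JString : public dataStructure
{
public:
    explicit JString(const std::string& s) : value_(s) {}

    // The override accepts the base type, then down-casts;
    // dynamic_cast throws std::bad_cast if 'other' is not a JString.
    virtual bool isGreaterThan(const dataStructure& other) const
    {
        return isGreaterThanJString(dynamic_cast<const JString&>(other));
    }

    bool isGreaterThanJString(const JString& other) const
    {
        return value_ > other.value_;
    }

private:
    std::string value_;
};
```

Calling through a base reference dispatches correctly, but mixing two different derived types at runtime throws.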
My question is this: Does this seem like an acceptable strategy or am I missing some ability in C++. I have used templates as well, but I am trying to avoid this as I find debugging becomes very difficult. The other option would be to try a void* that can accept any data type, but this comes with issues as well and shifts the burden onto the code resulting in lengthier classes.
The LSP (Liskov Substitution Principle) means that operations on a reference to the base class must work, and have the same semantics, on both base and derived class instances.
Your example fails this test. The base isGreaterThan claims to work on all dataStructure, but it does not.
I would make the stored data type a template parameter of your containers. Then you know the concrete type of the stored data.
Look at std list for an idea of what a linked list template might look like.
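As a rough illustration of the templated-container advice, a toy linked list might look like this (a deliberately minimal sketch, nothing like a production std::list):

```cpp
#include <cassert>
#include <cstddef>

// The element type is a template parameter, so comparisons use T's own
// operators directly: no base class, no casts, no virtual calls.
template <typename T>
class LinkedList
{
    struct Node { T value; Node* next; };
public:
    LinkedList() : head_(0), size_(0) {}
    ~LinkedList()
    {
        while (head_) { Node* n = head_; head_ = head_->next; delete n; }
    }
    void pushFront(const T& v)
    {
        Node* n = new Node;
        n->value = v;
        n->next = head_;
        head_ = n;
        ++size_;
    }
    const T& front() const { return head_->value; }
    std::size_t size() const { return size_; }
private:
    LinkedList(const LinkedList&);            // non-copyable, for brevity
    LinkedList& operator=(const LinkedList&);
    Node* head_;
    std::size_t size_;
};
```

Any type with the right operators works: LinkedList<int>, LinkedList<std::string>, and so on.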
I will now go onto complex additional steps you can do in the 0.1% of cases where the above advice is not correct.
If this causes issues, because of template bloat, you could create a polymorphic container that enforces the type of the stored data, either with a thin template wrapper or runtime tests. Once stored, you blindly cast to the known stored type, and store how to copy/compare/etc said type either in a C or C++ style polymorphic method.
Here is an 8 year old fun talk about this approach: https://channel9.msdn.com/Events/GoingNative/2013/Inheritance-Is-The-Base-Class-of-Evil
I'm learning C++ and want to implement a custom string class, MyTextProc::Word, to add some features to std::string, such as string reversal, case conversion, translations etc.
It seems that this is best done using an is-a relationship:
namespace MyTextProc {
class Word : public string {
/* my custom methods.... */
};
}
I do not specify any constructors for my class, but the above definition of the Word class only exposes the default and copy constructors - can't Word just inherit all the public string constructors as well?
It would be good to have Word work just like a string. I am adding no properties to string; must I really implement every single constructor of string, the base class, in order to implement my new subclass?
Is this best done using a has-a relationship? Should I just implement a const string& constructor and require clients to pass a string object for construction? Should I override all of the string constructors?
Welcome to C++ hell.
You've just touched one of the most controversial aspects of C++: std::string is not polymorphic and constructors are not inherited.
The only "clean" way (that will not attract any sort of criticism) is to embed std::string as a member, delegating ALL OF ITS METHODS. Good work!
Other ways can work, but you always have to be aware of some limitations:
std::string has no virtual methods, so if you derive from it, you will not get a polymorphic type.
That means that if you pass a Word to a function taking a string, and that function calls a string method, your overriding method will not be called; and
any Word allocated via new must not be handed to a string*: deleting through such a pointer results in undefined behavior.
All the inherited methods that take and return strings will work, but they'll return string, not Word.
About constructors: they are NOT INHERITED. The apparent inheritance of default construction is an illusion, due to the compiler-synthesized implementations of the default, copy, and assignment operations, which implicitly call the base.
In C++11 a workaround can be
class Word: public std::string
{
public:
template<class... Args>
Word(Args&&... args) :std::string(std::forward<Args>(args)...)
{}
//whatever else
};
This forwards whatever arguments are given to a suitable std::string constructor (if one exists; otherwise a compile error results).
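A self-contained version of that workaround in use (the reversed() method is an invented example of the kind of feature the question describes):

```cpp
#include <cassert>
#include <string>
#include <utility>

class Word : public std::string
{
public:
    // Forward any argument list to the matching std::string constructor.
    template <class... Args>
    Word(Args&&... args) : std::string(std::forward<Args>(args)...) {}

    // An invented extra method, just for illustration.
    Word reversed() const { return Word(std::string(rbegin(), rend())); }
};
```

Any std::string constructor call now works on Word, including the multi-argument ones.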
Now, decide yourself what the design should be. Maybe you will end up with a plain std::string and an independent set of free functions.
Another (imperfect) way is to make Word not inherit from, but embed, std::string, constructed as above, and be implicitly convertible into std::string (plus having an explicit str() method). This lets you use a Word as a string and create a Word from a string, but not use a Word "in place of" a string.
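A sketch of that embedding design (again, reversed() is an invented example method):

```cpp
#include <cassert>
#include <string>

class Word
{
public:
    explicit Word(const std::string& s) : str_(s) {}

    operator std::string() const { return str_; }   // usable where a string value is expected
    const std::string& str() const { return str_; } // explicit access

    Word reversed() const { return Word(std::string(str_.rbegin(), str_.rend())); }

private:
    std::string str_;
};
```

The conversion operator yields a string value, so Word works with code expecting strings by value, but not through string references or pointers.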
Another thing to unlearn (maybe from Java...): don't tie yourself to the "is-a = inheritance, has-a = embedding" OOP rule. The C++ standard library objects are not Objects in the OOP sense, so all the OOP-school methodologies have fallacies in that context.
You have to decide, in your case, what the trade-off is between simple coding (and good application of the "don't repeat yourself" principle, much easier with inheritance) and simple maintenance (embedding makes your code less prone to being used wrongly by others).
This is in answer to the comment below:
"the lack of polymorphism of standard C++ classes. Why is this so? It seems to a novice like me that not implementing std C++ libs using virtual functions is defeating the point of the language they are designed to enrich!!!"
Well... yes and no!
Since you cite PERL, consider that
- PERL is a scripting language, where types are dynamic.
- Java is a language where types are static and objects dynamic
- C++ is a language where types are static and object are static (and dynamic allocation of object is explicit)
Now, in Java objects are always dynamically allocated and local variables are "reference" to those objects.
In C++, local variables are object themselves, and have value semantics. And the C++ standard library is designed not as a set of bases to extend, but as a set of value types for which generate code by means of templates.
Think of std::string as something that works just like int works: do you expect to derive from int to get "more methods", or to change the behavior of some of them?
The controversial aspect here is that, to be coherent with this design, std::string should just manage its internal memory but have no methods. Instead, string functions should have been implemented as templates, so that they could be used as "algorithms" with any other class exhibiting the same external behavior as std::string. That's something the designers didn't do.
They placed many methods on it, but they didn't make it polymorphic, so as to retain the value semantics, thus making an ambiguous design and leaving inheritance as the only way to "reuse" those methods without re-declaring them. This is possible, but with the limitations I told you about.
If you want to effectively create new functions and have "polymorphism on value", use templates: instead of
std::string capitalize(const std::string& s) { .... }
do something like
template<class String>
String capitalize(const String& s) { .... }
That way your code can work with any class having the same string interface with respect to characters, for any character type.
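A concrete (hypothetical) example of such a templated function, written for char-based strings:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Works for any class with a std::string-like interface:
// value semantics, empty(), and operator[].
template <class String>
String capitalize(const String& s)
{
    String result(s);
    if (!result.empty())
        result[0] = static_cast<char>(
            std::toupper(static_cast<unsigned char>(result[0])));
    return result;
}
```

The same implementation serves std::string and any custom string class that exposes the same operations.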
As honest advice, I'd implement the methods you want as free functions which take in a string and return a string. They'll be easier to test, decoupled, and easy to use. In C++, don't always reach for a class when a function would do. In fact, when you get into templates, you could create a templated function without a definition and a specialization for the basic string class. That way, you'll always know whether the string type you're touching has a custom-defined method (and yes, if you interact with Microsoft you'll discover there are 50 million string implementations).
I have three classes which each store their own array of double values. To populate the arrays I use a fairly complex function, lets say foo(), which takes in several parameters and calculates the appropriate values for the array.
Each of my three classes uses the same function with only minor adjustments (i.e. the input parameters vary slightly). Each of the classes is actually quite similar although they each perform separate logic when retrieving the values of the array.
So I am wondering how should I 'share' the function so that all classes can use it, without having to duplicate the code?
I was thinking of creating a base class which contains the function foo() and a virtual get() method. My three classes could then inherit from this base class. Alternatively, I was also thinking that perhaps a global function was the way to go? Maybe putting the function into a namespace?
If the classes have nothing in common besides this foo() function, it is silly to put it in a base class; make it a free function instead. C++ is not Java.
Declaring the function in a base class sounds like the most appropriate solution. I'm not sure you need the virtual "get" though; instead, just declare the array in the base class and provide access method(s) for descendants.
The more complex part is "the input parameters vary slightly". If the parameters differ by type only, then you may write a template function. If the difference is more significant, then the only solution I see is splitting the main function into several logic blocks and using these blocks in the descendant classes to produce the final result.
If your classes are quite similar, you could create a template class with three different implementations that has the function foo<T>()
Implement that function in base class. If these classes are similar as you say, they should be derived from one base class anyway! If there are several functions like foo(), it might be reasonable in some cases to combine them into another class which is utilized by/with your classes.
If the underlying data of each class is the same (an array of doubles), consider using a single class and overloading the constructor, or just use 3 different functions:
void PopulateFromString(const string&)
void PopulateFromXml(...)
void PopulateFromInteger(...)
If the data or the behavior is different in each class type, then your solution of base class is good.
You can also define a utility function in the same namespace as your classes, if it has nothing to do with specific class behavior (polymorphism). Bjarne Stroustrup recommends this method, by the way.
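As a sketch of that free-function approach (all class and function names here are invented for illustration, not taken from the question):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

namespace detail
{
    // The shared population logic lives in one free function;
    // the slight per-class differences come in through parameters.
    inline std::vector<double> fillValues(std::size_t count, double start, double step)
    {
        std::vector<double> v(count);
        for (std::size_t i = 0; i < count; ++i)
            v[i] = start + step * static_cast<double>(i);
        return v;
    }
}

// Two unrelated classes reuse the same function with different parameters.
class LinearSeries
{
public:
    LinearSeries() : values_(detail::fillValues(5, 0.0, 1.0)) {}
    double at(std::size_t i) const { return values_[i]; }
private:
    std::vector<double> values_;
};

class ScaledSeries
{
public:
    ScaledSeries() : values_(detail::fillValues(5, 10.0, 0.5)) {}
    double at(std::size_t i) const { return values_[i]; }
private:
    std::vector<double> values_;
};
```

No inheritance relationship is needed; each class keeps its own retrieval logic.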
For the purpose of this answer, I am assuming the classes you have are not common in any other outwards way; they may load the same data, but they are providing different interfaces.
There are two possible situations here, and you haven't told us which one it is. It could be more like
void foo(double* arr, size_t size) {
// Some specific code (that probably just does some preparation)
// Lots of generic code
// ...
// Some more specific code (cleanup?)
}
or something similar to
void foo(double* arr, size_t size) {
// generic_code();
// ...
// specific_code();
// generic_code();
// ...
}
In the first case, the generic code may very well be easy to put into a separate function, and then making a base class doesn't make much sense: you'll probably be inheriting from it privately, and you should prefer composition over private inheritance unless you have a good reason to. You could put the new function in its own class if it benefits from it, but it's not strictly necessary. Whether you put it in a namespace or not depends on how you're organising your code.
The second case is trickier, and in that case I would advise polymorphism. However, you don't seem to need runtime polymorphism for this, and so you could just as well do it compile-time. Using the fact that this is C++, you can use CRTP:
template<typename IMPL>
class MyBase {
public:
void foo(double* arr, size_t size) {
// generic code
// ...
double importantResult = IMPL::DoALittleWork(/* args */);
// more generic code
// ...
}
};
class Derived : public MyBase<Derived> {
public:
static double DoALittleWork(/* params */) {
// My specific stuff
return result;
}
};
This gives you the benefit of code organisation and saves you some virtual functions. On the other hand, it does make it slightly less clear what functions need to be implemented (although the error messages are not that bad).
I would only go with the second route if making a new function (possibly within a new class) would clearly be uglier. If you're parsing different formats as Andrey says, then having a parser object (that would be polymorphic) passed in would be even nicer as it would allow you to mock things with less trouble, but you haven't given enough details to say for sure.
While designing an interface for a class I normally get caught in two minds about whether I should provide member functions which can be calculated / derived by using combinations of other member functions. For example:
class DocContainer
{
public:
Doc* getDoc(int index) const;
bool isDocSelected(Doc*) const;
int getDocCount() const;
//Should this method be here???
//This method returns the selected documents in the container (in selectedDocs_out)
void getSelectedDocs(std::vector<Doc*>& selectedDocs_out) const;
};
Should I provide this as a class member function or probably a namespace where I can define this method? Which one is preferred?
In general, you should probably prefer free functions. Think about it from an OOP perspective.
If the function does not need access to any private members, then why should it be given access to them? That's not good for encapsulation. It means more code that may potentially fail when the internals of the class are modified.
It also limits the possible amount of code reuse.
If you wrote the function as something like this:
template <typename T>
bool getSelectedDocs(T& container, std::vector<Doc*>&);
Then the same implementation of getSelectedDocs will work for any class that exposes the required functions, not just your DocContainer.
Of course, if you don't like templates, an interface could be used, and then it'd still work for any class that implemented this interface.
On the other hand, if it is a member function, then it'll only work for this particular class (and possibly derived classes).
The C++ standard library follows the same approach. Consider std::find, for example, which is made a free function for this precise reason. It doesn't need to know the internals of the class it's searching in. It just needs some implementation that fulfills its requirements. Which means that the same find() implementation can work on any container, in the standard library or elsewhere.
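The same idea in miniature: one templated helper built on std::find works for any standard container, with no member access required (the contains() name is my own, not a standard function):

```cpp
#include <algorithm>
#include <cassert>
#include <list>
#include <vector>

// One implementation serves every container whose iterators and
// element type support the equality comparison.
template <typename Container, typename T>
bool contains(const Container& c, const T& value)
{
    return std::find(c.begin(), c.end(), value) != c.end();
}
```

The same function body works unchanged for std::vector, std::list, or a user-defined container exposing begin()/end().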
Scott Meyers argues for the same thing.
If you don't like it cluttering up your main namespace, you can of course put it into a separate namespace with functionality for this particular class.
I think it's fine to have getSelectedDocs as a member function. It's a perfectly reasonable operation for a DocContainer, so it makes sense as a member. Member functions should be there to make the class useful. They don't need to satisfy some sort of minimality requirement.
One disadvantage to moving it outside the class is that people will have to look in two places when they try to figure out how to use a DocContainer: they need to look in the class and also in the utility namespace.
The STL has basically aimed for small interfaces, so in your case, getSelectedDocs would be implemented as a member function if and only if it could be implemented more efficiently than a combination of isDocSelected and getDoc.
This technique may not be applicable everywhere, but it's a good rule of thumb to prevent clutter in interfaces.
I agree with the answers from Konrad and jalf. Unless there is a significant benefit from having "getSelectedDocs" then it clutters the interface of DocContainer.
Adding this member triggers my smelly code sensor. DocContainer is obviously a container so why not use iterators to scan over individual documents?
class DocContainer
{
public:
iterator begin ();
iterator end ();
// ...
bool isDocSelected (Doc *) const;
};
Then, use a functor that creates the vector of documents as it needs to:
typedef std::vector <Doc*> DocVector;
class IsDocSelected {
public:
IsDocSelected (DocContainer const & docs, DocVector & results)
: docs (docs)
, results (results)
{}
void operator()(Doc & doc) const
{
if (docs.isDocSelected (&doc))
{
results.push_back (&doc);
}
}
private:
DocContainer const & docs;
DocVector & results;
};
void foo (DocContainer & docs)
{
DocVector results;
std :: for_each (docs.begin ()
, docs.end ()
, IsDocSelected (docs, results));
}
This is a bit more verbose (at least until we have lambdas), but an advantage to this kind of approach is that the specific type of filtering is not coupled with the DocContainer class. In the future, if you need a new list of documents that are "NotSelected" there is no need to change the interface to DocContainer, you just write a new "IsDocNotSelected" class.
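Since the answer anticipates lambdas, here is how the same filtering might look with a C++11 lambda, using a deliberately simplified stand-in for DocContainer (the Doc and container internals are assumptions for the sketch):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Doc { int id; bool selected; };

// A simplified stand-in for DocContainer: iterable, with a selection query.
class DocContainer
{
public:
    void add(const Doc& d) { docs_.push_back(d); }
    std::vector<Doc>::iterator begin() { return docs_.begin(); }
    std::vector<Doc>::iterator end() { return docs_.end(); }
    bool isDocSelected(const Doc* d) const { return d->selected; }
private:
    std::vector<Doc> docs_;
};

// The lambda replaces the whole IsDocSelected functor class.
std::vector<Doc*> selectedDocs(DocContainer& docs)
{
    std::vector<Doc*> results;
    std::for_each(docs.begin(), docs.end(), [&](Doc& d) {
        if (docs.isDocSelected(&d))
            results.push_back(&d);
    });
    return results;
}
```

The filtering logic still stays outside the DocContainer interface, as the answer advocates.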
The answer is probably "it depends"...
If the class is part of a public interface to a library that will be used by many different callers then there's a good argument for providing a multitude of functionality to make it easy to use, including some duplication and/or crossover. However, if the class is only being used by a single upstream caller then it probably doesn't make sense to provide multiple ways to achieve the same thing. Remember that all the code in the interface has to be tested and documented, so there is always a cost to adding that one last bit of functionality.
I think this is perfectly valid if the method:
fits in the class responsibilities
is not too specific to a small part of the class clients (like at least 20%)
This is especially true if the method contains complex logic/computation that would be more expensive to maintain in many places than only in the class.
I'm sorry my question is so long and technical, but I think it's important and other people will be interested in it.
I was looking for a way to cleanly separate some software internals from their representation in C++.
I have a generic parameter class (to be later stored in a container) that can hold any kind of value via the boost::any class.
I have a base class (roughly) of this kind (of course there is more stuff)
class Parameter
{
public:
Parameter();
template<typename T> T GetValue() const { return boost::any_cast<T>( _value ); }
template<typename T> void SetValue(const T& value) { _value = value; }
virtual string GetValueAsString() const = 0;
virtual void SetValueFromString(const string& str) = 0;
private:
boost::any _value;
};
There are two levels of derived classes:
The first level defines the type and the conversion to/from string (for example ParameterInt or ParameterString)
The second level defines the behaviour and the real creators (for example deriving ParameterAnyInt and ParameterLimitedInt from ParameterInt or ParameterFilename from GenericString)
Depending on the real type I would like to add external function or classes that operates depending on the specific parameter type without adding virtual methods to the base class and without doing strange casts
For example I would like to create the proper gui controls depending on parameter types:
Widget* CreateWidget(const Parameter& p)
Of course I cannot determine the real Parameter type from this unless I use RTTI or implement it myself (with an enum and a switch-case), but that is not the right OOP design solution, you know.
The classical solution is the Visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern
The problem with this pattern is that I have to know in advance which derived types will be implemented, so (putting together what is written in wikipedia and my code) we'll have sort of:
struct Visitor
{
virtual void visit(ParameterLimitedInt& wheel) = 0;
virtual void visit(ParameterAnyInt& engine) = 0;
virtual void visit(ParameterFilename& body) = 0;
};
Is there any solution to obtain this behaviour in any other way without need to know in advance all the concrete types and without deriving the original visitor?
Edit: Dr. Pizza's solution seems the closest to what I was thinking, but the problem is still the same, and the method actually relies on dynamic_cast, which I was trying to avoid as a kind of (even if weak) RTTI method.
Maybe it is better to think of some solution without even citing the Visitor pattern, and clear our minds. The purpose is just to have a function such as:
Widget* CreateWidget(const Parameter& p)
behave differently for each "concrete" parameter without losing info on its type
For a generic implementation of Visitor, I'd suggest the Loki Visitor, part of the Loki library.
I've used this ("acyclic visitor") to good effect; it makes adding new classes to the hierarchy possible without changing existing ones, to some extent.
If I understand this correctly...
We had an object that could use different hardware options. To facilitate this we used an abstract Device interface. Device had a bunch of functions that would be fired on certain events. The use would be the same, but the various implementations of Device would either have fully fleshed-out functions or just return immediately. To make life even easier, the functions were void and threw exceptions when something went wrong.
For completeness's sake:
it's of course completely possible to write your own implementation of a multimethod pointer table for your objects and calculate the method addresses manually at run time. There's a paper by Stroustrup on the topic of implementing multimethods (albeit in the compiler).
I wouldn't really advise anyone to do this. Getting the implementation to perform well is quite complicated and the syntax for using it will probably be very awkward and error-prone. If everything else fails, this might still be the way to go, though.
I am having trouble understanding your requirements. But I'll state - in my own words, as it were - what I understand the situation to be:
You have an abstract Parameter class, which is eventually subclassed into some concrete classes (eg: ParameterLimitedInt).
You have a separate GUI system which will be passed these parameters in a generic fashion, but the catch is that it needs to present the GUI component specific to the concrete type of the parameter class.
The restrictions are that you don't want to use RTTI, and don't want to write code to handle every possible type of concrete parameter.
You are open to using the visitor pattern.
With those being your requirements, here is how I would handle such a situation:
I would implement the visitor pattern where the accept() returns a boolean value. The base Parameter class would implement a virtual accept() function and return false.
Concrete implementations of the Parameter class would then contain accept() functions which will call the visitor's visit(). They would return true.
The visitor class would make use of a templated visit() function so you would only override for the concrete Parameter types you care to support:
class Visitor
{
public:
template< class T > void visit( const T& param ) const
{
assert( false && "this parameter type not specialised in the visitor" );
}
void visit( const ParameterLimitedInt& ) const; // specialised implementations...
};
Thus if accept() returns false, you know the concrete type of the Parameter has not implemented the visitor pattern yet (in case there is additional logic you would prefer to handle on a case-by-case basis). If the assert() in the visitor triggers, it's because it is visiting a Parameter type for which you haven't implemented a specialisation.
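A runnable sketch of this design (simplified for demonstration: the base Parameter here is concrete, and the visitor's fallback records the result instead of asserting, so the unsupported path can be exercised):

```cpp
#include <cassert>
#include <string>

class ParameterLimitedInt;

class Visitor
{
public:
    Visitor() : lastVisited("none") {}

    // Fallback for concrete types without a specialised overload
    // (the answer uses assert(false) here; we record it instead).
    template <class T>
    void visit(const T&) { lastVisited = "unsupported"; }

    // Specialised implementation: an exact-match non-template overload
    // is preferred over the template during overload resolution.
    void visit(const ParameterLimitedInt&) { lastVisited = "ParameterLimitedInt"; }

    std::string lastVisited;
};

class Parameter
{
public:
    virtual ~Parameter() {}
    // Base returns false: "this type has not implemented the visitor yet".
    virtual bool accept(Visitor&) const { return false; }
};

class ParameterLimitedInt : public Parameter
{
public:
    virtual bool accept(Visitor& v) const { v.visit(*this); return true; }
};

class ParameterAnyInt : public Parameter
{
public:
    virtual bool accept(Visitor& v) const { v.visit(*this); return true; }
};
```

ParameterLimitedInt hits the specialised overload; ParameterAnyInt falls through to the template; the bare base reports that it never implemented accept().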
One downside to all of this is that unsupported visits are only caught at runtime.