I have a custom structure called SortedArrayList<T> which sorts its elements according to a comparator, and I would like to prevent assignment through operator[].
Example:
ArrayList.h
template <typename T> class ArrayList : public List<T> {
    virtual T& operator[](const int& index) override; //override List<T>
    virtual const T operator[](const int& index) const override; //override List<T>
};
SortedArrayList.h with the following operators
template <typename T> class SortedArrayList : public ArrayList<T> {
public:
    SortedArrayList(const std::function<bool(const T&, const T&)>& comparator);
    T& operator[](const int& index) override; //get reference (LHS)
    const T operator[](const int& index) const override; //get copy (RHS)
};
Test.h
ArrayList<int>* regular = new ArrayList<int>();
ArrayList<int>* sorted = new SortedArrayList<int>(cmpfn);
(*regular)[0] == 5; //allow
(*regular)[0] = 5; //allow
(*sorted)[0] == 7; //allow
(*sorted)[0] = 7; //prevent (e.g. throw an exception)
Is this operation possible?
By prevent I mean throwing an exception or doing something else that warns the user not to do it.
Prefer aggregation over inheritance:
template <typename T> class SortedArrayList {
    ArrayList<T> m_the_list;
public:
    SortedArrayList(const std::function<bool(const T&, const T&)>& comparator);
    const T& operator[](const int& index) const {return m_the_list[index];} // always get const reference
    // Can act as a *const* ArrayList<T>, but not as a mutable ArrayList<T>, as that would violate Liskov's substitution principle.
    operator const ArrayList<T>&() const {return m_the_list;}
};
As Stephen Newell correctly points out, when you're using inheritance, you're guaranteeing your class SortedArrayList can act as an ArrayList in every possible scenario. This is clearly not the case in your example.
You can read more here about how violating Liskov's Substitution Principle is a bad idea.
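For illustration, a minimal usage sketch of the aggregation approach above (print_all is a hypothetical read-only consumer; cmpfn is the comparator from the question):
void print_all(const ArrayList<int>& list); // only needs read access

SortedArrayList<int> sorted(cmpfn);
print_all(sorted);   // fine: implicit conversion to const ArrayList<int>&
int x = sorted[0];   // fine: read access
//sorted[0] = 7;     // does not compile: operator[] only returns const T&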
You should not do this. It indicates an improper design; see the C++ FAQ on Inheritance. Your subclass doesn't fulfill the "is-a" requirement for public inheritance if it can't be used in all the ways the base class can (LSP).
If you want one type of container that allows member replacement and another that doesn't, then define a base class that allows only const member access (no need to make it virtual). Then branch from there into MutableList and ImmutableList, and let SortedArrayList derive from ImmutableList.
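A rough sketch of what that hierarchy could look like (the class names and the std::vector backing store are illustrative, not taken from the question):
#include <vector>

template <typename T>
class ListBase {                                   // const access only
public:
    const T& operator[](std::size_t i) const { return m_data[i]; }
protected:
    std::vector<T> m_data;
};

template <typename T>
class MutableList : public ListBase<T> {
public:
    using ListBase<T>::operator[];
    T& operator[](std::size_t i) { return this->m_data[i]; }
};

template <typename T>
class ImmutableList : public ListBase<T> {
    // no non-const operator[]; elements enter only through controlled inserts
};

template <typename T>
class SortedArrayList : public ImmutableList<T> {
    // an insert() here would keep m_data ordered via the comparator
};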
Seems to me like the best practice here would be to implement an at(const int& index) method instead of overloading []. That would be more clear to the user of the interface anyway.
There is a similar function in std::map and other std data structures. For example: http://www.cplusplus.com/reference/map/map/at/
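A minimal sketch of that idea (the std::vector storage and the exact signature are assumptions for illustration, not from the question):
#include <cstddef>
#include <stdexcept>
#include <vector>

template <typename T>
class SortedArrayList {
    std::vector<T> m_data;   // kept sorted by the comparator
public:
    // read access with bounds checking; deliberately no non-const overload
    const T& at(std::size_t index) const {
        if (index >= m_data.size())
            throw std::out_of_range("SortedArrayList::at");
        return m_data[index];
    }
};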
Why do you pass the index by reference at all? There's absolutely no need for that...
I personally recommend using unsigned integer types for array indices (what would a negative index even mean?).
const on a type returned by value is (nearly) meaningless - it will be copied into another variable anyway (which then will be modifiable), but you prevent move semantics...
So:
T& operator[](unsigned int index); //get reference (LHS)
T operator[](unsigned int index) const; //get copy (RHS)
(Just some improvement suggestions...)
Now to the actual question: Disallowing modification is quite easy:
//T& operator[](unsigned int index); //get reference (LHS)
T const& operator[](unsigned int index) const; //always a const reference
Just one single index operator, always returning a const reference... If the user can live with a reference, fine; otherwise he/she will copy the value anyway...
Edit in adaption to modified question:
Now that inheritance is involved, things get more complicated. You cannot simply get rid of an inherited function, and the inherited one will allow element modification.
In the given situation, I'd consider a redesign (if possible):
template <typename T>
class ArrayListBase
{
public:
    T const& operator[](unsigned int index) const;
    // either copy or const reference, whichever appears more appropriate to you...
};

template <typename T>
class ArrayList : public ArrayListBase<T>
{
public:
    using ArrayListBase<T>::operator[];
    T& operator[](unsigned int index);
};

template <typename T>
class SortedArrayList : public ArrayListBase<T>
{
public:
    // well, simply does not add a non-const overload...
};
The insertion function(s) might be pure virtual in the base class (where a common interface appears suitable) or only available in the derived classes. That's up to you...
Related
[Apologies if this question seems opinionated or discussion-worthy.]
I have a class which, although not a collection class per se, does contain an arbitrary number of elements which are primarily constructed via an 'append' method:
append(TypeA thingA, TypeB thingB, TypeC thingC);
Significantly, there is not an auxiliary struct or class anywhere tying together a triple of (thingA, thingB, thingC), nor has one been needed so far -- the class works fine as-is.
Today I decided I needed an iterator for this class, that could return all the things I'd added to it, almost as if it were a collection, after all. The question is, what's the best way to return thingA, thingB, and thingC?
I could belatedly define an auxiliary tuple struct, just so that the iterator could return instances of it. But this seemed a little odd.
What I implemented instead was something along the lines of
class FunnyCollectionIter {
    FunnyCollection* _ctx;
    unsigned int _i;
public:
    FunnyCollectionIter();
    const TypeA& thingA() const;
    const TypeB& thingB() const;
    const TypeC& thingC() const;
    FunnyCollectionIter& operator=(const FunnyCollectionIter &rhs);
    FunnyCollectionIter& operator++();
    bool operator==(const FunnyCollectionIter &rhs) const;
    // ...
};
And I'm using it with code like this:
FunnyCollectionIter it;
for(it = funnycollection.begin(); it != funnycollection.end(); ++it) {
// now do things with it.thingA(), it.thingB(), and it.thingC()
}
But this seems a little odd, too. Normally (in the STL, at least) you access an iterator either using *, or ->first and ->second. But in the scheme I've implemented, there's no *, no ->, no first and second (let alone third?), and the interesting names thingA, thingB, and thingC are methods to invoke, not members to access.
So my question is, is this a poor way to construct an iterator in this situation, and is there a better way?
If it helps, the class is actually a scheduler. The three "things" in question are a duration, a callback, and an optional name. The caller constructs a schedule via one or more calls to append(), and then normally the schedule just runs, but now I have a need for the caller to be able to review the just-constructed schedule. (This is the same class I was referring to in this other question.)
You might split your class to have iterator interface on one side, and specific interface on the other side:
class FunnyWrapper
{
    FunnyCollection* _ctx;
    unsigned int _i;
public:
    // constructor needed because the members are private
    FunnyWrapper(FunnyCollection* ctx, unsigned int i) : _ctx(ctx), _i(i) {}
    const TypeA& thingA() const;
    const TypeB& thingB() const;
    const TypeC& thingC() const;
    // ...
    // trick to use operator -> in iterator.
    FunnyWrapper* operator ->() { return this; }
};
class FunnyCollectionIter
{
FunnyCollection* _ctx;
unsigned int _i;
public:
FunnyCollectionIter();
FunnyCollectionIter& operator=(const FunnyCollectionIter &rhs);
FunnyCollectionIter& operator++();
bool operator==(const FunnyCollectionIter &rhs) const;
// ...
FunnyWrapper operator->() { return FunnyWrapper{_ctx, _i}; }
FunnyWrapper operator*() { return FunnyWrapper{_ctx, _i}; }
};
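With that wrapper in place, the calling code can look STL-like; a usage sketch (funnycollection is the instance from the question, process() is a hypothetical consumer):
for (FunnyCollectionIter it = funnycollection.begin(); it != funnycollection.end(); ++it) {
    process(it->thingA());     // operator-> returns a temporary FunnyWrapper
    process((*it).thingB());   // operator* works the same way
}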
I have a class that works as a predicate to select value from list.
class Predicate {
public:
// In this example, I am using QString as value type
// this is not what happens in actual code, where more complex data is being validated
virtual bool evaluate(const QString& val) const = 0;
};
Originally, I used lambda functions, but this created a lot of repetitive garbage code. So instead, I want to use predicate classes that use inheritance. For example:
class PredicateMaxLength: public Predicate {
public:
PredicateMaxLength(const int max) : maxLength(max) {}
virtual bool evaluate(const QString& val) const {return val.length()<maxLength;}
protected:
const int maxLength;
};
To let inheritance do its deed, pointers are passed rather than values:
class SomeDataObject {
// Removes all values that satisfy the given predicate
int removeValues(const std::shared_ptr<Predicate> pred);
};
Now we are surely still going to use lambdas in cases where the code would not be repetitive (e.g. some special case). For this purpose, PredicateLambda has been created:
typedef std::function<bool(const QString& val)> StdPredicateLambda;
class PredicateLambda: public Predicate {
public:
PredicateLambda(const StdPredicateLambda& lambda) : Predicate(), callback_(lambda) {}
virtual bool evaluate(const QString& val) const override {return callback_(val);}
protected:
const StdPredicateLambda callback_;
};
The nasty effect of this is that whenever a lambda is used, it must be wrapped in the PredicateLambda constructor:
myObject.deleteItems(std::make_shared<PredicateLambda>([](const QString& val) -> bool { /* ... lambda code ... */ }));
This is ugly. I have two options:
for every function that accepts a predicate, have an overload that does the conversion seen above. This doubles the number of methods in the header file
Have an implicit conversion from std::function<bool(const QString& val)> to std::shared_ptr<Predicate> which would execute this:
std::shared_ptr<Predicate> magicImplicitConversion(const StdPredicateLambda& lambdaFn) {
return std::make_shared<PredicateLambda>(lambdaFn);
}
I came here to ask whether the second option is possible. If it is, does it carry any risk?
If you don't want to use a template (so as not to expose the code), you may use std::function:
class SomeDataObject {
// Removes all values that satisfy the given predicate
int removeValues(std::function<bool(const QString&)> pred);
};
and your predicate
class PredicateMaxLength {
public:
explicit PredicateMaxLength(int max) : maxLength(max) {}
bool operator ()(const QString& val) const {return val.length()<maxLength;}
protected:
int maxLength;
};
So you can use either
SomeDataObject someDataObject;
someDataObject.removeValues(PredicateMaxLength(42));
someDataObject.removeValues([](const QString& s) { return s.size() < 42; });
You want polymorphism, and you don't want to use template-style header lambdas. And you want to be able to have a few default cases.
The right answer is to throw out your Predicate class.
Use using Predicate = std::function<bool(const QString&)>;.
Next, note that your Predicate sub-types are basically factories (the constructor is a factory) for Predicates with some extra state.
For a std::function, such a factory is just a function returning a Predicate.
using Predicate = std::function<bool(const QString&)>;
Predicate PredicateMaxLength(int max) {
    return [max](QString const& str){ return str.length() < max; };
}
where the body of PredicateMaxLength goes in a cpp file.
If you have an insanely complicated set of state for your Predicate-derived class, simply give it an operator() and store it within a std::function. (In the extremely rare case that you have some state you should store in a shared ptr, just store it in a shared ptr).
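As a sketch of that last point: a hand-written functor with operator() can be returned through the same std::function alias. PredicateLengthInRange and its bounds are made-up names for illustration:
#include <functional>
#include <QString>

using Predicate = std::function<bool(const QString&)>;

class PredicateLengthInRange {
public:
    PredicateLengthInRange(int min, int max) : min_(min), max_(max) {}
    bool operator()(const QString& val) const {
        return val.length() >= min_ && val.length() < max_;
    }
private:
    int min_;
    int max_;
};

Predicate PredicateInRange(int min, int max) {
    return PredicateLengthInRange(min, max);   // the functor is type-erased into std::function
}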
A std::function<Signature> is a regular type that is polymorphic. It uses a technique known as type erasure to be both a value and polymorphic, but really you can call it magic.
It is the right type to use when you are passing around an object whose only job is to be invoked with some set of arguments and return some value.
To directly answer your question: no, you cannot define a conversion operator between a std::function and a std::shared_ptr<yourtype> without making your program ill-formed, no diagnostic required.
Even if you could, a std::function is not a lambda, and a lambda is not a std::function. So your conversion operator wouldn't work.
In various situations I have a collection (e.g. vector) of objects that needs to be processed by a number of functions. Some of the functions need to modify the objects while others don't. The objects' classes may inherit from an abstract base class. Hence, I have something like this:
class A
{
public:
virtual void foo() const = 0;
virtual void bar() = 0;
/* ... */
};
void process_1(std::vector<std::reference_wrapper<A>> const &vec);
void process_2(std::vector<std::reference_wrapper<A const>> const &vec);
Obviously (?) I can't pass the same vector of std::reference_wrapper<A>s to both process_1 and process_2. Solutions I've considered so far include:
Using a C-style cast or reinterpret_cast on a reference to vec
Writing my own reference wrapper that has T& get() and T const & get() const instead of T& get() const
Refactoring with e.g. methods that take a wrapper instead of the vector
Having copies of the vector with and without const
Not using const in reference_wrapper's argument
None of these seems very elegant. Is there something else I could do?
Range adapters.
A range adapter takes a range as input (a container is a range, as it has begin and end returning iterators), and returns a range with different properties.
You'd cast your reference wrappers to the const variant when you dereference the iterator.
boost has iterators that will do this for you (transform iterators), and tools to help write conforming iterators, but it can be done from scratch with some work.
A bit of extra work could even keep the typenames sane.
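For example, with C++20 ranges (instead of Boost's transform iterators) the const view can be built lazily. This sketch assumes a C++20 compiler and that process_2 is relaxed to accept any range of reference_wrapper<A const> rather than a concrete vector; A is the abstract base class from the question:
#include <functional>
#include <ranges>
#include <vector>

auto as_const_refs(std::vector<std::reference_wrapper<A>> const& vec)
{
    // lazily re-wrap each element as a reference_wrapper<A const>
    return vec | std::views::transform(
        [](std::reference_wrapper<A> r) {
            return std::reference_wrapper<A const>(r.get());
        });
}

// usage: process_2_range(as_const_refs(vec));   // hypothetical range-based overload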
Even if it lacks elegance, I would make my own reference wrapper:
#include <functional>
template <typename T>
class ReferenceWrapper
{
public:
ReferenceWrapper(T& ref)
: m_ref(ref)
{}
ReferenceWrapper(const std::reference_wrapper<T>& ref)
: m_ref(ref)
{}
const T& get() const noexcept { return m_ref.get(); }
T& get() noexcept { return m_ref.get(); }
operator const T& () const noexcept { return m_ref.get(); }
operator T& () noexcept { return m_ref.get(); }
private:
std::reference_wrapper<T> m_ref;
};
It is a tiny class modeling the original requirements.
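The same vector can then feed both kinds of processing functions; the constness of the vector reference decides which get() overload is visible. A usage sketch, with the signatures adapted from the question:
void process_1(std::vector<ReferenceWrapper<A>>& vec);        // may call vec[i].get().bar()
void process_2(std::vector<ReferenceWrapper<A>> const& vec);  // vec[i].get() is const: only foo()

std::vector<ReferenceWrapper<A>> vec;
// process_1(vec);   // mutable access
// process_2(vec);   // read-only access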
Possible Duplicate:
How do I remove code duplication between similar const and non-const member functions?
In the following example :
template<typename Type, unsigned int Size>
class MyClass
{
public: inline Type& operator[](const unsigned int i)
{return _data[i];}
public: inline const Type& operator[](const unsigned int i) const
{return _data[i];}
protected: Type _data[Size];
};
the const and non-const operator[] are implemented independently.
In terms of design, is it better to have:
1) two independent implementations like here
2) one of the two functions calling the other one
If solution 2) is better, what would the code for the given example be?
It is a well-known and widely accepted implementation pattern for the non-const method to be implemented through its const counterpart, as in
class some_class {
const some_type& some_method(arg) const
{
...;
return something;
}
some_type& some_method(arg)
{
return const_cast<some_type&>(
const_cast<const some_class *>(this)->some_method(arg));
}
};
This is a perfectly valid technique, which essentially has no comparable (in convenience) alternatives in situations when the method body is relatively heavy. The evil of const_cast is significantly smaller than the evil of duplicated code.
However, when the body of the method is essentially a one-liner, it might be a better idea to stick to an explicit identical implementation, just to avoid this barely readable pileup of const_casts.
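Applied to the operator[] from the question, the pattern looks roughly like this (just a sketch; since the body here is a one-liner, writing it twice is arguably just as clear):
template<typename Type, unsigned int Size>
class MyClass
{
public: inline const Type& operator[](const unsigned int i) const
    {return _data[i];}
public: inline Type& operator[](const unsigned int i)
    {return const_cast<Type&>(static_cast<const MyClass&>(*this)[i]);}
protected: Type _data[Size];
};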
One can probably come up with a formally better designed castless solution implemented along the lines of
class some_class {
template <typename R, typename C>
static R& some_method(C *self, arg)
{
// Implement it here in terms of `self->...` and `R` result type
}
const some_type& some_method(arg) const
{
return some_method<const some_type>(this, arg);
}
some_type& some_method(arg)
{
return some_method<some_type>(this, arg);
}
};
but to me it looks even less elegant than the approach with const_cast.
You couldn't have either implementation calling the other one without casting away constness, which is a bad idea.
The const method can't call the non-const one.
The non-const method shouldn't call the const one because it'd need to cast the return type.
Unfortunately, "constness" templates don't work but I still think it is worth considering the overall idea:
// NOTE: this DOES NOT (yet?) work!
template <const CV>
Type CV& operator[](unsigned int index) CV {
...
}
For the time being, I'd implement trivial functions just twice. If the code become any more complex than a line or two, I'd factor the details into a function template and delegate the implementation.
My situation is the following:
I have a template wrapper that handles values and objects being nullable without having to manually deal with pointers or even new. It basically boils down to this:
struct null_t
{
// just a dummy
};
static const null_t null;
template<class T> class nullable
{
public:
nullable()
: _t(new T())
{}
nullable(const nullable<T>& source)
: _t(source == null ? 0 : new T(*source._t))
{}
nullable(const null_t& null)
: _t(0)
{}
nullable(const T& t)
: _t(new T(t))
{}
~nullable()
{
delete _t;
}
/* comparison and assignment operators */
const T& operator*() const
{
assert(_t != 0);
return *_t;
}
operator T&()
{
assert(_t != 0);
return *_t;
}
operator const T&() const
{
assert(_t != 0);
return *_t;
}
private:
T* _t;
};
With the comparison operators, I can check against the null_t dummy to see whether it is set to null before actually trying to retrieve the value, or before passing it into a function that requires the value and would trigger the automatic conversion.
This class has served me well for quite some time, until I stumbled upon an issue. I have a data class containing some structs which are all written out to a file (in this case XML).
So I have functions like these
xml_iterator Add(xml_iterator parent, const char* name,
const MyDataStruct1& value);
xml_iterator Add(xml_iterator parent, const char* name,
const MyDataStruct2& value);
which each fill an XML-DOM with the proper data. This also works correctly.
Now, however, some of these structs are optional, which in code would be declared as a
nullable<MyDataStruct3> SomeOptionalData;
And to handle this case, I made a template overload:
template<class T>
xml_iterator Add(xml_iterator parent, const char* name,
const nullable<T>& value)
{
if (value != null) return Add(parent, name, *value);
else return parent;
}
In my unit tests the compiler, as expected, always preferred this template function wherever a value or structure is wrapped in a nullable<T>.
If, however, I use the aforementioned data class (which is exported from its own DLL), then for some reason the very first time that last template function should be called, an automatic conversion from nullable<T> to the respective type T is done instead, completely bypassing the function meant to handle this case. As I've said above, all unit tests went 100% fine, and both the tests and the executable calling the code are built by MSVC 2005 in debug mode - the issue can definitely not be attributed to compiler differences.
Update: To clarify - the overloaded Add functions are not exported and are only used internally within the DLL. In other words, the external program which encounters this issue does not even include the header with the templated overload.
The compiler will prefer an exact match before it falls back to a templated version, but it will pick a templated "exact match" over another function that merely fits, e.g. one that uses a base class of your type.
Implicit conversions are dangerous and often bite you. It could simply be the way you are including your headers or the namespaces you are using.
I would do the following:
Make all the constructors of nullable explicit. Do this for any constructor that takes exactly one parameter, or that can be called with one (even if there are more parameters with default values).
template<class T> class nullable
{
public:
nullable()
: _t(new T())
{}
explicit nullable(const nullable<T>& source)
: _t(source == null ? 0 : new T(*source._t))
{}
explicit nullable(const null_t& null)
: _t(0)
{}
explicit nullable(const T& t)
: _t(new T(t))
{}
// rest
};
Replace the operator T& conversions with named functions. Use ref() for the non-const and cref() for the const.
I would also complete the class with
assignment operator (needed for rule of 3)
operator-> (two overloads, so you keep propagating the constness)
If you plan to use this with C++0x, also add the r-value copy and assignment, which are useful in this case.
By the way, be aware that your deep copy won't work with base classes, as they will slice.
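A small illustration of that remark (Base and Derived are made-up types; nullable<T> is the class from the question):
struct Base    { virtual ~Base() {} virtual int id() const { return 0; } };
struct Derived : Base { virtual int id() const { return 1; } };

Derived d;
nullable<Base> n(d);   // the constructor does `new Base(d)`: the Derived part is sliced off
// (*n).id() now returns 0, not 1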
Well, since no real answer was found so far, I've made a workaround. Basically, I put the aforementioned Add functions in a separate detail namespace and added two template wrapper functions:
template<class T>
xml_iterator Add(xml_iterator parent, const char* name,
const T& value)
{
return detail::Add(parent, name, value);
}
template<class T>
xml_iterator Add(xml_iterator parent, const char* name,
const nullable<T>& value)
{
return value != null ? detail::Add(parent, name, *value) : parent;
}
I found this to always properly resolve to the correct one of these two functions, and the function for the actual contained type is then chosen in a separate step inside them, as you can see.