Can a namespace (be a/satisfy a) Concept? - c++

I know the C++ Concepts proposal is intended, perhaps among other things, to place restrictions on template parameters (say, being a "Sequence"), over the current situation in which whatever manages to compile is good enough (and the error messages are abysmal).
But - what about namespaces? I mean, currently, we can't use them as template parameters, but one would think that if a method only uses the static methods and members of a class, then a namespace should also be a satisfactory thing to pass to it. Does the current version of / do current implementations of the Concepts proposal support that? If not, was this considered and rejected, or just not considered?
Related question:
Is a class with only static methods better than a namespace with only non-member functions?

Concepts adds no mechanism to pass namespaces at compile or run time. So there is no way to test a namespace against a concept, or parameterize code with a namespace, barring macros.
The Reflection TS may permit reflection over namespaces (I am not up to date on its current status), but that is orthogonal to concepts. Maybe reification and reflection of namespaces could be manipulated to permit concept checking of namespaces and passing them around somehow, but if that works today it might not tomorrow and vice versa, as it would rely on a side effect of two different still-evolving features; such a side effect would be accidental at best.
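For what it's worth, the usual workaround is to wrap the namespace's free functions in a class with static members, which can be passed as a template argument and constrained by a concept. A minimal sketch (math_ns, MathWrapper, twice, and quadruple are invented names for illustration):
#include <concepts>

namespace math_ns {
    inline int twice(int x) { return 2 * x; }
}

// A class with static members standing in for the namespace
struct MathWrapper {
    static int twice(int x) { return math_ns::twice(x); }
};

// A concept the wrapper type can satisfy
template<typename T>
concept HasTwice = requires(int x) {
    { T::twice(x) } -> std::same_as<int>;
};

template<HasTwice M>
int quadruple(int x) { return M::twice(M::twice(x)); }

int main() {
    return quadruple<MathWrapper>(1) == 4 ? 0 : 1;
    // quadruple<math_ns>(1); // ill-formed: a namespace is not a type
}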

Related

Does C++20 offer any new solutions to the problem of public member invisibility and source code bloat with inherited class templates?

Does C++20 offer any new solutions to the problem of public member invisibility and source code bloat/repetition with inherited class templates described in this question over 2 years ago?
The "Problem"
The alleged "problem" is that, in a template, an unqualified name used in a way which isn't dependent on the specialization is truly independent of the specialization and refers to the entity with that name found at that point. The alleged source code "bloat" is having to write this-> to explicitly make the name dependent, or having to qualify the name. This is still the situation in C++20.
Just to be clear, the set of entities is not known at the point we refer to them. In the linked question, the base class depends on the template parameter, and we have only seen the primary base class template. The base class template may be specialized later and may have completely different member functions than the ones we've seen. So, any "solution" requires that a name without any obvious contextual dependence on the specialization find entities not yet declared which may be surprising.
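A minimal sketch of the rule, with invented names:
template<typename T>
struct Base { void helper() {} };

template<typename T>
struct Derived : Base<T> {
    void run() {
        // helper();       // error: looked up at definition time, not found
        this->helper();    // OK: this-> makes the name dependent
        Base<T>::helper(); // OK: the qualified name is dependent too
    }
};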
Why It's Impossible
Any naive changes in this direction are either pretty big or have severe downsides, or both.
You could postpone all name lookup until instantiation time, discarding two-phase lookup. That invites ODR violations, which are silent UB - a huge downside.
You could restrict specialization so that you cannot specialize the base class later such that a different entity is found. That is difficult to diagnose, so it would likely be a new rule introducing silent UB.
You could opt in with a using declaration: using X::* as you propose or using class X as someone else suggested in a different context. This has the benefit of explicitness. It moves the problem a level up: if X is not dependent, presumably it should be found now, but if it is dependent, what happens? We can't instantiate it prior to instantiating the template we're in now. Thus, we can't interpret any names we see until instantiation. It has similar downsides to discarding two phase lookup.
Any of these changes would add complexity to an already complex area and would also pose significant backwards compatibility hurdles. None of them is a clear win.
Note: C++20 did make the rules more uniform by allowing ADL to find function templates when explicit template arguments are specified: f<int>(1).
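A quick sketch of that change (ns, S, and f are invented names):
namespace ns {
    struct S {};
    template<typename T> void f(S, T) {}
}

int main() {
    ns::S s;
    f<int>(s, 1); // OK in C++20: ADL finds ns::f even with explicit
                  // template arguments; before C++20 this failed to parse
                  // unless a template named f was already visible
}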
Why It's Not a Problem
I doubt there would be consensus that this really is a problem. The linked question makes a poor argument. The derived class adds member function behavior to a certain base class, but a free function works better. These behaviors did not need member access, they aren't required by the language to be members, they can be found as non-members with ADL, and by using a free function they apply even when the static type you have is the base type. So, using inheritance for this is unnecessary coupling, and is a worse option.
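A sketch of that non-member alternative, with invented names:
namespace lib {
    struct Widget { int value = 0; };

    // Non-member, found by ADL: no inheritance coupling, and it works
    // even when the static type at the call site is the base type
    int describe(const Widget& w) { return w.value; }
}

int main() {
    lib::Widget w;
    return describe(w); // unqualified call resolved via ADL
}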
Searching for 100s of places to add this-> and adding these 6 characters 100s of times strikes me as Code Bloat and Repetition when I have to templatize a base class
"Searching": The compiler will tell you when a name can't be found, which is better than silent bad behavior, such as making ODR violations easier to hit.
"templatize a base": Templatizing the base class doesn't trigger this. Templatizing the derived class and making the base dependent does. Yes, when templatizing the derived class, specifying that a bare name used in a way that is independent of the template parameter is in fact dependent may seem like boilerplate to some, but others might argue being explicit is clearer.
"100s of times": Seems hyperbolic.
These code patterns are used all the time in real world. Just look at CRTP. (comment on linked question)
Again, this only applies if the derived class is templated. I would dispute the commonality, but these idioms do exist and have a place.
Most importantly, though, is that CRTP is not a goal. CRTP is a hack. It's a C++ idiom because C++ lacks better facilities. CRTP allows a class to opt into certain behaviors that would otherwise be bothersome to write. Relevant C++ proposals do exist, but by and large, they have focused on making extension easier or removing boilerplate, and not on making CRTP, the hack, easier.
These are some that come to mind:
C++20: Comparisons
A very common use of CRTP is for things that require a lot of extra boilerplate. C++ required you to define operator== and operator!=, for instance. By opting into a CRTP base class, one could define only the primitive operation and have the other one generated.
C++20 fixed the underlying problem with comparisons. The typical member-wise comparison can be defaulted, and comparisons can be re-written so that != can invoke ==.
The problem is solved at the root, removing CRTP, not enhancing it.
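A minimal sketch of the C++20 mechanism (Point is an invented example):
#include <compare>

struct Point {
    int x, y;
    // one defaulted operator yields the whole family: ==, !=, <, <=, >, >=
    auto operator<=>(const Point&) const = default;
};

int main() {
    Point a{1, 2}, b{1, 3};
    return (a != b && a < b) ? 0 : 1; // != is rewritten in terms of ==
}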
C++20: Iterators to Ranges
Another common use of CRTP in the same vein as above is iterators. Writing custom iterators requires only a few fundamental operations: advance and dereference for forward iterators. But then there's a lot of extra seemingly unnecessary ceremony: pre-increment, post-increment, const iterators, typedefs, etc.
C++20 took a large step forward by introducing range concepts and the range library. The result is that it should be much less necessary to write a custom iterator. Ranges become a capable concept on their own, and there's a good suite of range combinators.
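For instance, a sketch of a pipeline built entirely from standard range adaptors, with no hand-written iterator:
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    // keep the even values, square them; no custom iterator ceremony
    for (int x : v | std::views::filter([](int i) { return i % 2 == 0; })
                   | std::views::transform([](int i) { return i * i; }))
        std::cout << x << ' '; // prints: 4 16 36
}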
C++20: Concepts
C++ essentially has two systems for specifying an interface: virtual functions and concepts. They have trade-offs. Virtual functions are intrusive. But prior to C++20, concepts were implicit and emulated. One reason to use CRTP or inheritance generally would be to inject virtual functions. But in C++20, concepts are a language feature removing one big negative.
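A sketch of a concept as a non-intrusive interface (Drawable, Circle, and render are invented names):
#include <concepts>

template<typename T>
concept Drawable = requires(const T& t) {
    { t.draw() } -> std::same_as<void>;
};

struct Circle {
    void draw() const {} // satisfies Drawable without inheriting anything
};

template<Drawable D>
void render(const D& d) { d.draw(); }

int main() {
    render(Circle{});
}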
Future C++: Metaclasses
One value of CRTP, in addition to the boilerplate reduction, is satisfying an entire collection of type requirements at once. A comparable class defines all the comparison operators. A clonable base defines a virtual destructor and clone.
This is the topic of metaclasses, which is not yet in C++.
See Also
See also the work on customization points, which seems very interesting. And see the debates on unified function call syntax, which it seems we'll never get.
Summary
There's a very good question hiding in here about how C++20 makes it easier to reduce boilerplate, remove hacks like CRTP, and write better and clearer code. C++20 takes several steps in this regard, but they make the expression of intent easier rather than making any particular idiom easier.

Pertinence of void pointers

Looking through a colleague's code, I see that some of its handles are stored as void pointers.
// Class header
void* hSomeSdk;
// Class implementation
hSomeSdk = new SomeSDK(...);
((SomeSDK*)hSomeSdk)->DoSomeWork();
Now I know that sometimes handles are void pointers because it may be unknown before runtime what will be the actual type of the handle. Or that it can help when we need to share the pointer without revealing its actual structure. But this does not seem to be the case in my situation: it will always be SomeSDK and it is not shared outside the class where it is created. Also the author of this code is gone from the company.
Are there other reasons why it would make sense to have it be a void pointer?
Since this is a member variable, I'm gonna go out on a limb and say your colleague wanted to minimize dependencies. Including the header for SomeSDK is probably undesirable just to define a pointer. The colleague may have had one of two reasons as far as I can see from the code you show:
They just didn't know they can add a forward declaration like class SomeSDK; to allow defining pointers. Some programmers just aren't aware of it.
They couldn't forward declare it. If SomeSDK is not a class but a type alias (aka typedef), then it's not possible to forward declare it exactly. One can only forward declare the class it aliases, but that in turn may be an implementation detail that's hard to keep track of. Even the standard library has a similar problem; that is why it provides <iosfwd> to make forward declaring standard stream types easier.
If the code is peppered with casts of this handle, then the design should have been reworked ages ago. Otherwise, if it's in one place (or a few at most) only, I can see why the people maintaining it could live with it peacefully.
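For reference, a sketch of the forward-declaration alternative, assuming SomeSDK really is a class (Wrapper is an invented name):
// Class header: a forward declaration, no #include of the SDK header
class SomeSDK;

class Wrapper {
    SomeSDK* hSomeSdk; // a pointer to an incomplete type is fine here
public:
    Wrapper();
    void Work();
};

// Class implementation: the full definition is needed only here
#include "SomeSDK.h"

Wrapper::Wrapper() : hSomeSdk(new SomeSDK(/*...*/)) {}
void Wrapper::Work() { hSomeSdk->DoSomeWork(); } // no cast required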
Nope.
If I had to guess, the ex-colleague was unfamiliar with forward declarations and thus didn't know they could still do SomeSDK* in the header without including the entire SomeSDK definition.
Given the constraints you've mentioned, the only thing this pattern achieves is to eliminate some type safety, make the code harder to read/maintain, and generate a Stack Overflow question.
void* was popular and necessary back in C. It is convenient in the sense that it can be converted to and from any object pointer type. Note that converting, say, a double* to a char* with static_cast requires an intermediate cast through void*.
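For instance:
int main() {
    double d = 1.0;
    double* dp = &d;
    // static_cast<char*>(dp) is ill-formed: static_cast only allows this
    // conversion through an intermediate void*
    char* cp = static_cast<char*>(static_cast<void*>(dp));
    // reinterpret_cast<char*>(dp) would do the same in a single step
    return cp ? 0 : 1;
}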
The problem with void* is that they are too flexible: they do not convey intentions of the writer, making them very unsafe especially in big projects.
In Object Oriented Design it is popular to create abstract interface classes (all members virtual and not implemented), make pointers to such classes, and then instantiate various possible implementations depending on the usage.
However, nowadays it is more often recommended to work with templates (a main advantage of C++ over other languages), as they are much faster and enable more compile-time optimization than OOD allows. Unfortunately, working with templates is still a huge hassle: they have more complicated syntax, and it is difficult to convey the writer's intentions to users about the restrictions and requirements of the template parameters (Concepts, which solve this problem decently, will be available in C++20 - currently there is only SFINAE, a horrible stopgap from 20 years ago - while the Reflection TS, which will greatly enhance generic programming in C++, is unlikely to be available even in C++23).

How does the compiler define the classes in type_traits?

In C++11 and later, the <type_traits> header contains many classes for type checking, such as std::is_empty, std::is_polymorphic, std::is_trivially_constructible and many others.
While we use these classes just like normal classes, I cannot figure out any way to possibly write the definition of these classes. No amount of SFINAE (even with C++14/17 rules) or other methods seems to be able to tell if a class is polymorphic, empty, or satisfies other such properties. A class that is empty still occupies a positive amount of space, since every object must have a unique address.
How then, might compilers define such classes in C++? Or perhaps it is necessary for the compiler to be intrinsically aware of these class names and parse them specially?
Back in the olden days, when people were first fooling around with type traits, they wrote some really nasty template code in attempts to write portable code to detect certain properties. My take on this was that you had to put a drip-pan under your computer to catch the molten metal as the compiler overheated trying to compile this stuff. Steve Adamczyk, of Edison Design Group (provider of industrial-strength compiler frontends), had a more constructive take on the problem: instead of writing all this template code that takes enormous amounts of compiler time and often breaks them, ask me to provide a helper function.
When type traits were first formally introduced (in TR1, 2006), there were several traits that nobody knew how to implement portably. Since TR1 was supposed to be exclusively library additions, these couldn't count on compiler help, so their specifications allowed them to get an answer that was occasionally wrong, but they could be implemented in portable code.
Nowadays, those allowances have been removed; the library has to get the right answer. The compiler help for doing this isn't special knowledge of particular templates; it's a function call that tells you whether a particular class has a particular property. The compiler can recognize the name of the function, and provide an appropriate answer. This provides a lower-level toolkit that the traits templates can use, individually or in combination, to decide whether the class has the trait in question.
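As a sketch of how a trait can be built on such a hook: __is_polymorphic happens to be the spelling that GCC, Clang, and MSVC all provide, though intrinsic names vary by compiler and the real library implementations differ:
template<typename T>
struct my_is_polymorphic {
    static constexpr bool value = __is_polymorphic(T); // compiler intrinsic
};

struct Plain {};
struct Virt { virtual ~Virt() = default; };

static_assert(!my_is_polymorphic<Plain>::value);
static_assert( my_is_polymorphic<Virt>::value);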

Why and how should I use namespaces in C++?

I have never used namespaces for my code before. (Other than for using STL functions)
Other than for avoiding name conflicts, is there any other reason to use namespaces?
Do I have to enclose both declarations and definitions in namespace scope?
One reason that's often overlooked is that simply by changing a single line of code to select one namespace over another you can select an alternative set of functions/variables/types/constants - such as another version of a protocol, single-threaded versus multi-threaded support, OS support for platform X or Y - then compile and run. The same kind of effect might be achieved by including a header with different declarations, or with #defines and #ifdefs, but that crudely affects the entire translation unit and if linking different versions you can get undefined behaviour. With namespaces, you can make selections via using namespace that only apply within the active namespace, or do so via a namespace alias so they only apply where that alias is used, but they're actually resolved to distinct linker symbols so can be combined without undefined behaviour. This can be used in a way similar to template policies, but the effect is more implicit, automatic and pervasive - a very powerful language feature.
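A minimal sketch of that single-line selection (the protocol namespaces are invented names):
namespace protocol_v1 { inline int version() { return 1; } }
namespace protocol_v2 { inline int version() { return 2; } }

// change this one line to switch every protocol:: reference below
namespace protocol = protocol_v2;

int main() {
    return protocol::version(); // resolves to v2's distinct linker symbols
}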
UPDATE: addressing marcv81's comment...
Why not use an interface with two implementations?
"interface + implementations" is conceptually what choosing a namespace to alias above is doing, but if you mean specifically runtime polymorphism and virtual dispatch:
the resultant library or executable doesn't need to contain all implementations and constantly direct calls to the selected one at runtime
as one implementation is incorporated, the compiler can use myriad optimisations including inlining and dead code elimination, and constants differing between the "implementations" can be used for e.g. sizes of arrays - allowing automatic memory allocation instead of slower dynamic allocation
different namespaces have to support the same semantics of usage, but aren't bound to support the exact same set of function signatures as is the case for virtual dispatch
with namespaces you can supply custom non-member functions and templates: that's impossible with virtual dispatch (and non-member functions help with symmetric operator overloading - e.g. supporting 22 + my_type as well as my_type + 22; see the sketch after this list)
different namespaces can specify different types to be used for certain purposes (e.g. a hash function might return a 32 bit value in one namespace, but a 64 bit value in another), but a virtual interface needs to have unifying static types, which means clumsy and high-overhead indirection like boost::any or boost::variant or a worst case selection where high-order bits are sometimes meaningless
virtual dispatch often involves compromises between fat interfaces and clumsy error handling: with namespaces there's the option to simply not provide functionality in namespaces where it makes no sense, giving a compile-time enforcement of necessary client porting effort
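Here is the sketch promised in the operator-overloading point above (MyType is an invented name):
struct MyType { int v; };

// Non-member overloads allow both operand orders; a member operator+
// could only support my_type + 22
MyType operator+(MyType a, int b) { return {a.v + b}; }
MyType operator+(int a, MyType b) { return {a + b.v}; }

int main() {
    MyType m{20};
    MyType r1 = m + 22; // works with member or non-member
    MyType r2 = 22 + m; // only possible with a non-member
    return r1.v + r2.v;
}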
Here is a good reason (apart from the obvious stated by you).
Since namespaces can be discontiguous and spread across translation units, they can also be used to separate interface from implementation details.
Definitions of names in a namespace can be provided either in the same namespace or in any of the enclosing namespaces (with fully qualified names).
It can help comprehension.
e.g.:
std::func <- all functions/classes from the C++ standard library
lib1::func <- all functions/classes from a specific library
module1::func <- all functions/classes for a module of your system
You can also think of namespaces as modules in your system.
It can also be useful for writing documentation (e.g. you can easily document namespace entities in doxygen)
Aren't name collisions enough of a reason? ADL subtleties, especially with operator overloads, are another.
Enclosing both is the easiest way. You can also prefix names with the namespace, e.g. my_namespace::name, when defining.
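A minimal sketch of both options:
namespace my_namespace {
    void f();   // declaration inside the namespace
    void g() {} // the definition can be enclosed too
}

// or define later, outside, using the qualified name:
void my_namespace::f() {}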
You can think of namespaces as logically separated units for your application. Suppose we have two different classes, each in its own file; when you notice that these classes share enough to be categorized under one category, that's a strong reason to put them in one namespace.
Answer: If you ever want to overload the new, placement new, or delete functions you're going to want to do them in a namespace. No one wants to be forced to use your version of new if they don't require the things you require.
Yes

Internal typedefs in C++ - good style or bad style?

Something I have found myself doing often lately is declaring typedefs relevant to a particular class inside that class, i.e.
class Lorem
{
public: // the typedefs must be public to be usable outside the class, as below
    typedef boost::shared_ptr<Lorem> ptr;
    typedef std::vector<Lorem::ptr> vector;
    //
    // ...
    //
};
These types are then used elsewhere in the code:
Lorem::vector lorems;
Lorem::ptr lorem( new Lorem() );
lorems.push_back( lorem );
Reasons I like it:
It reduces the noise introduced by the class templates, std::vector<Lorem> becomes Lorem::vector, etc.
It serves as a statement of intent - in the example above, the Lorem class is intended to be reference counted via boost::shared_ptr and stored in a vector.
It allows the implementation to change - i.e. if Lorem needed to be changed to be intrusively reference counted (via boost::intrusive_ptr) at a later stage then this would have minimal impact to the code.
I think it looks 'prettier' and is arguably easier to read.
Reasons I don't like it:
There are sometimes issues with dependencies - if you want to embed, say, a Lorem::vector within another class but only need (or want) to forward declare Lorem (as opposed to introducing a dependency on its header file) then you end up having to use the explicit types (e.g. boost::shared_ptr<Lorem> rather than Lorem::ptr), which is a little inconsistent.
It may not be very common, and hence harder to understand?
I try to be objective with my coding style, so it would be good to get some other opinions on it so I can dissect my thinking a little bit.
I think it is excellent style, and I use it myself. It is always best to limit the scope of names as much as possible, and use of classes is the best way to do this in C++. For example, the C++ Standard library makes heavy use of typedefs within classes.
It serves as a statement of intent -
in the example above, the Lorem class
is intended to be reference counted
via boost::shared_ptr and stored in a
vector.
This is exactly what it does not do.
If I see 'Foo::Ptr' in the code, I have absolutely no idea whether it's a shared_ptr or a Foo* (the STL has ::pointer typedefs that are T*, remember) or whatever. Especially if it's a shared pointer, I don't provide a typedef at all, but keep the shared_ptr usage explicit in the code.
Actually, I hardly ever use typedefs outside Template Metaprogramming.
The STL does this type of thing all the time
The STL design with concepts defined in terms of member functions and nested typedefs is a historical cul-de-sac, modern template libraries use free functions and traits classes (cf. Boost.Graph), because these do not exclude built-in types from modelling the concept and because it makes adapting types that were not designed with the given template libraries' concepts in mind easier.
Don't use the STL as a reason to make the same mistakes.
Typedefs are what policy-based design and traits are built upon in C++, so much of the power of generic programming in C++ stems from typedefs themselves.
Typedefs are definitely good style. And all your "reasons I like" are good and correct.
About problems you have with that. Well, forward declaration is not a holy grail. You can simply design your code to avoid multi level dependencies.
You can move the typedef outside the class, but Class::ptr is so much prettier than ClassPtr that I don't do this. It is like with namespaces, as far as I'm concerned - things stay connected within the scope.
Sometimes I did
Trait<Lorem>::ptr
Trait<Lorem>::collection
Trait<Lorem>::map
And it can be the default for all domain classes, with some specialization for certain ones.
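A sketch of that traits pattern (using std::shared_ptr here to keep it self-contained; the question's boost::shared_ptr works the same way):
#include <map>
#include <memory>
#include <string>
#include <vector>

// Primary template: defaults for all domain classes
template<typename T>
struct Trait {
    typedef std::shared_ptr<T> ptr;
    typedef std::vector<ptr> collection;
    typedef std::map<std::string, ptr> map;
};

class Lorem {};

// A specialization could swap in, say, an intrusive pointer for one class:
// template<> struct Trait<Lorem> { /* ... */ };

int main() {
    Trait<Lorem>::ptr p = std::make_shared<Lorem>();
    Trait<Lorem>::collection c;
    c.push_back(p);
}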
The STL does this type of thing all the time - the typedefs are part of the interface for many classes in the STL.
reference
iterator
size_type
value_type
etc...
are all typedefs that are part of the interface for various STL template classes.
Another vote for this being a good idea. I started doing this when writing a simulation that had to be efficient, both in time and space. All of the value types had a Ptr typedef that started out as a boost shared pointer. I then did some profiling and changed some of them to a boost intrusive pointer without having to change any of the code where these objects were used.
Note that this only works when you know where the classes are going to be used, and that all the uses have the same requirements. I wouldn't use this in library code, for example, because you can't know when writing the library the context in which it will be used.
Currently I'm working on code that intensively uses these kinds of typedefs. So far, that is fine.
But I noticed that there are quite often iterative typedefs: the definitions are split among several classes, and you never really know what type you are dealing with. My task is to summarize the sizes of some complex data structures hidden behind these typedefs, so I can't rely on existing interfaces. Combine that with three to six levels of nested namespaces, and it becomes confusing.
So before using them, there are some points to be considered
Does anyone else need these typedefs? Is the class used a lot by other classes?
Do I shorten the usage or hide the class? (In case of hiding you also could think of interfaces.)
Are other people working with the code? How do they do it? Will they think it is easier or will they become confused?
When the typedef is used only within the class itself (i.e. is declared private) I think it's a good idea. However, for exactly the reasons you give, I would not use it if the typedefs need to be known outside the class. In that case I recommend moving them outside the class.
I recommend moving those typedefs outside the class. This way, you remove the direct dependency on the shared pointer and vector classes, and you can include them only when needed. Unless you are using those types in your class implementation, I consider that they shouldn't be inner typedefs.
The reasons you like it still hold, since they are achieved by the type aliasing through typedef itself, not by declaring the aliases inside your class.
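A sketch of that arrangement (LoremPtr and LoremVector are invented names); note the aliases need only a forward declaration:
#include <memory>
#include <vector>

class Lorem; // forward declaration is enough for the aliases below

typedef std::shared_ptr<Lorem> LoremPtr;
typedef std::vector<LoremPtr> LoremVector;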