In this article, the writer asserts:
...the program did show that the template instantiation mechanism is a primitive recursive language that can perform nontrivial computations at compile time.
I found this rather interesting, as I help to teach a class in Theory of Computation which delves into the theory of primitive recursive functions. However, I was under the impression that Template Metaprogramming was Turing-complete, which is a strictly stronger statement than to say that it is primitive recursive...And after all, it is not very difficult to create a template metaprogram which fails to halt.
Am I missing something? Is Template Metaprogramming a strictly primitive recursive language, or am I correct in believing it to cover a wider range of programs?
I believe you are just reading too much into the text: "primitive" is not meant as in "primitive recursive"; rather, it is a "recursive language" (which sounds odd, I'd describe it as a functional language, but never mind) that happens to be primitive.
If you look at TMP as a functional language, it is not a very sophisticated one; thus, it is a primitive one.
But you are correct, TMP certainly is Turing-complete.
I doubt many people have heard of primitive recursive languages, and so this is just an unfortunate choice of words.
Unruh's program only demonstrated primitive recursion, that is, looping (I was actually present when this happened!). However, it was immediately recognized that full recursion was supported (because, indeed, the implementation did not do tail-recursion optimisation).
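To see the gap between looping and full recursion, here is a minimal sketch (not Unruh's program): the Ackermann function is total but not primitive recursive, and the template instantiation engine computes it anyway.

// Ackermann via recursive template instantiation; the recursion
// depth is not bounded by the inputs in a primitive recursive way.
template <unsigned M, unsigned N>
struct Ackermann {
    static const unsigned value =
        Ackermann<M - 1, Ackermann<M, N - 1>::value>::value;
};

template <unsigned N>
struct Ackermann<0, N> { static const unsigned value = N + 1; };

template <unsigned M>
struct Ackermann<M, 0> { static const unsigned value = Ackermann<M - 1, 1>::value; };

template <>
struct Ackermann<0, 0> { static const unsigned value = 1; };  // disambiguates the two partial specializations

static_assert(Ackermann<2, 3>::value == 9, "A(2,3) computed at compile time");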
Related
I found out about Concepts while reviewing C++20 features. I found that they add validation of template arguments, but apart from that I don't understand what the real-world use cases of C++20 concepts are.
C++ already has things like std::is_integral and they can perform validation very well.
I'm sure I am missing something about C++20 concepts and what it enables.
SFINAE was an accidentally Turing-complete sublanguage that executes at overload-resolution and template-specialization-selection time.
Turns out it is used a lot in template code.
Concepts and requires clauses are an attempt to take that accidentally useful language feature and make it suck less.
The original plan for concepts had 3 pieces: (a) describing what is required for a given bit of template code in a clean way, (b) providing a way to map other types to satisfy those requirements non-intrusively, and (c) checking template code so that any type which satisfies the concept is guaranteed to compile.
All attempts at (a) plus (c) sucked, usually taking forever to compile and/or restricting what you can check with (a). (b) was also dropped to ensure (a) was better; you can write such concept map machinery manually in many cases, but C++ doesn't provide it for you.
So, now what is it good for?
auto sum( Addable auto... values )
This uses the concept Addable to concisely express the interface of a template. The error messages you get when passing a non-addable type highlight both that it isn't Addable and the expression that doesn't work.
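A minimal sketch of what Addable might look like (it is not a standard concept; this definition is an assumption for illustration):

#include <concepts>

// Hypothetical Addable: a + b must be well-formed and convertible back to T.
template <class T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

constexpr auto sum(Addable auto... values) {
    return (values + ...);  // fold over operator+
}

static_assert(sum(1, 2, 3.5) == 6.5);  // passing a non-addable type names the failed requirement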
template<class T, class A>
struct vector {
    bool operator==(vector<T, A> const& o) const
        requires std::equality_comparable<T>;
};
Here, we state that this vector has a == if and only if T does. Doing this before concepts was an annoying undertaking, and even writing that specification into the standard was.
This is the Turing tar-pit: everything is equivalent, but nothing is easy. All programs can be written with I/O plus a single 3-argument instruction (a = a - b; if (a < 0) goto c;), but a richer language makes programs suck less. Concepts take an esoteric branch of C++, SFINAE, make it cleaner and simpler (so more people can leverage it), and improve error messages.
I've spent the day reading notes and watching a video on boost::fusion and I really don't get some aspects of it.
Take, for example, the boost::fusion::has_key<S> function. What is the purpose of having this in boost::fusion? Is the idea that we just try to move as much programming as possible to happen at compile time? So pretty much any boost::fusion function is the same as the run-time version, except it now evaluates at compile time? (And we assume doing more at compile time is good?)
Related to boost::fusion, I'm also a bit confused why metafunctions always return types. Why is this?
Another way to look at boost::fusion is to think of it as a "poor man's introspection" library. The original motivation for boost::fusion comes from the boost::spirit parser/generator framework, in particular the need to support what are called "parser attributes".
Imagine, you've got a CSV string to parse:
aaaa, 1.1
The type this string parses into can be described as "tuple of string and double". We can define such tuples in "plain" C++, either with old-school structs (struct { string a; double b; }) or the newer tuple<string, double>. The only thing we miss is some sort of adapter, which will allow us to pass tuples (and some other types) of arbitrary composition to a unified parser interface and expect it to make sense of them without passing any out-of-band information (such as the string parsing templates used by scanf).
That's where boost::fusion comes into play. The most straightforward way to construct a "fusion sequence" is to adapt a normal struct:
#include <string>
#include <boost/fusion/include/adapt_struct.hpp>

struct a {
    std::string s;
    double d;
};

// Expose the members of `a` to Fusion as a (string, double) sequence
BOOST_FUSION_ADAPT_STRUCT(a, (std::string, s)(double, d))
The "ADAPT_STRUCT" macro adds the necessary information for parser framework (in this example) to be able to "iterate" over members of struct a to the tune of the following questions:
I just parsed a string. Can I assign it to first member of struct a?
I just parsed a double. Can I assign it to second member of struct a?
Are there any other members in struct a or should I stop parsing?
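Here is a minimal sketch of such member-wise iteration (illustrative; it reuses struct a as adapted above):

#include <iostream>
#include <boost/fusion/include/for_each.hpp>

int main() {
    a rec{"aaaa", 1.1};
    // The generic lambda is instantiated once per member type;
    // Fusion visits s, then d, in declaration order.
    boost::fusion::for_each(rec, [](auto const& member) {
        std::cout << member << '\n';
    });
}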
Obviously, this basic example can be further extended (and boost::fusion supplies the capability) to address much more complex cases:
Variants - let's say the parser can encounter either a string or a double and wants to assign it to the right member of struct a. BOOST_FUSION_ADAPT_ASSOC_STRUCT comes to the rescue (now our parser can ask questions like "which member of struct a is of type double?").
Transformations - our parser can be designed to accept certain types as parameters, but the rest of the program has changed quite a bit. Yet fusion metafunctions can be conveniently used to adapt new types to old realities (or vice versa).
The rest of boost::fusion functionality naturally follows from the above basics. Fusion really shines when there's a need for conversion (in either direction) of "loose IO data" to the strongly typed/structured data C++ programs operate upon (when efficiency is of concern). It is the enabling factor behind spirit::qi and spirit::karma being such efficient (probably the fastest) I/O frameworks.
Fusion is there as a bridge between compile-time and run-time containers and algorithms. You may or may not want to move some of your processing to compile-time, but if you do want to then Fusion might help. I don't think it has a specific manifesto to move as much as possible to compile-time, although I may be wrong.
Meta-functions return types because template meta-programming wasn't invented on purpose. It was discovered more or less by accident that C++ templates can be used as a compile-time programming language. A meta-function is a mapping from template arguments to instantiations of a template. As of C++03 there are two kinds of template (class and function), therefore a meta-function has to "return" either a class or a function. Classes are more useful than functions, since you can put values etc. in their static data members.
C++11 adds another kind of template (for typedefs), but that is kind of irrelevant to meta-programming. More importantly for compile-time programming, C++11 adds constexpr functions. They're properly designed for the purpose and they return values just like normal functions. Of course, their input is not a type, so they can't be mappings from types to something else in the way that templates can. So in that sense they lack the "meta-" part of meta-programming. They're "just" compile-time evaluation of normal C++ functions, not meta-functions.
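A small illustrative contrast (these remove_pointer and factorial definitions are sketches, not the standard library versions):

// A classic type meta-function: "returns" by exposing a nested type.
template <class T>
struct remove_pointer     { using type = T; };

template <class T>
struct remove_pointer<T*> { using type = T; };  // the mapping T* -> T

// A C++11 constexpr function: returns a value like a normal function,
// but its inputs are values, not types.
constexpr unsigned factorial(unsigned n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(5) == 120, "evaluated at compile time");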
From what I understand, standard layout allows three things:
Empty base class optimization
Backwards compatibility with C with certain pointer casts
Use of offsetof
Now, included in the library is the is_standard_layout predicate metafunction, but I can't see much use for it in generic code, as it seems extremely rare that generic code would need to check for the C features I listed above. The only thing I can think of is using it inside static_assert, but that only makes code more robust and isn't required.
How is is_standard_layout useful? Are there any things which would be impossible without it, thus requiring it in the standard library?
General response
It is a way of validating assumptions. You wouldn't want to write code that assumes standard layout if that wasn't the case.
C++11 provides a bunch of utilities like this. They are particularly valuable for writing generic code (templates) where you would otherwise have to trust the client code to not make any mistakes.
Notes specific to is_standard_layout
It looks to me like the definition of is_pod would roughly be...

// note: the standard applies these checks recursively to all members
template <class T>
struct is_pod : std::integral_constant<bool,
    std::is_standard_layout<T>::value && std::is_trivial<T>::value> {};
So, you need to know is_standard_layout in order to implement is_pod. Given that, we might as well expose is_standard_layout as a tool available to library developers. Also of note: if you have a use-case for is_pod, you might want to consider the possibility that is_standard_layout might actually be a better (more accurate) choice in that case, since POD is essentially a subset of standard layout.
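As a concrete example of the validation use-case (Header is a made-up type):

#include <cstdint>
#include <type_traits>

struct Header {
    std::uint32_t id;
    std::uint16_t len;
};

// Check the assumption before relying on offsetof or byte-wise I/O.
static_assert(std::is_standard_layout<Header>::value,
              "Header must be standard-layout");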
I get the feeling that they added every conceivable variant of type evaluation, regardless of any obvious value, just in case someone might encounter a need sometime before the next standard comes out. I doubt if piling on these "extra" type properties adds a significant additional burden to compiler developers.
There is a nice discussion of standard layout here: Why is C++11's POD "standard layout" definition the way it is?
There is also a lot of good detail at cppreference.com: Non-static data members
When using objects I sometimes test for their existence, e.g.:

if (object)
    object->Use();

Could I just use

(object && object->Use());

and if so, what differences are there, if any?
They're the same, assuming object->Use() returns something that's valid in a boolean context. If it returns void, the compiler will complain that a void return isn't being ignored as it should be, and other return types that don't fit will give you something like no match for 'operator&&'.
One enormous difference is that the two function very differently if operator&& has been overloaded. Short circuit evaluation is only provided for the built in operators. In the case of an overloaded operator, both sides will be evaluated [in an unspecified order; operator&& also does not define a sequence point in this case], and the results passed to the actual function call.
If object and the return type of object->Use() are both primitive types, then you're okay. But if either are of class type, then it is possible object->Use() will be called even if object evaluates to false.
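A minimal sketch of that pitfall (Tracer and make are made-up names):

#include <iostream>

struct Tracer { bool value; };

// Overloaded && : both operands are fully evaluated before the call,
// so there is no short circuit.
bool operator&&(Tracer a, Tracer b) { return a.value && b.value; }

Tracer make(bool v, const char* name) {
    std::cout << "evaluated " << name << '\n';
    return Tracer{v};
}

int main() {
    // Prints both "evaluated" lines even though the left side is false;
    // the built-in && would have skipped the right side entirely.
    bool r = make(false, "lhs") && make(true, "rhs");
    std::cout << r << '\n';
}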
They are effectively the same thing but the second is not as clear as your first version, whose intent is obvious. Execution speed is probably no different, either.
Functionally they are the same, and a decent compiler should be able to optimize both equally well. However, writing an expression with operators like this and not checking the result is very odd. Perhaps if this style were common, it would be considered concise and easy to read, but it's not - right now it's just weird. You may get used to it and it could make perfect sense to you, but to others who read your code, their first impression will be, "What the heck is this?" Thus, I recommend going with the first, commonly used version if only to avoid making your fellow programmers insane.
When I was younger I think I would have found that appealing. I always wanted to trim down lines of code, but I realized later on that when you deviate too far from the norm, it'll bite you in the long run when you start working with a team. If you want to achieve zen-programming with minimum lines of code, focus on the logic more than the syntax.
I wouldn't do that. If operator&& is overloaded for the pointer type of object and the class type returned by object->Use(), all bets are off and there is no short-circuit evaluation.
Yes, you can. You see, the C language, as well as C++, is a mix of two fairly independent worlds, or realms, if you will. There's the realm of statements and the realm of expressions. Each one can be seen as a separate sub-language in itself, with its own implementations of basic programming constructs.
In the realm of statements, the sequencing is achieved by the ; at the end of the single statement or by the } at the end of compound statement. In the realm of expressions the sequencing is provided by the , operator.
Branching in the realm of statements is implemented by the if statement, while in the realm of expressions it can be implemented by either the ?: operator or by use of the short-circuit evaluation properties of the && and || operators (which is what you just did, assuming your expression is valid).
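A small sketch of the same branch written in each realm (illustrative only):

#include <cstdio>

int main() {
    int x = 5;

    // Realm of statements:
    if (x > 0)
        std::puts("positive");

    // Realm of expressions, via the conditional operator:
    x > 0 ? std::puts("positive") : 0;

    // Realm of expressions, via short-circuit && (as in the question):
    (x > 0) && (std::puts("positive"), true);
}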
The realm of expressions has no loops, but it has recursion that can replace them (this requires function calls though, which inevitably forces us to switch to statements).
Obviously these realms are far from being equivalent in their power. C and C++ are languages dominated by statements. However, often one can implement fairly complex constructs using the language of expressions alone.
What you did above does implement the equivalent branching in the language of expressions. Keep in mind that many people will find it hard to read in normal code (mostly because, once again, they are used to statement-dominated C and C++ code). But it often comes in very handy in some specific contexts, like template metaprogramming, for one example.
One could break the question into two: how to read templated code, and how to write it.
It is very easy to say, "if you want an array of doubles, write std::vector<double>", but it won't teach them how templates work.
I'd probably try to demonstrate the power of templates, by demonstrating the annoyance of not using them.
A good demonstration would be to write something simple like a stack of doubles (hand-written, not STL), with methods push, pop, and foldTopTwo, which pops off and adds together the top two values in the stack, and pushes the new value back on.
Then tell them to do the same for ints (or whatever, just some different numeric type).
Then show them how, by writing this stack as a template, you can significantly reduce the number of lines of code, and all of that horrible duplication.
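A minimal sketch of where that exercise ends up (illustrative code, following the method names above):

#include <vector>

// One template replaces the separate double and int stacks.
template <class T>
class Stack {
    std::vector<T> data;
public:
    void push(T v) { data.push_back(v); }
    T pop() {
        T top = data.back();
        data.pop_back();
        return top;
    }
    // Pops off the top two values, pushes their sum back on.
    void foldTopTwo() {
        T a = pop(), b = pop();
        push(a + b);
    }
};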
There is a saying: "If you can't explain it, you don't understand it."
You can break it down further: How to write code that uses templated code, and how to write code that provides a templated service to others.
The basic explanation is that templates generate code based on a template. That is the source of the term "meta-programming": it is programming how programming should be done.
The essential complexity of a vector is not that it is a vector of doubles (or type T), but that it is a vector. The basic structure is the same and templates separate that which is consistent from that which is not.
Further explanation depends on how much of that makes sense to you!
IMHO it is best to explain them as (very) fancy macros. They just work at a higher level than C-style text-substitution macros.
I found it very instructive to look at duck-typed languages. It doesn't matter, there, what type of argument you give a function, as long as it offers the right interface.
Templates allow you to do the same thing: you can take any type, as long as the right interface is present. The additional benefit over duck typing is that the interface is checked at compile time.
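A small sketch of the idea (made-up types for illustration):

// Any type with a quack() works; the interface is checked
// when the template is instantiated, at compile time.
template <class T>
void makeItQuack(T& t) {
    t.quack();  // compile error if T has no quack()
}

struct Duck  { void quack() {} };
struct Robot { void quack() {} };  // unrelated type, same interface

int main() {
    Duck d;
    Robot r;
    makeItQuack(d);  // OK
    makeItQuack(r);  // OK - no common base class needed
}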
Present them as advanced macros. It's a programming language of its own that is executed during compilation.
I would get them to implement something themselves, then experiment with different variations until they understand it. Learning by doing is almost always the better option with programming.
For example, get them to make a template which compares two values and returns the higher one. Then have them see how passing ints or doubles or whatever still allows it to work. Then get them to tweak the code / copy it and have it return the minimum value. Again, experiment with variations - will the template allow them to pass an int and a double, or will it complain?
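The comparison template suggested above might look like this (maxValue is a made-up name; std::max is the library version):

// Returns the higher of two values of the same type T.
template <class T>
T maxValue(T a, T b) {
    return a > b ? a : b;
}

// maxValue(3, 7)     -> T deduced as int
// maxValue(2.5, 1.0) -> T deduced as double
// maxValue(3, 7.5)   -> error: conflicting deductions for T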
From there, you can have them pass in arrays of whatever type (int, double etc), and have it sort the array from highest to lowest, again encouraging experimentation. From there, start to move into templated class definitions, using the same kind of ideas but on a larger scale. This is pretty much how I learnt about templates, ending up with complex array manipulation classes for generic types.
When I was teaching myself C++ I used this site a lot. It explains templates in depth and very well. I would recommend having them read that and try implementing something simple.
For a shorter explanation: Templates are frameworks for complicated constructs that act on data without having to know what that data is. Give them some examples of a simple template (like a linked-list) and walk through how the template is used to generate the final class.
You can say that a template is a half-written source with parameters to be filled in while instantiating the template.