reuse function logic in a const expression - c++

I think my question is: is there any way to emulate the behaviour that we'll gain from C++0x's constexpr keyword with the current C++ standard (that is, if I understand correctly what constexpr is supposed to do)?
To be more clear, there are times when it is useful to calculate a value at compile time, but it is also useful to be able to calculate it at runtime. For example, if we want to calculate powers, we could use the code below.
template<int X, unsigned int Y>
struct xPowerY_const {
    static const int value = X * xPowerY_const<X, Y-1>::value;
};
template<int X>
struct xPowerY_const<X, 1> {
    static const int value = X;
};

int xPowerY(int x, unsigned int y) {
    return (y == 1) ? x : x * xPowerY(x, y-1);
}
This is a simple example, but in more complicated cases being able to reuse the code would be helpful. Even if, for runtime performance, the recursive nature of the function is suboptimal and a better algorithm could be devised, being able to express the templated version as a function would be useful for testing the logic, as I can't see a reasonable method of testing the validity of the constant template version across a wide range of cases (although perhaps there is one and I just can't see it, and perhaps that's another question).
Thanks.
Edit
Forgot to mention: I don't want to use #define.
Edit2: Also, my code above is wrong: it doesn't deal with x^0, but that doesn't affect the question.

Template metaprogramming implements logic in an entirely different (and incompatible) way from "normal" C++ code. You're not defining a function, you're defining a type. It just happens that the type has a value associated with it, which is built up from a combination of other types.
Because the templates define types, there is no program logic involved. The logic is simply a side effect of the compiler trying to resolve relationships between the templated types. There really isn't any way to automatically extract the high level logic from a template "program" into a function.
FWIW, template metaprogramming wasn't even a glimmer in Bjarne's eye when templates were first implemented. It was actually discovered later in the language's life by its users. It's an "unintended" side effect of the type system that just happened to become very popular. It's precisely because of this discovery that new features are being added to the language to more thoroughly support the idioms that have evolved.
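For reference, this is precisely the gap C++0x's constexpr is meant to close: one definition usable in both contexts. A minimal sketch of what that will look like (note it also handles x^0, unlike the code above):

constexpr int xPowerY(int x, unsigned int y) {
    return (y == 0) ? 1 : x * xPowerY(x, y - 1);
}

int main(int argc, char**) {
    static_assert(xPowerY(2, 10) == 1024, "evaluated at compile time");
    return xPowerY(2, argc); // argc is only known at runtime, so this is evaluated at runtime
}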

Related

C++ concepts vs static_assert

What exactly is new in C++ concepts? In my understanding they are functionally equivalent to using static_assert, but in a 'nice' manner, meaning that compiler errors will be more readable (as Bjarne Stroustrup said, you won't get 10 pages of errors, but just one).
Basically, is it true that everything you can do with concepts you can also achieve using static_assert?
Is there something I am missing?
tl;dr
Compared to static_asserts, concepts are more powerful because:
they give you good diagnostics that you wouldn't easily achieve with static_asserts
they let you easily overload template functions without std::enable_if (which is impossible with static_asserts alone)
they let you define static interfaces and reuse them without losing diagnostics (otherwise you would need multiple static_asserts in each function)
they let you express your intents better and improve readability (which is a big issue with templates)
This can make life easier in the worlds of:
templates
static polymorphism
overloading
and be the building block for interesting paradigms.
What are concepts?
Concepts express "classes" (not in the C++ term, but rather as a "group") of types that satisfy certain requirements. As an example you can see that the Swappable concept express the set of types that:
allows calls to std::swap
And you can easily see that, for example, std::string, std::vector, std::deque, int etc... satisfy this requirement and can therefore be used interchangeably in a function like:
template<typename Swappable>
void func(Swappable& a, Swappable& b) {
    std::swap(a, b);
}
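For illustration, the Swappable concept itself could be written roughly like this (a sketch in the proposed 'concept bool' syntax, which was still in flux at the time of writing):

#include <utility>

template<typename T>
concept bool Swappable = requires(T& a, T& b) {
    std::swap(a, b);
};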
Concepts have always existed in C++; the actual feature that will be added in the (possibly near) future will just allow you to express and enforce them in the language.
Better diagnostic
As far as better diagnostics go, we will just have to trust the committee for now. But the output they "guarantee":
error: no matching function for call to 'sort(list<int>&)'
sort(l);
^
note: template constraints not satisfied because
note: `T' is not a/an `Sortable' type [with T = list<int>] since
note: `declval<T>()[n]' is not valid syntax
is very promising.
It's true that you can achieve a similar output using static_asserts, but that would require different static_asserts per function, and that could get tedious very fast.
As an example, imagine you have to enforce the set of requirements given by the Container concept in two functions taking a template parameter; you would need to replicate them in both functions:
template<typename C>
void func_a(...) {
    static_assert(...);
    static_assert(...);
    // ...
}

template<typename C>
void func_b(...) {
    static_assert(...);
    static_assert(...);
    // ...
}
Otherwise you would lose the ability to distinguish which requirement was not satisfied.
With concepts instead, you can just define the concept and enforce it by simply writing:
template<Container C>
void func_a(...);
template<Container C>
void func_b(...);
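Here Container itself is defined once, in a single place. A rough sketch of what that definition could look like (the real Container requirements are far more extensive than shown, and the syntax was still evolving):

template<typename C>
concept bool Container = requires(C c) {
    typename C::value_type;
    { c.begin() } -> typename C::iterator;
    { c.end() } -> typename C::iterator;
    { c.size() } -> typename C::size_type;
};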
Concepts overloading
Another great feature that is introduced is the ability to overload template functions on template constraints. Yes, this is also possible with std::enable_if, but we all know how ugly that can become.
As an example you could have a function that works on Containers and overload it with a version that happens to work better with SequenceContainers:
template<Container C>
int func(C& c);
template<SequenceContainer C>
int func(C& c);
The alternative, without concepts, would be this:
template<typename T>
typename std::enable_if<
    Container<T>::value,
    int
>::type func(T& c);

template<typename T>
typename std::enable_if<
    SequenceContainer<T>::value,
    int
>::type func(T& c);
Definitely uglier and possibly more error prone.
Cleaner syntax
As you have seen in the examples above the syntax is definitely cleaner and more intuitive with concepts. This can reduce the amount of code required to express constraints and can improve readability.
As seen before you can actually get to an acceptable level with something like:
static_assert(Concept<T>::value, "Concept not satisfied");
but at that point you would lose the fine-grained diagnostics of separate static_asserts. With concepts you don't need this tradeoff.
Static polymorphism
And finally concepts have interesting similarities to other functional paradigms like type classes in Haskell. For example they can be used to define static interfaces.
For example, let's consider the classical approach for an (infamous) game object interface:
struct Object {
    // …
    virtual void update() = 0;
    virtual void draw() = 0;
    virtual ~Object() {}
};
Then, assuming you have a std::vector of pointers to derived objects (storing them by value would slice them), you can do:
for (auto& o : objects) {
    o->update();
    o->draw();
}
Great, but unless you want to use multiple inheritance or entity-component-based systems, you are pretty much stuck with only one possible interface per class.
But if you actually want static polymorphism (polymorphism that is not that dynamic after all) you could define an Object concept that requires update and draw member functions (and possibly others).
At that point you can just create a free function:
template<Object O>
void process(O& o) {
    o.update();
    o.draw();
}
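Any type with the right members now works with process. Tank and Tree below are hypothetical, unrelated classes sharing no base class:

struct Tank { void update(); void draw(); };
struct Tree { void update(); void draw(); };

void frame(Tank& tank, Tree& tree) {
    process(tank); // both satisfy the Object concept,
    process(tree); // so the same free function handles both
}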
And after that you could define another interface for your game objects with other requirements. The beauty of this approach is that you can develop as many interfaces as you want without
modifying your classes
requiring a base class
And they are all checked and enforced at compile time.
This is just a stupid example (and a very simplistic one), but concepts really open up a whole new world for templates in C++.
If you want more information you can read this nice article on C++ concepts vs Haskell type classes.

Are there any C++ language obstacles that prevent adopting D ranges?

This is a C++ / D cross-over question. The D programming language has ranges that, in contrast to C++ libraries such as Boost.Range, are not based on iterator pairs. The official C++ Ranges Study Group seems to have been bogged down in nailing down a technical specification.
Question: does the current C++11 or the upcoming C++14 Standard have any obstacles that prevent adopting D ranges -as well as a suitably rangefied version of <algorithm>- wholesale?
I don't know D or its ranges well enough, but they seem lazy and composable as well as capable of providing a superset of the STL's algorithms. Given their claimed success in D, it would seem very nice to have them as a library for C++. I wonder how essential D's unique features (e.g. string mixins, uniform function call syntax) were for implementing its ranges, and whether C++ could mimic that without too much effort (e.g. C++14 constexpr seems quite similar to D's compile-time function evaluation).
Note: I am seeking technical answers, not opinions whether D ranges are the right design to have as a C++ library.
I don't think there is any inherent technical limitation in C++ which would make it impossible to define a system of D-style ranges and corresponding algorithms in C++. The biggest language-level problem would be that C++ range-based for-loops require that begin() and end() can be used on the ranges, but assuming we would go to the length of defining a library using D-style ranges, extending range-based for-loops to deal with them seems a marginal change.
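As a sketch of how marginal: even without a language change, a small adaptor can make range-based for consume a D-style range (the empty()/front()/popFront() interface is assumed here):

// Adaptor sketch: wraps a D-style range so range-based for can drive it.
template <typename R>
struct DRangeIterator {
    R* range; // the "end" iterator carries the same pointer; only empty() matters

    bool operator!=(const DRangeIterator&) const { return !range->empty(); }
    void operator++() { range->popFront(); }
    auto operator*() const -> decltype(range->front()) { return range->front(); }
};

// In real code these would be constrained to accept D-style ranges only.
template <typename R> DRangeIterator<R> begin(R& r) { return DRangeIterator<R>{&r}; }
template <typename R> DRangeIterator<R> end(R& r)   { return DRangeIterator<R>{&r}; }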
The main technical problem I have encountered when experimenting with algorithms on D-style ranges in C++ was that I couldn't make the algorithms as fast as my iterator (actually, cursor) based implementations. Of course, this could just be my algorithm implementations, but I haven't seen anybody providing a reasonable set of D-style range based algorithms in C++ which I could profile against. Performance is important, and the C++ standard library should provide, at least, weakly efficient implementations of algorithms (a generic implementation of an algorithm is called weakly efficient if it is at least as fast when applied to a data structure as a custom implementation of the same algorithm using the same data structure and the same programming language). I wasn't able to create weakly efficient algorithms based on D-style ranges, and my objective is actually strongly efficient algorithms (similar to weakly efficient, but allowing any programming language and only assuming the same underlying hardware).
When experimenting with D-style range based algorithms I found the algorithms a lot harder to implement than iterator-based algorithms, and found it necessary to deal with kludges to work around some of their limitations. Of course, not everything in the current way algorithms are specified in C++ is perfect either. A rough outline of how I want to change the algorithms and the abstractions they work with is on my STL 2.0 page. This page doesn't really deal much with ranges, however, as this is a related but somewhat different topic. I would rather envision iterator (well, really cursor) based ranges than D-style ranges, but the question wasn't about that.
One technical problem all range abstractions in C++ do face is having to deal with temporary objects in a reasonable way. For example, consider this expression:
auto result = ranges::unique(ranges::sort(std::vector<int>{ read_integers() }));
Independent of whether ranges::sort() or ranges::unique() is lazy or not, the representation of the temporary range needs to be dealt with. Merely providing a view of the source range isn't an option for either of these algorithms, because the temporary object will go away at the end of the expression. One possibility could be to move the range if it comes in as an r-value, requiring different result types for ranges::sort() and ranges::unique() to distinguish the cases of the actual argument being either a temporary object or an object kept alive independently. D doesn't have this particular problem because it is garbage collected and the source range would, thus, be kept alive in either case.
The above example also shows one of the problems with possibly lazily evaluated algorithms: since any type, including types which can't be spelled out otherwise, can be deduced by auto variables or templated functions, there is nothing forcing the lazy evaluation at the end of an expression. Thus, the results of the expression templates can be obtained without the algorithm ever really being executed. That is, if an l-value is passed to an algorithm, it needs to be made sure that the expression is actually evaluated to obtain the actual effect. For example, any sort() algorithm mutating the entire sequence clearly does the mutation in-place (if you want a version that doesn't do it in-place, just copy the container and apply the in-place version; if you only have a non-in-place version you can't avoid the extra sequence, which may be an immediate problem, e.g., for gigantic sequences). Assuming it is lazy in some way, the l-value access to the original sequence provides a peek into the current status, which is almost certainly a bad thing. This may imply that lazy evaluation of mutating algorithms isn't such a great idea anyway.
In any case, there are some aspects of C++ which make it impossible to immediately adopt the D-style ranges, although the same considerations also apply to other range abstractions. I'd think these considerations are, thus, somewhat out of scope for the question, too. Also, the obvious "solution" to the first of the problems (add garbage collection) is unlikely to happen. I don't know if there is a solution to the second problem in D. There may emerge a solution to the second problem (tentatively dubbed operator auto), but I'm not aware of a concrete proposal or of what such a feature would actually look like.
BTW, the Ranges Study Group isn't really bogged down by any technical details. So far, we have merely tried to find out what problems we are actually trying to solve and to scope out, to some extent, the solution space. Also, groups generally don't get any work done at all! The actual work is always done by individuals, often by very few individuals. Since a major part of the work is actually designing a set of abstractions, I would expect that the foundation of any results of the Ranges Study Group will be laid by 1 to 3 individuals who have some vision of what is needed and what it should look like.
My C++11 knowledge is much more limited than I'd like it to be, so there may be newer features which improve things that I'm not aware of yet, but there are three areas that I can think of at the moment which are at least problematic: template constraints, static if, and type introspection.
In D, a range-based function will usually have a template constraint on it indicating which type of ranges it accepts (e.g. forward range vs random-access range). For instance, here's a simplified signature for std.algorithm.sort:
auto sort(alias less = "a < b", Range)(Range r)
if(isRandomAccessRange!Range &&
hasSlicing!Range &&
hasLength!Range)
{...}
It checks that the type being passed in is a random-access range, that it can be sliced, and that it has a length property. Any type which does not satisfy those requirements will not compile with sort, and when the template constraint fails, it makes it clear to the programmer why their type won't work with sort (rather than just giving a nasty compiler error from in the middle of the templated function when it fails to compile with the given type).
Now, while that may just seem like a usability improvement over just giving a compilation error when sort fails to compile because the type doesn't have the right operations, it actually has a large impact on function overloading as well as type introspection. For instance, here are two of std.algorithm.find's overloads:
R find(alias pred = "a == b", R, E)(R haystack, E needle)
if(isInputRange!R &&
is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{...}
R1 find(alias pred = "a == b", R1, R2)(R1 haystack, R2 needle)
if(isForwardRange!R1 && isForwardRange!R2 &&
is(typeof(binaryFun!pred(haystack.front, needle.front)) : bool) &&
!isRandomAccessRange!R1)
{...}
The first one accepts a needle which is only a single element, whereas the second accepts a needle which is a forward range. The two are able to have different parameter types based purely on the template constraints and can have drastically different code internally. Without something like template constraints, you can't have templated functions which are overloaded on attributes of their arguments (as opposed to being overloaded on the specific types themselves), which makes it much harder (if not impossible) to have different implementations based on the genre of range being used (e.g. input range vs forward range) or other attributes of the types being used. Some work has been done in this area in C++ with concepts and similar ideas, but AFAIK, C++ is still seriously lacking in the features necessary to overload templates (be they templated functions or templated types) based on the attributes of their argument types rather than specializing on specific argument types (as occurs with template specialization).
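For comparison, the closest C++11 machinery is SFINAE via std::enable_if, which can overload on attributes of argument types but is considerably more verbose. A sketch, where isInputRange, isForwardRange, and isRandomAccessRange stand in for traits the library author would have to write by hand, and where the constraints must be made mutually exclusive manually to avoid ambiguity:

#include <type_traits>

template <typename R, typename E>
typename std::enable_if<isInputRange<R>::value, R>::type
find(R haystack, E needle);

template <typename R1, typename R2>
typename std::enable_if<isForwardRange<R1>::value &&
                        isForwardRange<R2>::value &&
                        !isRandomAccessRange<R1>::value, R1>::type
find(R1 haystack, R2 needle);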
A related feature would be static if. It's the same as if, except that its condition is evaluated at compile time, and whether it's true or false determines which branch is compiled in, as opposed to which branch is run. It allows you to branch code based on conditions known at compile time, e.g.
static if(isDynamicArray!T)
{}
else
{}
or
static if(isRandomAccessRange!Range)
{}
else static if(isBidirectionalRange!Range)
{}
else static if(isForwardRange!Range)
{}
else static if(isInputRange!Range)
{}
else
static assert(0, Range.stringof ~ " is not a valid range!");
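C++'s traditional substitute for this kind of branching is tag dispatch: each branch of the chain becomes a separate overload selected through a category tag, as in the classic implementation technique behind std::advance. A sketch:

#include <iterator>

// Each branch of the 'static if' chain becomes an overload on a tag type.
template <typename It>
void advance_impl(It& it, int n, std::random_access_iterator_tag) {
    it += n; // O(1) branch for random-access iterators
}
template <typename It>
void advance_impl(It& it, int n, std::input_iterator_tag) {
    while (n--) ++it; // O(n) fallback for everything else (assumes n >= 0)
}

template <typename It>
void my_advance(It& it, int n) {
    advance_impl(it, n,
        typename std::iterator_traits<It>::iterator_category());
}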
static if can to some extent obviate the need for template constraints, as you can essentially put the overloads for a templated function within a single function, e.g.
R find(alias pred = "a == b", R, E)(R haystack, E needle)
{
static if(isInputRange!R &&
is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{...}
else static if(isForwardRange!R1 && isForwardRange!R2 &&
is(typeof(binaryFun!pred(haystack.front, needle.front)) : bool) &&
!isRandomAccessRange!R1)
{...}
}
but that still results in nastier errors when compilation fails and actually makes it so that you can't overload the template (at least with D's implementation), because overloading is determined before the template is instantiated. So, you can use static if to specialize pieces of a template implementation, but it doesn't quite get you enough of what template constraints get you to not need template constraints (or something similar).
Rather, static if is excellent for doing stuff like specializing only a piece of your function's implementation or for making it so that a range type can properly inherit the attributes of the range type that it's wrapping. For instance, if you call std.algorithm.map on an array of integers, the resultant range can have slicing (because the source range does), whereas if you called map on a range which didn't have slicing (e.g. the ranges returned by std.algorithm.filter can't have slicing), then the resultant ranges won't have slicing. In order to do that, map uses static if to compile in opSlice only when the source range supports it. Currently, map's code that does this looks like
static if (hasSlicing!R)
{
    static if (is(typeof(_input[ulong.max .. ulong.max])))
        private alias opSlice_t = ulong;
    else
        private alias opSlice_t = uint;

    static if (hasLength!R)
    {
        auto opSlice(opSlice_t low, opSlice_t high)
        {
            return typeof(this)(_input[low .. high]);
        }
    }
    else static if (is(typeof(_input[opSlice_t.max .. $])))
    {
        struct DollarToken{}
        enum opDollar = DollarToken.init;

        auto opSlice(opSlice_t low, DollarToken)
        {
            return typeof(this)(_input[low .. $]);
        }
        auto opSlice(opSlice_t low, opSlice_t high)
        {
            return this[low .. $].take(high - low);
        }
    }
}
This is code in the type definition of map's return type, and whether that code is compiled in or not depends entirely on the results of the static ifs, none of which could be replaced with template specializations based on specific types without having to write a new specialized template for map for every new type that you use with it (which obviously isn't tenable). In order to compile in code based on attributes of types rather than with specific types, you really need something like static if (which C++ does not currently have).
The third major item which C++ is lacking (and which I've more or less touched on throughout) is type introspection. The fact that you can do something like is(typeof(binaryFun!pred(haystack.front, needle)) : bool) or isForwardRange!Range is crucial. Without the ability to check whether a particular type has a particular set of attributes or that a particular piece of code compiles, you can't even write the conditions which template constraints and static if use. For instance, std.range.isInputRange looks something like this
template isInputRange(R)
{
    enum bool isInputRange = is(typeof(
    {
        R r = void;       // can define a range object
        if (r.empty) {}   // can test for empty
        r.popFront();     // can invoke popFront()
        auto h = r.front; // can get the front of the range
    }));
}
It checks that a particular piece of code compiles for the given type. If it does, then that type can be used as an input range. If it doesn't, then it can't. AFAIK, it's impossible to do anything even vaguely like this in C++. But to sanely implement ranges, you really need to be able to do stuff like have isInputRange or test whether a particular type compiles with sort - is(typeof(sort(myRange))). Without that, you can't specialize implementations based on what types of operations a particular range supports, you can't properly forward the attributes of a range when wrapping it (and range functions wrap their arguments in new ranges all the time), and you can't even properly protect your function against being compiled with types which won't work with it. And, of course, the results of static if and template constraints also affect the type introspection (as they affect what will and won't compile), so the three features are very much interconnected.
Really, the main reasons that ranges don't work very well in C++ are the same reasons that metaprogramming in C++ is primitive in comparison to metaprogramming in D. AFAIK, there's no reason that these features (or similar ones) couldn't be added to C++ and fix the problem, but until C++ has metaprogramming capabilities similar to those of D, ranges in C++ are going to be seriously impaired.
Other features such as mixins and Uniform Function Call Syntax would also help, but they're nowhere near as fundamental. Mixins would help primarily with reducing code duplication, and UFCS helps primarily with making it so that generic code can just call all functions as if they were member functions so that if a type happens to define a particular function (e.g. find) then that would be used instead of the more general, free function version (and the code still works if no such member function is declared, because then the free function is used). UFCS is not fundamentally required, and you could even go the opposite direction and favor free functions for everything (like C++11 did with begin and end), though to do that well, it essentially requires that the free functions be able to test for the existence of the member function and then call the member function internally rather than using their own implementations. So, again you need type introspection along with static if and/or template constraints.
As much as I love ranges, at this point, I've pretty much given up on attempting to do anything with them in C++, because the features to make them sane just aren't there. But if other folks can figure out how to do it, all the more power to them. Regardless of ranges though, I'd love to see C++ gain features such as template constraints, static if, and type introspection, because without them, metaprogramming is way less pleasant, to the point that while I do it all the time in D, I almost never do it in C++.

How can I make switching between arithmetics easy in C++?

I am making a project that will use mathematical computations a lot. I also want to be able to simply change the implementation of real numbers, say between float, double, my own implementation, and gmplib float types.
So far I have thought of two ways:
I create a class "Number" which will interface with the rest of the program.
I typedef the arithmetic type and write global functions to interface with the rest of the program.
The first choice seems more elegant, but the second seems to have less overhead. Is there a third, better choice? Also, I am worried about the elementary mathematical functions such as sine, cosine, exp... I figured that to make the switching easy, I should implement them as templates, but my implementations are hopelessly slow.
I am generally new to programming in C++. I was brought up in the comfortable Matlab and Mathematica environments, where I did not have to worry about such things.
You'll want to use templates with constraints to avoid re-implementing things.
For instance, say you want sin in your program to behave differently for float and double. You can overload based on type by creating specialized templates.
#include <cmath>

// Generic fallback for types without a dedicated specialization;
// its definition is left open here.
template<class T> T genericSin(const T& f);

template<class T> T MySin(const T& f) {
    return genericSin(f);
}
template<> float MySin<float>(float f) {
    return std::sin(f); // float overload of std::sin
}
template<> double MySin<double>(double d) {
    return std::sin(d);
}
That covers free functions; the syntax is similar when partially specializing a Math class, if you want to go the OO route. This will enable you to call your routines with any type and have the most specialized, most efficient routine called.
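Switching the arithmetic then comes down to changing one line, with every call still dispatching to the most specialized routine (a sketch):

typedef double Real; // change this one line to switch number types

Real compute(Real angle) {
    return MySin(angle); // picks the double specialization automatically
}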
Templates are the way I have done this. It makes it easy to specialize what must be specialized, and it provides a good way to reuse implementations when they apply to multiple types.
The Number class can be done, but it's actually not simple to do right, and it introduces some restrictions (compared to templates).
Supporting multiple types directly is just hopelessly complex if you want something even close to fast, accurate, and simple to maintain. You'd likely end up using templates to implement this correctly if you were to create a global typedef.
Templates provide all the power, control, and flexibility you would need, and they will be faster than the alternatives posted (technically, #2 could be as fast if you resorted to... templates).
A template class for real numbers should work for you, in that you can overload the required functions and, if required, use template specializations. In order to improve efficiency, use STL algorithms instead of hand-written loops. Good luck.
Both alternatives are equivalent in terms of encapsulation: There will be a single point in your program where you'll have to change the number type, and this one change will affect your whole program. If presented with those two alternatives, choose the typedef; it is less elegant (=> simpler, and simpler is better) and has the same power.
When you get more comfortable with C++, templating your functions will be a better fit, since the determination of the number type can be made locally instead of globally. With templates, you determine the number type at the instantiation point (most likely the call site), giving much greater flexibility. However, there are a number of pitfalls in templates, and I'd recommend that you get a little more experience with C++ first and then start templating.

Should const functionality be expanded?

EDIT: this question could probably use a more apropos title. Feel free to suggest one in the comments.
In using C++ with a large class set I once came upon a situation where const became a hassle, not because of its functionality, but because it's got a very simplistic definition. Its applicability to an integer or string is obvious, but for more complicated classes there are often multiple properties that could be modified independently of one another. I imagine many people forced to learn what the mutable keyword does might have had similar frustrations.
The most apparent example to me would be a matrix class, representing a 3D transform. A matrix will represent both a translation and a rotation each of which can be changed without modifying the other. Imagine the following class and functions with the hypothetical addition of 'multi-property const'.
class Matrix {
    void translate(const Vector& translation) const("rotation");
    void rotate(const Quaternion& rotation) const("translation");
};

void spin180(const("translation") Matrix& matrix);
void moveToOrigin(const("rotation") Matrix& matrix);
Or imagine predefined const keywords like "_comparable" which allow you to define functions that modify the object at will as long as you promise not to change anything that would affect the sort order of the object, easing the use of objects in sorted containers.
What would be some of the pros and cons of this kind of functionality? Can you imagine a practical use for it in your code? Is there a good approach to achieving this kind of functionality with the current const keyword functionality?
Bear in mind
I know such a language feature could easily be abused. The same can be said of many C++ language features
Like const I would expect this to be a strictly compile-time bit of functionality.
If you already think const is the stupidest thing since sliced mud, I'll take it as read that you feel the same way about this. No need to post, thanks.
EDIT:
In response to SBK's comment about member markup, I would suggest that you don't have any. For classes / members marked const, it works exactly as it always has. For anything marked const("foo") it treats all the members as mutable unless otherwise marked, leaving it up to the class author to ensure that his functions work as advertised. Besides, in a matrix represented as a 2D array internally, you can't mark the individual fields as const or non-const for translation or rotation because all the degrees of freedom are inside a single variable declaration.
Scott Meyers was working on a system for expanding the language with arbitrary constraints (using templates).
So you could say a function/method was Verified, ThreadSafe (etc., or any other constraints you liked). Then such constrained functions could only call other functions which had at least the same (or more) constraints (e.g. a method marked ThreadSafe could only call another method marked ThreadSafe, unless the coder explicitly cast away that constraint).
Here is the article:
http://www.artima.com/cppsource/codefeatures.html
The cool concept I liked was that the constraints were enforced at compile time.
In cases where you have groups of members that are either const together or mutable together, wouldn't it make as much sense to formalize that by putting them in their own class together? That can be done today without changing the language.
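For instance, in the matrix example the independently mutable parts could be split into their own types, so plain const can already express "const except for rotation" (Rotation and Translation here are hypothetical):

struct Rotation { /* e.g. a quaternion */ };
struct Translation { /* e.g. a vector */ };

struct Transform {
    Rotation rotation;
    Translation translation;
};

// Constness is granted per part: rotation is mutable, translation is not.
void spin180(Rotation& rotation, const Translation& translation);

void use(Transform& t) {
    spin180(t.rotation, t.translation);
}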
Refinement
When an ADT is indistinguishable from itself after some operation, the const property holds for the entire ADT. You wish to define partial constness.
In your sort-order example you are asserting that operator< of the ADT is invariant under some other operation on the ADT. Your ad-hoc const names such as "rotation" are defined by the set of operations under which the ADT is invariant. We could leave the invariant unnamed and just list the operations that are invariant inside const(). Due to overloading, functions would need to be specified with their full declaration.
void set_color (Color c) const (operator<, std::string get_name());
void set_name (std::string name) const (Color get_color());
So the const names can be seen as a formalism - their existence or absence doesn't change the power of the system. But 'typedef' could be used to name a list of invariants if that proves useful.
typedef const(operator<, std::string get_name()) DontWorryOnlyNameChanged;
It would be hard to think of good names for many cases.
Usefulness
The value in const is that the compiler can check it. This is a different kind of const.
But I see one big flaw in all of this. From your matrix example I might incorrectly infer that rotation and translation are independent and therefore commutative. But there is an obvious data dependency, and matrix multiplication is not commutative. Interestingly, this is an example where partial constness is invariant under repeated application of one or the other, but not both. 'translate' would be surprised to find that its object had been translated due to a rotation after a previous translation. Perhaps I am misunderstanding the meaning of rotate and translate. But that's the problem: constness now seems open to interpretation. So we need ... drum roll ... Logic.
Logic
It appears your proposal is analogous to dependent typing. With a powerful enough type system almost anything is provable at compile time. Your interest is in theorem provers and type theory, not C++. Look into intuitionistic logic, sequent calculus, Hoare logic, and Coq.
Now I've come full circle. Naming makes sense again:
int times_2(int n) const("divisible_by_3");
since divisible_by_3 is actually a type. Here's a prime number type in Qi. Welcome to the rabbit hole. And I pretended to be getting somewhere. What is this place? Why are there no clocks in here?
Such high level concepts are useful for a programmer.
If I wanted to make const-ness fine-grained, I'd do it structurally:
struct C { int x; int y; };
C const<x> *c;
C const<x,y> *d;
C const& e;
C &f;
c=&e; // fail, c->y is mutable via c
d=&e;
c=&f;
d=c;
If you allow me to express a preference for a scope such that maximally const methods are preferred (normal overloading would prefer the non-const method if my ref/pointer is non-const), then the compiler or a standalone static analysis could deduce the sets of must-be-const members for me.
Of course, this is all moot unless you plan on implementing a preprocessor that takes the nice high-level finely grained const C++ and translates it into casting-away-const C++. We don't even have C++0x yet.
I don't think that you can achieve this as strictly compile-time functionality.
I can't think of a good example so this strictly functional one will have to do:
struct Foo {
    int bar;
};

bool operator<(Foo l, Foo r) {
    return (l.bar & 0xFF) < (r.bar & 0xFF);
}
Now I put some Foos into a sorted set. Obviously the lower 8 bits of bar must remain unchanged so that the order is preserved. The upper bits, however, can be freely changed. This means the Foos in the set aren't const, but they aren't mutable either. However, I don't see any way you could describe this level of constness in a generally useful form without using runtime checking.
If you formalized the requirements, I could even imagine that you could prove that no compiler capable of doing this (at compile time) could even exist.
It could be interesting, but one of the useful features of const's simple definition is that the compiler can check it. If you start adding arbitrary constraints, such as "cannot change sort order", the compiler as it stands now cannot check it. Further, the problem of compile-time checking of arbitrary constraints is, in the general case, impossible to solve due to the halting problem. I would rather see the feature remain limited to what can actually be checked by a compiler.
There is work on enabling compilers to check more and more things: sophisticated type systems (including dependent type systems), and work such as that done in SPARK Ada, allowing for compiler-aided verification of various constraints. But they all eventually hit the theoretical limits of computer science.
I don't think the core language, and especially the const keyword, would be the right place for this. The concept of const in C++ is meant to express the idea that a particular action will not modify a certain area of memory. It is a very low-level idea.
What you are proposing is a logical const-ness that has to do with the high-level semantics of your program. The main problem, as I see it, is that semantics can vary so much between different classes and different programs that there would be no way for there to be a one-size-fits all language construct for this.
What would need to happen is that the programmer would need to be able to write validation code that the compiler would run in order to check that particular operations met his definition of semantic (or "logical") const-ness. When you think about it, though, such code, if it ran at compile-time, would not be very different from a unit test.
Really what you want is for the compiler to test whether functions adhere to a particular semantic contract. That's what unit tests are for. So what you're asking is that there be a language feature that automatically runs unit tests for you during the compilation step. I think that's not terribly useful, given how complicated the system would need to be.

Is Template Metaprogramming faster than the equivalent C code?

Is Template Metaprogramming faster than the equivalent C code? (I'm talking about the runtime performance.) :)
First, a disclaimer:
What I think you're asking about is not just template metaprogramming, but also generic programming. The two concepts are closely related, and there's no exact definition of what each encompasses. But in short, template metaprogramming is essentially writing a program using templates, which is evaluated at compile-time. This makes it entirely free at runtime: nothing happens there, because the value (or, more commonly, type) has already been computed by the compiler and is available as a constant (either a const variable or an enum), or as a typedef nested in a class (if you've used it to "compute" a type).
Generic programming is using templates and, when necessary, template metaprogramming, to create generic code which works the same (and with no loss in performance) with any and all types. I'm going to use examples of both in the following.
A common use for template metaprogramming is to enable types to be used in generic programming, even if they were not designed for it.
Since template metaprogramming technically takes place entirely at compile-time, your question is a bit more relevant for generic programming, which still takes place at runtime, but is efficient because it can be specialized for the precise types it's used with at compile-time.
Anyway...
Depends on how you define "the equivalent C code".
The trick about template metaprogramming (or generic programming in general) is that it allows a lot of computation to be moved to compile-time, and it enables flexible, parametrized code that is just as efficient as hardcoded values.
The code displayed here, for example, computes a number in the fibonacci sequence at compile-time.
The C++ code 'unsigned long fib11 = fibonacci<11uL>::value;' relies on the template metaprogram defined at that link, and is as efficient as the C code 'unsigned long fib11 = 89uL;'. The templates are evaluated at compile-time, yielding a constant that can be assigned to a variable, so at runtime the code is actually identical to a simple assignment.
So if that is the "equivalent C code", the performance is the same.
If the equivalent C code is "a program that can compute arbitrary fibonacci numbers, applied to find the 11th number in the sequence", then the C version will be much slower, because it has to be implemented as a function, which computes the value at runtime. But this is the "equivalent C code" in the sense that it is a C program that exhibits the same flexibility (it is not just a hardcoded constant, but an actual function that can return any number in the fibonacci sequence).
Of course, this isn't often useful. But it's pretty much the canonical example of template metaprogramming.
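For reference, the metaprogram referred to above is along these lines (a minimal sketch):

// Compile-time fibonacci: the recursion is resolved entirely by the compiler.
template<unsigned long N>
struct fibonacci {
    static const unsigned long value =
        fibonacci<N - 1>::value + fibonacci<N - 2>::value;
};
template<> struct fibonacci<0> { static const unsigned long value = 0; };
template<> struct fibonacci<1> { static const unsigned long value = 1; };

unsigned long fib11 = fibonacci<11uL>::value; // 89, computed at compile time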
A more realistic example of generic programming is sorting.
In C, you have the qsort standard library function taking an array and a comparator function pointer. The call to this function pointer can not be inlined (except in trivial cases), because at compile-time, it is not known which function is going to be called.
Of course the alternative is a hand-written sorting function designed for your specific datatype.
In C++, the equivalent is the function template std::sort. It too takes a comparator, but instead of this being a function pointer, it is a function object, looking like this:
struct MyComp {
    bool operator()(const MyType& lhs, const MyType& rhs) const {
        // return true if lhs < rhs, however that is defined for MyType objects
    }
};
and this can be inlined. The std::sort function is passed a template argument, so it knows the exact type of the comparator, and so it knows that the comparator function is not just an unknown function pointer, but MyComp::operator().
The end result is that the C++ function std::sort is exactly as efficient as your hand-coded implementation in C of the same sorting algorithm.
So again, if that is "the equivalent C code", then the performance is the same.
But if the "equivalent C code" is "a generalized sorting function which can be applied to any type, and allows user-defined comparators", then the generic programming-version in C++ is vastly more efficient.
That's really the trick: generic programming and template metaprogramming are not "faster than C". They are methods of achieving general, reusable code which is as fast as handcoded, hardcoded C.
It is a way to get the best of both worlds: the performance of hardcoded algorithms, and the flexibility and reusability of general, parameterized ones.
Template Metaprogramming (TMP) is 'run' at compile time, so it's not really comparing apples to apples when comparing it to normal C/C++ code.
But, if you have something evaluated by TMP, then there's no runtime cost at all.
If you mean reusable code, then yes without a doubt. Metaprogramming is a superior way to produce libraries, not client code. Client code is not generic, it is written to do specific things.
For example, look at qsort from the C standard library, and C++'s standard sort. This is how qsort works:
#include <stdlib.h>

int compare(const void* a, const void* b)
{
    // qsort expects a negative, zero, or positive result, not a boolean
    int l = *(const int*)a;
    int r = *(const int*)b;
    return (l > r) - (l < r);
}

int main()
{
    int data[5] = {5, 4, 3, 2, 1};
    qsort(data, 5, sizeof(int), compare);
}
Now look at sort:
#include <algorithm>

struct compare
{
    bool operator()(int a, int b) const
    { return a < b; }
};

int main()
{
    int data[5] = {5, 4, 3, 2, 1};
    std::sort(data, data + 5, compare());
}
sort is cleaner, safer, and more efficient because the comparison function is inlined inside the sort. That is the benefit of metaprogramming in my opinion: you write generic code, but the compiler produces code like the hand-coded one!
Another place where I find metaprogramming very beautiful is when you write a library like boost::spirit or boost::xpressive. With Spirit you can write EBNF inside C++ and let the compiler check the EBNF syntax for you, and with xpressive you can write regexes and let the compiler check the regex syntax for you as well!
I am not sure whether by TMP the questioner means calculating values at compile time. This is an example I wrote using boost :)
unsigned long greatestCommonDivisor = boost::math::static_gcd<25657, 54887524>::value;
Whatever you do in C, you can't mimic the above code; basically, you have to hand-calculate the value and then assign the result to greatestCommonDivisor!
The answer is it depends.
Template metaprogramming can be used to easily write recursive descent language parsers and these can be inefficient compared to a carefully crafted C program or a table-based implementation (e.g. flex/bison/yacc).
On the other hand, you can write metaprograms that generate unrolled loops, which can be more efficient than a conventional C implementation that uses loops.
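A sketch of that unrolling technique, using a fixed-size dot product whose recursion the compiler expands into straight-line code:

template <int N>
struct Dot {
    static double eval(const double* a, const double* b) {
        return a[N - 1] * b[N - 1] + Dot<N - 1>::eval(a, b);
    }
};
template <>
struct Dot<0> {
    static double eval(const double*, const double*) { return 0.0; }
};

double dot4(const double* a, const double* b) {
    return Dot<4>::eval(a, b); // expands to a[3]*b[3] + ... + a[0]*b[0]
}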
The main benefit is that metaprograms allow the programmer to do more with less code.
The downside is that it also gives you a gatling gun to shoot yourself in the foot with.
Template metaprogramming can be thought of as compile-time execution.
Compilation is going to take longer, since the compiler has to instantiate and evaluate the templates and then generate code from them.
As for run-time overhead, I am not sure; it shouldn't be much more than if you wrote it yourself in C code, I would imagine.
I worked on a project where another programmer had tried out metaprogramming. It was terrible. It was a complete headache. I'm an average programmer with a lot of C++ experience, and trying to work out what the hell they were trying to do took way more time than if they had written it straight out to begin with.
I'm jaded against C++ MetaProgramming because of this experience.
I'm a firm believer that the best code is most easily readable by an average developer. It's the readability of the software that is the #1 priority. I can make anything work using any language... but the skill is in making it readable and easily workable for the next person on the project. C++ MetaProgramming fails to pass muster.
Template metaprogramming does not give you any magical powers in terms of performance. It's basically a very sophisticated preprocessor; you can always write the equivalent in C or C++, it just might take you a very long time.
I do not think there is any hype; a clear and simple answer about templates is given by the C++ FAQ: https://isocpp.org/wiki/faq/templates#overview-templates
About the original question: it cannot be answered, as those things are not comparable.