Can you alias a tuple type?

I'm playing around with Ceylon and I'm trying to create an alias for a tuple. The following do not work:
class MyPair(Integer i, Float f) => [i, f];
class MyPair(Integer i, Float f) => [Integer, Float](i, f);
class MyPair(Integer i, Float f) =>
Tuple<Integer|Float, Integer, Tuple<Float, Float, Empty>>(i, [f]);
class MyPair(Integer i, Float f) =>
Tuple<Integer|Float, Integer, Tuple<Integer|Float, Float, Empty>>(i, [f]);
class MyPair(Integer i, Float f) =>
Tuple<Integer|Float,Integer,Tuple<Float,Float,Empty>>(i, Tuple<Float,Float,Empty>(f, []));
The error I get on the first two revolves around the use of brackets:
Incorrect syntax: missing statement-ending ; at [ expecting statement-ending ;
There are two separate errors on the others:
Some variation of
Alias parameter distance must be assignable to corresponding class parameter rest: Integer is not assignable to [Integer]
on class MyPair and
Argument must be a parameter reference to distance
on f, [f], or the tuple construction.
Is there a way to do this?

Yeah, the instantiation expression on the RHS of the => in a class alias declaration is currently extremely restricted, not by design, but just because it will take some extra work to implement full support for arbitrary instantiation expressions in the compiler backends.
But what I would actually do for now would be to use a regular type alias, like this:
alias MyPair => [Integer,Float];
And use it like this:
MyPair pair = [1, 1.0];
I think that's actually even cleaner than using a class alias.
HTH.

After tinkering around a bit I came across
class MyPair(Integer i, [Float] f) =>
Tuple<Integer|Float, Integer, Tuple<Float, Float, Empty>>(i, f);
which works.

Can't do much better than your solution, but you can at least use a shortcut for the Rest type parameter:
class Pair(Integer i, [Float] f) => Tuple<Integer|Float, Integer, [Float]>(i, f);
You're limited here because the parameter types of your class alias must match the parameter types of the class that you're aliasing. If I'm interpreting the spec correctly:
Note: currently the compiler imposes a restriction that the callable type of the aliased class must be assignable to the callable type of the class alias. This restriction will be removed in future.
then this might work in subsequent releases:
class Pair(Integer i, Float f) => Tuple<Integer|Float, Integer, [Float]>(i, [f]);
or maybe even:
class Pair(Integer i, Float f) => [i, f];
Then again, if your aim is to destructure a tuple, Ceylon 1.2 will let you do that directly:
value [i, f] = [2, 0.5];


Why is argument conversion not considered when calling a templated function?

I have a template class and a friend operator* function
StereoSmp<TE> operator* (const StereoSmp<TE>& a, TE b)
I use it with TE=float but I need to multiply a StereoSmp<float> by a double.
I think that should be possible, because the double should be converted to float automatically, but instead I get the error:
no match for ‘operator*’ (operand types are ‘StereoSmp<float>’ and
‘__gnu_cxx::__alloc_traits<std::allocator<double> >::value_type {aka double}’)
deduced conflicting types for parameter ‘TE’ (‘float’ and ‘double’)
Why doesn't it convert double to float automatically? And what can I do to allow the automatic conversion between types?
Don't make your friend a template.
template<class TE>
struct StereoSmp {
    friend StereoSmp operator* (const StereoSmp& a, TE b) {
        return multiply( a, b ); // implement here
    }
};
This is a non-template friend of each instance of the template StereoSmp, so it will consider conversions.
Template functions don't consider conversions; they simply do exact pattern matching. Overload resolution is already insane enough in C++.
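For instance, here is a minimal self-contained sketch (with hypothetical left/right members and an inline body standing in for the question's multiply) showing the conversion happening at the call site:

#include <iostream>

template<class TE>
struct StereoSmp {
    TE left, right;
    // Non-template friend: found by ADL and, being an ordinary function,
    // it accepts arguments via the usual implicit conversions.
    friend StereoSmp operator* (const StereoSmp& a, TE b) {
        return { a.left * b, a.right * b };
    }
};

int main() {
    StereoSmp<float> a {1.0f, 2.0f};
    double b = 0.5;
    auto c = a * b; // OK: b is implicitly converted to float
    std::cout << c.left << ' ' << c.right << '\n';
}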
Keep it simple, silly:
template<class U>
StereoSmp<TE> operator* (const StereoSmp<TE>& a, U b);
or if it applies:
StereoSmp<TE> a {/* ... */};
double b = /* ... */;
auto c = a * static_cast<float>(b);
Why doesn't it convert double to float automatically?
Because template deduction happens before possible conversions are taken into consideration. If you call a*b with a a StereoSmp<float> and b a double, template substitution will fail before a double-to-float conversion can be considered, and name lookup will continue until it fails for lack of candidates.
This process is called template argument deduction.
Although Yakk's answer is probably the best in this particular scenario, I want to point out that you can prevent this deduction conflict and get your expected result (pass StereoSmp<float>, deduce TE as float) by making the other argument ineligible for use in deduction:
StereoSmp<TE> operator* (const StereoSmp<TE>& a, typename std::remove_reference<TE>::type b)
Related reading: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3766.html
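To see the trick in action, a hedged sketch (same hypothetical members as above): the nested name qualifier puts TE in a non-deduced context, so TE is deduced from the first argument only, and the second argument is then converted:

#include <type_traits>

template<class TE>
struct StereoSmp { TE left, right; };

// TE appears in a non-deduced context in the second parameter,
// so it is deduced from the first argument alone.
template<class TE>
StereoSmp<TE> operator* (const StereoSmp<TE>& a,
                         typename std::remove_reference<TE>::type b) {
    return { a.left * b, a.right * b };
}

int main() {
    StereoSmp<float> a {1.0f, 2.0f};
    auto c = a * 0.5; // TE = float from a; 0.5 is converted to float
    return c.left > 0 ? 0 : 1;
}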
Why doesn't it convert double to float automatically? And what can I do to allow the automatic conversion between types?
This isn't a conversion problem, it's a template parameter inference issue. Since the declaration is of the form:
StereoSmp<TE> operator* (const StereoSmp<TE>& a, TE b)
... and the operands are of type StereoSmp<float> and double, the C++ deduction rules do not work, because they do not take into account that a double is convertible to a float. These rules are fixed by the language specification; they presumably ignore potential conversions because allowing them would make deduction very complicated. The rules are already complex enough!
You can of course cast your double parameter to a float and it will work fine. Also, you could make operator* a member function of StereoSmp, or you could parameterise the two operand types independently:
template <class TE, class U> StereoSmp<TE> operator* (const StereoSmp<TE>& a, U b);
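And a quick sketch of how that independently parameterised version deduces (hypothetical members again; the conversion moves inside the body):

template <class TE>
struct StereoSmp { TE left, right; };

// U is deduced on its own, so there is no conflict with TE;
// the conversion to TE happens in the body instead of at the call site.
template <class TE, class U>
StereoSmp<TE> operator* (const StereoSmp<TE>& a, U b) {
    return { static_cast<TE>(a.left * b), static_cast<TE>(a.right * b) };
}

int main() {
    StereoSmp<float> a {1.0f, 2.0f};
    auto c = a * 0.5; // TE = float, U = double: both deductions succeed
    return c.left > 0 ? 0 : 1;
}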

Using auto for a defined function

I've used auto to store a lambda that is constructed right in the auto assignment, but today I was looking at this interesting paper on functional programming using C++ templates and came across this code:
template <typename T, typename Ops>
T fold(Linked<T>* p)
{
    T acc = Ops::initial();
    while (p) {
        acc = Ops::bin(acc, p->head);
        p = p->tail;
    }
    return acc;
}
// later, in main():
auto sumup = fold<int, IntOps>;
I am trying to understand what the type of sumup would be, since it is assigned not to the output of fold but rather to the actual function fold itself! I decided to take a look at the various ways auto is shown to be used here. I am assuming that this use of auto falls under (1) on that page, a general variable initializer. What is not clear is what the type of sumup is.
And, would auto potentially be the same here as doing this:
using functionType = int (Linked<int>*);
functionType sumup = fold<int, IntOps>;
This is probably not correct, but I'd be curious if my thinking is in the right direction. When instantiated, fold<int, IntOps> will be a function that returns an int and takes a single argument of Linked<int>*, so is my using declaration saying the same thing? Is this using declaration a bona fide "type", and is the auto arriving at the same deduction as this using?
While every function has a type, you cannot have objects of that type, nor variables. So int foo(float) has type int(float), for instance, but you cannot have a variable of that type.
You can have expressions and variables of type pointer to function, so int(*)(float). For instance, &foo is such a pointer to function.
The two types are quite closely related, obviously. In fact, the conversion is implicit: int (*pFoo)(float) = foo; does the conversion automatically.
What you do here is pretty much the same: int (*sumup)(Linked<int>*) = fold<int, IntOps>;. You see that auto makes the definition much more readable.
auto works by the same rules as template argument deduction. That is, when unqualified, it will take things by value. Since here you're returning a function reference, it will have to decay down to a pointer, because there's no "value type" for a function, with a specific size and whatnot.
You could also capture with auto&& which would make the type of sumup be int (&)(Linked<int>*), i.e. a reference to the function.
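To check both deductions concretely, here is a small self-contained sketch with a hypothetical twice standing in for fold<int, IntOps>:

#include <type_traits>

int twice(float x) { return static_cast<int>(x) * 2; }

int main() {
    auto   p = twice; // decays to a pointer: int (*)(float)
    auto&& r = twice; // binds a reference:   int (&)(float)
    static_assert(std::is_same<decltype(p), int (*)(float)>::value, "pointer");
    static_assert(std::is_same<decltype(r), int (&)(float)>::value, "reference");
    return p(1.5f) + r(2.5f) == 6 ? 0 : 1;
}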

What is the type of this self-applying factorial function?

I wrote an anonymous factorial function in C++ and compiled my code with g++ 4.9.2.
It works well. However, I don't know the type of my function.
#include <iostream>
#include <functional>
using std::function;

int main()
{
    // tested at g++ 4.9.2
    // g++ -std=c++1y -o anony anony.cpp
    auto fac = [](auto self, auto n) -> auto {
        if(n < 1)
            return 1;
        else
            return n * self(self, n-1);
    };
    std::cout << fac(fac, 3) << std::endl; // 6
    return 0;
}
So, I wonder: what are the types of fac and self?
If I just translate the C++ code into Haskell, it won't compile because
it involves infinite types:
fac2 self 0 = 1
fac2 self n = n * (self self $ n-1)
and I have to define some recursive type work around it:
data Y a = Y ((Y a) -> a -> a)

fac2 self 0 = 1
fac2 self n = n * ((applY self self) (n-1))
  where applY (Y f1) f2 = f1 f2

fact2 = fac2 $ Y fac2
So, why could g++ get exactly the right type of the fac function, and what type does g++ think the fac function is?
The C++ fac isn't really a function, but a struct which has a member function.
struct aaaa // Not its real name.
{
    template<typename a, typename b>
    auto operator()(a self, b n) const
    {
    }
};
The overloaded call operator hides some of the trickery that C++ performs in order to implement "lambda functions".
When you "call" fac, what happens is
fac.operator()(fac, 3);
so the argument to the function isn't the function itself, but an object which has it as a member.
One effect of this is that the function's type (i.e. the type of operator()) does not occur in the type of the operator() function itself.
(The type of self is the struct that defines the function.)
The template part isn't necessary for this to work; this is a non-generic version of the fac "function":
struct F
{
    int operator()(const F& self, int n) const
    {
        // ...
    }
};

F fac;
fac(fac, 3);
If we keep the template and rename operator() to applY:
// The Y type
template<typename a>
struct Y
{
    // The wrapped function has type (Y<a>, a) -> a
    a applY(const Y<a>& self, a n) const
    {
        if(n < 1)
            return 1;
        else
            return n * self.applY(self, n-1);
    }
};

template<typename a>
a fac(a n)
{
    Y<a> y;
    return y.applY(y, n);
}
we see that your working Haskell program and your C++ program are very similar - the differences are mainly punctuation.
In contrast, in Haskell
fac2 self 0 = 1
fac2 self n = n * (self self $ n-1)
self is a function, and fac2's type would have to be
X -> Int -> Int
for some X.
Since self is a function, and self self $ n-1 is an Int, self's type is also X -> Int -> Int.
But what could X be?
It must be the same as the type of self itself, i.e. X -> Int -> Int.
But that means that the type of self is (substituting for X):
(X -> Int -> Int) -> Int -> Int
so the type X must also be
(X -> Int -> Int) -> Int -> Int
so self's type must be
((X -> Int -> Int) -> Int -> Int) -> Int -> Int
and so on, ad infinitum.
That is, in Haskell the type would be infinite.
Your solution for Haskell essentially explicitly introduces the necessary indirection that C++ generates through its structure with a member function.
As others pointed out, the lambda acts as a structure involving a template. The question then becomes: why can Haskell not type the self-application, while C++ can?
The answer lies on the difference between C++ templates and Haskell polymorphic functions. Compare these:
-- valid Haskell
foo :: forall a b. a -> b -> a
foo x y = x
// valid C++
template <typename a, typename b>
a foo(a x, b y) { return x; }
While they might look nearly equivalent, they are not.
When Haskell type checks the above declaration, it checks that the definition is type safe for any types a,b. That is, if we substitute a,b with any two types, the function must be well-defined.
C++ follows another approach. At template definition time, it is not checked that every substitution for a,b will be correct. That check is deferred to the point of use of the template, i.e. to instantiation time. To stress the point, let's add a +1 to our code:
-- INVALID Haskell
foo :: forall a b. a -> b -> a
foo x y = x+1
// valid C++
template <typename a, typename b>
a foo(a x, b y) { return x+1; }
The Haskell definition will not type check: there's no guarantee you can perform x+1 when x is of an arbitrary type. The C++ code, by contrast, is fine. The fact that some substitutions for a lead to incorrect code is irrelevant at this point.
Deferring this check is, roughly speaking, what allows such "infinitely-typed values" to exist. Dynamic languages such as Python or Scheme defer these type errors even further, until run-time, and of course they handle self-application just fine.
The expression following auto fac = is a lambda expression, and the compiler will automatically generate a closure object from it. The type of that object is unique and known only to the compiler.
From N4296, §5.1.2/3 [expr.prim.lambda]
The type of the lambda-expression (which is also the type of the closure object) is a unique, unnamed non-union class type — called the closure type — whose properties are described below. This class type is neither an aggregate (8.5.1) nor a literal type (3.9). The closure type is declared in the smallest block scope, class scope, or namespace scope that contains the corresponding lambda-expression.
Note that because of this, even two identical lambda expressions will have distinct types. For example,
auto l1 = []{};
auto l2 = []{}; // l1 and l2 are of different types
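That claim is easy to verify with a short self-contained check:

#include <type_traits>

int main() {
    auto l1 = []{};
    auto l2 = []{};
    static_assert(!std::is_same<decltype(l1), decltype(l2)>::value,
                  "identical lambda expressions still yield distinct closure types");
}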
Your lambda expression is a C++14 generic lambda, and will be translated by the compiler to a class that resembles the following:
struct __unique_name
{
    template<typename Arg1, typename Arg2>
    auto operator()(Arg1 self, Arg2 n) const
    {
        // body of your lambda
    }
};
I cannot comment on the Haskell part, but the reason the recursive expression works in C++ is because you're simply passing a copy of the closure object instance (fac) in each call. The operator() being a template is able to deduce the type of the lambda even though it is not one you can name otherwise.
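One consequence worth spelling out in a sketch: you cannot write the closure's type, but decltype can name it, and the closure is freely copyable, which is exactly what each recursive call relies on. (The explicit int return type here is an adaptation: with a single-return ternary, the recursive call would otherwise foil return type deduction.)

#include <iostream>

int main() {
    auto fac = [](auto self, auto n) -> int {
        return n < 1 ? 1 : n * self(self, n - 1);
    };
    decltype(fac) copy = fac;           // the closure type, named via decltype
    std::cout << copy(copy, 5) << '\n'; // prints 120
}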

signature constraint for generic types

struct S(int a, int b) { }
void fun(T)(T t) { }
I want fun to work with S only. What would the signature constraint look like?
I can't make fun a member of S, and with void fun(T)(T t) if(is(T : S)) { } I get Error: struct t1.S(int a,int b) is used as a type
S is not a type. It's a template for a type. S!(5, 4) is a type. It's quite possible that different instantiations of S generate completely different code, so the definition of S!(5, 4) could be completely different from S!(2, 5). For instance, S could be
struct S(int a, int b)
{
    static if(a > 3)
        string foo;

    static if(b == 4)
        int boz = 17;
    else
        float boz = 2.1;
}
Note that the number and types of the member variables differ such that you can't really use an S!(5, 4) in place of an S!(2, 5). They might as well have been structs named U and V which weren't templatized at all for all of the relation that they really have to one another.
Now, different instantiations of a particular template are generally similar with regards to their API (or they probably wouldn't have been done with the same template), but from the compiler's perspective, they have no relation with one another. So, the normal way to handle it is to use constraints purely on the API of the type and not on its name or what template it was instantiated from.
So, if you expect S to have the functions foo, bar, and foozle, and you want your fun to use those functions, then you'll construct a constraint that tests that the type that's given to fun has those functions and that they work as expected. For instance
void fun(T)(T t)
    if(is(typeof({ auto a = t.foo(); t.bar(a); int i = t.foozle("hello", 22); })))
{}
Then any type which has a function called foo which returns a value, a function named bar which may or may not return a value and which takes the result of foo, and a function named foozle which takes a string and an int and returns an int will compile with fun. So, fun is far more flexible than if you had insisted on it taking only instantiations of S. In most cases, such constraints are separated out into separate eponymous templates (e.g. isForwardRange or isDynamicArray) rather than putting raw code in an is expression so that they're reusable (and more user friendly), but expressions like that are what such eponymous templates use internally.
Now, if you really insist on constraining fun such that it only works with instantiations of S, then there are two options that I'm aware of.
1. Add a declaration of some kind which S always has and you don't expect any other type to have. For instance
struct S(int a, int b)
{
    enum isS = true;
}

void fun(T)(T t)
    if(is(typeof(T.isS)))
{}
Note that the actual value of the declaration doesn't matter (nor does its type). It's the simple fact that it exists that you're testing for.
2. The more elegant (but far less obvious) solution is to do this:
struct S(int a, int b)
{
}

void fun(T)(T t)
    if(is(T u : S!(i, j), int i, int j))
{}
is expressions have a tendency to border on voodoo once they get very complicated, but the version with commas is precisely what you need. The T u is the type that you're testing and an identifier; the : S!(i, j) gives the template specialization that you want T to be an instantiation of; and the rest is a TemplateParameterList declaring the symbols which are used in the stuff to the left but which haven't previously been declared - in this case, i and j.
I think there are a couple of small red herrings in the other answers. You can use pattern matching to figure out whether T is some instance of S, as follows.
The simplest way is to pattern-match the argument itself:
void fun(int a, int b)(S!(a, b) t) {
}
More generally you can pattern-match in separation, inside a template constraint:
void fun(T)(T t) if (is(T U == S!(a, b), int a, int b)) {
}
In both cases you have access to the instantiation arguments.
"Work with S only" doesn't really make sense in D, because
S is not a type, it's a template.
A template is itself "something" in D, unlike in other languages.
What you wrote is a shorthand for:
template S(int a, int b) { struct S { } }
So the type's full name is S!(a, b).S, for whatever a or b you use. There's no way to make it "generically" refer to S.
If you need to put a constraint like this, I suggest putting something private inside S, and checking that T has the same member.

C++11 - std::function, templates and function objects, weird issues

I'm playing with some code and I am a little puzzled about some stuff. Here's a simplified example:
I have Nodes that perform arithmetical operations (addition, subtraction, etc). I have a container with the different operations that are available in my program. Here's an example:
typedef std::binary_function<double, std::vector<double>&, std::vector<Node*>& > my_binary_function;
auto const & product = [](double v, Node* n){ return v * n->GetEvaluation(); };
struct addition : public my_binary_function {
    double
    operator()(std::vector<double>& weights, std::vector<Node*>& subtrees) {
        return std::inner_product(weights.begin(), weights.end(),
                                  subtrees.begin(), 0, std::plus<double>(), product);
    }
};
Now, at this point there are two choices:
1) use a function type:
typedef double (*my_function)(std::vector<double>&, std::vector<Node*>&);
Then use the following templated function to convert the functors:
template<typename F> typename F::result_type
func(typename F::first_argument_type arg1, typename F::second_argument_type arg2) {
    return F()(arg1, arg2);
}
2) use a function wrapper type, namely std::function, so that I have
typedef std::function<double (std::vector<double>&, std::vector<Node*>&)> my_function;
It all boils down to something like this:
LoadDefaultFunctions() {
    int minArity = 2;
    int maxArity = 2;
    function_set_.AddFunction("Add", func<addition>, minArity, maxArity, 1.0); // case 1
OR
    function_set_.AddFunction("Add", addition(), minArity, maxArity, 1.0); // case 2
And now the problems:
a) If I use Method 1, I get this compilation error:
error: invalid initialization of non-const reference of type
'std::binary_function<double, std::vector<double>&,
std::vector<Node*>&>::result_type {aka std::vector<Node*>&}'
from an rvalue of type 'double'
The error goes away if I change the template (notice how the arguments don't really make sense now):
template <typename F> typename F::first_argument_type
func1(typename F::second_argument_type arg1, typename F::result_type arg2) {
    return F()(arg1, arg2);
}
I find it very strange, because for other types such as binary_op<double, double, double>, the first form works fine. So, what's happening?
b) 1) is faster than 2) (by a small margin). I'm thinking I'm probably missing some neat trick of passing the functor by reference or in some way that would enable std::function to wrap it more efficiently. Any ideas?
c) If I use the typedef from 2) but additionally I still use func to produce a function out of the functor, and let std::function deal with it, it's still faster than 2). That is:
`my_function = func<addition>` is faster than `my_function = addition()`
I would really appreciate it if someone could help me understand the mechanics behind all of this.
Thanks.
b) 1) is faster than 2) (by a small margin). I'm thinking I'm probably missing some neat trick of passing the functor by reference or in some way that would enable std::function to wrap it more efficiently. Any ideas?
Yes, I would expect 1 to be faster than 2. std::function performs type-erasure (the exact type of the stored callable is not present in the enclosing std::function type) which requires the use of a virtual function call. On the other hand, when you use a template, the exact type is known, and the compiler has greater chances of inlining the calls, making it a no-cost solution. This is not related to how you pass the functor.
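To make the distinction concrete, a minimal sketch with generic stand-ins (not the question's types):

#include <functional>

inline int twice(int x) { return x * 2; }

// The exact callable type is a template parameter, so the call
// can be inlined; no erasure, no indirection.
template <typename F>
int call_direct(F f, int x) { return f(x); }

int main() {
    std::function<int(int)> erased = twice; // type-erased: indirect dispatch
    int a = call_direct(twice, 21);         // direct: exact type known
    int b = erased(21);                     // goes through the wrapper
    return (a == b) ? 0 : 1;
}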
The error
The order of the template arguments is incorrect:
typedef std::binary_function<double,               // arg1
                             std::vector<double>&, // arg2
                             std::vector<Node*>&   // return type
                             > my_binary_function;

struct addition : public my_binary_function {
    double                                        // return type
    operator()( std::vector<double>& weights,     // arg1
                std::vector<Node*>& subtrees)     // arg2
    { ...
That is, inheritance from binary_function is adding some typedefs to your class, but those are not the correct typedefs. Then when you use the typedefs in the next template, the types don't match. The template is expecting that your operator() will return a std::vector<Node*>&, and that is the return type of func, but when you call the functor what you get is a double which leads to the error:
invalid initialization of ... std::vector<Node*>& from double
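So the fix, sketched with the question's own names, is simply to move the result type into the last position, where std::binary_function expects it:

// std::binary_function<Arg1, Arg2, Result>: the result comes last.
typedef std::binary_function<std::vector<double>&, // arg1
                             std::vector<Node*>&,  // arg2
                             double                // return type
                             > my_binary_function;

With that, first_argument_type, second_argument_type and result_type line up with the signature of addition::operator(), and func<addition> compiles as intended.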