I am conducting research on type systems. For this work I am investigating the uses of variants, structural subtyping, universal polymorphism and existential polymorphism in popular languages. Functional languages like Haskell and OCaml provide such functionality. But I want to know whether a popular language like C++ provides the above functionality. That is, how does C++ implement
variants
structural subtyping
universal polymorphism
existential polymorphism.
Unions can be viewed as a rudimentary form of variant, but in reality they are more of a primitive mechanism for overlaying memory (and an unsafe one).
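To make that concrete, here is a minimal sketch (the Value type and its members are made up for illustration) of a hand-rolled tagged union. The tag is only a convention: nothing stops code from reading the member that is not currently live, which is why unions are better thought of as a memory-overlay mechanism than as a real variant type.

    #include <iostream>

    struct Value {
        enum class Tag { Int, Double } tag;   // the "which member is live" convention
        union {
            int    i;
            double d;
        };
    };

    int main() {
        Value v;
        v.tag = Value::Tag::Int;
        v.i = 42;

        // Reading v.d here would be undefined behaviour, yet it compiles:
        // the union itself carries no information about the active member.
        if (v.tag == Value::Tag::Int)
            std::cout << v.i << '\n';
    }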
There is no structural typing, let alone subtyping, in C++. All types are nominal.
Templates have some superficial similarity to universal polymorphism, but are actually quite different. In essence, they are glorified macros with little to no type checking (as with macros, both checking and code generation happen after expansion).
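A small sketch of what that means in practice (twice is an invented example function): the template body is not checked against any declared interface up front, so errors only surface when the template is instantiated with a concrete type, much as with a macro expansion.

    #include <string>

    template <typename T>
    T twice(T x) {
        return x + x;               // nothing here states that T must support operator+
    }

    int main() {
        twice(21);                  // fine: instantiates twice<int>
        twice(std::string("ab"));   // fine: std::string has operator+
        // twice(nullptr);          // error, but it is reported only at instantiation,
                                    // pointing into the template body
    }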
There is no form of existential types in C++ (there is a limited form in Java, namely wildcards).
Some of these features can be simulated to some extent using subtyping, but that remains far less expressive (or convenient).
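For instance, the usual way to fake "a value of some unknown type that supports a given interface" is an abstract base class plus dynamic dispatch. A hedged sketch (Shape, Circle and Square are invented names for illustration):

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Shape {                              // the only thing clients get to see
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159 * r * r; }
    };

    struct Square : Shape {
        double s;
        explicit Square(double s) : s(s) {}
        double area() const override { return s * s; }
    };

    int main() {
        // The concrete types are "forgotten": clients only know that each element
        // is *some* Shape, which is the kind of hiding existential types provide,
        // but at the cost of heap allocation and virtual dispatch.
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.push_back(std::make_unique<Circle>(1.0));
        shapes.push_back(std::make_unique<Square>(2.0));
        for (const auto& s : shapes)
            std::cout << s->area() << '\n';
    }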
Concepts for C++ from the Concepts TS have been recently merged into GCC trunk. Concepts allow one to constrain generic code by requiring types to satisfy the conditions of the concept ('Comparable' for instance).
Haskell has type classes. I'm not so familiar with Haskell. How are concepts and type classes related?
Concepts (as defined by the Concepts TS) and type classes are related only in the sense that they restrict the sets of types that can be used with a generic function. Beyond that, I can only think of ways in which the two features differ.
I should note that I am not a Haskell expert. Far from it. However, I am an expert on the Concepts TS (I wrote it, and I implemented it for GCC).
Concepts (and constraints) are predicates that determine whether a type is a member of a set. You do not need to explicitly declare whether a type is a model of a concept (an instance of a type class). That's determined by a set of requirements and checked by the compiler. In fact, concepts do not allow you to write "T is a model of C" at all, although this is readily supported using various metaprogramming techniques.
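The syntax that eventually landed in C++20 differs in details from the Concepts TS, but the following C++20-style sketch (Comparable and min_of are invented names) shows the idea: the concept is a predicate over types, and int satisfies it simply by meeting the requirements, with no explicit "int models Comparable" declaration anywhere.

    #include <concepts>

    // A compile-time predicate: does T support operator< yielding something bool-like?
    template <typename T>
    concept Comparable = requires(T a, T b) {
        { a < b } -> std::convertible_to<bool>;
    };

    // The constraint restricts which types the template accepts.
    template <Comparable T>
    const T& min_of(const T& a, const T& b) {
        return (b < a) ? b : a;
    }

    int main() {
        int m = min_of(3, 4);       // int satisfies Comparable implicitly
        return m == 3 ? 0 : 1;
    }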
Concepts can be used to constrain non-type arguments, and because of constexpr functions and template metaprogramming, express pretty much any constraint you could ever hope to write (e.g., a hash array whose extent must be a prime number). I don't believe this is true for type classes.
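As a hedged C++20-style sketch of that last example (is_prime and HashArray are hypothetical names, not library facilities), a non-type template argument can be constrained by an ordinary constexpr predicate:

    #include <cstddef>

    // Ordinary constexpr function used as a compile-time predicate.
    constexpr bool is_prime(std::size_t n) {
        if (n < 2) return false;
        for (std::size_t d = 2; d * d <= n; ++d)
            if (n % d == 0) return false;
        return true;
    }

    // The constraint applies to the non-type parameter N.
    template <typename T, std::size_t N>
        requires (is_prime(N))
    struct HashArray {
        T buckets[N];
    };

    int main() {
        HashArray<int, 13> ok;      // 13 is prime: accepted
        // HashArray<int, 12> bad;  // rejected at compile time
        (void)ok;
    }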
Concepts are not part of the type system. They constrain the use of declarations and, in some cases, template argument deduction. Type classes are part of the type system and participate in type checking.
Concepts do not support modular type checking or compilation. Template definitions are not checked against concepts, so you can still get late-caught type errors during instantiation, but this does add a certain degree of flexibility for library writers (e.g., adding debugging code to an algorithm won't change the interface). Because type classes are part of the type system, generic algorithms can be checked and compiled modularly.
The Concepts TS supports the specialization of generic algorithms and data structures based on the ordering of constraints. I am not at all an expert in Haskell, so I don't know if there is an equivalent here or not. I can't find one.
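For readers unfamiliar with the feature, here is a C++20-flavoured sketch of what that looks like (Animal, Dog and greet are invented names): when several constrained overloads match, the one whose constraint subsumes the others is preferred.

    #include <iostream>

    template <typename T>
    concept Animal = requires(T a) { a.speak(); };

    // Dog is defined in terms of Animal, so it subsumes (is more specific than) it.
    template <typename T>
    concept Dog = Animal<T> && requires(T a) { a.fetch(); };

    template <Animal T> void greet(const T&) { std::cout << "some animal\n"; }
    template <Dog T>    void greet(const T&) { std::cout << "a dog\n"; }

    struct Cat    { void speak() const {} };
    struct Beagle { void speak() const {} void fetch() const {} };

    int main() {
        greet(Cat{});     // only the Animal overload matches
        greet(Beagle{});  // both match; the more constrained Dog overload wins
    }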
The use of concepts will never add runtime costs. The last time I looked, type classes could impose the same runtime overhead as a virtual function call, although I understand that Haskell is very good at optimizing those away.
I think that those are the major differences when comparing feature (Concepts TS) to feature (Haskell type classes).
But there's an underlying philosophical difference between the two languages -- and it isn't functional vs. whatever flavor of C++ you're writing. Haskell wants to be modular: being so has many nice properties. C++ templates refuse to be modular: instantiation-time lookup allows for type-based optimization without runtime overhead. This is why C++ generic libraries offer both broad reuse and unparalleled performance.
You might be interested in the following research paper:
"A comparison of C++ concepts and Haskell type classes", Bernardy et al., WGP 2008.
Update: as a short summary, the paper defines a precise mapping between the terminology for C++ concepts and the terminology for Haskell type classes, and uses this mapping to provide a detailed feature comparison between the two.
Their conclusion says:
Out of our 27 criteria, summarised in table 2, 16 are equally supported in both languages, and only one or two are not portable. So, we can safely conclude as we started — C++ concepts and Haskell type classes are very similar.
As noted by T.C. below, it is worth pointing out that the paper is comparing C++0x concepts, not Concepts TS. I am not aware of a good reference describing the differences.
I want to ask: what sort of type-safety language constructs are there in Clojure?
I've read 'Practical Clojure' by Luke VanderHart and Stuart Sierra several times now, but I still have the distinct impression that Clojure (like other Lisps) doesn't take compile-time validation checking very seriously. Type safety is just one (very popular) strategy for doing compile-time checking of correct semantics.
I'm asking this question because I'm aching to be proven wrong; what sort of design patterns are available in Clojure to validate (at compile time, not at run time) that a function that expects a string doesn't get called with, say, a list of integers?
Also, I've read very smart people like Paul Graham openly advocate that Lisp lets you implement everything from lower-level languages on top of it (most would say that the languages themselves end up being reimplemented on top of it), so if that assertion is true, then trivially stuff like type checking should be a piece of cake. So do you feel that there exist type systems (or the ability to implement such type systems) in Clojure or other Lisps that give the programmer the ability to shift validation checking from run time to compile time, or, even better, design time?
Compilation units in Clojure are very small - a single function. Lispers tend to change small portions of running programs while they develop. Introducing static type checking into this style of development is problematic - for a deeper discussion of why, I recommend the post Types are Anti-Modular by Gilad Bracha. Thus Clojure prefers pre/post-conditions, which fit better with Lisp's highly REPL-oriented development.
That said, it's certainly desirable and possible to build an a la carte type system for Clojure. This trail has been blazed by Qi/Shen and Typed Racket. This functionality could easily be provided as a library. I'm hoping to build something like that in the future with core.logic - https://github.com/clojure/core.logic.
Since Clojure is a dynamic language the whole idea is not to check the types (or much of anything) at compile time.
Even when you add type hints to your function they do not get checked at compile-time.
Since Clojure is a Lisp, you can do whatever you want at compile time with macros, and macros are powerful enough that you can write your own type system. Some people have built type systems for Lisps, for example Typed Racket and Qi. These type systems can be just as powerful as any type system in a "normal" language.
OK, so we know that it is possible, but does Clojure have such an optional type system? The answer is currently no, but there is a logic engine (core.logic) that could be used to implement a type system; its author just has not worked in that direction yet.
There is a library that adds an optional type system to Clojure,
http://typedclojure.org/
Rationale
Static typing has well known benefits. For example, statically typed languages catch many common programming errors at the earliest time possible: compile time. Types also serve as an excellent form of (machine checkable) documentation that almost always augment existing hand-written documentation.
Languages without static type checking (dynamically typed) bring other benefits. Without the strict rigidity of mandatory static typing, they can provide more flexible and forgiving idioms that can help in rapid prototyping. Often the benefits of static type checking are desired as the program grows.
This work adds static type checking (and some of its benefits) to Clojure, a dynamically typed language, while still preserving idioms that characterise the language. It allows static and dynamically typed code to be mixed so the programmer can use whichever is more appropriate.
I'm wondering if C++0x (C++11), with lambdas and perfect forwarding, is (a superset of) a functional language.
Is there any feature of functional languages that C++ doesn't have?
The functional programming paradigm models computation as a relation between sets, and is thus inherently declarative. However, in practice, we often think of functions as imperative, i.e. you put in an input value and get out an output value, same as with a procedure. From this point of view, the characteristic property of a function is that it has no side effects. Because of the ambiguity of the terms, we call such a function pure, and a language which only has pure functions would be a purely functional language.
However, not all functional languages are pure: a functional language is a language with syntax and semantics which allow the programmer to use the functional paradigm efficiently. Some of the concepts which make using the paradigm feasible include - among others - lambda expressions with lexical closure, higher-order functions, variant types and pattern matching, lazy evaluation, and type inference (in the case of statically typed languages).
This is by no means an authoritative list, and a language can very well be functional without providing all or even most of them, but if a language does - i.e. makes them usable without having to jump through major hoops - their presence is a strong indicator that the language should be considered functional.
I don't know enough about Boost to decide whether or not C++03 + Boost is a viable functional language, but C++0x definitely makes C++ more functional, perhaps even pushing it over the subjective boundary of the realm of functional languages.
As an aside, the same considerations apply to other programming paradigms: C++ is also not a purely object-oriented language (indeed, it's very hard - perhaps even theoretically impossible - to design a language which is both purely functional and purely object-oriented), and most features one commonly associates with OO languages (classes, inheritance, encapsulation) are actually in no way authoritative either.
Check out the list of Functional Programming Languages definitions and discussion on the C2 wiki.
Some of the most common (and least disputed features) are:
First-class functions - the std::function class (together with lambdas) lets functions be treated as values.
Higher-order functions - can be emulated with function objects.
Lexical closures - can be emulated with classes; C++11 lambdas with captures provide them directly.
Single assignment - more of a convention. You can do this by declaring all variables const.
Lazy evaluation - can be achieved with template metaprogramming (TMP).
Garbage collection - still missing. Pretty much necessary in a functional language, since lifetime and scope are not the same, as Pascal pointed out in the comments above.
Type inference - auto.
Tail call optimization - not strictly necessary for a functional language, and in C++ it is compiler-dependent rather than guaranteed. (Several of these features are illustrated in the sketch below.)
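As a small, hedged C++11 sketch of several items from that list (first-class functions, higher-order functions, lexical closures, const-as-single-assignment and type inference):

    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <vector>

    int main() {
        const std::vector<int> xs = {1, 2, 3, 4, 5};   // const as "single assignment"

        const int threshold = 2;
        // A lambda capturing 'threshold' is a lexical closure; wrapping it in
        // std::function treats the function as an ordinary first-class value.
        std::function<bool(int)> big = [threshold](int x) { return x > threshold; };

        // count_if is a higher-order function: it takes another function as argument.
        auto n = std::count_if(xs.begin(), xs.end(), big);   // auto = type inference

        std::cout << n << '\n';   // prints 3
    }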
I keep reading the terms:
template programming
generic programming
meta-programming
maybe another idiom/term...
For any C++ code that uses templates, which one is the correct or most accurate term?
AFAIK:
Template programming just refers to the classic "programming with templates", i.e. "I have a function/class that I want to make usable with any type, so I'll just make it a template".
It can also be seen as the "catch-all" category that includes any programming technique that employs templates.
Generic programming can be synthetically described as the programming paradigm used by the STL.
Wikipedia defines it as
a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters
IMHO, it's better to say that all the containers are designed to be used with any type (without sacrificing type safety) and algorithms are designed to be generic enough to work on any container type (as long as it's sensible to use them, obviously, i.e. it makes no sense to sort an unordered container).
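As a tiny illustration of that style (find_value is an invented stand-in for std::find), the algorithm is written once against an iterator interface and works for any container and element type that meets its implicit requirements:

    #include <iostream>
    #include <list>
    #include <string>
    #include <vector>

    template <typename Iter, typename T>
    Iter find_value(Iter first, Iter last, const T& value) {
        for (; first != last; ++first)
            if (*first == value) return first;   // only needs ==, ++ and *
        return last;
    }

    int main() {
        std::vector<int> v = {1, 2, 3};
        std::list<std::string> l = {"a", "b"};

        std::cout << (find_value(v.begin(), v.end(), 2) != v.end()) << '\n';    // 1
        std::cout << (find_value(l.begin(), l.end(), "c") != l.end()) << '\n';  // 0
    }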
Notice that generic programming (with this definition) does not strictly require the use of templates; in fact, it can be achieved with inheritance and dynamic polymorphism (thanks to Ben Voigt).
In general, I'd say that template programming and generic programming partially overlap, and many people use the terms generic programming and template programming interchangeably.
Template metaprogramming is a programming style in which templates are used to perform compile-time computations/decisions/checks that are normally not doable without templates (static assertions, compile-time constant computations, ...).
Such code is often quite contrived, since C++ wasn't designed for this style of programming (which was actually "discovered" later), and may look unfamiliar to C++ programmers also because it often comes close to functional programming (without having nice syntactic facilities for it) instead of following the imperative paradigm normally used in C++.
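A classic, minimal sketch of the style: a value computed entirely by the compiler through recursive instantiation, with an explicit specialization as the base case, checked by a static assertion.

    // Compile-time factorial via recursive template instantiation.
    template <unsigned N>
    struct Factorial {
        static const unsigned long long value = N * Factorial<N - 1>::value;
    };

    template <>                      // base case as a specialization
    struct Factorial<0> {
        static const unsigned long long value = 1;
    };

    static_assert(Factorial<10>::value == 3628800ULL,
                  "computed by the compiler, not at run time");

    int main() {}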
It's usually referred to as generic programming.
Template metaprogramming is something different from the normal use of templates: in TMP, types are manipulated at compile time (see Boost.MPL).
I have been learning about various functional languages for some time now including Haskell, Scala and Clojure. Haskell has a very strict and well-defined static type system. Scala is also statically typed. Clojure on the other hand, is dynamically typed.
So my questions are
What role does the type system play in a functional language?
Is it necessary for a language to have a type system for it to be functional?
How is the "functional" level of a language related to the kind of type system the language has?
A language does not need to be typed to be functional - at the heart of functional programming is the lambda calculus, which comes in untyped and typed variants.
The type system plays two roles:
it provides a guarantee at compile time that a class of errors cannot occur at run-time. The class of errors usually includes things like trying to add two strings together, or trying to apply an integer as a function.
it has some efficiency benefits, in that objects at runtime do not need to carry their types around, because the types have already been established at compile-time. This is known as type erasure.
In advanced type systems like Haskell's, the type system can provide more benefits:
overloading: using one identifier to refer to operations on different types
it allows a library to automatically choose an optimised implementation based on what type it is used at (using Type Families)
it allows powerful invariants to be proven at compile time, such as the invariant in a red-black tree (using Generalised Algebraic Datatypes)
What role does the type system play in a functional language?
To Simon Marlow's excellent answer, I would add that a type system, especially one that includes algebraic data types, makes it easier to write programs:
Software designs, which in object-oriented languages are sometimes expressed using UML diagrams, are very clearly expressed using types. This clarity manifests especially when not only values have types, but also modules have types, as in Objective Caml or Standard ML.
When a person is writing code, a couple of simple heuristics make it very, very easy to write pure functions based on the types:
A value of function type can always be created with a lambda.
A value of function type can always be consumed by applying it.
A value of an algebraic data type can be created by applying any of the type's constructors.
A value of an algebraic data type can be consumed by scrutinizing it with a case expression.
Based on these observations, and on the simple rule that unless there's a good reason, a function should consume each of its arguments, it's pretty easy to cut down the space of possible code you could write to a very small number of candidates. For example, there just aren't that many sensible functions of type (using Haskell notation)
forall a . (a -> Bool) -> [a] -> Bool
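As a rough C++ analogue of that Haskell type, just for illustration (check is an invented name): a function that is generic in the element type and is only given a predicate and a sequence can do little besides asking whether some, every or no element satisfies the predicate.

    #include <vector>

    template <typename T, typename Pred>
    bool check(Pred p, const std::vector<T>& xs) {
        for (const T& x : xs)
            if (p(x)) return true;   // this particular choice is "any of"
        return false;
    }

    int main() {
        std::vector<int> v = {1, 2, 3};
        return check([](int x) { return x > 2; }, v) ? 0 : 1;
    }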
The art of using types to create code is called type-directed programming. When it works well, you hear functional programmers say things like "once we got the types right, the code practically wrote itself." Since the types are usually much smaller than the code, this is a big win.
Same as in any programming language: it helps you to avoid/find errors in your code. In the case of static typing, a good type system prevents programs with certain types of errors from compiling.
No. The untyped lambda calculus is what you could call the prototype of functional programming languages and it is, as the name suggests, entirely untyped.
In a functional language (as well as any other language where a function can be used as a value) the type system needs to know what the type of a function is. Other than that there is nothing special about type systems for functional languages.
In a purely functional language you need to abstract side-effects, so you'd want the type system to somehow be able to support that. For example if you want to have a world type like in Clean, you'd want the type system to support uniqueness types to ensure proper usage.
If you want to have an IO monad like in Haskell, you'd need an IO type (though a Monad type class like Haskell's is not required in order to have an IO monad, so you don't need a type system that supports type classes for that).
1: Same as in any other language: it stops you from doing operations that are either ill-defined or whose result would be 'nonsensical' to humans, like float addition on integers.
2: Nope, the oldest programming language in the world, the (untyped) lambda calculus, is both functional and untyped.
3: Hardly, functional just means no side effects, no mutations, referential transparency et cetera.
Just remember that the oldest functional language, the untyped lambda calculus, has no type system.