Proper use of the PURE keyword in Fortran

I'm currently delving into Fortran and have come across the pure keyword, which marks functions/subroutines as having no side effects.
I have a book, Fortran 90/95 by S. Chapman, which introduces the pure keyword but, strangely, offers no "good coding practice" guidance for it.
I'm wondering how liberally one should use this keyword in their procedures. Just from looking around, it's clear that most procedures without side effects don't bother to declare themselves pure.
So where is it best used? Only in procedures one wants to guarantee have no side effects? Or perhaps in procedures one plans to convert to elemental procedures later? (As elemental procedures must first be pure.)

PURE is required in some cases - for example, for procedures called within specification expressions or from FORALL or DO CONCURRENT constructs. PURE is required in these cases to give the Fortran processor flexibility in the ordering of procedure invocations while still having a reasonably deterministic outcome from a particular stretch of code.
Beyond those required cases, whether to use PURE or not is basically a question of style, which is somewhat subjective.
There are costs to using PURE (no I/O within the procedure, no calls to procedures that are not themselves PURE) and benefits (a pure procedure written today can be called from a context written tomorrow that requires a pure procedure; and because PURE procedures have no side effects, the implications of an invocation may be clearer to a reader of the code). The trade-off between the two depends on specifics.
The standard gives Fortran processors considerable leeway in how they evaluate expressions and function references within expressions, and it constrains programs in some ways around side effects of function execution and modification of function arguments. The requirements on a pure function are consistent with that leeway and those constraints, so some people use a style where most functions are pure. Again, it depends on specifics, and exceptions may have to exist for things like C interoperability or interaction with external APIs.

As suggested by chw21, a primary motivation for PURE is to allow the compiler to optimize better. In particular, the lack of PURE on a function can prevent parallelization because of potential unknown side effects. Note that PURE subroutines, unlike PURE functions, may have INTENT(INOUT) arguments, but the restriction on side effects still applies (as does the rule that a PURE procedure may call only other PURE procedures).
Up through Fortran 2003, ELEMENTAL procedures are implicitly PURE. Fortran 2008 adds an IMPURE prefix that can be applied to ELEMENTAL procedures to remove that requirement.

Related

Will it be possible in the foreseeable future to enforce purity/referential transparency in C++?

I think I know what referentially transparent and pure mean. However, here's a question about the two properties and how they differ.
As regards how referential transparency and/or purity are enforced in a language, I don't know much. (Not even this helped me understand.) I mean, I know (kind of) how Haskell can deal with IO while being purely functional (see this), and I understand that I can't write impure functions because the type system just doesn't let me (or, better, it lets me only in a controlled way, as I have to write unsafe explicitly).
But in C++, as in many other languages, functions are normally neither pure nor referentially transparent.
So on the one hand I have Haskell which is constructed as a pure language, where every function is pure. On the other hand I have C++ which has no way to enforce purity (or does it?¹).²
But would it be possible, in the future, for the C++ language to provide a pure/whatever attribute that one could attach to a function so that the compiler would have to verify that the function is indeed pure (or compile-time fail otherwise)?
(¹) This question popped into my mind when I first learned of [[gnu::pure]] and [[gnu::const]]. My understanding is that those (non-portable) attributes exist to give more guarantees to the compiler, so that it can optimize more aggressively, not to tell it to check whether the function is truly pure. After all, this example seems to compile and run just fine.
(²) But I also remember another, very old language, which is not pure like Haskell but gives you a PURE attribute to say that a function must be pure, and the compiler checks it: Fortran.

Identify impure functions in Clojure

I am learning Clojure and functional programming in general (coming from python). In Clojure it is possible to make impure functions, since you can use slurp and other means of input. Is there a way to easily identify impure functions in Clojure, or is it practice to just keep those functions in a separate section of the code?
Theoretically, there is no general way to decide whether an arbitrary function produces side effects (a consequence of Rice's theorem), so no analysis can reliably distinguish pure functions from impure ones in all cases. Of course, there might be ways to check whether a function is definitely impure at a syntactic level, but I doubt this would actually help much in practice.
There is quite a common convention to end a function name with a bang (e.g. swap!) when that function is not safe for use inside an STM transaction. This includes IO and many kinds of side effect, so there is some overlap with impurity; however, many impure functions are nonetheless safe in that sense.

Pure subroutines in Fortran - Compiler optimization

I recently discovered the use of pure functions and subroutines in Fortran. From what the Fortran manual indicates, it seems that most of my subroutines can actually be declared pure (since I always specify the intent of all the arguments, and I usually don't have "save", "pause", or external I/O in my subroutines).
My question is then: should I do it? I was wondering whether the compiler optimizes pure subroutines better, or whether it just does not matter, or whether it can even make things worse.
Thanks!
You work with the compiler to generate good code, and the more information you provide the compiler, the better a job the two of you can do together.
Whether it's labelling with intent(in) any dummy arguments you don't change, using parameter for constants, explicitly making pure any subprogram that has no side effects, or using forall when you don't really care about the order a loop is calculated in, being more explicit about what you want to happen benefits you because:
the compiler can now flag more errors at compile time - hey, you modified that argument you said was intent 'in', or you modified that module variable in a pure subroutine
your code is clearer to the next person to come to it without knowing what it's supposed to be doing (and that person could well be you three months later)
the compiler can be more aggressive with optimization (if the compiler has a guarantee from you that nothing is going to change, it can crank up the optimization).
Of those three benefits, the optimization is probably not the most important; in the case of pure subroutines, a smart compiler can probably see through static analysis alone that your subroutine has no side effects. Still, the more guarantees you can give it, the better a job it can do of optimizing your code while maintaining correctness.
As far as I know, it just does not matter for sequential code. But if you activate auto-parallelization options, a compiler may take advantage of the PURE declaration to parallelize (multi-thread) loops containing calls to pure subroutines; it cannot take that risk if the subroutines are not pure.
For the same reason, the PURE declaration is also useful to a programmer who wants to add parallelization directives manually (OpenMP, for instance), because the risk of trouble with such procedures is rather limited. It is often possible to parallelize loops with calls to non-pure subroutines, but that needs careful verification first.

Why were concepts (generic programming) conceived when we already had classes and interfaces?

Also on programmers.stackexchange.com:
I understand that STL concepts had to exist, and that it would be silly to call them "classes" or "interfaces" when in fact they were only documented (human) concepts that couldn't be translated into C++ code at the time. But when given the opportunity to extend the language to accommodate concepts, why didn't they simply extend the capabilities of classes and/or introduce interfaces?
Isn't a concept very similar to an interface (a 100% abstract class with no data)? By the look of it, interfaces only lack support for axioms, but maybe axioms could be introduced into C++'s interfaces (considering a hypothetical adoption of interfaces in C++ to take over concepts), couldn't they? I think even auto concepts could easily be added to such a C++ interface (auto interface LessThanComparable, anyone?).
Isn't a concept_map very similar to the Adapter pattern? If all the methods are inline, the adapter essentially doesn't exist beyond compile time; the compiler simply replaces calls to the interface with the inlined versions, calling the target object directly during runtime.
I've heard of something called Static Object-Oriented Programming, which essentially means effectively reusing the concepts of object-orientation in generic programming, thus permitting usage of most of OOP's power without incurring execution overhead. Why wasn't this idea further considered?
I hope this is clear enough. I can rewrite this if you think I was not; just let me know.
There is a big difference between OOP and generic programming: predestination.
In OOP, when you design the class, you declare the interfaces you think will be useful. And that's it.
In generic programming, on the other hand, as long as the class conforms to a given set of requirements (mainly methods, but also inner constants or types), it fits the bill and may be used. The Concepts proposal is about formalizing this, so that detection can occur directly when checking the method signature rather than when instantiating the method body. It also makes checking template methods easier, since some can be rejected without any instantiation if the concepts do not match.
The advantage of concepts is that you do not suffer from predestination: you can pick a class from Library1 and a method from Library2, and if it fits, you're golden (if it does not, you may be able to use a concept map). In OO, you are required to write a full-fledged adapter every time.
You are right that the two seem similar. The difference is mainly the time of binding (and the fact that concepts still use static dispatch instead of dynamic dispatch, as interfaces do). Concepts are more open, and thus easier to use.
Classes are a form of named conformance. You indicate that class Foo conforms with interface I by inheriting from I.
Concepts are a form of structural (duck-typed) conformance, checked at compile time. A class Foo does not need to state up front which concepts it conforms to.
The result is that named conformance reduces the ability to reuse classes in places that were not expected up front, even though they would be usable.
Concepts are in fact not part of C++; they are just concepts! In C++ there is no way to "define a concept". All you have is templates and classes (the STL being all template classes, as the name says: Standard Template Library).
If you mean C++0x and not C++ (in which case I suggest you change the tag), please read here:
http://en.wikipedia.org/wiki/Concepts_(C++)
Some parts I am going to copy-paste for you:
In the pending C++0x revision of the C++ programming language, concepts and the related notion of axioms were a proposed extension to C++'s template system, designed to improve compiler diagnostics and to allow programmers to codify in the program some formal properties of templates that they write. Incorporating these limited formal specifications into the program (in addition to improving code clarity) can guide some compiler optimizations, and can potentially help improve program reliability through the use of formal verification tools to check that the implementation and specification actually match.
In July 2009, the C++0x committee decided to remove concepts from the draft standard, as they are considered "not ready" for C++0x.
The primary motivation of the introduction of concepts is to improve the quality of compiler error messages.
So, as you can see, concepts are not there to replace interfaces; they are there to help the compiler produce better error messages and, to a degree, optimize better.
While I agree with all the posted answers, they seem to have missed one point: performance. Unlike interfaces, concepts are checked at compile time and therefore don't require virtual function calls.

C++0x (C++11) as functional language?

I'm wondering whether C++0x (C++11), with lambdas and perfect forwarding, is (a superset of) a functional language.
Is there any feature of functional languages that C++ doesn't have?
The functional programming paradigm models computation as a relation between sets and is thus inherently declarative. In practice, however, we often think of functions as imperative: you put in an input value and get out an output value, the same as with a procedure. From this point of view, the characteristic property of a function is that it has no side effects. Because of the ambiguity of the terms, we call such a function pure, and a language which has only pure functions would be a purely functional language.
However, not all functional languages are pure. A functional language is a language whose syntax and semantics allow the programmer to use the functional paradigm efficiently. Some of the concepts which make using the paradigm feasible include - among others - lambda expressions with lexical closure, higher-order functions, variant types and pattern matching, lazy evaluation, and type inference (in the case of statically typed languages).
This is by no means an authoritative list, and a language can very well be functional without providing all or even most of them, but if a language does - i.e. makes them usable without having to jump through major hoops - their presence is a strong indicator that the language should be considered functional.
I don't know enough about Boost to decide whether or not C++03 + Boost is a viable functional language, but C++0x definitely makes C++ more functional, perhaps even pushing it over the subjective boundary of the realm of functional languages.
As an aside, the same considerations apply to other programming paradigms: C++ is also not a purely object-oriented language (indeed, it is very hard - perhaps even theoretically impossible - to design a language which is both purely functional and purely object-oriented), and most features one commonly associates with OO languages (classes, inheritance, encapsulation) are likewise not authoritative...
Check out the list of Functional Programming Languages definitions and discussion on the C2 wiki.
Some of the most common (and least disputed features) are:
First-class functions - the std::function class template represents first-class function values.
Higher-order functions - can be emulated with function objects.
Lexical closures - can be emulated with classes, and written directly as lambdas in C++11.
Single assignment - more of a convention; you can approximate it by declaring all variables const.
Lazy evaluation - can be achieved with TMP (template metaprogramming).
Garbage collection - still missing. Pretty much necessary in a functional language, since lifetime and scope are not the same, as @Pascal pointed out in the comments above.
Type inference - auto.
Tail-call optimization - not strictly necessary for a functional language, and in C++ it is compiler-dependent rather than guaranteed.