Clojure: Compile-time insertion of pre/post functions

This is a follow-up to Clojure: pre post functions
Goal
For every Clojure function, I want to have a pre function and a post function that get executed:
right before the function is evaluated and
right after the function returns
Now, I want to do this for all functions in my *.clj files.
I would prefer (this is also a learning exercise) to do this at the Clojure Compiler level.
Question:
How do I get started on this? What part of the Clojure Compiler source code should I be reading? What documentation / tutorials on the internals of the Clojure Compiler should I be aware of?
Thanks!

First off, this sounds like a slightly crazy thing to do in general. There are almost certainly better ways to achieve any sensible objective (i.e. this is screaming "XY Problem"). But as long as you say it is just for a learning exercise, that is fine :-)
I can think of a couple of strategies you might want to consider before hacking the compiler:
Create your own defn macro that does the wrapping when functions are created. Obviously you'll need to make sure your own version of defn is used rather than the built-in one. Probably the simplest solution (a rough sketch follows this list).
Walk your namespaces at runtime (after they are loaded) and redefine all functions as wrapped versions of themselves (also sketched below). Could get a bit messy, but it will certainly enhance your understanding of namespaces :-)
If you really want to hack the compiler, the easiest place to make this change would probably be just by hacking defn in core.clj
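A minimal sketch of the first two strategies, assuming hypothetical pre-fn and post-fn hooks (a real version of the macro would also need to handle docstrings, attribute maps and multiple arities):

(defn pre-fn  [name args] (println "entering" name "with" args))
(defn post-fn [name ret]  (println "leaving" name "returning" ret))

;; Strategy 1: a defn replacement that wraps the body in pre/post calls.
(defmacro defn+hooks [name params & body]
  `(defn ~name ~params
     (pre-fn '~name ~params)
     (let [ret# (do ~@body)]
       (post-fn '~name ret#)
       ret#)))

(defn+hooks add2 [x y] (+ x y))
(add2 1 2) ;; prints the pre/post lines, returns 3

;; Strategy 2: after a namespace is loaded, rebind every var holding a
;; function to a wrapped version (skipping pre-fn/post-fn themselves if
;; they live in the same namespace, or the wrappers will recurse).
(defn wrap-ns! [ns-sym]
  (doseq [[sym v] (ns-interns ns-sym)
          :when (and (fn? @v) (not (#{'pre-fn 'post-fn 'wrap-ns!} sym)))]
    (alter-var-root v
      (fn [f]
        (fn [& args]
          (pre-fn sym args)
          (let [ret (apply f args)]
            (post-fn sym ret)
            ret))))))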

Related

Persisting variables in core.async style

I need to do a big trick and am keen on hearing your suggestions.
What I need is a macro that takes ordinary Clojure code peppered with a special "await" form. The await forms contain only Clojure code and are supposed to return that code's return value. Now, what I want is that when I run whatever this macro produces, it should stop executing when the first "await" form is due for evaluation.
Then, it should dump all the variables defined in its scope so far to the database (I will ignore the problem that not all Clojure types can be serialised to EDN, e.g. functions can't), together with some marker of the place it has stopped in.
Then, if I want to run this code again (possibly on a different machine, another day) - it will read its state from the DB and continue where it stopped.
Therefore I could have, for example:
(defexecutor my-executor
  (let [x 7
        y (await (+ 3 x))]
    (if (await (> y x))
      "yes"
      "no")))
Now, when I do:
(my-executor db-conn "unique-job-id")
the first time I should get a special return value, something like
:deferred
The second time it should be like this as well, only the third time a real return value should be returned.
The question I have is not how to write such an executor, but rather how to gather information from within the macro about all the declared variables, to be able to store them. Later I also want to re-establish them when I continue execution. The await forms can be nested, of course :)
I had a peek into core.async source code because it is doing a similar thing inside, but what I have found there made me shiver - it seems they employ the Clojure AST analyser to get this info. Is this really so complex? I know of &env variable inside a macro, but do not have any idea how to use it in this situation. Any help would be appreciated.
And one more thing. Please do not ask me why I need this or that there is a different way of solving a problem - I want this specific solution.
I will ignore the problem that not all Clojure types can be serialised to EDN, e.g. functions can't
If you ignore this, it will be very restrictive for the kinds of Clojure expressions you can handle. Functions are everywhere, e.g. in the implementation of things like doseq and for. Likewise, a lot of interesting programs will depend on some Java object like a file handle or whatever.
The question I have is not how to write such executor, but rather how to gather information from within the macro about all the declared variables to be able to store them.
If you manage to write such an executor, I suspect its implementation will need to know about local variables anyway. So you can put off this question until you are done implementing your executor - you will probably find the question obsolete, if you can implement your executor at all.
I had a peek into core.async source code because it is doing a similar thing inside, but what I have found there made me shiver - it seems they employ the Clojure AST analyser to get this info. Is this really so complex?
Yes, this is very intrusive. You are basically writing a compiler. Thank your lucky stars they wrote the analyzer for you already, instead of having to analyze expressions yourself.
I know of &env variable inside a macro, but do not have any idea how to use it in this situation.
This is the easy part. If you like, you can write a simple macro that gives you all the locals in scope. This question has been asked and answered before, e.g. in Clojure get local lets.
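For reference, a minimal sketch of such a macro, assuming nothing beyond &env itself (whose keys are the symbols of the locals in scope at the macro call site):

(defmacro locals
  "Returns a map from each local symbol in scope to its current value."
  []
  (into {} (for [sym (keys &env)] [`'~sym sym])))

(let [x 7
      y (+ x 3)]
  (locals))
;; => {x 7, y 10}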
And one more thing. Please do not ask me why I need this or that there is a different way of solving a problem - I want this specific solution.
This is generally an unproductive attitude when asking a question. It's admitting you're posing an XY problem, and still refusing to tell anyone what the Y is.

Clojure closure efficiency?

Quite often, I swap! an atom value using an anonymous function that uses one or more external values in calculating the new value. There are two ways to do this, one with what I understand is a closure and one not, and my question is which is the better / more efficient way to do it?
Here's a simple made-up example -- adding a variable numeric value to an atom -- showing both approaches:
(def my-atom (atom 0))

(defn add-val-with-closure [n]
  (swap! my-atom
         (fn [curr-val]
           ;; we pull 'n' from outside the scope of the function,
           ;; asking the compiler to do some magic to make this work
           (+ curr-val n))))

(defn add-val-no-closure [n]
  (swap! my-atom
         (fn [curr-val val-to-add]
           ;; we bring 'n' into the scope of the function as the second function parameter,
           ;; so no closure is needed
           (+ curr-val val-to-add))
         n))
This is a made-up example, and of course, you wouldn't actually write this code to solve this specific problem, because:
(swap! my-atom + n)
does the same thing without any need for an additional function.
But in more complicated cases you do need a function, and then the question arises. For me, the two ways of solving the problem are of about equal complexity from a coding perspective. If that's the case, which should I prefer? My working assumption is that the non-closure method is the better one (because it's simpler for the compiler to implement).
There's a third way to solve the problem, which is not to use an anonymous function. If you use a separate named function, then you can't use a closure and the question doesn't arise. But inlining an anonymous function often makes for more readable code, and I'd like to leave that pattern in my toolkit.
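For completeness, a sketch of that third approach with the same made-up example (add-to-curr is just an illustrative name):

(defn add-to-curr [curr-val val-to-add]
  (+ curr-val val-to-add))

(defn add-val-named [n]
  (swap! my-atom add-to-curr n))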
Thanks!
edit in response to A. Webb's answer below (this was too long to put into a comment):
My use of the word "efficiency" in the question was misleading. Better words might have been "elegance" or "simplicity."
One of the things that I like about Clojure is that while you can write code to execute any particular algorithm faster in other languages, if you write idiomatic Clojure code it's going to be decently fast, and it's going to be simple, elegant, and maintainable. As the problems you're trying to solve get more complex, the simplicity, elegance and maintainability get more and more important. IMO, Clojure is the most "efficient" tool in this sense for solving a whole range of complex problems.
My question was really -- given that there are two ways that I can solve this problem, what's the more idiomatic and Clojure-esque way of doing it? For me when I ask that question, how 'fast' the two approaches are is one consideration. It's not the most important one, but I still think it's a legitimate consideration if this is a common pattern and the different approaches are a wash from other perspectives. I take A. Webb's answer below to be, "Whoa! Pull back from the weeds! The compiler will handle either approach just fine, and the relative efficiency of each approach is anyway unknowable without getting deeper into the weeds of target platforms and the like. So take your hint from the name of the language and when it makes sense to do so, use closures."
closing edit on April 10, 2014
I'm going to mark A. Webb's answer as accepted, although I'm really accepting A. Webb's answer and omiel's answer -- unfortunately I can't accept them both, and adding my own answer that rolls them up seems just a bit gratuitous.
One of the many things that I love about Clojure is the community of people who work together on it. Learning a computer language doesn't just mean learning code syntax -- more fundamentally it means learning patterns of thinking about and understanding problems. Clojure, and Lisp behind it, has an incredibly powerful set of such patterns. For example, homoiconicity ("code as data") means that you can dynamically generate code at compile time using macros, or destructuring allows you to concisely and readably unpack complex data structures. None of the patterns are unique to Clojure, but Clojure brings them all together in ways that make solving problems a joy. And the only way to learn those patterns is from people who know and use them already. When I first picked Clojure more than a year ago, one of the reasons that I picked it over Scala and other contenders was the reputation of the Clojure community for being helpful and constructive. And I haven't been disappointed -- this exchange around my question, like so many others on StackOverflow and elsewhere, shows how willing the community is to help a newcomer like me -- thank you!
After you figure out the implementation details of the current compiler version for the current version of your current target host, then you'll have to start worrying about the optimizer and the JIT and then the target computer's processors.
You are too deep in the weeds, turn back to the main path.
Closing over free variables when applicable is the natural thing to do and an extremely important idiom. You may assume a language named Clojure has good support for closures.
I prefer the first approach as being simpler (as long as the closure is simple) and somewhat easier to read. I often struggle reading code where an anonymous function is immediately called with parameters; I have to resort to counting parentheses to be sure of what's happening, and I feel that's not a good thing.
I think the only way it could be the wrong thing to do is if the closure closes over a value that shouldn't be captured, like the head of a long lazy sequence.

How do I effectively manage a Clojure code base?

A coworker and I are Clojure newbies. We started a project a couple of months back, but quickly found that we had a tough time dealing with our code base -- by 500 LOC we basically had no idea where to start with debugging when things went wrong (which was often). Instead of pairs, functions were getting lists, or numbers, or what-have-you.
Now we're starting a new but related project and migrating a lot of the old code over. But we're again hitting a wall.
We're wondering, how do we effectively manage a Clojure project, especially as we make changes to existing code?
What we've come up with:
liberal use of unit-tests
liberal use of pre- and post-conditions
informal type declarations in function comments
use defrecord/defstruct/defprotocol to implement a data model, which would really simplify testing
But pre- and post-conditions seem not to be used very often. Unit-testing + comments will only help so much. And it seems like Clojure programmers don't typically implement formal data models.
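For reference, pre- and post-conditions are just a map placed after the argument vector, compiled in when *assert* is true (the default); a minimal example:

(defn safe-div [x y]
  {:pre  [(number? x) (number? y) (not (zero? y))]
   :post [(number? %)]}
  (/ x y))

(safe-div 10 2) ;; => 5
(safe-div 10 0) ;; throws AssertionError: Assert failed: (not (zero? y))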
Do we just not get Clojure? How do Clojure programmers know that their code is robust and correct?
I think this is actually an evolving area - Clojure hasn't really been around long enough for all of the best practices and associated tools for managing a large code base to be developed yet.
Some suggestions from my experience:
Structure your code in a "bottom up" way - in general, you want to structure your code with the "utility" code at the top of the file (or imported from another namespace) and the "business logic" code that uses these utility functions towards the end of the file. If this seems difficult to do, it's probably a hint that your code needs some refactoring.
Tests as examples - Test code in Clojure works very well both to sanity-check your code and as documentation (e.g. "what kind of parameter is this function expecting?"). If you hit a bug, refer to your tests to check your assumptions and write a couple of new tests to flush out what is going wrong.
Keep functions simple and compose them - Kind of an extension of the "single responsibility principle" to functional programming. I consider more than 5-10 lines in a Clojure function a major code smell (if this seems extreme, just remember that you can probably achieve as much in 5-10 lines of Clojure as you could with 50-100 lines of Java/C#)
Watch out for "imperative habits" - when I first started using Clojure, I wrote a lot of pseudo-imperative code. An example would be emulating a for loop with "dotimes" and accumulating some result within an atom (see the small sketch after this list). This can be painful - it's not idiomatic, it's confusing, and there is usually a much smarter, simpler and less error-prone functional way of doing it. This takes practice, but it is worth it in the long run...
Debug at the REPL - usually when I hit an issue, coding at the REPL is the easiest way to flush it out. Generally this means running some specific parts of the larger algorithm to check assumptions etc.
Refactor common utility functions out - you'll probably find a bunch of common code or structure repeated across many functions. It's well worth pulling this out into a function or macro that you can re-use in other places or projects - that way you can test it much more rigorously and get the benefit in multiple places. Bonus points if you can get it all the way upstream into Clojure itself! If you do this well enough, your main code base will be extremely succinct and therefore easy to manage, containing nothing but genuinely domain-specific code.
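A small made-up illustration of the "imperative habits" point from the list above - summing squares with dotimes and an atom versus an idiomatic equivalent:

;; pseudo-imperative: a for loop emulated with dotimes and an atom
(defn sum-squares-imperative [n]
  (let [acc (atom 0)]
    (dotimes [i n]
      (swap! acc + (* i i)))
    @acc))

;; idiomatic: the same result as a pure expression over a sequence
(defn sum-squares [n]
  (reduce + (map #(* % %) (range n))))

(= (sum-squares-imperative 5) (sum-squares 5)) ;; => true (both are 30)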
simple composable abstractions
"It is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures." - Alan J. Perlis
For me it's all about composing simple functions. Try to break every function down into the smallest units you can, and then have another function that composes them to do the work you need. You know you are in good shape if every function can be tested independently. If you go too heavy on macros it can make this step harder, because macros compose differently.
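A toy illustration of that shape (the function names are made up): tiny single-purpose functions, each testable on its own, composed into the one that does the actual work:

(defn parse-int [s] (Long/parseLong s))
(defn only-even [xs] (filter even? xs))
(defn sum [xs] (reduce + 0 xs))

(def sum-of-even-inputs
  (comp sum only-even (partial map parse-int)))

(sum-of-even-inputs ["1" "2" "3" "4"]) ;; => 6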
D.R.Y. Seriously, just don't repeat yourself
I start with well-decomposed functions in a bunch of namespaces; every time I need one of the composable parts somewhere else, I "hoist" that function up to a library included by both namespaces. This way your commonly used abstractions evolve over the course of the project into "just enough framework". It is very difficult to do this unless you really have discrete, composable abstractions.
Sorry to dig up this old question, the answers by mikera and Arthur are excellent, but it's something I've also wondered about as I've been learning Clojure, and thought I'd mention how we organise files.
In a similar vein to ensuring each function has a single job, we group related functions into namespaces to make it easier to navigate the code. So we might have a namespace for functions providing access to a particular database, or providing a collection of HTTP-related utilities. This keeps each file relatively small, and makes tests easier to find. It also makes refactoring much more straightforward. This is hardly anything new, but it's worth bearing in mind.

What alternative syntax exist for C/C++? (think SPECS or Mirah)

I wondered if there are any simpler or more powerful syntaxes for C or C++. I have already come across SPECS, which is an alternative syntax for C++. But are there any others, and what about C?
It could also be a sort of code generator so that things like functors could be defined less verbosely. I imagine it could be made as a code generator that compiles to C or C++ code which is very similar to the code you wrote in the alternative syntax.
Mirah is an example of doing this for Java.
Ideally I would want to write C in Go like syntax. I like how they fixed switch-case, and in general made everything much less verbose.
#define BEGIN {
#define END }
No! Just say NO!
The only general-purpose tool that I'm aware of is Lazy C++, which lets you create a single .lzz source file from which it can generate the .h and .cpp files.
There are also numerous approaches to doing code generation for C++. (For examples, see Cog, Pump, or Wikipedia's list.) These aren't full-fledged alternate syntaxes, but they can help with particular categories of syntax (such as automatically generating templates taking 1 to N arguments, to work around the lack of variadic templates).
Instead of a change in syntax, consider a change in abstraction: Increase your abstraction with a custom-defined DSL. Tool support would be necessary to reach optimal productivity.
If your goal is simplification, a lightweight modeling approach, either text-based (like XText), graph-based (like MetaEdit+) or tree-based (like AtomWeaver) would remove some complexity on the project by simplifying the solution.
If it is only a syntax you're after, why can't you define your own, as a trivial preprocessor->parser->C-pretty-printer chain? It will be no more than a semantically rich preprocessor, something in the CamlP4 style, but for C. No one but you knows what kind of syntax you'd find suitable, so its implementation is entirely up to you.
It doesn't look to me like SPECS is really C++ anymore, I certainly would have a hard time reading such code (at least initially).
You should pick a language based on your needs, not pick a specific language and then modify it to fit what you want to do.
If you want to program Go, then program in Go, don't try to write C in a Go-like syntax as that'll just make it hard for anyone who actually knows C to read your code.

C++ "Named Parameter Idiom" vs. Boost::Parameter library

I've looked at both the Named Parameter Idiom and the Boost::Parameter library. What advantages does each one have over the other? Is there a good reason to always choose one over the other, or might each of them be better than the other in some situations (and if so, what situations)?
Implementing the Named Parameter Idiom is really easy, almost about as easy as using Boost::Parameter, so it kind of boils down to one main point.
- Do you already have Boost dependencies? If you don't, Boost::Parameter isn't special enough to merit adding the dependency.
Personally I've never seen Boost::Parameter in production code; 100% of the time it's been a custom implementation of named parameters, but that's not necessarily a good thing.
Normally, I'm a big fan of Boost, but I wouldn't use the Boost.Parameter library for a couple of reasons:
If you don't know what's going on, the call looks like you're assigning a value to a variable in the scope of the calling function before making the call. That can be very confusing.
There is too much boilerplate code necessary to set it up in the first place.
Another point: while I have never used the Named Parameter Idiom, I have used Boost Parameter for defining up to 20 optional arguments, and my compile times are insane. What used to take a couple of seconds now takes 30 seconds. This adds up if you have a library of stuff, used by your one little application, that you wrote using Boost Parameter. Of course, I might be implementing it wrongly, but I hope this changes, because other than that I really like it.
The Named Parameter Idiom is a LOT simpler. I can't see (right now) why we would need the complexity of the Boost::Parameter library. (Even the supposed "feature" of deduced parameters seems like a way to introduce coding errors ;) )
You probably don't want Boost.Parameter for general application logic so much as you would want it for library code that you are developing where it can be quite a time saver for clients of the library.
Never heard of either, but reviewing the links, named parameter is WAY easier and more obvious to understand. I'd pick it in a heartbeat over the boost implementation.