Backstory: I've made a lot of large and relatively complex projects in Java and have a lot of experience in embedded C programming. I've gotten acquainted with Scheme and CL syntax and written some simple programs with Racket.
Question: I've planned a rather big project and want to do it in Racket. I've heard a lot of "if you 'get' Lisp, you will become a better programmer", etc. But every time I try to plan or write a program, I still "decompose" the task into the familiar stateful objects with interfaces.
Are there "design patterns" for Lisp? How do you "get" the Lisp-family "mojo"? How do you escape the object-oriented constraint on your thinking? How do you apply functional programming ideas boosted by powerful macro facilities? I tried studying the source code of big projects on GitHub (Light Table, for instance) and got more confused rather than enlightened.
EDIT1 (less ambiguous questions): Is there good literature on the topic that you can recommend, or are there good open-source projects written in CL/Scheme/Clojure that are of high quality and can serve as a good example?
A number of "paradigms" have come into fashion over the years: structured programming, object-oriented, functional, etc. More will come.
Even after a paradigm falls out of fashion, it can still be good at solving the particular problems that first made it popular.
So for example using OOP for a GUI is still natural. (Most GUI frameworks have a bunch of states modified by messages/events.)
Racket is multi-paradigm. It has a class system. I rarely use it,
but it's available when an OO approach makes sense for the problem.
Common Lisp has multimethods and CLOS. Clojure has multimethods and Java class interop.
And anyway, basic stateful OOP ~= mutating a variable in a closure:
#lang racket
;; My First Little Object
(define obj
  (let ([val #f])
    (match-lambda*
      [(list)         val]
      [(list 'double) (set! val (* 2 val))]
      [(list v)       (set! val v)])))

obj           ; #<procedure:obj>
(obj)         ; #f
(obj 42)
(obj)         ; 42
(obj 'double)
(obj)         ; 84
Is this a great object system? No. But it helps you see that the essence of OOP is encapsulating state with functions that modify it. And you can do this in Lisp, easily.
What I'm getting at: I don't think using Lisp is about being "anti-OOP" or "pro-functional". Instead, it's a great way to play with (and use in production) the basic building blocks of programming. You can explore different paradigms. You can experiment with ideas like "code is data and vice versa".
I don't see Lisp as some sort of spiritual experience. At most, it's like Zen, and satori is the realization that all of these paradigms are just different sides of the same coin. They're all wonderful, and they all suck. The paradigm pointing at the solution, is not the solution. Blah blah blah. :)
My practical advice is, it sounds like you want to round out your experience with functional programming. If you must do this the first time on a big project, that's challenging. But in that case, try to break your program into pieces that "maintain state" vs. "calculate things". The latter are where you can try to focus on "being more functional". Look for opportunities to write pure functions. Chain them together. Learn how to use higher-order functions. And finally, connect them to the rest of your application -- which can continue to be stateful and OOP and imperative. That's OK, for now, and maybe forever.
A way to compare programming in OO vs Lisp (and "functional" programming in general) is to look at what each "paradigm" enables for the programmer.
One viewpoint in this line of reasoning, which looks at representations of data, is that the OO style makes it easier to extend data representations, but makes it more difficult to add operations on data. In contrast, the functional style makes it easier to add operations but harder to add new data representations.
Concretely, if there is a Printer interface, with OO, it's very easy to add a new HPPrinter class that implements the interface, but if you want to add a new method to an existing interface, you must edit every existing class that implements the interface, which is more difficult and may be impossible if the class definitions are hidden in a library.
In contrast, with the functional style, functions (instead of classes) are the unit of code, so one can easily add a new operation (just write a function). However, each function is responsible for dispatching according to the kind of input, so adding a new data representation requires editing all existing functions that operate on that kind of data.
Determining which style is more appropriate for your domain depends on whether you are more likely to add representations or operations.
This is a high-level generalization of course, and each style has developed solutions to cope with the tradeoffs mentioned (eg mixins for OO), but I think it still holds to a large degree.
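To make the trade-off concrete, here is a hypothetical sketch in Clojure (the names Shape, Circle, Square, and perimeter are mine, not from any cited source). A protocol makes new representations cheap but new operations expensive; plain functions dispatching on the data invert that:

;; OO-ish style: adding a representation is easy (just another record)...
(defprotocol Shape
  (area [s]))

(defrecord Circle [r]
  Shape
  (area [_] (* Math/PI r r)))

(defrecord Square [side]
  Shape
  (area [_] (* side side)))
;; ...but adding an operation (say, perimeter) means editing every record.

;; Functional style: adding an operation is easy (just another function)...
(defn perimeter [s]
  (case (:kind s)
    :circle (* 2 Math/PI (:r s))
    :square (* 4 (:side s))))
;; ...but adding a representation (:triangle) means editing every such function.

(perimeter {:kind :square :side 3}) ;=> 12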
Here is a well-known academic paper that captured the idea 25 years ago.
Here are some notes from a recent course (I taught) describing the same philosophy.
(Note that the course follows the How to Design Programs curriculum, which initially emphasizes the functional approach, but later transitions to the OO style.)
edit: Of course this only answers part of your question and does not address the (more or less orthogonal) topic of macros. For that I refer to Greg Hendershott's excellent tutorial.
A personal view:
If you parameterise an object design in the names of the classes and their methods - as you might do with C++ templates - then you end up with something that looks quite like a functional design. In other words, functional programming does not make useless distinctions between similar structures because their parts go by different names.
My exposure has been to Clojure, which tries to steal the good bit from object programming (working to interfaces) while discarding the dodgy and useless bits (concrete inheritance and traditional data hiding). Opinions vary about how successful this programme has been.
Since Clojure is expressed in Java (or some equivalent), not only can objects do what functions can do, there is a regular mapping from one to the other.
So where can any functional advantage lie? I'd say expressiveness. There are lots of repetitive things you do in programs that are not worth capturing in Java - who used lambdas before Java provided compact syntax for them? Yet the mechanism was always there.
And Lisps have macros, which have the effect of making all structures first class. And there's a synergy between these aspects that you will enjoy.
The "Gang of 4" design patterns apply to the Lisp family just as much as they do to other languages. I use CL, so this is more of a CL perspective/commentary.
Here's the difference: think in terms of methods that operate on families of types. That's what defgeneric and defmethod are all about. Use defstruct and defclass as containers for your data, keeping in mind that all you really get are accessors to the data. defmethod is basically your usual class method (more or less), seen from the perspective of an operator on a group of classes or types (with multiple inheritance).
You'll find that you'll use defun and define a lot. That's normal. When you do see commonality in parameter lists and associated types, then you'll optimize using defgeneric/defmethod. (Look for CL quadtree code on github, for an example.)
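Since the thread also covers Clojure, note that defgeneric/defmethod has a direct Clojure analogue in defmulti/defmethod. A minimal hypothetical sketch of dispatching one operation over a family of types:

;; dispatch on the class of the argument (any function can be the dispatch fn)
(defmulti describe class)

(defmethod describe String   [s] (str "a string of length " (count s)))
(defmethod describe Long     [n] (str "the number " n))
(defmethod describe :default [x] "something else")

(describe "hello") ;=> "a string of length 5"
(describe 42)      ;=> "the number 42"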
Macros: Useful when you need to glue code around a set of forms. Like when you need to ensure that resources are reclaimed (closing files) or the C++ "protocol" style using protected virtual methods to ensure specific pre- and post-processing.
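For instance, a resource-reclaiming macro might look like the following hypothetical sketch (Clojure's built-in with-open does exactly this job; the file path is just an example):

(require '[clojure.java.io :as io])

;; glue try/finally around the caller's forms so the file always closes
(defmacro with-reader [[name path] & body]
  `(let [~name (io/reader ~path)]
     (try
       ~@body
       (finally (.close ~name)))))

;; the reader is closed even if the body throws
(with-reader [r "/tmp/data.txt"]
  (println (.readLine r)))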
And, finally, don't hesitate to return a lambda to encapsulate internal machinery. That's probably the best way to implement an iterator ("let over lambda" style.)
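A hypothetical sketch of that "let over lambda" iterator idea, written here in Clojure (closing over an atom instead of a CL lexical variable):

(defn make-iterator [coll]
  (let [remaining (atom (seq coll))]   ; private state, visible only to the fn below
    (fn []
      (when-let [[x & more] @remaining]
        (reset! remaining more)
        x))))

(def next-item (make-iterator [:a :b :c]))
(next-item) ;=> :a
(next-item) ;=> :b
(next-item) ;=> :c
(next-item) ;=> nil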
Hope this gets you started.
I have been reading about immutable data structures and understand that they make change detection easy. Quite often I hear that they make application maintenance simpler and provide an easy-to-understand programming model.
I need help understanding how they simplify the job.
The Clojure community has embraced immutability and it is an eye opener. The best I can do is send you to the source: Rich Hickey's essay on State and his talk The Value of Values. Rich explains how separating the concept of a variable into three distinct concepts: identity, state, and value helps you model your system and reason about it.
It boils down to this: in your programming model, you should only allow things to change if they change in the system you are trying to model. Otherwise you are adding moving parts (mutable variables and objects) to a model that doesn't need them. This makes the model harder to understand (especially as it evolves over time) but has little or no benefit.
Even though reading helps, the only way to grok this is to program in a language that takes immutability as a default until you realize how most of the systems you model actually have only a handful of things that change instead of pages and pages of mutable variables.
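In Clojure that separation of identity, state, and value is tangible. A small sketch (hypothetical account example):

;; an identity (the atom) whose state is, at any instant, an immutable value
(def account (atom {:balance 100}))

(def before @account)                ; capture the current value; it can never change

(swap! account update :balance + 50) ; transition the identity to a new value

before   ;=> {:balance 100}, observers of the old value are unaffected
@account ;=> {:balance 150}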
Immutability is certainly more embraced in functional languages than in imperative ones, even if you can have a Java programming style that limits mutability (see this for immutability in Java). That said, I will just comment on [functional/immutability] and [object/mutability].
I'm a Clojure fan and find functional programming really powerful, but...
Maybe I spent too much time with C++ and Java and not enough with Lisp and Clojure, but I reckon the simpler-maintenance argument has yet to be proven by facts. I'm not sure there are reliable surveys on the actual cost of maintenance in big production systems, with data on the technology used and the associated costs.
Certainly, in terms of LOC, a language like Clojure is more focused and concise than Java. Hence you can say that less code leads to less maintenance, but I think the functional style gives really compact code that needs very focused attention to fully understand what a function is doing, compared to the imperative style, which is more verbose but kind of straightforward. One big advantage of functional programming associated with immutability is the ability to isolate a function and experiment with it without needing to drag in a heavy context of satellite objects or build a bunch of mocks, which is very often the case with OO languages. Experimentation aside, a pure function won't modify its arguments, which eases the fear of unintentionally breaking some piece of code outside the scope of the function.
But putting aside the merits of functional/immutability over OOP/mutability, in terms of maintenance my experience leads me to think that the main issue is not the technology but the design, the code quality, and the evolution of that code over time, even when the initial code was of good quality. By "good", I mean that the code respects style conventions (like basic naming), manages complexity, and has a sensible test harness in a continuous (or at least automated) build environment.
Then the question becomes: is there a paradigm (functional/immutability, object-oriented/mutability) that enforces better design and better code? My feeling is that functional languages are the land of computer-science enthusiasts, while OOP is more mainstream. Is that because OOP is easier to apprehend, or is it just a matter of education? And then, in order to maintain a system in the long run, should one go for a "clever" functional environment with few people able to tackle it, or some mainstream OO technology (with its unsafeness or permissiveness) but lots of people having some knowledge of it?
Certainly the solution is to choose the right technologies (plural) with the right, motivated people...
I am learning Clojure and I really am loving some of its features. The time is coming to think of some real "pet projects" and I realize I'm not sure how to actually use Clojure.
I see many web and templating frameworks (e.g. Compojure), but somehow I'm in doubt on whether it's worth it. I feel that in the long run it can't serve the needs of real world applications which you address with Spring, Hibernate and some pieces of the Java EE stack.
On the other hand, I see great potential in the concurrency features but I'm short on ideas on how to really use them.
Enough background, my questions are:
What are the feasible applications of Clojure and functional programming? What idea for a pet project can you suggest that wouldn't just be rewriting the stuff I did with OO/Java EE in a different syntax? I'm looking for something that really exploits Clojure's potential and leads to a solution which feels a lot better (not just in syntax) than OO/structural programming.
Is it common, or at least reasonable, to mix Clojure and Java? I mean either of using Clojure for tiny libraries in 95% Java projects, or building Java apps on top of the core written in Clojure.
Edit: Thanks for all the great answers. They're all really inspiring. So if you have anything else to add, go ahead and don't be put off by the fact that one has been accepted.
In answer to the "background" part of the question:
I think you should read Jörg W. Mittag's answer to an SO question entitled "Real world Haskell programming". He makes a number of excellent points. Read on for my take on the FP in the real world issue; scroll past the horizontal line for answers to the two actual questions.
There are a number of FP-centric companies which seem to be really good at what they're doing; for some examples, google Jane Street (OCaml), Galois (Haskell), FlightCaster (Clojure for the backend heavy lifting; I seem to remember reading that their frontend is currently done in Rails). Supposedly automated trading strategies are often coded in FP-oriented languages; that would indeed make perfect sense, although I have no inside data to confirm it. For additional examples to do with Clojure, see the list of companies on the success stories page.
Some people seem to be enjoying a degree of success in addressing the needs of real world applications in Rails, Django etc. It would appear that they feel no need to touch J2EE & friends. Not that these have much to do with FP, but they are like FP in that they're nothing like the "Enterprise Languages" of the present.
As for the two actual questions:
Why not just pick up whatever it is you've last been thinking to do and do it in Clojure? Obviously anything can be done in Java (and most things probably have been), but a leaner language might make the product cleaner, the experience more pleasant and less time consuming etc.
About mixing Clojure and Java -- I've seen a decent amount of Clojure code using a couple of classes coded directly in Java (for whatever reason). I've tried going the other way around myself and it's a bit of a pain in that it's much simpler to work with interface inheritance than class inheritance in Clojure, unexpected coupling in the Java code can seriously interfere with the ability of the Clojure code to do things in the most natural way etc. Still, it's entirely possible to extend a Java programme in Clojure and it seems like a particularly safe & sane way of experimenting with it for the worried Java developer.
Functional programming can be applied to almost any task. Web applications, scientific applications, games, you name it.
It is very common to mix Clojure and Java, since Clojure does not have many dedicated libraries for things like I/O or networking.
Organizations that already have a lot of Java code can use Clojure for small subsections of their Java projects.
For new projects, it is usually more effective to use Clojure as the high-level driver language, calling Java libraries where necessary.
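A tiny, hypothetical illustration of Clojure as the high-level driver calling plain Java classes:

(import '(java.util UUID)
        '(java.io File))

;; constructors, static methods, and instance methods are all directly callable
(let [f (File. (System/getProperty "java.io.tmpdir")
               (str "scratch-" (UUID/randomUUID) ".txt"))]
  (.createNewFile f)
  (println (.getAbsolutePath f)))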
I have been working on a small web application using Clojure, and while there is nothing special about the application that could not have been done in a different language, the experience of writing it has been completely different. I have written web apps using ASP.net and moving to Clojure was less about learning the different syntax and more about learning a different way to think and program. Having to learn a different way to think will occur regardless of the project you choose to work on, so I would worry less about finding the perfect functional project and more about finding something you just want to work on.
I think the answer to this has a great deal to do with the context your project is embedded in, and the constraints that imposes on you. Absent social factors I think Clojure is likely at least as "good" a language as Java is for any problem, with the possible exception of cases where you need the last bit of performance. And even in those cases things are not nearly as simple as they seem. For one thing some future version of Clojure can probably, in the theoretical limit, be compiled to bytecode that is as "fast" as what Java is compiled into (assuming a bit more work from the programmer at bottlenecks.) More importantly, optimization is a multi-factorial problem, and one in which programmer productivity and the flexibility of code factors heavily. So while there is a sense in which it would be accurate to say that Clojure is slower than Java, that sense might not be the important one when discussing the performance of a particular application.
So I'd say that if you disregard social factors, Clojure's use cases are close to a superset of Java's. I wouldn't try to write a Linux kernel module in Clojure, though...
Of course, it's true that not all problems have equally natural solutions in functional languages. But people have come up with some interesting ways of dealing with some of the cases where FP seems to map badly to the domain, and anyway Clojure actually offers you enough escape hatches from pure FP that if you really feel the need to write part of your program in an imperative style you can (though of course you give up some of the benefits of Clojure in that case.) In the worst case you could use Clojure to drive the Java library in much the same way that you would in Java... it's hard to imagine a case where that would be a good idea, but in most cases that would not be markedly inferior to just using Java, and in many it might be better.
I'm still a neophyte at Clojure, though I've been programming in CL and Scheme for a long time, and I spent about five years writing Java for a living. But I would probably prefer Clojure to Java for just about anything even without knowing it quite as well, as long as there were no social factors involved.
It would be a mistake to dismiss social factors though. I've been a Lisp programmer long enough to have a finely honed instinct for how well a Lisp will work in a given context. I've introduced Lisp to commercial settings where it has been a big win, and I've introduced it to settings where it really wasn't. I'd think long and hard about staking your career on successfully transitioning a team of programmers to any Lisp, Clojure included, particularly if they are not too keen on the idea.
Just to give you an idea of what I think Clojure might be useful for, I am currently writing a lot of poker-related code in Clojure. Some of it is pretty simple stuff (finding the best five card hand you can make from seven cards) and some of it is a bit more interesting (looking at someone's playing history and extracting meaningful trends from it using a few heuristics and some basic statistics.) None of it requires much in the way of Clojure's sophisticated concurrency mechanisms, but it is still much nicer (for me at least) in Clojure than it would be in, say, Java.
There are certainly some other cases that someone might describe where Clojure wins big because of its sophisticated mechanisms for managing concurrency, etc. I am aiming at something more modest- I am just pointing out that even if you don't need those mechanisms you might find Clojure a very congenial language for general purpose programming, albeit one that requires you to rethink how you abstract things if you're coming from an imperative/OO background. And hey, if you need the concurrency mechanisms (as you might, the way things are going), at least you already know Clojure.
I like writing game programs when I learn a new language.
I am in the process of learning Clojure and started writing a Spider solitaire player. If you have never played Spider, don't start; it is very addictive :-). See http://www.spidersolitaire.org/.
In writing this game, I am getting to use several things that I want to learn: functional programming, concurrency, Java-interop (for Swing), etc.
I have also started writing a Bejeweled player (http://www.popcap.com/games/free/bejeweled2), but have run into a problem finding the definitive rules for scoring the game.
I find myself always trying to fit everything into the OOP methodology, when I'm coding in C/C++. But I realize that I don't always have to force everything into this mold. What are some pros/cons for using the OOP methodology versus not? I'm more interested in the pros/cons of NOT using OOP (for example, are there optimization benefits to not using OOP?). Thanks, let me know.
Of course it's very easy to explain a million reasons why OOP is a good thing. These include: design patterns, abstraction, encapsulation, modularity, polymorphism, and inheritance.
When not to use OOP:
Putting square pegs in round holes: Don't wrap everything in classes when they don't need to be. Sometimes there is no need and the extra overhead just makes your code slower and more complex.
Object state can get very complex: There is a really good quote from Joe Armstrong, who invented Erlang:

"The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."
Your code is already not OOP: It's not worth porting your code if your old code is not OOP. There is a quote from Richard Stallman in 1995:

"Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program."

Portability with C: You may need to export a set of functions to C. Although you can simulate OOP in C by making a struct and a set of functions whose first parameter takes a pointer to that struct, it isn't always natural.

You may find more reasons in this paper entitled Bad Engineering Properties of Object-Oriented Languages.
Wikipedia's Object Oriented Programming page also discusses some pros and cons.
One school of thought with object-oriented programming is that you should have all of the functions that operate on a class as methods on the class.
Scott Meyers, one of the C++ gurus, actually argues against this in his article How Non-Member Functions Improve Encapsulation.
He basically says, unless there's a real compelling reason to, you should keep the function SEPARATE from the class. Otherwise the class can turn into this big bloated unmanageable mess.
Based on experiences in a previous large project, I totally agree with him.
A benefit of non-OOP functionality is that it often makes exporting your functionality to different languages easier. For example, a simple DLL containing only functions is much easier to use from C#: you can use P/Invoke to simply call the C++ functions. So in this sense it can be useful for writing extremely time-critical algorithms that fit nicely into single/few function calls.
OOP is used a lot in GUI code, computer games, and simulations. Windows should be polymorphic - you can click on them, resize them, and so on. Computer game objects should be polymorphic - they probably have a location, a path to follow, they might have health, and they might have some AI behavior. Simulation objects also have behavior that is similar, but breaks down into classes.
For most things though, OOP is a bit of a waste of time. State usually just causes trouble, unless you have put it safely in the database where it belongs.
I suggest you read Bjarne's Paper about Why C++ is not just an Object-Oriented Programming Language
If we consider, for a moment, not object-orientation itself but one of the keystones of object-orientation: encapsulation.

It can be shown that change-propagation probability cannot increase with distance from the change: if A depends on B and B depends on C, and we change C, then the probability that A will change cannot be larger than the probability that B will change. If B is a direct dependency on C and A is an indirect dependency on C, then, more generally, to minimise the potential cost of any change in a system we must minimise the potential number of direct dependencies.

The ISO defines encapsulation as the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.

We use encapsulation to minimise the number of potential dependencies with the highest change-propagation probability. Basically, encapsulation mitigates the ripple effect.

Thus one reason not to use encapsulation is when the system is so small or so unchanging that the cost of potential ripple effects is negligible. This is also, therefore, a case where OO can be forgone without potentially costly consequences.
Well, there are several alternatives. Non-OOP code in C++ may instead be:
C-style procedural code, or
C++-style generic programming
The only advantages to the first are the simplicity and backwards-compatibility. If you're writing a small trivial app, then messing around with classes is just a waste of time. If you're trying to write a "Hello World", just call printf already. Don't bother wrapping it in a class. And if you're working with an existing C codebase, it's probably not object-oriented, and trying to force it into a different paradigm than it already uses is just a recipe for pain.
For the latter, the situation is different, in that this approach is often superior to "traditional OOP".
Generic programming gives you greater performance (among other things because you often avoid the overhead of vtables, and because with less indirection the compiler is better able to inline), better type safety (because the exact type is known, rather than being hidden behind an interface), and often cleaner and more concise code as well (STL iterators and algorithms enable much of this without using a single instance of runtime polymorphism or virtual functions).
OOP is little more than an aging buzzword. A methodology that everyone misunderstood (the version supported by C++ and Java has little to do with what OOP originally meant, as in Smalltalk), and then pretended was the holy grail. There are aspects of it that are useful, certainly, but it is often not the best approach for designing an application.
Rather, express the overall logic by other means, for example generic programming, and when you need a class to encapsulate some simple concept, by all means design it according to OOP principles.
OOP is just a tool among many. The goal is not to write OOP code, but to write good code. Sometimes, the way to do this is by using OOP principles, but often, you can get better code using generic programming principles, or functional programming.
It is a very project-dependent decision. My general feel for OOP is that it's useful for organizing large projects that involve multiple components. One area where I find OOP especially pointless is school assignments. Excepting those specifically designed to teach OOP concepts or large software-design concepts, many of my assignments, specifically those in more algorithm-centered classes, were best suited to non-OOP design.
So specifically: smaller projects that are not likely to grow large, and projects that center around a single algorithm, seem to be non-OOP candidates in my books. Also, if you can write the specification as a linear set of steps, e.g. with no interactive GUI or state to maintain, that would also be an opportunity.
Of course, if you're required to use an OOP design or an OOP toolkit, or if you have well-defined 'objects' in your spec, or if you need the features of polymorphism, there are plenty of reasons to use it; the above just seem to be indicators of when it would be simple not to.
Just my $0.02.
Having an Ada background, I develop in C in terms of packages containing data and their associated functions. This makes the code very modular, with pieces that can be taken apart and reused on other projects. I don't feel the need to use OOP.
When I develop in Objective-C, objects are the natural container for data and code. I still develop with more or less the package concept in mind with some new cool features.
I used to be an OOP fanboy... then I realized that using functions, generics, and callbacks can often make a more elegant and change-friendly solution in C++ than classes and virtual functions.
Other big names realized it too: http://harmful.cat-v.org/software/OO_programming/
IMHO, I have a feeling that the OOP concept does not really suit the needs of big data, as OOP assumes everything is kept in memory (the concept of objects and member variables). This often results in memory-hungry, heavyweight applications when OOP is used, for example, for processing large images. Instead, the simplicity of C combined with intensive parallel I/O can make applications more efficient and easier to implement. It is the year 2019 as I write this message... everything may change in a year! :)
In my mind it comes down to what kind of model suits the problem at hand. It seems to me that OOP is best suited to coding GUI programs, in that the data and functionality for a graphical object are easily bundled together. Other problems (such as a web server, as an example off the top of my head) might be more easily modeled with a data-centric approach, where there's no strong advantage to having a method and its data near each other.
tl;dr depends on the problem.
I'd say the greatest benefit of C++ OOP is inheritance and polymorphism (virtual functions, etc.).
This allows for code reuse and extensibility.
C++: use OOP. C: no, with certain exceptions.
In C++ you should use OOP. It's a nice abstraction and it's the tool you are given. You either use it or leave it in the box where it can't help. You don't use the power saw for everything but I would read the manual and have it ready for the right job.
In C, it's a more difficult call. While you can certainly write arbitrarily object-oriented code in C, it's enough of a pain that you immediately find yourself fighting the language in order to use it. You may be more productive dropping the doesn't-fit-so-well design pattern and programming as C was intended to be used.
Furthermore, every time you make an array of function pointers or something in an OOP-in-C design pattern, you sever almost completely all visible links in the inheritance chain, making the code hard to maintain. In real OOP languages, there is an obvious chain of derived classes, often analyzed and documented for you. (mmm, javadoc.) Not so in OOP-in-C, and the tools available won't be able to see it.
So, I would argue in general against OOP in C. For a really complex program, you may well need the abstraction, and then you will have to do it despite needing to fight the language in the process and despite making the program quite hard to follow by anyone other than the original author.
But if you knew the program was going to become that complicated, you shouldn't have written it in C in the first place...
In C, there are times when I 'emulate' the object-oriented approach by defining some sort of constructor with granular control over things like callbacks, when running several instances of it.
For instance, let's say I have some spiffy event-handler library and I know that down the road I'm going to need many allocated copies:
So I would have (in C):

/* construct two handlers; each gets its own callback and descriptor */
MyEvent *ev1 = new_eventhandler();
set_event_callback_func(ev1, callback_one);
ev1->setfd(fd1);

MyEvent *ev2 = new_eventhandler();
set_event_callback_func(ev2, callback_two);
ev2->setfd(fd2);

/* ... later, dispatch events; each instance fires its own callback ... */

destroy_eventhandler(ev1);
destroy_eventhandler(ev2);
Obviously, I would later do something useful with that, like handling received events in the two respective callback functions. I'm not going to elaborate on the method of typing function pointers and the structures that hold them, nor on what would go on in the 'constructor', because it's pretty obvious.
I think this approach works for more advanced interfaces where it's desirable to allow the user to define their own callbacks (and change them on the fly), or when working on complex non-blocking I/O services.
Otherwise, I much prefer a more procedural / functional approach.
Probably an unpopular idea but I think you should stick with non-OOP unless it adds something useful. In most practical problems OOP is useful but if I'm just playing with an idea I start writing non-object code and put functions and data into classes if it becomes useful.
Of course I still use other objects in my code (std::vector et al) and I use namespaces to help organise my functions but why put code into objects until it is useful? Equally don't shy away from free functions in an OO solution.
The question is tricky because OOP encompasses several concepts: object encapsulation, polymorphism, inheritance, etc. It's easy to take those ideas too far. Here's a concrete example:
When C++ first caught on, zillions of string classes sprung into being. Everything you could possibly imagine doing to a string (upcasing, downcasing, trimming, tokenizing, parsing, etc.) was a member function of some string class.
Notice, though, that std::strings from the STL don't have all these methods. STL is object-oriented--the state and implementation details of a string object are well encapsulated, only a small, orthogonal interface is exposed to the world. All the crazy manipulations that people used to include as member functions are now delegated to non-member functions.
This is powerful, because these functions can now work on any string class that exposes the same interface. If you use STL strings for most things and a specialty version tuned to your program's idiosyncrasies, you don't have to duplicate member functions. You just have to implement the basic string interface and then you can re-use all those crazy manipulations.
Some people call this hybrid approach generic programming. It's still object-oriented programming, but it moves away from the "everything is a member-function" mentality that a lot of people associate with OOP.
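Clojure, for what it's worth, takes the same route: a small core interface (the seq abstraction) plus a large library of free functions that work on anything implementing it. A quick illustration (outputs shown in comments):

(require '[clojure.string :as str])

;; the same free functions work across vectors, sets, and strings,
;; because each can present the seq interface
(map str/upper-case ["a" "b"])  ;=> ("A" "B")
(filter odd? #{1 2 3})          ;=> (1 3), order unspecified for sets
(take 2 "hello")                ;=> (\h \e)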
Yesterday, Rich pulled the 'new' branch of Clojure into master. We are now embracing the beauty that is deftype and defprotocol. Of course, I, coming from Haskell, am very tempted to define types like I would in Haskell, which would be for virtually everything short of a throwaway tuple, but I don't think it works like that in the Clojure world ;). In the Common Mistakes thread for Clojure, one guy mentioned that overusing structs was a mistake he made when he first started, coming from OOP. Since deftypes seem to be similar to structs, I was wondering if the same thing applies there.
So, my question is: when is it a good time to use deftype?
One area where deftype shines is performance. Methods of protocols are very fast on a deftype. Also, a deftype may have primitive fields, so there is no more boxing when crunching numbers...
Another area might be Java interoperation, since a deftype can implement interfaces and (if AOT-compiled) has a named class.
In general, the basic idea is to define abstractions with protocols and to implement them with deftype.
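A minimal hypothetical sketch of that abstraction/implementation split (Stack and VecStack are example names of mine):

;; the abstraction
(defprotocol Stack
  (push-item [s x])
  (pop-item  [s]))   ; returns [top new-stack]

;; the implementation: protocol methods on a deftype dispatch fast
(deftype VecStack [items]
  Stack
  (push-item [_ x] (VecStack. (conj items x)))
  (pop-item  [_]   [(peek items) (VecStack. (pop items))]))

(def s (push-item (push-item (VecStack. []) 1) 2))
(first (pop-item s)) ;=> 2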
Rich details his motivation here: http://www.assembla.com/wiki/show/clojure/Datatypes
I find myself attached to a project to integrate an interpreter into an existing application. The language to be interpreted is a derivative of Lisp, with application-specific builtins. Individual 'programs' will be run batch-style in the application.
I'm surprised that over the years I've written a couple of compilers, and several data-language translators/parsers, but I've never actually written an interpreter before. The prototype is pretty far along, implemented as a syntax tree walker, in C++. I can probably influence the architecture beyond the prototype, but not the implementation language (C++). So, constraints:
implementation will be in C++
parsing will probably be handled with a yacc/bison grammar (it is now)
suggestions of full VM/interpreter ecosystems like NekoVM and LLVM are probably not practical for this project. Self-contained is better, even if this sounds like NIH.
What I'm really looking for is reading material on the fundamentals of implementing interpreters. I did some browsing of SO, and another site known as Lambda the Ultimate, though they are more oriented toward programming language theory.
Some of the tidbits I've gathered so far:
Lisp in Small Pieces, by Christian Queinnec. The person recommending it said it "goes from the trivial interpreter to more advanced techniques and finishes presenting bytecode and 'Scheme to C' compilers."
NekoVM. As I've mentioned above, I doubt that we'd be allowed to incorporate an entire VM framework to support this project.
Structure and Interpretation of Computer Programs. Originally I suggested that this might be overkill, but having worked through a healthy chunk, I agree with #JBF. Very informative, and mind-expanding.
On Lisp by Paul Graham. I've read this, and while it is an informative introduction to Lisp principles, is not enough to jump-start constructing an interpreter.
Parrot Implementation. This seems like a fun read. Not sure it will provide me with the fundamentals.
Scheme from Scratch. Peter Michaux is attacking various implementations of Scheme, from a quick-and-dirty Scheme interpreter written in C (for use as a bootstrap in later projects) to compiled Scheme code. Very interesting so far.
Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages, recommended in the comment thread for Books On Creating Interpreted Languages. The book contains two chapters devoted to the practice of building interpreters, so I'm adding it to my reading queue.
New (and yet Old, i.e. 1979): Writing Interactive Compilers and Interpreters by P. J. Brown. This is long out of print, but is interesting in providing an outline of the various tasks associated with the implementation of a Basic interpreter. I've seen mixed reviews for this one but as it is cheap (I have it on order used for around $3.50) I'll give it a spin.
So how about it? Is there a good book that takes the neophyte by the hand and shows how to build an interpreter in C/C++ for a Lisp-like language? Do you have a preference for syntax-tree walkers or bytecode interpreters?
To answer #JBF:
the current prototype is an interpreter, and it makes sense to me as we're accepting a path to an arbitrary code file and executing it in our application environment. The builtins are used to affect our in-memory data representation.
it should not be hideously slow. The current tree walker seems acceptable.
The language is based on Lisp, but is not Lisp, so no standards compliance required.
As mentioned above, it's unlikely that we'll be allowed to add a full external VM/interpreter project to solve this problem.
To the other posters, I'll be checking out your citations as well. Thanks, all!
Short answer:
The fundamental reading list for a Lisp interpreter is SICP. I would not at all call it overkill; if you feel you are overqualified for the first parts of the book, jump to chapter 4 and start interpreting away (although I feel this would be a loss, since chapters 1-3 really are that good!). A toy sketch of that style of evaluator appears at the end of this answer.
Add LISP in Small Pieces (LISP from now on), chapters 1-3. Especially chapter 3 if you need to implement any non-trivial control forms.
See this post by Jens Axel Søgaard on a minimal self-hosting Scheme: http://www.scheme.dk/blog/2006/12/self-evaluating-evaluator.html .
A slightly longer answer:
It is hard to give advice without knowing what you require from your interpreter.
does it really really need to be an interpreter, or do you actually need to be able to execute lisp code?
does it need to be fast?
does it need standards compliance? Common Lisp? R5RS? R6RS? Any SRFIs you need?
If you need anything more fancy than a simple syntax tree walker I would strongly recommend embedding a fast Scheme subsystem. Gambit Scheme comes to mind: http://dynamo.iro.umontreal.ca/~gambit/wiki/index.php/Main_Page .
If that is not an option, chapter 5 in SICP and chapters 5 onward in LISP target compilation for faster execution.
For faster interpretation I would take a look at the most recent JavaScript interpreters/compilers. There seems to be a lot of thought going into fast JavaScript execution, and you can probably learn from them. V8 cites two important papers: http://code.google.com/apis/v8/design.html and SquirrelFish cites a couple: http://webkit.org/blog/189/announcing-squirrelfish/ .
There are also the canonical Scheme papers: http://library.readscheme.org/page1.html for the RABBIT compiler.
If I engage in a bit of premature speculation, memory management might be the tough nut to crack. Nils M Holm has published a book "Scheme 9 from empty space" http://www.t3x.org/s9fes/ which includes a simple stop-the-world mark and sweep garbage collector. Source included.
John Rose (of newer JVM fame) has written a paper on integrating Scheme to C: http://library.readscheme.org/servlets/cite.ss?pattern=AcmDL-Ros-92 .
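As promised above, here is a taste of what the SICP-style tree walker at the heart of all this looks like: a toy evaluator for a tiny Lisp subset (numbers, variables, if, single-argument fn, and application). It is a hypothetical sketch in Clojure; the thread's target is C++, but the shape is the same:

(defn t-eval [expr env]
  (cond
    (number? expr) expr                  ; self-evaluating
    (symbol? expr) (env expr)            ; variable lookup (env is a map)
    :else
    (let [[op & args] expr]
      (case op
        if (let [[c t e] args]
             (if (t-eval c env) (t-eval t env) (t-eval e env)))
        fn (let [[[param] body] args]    ; closure: capture env at creation
             (fn [v] (t-eval body (assoc env param v))))
        ;; default: application of a one-argument function
        ((t-eval op env) (t-eval (first args) env))))))

(t-eval '((fn [x] (if x 1 2)) 42) {}) ;=> 1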
Yes on SICP.
I've done this task several times and here's what I'd do if I were you:
Design your memory model first. You'll want a GC system of some kind. It's WAAAAY easier to do this first than to bolt it on later.
Design your data structures. In my implementations, I've had a basic cons box with a number of base types: atom, string, number, list, bool, primitive-function.
Design your VM and be sure to keep the API clean. My last implementation had this as a top-level API (forgive the formatting - SO is pooching my preview)
ConsBoxFactory &GetConsBoxFactory() { return mConsFactory; }
AtomFactory    &GetAtomFactory()    { return mAtomFactory; }
Environment    &GetEnvironment()    { return mEnvironment; }

t_ConsBox *Read(iostream &stm);
t_ConsBox *Eval(t_ConsBox *box);
void       Print(basic_ostream<char> &stm, t_ConsBox *box);
void       RunProgram(char *program);
void       RunProgram(iostream &stm);
RunProgram isn't needed - it's implemented in terms of Read, Eval, and Print. REPL is a common pattern for interpreters, especially LISP.
A ConsBoxFactory is available to make new cons boxes and to operate on them. An AtomFactory is used so that equivalent symbolic atoms map to exactly one object. An Environment is used to maintain the binding of symbols to cons boxes.
Most of your work should go into these three steps. Then you will find that your client code and support code starts to look very much like LISP too:
t_ConsBox *ConsBoxFactory::Cadr(t_ConsBox *list)
{
    return Car(Cdr(list));
}
You can write the parser in yacc/lex, but why bother? Lisp has an incredibly simple grammar, and a scanner/recursive-descent parser pair for it is about two hours of work (a tiny sketch follows this list). The worst part is writing predicates to identify the tokens (i.e., IsString, IsNumber, IsQuotedExpr, etc.) and then writing routines to convert the tokens into cons boxes.
Make it easy to write glue into and out of C code and make it easy to debug issues when things go wrong.
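Here is the promised sketch of how small that scanner/parser pair really is, written in Clojure for brevity (tokenize, atomize, and parse are hypothetical names of mine; the two-function shape ports directly to C++):

(require '[clojure.string :as str])

;; scanner: pad parens with spaces, split on whitespace
(defn tokenize [s]
  (remove str/blank?
          (-> s
              (str/replace "(" " ( ")
              (str/replace ")" " ) ")
              (str/split #"\s+"))))

(defn atomize [tok]                      ; IsNumber / IsSymbol, in effect
  (if (re-matches #"-?\d+" tok)
    (Long/parseLong tok)
    (symbol tok)))

;; recursive descent: returns [parsed-form remaining-tokens]
(defn parse [[tok & more]]
  (if (= tok "(")
    (loop [forms [] toks more]
      (if (= (first toks) ")")
        [(seq forms) (rest toks)]
        (let [[f r] (parse toks)]
          (recur (conj forms f) r))))
    [(atomize tok) more]))

(first (parse (tokenize "(+ 1 (* 2 3))"))) ;=> (+ 1 (* 2 3))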
The Kamin Interpreters from Samuel Kamin's book Programming Languages, An Interpreter-Based Approach, translated to C++ by Timothy Budd. I'm not sure how useful the bare source code will be, as it was meant to go with the book, but it's a fine book that covers the basics of implementing Lisp in a lower-level language, including garbage collection, etc. (That's not the focus of the book, which is programming languages in general, but it is covered.)
Lisp in Small Pieces goes into more depth, but that's both good and bad for your case. There's a lot of material on compiling and such that won't be relevant to you, and its simpler interpreters are in Scheme, not C++.
SICP is good, definitely. Not overkill, but of course writing interpreters is only a small fraction of the book.
The JScheme suggestion is a good one, too (and it incorporates some code by me), but won't help you with things like GC.
I might flesh this out with more suggestions later.
Edit: A few people have said they learned from my awklisp. This is admittedly kind of a weird suggestion, but it's very small, readable, actually usable, and unlike other tiny-yet-readable toy Lisps it implements its own garbage collector and data representation instead of relying on an underlying high-level implementation language to provide them.
Check out JScheme from Peter Norvig. I found this amazingly simple to understand and port to C++. Uh, dunno about using Scheme as a scripting language though - teaching it to juniors is cumbersome and feels dated (helloooo 1980's).
I would like to extend my recommendation for Programming Languages: Application and Interpretation. If you want to write an interpreter, that book takes you there by a very short path. If you work through it, writing the code as you read and doing the exercises, you end up with a bunch of interpreters that are similar but different (one is eager, the other lazy; one is dynamic, the other has some typing; one has dynamic scope, the other lexical scope, etc.).