Backstory: I've made a lot of large and relatively complex projects in Java and have a lot of experience in embedded C programming. I've gotten acquainted with Scheme and CL syntax and written some simple programs with Racket.
Question: I've planned a rather big project and want to do it in Racket. I've heard a lot of "if you 'get' Lisp, you will become a better programmer", etc. But every time I try to plan or write a program, I still "decompose" the task into familiar stateful objects with interfaces.
Are there "design patterns" for Lisp? How do I "get" the Lisp-family "mojo"? How do I escape the object-oriented constraints on my thinking? How do I apply functional programming ideas boosted by powerful macro facilities? I tried studying the source code of big projects on GitHub (Light Table, for instance) and got more confused rather than enlightened.
EDIT 1 (less ambiguous questions): Is there good literature on the topic that you can recommend, or are there good open-source projects written in CL/Scheme/Clojure that are of high quality and can serve as good examples?
A number of "paradigms" have come into fashion over the years:
structured programming, object oriented, functional, etc. More will come.
Even after a paradigm falls out of fashion, it can still be good at solving the particular problems that first made it popular.
So for example using OOP for a GUI is still natural. (Most GUI frameworks have a bunch of states modified by messages/events.)
Racket is multi-paradigm. It has a class system. I rarely use it,
but it's available when an OO approach makes sense for the problem.
Common Lisp has multimethods and CLOS. Clojure has multimethods and Java class interop.
And anyway, basic stateful OOP ~= mutating a variable in a closure:
#lang racket
;; My First Little Object
(define obj
  (let ([val #f])
    (match-lambda*
      [(list) val]
      [(list 'double) (set! val (* 2 val))]
      [(list v) (set! val v)])))
obj ;#<procedure:obj>
(obj) ;#f
(obj 42)
(obj) ;42
(obj 'double)
(obj) ;84
Is this a great object system? No. But it helps you see that the essence of OOP is encapsulating state with functions that modify it. And you can do this in Lisp, easily.
What I'm getting at: I don't think using Lisp is about being "anti-OOP" or "pro-functional". Instead, it's a great way to play with (and use in production) the basic building blocks of programming. You can explore different paradigms. You can experiment with ideas like "code is data and vice versa".
I don't see Lisp as some sort of spiritual experience. At most, it's like Zen, and satori is the realization that all of these paradigms are just different sides of the same coin. They're all wonderful, and they all suck. The paradigm pointing at the solution, is not the solution. Blah blah blah. :)
My practical advice: it sounds like you want to round out your experience with functional programming. If you must do this for the first time on a big project, that's challenging. But in that case, try to break your program into pieces that "maintain state" vs. "calculate things". The latter are where you can focus on "being more functional". Look for opportunities to write pure functions. Chain them together. Learn how to use higher-order functions. And finally, connect them to the rest of your application -- which can continue to be stateful and OOP and imperative. That's OK, for now, and maybe forever.
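For instance, here is a minimal Racket sketch of that split (the function and variable names are made up for illustration): pure helpers do the calculating, and a single stateful shell does the mutating and printing.

#lang racket
;; Pure functions: easy to test, chain, and reason about.
(define (normalize prices)            ; strings -> numbers
  (map string->number prices))

(define (total-price prices)          ; sum a list of numbers
  (for/sum ([p (in-list prices)]) p))

;; Stateful shell: the one place that mutates and does I/O.
(define running-total 0)
(define (checkout! raw-prices)
  (set! running-total (+ running-total (total-price (normalize raw-prices))))
  (printf "running total: ~a\n" running-total))

(checkout! '("1.50" "2.25"))          ; running total: 3.75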
A way to compare programming in OO vs Lisp (and "functional" programming in general) is to look at what each "paradigm" enables for the programmer.
One viewpoint in this line of reasoning, which looks at representations of data, is that the OO style makes it easier to extend data representations, but makes it more difficult to add operations on data. In contrast, the functional style makes it easier to add operations but harder to add new data representations.
Concretely, if there is a Printer interface, with OO, it's very easy to add a new HPPrinter class that implements the interface, but if you want to add a new method to an existing interface, you must edit every existing class that implements the interface, which is more difficult and may be impossible if the class definitions are hidden in a library.
In contrast, with the functional style, functions (instead of classes) are the unit of code, so one can easily add a new operation (just write a function). However, each function is responsible for dispatching according to the kind of input, so adding a new data representation requires editing all existing functions that operate on that kind of data.
Determining which style is more appropriate for your domain depends on whether you are more likely to add representations or operations.
This is a high-level generalization of course, and each style has developed solutions to cope with the tradeoffs mentioned (e.g. mixins for OO), but I think it still holds to a large degree.
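To make the trade-off concrete, here is a small Racket sketch (with hypothetical shape types): adding an operation is just a new function, while adding a data variant means touching every function that dispatches on the data.

#lang racket
;; Functional style: the data variants are fixed, operations are easy to add.
(struct circle (r))
(struct square (s))

;; Adding a new operation is just writing a new function...
(define (area shape)
  (match shape
    [(circle r) (* pi r r)]
    [(square s) (* s s)]))

(define (perimeter shape)
  (match shape
    [(circle r) (* 2 pi r)]
    [(square s) (* 4 s)]))

;; ...but adding a new variant (say, triangle) means editing every
;; existing function that matches on shapes.
(area (circle 1))   ; ~3.14159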
Here is a well-known academic paper that captured the idea 25 years ago.
Here are some notes from a recent course (I taught) describing the same philosophy.
(Note that the course follows the How to Design Programs curriculum, which initially emphasizes the functional approach, but later transitions to the OO style.)
edit: Of course this only answers part of your question and does not address the (more or less orthogonal) topic of macros. For that I refer to Greg Hendershott's excellent tutorial.
A personal view:
If you parameterise an object design in the names of the classes and their methods - as you might do with C++ templates - then you end up with something that looks quite like a functional design. In other words, functional programming does not make useless distinctions between similar structures because their parts go by different names.
My exposure has been to Clojure, which tries to steal the good bit from object programming (working to interfaces) while discarding the dodgy and useless bits (concrete inheritance and traditional data hiding).
Opinions vary about how successful this programme has been.
Since Clojure is expressed in Java (or some equivalent), not only can objects do what functions can do, there is a regular mapping from one to the other.
So where can any functional advantage lie? I'd say expressiveness. There are lots of repetitive things you do in programs that are not worth capturing in Java - who used lambdas before Java provided compact syntax for them? Yet the mechanism was always there.
And Lisps have macros, which have the effect of making all structures first class. And there's a synergy between these aspects that you will enjoy.
The "Gang of 4" design patterns apply to the Lisp family just as much as they do to other languages. I use CL, so this is more of a CL perspective/commentary.
Here's the difference: Think in terms of methods that operate on families of types. That's what defgeneric and defmethod are all about. You should use defstruct and defclass as containers for your data, keeping in mind that all you really get are accessors to the data. defmethod is basically your usual class method (more or less) from the perspective of an operator on a group of classes or types (multiple inheritance.)
You'll find that you'll use defun and define a lot. That's normal. When you do see commonality in parameter lists and associated types, then you'll optimize using defgeneric/defmethod. (Look for CL quadtree code on github, for an example.)
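For a rough feel of that style in Racket (which the question mentions), here is a sketch using Racket's generic interfaces; in CL the same shape of code would use defgeneric/defmethod. The type and method names are invented for illustration.

#lang racket
(require racket/generic)

;; One operation, several data types that each supply their own method.
(define-generics printable
  (render printable))

(struct text-doc (body)
  #:methods gen:printable
  [(define (render d) (text-doc-body d))])

(struct html-doc (body)
  #:methods gen:printable
  [(define (render d) (format "<p>~a</p>" (html-doc-body d)))])

(render (text-doc "hello"))   ; "hello"
(render (html-doc "hello"))   ; "<p>hello</p>"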
Macros: Useful when you need to glue code around a set of forms. Like when you need to ensure that resources are reclaimed (closing files) or the C++ "protocol" style using protected virtual methods to ensure specific pre- and post-processing.
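A minimal Racket sketch of the "glue code around a set of forms" idea (Racket already ships call-with-output-file, so treat this purely as an illustration of the pattern):

#lang racket
;; A macro that wraps arbitrary body forms with setup/teardown,
;; so the resource is reclaimed even if the body raises an error.
(define-syntax-rule (with-open-output-file (port path) body ...)
  (let ([port (open-output-file path #:exists 'replace)])
    (dynamic-wind
      void
      (lambda () body ...)
      (lambda () (close-output-port port)))))

(with-open-output-file (out "example.txt")
  (displayln "written, then the port is closed for us" out))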
And, finally, don't hesitate to return a lambda to encapsulate internal machinery. That's probably the best way to implement an iterator ("let over lambda" style.)
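For example, a closure-based iterator in Racket might look like this (just a sketch):

#lang racket
;; "Let over lambda": a closure hides a cursor and yields the next
;; element each time it is called.
(define (make-iterator lst)
  (let ([remaining lst])
    (lambda ()
      (if (null? remaining)
          'done
          (let ([head (car remaining)])
            (set! remaining (cdr remaining))
            head)))))

(define next! (make-iterator '(a b c)))
(next!)  ; 'a
(next!)  ; 'b
(next!)  ; 'c
(next!)  ; 'done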
Hope this gets you started.
I have been reading about immutable data structures and understand that they make change detection easy. Quite often, I also hear that they make application maintenance simpler and provide an easy-to-understand programming model.
I need help understanding how this simplifies the job.
The Clojure community has embraced immutability and it is an eye opener. The best I can do is send you to the source: Rich Hickey's essay on State and his talk The Value of Values. Rich explains how separating the concept of a variable into three distinct concepts: identity, state, and value helps you model your system and reason about it.
It boils down to this: in your programming model, you should only allow things to change if they change in the system you are trying to model. Otherwise you are adding moving parts (mutable variables and objects) to a model that doesn't need them. This makes the model harder to understand (especially as it evolves over time) but has little or no benefit.
Even though reading helps, the only way to grok this is to program in a language that takes immutability as a default until you realize how most of the systems you model actually have only a handful of things that change instead of pages and pages of mutable variables.
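A tiny Racket illustration of the value/identity split (shown in Racket here, though the same idea holds in Clojure):

#lang racket
;; Immutable update: the "old" value is still there; nothing observed it change.
(define inventory (hash 'apples 3 'pears 1))
(define restocked (hash-set inventory 'apples 10))

inventory   ; #hash((apples . 3) (pears . 1))  -- unchanged
restocked   ; #hash((apples . 10) (pears . 1)) -- a new value

;; If some part of the system really is an identity that changes over time,
;; model that one thing explicitly (e.g. with a box) and keep the rest as values.
(define current-inventory (box inventory))
(set-box! current-inventory restocked)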
Immutability is certainly more embraced in functional languages than in imperative ones, even if you can have a Java programming style that limits mutability (see this for immutability in Java). That said, I will just comment on [functional/immutability] and [object/mutability].
I'm a Clojure fan and find functional programming really powerful, but...
Maybe I spent too much time with C++ & Java and not enough with Lisp & Clojure, but I reckon that the simpler-maintenance argument has yet to be proven by facts. I'm not sure there are reliable surveys on the actual cost of maintenance in big production systems with data on the technology used and associated costs.
Certainly, in terms of LOC, a language like Clojure is far more focused and concise than Java. Hence you can say that less code leads to less maintenance, but I think the functional style gives much more compact code that demands very focused attention to fully understand what a function is doing, compared to the imperative style, which is more verbose but fairly straightforward. One big advantage of functional programming combined with immutability is the ability to isolate a function and experiment with it without needing to drag in a heavy context of satellite objects or build a bunch of mocks, which is very often the case with OO languages. Putting experimentation aside, a pure function won't modify its arguments, which eases the fear of unintentionally breaking some piece of code outside the scope of the function.
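For example, a pure function can be checked in isolation with nothing but the testing library (a Racket/rackunit sketch with made-up names):

#lang racket
(require rackunit)

;; A pure function can be exercised in isolation: no satellite objects,
;; no mocks, no setup of a surrounding context.
(define (apply-discount price rate)
  (* price (- 1 rate)))

(check-equal? (apply-discount 100 1/4) 75)
(check-equal? (apply-discount 0 1/2) 0)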
But, putting aside the merits of functional/immutability over OOP/mutability, in terms of maintenance my experience leads me to think that the technology is not the main issue; rather, it's the design, the code quality, and the evolution of that code over time, even when the initial version was of good quality. By "good", I mean that the code respects style conventions (like basic naming), manages complexity, and has a sensible test harness in a continuous (or at least automated) build environment.
Then the question becomes: is there a paradigm (functional/immutability, object-oriented/mutability) that enforces a better design and better code? My feeling is that functional languages are the land of computer-science enthusiasts, whereas OOP is more mainstream. Is that because OOP is easier to apprehend, or is it just a matter of education? But then, in order to maintain a system in the long run, should one go for a "clever" functional environment with few people able to tackle it, or some mainstream OO technology - with its unsafeness or permissiveness - but lots of people having some knowledge of it?
Certainly the solution is to choose the right technologies (plural) with the right, motivated people...
I know they are dialects of the same family of languages called Lisp, but what exactly are the differences? Could you give an overview, if possible, covering topics such as syntax, characteristics, features, and resources?
They all have a lot in common:
Dynamic languages
Strongly typed
Compiled
Lisp-style syntax, i.e. code is written as Lisp data structures (forms), with the most common pattern being function calls like: (function-name arg1 arg2)
Powerful macro systems that allow you to treat code as data and generate arbitrary code at runtime (often used to either "extend the language" with new syntax or create DSLs); see the short sketch after this list
Often used in a functional programming style, although they can accommodate other paradigms
Emphasis on interactive development with a REPL (i.e. you interactively develop in a running instance of the code)
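As a tiny illustration of the "code is data" point above (shown in Racket, but the idea is common to all three):

#lang racket
;; A program fragment is just a list you can build, inspect,
;; and (here, purely for illustration) evaluate.
(define expr '(+ 1 2 3))                ; a list: a symbol and three numbers
(first expr)                            ; '+
(define bigger (cons '* (rest expr)))   ; build new code from old: '(* 1 2 3)
(eval bigger (make-base-namespace))     ; 6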
Common Lisp distinctive features:
A powerful OOP subsystem (Common Lisp Object System)
Probably the best compilers (Common Lisp is the fastest Lisp according to http://benchmarksgame.alioth.debian.org/u64q/which-programs-are-fastest.html, although there isn't much in it...)
Clojure distinctive features:
Largest library ecosystem, since you can directly use any Java libraries
Vectors [] and maps {} used as standard in addition to the standard lists () - besides the general usefulness of vectors and maps, some believe this is an innovation that makes code generally more readable
Greater emphasis on immutability and lazy functional programming, somewhat inspired by Haskell
Strong concurrency capabilities supported by software transactional memory at the language level (worth watching: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey)
Scheme distinctive features:
Arguably the simplest and easiest to learn Lisp
Hygienic macros (see http://en.wikipedia.org/wiki/Hygienic_macro) - elegantly avoids the problems with accidental symbol capture in macro expansions
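A small Racket illustration of hygiene (Racket is a Scheme descendant):

#lang racket
;; The `tmp` introduced by the macro cannot capture the user's `tmp`.
(define-syntax-rule (swap! a b)
  (let ([tmp a])
    (set! a b)
    (set! b tmp)))

(define tmp 1)     ; deliberately named like the macro's temporary
(define other 2)
(swap! tmp other)
(list tmp other)   ; '(2 1) -- hygiene kept the two `tmp`s apart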
A few things the other answers missed:
Common Lisp has vectors and hash tables as well. The difference is that Common Lisp uses #() syntax for vectors and has no literal syntax for hash tables. Scheme has vectors too, I believe.
Common Lisp has reader macros, which allow you to use new brackets (as does Racket, a descendant of Scheme).
Scheme and Clojure have hygienic macros, as opposed to Common Lisp's unhygienic ones
All of the languages are either modern or have extensive renovation projects. Common Lisp has gotten extensive libraries in the past five years (thanks mostly to Quicklisp), Scheme has some modern implementations (Racket, Chicken, Chez Scheme, etc.), and Clojure was created relatively recently
Common Lisp has a built-in OO system, though it's quite different from other OO systems you might have used. Notably, it is not enforced--you don't have to write OO code.
The languages have somewhat different design philosophies. Scheme was designed as a minimal dialect for understanding the Actor Model; it later became used for pedagogy. Common Lisp was designed to unify the myriad Lisp dialects that had sprung up. Clojure was designed for concurrency. As a result, Scheme has a reputation of being minimal and elegant, Common Lisp of being powerful and paradigm-agnostic (functional, OO, whatever), and Clojure of favoring functional programming.
Don't forget about Lisp-1 and Lisp-2 differences.
Scheme and Clojure are Lisp-1:
That means both variable and function names reside in the same namespace.
Common Lisp is Lisp-2:
Functions and variables have different namespaces (in fact, CL has many namespaces).
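A short Racket example of what being a Lisp-1 means in practice (the Common Lisp contrast is sketched in the comments):

#lang racket
;; Racket (like Scheme and Clojure) is a Lisp-1: one namespace, so a function
;; held in a variable can be called directly in operator position.
(define (apply-twice f x) (f (f x)))
(apply-twice add1 5)   ; 7

;; In Common Lisp (a Lisp-2) the same idea needs #' to get the function value
;; and FUNCALL to call it:  (defun apply-twice (f x) (funcall f (funcall f x)))
;;                          (apply-twice #'1+ 5)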
GIMP is scriptable in Scheme (Script-Fu) :)
In fact, a lot of software some folks think might be written in C++ was probably done under the Lisp umbrella; it's hard to pick the golden apples out of the bunch. The fact is C++ was not always popular; it only seems popular today because of a long history of updates. For a good part of its history C++ did not even have standard multithreading support; it was where Python is today, a cesspool of useless, untested, buggy glue code. Fast-forward a little and now we are seeing a rise in functional programming; it's more like adapt or die. I think Java has it right as far as the adapt part is concerned.
Scheme was designed to simplify the Lisp language; that was its only intent, except it never really caught on. I think Clojure does something similar: it's meant to simplify Scheme for the JVM, nothing more. It's just like every other JVM language, there to inflate the user experience, only to simplify writing boilerplate in Java land.
Now I'm generally in Java/C# (love both of them, can't really say I'm dedicated to one).
And I've recently been discussing the differences between F# and C# with a friend, when he surprised me saying: "So.. F# sounds a lot like lisp, but with way less 'Swiss-army knife' feel to it."
Now, I was partly ashamed to say this, but I had no idea what Lisp was.
After some searching, I saw that lisp is very interesting, but got stumped by the multiple dialects and running environments.
Here is what I know:
I know of 3 dialects:
Common Lisp (I have the Practical Common Lisp book in my bookmarks).
Scheme (a more "theoretical" version of CL)
Clojure. Seems to be a version of CL that runs on the JVM.
The basic idea of lisp seems to be about using code as data.
What I want to know:
What is the running environment for the different dialects? How do they work and how do they get installed? (By this I mean: is it a runtime like the Java Virtual Machine, does it require something else, or is it supported generally by the OS, as in compiled to native code?) And how do I get them (if something is to be gotten)?
Which is the better dialect to learn? (I want a dialect that isn't just a "learning language" but one you can fully use afterwards without regretting not learning some other one; for example, one should first learn C++ before trying out Visual C++, if you know what I mean.)
What are the main advantages of Lisp in general? (I've seen many pages saying it's faster in development and execution, but they were all pretty vague about the details.)
Can it be used for general purposes, or is it concentrated on AI? (By this I mean: could one, for example, make a full console app with it, and then use OpenGL just as easily and make a game? Learning a language specialized in something precise is worthwhile, but not for me at the moment.)
I would also be very happy about any additional details you guys can give me! (Links are appreciated too! E-Books and whatnot.)
Edit: all of the answers here were very useful. As such, I gave them all a +1 to rep, but chose the more concrete one as best. Thank you all.
I also learnt Java and C# intensively before coming to Lisp, so hopefully I can share some useful perspective.
Firstly, all Lisps are great and you should definitely consider learning one. There's a famous quote by Eric Raymond:
"Lisp is worth learning for the profound enlightenment experience you
will have when you finally get it; that experience will make you a
better programmer for the rest of your days, even if you never
actually use Lisp itself a lot."
Reasons that Lisps are particularly interesting and powerful are:
Homoiconicity - in Lisp "code is data" - the language itself is written in Lisp data structures. In itself this is interesting, but where it gets really powerful is when you start using this for code generation and advanced macros. Some believe that this feature is a key reason why Lisp can help you be more productive than anyone else (short Paul Graham essay)
Interactive development at the REPL - a few other languages also have this, but it is particularly idiomatic and deep-rooted in Lisp culture. It's remarkably productive and liberating to develop while altering a live running program. Recent examples that caught my eye include music hacking with Overtone and editing a live game simulation.
Dynamic typing - opinion is more divided on whether this is an advantage or not (I'm personally neutral), but many people think that dynamically typed languages give you a productivity advantage, at least in terms of building things quickly. YMMV.
My personal recommendation for a Lisp to learn nowadays would be Clojure. Clojure has a few distinct advantages that make it stand out:
Modern language design - Clojure "refines" Lisp in a number of ways. For example, Clojure adds some new syntax for vectors [] and hashmaps {} in addition to lists (). Purists may disapprove, but I personally believe this kind of innovation makes the language much nicer to use and read.
Functional first and foremost - all the Lisps are good as functional languages, however Clojure takes it much further. All the standard library is written in terms of pure functions. All data structures are immutable. Mutable state is strictly limited. Lazy sequences (including infinite sequences) are supported. In some senses it feels a bit more like Haskell than the other Lisps.
Concurrency - Clojure has a unique approach to managing concurrency, supported by a very good STM implementation. Worth watching this excellent video for a much deeper explanation.
Runs on the JVM - whatever you think of Java, the JVM is a great platform with extremely good GC, JIT compilation, cross platform portability etc. This can be a barrier to entry for some, but anyone used to Java or C# should quickly feel at home.
Library ecosystem - since Clojure runs on the JVM, it can use Java libraries extremely easily. Calling a Java API from Clojure is trivial - it's just like any other function call with a syntax of (.methodName someObject arg1 arg2). With the availability of the huge Java library ecosystem (mostly open source) Clojure basically leapfrogs all the "niche" languages in terms of practical usefulness
In terms of applications, Clojure is designed to be a fully general-purpose language, so it can be used in any field - certainly not limited to AI. I know of people using it in startups, using it for big data processing, even writing games.
Finally on the performance point: you are basically always going to pay a slight performance penalty for using higher level language constructs. However Clojure in my experience is "close enough" to Java or C# that you won't notice the difference for general purpose development. It helps that Clojure is always compiled and that you can use optional type hints to get the performance benefits of static typing.
The flawed benchmarks (as of early 2012) put Clojure within a factor of 2-3 of the speed of statically typed languages like Java, Scala and C#, a little bit behind Common Lisp and a little bit ahead of Scheme (Racket).
Lisp, as you've discovered, is not one language; it's a family of languages that have certain features in common.
There are two primary dialects of Lisp: Common Lisp and Scheme. Each of those two dialects has many implementations, each with their own features. However, both Common Lisp and Scheme are standardized, and the standards define a certain baseline of features which you can expect any implementation to have.
Scheme is a minimalistic language with a very small standard library. It is used primarily by students and theoreticians. Common Lisp has many more language features and a much larger standard library, including a powerful object system, and has been used in large production systems.
Clojure is another minor, more recent dialect. If you want to understand Lisp, you're better off first learning either Common Lisp or Scheme.
My recommendation is to learn Scheme first; it's a purer expression of the ideas that Lisp is made of, and will help you understand the essence of the language. In many ways, Lisp is completely different from Java and other imperative languages; however, what you learn from it will make you a better programmer in those languages. You can easily learn Common Lisp after you know Scheme.
The advantage of Lisp is, simply put, that it's more powerful than other languages. All Lisp code is Lisp data and can be manipulated as such; this allows you to do really cool things with metaprogramming that simply can't be done in other languages, because they don't give you direct access to the data structures that comprise your code. (The reason Lisp can do this and they can't is intimately related to its strange-looking syntax. Every compiler or interpreter, after reading the source code, must translate it into abstract syntax trees. Unlike other languages, Lisp's syntax is a direct representation of the ASTs that Lisp code is translated into, so you know what those trees look like and can manipulate them directly.) The most commonly used metaprogramming feature is macros; Lisp macros can literally translate a bit of source code into anything you can program. You can't do that with, say, C macros.
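As a small illustration (in Racket, which is recommended below), here is a macro that rewrites its source forms into a control construct the base language does not provide:

#lang racket
;; The macro receives the unevaluated source forms and rewrites them
;; into new code, so you can grow your own control structures.
(define-syntax-rule (while condition body ...)
  (let loop ()
    (when condition
      body ...
      (loop))))

(define n 0)
(while (< n 3)
  (printf "n = ~a\n" n)
  (set! n (add1 n)))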
The "faster in development and execution" thing may have been a reference to one specific feature which most Lisp implementations provide: the read-eval-print loop. You can type an expression into a prompt and the interpreter will evaluate it and print the result. This is wonderful both for learning the language and for debugging or otherwise investigating code.
Lisp is dynamically typed (though statically typed flavors do exist). Most implementations of Lisp run on their own virtual machine; however, many can also be compiled to machine code. Clojure was written specifically to target the JVM; it can also target .NET and JavaScript.
Though originally created for AI research, Lisp is by no means exclusively for AI. The main reason why it's not more popular in mainstream production environments (apart from the self-perpetuating dominance of Java and C#) is library support. Common Lisp has many good libraries out there (Scheme less so), but it pales in comparison to the vast amount of library support available for Java or Python.
If you want to get started, I recommend downloading Racket, a highly popular implementation of Scheme. It has everything you need, including a simple-but-very-powerful IDE with a read-eval-print loop, right out of the box. Though originally developed as a teaching language, it comes with a very large standard library more characteristic of Common Lisp than of Scheme. As a result, it's seeing use in real production environments.
Runtime Environments
Common Lisp and Scheme generally have their own unique runtime environments. There are some variants of Scheme (Chicken and Gambit) which can be translated to C and then linked with their runtimes so they can be deployed as stand-alone executable programs. Clojure runs on the JVM, and there is also a CLR port, but it's not clear to me that the CLR port is current with the JVM version. Clojure also has ClojureScript, which targets a JavaScript runtime.
Which is Better to Learn First
I don't think that question has a good answer. It's up to you. Although if you have experience with the JVM, Clojure might be a bit smoother to start with.
What is Better about Lisp
That's a question liable to start a flame war. I don't have much lisp experience. I started learning Clojure a few months ago in earnest, have looked at Common Lisp and Scheme on and off over the years.
What I like is their dynamic natures. You need to change a function at runtime while your program is running? No problem! Like any power tool, you have to be careful not to chop your bits off when using this.
The power and expressiveness are addictive too. I am able to do some things with little effort that I know I could not achieve in Java, or that I know would require a lot more work. Specifically, I was able to put together a description of a data structure and, through the use of macros, delay evaluation of parts of the data until the right time. If I had done that in Java, I would not have been able to nest the declarations like I did, because they would have evaluated in the wrong order. Pain would have ensued.
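Here is a rough Racket sketch of that kind of trick (the original was in Clojure; the structure and field names here are invented): a macro wraps each part of the description in a promise, so nothing runs at declaration time and declaration order stops mattering.

#lang racket
;; Delay parts of a nested description until they are actually needed.
(define-syntax-rule (lazy-record (name expr) ...)
  (list (cons 'name (delay expr)) ...))

(define config
  (lazy-record
   (data-source (begin (printf "connecting...\n") 'db-handle))
   (row-count   (begin (printf "counting...\n") 42))))

;; Only now, when we force a field, does its code run.
(force (cdr (assq 'row-count config)))   ; prints "counting..." then returns 42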
I also like Clojure's view of functional programming, although I have to say it requires work to adjust.
Is Lisp General Purpose
Yes.
--
Mark Volkmann has a really good article on Clojure. Many basics are there. One thing that I did in the beginning was to just fire up a REPL and experiment when I needed to figure something out programmatically, e.g. explore an API or do some calculations. After a short period of that I started building things up with increasing levels of effort, and I have a project that I'm working on right now that involves Clojure.
There isn't a bad book about Clojure that has been written. The Stuart Sierra book is being updated, and the O'Reilly book is about to come out soon, so you might want to wait. The Joy of Clojure is good, but I don't think it's a good starter book.
For Common Lisp, I highly recommend the Land of Lisp.
For Scheme, there are several classics including The Little Schemer and SICP.
Oh, and this: http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey (maybe one of the most important talks you'll ever watch), and this http://www.infoq.com/presentations/hickey-clojure (IIRC, really good intro to Clojure).
Common Lisp
Common Lisp is both compiled and interpreted. Deployments (on Windows) can be done as an exe with DLLs, as precompiled bytecode, or by installing a Lisp system on the target device and executing the source against it.
Common Lisp is a fully usable industrial language with an active community and libraries for many different tasks.
Lisps are generally faster for development and, due to their abstraction capabilities, better at developing higher-level concepts. It's hard to explain. Ruby vs. C is an example of this sort of thing. All Lisps carry this capacity, IMO.
Common Lisp is a general purpose language. I don't know offhand if modern Common Lisp implementations directly support executing assembly, so it may be difficult to write drivers or use compiler-unsupported CPU instructions.
I like Common Lisp, but Clojure and Racket are not to be sneezed at either. Clojure in particular represents a very interesting track, in my opinion.
For e-books, you can get On Lisp by Graham and Gentle Introduction to Symbolic Computation. Possibly others but those are the ones I can recall.
First a little background ...
In what follows, I use C, C++ and Java for coding (general) algorithms, not GUIs and fancy programs with interfaces, but simple command-line algorithms and libraries.
I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot, as they tend to reduce the complexity of bookkeeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java, and it felt cumbersome. I did not know C++ well enough to work in it without having to look up every single function, and so I quickly reverted to sticking to Java as much as possible.
I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrating too much attention on a much too high-level language and I needed more experience with how a CPU interacts with memory and what's really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date.
For obvious reasons, I could not use assembly language to code on a daily basis; it was mostly reserved for fun diversions. After learning more about the computer through this experience, I then realized that C++ is so much closer to the "level of 1's and 0's" than Java is, but I still felt it to be incredibly obtuse, like a Swiss Army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagement" to not abstract away what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the STL vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks almost entirely like C, with containers from C++ thrown in, the only feature I use from C++.
I'd like to know whether it's considered okay in practice to use just one feature of C++ and ignore the rest in favor of C-style code.
The short answer is, "This is not really the most effective way to use C++."
When used correctly, the strong type system, the ability to pass by reference, and idioms like RAII make C++ programs more likely to be correct, readable, and maintainable.
No one can stop you from using the language the way you want to. But you may be limiting yourself by not learning and leveraging actual C++ features.
If you write code that other people will have to read and maintain, they will probably appreciate the use of "real C++" instead of "C with classes" (in the words of a previous commenter).
Seems fine to me. That's the only part of C++ that I really use as well.
Right now, I'm writing a number cruncher. There's no polymorphism, no control delegation, no interaction. <iostream> was a bottleneck so I rewrote I/O in C.
The functions are mostly inside one class which represents a work thread. So that's not so much OO as having thread-local variables.
As well as vector, I use <algorithm> pretty heavily. But the heavy-duty data structures are written in plain C. Mainly circular singly-linked lists, which can't even easily have distinct begin() and end(), meaning not only containers but sequences (and for-loops) are off-limits. And then templates help the preprocessor generate the main inner loop.
The most natural way of solving your problem is probably right. You don't want solutions in search of a problem. Learning to use C++ is well and good, but object orientation is suited to some problems and not others.
On the other hand, using bsearch from stdlib.h in a C++ program would be wrong.
You should use C++ in whatever way makes the most sense for you.