Creating an OpenGL window from the Clojure REPL with lwjgl - opengl

I'm trying to use LWJGL with Clojure for game development.
My first step is to display something on an OpenGL screen from the REPL. After launching the REPL with lein repl, this is what I have done so far:
(import '(org.lwjgl.opengl GL11 Display DisplayMode))
(Display/setDisplayMode (DisplayMode. 800 600))
(Display/create) ; This shows a black 800x600 window as expected
(GL11/glClearColor 1.0 0.0 0.0 1.0)
(GL11/glClear (bit-or GL11/GL_COLOR_BUFFER_BIT GL11/GL_DEPTH_BUFFER_BIT))
(Display/update)
Note that this works, if done quickly enough. But after a while (even if I just wait) I start getting errors about the current OpenGL context not being bound to the current thread.
(Display/update)
IllegalStateException No context is current org.lwjgl.opengl.LinuxContextImplementation.swapBuffers (LinuxContextImplementation.java:72)
(GL11/glClear ...)
RuntimeException No OpenGL context found in the current thread. org.lwjgl.opengl.GLContext.getCapabilities (GLContext.java:124)
But maybe the most intriguing of the errors happens when I try to call Display/destroy:
(Display/destroy)
IllegalStateException From thread Thread[nREPL-worker-4,5,main]: Thread[nREPL-worker-0,5,] already has the context current org.lwjgl.opengl.ContextGL.checkAccess (ContextGL.java:184)
It all looks as if the REPL randomly spawned another thread after some time of inactivity. From what I've been able to read, LWJGL only lets you make OpenGL calls from the thread on which the context was originally created, so I bet this is what's causing those errors.
But how could the REPL be randomly switching threads? Especially when I'm not doing anything, just waiting.

It's a known issue, already reported against the nREPL project (and discussed on the Clojure Google Group). nREPL uses a thread pool that terminates idle threads (probably according to a keep-alive setting).
Until it's fixed you can use a workaround for this (a bit awkward, I admit):
(import '(java.util.concurrent Executors))
(def opengl-executor (Executors/newSingleThreadExecutor))
(defmacro on-executor [executor & body]
  `(.submit ~executor (fn [] ~@body)))
(on-executor opengl-executor
  (println (.getId (Thread/currentThread))))
By using your own executor, all code wrapped in on-executor will be executed on its thread. newSingleThreadExecutor creates one single thread which, according to the docs, is replaced only when the current one dies due to an exception. If you evaluate the last expression repeatedly, even with long delays in between, the printed thread ID should remain the same.
Remember that you should shut down the executor when stopping your application.
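A minimal teardown sketch (variable names match the snippet above; the 5-second wait is an arbitrary choice):

```clojure
(import '(java.util.concurrent Executors TimeUnit))

(def opengl-executor (Executors/newSingleThreadExecutor))

;; ... submit your OpenGL work here ...

;; Stop accepting new tasks, then give in-flight ones a chance to finish.
(.shutdown opengl-executor)
(.awaitTermination opengl-executor 5 TimeUnit/SECONDS)
```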

Related

Threads parked with HTTP-Kit

I have a few threads on the go, each of which make a blocking call to HTTP Kit. My code's been working but has recently taken to freezing after about 30 minutes. All of my threads are stuck at the following point:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
clojure.core$promise$reify__7005.deref(core.clj:6823)
clojure.core$deref.invokeStatic(core.clj:2228)
clojure.core$deref.invoke(core.clj:2214)
my_project.web$fetch.invokeStatic(web.clj:35)
Line 35 of my_project/web.clj is something like:
(let [result @(org.httpkit.client/get "http://example.com")]
(I'm using plain Java threads rather than core.async because I'm running in the context of a set of concurrent Apache Kafka clients, each in its own thread. The Kafka client does spin up a lot of its own threads, especially as I'm running several instances, e.g. 5, in parallel.)
The fact that all of my threads end up parked like this in HTTP Kit suggests a resource leak, some code in HTTP Kit dying before it has a chance to deliver, or perhaps resource starvation.
Another thread seems to be stuck here. It's possible that it's blocking all of the promise deliveries.
sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:850)
sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
org.httpkit.client.HttpsRequest.unwrapRead(HttpsRequest.java:35)
org.httpkit.client.HttpClient.doRead(HttpClient.java:131)
org.httpkit.client.HttpClient.run(HttpClient.java:377)
java.lang.Thread.run(Thread.java:748)
Any ideas what the problem could be, or pointers for how to diagnose it?
A common thing to do is to set up a DefaultUncaughtExceptionHandler.
This will at least give you an indication if there are exceptions in your threads.
(defn init-jvm-uncaught-exception-logging []
(Thread/setDefaultUncaughtExceptionHandler
(reify Thread$UncaughtExceptionHandler
(uncaughtException [_ thread ex]
(log/error ex "Uncaught exception on" (.getName thread))))))
Stuart Sierra has written nicely on this: https://stuartsierra.com/2015/05/27/clojure-uncaught-exceptions
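As a complementary safeguard, you can deref the promise with a timeout and a sentinel value, so a never-delivered response parks the thread for a bounded time instead of forever. A sketch using a plain promise as a stand-in for the one http-kit returns:

```clojure
;; deref with timeout: waits at most 5000 ms, then returns ::timed-out.
;; Here p is a plain promise standing in for (org.httpkit.client/get ...).
(let [p      (promise)
      result (deref p 5000 ::timed-out)]
  (if (= ::timed-out result)
    (println "request timed out")
    (println "got" result)))
```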

Are transducers blocking in core.async?

When using transducers on channels, does the execution of that transducer block the main thread?
For example (chan 1 long-running-trans)
Would this code delay the main thread until the execution is done?
Creating a channel doesn't per se do anything on the main execution thread, as the transducer only comes into effect once you put things onto the channel.
When you do, there are different consequences from a thread occupation point of view, depending on whether you're running over the JVM or on a JS runtime:
JVM
In the following block:
(let [pipe (chan 1 long-running-trans)]
(go
(>! pipe "stuff"))
(go
(let [stuff (<! pipe)]
(println stuff))))
Whatever code appears within the go blocks will be executed on a dedicated thread pool. As such, neither the put nor the take will keep your main thread busy.
If you were to use the blocking versions of the channel operators (>!! or <!!), then you are explicitly asking the runtime to perform them on the calling thread, and the transducer will hog that thread every time you put (>!!) an element onto the channel.
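A small sketch of that blocking behaviour (the 200 ms sleep is purely illustrative):

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!!]])

;; A transducer whose step function takes ~200 ms per element.
(def slow-inc (map (fn [x] (Thread/sleep 200) (inc x))))

(def c (chan 1 slow-inc))

;; With the blocking put, the transducer step runs on the calling thread,
;; so this expression takes roughly 200 ms to return.
(>!! c 1)

(<!! c) ; => 2
```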
JS
When you run on a JS engine, you have only one thread of execution and the non-blocking version of the channel operators (>! and <!). Thus, your transducers will indeed affect your main and only thread. Execution would then follow the normal rules of the JS event loop.

How to best shut down a clojure core.async pipeline of processes

I have a Clojure processing app that is a pipeline of channels. Each processing step does its computations asynchronously (i.e. makes an HTTP request using http-kit or something) and puts its result on the output channel. This way the next step can read from that channel and do its computation.
My main function looks like this:
(defn -main [args]
(-> file/tmp-dir
(schedule/scheduler)
(search/searcher)
(process/resultprocessor)
(buy/buyer)
(report/reporter)))
Currently, the scheduler step drives the pipeline (it hasn't got an input channel), and provides the chain with workload.
When I run this in the REPL:
(-main "some args")
It basically runs forever because the scheduler never stops. What is the best way to change this architecture so that I can shut down the whole system from the REPL? Does closing each channel mean the system terminates?
Would some broadcast channel help?
You could have your scheduler alts! / alts!! on a kill channel and the input channel of your pipeline:
(def kill-channel (async/chan))
(defn scheduler [input output-ch kill-ch]
  (loop []
    (let [[v p] (async/alts!! [kill-ch [output-ch (preprocess input)]]
                              :priority true)]
      (if-not (= p kill-ch)
        (recur)))))
Putting a value on kill-channel will then terminate the loop.
Technically you could also use output-ch to control the process (puts to closed channels return false), but I normally find explicit kill channels cleaner, at least for top-level pipelines.
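A self-contained sketch of the pattern (the worker loop here just ticks on a timeout instead of doing real scheduling):

```clojure
(require '[clojure.core.async :as async])

(def kill-ch (async/chan))

;; async/thread runs the loop on its own thread and returns a channel
;; that receives the body's result when the loop exits.
(def done
  (async/thread
    (loop []
      (let [[_ p] (async/alts!! [kill-ch (async/timeout 50)]
                                :priority true)]
        (when-not (= p kill-ch)
          (recur))))
    :stopped))

;; From the REPL: either put a value on the kill channel, or simply
;; close it -- a closed channel also wins the alts!!.
(async/close! kill-ch)
(async/<!! done) ; => :stopped
```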
To make things simultaneously more elegant and more convenient to use (both at the REPL and in production), you could use Stuart Sierra's component library: start the scheduler loop on a separate thread and assoc the kill channel onto your component in the component's start method, then close! the kill channel (thereby terminating the loop) in its stop method.
I would suggest using something like https://github.com/stuartsierra/component to handle system setup. It lets you easily start and stop your system in the REPL. Using that library, you would set things up so that each processing step is a component, and each component handles setup and teardown of its channels in its start and stop methods. You could also create an IStream protocol for each component to implement and have each component depend on components implementing that protocol. This buys you some very easy modularity.
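Sketched as a component (this assumes com.stuartsierra.component on the classpath; the Scheduler name and the worker loop are illustrative, not the asker's real code):

```clojure
(require '[clojure.core.async :as async]
         '[com.stuartsierra.component :as component])

(defn run-scheduler-loop
  "Illustrative worker: ticks until kill-ch receives a value or is closed."
  [kill-ch]
  (loop []
    (let [[_ p] (async/alts!! [kill-ch (async/timeout 50)] :priority true)]
      (when-not (= p kill-ch)
        (recur)))))

(defrecord Scheduler [kill-ch]
  component/Lifecycle
  (start [this]
    (let [kill-ch (async/chan)]
      (async/thread (run-scheduler-loop kill-ch))
      (assoc this :kill-ch kill-ch)))
  (stop [this]
    (when kill-ch
      (async/close! kill-ch)) ; closed channel wins the alts!!, loop exits
    (assoc this :kill-ch nil)))
```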
You'd end up with a system that looks like the following:
(component/system-map
:scheduler (schedule/new-scheduler file/tmp-dir)
:searcher (component/using (search/searcher)
{:in :scheduler})
:processor (component/using (process/resultprocessor)
{:in :searcher})
:buyer (component/using (buy/buyer)
{:in :processor})
:report (component/using (report/reporter)
{:in :buyer}))
One nice thing about this sort of approach is that you can easily add components that rely on a channel as well. For example, if each component creates its out channel using a tap on an internal mult, you could add a logger for the processor just by adding a logging component that takes the processor as a dependency:
:processor (component/using (process/resultprocessor)
{:in :searcher})
:processor-logger (component/using (log/logger)
{:in :processor})
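The mult/tap wiring mentioned above could look roughly like this (names are illustrative; taps are given small buffers so puts don't park while a consumer is slow):

```clojure
(require '[clojure.core.async :as async])

;; The processor writes to one internal channel...
(def processor-out (async/chan))

;; ...which is wrapped in a mult so several consumers can listen in.
(def processor-mult (async/mult processor-out))

;; Each downstream component taps its own copy of the stream.
(def buyer-in  (async/tap processor-mult (async/chan 1)))
(def logger-in (async/tap processor-mult (async/chan 1)))

;; Every value put on processor-out is delivered to both taps.
(async/>!! processor-out 42)
```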
I'd recommend watching his talk as well to get an idea of how it works.
You should consider using Stuart Sierra's reloaded workflow, which depends on modelling your 'pipeline' elements as components. That way you can treat your logical singletons as 'classes', meaning you can control the construction and destruction (start/stop) logic for each of them.

Clojure agent's send function is blocking

(def queue-agent (agent (clojure.lang.PersistentQueue/EMPTY)))
(send queue-agent conj "some data for the queue")
(println "test output")
If I run this code, after a couple (!) of seconds the console will output test output, and then nothing happens (the program does not terminate). I've checked a couple of sources, and they all say that send is asynchronous and should return immediately to the calling thread. So what's wrong here? Why is the program not terminating? Is there something wrong with me? Or with my environment?
So you have two issues: long startup time, and the program does not exit.
Startup: Clojure does not do any tree shaking. When you run a Clojure program, you load and bootstrap the compiler, and initialize namespaces, on every run. A couple of seconds sounds about right for a bare bones Clojure program.
Hanging: If you use the agent thread pool, you must call shutdown-agents if you want the program to exit. The VM simply doesn't know it is safe to shut those threads down.
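Putting both points together, a script version of the snippet that exits promptly might look like this (the await call is an optional addition, used here to make the output deterministic):

```clojure
(def queue-agent (agent clojure.lang.PersistentQueue/EMPTY))

(send queue-agent conj "some data for the queue")

(await queue-agent)            ; optional: block until pending sends have run
(println "queue:" (seq @queue-agent))
(println "test output")

(shutdown-agents)              ; release the agent thread pool so the JVM can exit
```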

In clojure, sh is stuck

I am trying to use sh from clojure.java.shell. In the REPL, it works fine but from a script, it gets stuck.
(ns tutorial.shell
(:use clojure.java.shell))
(println (:out (sh "ls")))
What should I fix?
The problem is that sh uses futures, and Clojure programs which use futures or agents hang around for a while before quitting when they have nothing more to do, because the underlying thread pool's worker threads are non-daemon and linger after use.
To get around this, add
(shutdown-agents)
at the end of your script, which terminates that piece of machinery. (So it does more than the name promises in that futures are also affected.)
Note that this cannot be undone and therefore should not be used at the REPL.
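The corrected script in full:

```clojure
(ns tutorial.shell
  (:use clojure.java.shell))

(println (:out (sh "ls")))

(shutdown-agents) ; sh uses futures; this lets the process exit immediately
```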