I want to use Overtone purely for sending data to MIDI instruments. Is there a quick way to load Overtone without SuperCollider support? I figured out that MIDI support can be added to a program by using overtone.studio.midi, but I'm also interested in loading all the helpers that make working with data that represents music easier. Figuring out which files to load and which ones to exclude is a time-consuming task, hence the question.
Nope, all of Overtone relies on SuperCollider; you could do some hacks, but it would be very painful. I would recommend checking out Pink by Steven Yi. It is implemented in Clojure on top of Java Sound, and you can connect Clojure to MIDI devices through Java Sound.
https://github.com/kunstmusik/pink
You can use overtone.core and get at a lot of the studio functionality without actually connecting to the server. You can't definst or defsynth or do anything else that would trigger OSC communication to the SC server, but you have full access to Overtone's own OSC facilities: you can make listeners and handlers. You also have access to the MIDI subsystem and the event system.
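For instance, here is a quick sketch of an OSC listener, assuming the osc-clj helpers that overtone.core exposes (osc-server, osc-handle); the port and path are just examples:

;; a server and a handler, no SuperCollider involved
(def osc-srv (osc-server 9800 "example-osc"))

(osc-handle osc-srv "/ping"
            (fn [msg]
              (println "got OSC message:" (:path msg) (:args msg))))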
You should be able to do everything you want to with overtone.core. All of the following code will work without running (connect-external-server) or any of the other related functions:
(ns beatboxchad-live.midi
  (:require [overtone.core :refer :all]
            [beatboxchad-live.sooperlooper]))

(def fcb (midi-mk-full-device-key (midi-find-connected-device "mio")))

(def overtone-osc (osc-server 9960 "osc-overtone"))

;; `engine` is the OSC client for SooperLooper, defined elsewhere in the project
(defn loop-setting [loop-index setting value]
  (osc-send engine
            (format "/sl/%s/set" loop-index)
            setting
            value))
(def loop-ops
  {0 {:action "record"  :hit false}
   1 {:action "overdub" :hit false}
   2 {:action "trigger" :hit true}
   3 {:action "pause"   :hit true}
   4 {:action "reverse" :hit true}})
(on-event (conj fcb :note-on)
          (fn [e]
            (let [note       (:note e)
                  loop-index (int (/ note 10))
                  cmd        (mod note 10)
                  loop-op    (if (:hit (get loop-ops cmd))
                               "hit"
                               "down")]
              (beatboxchad-live.sooperlooper/loop-op
                loop-index
                (:action (get loop-ops cmd))
                loop-op)))
          ::fcb-note-on)
(on-event (conj fcb :note-off)
          (fn [e]
            (let [note       (:note e)
                  loop-index (int (/ note 10))
                  cmd        (mod note 10)]
              (when-not (:hit (get loop-ops cmd))
                (beatboxchad-live.sooperlooper/loop-op
                  loop-index
                  (:action (get loop-ops cmd))
                  "up"))))
          ::fcb-note-off)
This code controls SooperLooper over OSC based on MIDI from my Behringer FCB1010. It's super simple to send MIDI events out to a device, too. See: https://github.com/overtone/overtone/wiki/MIDI#sending-midi-messages
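For example, something along these lines should work (midi-out, midi-note-on, midi-note-off and midi-note come from the MIDI API that overtone.core pulls in; the device name "mio" is just the one from my setup):

(def synth-out (midi-out "mio"))

;; note-on / note-off by hand
(midi-note-on synth-out 60 100)
(midi-note-off synth-out 60)

;; or a note with an explicit duration in milliseconds
(midi-note synth-out 60 100 250)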
I want to figure out how best to create an async component, or accommodate async code in a Component-friendly way. This is the best I can come up with, and it just doesn't feel quite right.
The gist: take words, uppercase and reverse them, then print them.
Problem 1: I can't get the system to stop at the end. I expect to see a println from the individual c-chans stopping, but don't.
Problem 2: How do I properly inject dependencies into the producer/consumer fns? They're not components, and I think they shouldn't be components, since they have no sensible lifecycle.
Problem 3: How do I idiomatically handle the pipeline-creating side effects in a>b and b>c? Should a pipeline be a component?
(ns pipelines.core
  (:require [clojure.core.async :as async
             :refer [go >! <! chan pipeline-blocking close!]]
            [com.stuartsierra.component :as component]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; PIPELINES

(defn a>b [a> b>]
  (pipeline-blocking 4
                     b>
                     (map clojure.string/upper-case)
                     a>))

(defn b>c [b> c>]
  (pipeline-blocking 4
                     c>
                     (map (comp (partial apply str)
                                reverse))
                     b>))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; PRODUCER / CONSUMER

(defn producer [a>]
  (doseq [word ["apple" "banana" "carrot"]]
    (go (>! a> word))))

(defn consumer [c>]
  (go (while true
        (println "Your Word Is: " (<! c>)))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; SYSTEM

(defn pipeline-system [config-options]
  (let [c-chan (reify component/Lifecycle
                 (start [this]
                   (println "starting chan: " this)
                   (chan 1))
                 (stop [this]
                   (println "stopping chan: " this)
                   (close! this)))]
    (-> (component/system-map
          :a> c-chan
          :b> c-chan
          :c> c-chan)
        (component/using {}))))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; RUN IT!
(def system (atom nil))

(let [_ (reset! system (component/start (pipeline-system {})))
      _ (a>b (:a> @system) (:b> @system))
      _ (b>c (:b> @system) (:c> @system))
      _ (producer (:a> @system))
      _ (consumer (:c> @system))
      _ (component/stop @system)])
EDIT:
I started thinking about the following, but I'm not quite sure if it's closing properly...
(extend-protocol component/Lifecycle
  clojure.core.async.impl.channels.ManyToManyChannel
  (start [this]
    this)
  (stop [this]
    (close! this)))
I rewrote your example a little to make it reloadable:
Reloadable Pipeline
(ns pipeline
  (:require [clojure.core.async :as ca :refer [>! <!]]
            [clojure.string :as s]))

(defn upverse [from to]
  (ca/pipeline-blocking 4
                        to
                        (map (comp s/upper-case
                                   s/reverse))
                        from))

(defn produce [ch xs]
  (doseq [word xs]
    (ca/go (>! ch word))))

(defn consume [ch]
  (ca/go-loop []
    (when-let [word (<! ch)]
      (println "your word is:" word)
      (recur))))

(defn start-engine []
  (let [[from to] [(ca/chan) (ca/chan)]]
    (upverse to from)
    (consume from)
    {:stop    (fn []
                (ca/close! to)
                (ca/close! from)
                (println "engine is stopped"))
     :process (partial produce to)}))
This way you can just call (start-engine) and use the returned map to process word sequences:
REPL time
boot.user=> (require '[pipeline])
boot.user=> (def engine (pipeline/start-engine))
#'boot.user/engine
running with it
boot.user=> ((engine :process) ["apple" "banana" "carrot"])
your word is: TORRAC
your word is: ANANAB
your word is: ELPPA
boot.user=> ((engine :process) ["do" "what" "makes" "sense"])
your word is: OD
your word is: SEKAM
your word is: ESNES
your word is: TAHW
stopping it
boot.user=> ((:stop engine))
engine is stopped
;; engine would not process anymore
boot.user=> ((engine :process) ["apple" "banana" "carrot"])
nil
State Management
Depending on how you intend to use this pipeline, a state management framework like Component might not be needed at all: there is no need to add anything "just in case", and starting and stopping the pipeline here is just a matter of calling two functions.
However, in case this pipeline is used within a larger app with more state, you could definitely benefit from a state management library.
I am not a fan of Component, primarily because it requires a full app buy-in (which makes it a framework), but I do respect other people using it.
mount
I would recommend not using anything specific if the app is small: you could, for example, compose this pipeline with other pipelines/logic and kick it off from -main. But if the app is any bigger and has more unrelated state, here is all you need to do to add mount to it:
;; assumes (:require [mount.core :as mount :refer [defstate]])
(defstate engine :start (start-engine)
                 :stop ((:stop engine)))
starting pipeline
boot.user=> (mount/start)
{:started ["#'pipeline/engine"]}
running with it
boot.user=> ((engine :process) ["do" "what" "makes" "sense"])
your word is: OD
your word is: SEKAM
your word is: ESNES
your word is: TAHW
stopping it
boot.user=> (mount/stop)
engine is stopped
{:stopped ["#'pipeline/engine"]}
Here is a gist with a full example that includes build.boot.
You can just download and play with it via boot repl
[EDIT]: to answer the comments
In case you are already hooked on Component, this should get you started:
(defrecord WordEngine []
  component/Lifecycle
  (start [component]
    (merge component (start-engine)))
  (stop [component]
    ((:stop component))
    (assoc component :process nil :stop nil)))
On start, this creates a WordEngine object that has a :process function.
You won't be able to call it as you would a normal Clojure function (i.e. from the REPL or from any namespace just by :requireing it), unless you pass a reference to the whole system around, which is not recommended.
So in order to call it, this WordEngine needs to be plugged into a Component system and injected into yet another component, which can then destructure the :process function and call it.
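For example, a minimal sketch of that wiring (the WordFeeder component and the word list are hypothetical, just to show the injection):

(defrecord WordFeeder [word-engine]
  component/Lifecycle
  (start [component]
    ;; call the injected engine's :process fn once the system is started
    ((:process word-engine) ["apple" "banana" "carrot"])
    component)
  (stop [component]
    component))

(def system
  (component/system-map
    :word-engine (->WordEngine)
    :word-feeder (component/using (map->WordFeeder {}) [:word-engine])))

;; (component/start system) starts the engine and feeds it the words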
Consider a dataset like this:
(def data [{:url "http://www.url1.com" :type :a}
           {:url "http://www.url2.com" :type :a}
           {:url "http://www.url3.com" :type :a}
           {:url "http://www.url4.com" :type :b}])
The contents of those URLs should be requested in parallel. Depending on an item's :type value, its contents should be parsed by the corresponding function. The parsing functions return collections, which should be concatenated once all the responses have arrived.
So let's assume that there are functions parse-a and parse-b, which both return a collection of strings when they are passed a string containing HTML content.
It looks like core.async could be a good tool for this. One could either have a separate channel for each item or one single channel; I'm not sure which way would be preferable here. With several channels one could use transducers for the post-processing/parsing. There is also a special promise-chan which might be appropriate here.
Here is a code sketch; I'm using a callback-based http-kit function. Unfortunately, I could not find a more generic solution inside the go block.
(defn f [data]
  (let [chans    (map (fn [{:keys [url type]}]
                        (let [c (promise-chan (map ({:a parse-a :b parse-b} type)))]
                          (http/get url {} #(put! c %))
                          c))
                      data)
        result-c (promise-chan)]
    (go (put! result-c (concat (<! (nth chans 0))
                               (<! (nth chans 1))
                               (<! (nth chans 2))
                               (<! (nth chans 3)))))
    result-c))
The result can be read like so:
(go (prn (<! (f data))))
I'd say that promise-chan does more harm than good here. The problem is that most of the core.async API (a/merge, a/reduce, etc.) relies on the fact that channels close at some point; promise-chans, in turn, never close.
So, if sticking with core.async is crucial for you, the better solution is not to use promise-chan, but an ordinary channel instead, which is closed after the first put!:
...
(let [c (chan 1 (map ({:a parse-a :b parse-b} type)))]
  (http/get url {} #(do (put! c %) (close! c)))
  c)
...
At this point, you're working with closed channels and things become a bit simpler. To collect all values you could do something like this:
;; (go (put! result-c (concat (<! (nth chans 0))
;;                            (<! (nth chans 1))
;;                            (<! (nth chans 2))
;;                            (<! (nth chans 3)))))
;; instead of the above, you can now do this:
(->> chans
     async/merge
     (async/reduce into []))
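Putting both changes together, a full version of f might look like this (a sketch; it assumes the same http/get, parse-a and parse-b as above, plus [clojure.core.async :as async :refer [chan put! close!]]):

(defn f [data]
  (let [chans (mapv (fn [{:keys [url type]}]
                      ;; mapv forces all requests to start immediately
                      (let [c (chan 1 (map ({:a parse-a :b parse-b} type)))]
                        (http/get url {} #(do (put! c %) (close! c)))
                        c))
                    data)]
    ;; returns a channel that yields the concatenation of all parsed results
    (->> chans
         async/merge
         (async/reduce into []))))

;; reading the result stays the same:
;; (go (prn (<! (f data))))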
UPD (the following are my personal opinions):
It seems that using core.async channels as promises (either in the form of promise-chan or a channel that closes after a single put!) is not the best approach. When things grow, it turns out that the core.async API overall is (as you may have noticed) not as pleasant as it could be. There are also several constructs it does not support, which may force you to write less idiomatic code than you'd like. In addition, there is no built-in error handling (if an error occurs within a go block, the block silently returns nil), and to address this you'll need to come up with something of your own (reinvent the wheel). Therefore, if you need promises, I'd recommend using a dedicated library for that, for example manifold or promesa.
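For illustration, a rough sketch of the same fan-out/fan-in with promesa (assuming promesa's p/create, p/all and p/then, and the same callback-based http/get and parse fns as above):

(require '[promesa.core :as p])

(defn fetch-one [{:keys [url type]}]
  ;; wrap the callback-based request in a promise
  (p/create
    (fn [resolve _reject]
      (http/get url {} #(resolve (({:a parse-a :b parse-b} type) %))))))

(defn fetch-all [data]
  ;; p/all waits for every request, p/then concatenates the parsed collections
  (p/then (p/all (map fetch-one data))
          #(apply concat %)))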
I wanted this functionality as well, because I really like core.async but I also wanted to use it in certain places like traditional JavaScript promises. I came up with a solution using macros. In the code below, <? is the same thing as <! except that it throws if the taken value is an error. The <<? macro behaves like Promise.all() in that it returns a vector of all the values returned by the channels if they all succeed; otherwise it returns the first error (since <? will cause it to throw that value).
(defmacro <<? [chans]
  `(let [res# (atom [])]
     (doseq [c# ~chans]
       (swap! res# conj (serverless.core.async/<? c#)))
     @res#))
If you'd like to see the full context of the function it's located on GitHub. It's heavily inspired from David Nolen's blog post.
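For reference, <? is typically defined along these lines (a sketch of the usual pattern, not the author's exact serverless.core.async version; JVM Clojure shown):

(defmacro <? [ch]
  ;; take from the channel; re-throw if the value is an error
  `(let [v# (clojure.core.async/<! ~ch)]
     (if (instance? Throwable v#)
       (throw v#)
       v#)))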
Use pipeline-async from core.async to launch asynchronous operations like http/get concurrently while delivering the results in the same order as the input:
(let [result (chan)]
  (pipeline-async
    20 result
    (fn [{:keys [url type]} ch]
      (let [parse    ({:a parse-a :b parse-b} type)
            ;; put! the parsed result, then close ch from the put! callback
            callback #(put! ch (parse %) (fn [_] (close! ch)))]
        (http/get url {} callback)))
    (to-chan data))
  result)
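If the snippet above is wrapped in a function, say fetch-all (a hypothetical name), the per-URL collections can be concatenated and read like this:

;; blocking take on the JVM (needs <!! in the require); use (go (<! ...)) in ClojureScript
(<!! (async/reduce into [] (fetch-all data)))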
If anyone is still looking at this, adding on to the answer by @OlegTheCat: you can use a separate channel for errors.
(:require [cljs.core.async :as async]
          [cljs-http.client :as http])
(:require-macros [cljs.core.async.macros :refer [go]])

(go (as-> [(http/post <url1> <params1>)
           (http/post <url2> <params2>)
           ...]
          chans
      (async/merge chans (count chans))
      (async/reduce conj [] chans)
      (async/<! chans)
      (<callback> chans)))
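For completeness, one way to actually get that separate error channel (a sketch; it assumes cljs-http responses carry a :status key):

(defn split-errors [responses-ch]
  ;; returns [ok-ch err-ch]: successful responses go to the first channel,
  ;; everything else to the second
  (async/split #(<= 200 (:status %) 299) responses-ch))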
I am implementing an app using Stuart Sierra's Component. As he states in the README:
Having a coherent way to set up and tear down all the state associated with an application enables rapid development cycles without restarting the JVM. It can also make unit tests faster and more independent, since the cost of creating and starting a system is low enough that every test can create a new instance of the system.
What would be the preferred strategy here? Something similar to JUnit's oneTimeSetUp/oneTimeTearDown, or really between each test (similar to setUp/tearDown)?
And if between each test, is there a simple way to start/stop a system for all tests (before and after) without repeating the code every time?
Edit: sample code to show what I mean:
(defn test-component-lifecycle [f]
  (println "Setting up test-system")
  (let [s (system/new-test-system)]
    (f s) ;; I cannot pass an argument here ( https://github.com/clojure/clojure/blob/master/src/clj/clojure/test.clj#L718 ), so how can I pass a system in parameters of a test?
    (println "Stopping test-system")
    (component/stop s)))
(use-fixtures :once test-component-lifecycle)
Note : I am talking about unit-testing here.
I would write a macro which takes a system map, starts all components before running the tests, and stops all components after testing.
For example:
(ns de.hh.new-test
  (:require [clojure.test :refer :all]
            [com.stuartsierra.component :as component]))

;;; Macro to start and stop components around a test body
(defmacro with-started-components [bindings & body]
  `(let [~(bindings 0) (component/start ~(bindings 1))]
     (try
       (let* ~(destructure (vec (drop 2 bindings)))
         ~@body)
       (catch Exception e1#)
       (finally
         (component/stop ~(bindings 0))))))

;; Test Component
(defprotocol Action
  (do-it [self]))

(defrecord TestComponent [state]
  component/Lifecycle
  (start [self]
    (println "====> start")
    (assoc self :state (atom state)))
  (stop [self]
    (println "====> stop"))

  Action
  (do-it [self]
    (println "====> do action")
    @(:state self)))

;; TEST
(deftest ^:focused component-test
  (with-started-components
    [system (component/system-map :test-component (->TestComponent "startup-state"))
     test-component (:test-component system)]
    (is (= "startup-state" (do-it test-component)))))
Running the test, you should see output like this:
====> start
====> do action
====> stop
Ran 1 tests containing 1 assertions.
0 failures, 0 errors.
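Alternatively (not part of the macro approach above, just a common pattern): keep the started system in a dynamic var bound by a :once fixture, so each test can refer to it without the fixture having to pass arguments:

(def ^:dynamic *system* nil)

(defn with-test-system [f]
  (let [s (component/start (system/new-test-system))]
    (binding [*system* s]
      (try
        (f)
        (finally
          (component/stop s))))))

(use-fixtures :once with-test-system)

;; inside a test: (:some-component *system*)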
The following code just blocks execution. It seems that map does not return a lazy sequence for some reason, and I am not sure why.
(ns testt.consumer
  (:require
    ;; internal
    ;; external
    [clojure.walk :refer [stringify-keys]]
    [clojure.pprint :as pprint])
  (:import
    [kafka.consumer ConsumerConfig Consumer KafkaStream]
    [kafka.javaapi.consumer ConsumerConnector]
    [kafka.message MessageAndMetadata]
    [java.util Properties])
  (:gen-class))

;; internal

(defn hashmap-to-properties
  [h]
  (doto (Properties.)
    (.putAll (stringify-keys h))))

;; external

(defn consumer-connector
  [h]
  (let [config (ConsumerConfig. (hashmap-to-properties h))]
    (Consumer/createJavaConsumerConnector config)))

(defn message-stream
  [^ConsumerConnector consumer topic thread-pool-size]
  ;; this deals only with the first stream; needs to be fixed to support multiple streams
  (let [stream (first (.get (.createMessageStreams consumer {topic thread-pool-size}) topic))]
    (map #(.message %) (iterate (.next (.iterator ^KafkaStream stream))))))
The configuration the connector expects:
{ :zookeeper.connect "10.0.0.1:2181"
:group.id "test-0"
:thread.pool.size "1"
:topic "test_topic"
:zookeeper.session.timeout.ms "1000"
:zookeeper.sync.time.ms "200"
:auto.commit.interval.ms "1000"
:auto.offset.reset "smallest"
:auto.commit.enable "true" }
Actually, there is no need for Kafka's iterator, because Clojure provides a better way of dealing with streams. You can treat a Kafka stream as a sequence and just do the following:
(doseq [^kafka.message.MessageAndMetadata message stream]
  (do-some-stuff message))
This is probably the most efficient way of doing that.
More of the code is here:
https://github.com/l1x/shovel
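For example, message-stream from the question could be replaced by something like this (a sketch; consume-topic and handle-message are hypothetical names, and it still only handles the first stream):

(defn consume-topic
  [^ConsumerConnector consumer topic thread-pool-size handle-message]
  (let [stream (first (.get (.createMessageStreams consumer {topic thread-pool-size}) topic))]
    ;; KafkaStream is Iterable, so doseq walks it directly, blocking for new messages
    (doseq [^kafka.message.MessageAndMetadata message stream]
      (handle-message (.message message)))))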
I am following the example here: http://patternhatch.com/2013/06/12/messaging-using-clojure-and-zeromq/
I have verified that I can serialize MarketData and have built the protobuf for it.
Instead of using Cheshire serialization, I decided to try my newly learned protobuf serialization knowledge. I modified the functions in that example into their protobuf versions, and when I run
(future-call market-data-publisher-gpb)
it seems OK. However, when I run the client
(get-market-data-gpb 100)
nothing happens. I have two questions:
1) Is there some sort of graphical or other debugger for Clojure?
2) Can someone point me in the right direction as to what I am doing wrong in my modified example?
I seem to remember that sending a [protobuf] binary data payload over ZMQ required a different set of calls?
(ns clj-zmq.core
  (:import [org.jeromq ZMQ]))

(use 'flatland.protobuf.core)
(import com.example.Example$MarketData)

(def MarketData (protodef Example$MarketData))

(def ctx (ZMQ/context 1))

;; Server
(defn market-data-publisher-gpb
  []
  (let [s (.socket ctx ZMQ/PUB)
        market-data-event (fn []
                            {:symbol (rand-nth ["CAT" "UTX"])
                             :size   (rand-int 1000)
                             :price  (format "%.2f" (rand 50.0))})]
    (.bind s "tcp://127.0.0.1:6666")
    (while true
      (.send s (protobuf-dump (market-data-event))))))

;; Client
(defn get-market-data-gpb
  [num-events]
  (let [s (.socket ctx ZMQ/SUB)]
    (.subscribe s "")
    (.connect s "tcp://127.0.0.1:6666")
    (dotimes [_ num-events]
      (println (protobuf-load MarketData (.recv s))))
    (.close s)))
Both Eclipse Counterclockwise and IntelliJ Cursive have Clojure debug support.
Also, your address looks bad; it should be "tcp://127.0.0.1:6666".