How to play a note using a custom sample in overtone? - clojure

How can one play a note in overtone using a custom sample?
For example, you can play a note using the predefined piano sample, like (piano (note :C4)), but how can I do the same with a custom sample that I loaded using sample or load-sample?
In other words: let's say I have (def my-piano (load-sample "/path/to/my/piano_sample.wav")) and want to use it instead of the predefined piano instrument.
My understanding is that I need to define a new instrument that takes either a note or a frequency as an argument. The question is how to define such an instrument: neither scaled-play-buf nor play-buf takes a frequency as a parameter.
I've seen an example - 'how to define a custom instrument in overtone' here - and it looks like I should have a separate sample per note. Is that correct?

Found an answer (sort of): the :rate parameter of scaled-play-buf can be used to achieve the desired effect (it is best used in combination with other parameters, especially if you want to play multiple octaves with your instrument):
;; define sample and instrument; rate is the key here
(def piano (sample "~/Music/Samples/mypiano.wav"))

(definst i-piano
  [note 60 level 1 rate 1 loop? 0
   attack 0 decay 1 sustain 1 release 0.1 curve -4 gate 1]
  (let [env (env-gen (adsr attack decay sustain release level curve)
                     :gate gate
                     :action FREE)]
    ;; multiply by the envelope so it actually shapes (and levels) the output
    (* env (scaled-play-buf 1 piano :rate rate :loop loop? :action FREE))))
;; try it
(i-piano :rate 1) ; original note
(i-piano :rate 1.2)
(i-piano :rate 0.7)
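To drive the instrument by note rather than by raw rate, the rate can be derived from a MIDI note number with the equal-temperament relation 2^((note - base)/12). A minimal plain-Clojure sketch (note->rate is a hypothetical helper, not part of Overtone; it assumes the sample was recorded at middle C, MIDI note 60):

```clojure
;; Hypothetical helper: convert a MIDI note number to a playback rate,
;; assuming the sample was recorded at middle C (MIDI note 60).
;; Each semitone multiplies the rate by the twelfth root of two.
(defn note->rate [note]
  (Math/pow 2 (/ (- note 60) 12.0)))

(note->rate 60) ;=> 1.0  (original pitch)
(note->rate 72) ;=> 2.0  (one octave up)
```

One could then call something like (i-piano :rate (note->rate 72)) to play the sample an octave above its recorded pitch.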

Related

Why does calling code as a function take longer than calling it directly in Clozure Common Lisp?

I have around 900000 RECORDS:
(defparameter RECORDS
  '((293847 "john menk" "john.menk#example.com" 0123456789 2300 2760 "CHEQUE" 012345 "menk freeway" "high rose")
    (244841 "january agami" "j.a#example.com" 0123456789 2300 2760 "CHEQUE" 012345 "ishikawa street" "fremont apartments")
    ...))
(These are read from a file. The above code is provided only as an example. It helps show the internal structure of this data.)
For quick prototyping I use aliased names for selectors:
(defmacro alias (new-name existing-name)
  "Alias NEW-NAME to EXISTING-NAME. EXISTING-NAME has to be a function."
  `(setf (fdefinition ',new-name) #',existing-name))
(progn
  (alias account-number first)
  (alias full-name second)
  (alias email third)
  (alias mobile fourth)
  (alias average-paid fifth)
  (alias highest-paid sixth)
  (alias usual-payment-mode seventh)
  (alias pincode eighth)
  (alias road ninth)
  (alias building tenth))
Now I run:
(time (loop for field in '(full-name email)
            append (loop for record in RECORDS
                         when (cl-ppcre:scan ".*?january.*?agami.*?"
                                             (funcall (symbol-function field) record))
                         collect record)))
The REPL outputs:
...
took 1,714 milliseconds (1.714 seconds) to run.
During that period, and with 4 available CPU cores,
1,698 milliseconds (1.698 seconds) were spent in user mode
9 milliseconds (0.009 seconds) were spent in system mode
40 bytes of memory allocated.
...
Define a function doing the same thing:
(defun searchx (regex &rest fields)
  (loop for field in fields
        append (loop for record in RECORDS
                     when (cl-ppcre:scan regex (funcall (symbol-function field) record))
                     collect record)))
And then call it:
(time (searchx ".*?january.*?agami.*?" 'full-name 'email))
The output:
...
took 123,389 milliseconds (123.389 seconds) to run.
992 milliseconds ( 0.992 seconds, 0.80%) of which was spent in GC.
During that period, and with 4 available CPU cores,
118,732 milliseconds (118.732 seconds) were spent in user mode
4,569 milliseconds ( 4.569 seconds) were spent in system mode
2,970,867,648 bytes of memory allocated.
501 minor page faults, 0 major page faults, 0 swaps.
...
It's almost 70 times slower?!
I thought maybe it was a computer-specific issue, so I ran the same code on two different machines: a MacBook Air and a MacBook Pro. The individual timings vary, but the behaviour is consistent: on both machines, calling the code as a function takes much longer than calling it directly. Surely the overhead of a single function call should not be that large.
Then I thought Clozure CL might be responsible, so I ran the same code in SBCL, and even there the behaviour is similar. The difference isn't as big, but it's still pretty big: about 22 times slower.
SBCL output when running direct:
Evaluation took:
1.519 seconds of real time
1.477893 seconds of total run time (0.996071 user, 0.481822 system)
97.30% CPU
12 lambdas converted
2,583,290,520 processor cycles
492,536 bytes consed
SBCL output when running as a function:
Evaluation took:
33.522 seconds of real time
33.472137 seconds of total run time (33.145166 user, 0.326971 system)
[ Run times consist of 0.254 seconds GC time, and 33.219 seconds non-GC time. ]
99.85% CPU
56,989,918,442 processor cycles
2,999,581,336 bytes consed
Why is calling the code as a function so much slower? And how do I fix it?
The difference is probably due to the regular expression.
Here the regex is a literal string:
(cl-ppcre:scan ".*?january.*?agami.*?"
               (funcall (symbol-function field) record))
The cl-ppcre:scan function has a compiler macro that detects this case and generates a (load-time-value (create-scanner ...)) expression (since the string cannot possibly depend on runtime values, this is acceptable).
The compiler macro is presumably applied in your direct test too, in which case the load-time-value form is executed only once.
In the following code, however, the regular expression is a runtime value, obtained as an input of the function:
(defun searchx (regex &rest fields)
  (loop for field in fields
        append (loop for record in RECORDS
                     when (cl-ppcre:scan regex (funcall (symbol-function field) record))
                     collect record)))
In that case, the scanner object is built when scan is evaluated, i.e. once for every record on every loop iteration.
In order to test this hypothesis, you may want to do the following:
(defun searchx (regex &rest fields)
  (loop
    with scanner = (cl-ppcre:create-scanner regex)
    for field in fields
    append (loop for record in RECORDS
                 when (cl-ppcre:scan scanner (funcall (symbol-function field) record))
                 collect record)))
Alternatively, do not change the function but give it a scanner:
(time (searchx (cl-ppcre:create-scanner ".*?january.*?agami.*?")
               'full-name
               'email))

Watermark trigger in Onyx does not fire

I have an Onyx stream of segments that are messages with a timestamp, arriving in chronological order. Say they look like this:
{:id 1 :timestamp "2018-09-04 13:15:42" :msg "Hello, World!"}
{:id 2 :timestamp "2018-09-04 21:32:03" :msg "Lorem ipsum"}
{:id 3 :timestamp "2018-09-05 03:01:52" :msg "Dolor sit amet"}
{:id 4 :timestamp "2018-09-05 09:28:16" :msg "Consetetur sadipscing"}
{:id 5 :timestamp "2018-09-05 12:45:33" :msg "Elitr sed diam"}
{:id 6 :timestamp "2018-09-06 08:14:29" :msg "Nonumy eirmod"}
...
For each time window (of one day) in the data, I want to run a computation on the set of all its segments. I.e., in the example, I would want to operate on the segments with ids 1 and 2 (for Sept 4th), next on the ids 3, 4 and 5 (for Sept 5th), and so on.
Onyx offers windows and triggers, and they should do what I want out of the box. If I use a window of :window/type :fixed and aggregate over :window/range [1 :day] with respect to :window/window-key :timestamp, I will aggregate all segments of each day.
To only trigger my computations when all segments of a day have arrived, Onyx offers the trigger behaviour :onyx.triggers/watermark. According to the documentation, it should fire
if the value of :window/window-key in the segment exceeds the upper-bound in the extent of an active window
However, the trigger does not fire, even though I can see that later segments are already coming in and several windows should be full. As a sanity check, I tried a simple :onyx.triggers/segment trigger, which worked as expected.
My failed attempt at creating a minimal example:
I modified the fixed windows toy job to test watermark triggering, and it worked there.
However, I found out that the reason the watermark trigger fires in this toy job might be a different one:
Did it close the input channel? Maybe the job just completed, which can trigger the watermark too.
Another aspect that interacts with watermark triggering is the distributed work on tasks by peers.
The comments to issue #839 (:trigger/emit not working with :onyx.triggers/watermark) in the Onyx repo pointed me to issue #840 (Watermark doesn't work with Kafka topic having > 1 partition), where I found this clue (emphasis mine):
The problem is that all of your data is ending up on one partition, and the watermarks always takes the minimum watermark over all of the input peers (and if using the native kafka watermarks, the minimum watermark for a given peer).
As you call g/send with small amounts of data, and auto partition assignment, all of your data is ending up on one partition, meaning that the other partition's peer continues emitting a watermark of 0.
I found out that:
It’s impossible to use it with the current watermark trigger, which relies on the input source. You could try to pull the previous watermark implementation [...]
In my task graph, however, the segments I want to aggregate in windows are only created in some intermediate task; they don't originate from the input task as such. The input segments only provide that intermediate task with information on how to create/retrieve the content of the segments.
Again, this construct works fine in the above-mentioned toy job. The reason is that the input channel is closed at some point, which ends the job, which in turn triggers the watermark. So my toy example is actually not a good model, because it is not an open-ended stream.
If a job does get the segments in question from an actual input source, but without timestamps, Onyx provides room to specify an assign-watermark-fn, an optional attribute of an input task that sets the watermark on each arrival of a new segment. In my case, this does not help, since the segments do not originate from an input task.
I came up with a work-around myself now. The documentation basically gives a clue how that can be done:
This is a shortcut function for a punctuation trigger that fires when any piece of data has a time-based window key that is above another extent, effectively declaring that no more data for earlier windows will be arriving.
So I changed the task that emits the segments so that for every segment an additional "sentinel"-like segment is emitted as well:
[{:id 1 :timestamp "2018-09-04 13:15:42" :msg "Hello, World!"}
 {:timestamp "2018-09-03 13:15:42" :over :out}]
Note that the :timestamp is backdated by the window range (here, one day), so the sentinel is assigned to the previous window. Since my data comes in chronologically, a :punctuation trigger can tell from the presence of a "sentinel" segment (with the keyword :over) that the window can be closed. Don't forget to evict (i.e., :trigger/post-evictor [:all]) and to throw away the "sentinel" segment from the final window. Adding :onyx/max-peers 1 to the task map makes sure that a sentinel always arrives eventually, especially when using grouping.
Note that two assumptions go into this work-around:
The data comes in chronological order.
There are no windows without segments.
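For reference, the trigger map for this work-around might look roughly like the following sketch. This is only an illustration of the shape of a punctuation trigger, not code from the original post: the names :daily-window, ::sentinel? and ::close-window! are made up, and the exact keys and predicate arity should be checked against the Onyx trigger documentation for your version.

```clojure
;; Sketch only: a punctuation trigger that fires when a "sentinel"
;; segment (carrying the :over key) shows up in a window.
;; :daily-window, ::sentinel? and ::close-window! are assumed names.
(def triggers
  [{:trigger/window-id :daily-window
    :trigger/id :sentinel-trigger
    :trigger/on :onyx.triggers/punctuation
    :trigger/pred ::sentinel?         ; fires when this returns true
    :trigger/post-evictor [:all]      ; evict the window after firing
    :trigger/sync ::close-window!}])  ; run the per-window computation

(defn sentinel?
  "Predicate for the punctuation trigger (arity per the Onyx docs)."
  [trigger state-event]
  (contains? (:segment state-event) :over))
```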

Clojure core.async in core.test

I have some core.async code with a pipeline of two chans and three nodes:
a producer - a function that puts values into chan1 with >!! (it's not in a go-block, but the function is called from inside a go-loop)
a filter - another function that's not in a go-block but is called within a go-loop; it pulls items from chan1 (with <!!), does a test, and if the test passes pushes them onto chan2 (with >!!)
a consumer - an ordinary loop that pulls n values off chan2 with <!!
This code works as expected when I run it as a simple program. But when I copy and paste it to work within a unit-test, it freezes up.
My test code is roughly
(deftest a-test
  (testing "blah"
    (is (= (let [c1 (chan)
                 c2 (chan)
                 gen (make-generator c1)
                 filt (make-filter c1 c2)
                 result (collector c2 10)]
             result)
           [0 2 4 6 8 10 12 14 16 18 20]))))
where the generator creates a sequence of integers counting up from zero and the filter tests for evenness.
As far as I can tell, the filter is able to pull the first value from c1, but is blocked waiting for a second value. Meanwhile, the generator is blocked waiting to push its next value into c1.
But this doesn't happen when I run the code in a simple stand-alone program.
So, is there any reason that the unit-test framework might be interfering or causing problems with the threading management that core.async is providing? Is it possible to do unit-testing on async code like this?
I'm concerned that I'm not running the collector in any kind of go-block or go-loop, so presumably it might be blocking the main thread. But equally, I presume I have to pull all the data back into the main thread eventually. And if not through that mechanism, how?
Using blocking IO within go-blocks/go-loops isn't a good idea; the thread macro may be a better fit here. It executes the passed body on a separate thread, so you may freely use blocking operations there.
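A minimal sketch of that approach. The three functions below are reconstructions under stated assumptions, not the poster's actual code: each blocking stage runs on its own thread via clojure.core.async/thread (the generator is bounded here so it can close its channel), and only the collector blocks the calling thread.

```clojure
(require '[clojure.core.async :refer [chan thread <!! >!! close!]])

;; Sketch: `thread` runs its body on a dedicated thread, so the
;; blocking >!! / <!! calls cannot starve the go-block thread pool.
(defn make-generator [out]
  (thread
    (doseq [i (range 100)]   ; bounded here; the original counts forever
      (>!! out i))
    (close! out)))

(defn make-filter [in out]
  (thread
    (loop []
      (when-some [v (<!! in)]        ; nil means `in` was closed
        (when (even? v)
          (>!! out v))
        (recur)))
    (close! out)))

(defn collector [in n]
  ;; Blocks the calling (test) thread until n values have arrived.
  (vec (repeatedly n #(<!! in))))
```

With this wiring, (collector c2 10) in the test returns the first ten even numbers. In a real test you would also want to close the channels (or bound the generator, as above) so no worker threads are left blocked after the assertion.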

clojure Riemann project collectd

I am trying to do an apparently simple custom configuration using Riemann and collectd. Basically, I'd like to calculate the ratio between two streams. In order to do that I tried something like the following (as suggested in the Riemann API project here):
(project [(service "cache-miss")
          (service "cache-all")]
  (smap folds/quotient
        (with :service "ratio"
          index)))
This apparently works, but after a while I noticed some of the results were miscalculated. After some log debugging, I ended up with the following configuration in order to see what's happening and print the values:
(project [(service "cache-miss")
          (service "cache-all")]
  (fn [[miss all]]
    (when-not (or (nil? miss) (nil? all))
      (where (= (:time miss) (:time all))
        ;; to print time marks
        (println (:time all))
        (println (:time miss))
        ;; to distinguish each event easily
        (println "NEW LINE")))))
My surprise is that each time I get new data from collectd (every 10 seconds) the function I created is executed twice, as if reusing previously unused data; moreover, it looks like it doesn't care at all about my time-equality constraint in the (where (= :time ...)) clause. The problem is that I am dividing metrics with different timestamps. Below is some output of the previous code:
1445606294
1445606294
NEW LINE -- First time I get data
1445606304
1445606294
NEW LINE
1445606304
1445606304
NEW LINE -- Second time I get data
1445606314
1445606304
NEW LINE
1445606314
1445606314
NEW LINE -- Third time I get data
Can anyone give a hint on how to get the data formatted as I expected? I assume there is something I am not understanding about the project function, or something related to how incoming data is processed in Riemann.
Thanks in advance!
Updated
I managed to solve my problem, although I still don't have a clear idea of how it works. Right now I am receiving two different streams from the collectd tail plugin (from nginx logs), and I managed to compute the quotient between them as follows:
(where (or (service "nginx/counter-cacheHit")
           (service "nginx/counter-cacheAll"))
  (coalesce
    (smap folds/quotient
          (with :service "cacheHit"
            (scale (* 1 100) index)))))
I have tested it widely and so far it produces the right results. However, I still don't understand several things. First, how is it that coalesce only returns data after both events are processed? collectd sends the events of both streams every two seconds with the same time mark. Using project instead of coalesce resulted in two executions of smap every two seconds (one for each event), whereas coalesce results in only one execution of smap, with the two events carrying the same time mark, which is exactly what I wanted.
Finally, I don't know what the criteria are for choosing which stream is the numerator and which the denominator. Is it the order of the clauses inside the where's or?
Anyway, there is some black magic behind it, but I managed to solve my problem ;^)
Thank you all!
Taking the ratio between streams that were moving at different rates didn't work out for me. I have since settled on calculating ratios and rates within a fixed or moving time interval. This way you are capturing a consistent snapshot of events in a time block and calculating over it. Here is some elided code that compares the rate at which a service receives events to the rate at which it forwards them:
(moving-time-window 30 ;; seconds
  (smap (fn [events]
          (let [in (->> events
                        (filter #(= (:service %) "event-received"))
                        count)
                out (->> events
                         (filter #(= (:service %) "event-sent"))
                         count)
                flow-rate (float (if (> in 0) (/ out in) 0))]
            {:service "flow rate"
             :metric flow-rate
             :host "All"
             :state (if (< flow-rate 0.99) "WARNING" "OK")
             :time (:time (last events))
             :ttl default-interval}))
        (tag ["some" "tags" "here"] index)
        (where (and (< (:metric event) 0.9)
                    (= (:environment event) "production"))
          (throttle 1 3600 send-to-slack))))
This takes in a window of events, calculates the ratio for that block, and emits an event containing that ratio as its metric. Then, if the metric is bad, it calls me on Slack.

in depth explanation of the side effects interface in clojure overtone generators

I am new to Overtone/SuperCollider. I know how sound is formed physically. However, I don't understand the magic inside Overtone's sound-generating functions.
Let's say I have a basic sound:
(definst sin-wave [freq 440 attack 0.01 sustain 0.4 release 0.1 vol 0.4]
  (* (env-gen (lin-env attack sustain release) 1 1 0 1 FREE)
     (+ (sin-osc freq)
        (sin-osc (* freq 2))
        (sin-osc (* freq 4)))
     vol))
I understand the ASR cycle of the sound envelope, the sine wave, the frequency, and the volume here; they describe the amplitude of the sound over time. What I don't understand is the time. Since time is absent from the inputs of all the functions here, how can I build things like echo and other cool effects into this?
If I were to write my own sin-osc function, how would I specify the amplitude of my sound at a specific point in time? Let's say my sin-osc must reach the peak amplitude of 1.0 at 1/4 of the cycle; what is the interface I can code against to control that?
Without knowing this, none of the sound-synth generators in Overtone make sense to me; they look like strange functions with unknown side effects.
Overtone does not specify the individual samples or shapes over time for each signal; it is really just an interface to the SuperCollider server (which defines a protocol for interaction; the SuperCollider language is the canonical client to this server, and Overtone is another). For that reason, all Overtone is doing behind the scenes is sending instructions for how to construct a synth graph to the SuperCollider server. The SuperCollider server is the thing that actually calculates which samples get sent to the DAC, based on the definitions of the synths that are playing at any given time. That is why you are given primitive synth elements like sine oscillators, square waves, and filters: these are invoked on the server to actually calculate the samples.
I got an answer from droidcore at #supercollider/Freenode IRC
d: time is really like wallclock time, it's just going by
d: the ugen knows how long each sample takes in terms of milliseconds, so it knows how much to advance its notion of time
d: so in an adsr, when you say you want an attack time of 1.0 seconds, it knows that it needs to take 44100 samples (say) to get there
d: the sampling rate is fixed and is global. it's set when you start the synthesis process
d: yeah well that's like doing a lookup in a sine wave table
d: they'll just repeatedly look up the next value in a table that represents one cycle of the wave, and then just circle around to the beginning when they get to the end
d: you can't really do sample-by sample logic from the SC side
d: Chuck will do that, though, if you want to experiment with it
d: time is global and it's implicit; it's available to all the oscillators all the time
d: but internally it's not really like a closed form, where you say "give me the sample for this time value"
d: you say "time has advanced 5 microseconds. give me the new value"
d: it's more like a stream
d: you don't need to have random access to the oscillators values, just the next one in time sequence
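The table-lookup scheme described above can be sketched in plain Clojure. This is an illustration of the idea, not Overtone or SuperCollider API: time never appears as an argument; each sample just advances an implicit phase by freq/sample-rate of a cycle.

```clojure
;; Illustration only: a wavetable oscillator, roughly as SuperCollider
;; ugens implement it. One table holds a single cycle of the wave;
;; the oscillator steps through it and wraps around at the end.
(def table-size 1024)

(def sine-table
  (mapv #(Math/sin (* 2 Math/PI (/ % table-size)))
        (range table-size)))

(defn osc-samples
  "Return n successive samples of a sine at freq Hz for a given sample rate."
  [freq sample-rate n]
  (let [step (* table-size (/ freq sample-rate))] ; table cells per sample
    (mapv (fn [i]
            ;; time is implicit in the sample index i; wrap with mod
            (nth sine-table (mod (long (* i step)) table-size)))
          (range n))))
```

For example, with freq 1 and sample-rate 1024 the oscillator steps through the table one cell per sample, so sample 256 (a quarter of the cycle) is the peak value 1.0, matching the question about where the peak amplitude comes from.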