Idiomatic way to add error bars to plot in Incanter - clojure

I'm creating a plot of a robot's belief of its distance to a landmark. The x-axis is the number of measurements, and the y-axis is the distance to the landmark, which should include error bars to indicate the confidence in the estimate.
I haven't been able to find a good way to add error bars to the plot based on a value for the variance. Currently I'm creating a box-plot at each measurement by generating sample data around the mean using my value for the variance. This is clearly not ideal: it is computationally inefficient, and it is an imprecise representation of the information I'm trying to display.
Any ideas for how to do this? Ideally it would be on an xy-plot, and it could be done without having to resort to JFreeChart commands.

I think I have something pretty close. First let's create some random data to graph:
(def y (for [i (range 20)] (rand-int 100)))
user> y
(11 14 41 33 25 71 52 34 83 90 80 35 81 63 94 69 97 92 4 91)
Now create a plot. You can use xy-plot but I like the look of scatter-plot better.
(def plot (scatter-plot (range 20) y))
(view plot)
That gives me the following plot
Now we have to define a function that takes a point (x,y) and returns a vector of the lower and upper bounds of the error bar. I'll use a simplistic one that just calculates 5% above and below the y value.
(defn calc-error-bars [x y]
  (let [delta (* y 0.05)]
    [(- y delta) (+ y delta)]))
Now we just map that function over the set of data using the add-lines function like this...
(map #(add-lines plot [%1 %1] (calc-error-bars %1 %2)) (range 20) y)
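One caveat: map is lazy, so outside the REPL (where printing the result forces it) the side effects may never run. doseq is the idiomatic choice for side-effecting iteration; a sketch of the equivalent:
(doseq [[x yi] (map vector (range 20) y)]
  ;; vertical segment from lower to upper bound at each x
  (add-lines plot [x x] (calc-error-bars x yi)))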
Either way, that gives us this plot:
The main problem is that all the bars are different colors. I'm not sure if there is a way around this without using JFreeChart calls. Hopefully, someone will see this and tell me how to fix it. Anyway, that's pretty close.
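Update: one possible workaround, though it does drop to JFreeChart. Each add-lines call appears to install its own renderer on the underlying XYPlot, so you can recolor them after the fact. A sketch, assuming index 0 holds the original scatter data (this may differ across Incanter versions):
(import 'java.awt.Color)
(let [xyplot (.getPlot plot)]
  ;; renderers 1..n were added by the add-lines calls above
  (doseq [i (range 1 (.getRendererCount xyplot))]
    (.setSeriesPaint (.getRenderer xyplot i) 0 Color/blue)))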

Related

How to code a probability map in OpenCV?

I hope I am posting in the correct forum.
I just want to sound out my ideas and approach to solving this problem. I would welcome any pointers or help (code would definitely be ideal :))
Problem:
I want to compute a probability distribution over a 400 x 400 map in order to find the spatial location (x, y) of a line (call it fL), based on the probabilities in the map.
From prior processing I have a nearly horizontal line cue (call it lC) to use in calculating the probability of fL. fL is estimated to lie at a distance D from this line cue. My task is to calculate this probability map.
Approach:
1) I would take the probability distribution to be Gaussian:
P(fL | point) = exp( -(x - D)^2 / sigma^2 )
which gives the probability of the line fL at a pixel lying at distance x from a point on the line cue lC; it peaks where x = D, and sigma defines how fast the probability decreases away from D.
2) I would use a LineIterator to find every pixel that lies on the line cue lC (given that I know the start and end points of the line). Say I get n pixels on this line.
3) For every pixel in the 400 x 400 image, I would calculate the probability as described in 1) for each of the n line points, and sum up each line point's contribution.
4) After finishing the calculation for every pixel in the 400 x 400 image, I would normalize the probabilities by the largest pixel probability value. I am unsure whether I should instead normalize by the sum of all pixel probabilities or by the maximum as just described.
5) After this I would multiply this probability map with other probability maps, giving
P(fL | Cuefromthisline, Cuefromsomeother, ...) = P(fL | Cuefromthisline) * P(fL | Cuefromsomeother) * ...
and I would set pixels with near-zero probability to 0.001.
6) That outlines my approach.
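To make steps 1) to 4) concrete, here is a minimal sketch of the computation in Clojure (the question targets OpenCV, but the math is language-agnostic; line-points stands in for the n pixels from the LineIterator step, and all names here are hypothetical):
(defn dist [[x1 y1] [x2 y2]]
  (Math/sqrt (+ (Math/pow (- x1 x2) 2) (Math/pow (- y1 y2) 2))))

;; Step 3: sum the Gaussian contribution of every line-cue pixel.
(defn pixel-prob [px line-points D sigma]
  (reduce + (for [lp line-points
                  :let [x (dist px lp)]]
              (Math/exp (- (/ (Math/pow (- x D) 2)
                              (* sigma sigma)))))))

;; Steps 3-4 over the whole w x h map, normalized by the maximum value.
(defn prob-map [w h line-points D sigma]
  (let [raw (for [y (range h), x (range w)]
              (pixel-prob [x y] line-points D sigma))
        mx  (apply max raw)]
    (mapv #(/ % mx) raw)))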
Question
1) Is this workable? Or is there a better method of getting the probability map?
2) How do I normalize the map: by the sum of all pixel probabilities, or by the maximum value?
Thanks in advance for reading this long post.

What does lighter color mean in tensorboard?

I attached an image of my distributions in TensorBoard. I can see some very light colors in the graph. It looks very noisy. What is this?
These curves indicate percentiles: the light-colored curves are the maximum (99th percentile) and the minimum (1st percentile).
By definition, a percentile is a value below which a certain percentage of the values fall.
For example, in the following figure, at around step 1000 the 93rd percentile line shows that 93% of the values were below 0.200.
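The same definition in code (a rough sketch in Clojure for illustration, not TensorBoard internals; exact percentile conventions vary slightly between implementations):
(defn percentile [p xs]
  (let [sorted (vec (sort xs))
        idx    (int (Math/floor (* (/ p 100.0) (dec (count sorted)))))]
    (nth sorted idx)))

(percentile 93 (range 100))  ;; => 92, i.e. roughly 93% of the values lie below it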
Why (99th, 93rd, 84th, 69th, 50th, 31st, 16th, 7th, 1st) and not other percentiles? Because these are the ones declared in the TensorBoard README.md, which is documented here.
Hope this helps!

Performance of pixel transformation using clojure and quil

Let's assume I'd like to write an OCR algorithm. As a first step, I want to create a binary image. Using Clojure and Quil I came up with:
(defn setup []
  (load-pixels)
  (let [pxls (pixels)
        ]
    (letfn [(pxl-over-threshold? [idx] (if (> (red (aget pxls idx)) 128) true false))
            ]
      (time (dotimes [idx 25500] (aset pxls idx (color (rem idx 255)))))
      (time (dotimes [idx 25500] (if (pxl-over-threshold? idx)
                                   (aset pxls idx (color 255))
                                   (aset pxls idx (color 0)))))))
  (update-pixels))
(defn draw [])
(defsketch example
:title "image demo"
:setup setup
:draw draw
:size [255 100]
:renderer :p2d)
;"Elapsed time: 1570.58932 msecs"
;"Elapsed time: 2781.334345 msecs"
The code generates a grayscale image and then iterates over all the pixels to set them to black or white. It performs the requested behavior, but takes about 4.3 seconds to get there (on a 1.3 GHz dual core). I don't have a reference to put the 4.3 seconds in context, but thinking of processing a larger image, this must become incredibly slow.
Am I doing something terribly wrong, or is there a way to speed things up? Is the combination of Clojure and Quil even capable of doing pixel transformations faster, or should I choose a different language/environment?
Please also let me know if I'm doing something weird in the code. I'm still new to Clojure.
Thanks in advance.
The timings you've taken aren't particularly meaningful, because the code isn't warm yet. You need to "warm up" the code so that the JVM will JIT-compile it; that's when you should start seeing good speed. Take a look at How to benchmark functions in Clojure? (short answer: use Criterium).
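For example (a sketch assuming Criterium is on your classpath; my-pixel-transform is a hypothetical stand-in for your pixel loop):
(require '[criterium.core :refer [quick-bench]])

;; quick-bench runs warm-up iterations before measuring, so the JIT kicks in first
(quick-bench (my-pixel-transform))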
As for your code, you're using arrays, so that should give you good performance. Style-wise, the two hanging ] you have are really weird; maybe that's just a formatting error? It's usually good to eliminate as much duplicated code as possible, so I'd also change this
(if (pxl-over-threshold? idx)
  (aset pxls idx (color 255))
  (aset pxls idx (color 0)))
to this
(aset pxls idx (color (if (pxl-over-threshold? idx) 255 0)))
If you feel that looks too confusing/complex (I'm right on the edge as to whether I'd consider it too hard to read), you could write it either of these ways instead:
(let [c (if (pxl-over-threshold? idx) 255 0)]
  (aset pxls idx (color c)))
(->> (if (pxl-over-threshold? idx) 255 0) color (aset pxls idx))

Passing options to the draw function declaration in Quil and Clojure

I am a beginner in using Quil and Clojure and am attempting to draw some rectangles from some existing data structures. If I define the draw function to take a structure of some kind how do I pass the structure to draw using defsketch?
(defn draw [x]
  (stroke 80)
  (stroke-weight 3)
  (fill 23 181 100)
  (rect x x x x))
(defn create-sketch
  "Automatically uses the given setup and draw functions. Magic."
  []
  (defsketch example
    :setup setup
    :draw draw
    :size [2000 2000]))
In the code above (taken from one of Quil's examples) I can define draw to take a parameter x, which it then uses. But I can't figure out how to pass in a value for that parameter when defining the sketch. The :draw draw declaration as it stands works for a function with an empty parameter list; I have tried every way I can think of to pass it some x value. I'm not knowledgeable enough about what the problem actually is to be able to fix it.
Quil's draw takes no arguments. The partial trick shown in the comments works if you always draw the same list of rectangles.
If you want an animation, draw must access mutable state (e.g. an atom).
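A minimal sketch of both options (rects and draw-rects are hypothetical names):
;; Option 1: partial, for a fixed list of rectangles.
(defn draw-rects [rects]
  (stroke 80)
  (stroke-weight 3)
  (fill 23 181 100)
  (doseq [[x y w h] rects]
    (rect x y w h)))

(defsketch example
  :setup setup
  :draw (partial draw-rects [[10 10 50 50] [100 100 30 30]])
  :size [2000 2000])

;; Option 2: an atom, when the rectangles change over time.
(def rects (atom [[10 10 50 50]]))

(defn draw []
  (draw-rects @rects))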

My neural net learns sin x but not cos x

I have built my own neural net, and I have a weird problem with it.
The net is a quite simple feed-forward 1-N-1 net with backpropagation learning. Sigmoid is used as the activation function.
My training set is generated from random values in [-PI, PI] and their [0, 1]-scaled sine values (this is because the "sigmoid net" produces only values in [0, 1], while the unscaled sine function produces values in [-1, 1]).
With that training set, and the net set to 1-10-1 with a learning rate of 0.5, everything works great and the net learns the sine function as it should. BUT... if I do everything in exactly the same way for the COSINE function, the net won't learn it. Not with any hidden layer size or learning rate.
Any ideas? Am I missing something?
EDIT: My problem seems to be similar to what can be seen with this applet. It doesn't seem to learn the sine function unless something "easier" is taught to the weights first (like 1400 cycles of a quadratic function). All the other settings in the applet can be left at their initial values. So in the case of sine or cosine, it seems that the weights need some boosting in at least partially the right direction before a solution can be found. Why is this?
I'm struggling to see how this could work.
You have, as far as I can see, 1 input, N nodes in 1 layer, then 1 output. So there is no difference between any of the nodes in the hidden layer of the net. Suppose you have an input x and a set of weights w_i. Then the output node y will have the value:
y = Σ_i (w_i · x) = x · Σ_i w_i
So this is always linear.
In order for the nodes to be able to learn differently, they must be wired differently and/or have access to different inputs. So you could supply inputs of the value, the square root of the value (giving some effect of scale), etc., and wire different hidden-layer nodes to different inputs; I suspect you'll need at least one more hidden layer anyway.
The neural net is not magic. It produces a set of specific weights for a weighted sum. Since you can derive a set of weights to approximate a sine or cosine function, that should inform your idea of what inputs the neural net will need in order to have some chance of succeeding.
An explicit example: the Taylor series of the exponential function is:
exp(x) = 1 + x/1! + x^2/2! + x^3/3! + x^4/4! ...
So if you supplied 6 input nodes with 1, x, x^2, etc., then a neural net that fed each input to one corresponding node, multiplied it by its weight, and then fed all those outputs to the output node would be capable of the 6-term Taylor expansion of the exponential:
 in     hid  out

 1   -- h0 --\
 x   -- h1 ---\
 x^2 -- h2 ----\
 x^3 -- h3 ----- y
 x^4 -- h4 ----/
 x^5 -- h5 ---/
Not much of a neural net, but you get the point.
Further down the Wikipedia page on Taylor series, there are expansions for sin and cos, which are given in terms of odd powers of x and even powers of x respectively (think about it: sin is odd, cos is even, and yes, it is that straightforward). So if you supply all the powers of x, I would guess that the sin and cos versions will look pretty similar, with alternating zero weights (sin: 0, 1, 0, -1/6, ...; cos: 1, 0, -1/2, ...).
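To make that concrete, here is the 6-term idea as a plain weighted sum in Clojure, with fixed "weights" equal to the Taylor coefficients of sin (a hand-constructed illustration, not a trained net):
(defn taylor-sin [x]
  (let [inputs  (map #(Math/pow x %) (range 6))    ; 1, x, x^2, ..., x^5
        weights [0 1 0 (/ -1 6.0) 0 (/ 1 120.0)]]  ; 0, 1, 0, -1/3!, 0, 1/5!
    (reduce + (map * inputs weights))))

(taylor-sin 0.5)  ;; => ~0.4794, close to (Math/sin 0.5)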
You can always compute sine and then compute cosine externally, but I take your concern here to be why the neural net is not learning the cosine function when it can learn the sine function. Assuming this artifact is not caused by your code, I would suggest the following:
It definitely looks like an error in the learning algorithm, possibly because of your starting point. Try starting with weights that give the correct result for the first input, and then march forward.
Check whether there is heavy bias in your learning (more positive than negative).
Since cosine is sine of (90 degrees minus the angle), you could find the weights for sine and then recompute the weights for cosine in one step.
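The identity behind that last point, in code:
(defn cos-via-sin [x]
  (Math/sin (- (/ Math/PI 2) x)))

(cos-via-sin 1.0)  ;; => ~0.5403, same as (Math/cos 1.0)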