How can I add some random jitter to the timing of notes in SuperCollider Patterns?

I'd like to add a very simple "humanisation" to some note patterns, by randomly jittering the time that the notes play. This code repeatedly plays a chord:
p = Pbind(\legato, 0.1, \dur, 0.2, \midinote, [66, 69, 74]).play
But I'd like each of the three notes in the chord to randomly have some independent "error" in the exact timing. I do not want the tempo to vary.

There's a key you can use called \timingOffset - described in section 8 of the pattern guide.
The simple example is:
p = Pbind(\legato, 0.1, \dur, 0.4, \midinote, [66, 69, 74],
\timingOffset, Pwhite(-0.1, 0.1)
).play;
so that's a random offset of +-0.1 beats. Unfortunately it applies one deviation to the whole chord, which is not what you want. So you need to feed an array into the Pwhite:
p = Pbind(\legato, 0.1, \dur, 0.4, \midinote, [66, 69, 74],
\timingOffset, Pwhite([-0.1, -0.1, -0.1], [0.1, 0.1, 0.1])
).play;
and of course you can derive those arrays from some value:
~jitter = 0.1;
p = Pbind(\legato, 0.1, \dur, 0.4, \midinote, [66, 69, 74],
\timingOffset, Pwhite({0-~jitter}.dup(3), {~jitter}.dup(3))
).play;
Here's a slightly different approach which makes the right number of values, in case the array size of "midinote" is variable:
~jitter = 0.1;
p = Pbind(\legato, 0.1, \dur, 0.4, \midinote, [66, 69, 74],
\timingOffset, Pcollect({|val| {~jitter.bilinrand}.dup(val.size)}, Pkey(\midinote))
).play;

Related

Combine the ratios of two different vectors in C++

I don't know how to explain this, so if this question already exists, just point me to the right topic. I have searched the forum but couldn't find an answer (maybe there is a keyword in mathematics or statistics for this that I am missing).
How do I produce a vector with combined weights from two vectors?
For example given two vectors:
vector_1 = {3, 3, 4}
vector_2 = {5, 5}
We calculate their weights, where each weight is the element divided by the sum of the elements in its vector.
weights_1 = {0.3, 0.3, 0.4}
weights_2 = {0.5, 0.5}
These are then combined to produce this vector. The combined weights are the combined ratios of the two vectors.
combined_weights = {0.3, 0.2, 0.1, 0.4}
Is there a function that can calculate the combined weights?
combined_weights = calculate(weights_1, weights_2)
The process is:
Step 1: combined_weights = {0.3}
0.3 is the first element of weights_1.
Step 2: combined_weights = {0.3, 0.2, 0.1}
The sum of 0.2 and 0.1 is the second element of weights_1, and the sum of 0.3 and 0.2 is the first element of weights_2.
Step 3: combined_weights = {0.3, 0.2, 0.1, 0.4}
From the combined_weights vector we can get both weights_1 and weights_2, i.e.
weights_1 = {0.3, 0.2 + 0.1, 0.4}
weights_2 = {0.3 + 0.2, 0.1 + 0.4}
My goal is to make vector_1 and vector_2 have the same size.
new_vector_1 = {3, 2, 1, 4}
new_vector_2 = {3, 2, 1, 4}
More generally, rather than dividing each weight by its total, you can find the LCM of your totals and multiply, which keeps you in integer arithmetic:
// std::accumulate and std::lcm (C++17) both live in <numeric>
int total1 = std::accumulate(weights1.begin(), weights1.end(), 0);
int total2 = std::accumulate(weights2.begin(), weights2.end(), 0);
int lcm = std::lcm(total1, total2);
We want to do destructive things below, so we may as well do that to the adjusted values:
std::deque<int> working1;
std::transform(weights1.begin(), weights1.end(), std::back_inserter(working1), [=](int w){ return w * lcm / total1; });
std::deque<int> working2;
std::transform(weights2.begin(), weights2.end(), std::back_inserter(working2), [=](int w){ return w * lcm / total2; });
Comparing the front elements, you pop the smaller, add it to the output (unless it is zero), and decrement the larger by that value. Repeat until both working copies are empty:
std::vector<int> combined;
while (!working1.empty() && !working2.empty())
{
    int & top1 = working1.front();
    int & top2 = working2.front();
    if (top1 < top2)
    {
        if (top1 > 0) { combined.push_back(top1); }
        top2 -= top1;
        working1.pop_front();
    }
    else
    {
        if (top2 > 0) { combined.push_back(top2); }
        top1 -= top2;
        working2.pop_front();
    }
}

General::ivar "list" not a valid variable

So I have this data which relates concentrations to a rate
cAo = {0.1, 0.2, 0.1, 0.2, 0.05, 0.2}(*M*);
cBo = {0.1, 0.1, 0.2, 0.2, 0.2, 0.05}(*M*);
r = {.0000246, .0000483, .0000501, .0001003, .0000239, .0000262}(*M/s*);
And I was also given an equation for the rate in the form of
k*(cAo^a)*(cBo^b)=r
So in order to find the constants a, b, and k, I used NonlinearModelFit
EQ1 = NonlinearModelFit[Data2, k*(cAo^a)*(cBo^b), {k, a, b}, {cAo, cBo}];
EQ1["BestFit"]
However for some reason I keep getting the error General::ivar: "{0.1,0.2,0.1,0.2,0.05,0.2} is not a valid variable." which does not make any sense to me.
Thanks in advance
One approach is something like this:
NMinimize[Total[(k*(cAo^a)*(cBo^b)-r)^2],{a,b,k}]
Note I can't actually try this at the moment. I suspect you might need to add constraints, e.g.:
NMinimize[{Total[(k*(cAo^a)*(cBo^b)-r)^2],{a>0,b>0,k>0}},{a,b,k}]

Random Forest - or other machine learning - with a different number of features

I am trying to compare a list of numbers with another list of lists to see how many of them match fairly closely. However each of my data sets could have a different length.
As an example, if I had a list of time spent studying, student 1 might have
1 - [ 10.0, 25.0, 15.7, 45.0]
and be compared against the list of other students that were
2 - [ 9.0, 30.0, 3.0]
3 - [ 26.0, 44.0]
4 - [ 5.0, 70.0, 90.0, 100.0]
5 - [ 9.0, 27.0, 13.7, 42.0, 56.0, 60.0, 75.0]
I would want the comparison to score student 1 vs student 5 highly, because there were 4 times that matched well, even though student 5 had extra times that student 1 didn't have; and I would want student 1 vs student 3 to score fairly well, because some of the numbers matched closely even though some did not.
I am just getting started with machine learning, and am only passingly familiar with Random Forests. Can you use them to do this type of comparison, or do they have to have the same parameters? Can you suggest a different method?
Effectively what I am looking for is an intersection of sets, with loose parameters. I would like to implement this in Python.
Thank you!
Normalization
Start by normalizing the data to the range 0 to 1. This can be done using the following formula:
Norm(e) = (e - Emin) / (Emax - Emin)
for each value e in each vector. (I don't know how to put math symbols in here or I would.)
So for example the first vector would become...
1 - [ 10.0, 25.0, 15.7, 45.0]
Norm(10.0) = (10.0 - 10.0) / (45.0 - 10.0) = 0.0
1 - [ 0.0, 25.0, 15.7, 45.0]
Norm(25.0) = (25.0 - 10.0) / 35.0 = 15/35 = 3/7 ~= 0.42857142
1 - [ 0.0, 0.42857142, 15.7, 45.0]
...
1 - [ 0.0, 0.42857142, 0.16285714, 1.0]
Do this for every vector and then calculate the mean squared error of each pair, padding the shorter vector with 0s where necessary. This should give you a pretty good scoring mechanism. If you need to, you can also split a 1.0 into two 0.5 entries.
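For reference, here is a minimal Python sketch of that min-max normalization (the function name is just illustrative, and it assumes max > min):
def min_max_normalize(values):
    # Norm(e) = (e - Emin) / (Emax - Emin), scaling every value into [0, 1]
    lo, hi = min(values), max(values)
    return [(e - lo) / (hi - lo) for e in values]

print(min_max_normalize([10.0, 25.0, 15.7, 45.0]))
# [0.0, 0.42857142..., 0.16285714..., 1.0]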
Mean squared error
You can calculate the mean squared error with
MSE = (1/n) * sum_i (Yhat_i - Y_i)^2
where n is the number of elements in each vector and Yhat, Y are the two vectors you are looking to get the MSE for.
In code, the function would look something like:
public double getMSE(double[] v1, double[] v2) {
    // the normalized values lie in [0, 1], so use doubles rather than longs
    double sum = 0.0;
    for (int i = 0; i < v1.length; i++) {
        sum += Math.pow(v1[i] - v2[i], 2);
    }
    return sum / v1.length;
}
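Since the question asks for Python, here is a rough, self-contained sketch of the whole scoring idea (normalize, zero-pad to equal length, then MSE); names like padded_mse are just for illustration:
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def padded_mse(a, b):
    # Lower value = more similar study-time profiles.
    a, b = normalize(a), normalize(b)
    n = max(len(a), len(b))
    a = np.pad(a, (0, n - len(a)))   # pad the shorter vector with zeros
    b = np.pad(b, (0, n - len(b)))
    return float(np.mean((a - b) ** 2))

print(padded_mse([10.0, 25.0, 15.7, 45.0],
                 [9.0, 27.0, 13.7, 42.0, 56.0, 60.0, 75.0]))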

Pyqtgraph clip line

I'm trying to plot a Smith chart in pyqtgraph. I would like to know if there is a method to clip
the ellipse items representing the imaginary circles against the real circle with radius = 1.
This is what I've done so far: I used the setStartAngle and setSpanAngle methods of
QGraphicsEllipseItem, but this way I also plot the vertical and horizontal lines of the circle.
There's a method in matplotlib called set_clip_path(); do you know if there is something like this in pyqtgraph?
import pyqtgraph as pg
plot = pg.plot()
plot.setAspectLocked()
plot.addLine(y=0)
#vector for real circle
rline = [0.2, 0.5, 1.0, 2.0, 5.0]
#vector for imaginary
xline = [0.2, 0.5, 1, 2, 5]
circle1 = pg.QtGui.QGraphicsEllipseItem(1, -1, -2, 2)
circle1.setPen(pg.mkPen(1))
plot.addItem(circle1)
for r in rline:
    raggio = 1./(1+r)
    circle = pg.QtGui.QGraphicsEllipseItem(1, -raggio, -raggio*2, raggio*2)
    circle.setPen(pg.mkPen(0.2))
    plot.addItem(circle)
for x in xline:
    #printing the imaginary circle
    circle = pg.QtGui.QGraphicsEllipseItem(x + 1, 0, -x*2, x*2)
    circle.setPen(pg.mkPen(0.2))
    circle.setStartAngle(1440)
    circle.setSpanAngle(1440)
    plot.addItem(circle)
EDIT
That's my final code
plot.setAspectLocked()
plot.setXRange(-1,1, padding = 0)
plot.setYRange(-1,1, padding = 0)
#plot.addLine(y=0)
rline = [0.2, 0.5, 1.0, 2.0, 5.0]
xline = [0.2, 0.5, 1, 2, 5]
circle1 = pg.QtGui.QGraphicsEllipseItem(1, -1, -2, 2)
circle1.setPen(pg.mkPen('w', width=0))
circle1.setFlag(circle1.ItemClipsChildrenToShape)
plot.addItem(circle1)
pathItem = pg.QtGui.QGraphicsPathItem()
path = pg.QtGui.QPainterPath()
path.moveTo(1, 0)
for r in rline:
    raggio = 1./(1+r)
    path.addEllipse(1, -raggio, -raggio*2, raggio*2)
for x in xline:
    path.arcTo(x + 1, 0, -x*2, x*2, 90, -180)
    path.moveTo(1, 0)
    path.arcTo(x + 1, 0, -x*2, -x*2, 270, 180)
pathItem.setPath(path)
pathItem.setPen(pg.mkPen('g', width = 0.2))
pathItem.setParentItem(circle1)
Clipping is supported, but probably not the best option. A few possibilities:
Use QGraphicsPathItem combined with QPainterPath.arcTo to draw arcs without radial lines. This also lets you put multiple arcs on a single item rather than adding many items, which should improve performance (see the sketch after the clipping example below).
Use something like PlotCurveItem or arrayToQPath to manually draw your own arcs. If you use the connect argument, you'll again be able to generate multiple separate arcs on a single item.
Clipping is handled by Qt; see QGraphicsItem.itemClipsToShape and QGraphicsItem.itemClipsChildrenToShape. Beware: if you use this, you must set the pen width of the clipping object to 0 (Qt only partially supports cosmetic pens with width > 0). Example:
import pyqtgraph as pg
plot = pg.plot()
e1 = pg.QtGui.QGraphicsEllipseItem(0, 0, 4, 4)
# MUST have width=0 here, or use a non-cosmetic pen:
e1.setPen(pg.mkPen('r', width=0))
e1.setFlag(e1.ItemClipsChildrenToShape)
plot.addItem(e1)
e2 = pg.QtGui.QGraphicsEllipseItem(2, 2, 4, 4)
e2.setPen(pg.mkPen('g'))
e2.setParentItem(e1)
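For the first option, here is a rough sketch (not a full Smith chart; the radii are just illustrative) of putting several arcs on one QGraphicsPathItem via QPainterPath.arcTo, assuming the same pg.QtGui namespace used above, and mirroring what the question's edit ended up doing:
import pyqtgraph as pg

plot = pg.plot()
plot.setAspectLocked()

path = pg.QtGui.QPainterPath()
for x in [0.5, 1, 2]:
    # moveTo keeps the pen at the arcs' shared start point (1, 0), so
    # arcTo adds only the arc itself rather than a radial/connecting line.
    path.moveTo(1, 0)
    path.arcTo(x + 1, 0, -x*2, x*2, 90, -180)

item = pg.QtGui.QGraphicsPathItem(path)
item.setPen(pg.mkPen('g'))
plot.addItem(item)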

Shaded uncertainty/error region in matplotlib.pyplot

I'm searching for a way to draw shaded error regions instead of error bars in Python.
I know that there is matplotlib.pyplot.fill_between(), with which you can build a workaround for the y-error, but that does not cover the x-uncertainty.
Any ideas? Unfortunately I didn't have enough reputation to post this as a comment.
Thanks in advance!
Edit
matplotlib.pyplot.fill_betweenx() leads to something like:
Edit 2
Furthermore, I suspect that result is not quite correct for a complete uncertainty region. Below I draw what I believe to be the correct shape (I hope I'm not wrong here):
import numpy as np
import matplotlib.pyplot as plt
x = np.asarray([1.0, 2.0, 3.0, 4.0])
y = np.asarray([1.0, 2.3, 3.0, 4.0])
xerr = np.asarray([0.1, 0.7, 0.1, 0.1])
yerr = np.asarray([0.1, 0.9, 1.2, 0.1])
plt.errorbar(x, y, yerr, xerr)
plt.fill_between(x, y-yerr, y+yerr, facecolor='#F0F8FF', alpha=1.0, edgecolor='none')
plt.fill_betweenx(y,x-xerr, x+xerr, facecolor='#F0F8FF', alpha=1.0, edgecolor='#8F94CC', linewidth=1, linestyle='dashed')
plt.show()
# Red lines added with inkscape.
I got this to work with the fill_betweenx function:
import numpy as np
import matplotlib.pyplot as plt
x = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0])
xerr = np.asarray([0.2, 0.4, 0.6, 0.8, 1.0])
yerr = np.asarray([0.1, 0.2, 0.3, 0.4, 0.5])
plt.errorbar(x, y, yerr, xerr)
plt.fill_between(x, y-yerr, y+yerr,facecolor='r',alpha=0.5)
plt.fill_betweenx(y,x-xerr,x+xerr,facecolor='b',alpha=0.5)
plt.show()
Which results in this plot:
EDIT
In your specific example, it may be sufficient to use:
plt.fill_between(x, y-yerr, y+yerr,facecolor='#F0F8FF',alpha=1.0,edgecolor='none')
plt.fill_betweenx(y,x-xerr, x+xerr,facecolor='#F0F8FF',alpha=1.0,edgecolor='none')
That way you do not have edges that cross and give the "fold" away, and it can create a reasonably convincing illusion of a single error band. You'd still have to use both fill_between and fill_betweenx, though.
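For completeness, a minimal self-contained version of that suggestion, using the data from the question:
import numpy as np
import matplotlib.pyplot as plt

x = np.asarray([1.0, 2.0, 3.0, 4.0])
y = np.asarray([1.0, 2.3, 3.0, 4.0])
xerr = np.asarray([0.1, 0.7, 0.1, 0.1])
yerr = np.asarray([0.1, 0.9, 1.2, 0.1])

plt.errorbar(x, y, yerr, xerr)
# Two overlapping, edge-less fills approximate a single shaded error band.
plt.fill_between(x, y - yerr, y + yerr, facecolor='#F0F8FF', alpha=1.0, edgecolor='none')
plt.fill_betweenx(y, x - xerr, x + xerr, facecolor='#F0F8FF', alpha=1.0, edgecolor='none')
plt.show()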