OpenOffice Calc SUMPRODUCT with a twist

My first attempt in VBA, apart from using simple functions; asking for a kick start here.
Assume this (part of a) sheet:
factor b-count c-count d-count
A2 b2 c2 d2 ...
A3 b3 c3 d3 ...
Assume that these are the first columns and rows, A1 to D3, each holding a numeric value.
If factor is 1, I want A(N) (column 'A', row N >= 2) to hold the sumproduct of row 1 and row N.
The twist comes when factor is not 1. In that case I want a sumproduct of
count*round(value * factor).
Example:
1.5 2 1 0 4
=myfunc(2) 4 8 11 15
=myfunc(3) 11 20 28 36
=myfunc(4) 29 53 74 94
where myfunc(2) should result in
round(4*1.5)*2 + round(8*1.5)*1 + round(11*1.5)*0 + round(15*1.5)*4 = 6*2 + 12*1 + 17*0 + 23*4 = 12 + 12 + 0 + 92 = 116,
myfunc(3) = 17*2 + 30*1 + 0 + 54*4 = 34 + 30 + 216 = 280,
myfunc(4) = 44*2 + 80*1 + 0 + 141*4 = 88 + 80 + 564 = 732, and so on.
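For a quick sanity check of the arithmetic above, here is a small Python sketch (illustration only; it assumes round-half-up, which is what the example uses for 22.5 -> 23):

def sumproduct_with_factor(counts, values, factor):
    # round half up, matching the worked example
    round_half_up = lambda x: int(x + 0.5)
    return sum(c * round_half_up(v * factor) for c, v in zip(counts, values))

counts = [2, 1, 0, 4]
print(sumproduct_with_factor(counts, [4, 8, 11, 15], 1.5))   # 116
print(sumproduct_with_factor(counts, [11, 20, 28, 36], 1.5)) # 280
print(sumproduct_with_factor(counts, [29, 53, 74, 94], 1.5)) # 732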
I could just insert a row below each one, multiplying every value with the factor; but I would love something fancier.
Basically my thought (in pseudo-code):
col='B'
sum=0
do while (col)(N)>0
sum=sum+(col)(1)*round((col)(N)*A1;0)
col=col+1
loop
A(n)=sum
where (col)(N) refers to the cell in column col and row N.
It's not important enough for me to study the manual, but it would be great if someone could do this off the cuff.
Another point: I have read that custom functions must be stored in the "Standard Library",
but I could not find any mention of HOW to do that. Can someone point me to the right manual page?

Go to Tools -> Macros -> Organize Macros -> OpenOffice Basic. Select My Macros -> Standard -> Module 1 (that is what is meant by the Standard library), and press Edit.
Paste the following code.
Function SumProductOfTwoRows(firstColumn As Long, row As Long, firstRow As Long)
    'For example: =SUMPRODUCTOFTWOROWS(COLUMN(); ROW(); ROW($A$1))
    firstColumn = firstColumn - 1  'column A is index 0
    row = row - 1                  'row 1 is index 0
    firstRow = firstRow - 1        'row 1 is index 0
    oSheet = ThisComponent.CurrentController.ActiveSheet
    sum = 0
    column = firstColumn + 1
    factor = oSheet.getCellByPosition(firstColumn, firstRow).getValue()
    Do
        value = oSheet.getCellByPosition(column, row).getValue()
        count = oSheet.getCellByPosition(column, firstRow).getValue()
        If value = 0 Then Exit Do
        sum = sum + count * CLng(value * factor)
        column = column + 1
    Loop
    SumProductOfTwoRows = sum
End Function
Enter this formula in A2 and drag to fill down to A4.
=SUMPRODUCTOFTWOROWS(COLUMN(); ROW(); ROW($A$1))
This kind of user-defined function produces an error when re-opening the file. To avoid the error, see my answer at https://stackoverflow.com/a/39254907/5100564.


PowerBI: Calculate difference between columns

I'm struggling with measures and I would appreciate any help.
I have a table with columns: CategoryName, CaseID, CaseOrder and Value
For example:
CategoryName CaseID CaseOrder Value
A C1 2 10
B C1 2 20
C C1 2 30
A C2 3 15
C C2 3 25
A C3 1 10
B C3 1 15
C C3 1 10
I want to sum Value up by CaseID and CategoryName (sorted by CaseOrder; I've already sorted the CaseID column by CaseOrder in Data/Modelling), and calculate the difference between the total value for each case and the case with CaseOrder = 1.
(Note: in this particular example there is only one record per CaseID and Category, so the "sum" is trivial, but in the real data I need to sum by CategoryName.)
What I would like to see as result (I'm using Matrix component to see the data):
CategoryName   C3: Value  Diff   C1: Value  Diff   C2: Value  Diff
A                  10      0         10      0         15      5
B                  15      0         20      5                -15
C                  10      0         20     10         25     15
I've tried PowerBI: how do you calculate the difference between two columns in matrix, but it does not work. In this particular example, the Diff column for the C1 and C2 cases returns exactly the same value as the "Value" column, not the difference.
It is important to say that there may be 3, 4, 5, ... different CaseID values depending on the data I'm importing.
Here's the result I was able to get, using this measure:
Diff =
VAR CaseOrderValue =
    CALCULATE(
        MAX(Table1[Value]),
        ALLEXCEPT(Table1, Table1[CategoryName]),
        Table1[CaseOrder] = "1"
    )
RETURN
    MAX(Table1[Value]) - CaseOrderValue
You might get some weird stuff going on if you try to aggregate the values -- this is designed to work at the finest level of detail.
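For anyone who finds the DAX hard to follow, here is a rough pandas sketch of the same calculation (illustration only; it sums Value per category and case, as the question describes, and treats a missing category/case combination as 0):

import pandas as pd

df = pd.DataFrame({
    "CategoryName": ["A", "B", "C", "A", "C", "A", "B", "C"],
    "CaseID":       ["C1", "C1", "C1", "C2", "C2", "C3", "C3", "C3"],
    "CaseOrder":    [2, 2, 2, 3, 3, 1, 1, 1],
    "Value":        [10, 20, 30, 15, 25, 10, 15, 10],
})

# total Value per CategoryName and CaseID (the "sum up by CategoryName" part)
totals = (df.pivot_table(index="CategoryName", columns="CaseID",
                         values="Value", aggfunc="sum")
            .fillna(0))

# baseline case = the one with CaseOrder == 1 (C3 in this sample)
baseline_case = df.loc[df["CaseOrder"] == 1, "CaseID"].iloc[0]
diff = totals.sub(totals[baseline_case], axis=0)

print(totals)
print(diff)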

How can we find the nth 3-word combination from a word corpus of 3000 words?

I have a word corpus of say 3000 words such as [hello, who, this ..].
I want to find the nth 3-word combination from this corpus. I am fine with any order as long as the algorithm gives consistent output.
What would be the time complexity of the algorithm?
I have seen this answer but was looking for something simple.
(Note that I will be using 1-based indexes and ranks throughout this answer.)
To generate all combinations of 3 elements from a list of n elements, we'd take all elements from 1 to n-2 as the first element, then for each of these we'd take all elements after the first element up to n-1 as the second element, then for each of these we'd take all elements after the second element up to n as the third element. This gives us a fixed order, and a direct relation between the rank and a specific combination.
If we take element i as the first element, there are (n-i choose 2) possibilities for the second and third element, and thus (n-i choose 2) combinations with i as the first element. If we then take element j as the second element, there are (n-j choose 1) = n-j possibilities for the third element, and thus n-j combinations with i and j as the first two elements.
Linear search in tables of binomial coefficients
With tables of these binomial coefficients, we can quickly find a specific combination, given its rank. Let's look at a simplified example with a list of 10 elements; these are the number of combinations with element i as the first element:
i   combinations with first element i
1   C(9,2) = 36
2   C(8,2) = 28
3   C(7,2) = 21
4   C(6,2) = 15
5   C(5,2) = 10
6   C(4,2) =  6
7   C(3,2) =  3
8   C(2,2) =  1
    -----------
    120 = C(10,3)
And these are the number of combinations with element j as the second element:
j   combinations with second element j
2   C(8,1) = 8
3   C(7,1) = 7
4   C(6,1) = 6
5   C(5,1) = 5
6   C(4,1) = 4
7   C(3,1) = 3
8   C(2,1) = 2
9   C(1,1) = 1
So if we're looking for the combination with e.g. rank 96, we look at the number of combinations for each choice of first element i, until we find which group of combinations the combination ranked 96 is in:
i   count   compare      remaining rank
1   36      96 > 36      96 - 36 = 60
2   28      60 > 28      60 - 28 = 32
3   21      32 > 21      32 - 21 = 11
4   15      11 <= 15
So we know that the first element i is 4, and that within the 15 combinations with i=4, we're looking for the eleventh combination. Now we look at the number of combinations for each choice of second element j, starting after 4:
j   count   compare      remaining rank
5   5       11 > 5       11 - 5 = 6
6   4        6 > 4        6 - 4 = 2
7   3        2 <= 3
So we know that the second element j is 7, and that the third element is the second combination with j=7, which is k=9. So the combination with rank 96 contains the elements 4, 7 and 9.
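For concreteness, here is a small Python sketch of this linear search (my own illustration, using 1-based elements and ranks as in the walkthrough; math.comb requires Python 3.8+):

from math import comb

def unrank_comb3_linear(n, rank):
    """1-based rank -> 3-combination (i, j, k) of the elements 1..n."""
    i = 1
    while rank > comb(n - i, 2):   # skip the whole group of combinations with first element i
        rank -= comb(n - i, 2)
        i += 1
    j = i + 1
    while rank > n - j:            # skip the group of combinations with second element j
        rank -= n - j
        j += 1
    return i, j, j + rank          # the remaining rank selects the third element after j

print(unrank_comb3_linear(10, 96))  # (4, 7, 9)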
Binary search in tables of running total of binomial coefficients
Instead of creating a table of the binomial coefficients and then performing a linear search, it is of course more efficient to create a table of the running totals of the binomial coefficients and then perform a binary search on it. This improves the time complexity from O(N) to O(log N); in the case of N=3000, each of the two look-ups can be done in about log2(3000) ≈ 12 steps.
So we'd store:
i   running total
1    36
2    64
3    85
4   100
5   110
6   116
7   119
8   120
and:
j   running total
2    8
3   15
4   21
5   26
6   30
7   33
8   35
9   36
Note that when finding j in the second table, you have to subtract the sum corresponding with i from the sums. Let's walk through the example of rank 96 and combination [4,7,9] again; we find the first value that is greater than or equal to the rank:
3 85 96 > 85
4 100 96 <= 100
So we know that i=4; we then subtract the previous sum next to i-1, to get:
96 - 85 = 11
Now we look at the table for j, but we start after j=4 and subtract the sum corresponding to 4, which is 21, from the sums. Then, again, we find the first value that is greater than or equal to the rank we're looking for (which is now 11):
6 30 - 21 = 9 11 > 9
7 33 - 21 = 12 11 <= 12
So we know that j=7; we subtract the previous sum corresponding to j-1, to get:
11 - 9 = 2
So we know that the second element j is 7, and that the third element is the second combination with j=7, which is k=9. So the combination with rank 96 contains the elements 4, 7 and 9.
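The JavaScript at the end of this answer does exactly this; for reference, here is the same idea as a compact Python sketch (my own illustration; math.comb needs Python 3.8+, and in practice the two tables would be built once or hard-coded, as described below):

from bisect import bisect_left
from itertools import accumulate
from math import comb

def unrank_comb3_bisect(n, rank):
    """1-based rank -> 3-combination (i, j, k) of 1..n, via binary search on running totals."""
    i_table = [0] + list(accumulate(comb(n - x, 2) for x in range(1, n - 1)))
    j_table = [0, 0] + list(accumulate(n - y for y in range(2, n)))
    i = bisect_left(i_table, rank)       # first running total >= rank
    rank -= i_table[i - 1]
    rank += j_table[i]                   # offset so the second search can start after j = i
    j = bisect_left(j_table, rank, lo=i + 1)
    rank -= j_table[j - 1]
    return i, j, j + rank

print(unrank_comb3_bisect(10, 96))  # (4, 7, 9)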
Hard-coding the look-up tables
It is of course unnecessary to generate these look-up tables again every time we want to perform a look-up. We only need to generate them once, and then hard-code them into the rank-to-combination algorithm; this should take only 2998 * 64-bit + 2998 * 32-bit = 35kB of space, and make the algorithm incredibly fast.
Inverse algorithm
The inverse algorithm, to find the rank given a combination of elements [i,j,k] then means:
Finding the index of the elements in the list; if the list is sorted (e.g. words sorted alphabetically) this can be done with a binary search in O(logN).
Find the sum in the table for i that corresponds with i-1.
Add to that the sum in the table for j that corresponds with j-1, minus the sum that corresponds with i.
Add to that k-j.
Let's look again at the same example with the combination of elements [4,7,9]:
i=4 -> table_i[3] = 85
j=7 -> table_j[6] - table_j[4] = 30 - 21 = 9
k=9 -> k-j = 2
rank = 85 + 9 + 2 = 96
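The same example as a short Python sketch (my own illustration; it rebuilds the running-total tables on the fly, whereas in practice they would be precomputed or hard-coded):

from itertools import accumulate
from math import comb

def rank_comb3(n, i, j, k):
    """1-based rank of the 3-combination (i, j, k) of the elements 1..n."""
    i_table = [0] + list(accumulate(comb(n - x, 2) for x in range(1, n - 1)))
    j_table = [0, 0] + list(accumulate(n - y for y in range(2, n)))
    return i_table[i - 1] + (j_table[j - 1] - j_table[i]) + (k - j)

print(rank_comb3(10, 4, 7, 9))  # 85 + 9 + 2 = 96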
Look-up tables for N=3000
This snippet generates the look-up table with the running total of the binomial coefficients for i = 1 to 2998:
function C(n, k) { // binomial coefficient (Pascal's triangle)
    if (k < 0 || k > n) return 0;
    if (k > n - k) k = n - k;
    if (! C.t) C.t = [[1]];
    while (C.t.length <= n) {
        C.t.push([1]);
        var l = C.t.length - 1;
        for (var i = 1; i < l / 2; i++)
            C.t[l].push(C.t[l - 1][i - 1] + C.t[l - 1][i]);
        if (l % 2 == 0)
            C.t[l].push(2 * C.t[l - 1][(l - 2) / 2]);
    }
    return C.t[n][k];
}

for (var total = 0, x = 2999; x > 1; x--) {
    total += C(x, 2);
    document.write(total + ", ");
}
This snippet generates the look-up table with the running total of the binomial coefficients for j = 2 to 2999:
for (var total = 0, x = 2998; x > 0; x--) {
    total += x;
    document.write(total + ", ");
}
Code example
Here's a quick code example, unfortunately without the full hardcoded look-up tables, because of the size restriction on answers on SO. Run the snippets above and paste the results into the arrays iTable and jTable (after the leading zeros) to get the faster version with hard-coded look-up tables.
function combinationToRank(i, j, k) {
    return iTable[i - 1] + jTable[j - 1] - jTable[i] + k - j;
}

function rankToCombination(rank) {
    var i = binarySearch(iTable, rank, 1);
    rank -= iTable[i - 1];
    rank += jTable[i];
    var j = binarySearch(jTable, rank, i + 1);
    rank -= jTable[j - 1];
    var k = j + rank;
    return [i, j, k];

    function binarySearch(array, value, first) {
        var last = array.length - 1;
        while (first < last - 1) {
            var middle = Math.floor((last + first) / 2);
            if (value > array[middle]) first = middle;
            else last = middle;
        }
        return (value <= array[first]) ? first : last;
    }
}

var iTable = [0]; // append look-up table values here
var jTable = [0, 0]; // and here

// remove this part when using hard-coded look-up tables
function C(n,k){if(k<0||k>n)return 0;if(k>n-k)k=n-k;if(!C.t)C.t=[[1]];while(C.t.length<=n){C.t.push([1]);var l=C.t.length-1;for(var i=1;i<l/2;i++)C.t[l].push(C.t[l-1][i-1]+C.t[l-1][i]);if(l%2==0)C.t[l].push(2*C.t[l-1][(l-2)/2])}return C.t[n][k]}
for (var iTotal = 0, jTotal = 0, x = 2999; x > 1; x--) {
    iTable.push(iTotal += C(x, 2));
    jTable.push(jTotal += x - 1);
}
document.write(combinationToRank(500, 1500, 2500) + "<br>");
document.write(rankToCombination(1893333750) + "<br>");

Need help writing estimates statements in proc genmod

I'm using proc genmod to predict an outcome measured at 4 time points. The outcome is a total score on a mood inventory, which can range from 0 to 82. A lot of participants have a score of 0, so the negative binomial distribution in proc genmod seemed like a good fit for the data.
Now, I'm struggling with how to write/interpret the estimates statements. The primary predictors are TBI status at baseline (0=no/1=yes), and visit (0=baseline, 1=second visit, 2=third visit, 4=fourth visit), and an interaction of TBI status and visit.
How do I write my estimates, such that I'm getting out:
1. the average difference in mood inventory score for person with TBI versus a person without, at baseline.
and
2. the average difference in mood inventory change score for a person with TBI versus a person without, over the 4 study visits?
Below is what I have thus far, but I'm not sure how to interpret the output (also below), or whether my code is even correct:
proc genmod data = analyze_long_3 ;
    class id screen_tbi (param = ref ref = first) ;
    model nsi_total = visit_cent screen_tbi screen_tbi*visit_cent / dist=negbin ;
    output predicted = predstats;
    repeated subject=id / type=cs;
    estimate "tbi"    intercept 1 visit_cent 0 0 0 0 screen_tbi 1 0 / exp;
    estimate "no tbi" intercept 1 visit_cent 0 0 0 0 screen_tbi 0 1 / exp;
    estimate 'longitudinal TBI' intercept 1
        visit_cent -1 1 1 1
        screen_tbi 1 0
        screen_tbi*visit_cent 1 0 0 0
                              0 1 0 0
                              0 0 1 0
                              0 0 0 1
                              0 0 0 0
                              0 0 0 0
                              0 0 0 0
                              0 0 0 0 / exp;
    estimate 'longitudinal no TBI ' intercept 1
        visit_cent -1 1 1 1
        screen_tbi 0 1
        screen_tbi*visit_cent 0 0 0 0
                              0 0 0 0
                              0 0 0 0
                              0 0 0 0
                              1 0 0 0
                              0 1 0 0
                              0 0 1 0
                              0 0 0 1 / exp;
    where sample = 1 ;
run;
The first research question is to get the average difference in score, at baseline, for a person with TBI versus a person without. It can be achieved with the following steps:
1) Get the estimated average log(score) when TBI = yes and Visit = baseline;
2) Get the estimated average log(score) when TBI = no and Visit = baseline;
3) Subtract 2) from 1) to get the difference in log(score) values;
4) Exponentiate 3) to express the difference as a relative (percent) change in scores.
To simplify, let T = TBI level and V = visit level. One thing to clarify: in your post there are 4 visit points, with the first as reference; therefore there should be 3 parameters for V, not four.
Taking the example of step 1), let’s try to write the ESTIMATE statement. It is a bit tricky. At first it sounds like this (T=0 and V =0 as reference):
ESTIMATE ‘Overall average’ intercept T 1 V 0 0 0;
But it is wrong. In the above statement, all arguments for V are set to 0. When all arguments are 0, it is the same as taking out V from the statement:
ESTIMATE ‘Overall average’ intercept T 1;
This is not the estimate of average for T=1 at baseline level. Rather, it produces an average for T=1, regardless of visit points, or, an average for all visit levels.
The problem is that the reference is set as V=0. In that case, SAS cannot tell the difference between estimates for the reference level and estimates for all levels; indeed it always estimates the average over all levels. To solve this, the reference has to be set to -1, i.e., T=-1 and V=-1 as reference, so that the statement looks like:
ESTIMATE ‘Average of T=1 V=baseline’ intercept T 1 V -1 -1 -1;
Now SAS understands: fine! The job is to get the average at the baseline level, not over all levels.
To make the reference value as -1 instead of 0, in the CLASS statement, the option should be specified as PARAM = EFFECT, not PARAM = REF. That brings another problem: once PARAM is not set as REF, SAS will ignore the user defined references. For example:
CLASS id T (ref=’…’) V (ref=’…’) / PARAM=EFFECT;
The (ref=’…’) is ignored when PARAM=EFFECT. How to let SAS make TBI=No and Visit=baseline as references? Well, SAS automatically takes the last level as the reference. For example, if the variable T is ordered ascendingly, the value -1 comes as the first level, while the value 1 comes as the last level; therefore 1 will be the reference. Conversely, if T is ordered in descending order, the value -1 comes at the end and will be used as the ref. This is achieved by the option ‘DESCENDING’ in the CLASS statement.
CLASS id T V / PARAM=EFFECT DESCENDING;
That way, the parameters are ordered as:
T 1 (TBI =1)
T -1 (ref level of TBI, i.e., TBI=no)
V 1 0 0 (for visit =4)
V 0 1 0 (visit = 3)
V 0 0 1 (visit =2)
V -1 -1 -1 (this is the ref level, visit=baseline)
The above information is reported in the ODS table ‘Class Level Information’. It is always good to check the very table each time after running PROC GENMOD. Note that the level (visit = 4) comes before the level (visit =3), visit =3 coming before visit=2.
Now, let’s talk a bit about the parameters and the model equation. As you might know, in SAS, the V for multi-levels is indeed broken down into dummy Vs. If baseline is set as ref level, the dummies will be like:
V4 = the fourth visit or baseline
V3= the third visit, or baseline
V2 = the second visit or baseline
Accordingly, the equation can be written as:
LOG(s) = b0 + b1*T + b2*V4 + b3*V3 + b4*V2
whereas:
s = the total score on a mood inventory
T = 1 for TBI status of yes, = -1 for TBI status of no
V4 = 1 for the fourth visit, = -1 for baseline
V3 = 1 for the third visit, =-1 for baseline
V2 = 1 for the second visit, = -1 for the baseline
b0 to b4 are beta estimates for the parameters
Of note, the order in the model is the same as the order defined in the statement CLASS, and the same as the order in the ODS table ‘Class Level Information’. The V4, V3, V2 have to appear in the model, all or none, i.e., if the VISIT term is to be included, V4 V3 V2 should be all introduced into the model equation. If the VISIT term is not included, none of V4, V3, and V2 should be in the equation.
With interaction terms, 3 more dummy terms must be created:
T_V4 = T*V4
T_V3 = T*V3
T_V2 = T*V2
Hence the equation with interaction terms:
Log(s) = b0 + b1*T + b2*V4 + b3*V3 + b4*V2 + b5*T_V4 + b6* T_V3 + b7* T_V2
The SAS ESTIMATE statement corresponds directly to the model equation.
For example, to estimate an overall average for all parameters and all levels, the equation is:
[Log(S)] = b0 ;
whereas [LOG(S)] stands for the expected LOG(score). Accordingly, the statement is:
ESTIMATE ‘overall (all levels of T and V)’ INTERCEPT;
In the above statement, ‘INTERCEPT’ corresponds to b0 in the equation.
To estimate an average of log (score) for T =1, and for all levels of visit points, the equation is
[LOG(S)] = b0 + b1 * T = b0 + b1 * 1
And the statement is
ESTIMATE ‘T=Yes, V= all levels’ INTERCEPT T 1;
In the above case, ‘T 1’ in the statement corresponds to the part “*1” in the equation (i.e., let T = 1).
To estimate an average of log (score) for T =1, and for visit = baseline, the equation is:
[Log(s)] = b0 + b1*T + b2*V4 + b3*V3 + b4*V2
= b0 + b1*(1) + b2*(-1)+ b3*(-1) + b4*(-1)
The statement is:
ESTIMATE ‘T=Yes, V=Baseline’ INTERCEPT T 1 V -1 -1 -1;
‘V -1 -1 -1’ in the statement corresponds to the values of V4, V3, and V2 in the equation. We’ve mentioned above that the dummies V4, V3 and V2 must all be introduced into the model; that is why the V term always takes three numbers, such as ‘V -1 -1 -1’ or ‘V 1 1 1’. SAS will give a warning in the log if you write ‘V -1 -1 -1 -1’, because there are four '-1's, one more than required; the excess '-1' is ignored. On the contrary, ‘V 1 1’ is fine: it is the same as ‘V 1 1 0’. But what does ‘V 1 1 0’ mean? To figure that out, you have to read Allison’s book (see reference).
For now, let’s carry on, and add the interaction terms. The equation:
[Log(s)] = b0 + b1*T + b2*V4 + b3*V3 + b4*V2 + b5*T_V4 + b6*T_V3 + b7*T_V2
As T_V4 = T*V4 = 1 * (-1) = -1, similarly T_V3 = -1, T_V2=-1, substitute into the equation:
[Log(s)] = b0 + b1*1 + b2*(-1)+ b3*(-1)+ b4*(-1)+ b5*(-1) + b6*(-1) + b7*(-1)
The statement is:
ESTIMATE ‘(1) T=Yes, V=Baseline, with interaction’ INTERCEPT T 1 V -1 -1 -1 T*V -1 -1 -1;
The ‘T*V -1 -1 -1’ corresponds to the values of T_V4, T_V3 and T_V2 in the equation.
And that is the statement for step 1)!
Step 2 follows the same logic: get the estimated average log(score) when TBI = no and Visit = baseline.
T = -1, V4=-1, V3=-1, V2=-1.
T_V4 = T * V4 = (-1) * (-1) = 1
T_V3 = T * V3 = (-1) * (-1) = 1
T_V2 = T * V2 = (-1) * (-1) = 1
Substituting the values in the equation:
[Log(s)] = b0 + b1*(-1) + b2*(-1) + b3*(-1) + b4*(-1) + b5*(1) + b6*(1) + b7*(1)
Note the numbers: for T: -1; for V: -1 -1 -1; for the interaction terms: 1 1 1
And the SAS statement:
ESTIMATE ‘(2) T=No, V=Baseline, with interaction’ INTERCEPT T -1 V -1 -1 -1 T*V 1 1 1;
The estimate results can be found in the ODS table ‘Contrast Estimate Results’.
For step 3), subtract estimate (2) from estimate (1) to get the difference in log(score); for step 4), take the exponent of the difference from step 3).
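Purely to illustrate how these coefficient rows line up, here is a quick Python sketch (not SAS; the helper name and layout are my own, following the equation and coding above):

def coeff_row(t, v):
    """t = 1 (TBI yes) or -1 (TBI no); v = 'baseline', 2, 3 or 4.
    Returns the coefficients of [b0, b1, b2, b3, b4, b5, b6, b7]."""
    v_dummies = {'baseline': [-1, -1, -1], 4: [1, 0, 0], 3: [0, 1, 0], 2: [0, 0, 1]}[v]
    return [1, t] + v_dummies + [t * d for d in v_dummies]

row_tbi    = coeff_row(+1, 'baseline')  # ESTIMATE ... T 1  V -1 -1 -1  T*V -1 -1 -1
row_no_tbi = coeff_row(-1, 'baseline')  # ESTIMATE ... T -1 V -1 -1 -1  T*V  1  1  1
diff = [a - b for a, b in zip(row_tbi, row_no_tbi)]
print(diff)  # [0, 2, 0, 0, 0, -2, -2, -2]: the baseline contrast is 2*b1 - 2*(b5 + b6 + b7)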
For the second research question:
The average difference in mood inventory change score for a person with TBI versus a person without, over the 4 study visits.
Over the 4 study visits means over all visit levels. By now you might have guessed that the statements are simpler:
ESTIMATE ‘(1) T=Yes, V=all levels’ INTERCEPT T 1;
ESTIMATE ‘(2) T=No, V=all levels’ INTERCEPT T -1;
Why are there no interaction terms? Because all visit levels are considered, and when all levels are considered you do not have to put any visit-related terms into the statement.
Finally, the above approach requires some manual calculation. It is possible to write a single ESTIMATE statement that is equivalent to the approach above, but the method discussed here is much easier to understand. For more sophisticated methods, please read Allison’s book.
Reference:
1. Allison, Paul D. Logistic Regression Using SAS®: Theory and Application, Second Edition. SAS Institute Inc., Cary, North Carolina, 2012.

Downscale array for decimal factor

Is there an efficient way to downscale the number of elements in an array by a decimal (non-integer) factor?
I want to downsize an array by a certain factor.
Example:
If I have 10 elements and need to scale down by a factor of 2:
1 2 3 4 5 6 7 8 9 10
scaled to
1.5 3.5 5.5 7.5 9.5
Grouping the elements 2 by 2 and taking the arithmetic mean.
My problem: what if I need to downsize an array of 10 elements to 6 elements? In theory I should group about 1.67 elements at a time and take their arithmetic mean, but how do I do that?
Before suggesting a solution, let's define "downsize" in a more formal way. I would suggest this definition:
Downsizing starts with an array a[N] and produces an array b[M] such that the following is true:
M <= N - otherwise it would be upsizing, not downsizing
SUM(b) = (M/N) * SUM(a) - The sum is reduced proportionally to the number of elements
Elements of a participate in computation of b in the order of their occurrence in a
Let's consider your example of downsizing 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 to six elements. The total for your array is 55, so the total for the new array would be (6/10)*55 = 33. We can achieve this total in two steps:
Walk the array a totaling its elements until we've reached the integer part of N/M fraction (it must be an improper fraction by rule 1 above)
Let's say that a[i] was the last element of a that we could take as a whole in the current iteration. Take the fraction of a[i+1] equal to the fractional part of N/M
Continue to the next number starting with the remaining fraction of a[i+1]
Once you are done, your array b will contain M numbers totaling SUM(a). Walk the array once more and divide each element by N/M (i.e., scale by M/N).
Here is how it works with your example:
b[0] = a[0] + (2/3)*a[1] = 2.33333
b[1] = (1/3)*a[1] + a[2] + (1/3)*a[3] = 5
b[2] = (2/3)*a[3] + a[4] = 7.66666
b[3] = a[5] + (2/3)*a[6] = 10.6666
b[4] = (1/3)*a[6] + a[7] + (1/3)*a[8] = 13.3333
b[5] = (2/3)*a[8] + a[9] = 16
--------
Total = 55
Scaling down by 6/10 produces the final result:
1.4 3 4.6 6.4 8 9.6 (Total = 33)
Here is a simple implementation in C++:
// Assumes b has already been sized to M elements and zero-initialized.
double need = ((double)a.size()) / b.size();
double have = 0;
size_t pos = 0;
for (size_t i = 0 ; i != a.size() ; i++) {
    if (need >= have+1) {
        b[pos] += a[i];
        have++;
    } else {
        double frac = (need-have); // frac is less than 1 because of the "if" condition
        b[pos++] += frac * a[i];   // frac of a[i] goes to current element of b
        have = 1 - frac;
        b[pos] += have * a[i];     // (1-frac) of a[i] goes to the next position of b
    }
}
for (size_t i = 0 ; i != b.size() ; i++) {
    b[i] /= need;
}
Demo.
You will need to resort to some form of interpolation, as the number of elements to average isn't integer.
You can consider computing the prefix sum of the array, i.e.
0 1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9 10
yields by summation
0 1 2 3 4 5 6 7 8 9
1 3 6 10 15 21 28 36 45 55
Then perform linear interpolation to get the intermediate values that you are lacking, like at 0*, 10/6, 20/6, 30/6*, 40/6, 50/6, 60/6*. (Those with an asterisk are readily available.)
position: 0   1   10/6   2    3   20/6    4    5    6   40/6    7    8   50/6    9
sum:      1   3   15/3   6   10   35/3   15   21   28  100/3   36   45  145/3   55
Now you get fractional sums by subtracting values in pairs. The first average is
(15/3-1)/(10/6) = 12/5
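Here is a rough Python/NumPy sketch of this prefix-sum-plus-interpolation idea (my own illustration; it puts a leading zero in front of the prefix sum, i.e. interpolates at element boundaries, so it reproduces the grouped averages from the answer above rather than the slightly shifted 12/5 value):

import numpy as np

def downscale(a, m):
    # average an array into m buckets of equal (possibly fractional) width
    a = np.asarray(a, dtype=float)
    n = len(a)
    prefix = np.concatenate(([0.0], np.cumsum(a)))     # prefix[k] = sum of a[:k]
    edges = np.linspace(0, n, m + 1)                   # fractional bucket boundaries
    sums = np.interp(edges, np.arange(n + 1), prefix)  # interpolated prefix sums at the edges
    return np.diff(sums) / (n / m)                     # bucket sums divided by bucket width

print(downscale(range(1, 11), 6))  # [1.4  3.  4.6  6.4  8.  9.6]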
I can't think of anything in the C++ library that will crank out something like this, all fully cooked and ready to go.
So you'll have to, pretty much, roll up your sleeves and go to work. At this point, the question of what's the "efficient" way of doing it boils down to its very basics. Which means:
1) Calculate how big the output array should be. Based on the description of the issue, you should be able to make that calculation even before looking at the values in the input array. You know the input array's size(), you can calculate the size() of the destination array.
2) So, you resize() the destination array up front. Now, you no longer need to worry about the time wasted in growing the size of the dynamic output array, incrementally, as you go through the input array, making your calculations.
3) So what's left is the actual work: iterating over the input array, and calculating the downsized values.
auto b=input_array.begin();
auto e=input_array.end();
auto p=output_array.begin();
I don't see many other options here besides brute-force iteration and calculation: iterate from b to e, gathering your samples, calculating each downsized value, and saving the result into *p++.

Ranking and unranking of permutations with duplicates

I'm reading about permutations and I'm interested in ranking/unranking methods.
From the abstract of a paper:
A ranking function for the permutations on n symbols assigns a unique
integer in the range [0, n! - 1] to each of the n! permutations. The corresponding
unranking function is the inverse: given an integer between 0 and n! - 1, the
value of the function is the permutation having this rank.
I made a ranking and an unranking function in C++ using next_permutation. But this isn't practical for n>8. I'm looking for a faster method and factoradics seem to be quite popular.
But I'm not sure if this also works with duplicates. So what would be a good way to rank/unrank permutations with duplicates?
I will cover one half of your question in this answer - 'unranking'. The goal is to find the lexicographically 'K'th permutation of an ordered string [abcd...] efficiently.
We need to understand the factorial number system (factoradics) for this. The factorial number system uses factorials instead of powers of a fixed base (binary uses powers of 2, decimal uses powers of 10) as place values.
The place values (bases) are:
5! = 120, 4! = 24, 3! = 6, 2! = 2, 1! = 1, 0! = 1, etc.
The digit in the zeroth place is always 0. The digit in the first place (with base = 1!) can be 0 or 1. The digit in the second place (with base 2!) can be 0,1 or 2 and so on. Generally speaking, the digit at nth place can take any value between 0-n.
First few numbers represented as factoradics-
0 -> 0 = 0*0!
1 -> 10 = 1*1! + 0*0!
2 -> 100 = 1*2! + 0*1! + 0*0!
3 -> 110 = 1*2! + 1*1! + 0*0!
4 -> 200 = 2*2! + 0*1! + 0*0!
5 -> 210 = 2*2! + 1*1! + 0*0!
6 -> 1000 = 1*3! + 0*2! + 0*1! + 0*0!
7 -> 1010 = 1*3! + 0*2! + 1*1! + 0*0!
8 -> 1100 = 1*3! + 1*2! + 0*1! + 0*0!
9 -> 1110
10-> 1200
There is a direct relationship between n-th lexicographical permutation of a string and its factoradic representation.
For example, here are the permutations of the string “abcd”.
0 abcd 6 bacd 12 cabd 18 dabc
1 abdc 7 badc 13 cadb 19 dacb
2 acbd 8 bcad 14 cbad 20 dbac
3 acdb 9 bcda 15 cbda 21 dbca
4 adbc 10 bdac 16 cdab 22 dcab
5 adcb 11 bdca 17 cdba 23 dcba
We can see a pattern here if we observe carefully. The first letter changes after every 6th (3!) permutation. The second letter changes after every 2nd (2!) permutation. The third letter changes after every (1!) permutation, and the fourth letter changes after every (0!) permutation. We can use this relation to find the n-th permutation directly.
Once we represent n in factoradic representation, we consider each digit in it and add a character from the given string to the output. If we need to find the 14-th permutation of ‘abcd’. 14 in factoradics -> 2100.
Start with the first digit ->2, String is ‘abcd’. Assuming the index starts at 0, take the element at position 2, from the string and add it to the Output.
Output String
c abd
2 012
The next digit -> 1.String is now ‘abd’. Again, pluck the character at position 1 and add it to the Output.
Output String
cb ad
21 01
Next digit -> 0. String is ‘ad’. Add the character at position 0 to the Output.
Output String
cba d
210 0
Next digit -> 0. String is ‘d’. Add the character at position 0 to the Output.
Output String
cbad ''
2100
To convert a given number to the factorial number system, successively divide the number by 1, 2, 3, 4, 5 and so on until the quotient becomes zero. The remainders at each step form the factoradic representation, least significant digit first.
For example, to convert 349 to factoradic:
Division   Quotient   Remainder   Factorial representation
349/1      349        0           0
349/2      174        1           10
174/3      58         0           010
58/4       14         2           2010
14/5       2          4           42010
2/6        0          2           242010
Factoradic representation of 349 is 242010.
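Putting the digit extraction and the character picking together, here is a small Python sketch of the unranking procedure described above (my own illustration; the rank is 0-based, matching the permutation table):

def nth_permutation(elements, rank):
    # return the rank-th (0-based) lexicographic permutation, via factoradics
    elements = list(elements)
    n = len(elements)
    digits = []                      # factoradic digits of `rank`, built least significant first
    for base in range(1, n + 1):
        digits.append(rank % base)
        rank //= base
    digits.reverse()                 # most significant digit first, e.g. 14 -> [2, 1, 0, 0]
    return [elements.pop(d) for d in digits]

print(''.join(nth_permutation("abcd", 14)))  # cbad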
One way to handle duplicates is to rank and unrank, for each group of equal values, the choice of the positions that the group occupies, e.g.,
def choose(n, k):
    c = 1
    for f in xrange(1, k + 1):
        c = (c * (n - f + 1)) // f
    return c

def rank_choice(S):
    k = len(S)
    r = 0
    j = k - 1
    for n in S:
        for i in xrange(j, n):
            r += choose(i, j)
        j -= 1
    return r

def unrank_choice(k, r):
    S = []
    for j in xrange(k - 1, -1, -1):
        n = j
        while r >= choose(n, j):
            r -= choose(n, j)
            n += 1
        S.append(n)
    return S

def rank_perm(P):
    P = list(P)
    r = 0
    for n in xrange(max(P), -1, -1):
        S = []
        for i, p in enumerate(P):
            if p == n:
                S.append(i)
        S.reverse()
        for i in S:
            del P[i]
        r *= choose(len(P) + len(S), len(S))
        r += rank_choice(S)
    return r

def unrank_perm(M, r):
    P = []
    for n, m in enumerate(M):
        S = unrank_choice(m, r % choose(len(P) + m, m))
        r //= choose(len(P) + m, m)
        S.reverse()
        for i in S:
            P.insert(i, n)
    return tuple(P)

if __name__ == '__main__':
    for i in xrange(60):
        print rank_perm(unrank_perm([2, 3, 1], i))
For large n you need an arbitrary-precision library like GMP.
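To make the interface of the code above concrete: as I read it, rank_perm takes a permutation encoded as small integers, and unrank_perm takes the multiplicity list M (the [2, 3, 1] in the demo means two 0s, three 1s and one 2, i.e. 6!/(2!*3!*1!) = 60 permutations). A tiny usage sketch with a real word (the letter encoding is my own):

word = (1, 0, 2, 0, 2, 0)                 # "banana" with a=0, b=1, n=2
r = rank_perm(word)                       # rank among the 6!/(3!*1!*2!) = 60 arrangements
assert unrank_perm([3, 1, 2], r) == word  # round trip back to the same tuple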
This is my previous post with an unranking function written in Python. I think it's readable, almost like pseudocode, and there is also some explanation in the comments: Given a list of elements in lexicographical order (i.e. ['a', 'b', 'c', 'd']), find the nth permutation - Average time to solve?
Based on this you should be able to figure out the ranking function; it's basically the same logic ;)
Java, from https://github.com/timtiemens/permute/blob/master/src/main/java/permute/PermuteUtil.java (my public domain code, minus the error checking):
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class PermuteUtil {

    public <T> List<T> nthPermutation(List<T> original, final BigInteger permutationNumber) {
        final int size = original.size();
        // the return list:
        List<T> ret = new ArrayList<>();
        // local mutable copy of the original list:
        List<T> numbers = new ArrayList<>(original);
        // Our input permutationNumber is [1,N!], but array indexes are [0,N!-1], so subtract one:
        BigInteger permNum = permutationNumber.subtract(BigInteger.ONE);
        for (int i = 1; i <= size; i++) {
            BigInteger factorialNminusI = factorial(size - i);
            // casting to integer is ok here, because even though permNum _could_ be big,
            // the factorialNminusI is _always_ big
            int j = permNum.divide(factorialNminusI).intValue();
            permNum = permNum.mod(factorialNminusI);
            // remove item at index j, and put it in the return list at the end
            T item = numbers.remove(j);
            ret.add(item);
        }
        return ret;
    }

    // plain stand-in for the factorial helper in the linked file
    private BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }
}
}