Most efficient way to calculate lexicographic index - c++

Can anybody find any potentially more efficient algorithms for accomplishing the following task?
For any given permutation of the integers 0 thru 7, return the index which describes the permutation lexicographically (indexed from 0, not 1).
For example,
The array 0 1 2 3 4 5 6 7 should return an index of 0.
The array 0 1 2 3 4 5 7 6 should return an index of 1.
The array 0 1 2 3 4 6 5 7 should return an index of 2.
The array 1 0 2 3 4 5 6 7 should return an index of 5040 (that's 7!, or factorial(7)).
The array 7 6 5 4 3 2 1 0 should return an index of 40319 (that's 8!-1). This is the maximum possible return value.
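(In general the reversed sequence is the last permutation of 0..n-1, and its index of n! - 1 follows from the identity 1*1! + 2*2! + ... + (n-1)*(n-1)! = n! - 1.)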
My current code looks like this:
int lexic_ix(int* A){
int value = 0;
for(int i=0 ; i<7 ; i++){
int x = A[i];
for(int j=0 ; j<i ; j++)
if(A[j]<A[i]) x--;
value += x*factorial(7-i); // actual unrolled version doesn't have a function call
}
return value;
}
I'm wondering if there's any way I can reduce the number of operations by removing that inner loop, or if I can reduce conditional branching in any way (other than unrolling - my current code is actually an unrolled version of the above), or if there are any clever bitwise hacks or filthy C tricks to help.
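(For illustration, one bitwise direction along those lines: keep a bitmask of the values placed so far and replace the inner loop with a single popcount. This is only a sketch, not benchmarked, and it assumes a compiler intrinsic such as GCC/Clang's __builtin_popcount.)
int lexic_ix_popcnt(const int* A){ // hypothetical name; a sketch only
    static const int fact[7] = {5040, 720, 120, 24, 6, 2, 1}; // factorial(7 - i)
    int value = 0;
    unsigned seen = 0; // bit v is set once the value v has been placed
    for(int i = 0; i < 7; i++){
        // number of already-placed values smaller than A[i], via one popcount
        int x = A[i] - __builtin_popcount(seen & ((1u << A[i]) - 1));
        seen |= 1u << A[i];
        value += x * fact[i];
    }
    return value;
}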
I already tried replacing
if(A[j]<A[i]) x--;
with
x -= (A[j]<A[i]);
and I also tried
x = A[j]<A[i] ? x-1 : x;
Both replacements actually led to worse performance.
And before anyone says it - YES this is a huge performance bottleneck: currently about 61% of the program's runtime is spent in this function, and NO, I don't want to have a table of precomputed values.
Aside from those, any suggestions are welcome.

Don't know if this helps, but here's another solution:
int lexic_ix(int* A, int n){ //n = last index = number of digits - 1
int value = 0;
int x = 0;
for(int i=0 ; i<n ; i++){
int diff = (A[i] - x); //pb1
if(diff > 0)
{
for(int j=0 ; j<i ; j++)//pb2
{
if(A[j]<A[i] && A[j] > x)
{
if(A[j]==x+1)
{
x++;
}
diff--;
}
}
value += diff;
}
else
{
x++;
}
value *= n - i;
}
return value;
}
I couldn't get rid of the inner loop, so the complexity is O(n^2) in the worst case, but O(n) in the best case, versus your solution which is O(n^2) in all cases.
Alternatively, you can replace the inner loop by the following to remove some worst cases at the expense of another verification in the inner loop :
int j=0;
while(diff>1 && j<i)
{
if(A[j]<A[i])
{
if(A[j]==x+1)
{
x++;
}
diff--;
}
j++;
}
Explanation :
(or rather "How I ended up with that code" - I think it is not that different from yours, but it might give you ideas, maybe)
(for less confusion I used characters instead of digits, and only four of them)
abcd 0 = ((0 * 3 + 0) * 2 + 0) * 1 + 0
abdc 1 = ((0 * 3 + 0) * 2 + 1) * 1 + 0
acbd 2 = ((0 * 3 + 1) * 2 + 0) * 1 + 0
acdb 3 = ((0 * 3 + 1) * 2 + 1) * 1 + 0
adbc 4 = ((0 * 3 + 2) * 2 + 0) * 1 + 0
adcb 5 = ((0 * 3 + 2) * 2 + 1) * 1 + 0 //pb1
bacd 6 = ((1 * 3 + 0) * 2 + 0) * 1 + 0
badc 7 = ((1 * 3 + 0) * 2 + 1) * 1 + 0
bcad 8 = ((1 * 3 + 1) * 2 + 0) * 1 + 0 //First reflexion
bcda 9 = ((1 * 3 + 1) * 2 + 1) * 1 + 0
bdac 10 = ((1 * 3 + 2) * 2 + 0) * 1 + 0
bdca 11 = ((1 * 3 + 2) * 2 + 1) * 1 + 0
cabd 12 = ((2 * 3 + 0) * 2 + 0) * 1 + 0
cadb 13 = ((2 * 3 + 0) * 2 + 1) * 1 + 0
cbad 14 = ((2 * 3 + 1) * 2 + 0) * 1 + 0
cbda 15 = ((2 * 3 + 1) * 2 + 1) * 1 + 0 //pb2
cdab 16 = ((2 * 3 + 2) * 2 + 0) * 1 + 0
cdba 17 = ((2 * 3 + 2) * 2 + 1) * 1 + 0
[...]
dcba 23 = ((3 * 3 + 2) * 2 + 1) * 1 + 0
First "reflexion" :
An entropy point of view: abcd has the least "entropy". If a character is in a place where it "shouldn't" be, it creates entropy, and the earlier the entropy occurs, the greater it becomes.
For bcad for example, lexicographic index is 8 = ((1 * 3 + 1) * 2 + 0) * 1 + 0 and can be calculated that way :
value = 0;
value += max(b - a, 0); // = 1; (a "should be" in the first place [to create the less possible entropy] but instead it is b)
value *= 3 - 0; //last index - current index
value += max(c - b, 0); // = 1; (b "should be" in the second place but instead it is c)
value *= 3 - 1;
value += max(a - c, 0); // = 0; (a "should have been" put earlier, so it does not create entropy to put it there)
value *= 3 - 2;
value += max(d - d, 0); // = 0;
Note that the last operation will always do nothing, which is why the loop only runs while i < n.
First problem (pb1) :
For adcb, for example, the first logic doesn't work (it leads to a lexicographic index of ((0*3 + 2)*2 + 0)*1 = 4), because c - d = 0 even though putting c before b does create entropy. I added x because of that: it represents the first digit/character that isn't placed yet. With x, diff cannot be negative.
For adcb, lexicographic index is 5 = ((0 * 3 + 2) * 2 + 1) * 1 + 0 and can be calculated that way :
value = 0; x=0;
diff = a - a; // = 0; (a is in the right place)
diff == 0 => x++; //x=b now and we don't modify value
value *= 3 - 0; //last index - current index
diff = d - b; // = 2; (b "should be" there (it's x) but instead it is d)
diff > 0 => value += diff; //we add diff to value and we don't modify x
value *= 3 - 1;
diff = c - b; // = 1; (b "should be" there but instead it is c) This is where it differs from the first reflexion
diff > 0 => value += diff;
value *= 3 - 2;
Second problem (pb2) :
For cbda, for example, the lexicographic index is 15 = ((2 * 3 + 1) * 2 + 1) * 1 + 0, but the first reflexion gives ((2 * 3 + 0) * 2 + 1) * 1 + 0 = 13 and the solution to pb1 gives ((2 * 3 + 1) * 2 + 3) * 1 + 0 = 17. The solution to pb1 doesn't work because the two last characters to place are d and a, so d - a "means" 1 instead of 3. I had to count the already-placed characters that come before the character in place, but after x, so I had to add an inner loop.
Putting it all together :
I then realised that pb1 was just a particular case of pb2, and that if you remove x and simply take diff = A[i], we end up with essentially your solution (with the factorial calculated little by little, and my diff corresponding to your x).
So, basically, my "contribution" (I think) is to add a variable, x, which can avoid doing the inner loop when diff equals 0 or 1, at the expense of checking if you have to increment x and doing it if so.
I also checked whether you have to increment x in the inner loop (if(A[j]==x+1)), because if you take badce, for example, x will be b at the end (because a comes after b), and you will enter the inner loop one more time, encountering c. If you check x in the inner loop, then when you encounter d you have no choice but to do the inner loop, but x will update to c, and when you encounter c you will not enter the inner loop. You can remove this check without breaking the program.
With the alternative version and the check in the inner loop, that makes 4 different versions. The alternative one with the check is the one that enters the inner loop the least, so in terms of "theoretical complexity" it is the best, but in terms of performance / number of operations, I don't know.
Hope all of this helps (since the question is rather old, and I didn't read all the answers in detail). If not, I still had fun doing it. Sorry for the long post. Also, I'm new on Stack Overflow (as a member) and not a native speaker, so please be nice, and don't hesitate to let me know if I did something wrong.

Linear traversal of memory already in the cache really doesn't take much time at all. Don't worry about it. You won't be traversing enough distance before factorial() overflows.
Move the 8 out as a parameter.
int factorial ( int input )
{
return input ? input * factorial (input - 1) : 1;
}
int lexic_ix ( int* arr, int N )
{
int output = 0;
int fact = factorial (N);
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
output += order * (fact /= N - i);
}
return output;
}
int main()
{
int arr [ ] = { 11, 10, 9, 8, 7 , 6 , 5 , 4 , 3 , 2 , 1 , 0 };
const int length = 12;
for ( int i = 0; i < length; ++i )
std::cout << lexic_ix ( arr + i, length - i ) << std::endl;
}

Say, for an M-digit permutation, from your code you can get the lexicographic serial-number (SN) formula, which is something like A[M-1]*(M-1)! + A[M-2]*(M-2)! + ... + A[0]*0!, where A[j] ranges from 0 to j. You can calculate SN starting from A[0]*0!, then A[1]*1!, ..., up to A[M-1]*(M-1)!, and add these together (assuming your integer type does not overflow), so you do not need to calculate the factorials recursively and repeatedly. The SN ranges from 0 to M!-1 (because Sum(j*j!, j = 0..M-1) = M! - 1).
If you are not calculating the factorials recursively, I cannot think of anything that could make a big improvement.
Sorry for posting the code a little bit late; I just did some research and found this:
http://swortham.blogspot.com.au/2011/10/how-much-faster-is-multiplication-than.html
According to this author, integer multiplication can be 40 times faster than integer division. The difference is not as dramatic for floating-point numbers, but this code is pure integer arithmetic.
int lexic_ix ( int arr[], int N )
{
// if this function will be called repeatedly, consider passing this pointer in as a parameter
std::unique_ptr<int[]> coeff_arr = std::make_unique<int[]>(N);
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
coeff_arr[i] = order; // save this into coeff_arr for later multiplication
}
//
// There are 2 points about the following code:
// 1). most modern processors have built-in multiplier, \
// and multiplication is much faster than division
// 2). In your test you are only computing the maximum permutation serial number;
// if you put in a random sequence instead, say {3, 7, 2, 9, 0, 1, 5, 8, 4, 6}
// when the length is 10, then if you look into
// the coeff_arr[] in debugger, you can see that coeff_arr[] is:
// {3, 6, 2, 6, 0, 0, 1, 2, 0, 0}, the last number will always be zero anyway.
// so, you will have good chance to reduce many multiplications.
// I did not do any performance profiling, you could have a go, and it will be
// much appreciated if you could give some feedback about the result.
//
long fac = 1;
long sn = 0;
for (int i = 1; i < N; ++i) // start from 1, because coeff_arr[N-1] is always 0
{
fac *= i;
if (coeff_arr[N - 1 - i])
sn += coeff_arr[N - 1 - i] * fac;
}
return sn;
}
int main()
{
int arr [ ] = { 3, 7, 2, 9, 0, 1, 5, 8, 4, 6 }; // try this and check coeff_arr
const int length = 10;
std::cout << lexic_ix(arr, length ) << std::endl;
return 0;
}

This is the whole profiling code. I only ran the test on Linux; the code was compiled using g++ 8.4 with the '-std=c++11 -O3' compiler options. To be fair, I slightly rewrote your code to pre-calculate N! and pass it into the function, but it seems this does not help much.
The performance profiling for N = 9 (362,880 permutations) is:
Time durations are: 34, 30, 25 milliseconds
Time durations are: 34, 30, 25 milliseconds
Time durations are: 33, 30, 25 milliseconds
The performance profiling for N=10 (3,628,800 permutations) is:
Time durations are: 345, 335, 275 milliseconds
Time durations are: 348, 334, 275 milliseconds
Time durations are: 345, 335, 275 milliseconds
The first number is your original function, the second is the rewritten function that gets N! passed in, and the last number is my result. The permutation-generation function is very primitive and runs slowly, but as long as it generates all permutations as the test dataset, that is alright. By the way, these tests were run on a quad-core 3.1 GHz desktop with 4 GB of RAM running Ubuntu 14.04.
EDIT: I forgot one factor: the first timed function may need to expand the lexi_numbers vector, so I put an untimed call before the measurements. After this, the times are 333, 334, 275.
EDIT: Another factor that could influence the performance: I am using long integers in my code. If I change those 2 'long's to 'int', the running times become 334, 333, 264.
#include <iostream>
#include <vector>
#include <chrono>
using namespace std::chrono;
int factorial(int input)
{
return input ? input * factorial(input - 1) : 1;
}
int lexic_ix(int* arr, int N)
{
int output = 0;
int fact = factorial(N);
for (int i = 0; i < N - 1; i++)
{
int order = arr[i];
for (int j = 0; j < i; j++)
order -= arr[j] < arr[i];
output += order * (fact /= N - i);
}
return output;
}
int lexic_ix1(int* arr, int N, int N_fac)
{
int output = 0;
int fact = N_fac;
for (int i = 0; i < N - 1; i++)
{
int order = arr[i];
for (int j = 0; j < i; j++)
order -= arr[j] < arr[i];
output += order * (fact /= N - i);
}
return output;
}
int lexic_ix2( int arr[], int N , int coeff_arr[])
{
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
coeff_arr[i] = order;
}
long fac = 1;
long sn = 0;
for (int i = 1; i < N; ++i)
{
fac *= i;
if (coeff_arr[N - 1 - i])
sn += coeff_arr[N - 1 - i] * fac;
}
return sn;
}
std::vector<std::vector<int>> gen_permutation(const std::vector<int>& permu_base)
{
if (permu_base.size() == 1)
return std::vector<std::vector<int>>(1, std::vector<int>(1, permu_base[0]));
std::vector<std::vector<int>> results;
for (int i = 0; i < permu_base.size(); ++i)
{
int cur_int = permu_base[i];
std::vector<int> cur_subseq = permu_base;
cur_subseq.erase(cur_subseq.begin() + i);
std::vector<std::vector<int>> temp = gen_permutation(cur_subseq);
for (auto x : temp)
{
x.insert(x.begin(), cur_int);
results.push_back(x);
}
}
return results;
}
int main()
{
#define N 10
std::vector<int> arr;
int buff_arr[N];
const int length = N;
int N_fac = factorial(N);
for(int i=0; i<N; ++i)
arr.push_back(N-i-1); // for N=10, arr is {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
std::vector<std::vector<int>> all_permus = gen_permutation(arr);
std::vector<int> lexi_numbers;
// This call is not timed, only to expand the lexi_numbers vector
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix2(&x[0], length, buff_arr));
lexi_numbers.clear();
auto t0 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix(&x[0], length));
auto t1 = high_resolution_clock::now();
lexi_numbers.clear();
auto t2 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix1(&x[0], length, N_fac));
auto t3 = high_resolution_clock::now();
lexi_numbers.clear();
auto t4 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix2(&x[0], length, buff_arr));
auto t5 = high_resolution_clock::now();
std::cout << std::endl << "Time durations are: " << duration_cast<milliseconds> \
(t1 -t0).count() << ", " << duration_cast<milliseconds>(t3 - t2).count() << ", " \
<< duration_cast<milliseconds>(t5 - t4).count() <<" milliseconds" << std::endl;
return 0;
}

Related

Optimization. How to speed up the given C++ code?

This is my code for counting the pairs (i, j) with 1 <= i <= j <= n and j - i <= a, where 1 <= n <= 1000000 and 0 <= a <= 1000000.
#include <iostream>
using namespace std;
int main(){
int n, a, r = 0;
cin>>n>>a;
for(int i = 1; i <= n; i++){
int j = i;
for(j; j <= n; j++){
if(j-i<=a){
r++;
}
}
}
cout<<r;
}
Instead of loops, I changed it to a simple check of variables, which greatly accelerated the code. There is no need to calculate thousands of options.
My final, optimized code is:
#include <iostream>
using namespace std;
int main(){
unsigned long long n, a, r = 0;
cin>>n>>a;
if(a==0){
r = n;
}
else if(n<=a){
r = (n*(n+1))/2;
}
else if(n>a){
r += (n-a)*(a+1) + (a*(a+1))/2;
}
cout<<r;
}
After accounting for positive numbers, negative numbers, and zeros, your double-nested for-loop can be simplified into this:
if (n < 1)
{
r = 0;
}
else if (a == 0)
{
r = n;
}
else if (a < 0)
{
r = 0;
}
else if (n <= a)
{
r = (n * (n + 1)) / 2;
}
else
{
r = (n-a)*(a + 1) + (a * (a + 1)) / 2;
}
Recall that summing a sequence of digits from 1..N is N*(N+1)/2.
If n <= a (positive numbers), r is incremented n times in the inner loop on the first iteration of the outer loop. Then n-1 times, then n-2 times... all the way down to 1.
For cases where n > a, there are n-a summations of a+1, followed by a decreasing summation from a down to 1.
This strikes me as something to speed up by doing a bit of math, not by massaging the code.
Basically, we can think of the loops as defining a square matrix of the values of i and j. So let's assume n = 9, and a = 3. I'll draw in a + for each place we increment r, a blank for the values we don't generate, and a 0 for the places we generate values, but don't increment r.
i\j  1  2  3  4  5  6  7  8  9
 1   +  +  +  +  0  0  0  0  0
 2      +  +  +  +  0  0  0  0
 3         +  +  +  +  0  0  0
 4            +  +  +  +  0  0
 5               +  +  +  +  0
 6                  +  +  +  +
 7                     +  +  +
 8                        +  +
 9                           +
So, ignoring the last a rows (i.e., for the first n-a rows), in each row we have a band a + 1 elements wide where we do an increment. Then at the end, we have a triangle, where we're basically summing a + a-1 + a-2 ... 0.
So, the first piece is (a+1) * (n-a) and the second piece is a * (a+1) / 2. Add those together, and we get the final answer.
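For the n = 9, a = 3 example above, that gives (a+1)*(n-a) = 4*6 = 24 for the band and a*(a+1)/2 = 6 for the triangle, so r = 30, which matches counting the +'s in the diagram directly (4 in each of the first six rows, then 3 + 2 + 1).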
Seems like
for(j; j <= n; j++){
if(j-i<=a){
r++;
}
}
could be replaced by
r += f(i,n,a);
where f() is some simple expression involving those 3 values, probably including the equivalent of min(.., ..).
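For what it's worth, one closed form that fits (my own guess at f(), not given in this answer): the inner loop counts the j in [i, n] with j - i <= a, which for a >= 0 and i <= n is min(n - i, a) + 1. Summing that over i = 1..n reproduces the (n-a)*(a+1) + a*(a+1)/2 and n*(n+1)/2 cases derived in the other answers.
#include <algorithm> // std::min

// Hypothetical f(): how many j in [i, n] satisfy j - i <= a (assumes a >= 0 and i <= n).
long long f(long long i, long long n, long long a){
    return std::min(n - i, a) + 1;
}

// Usage inside the original outer loop:
// for (long long i = 1; i <= n; i++)
//     r += f(i, n, a);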
If you want to speed up your code beyond tuning the algorithm, you can also try a parallel API.
A parallel-computing API such as OpenMP lets you take advantage of your CPU resources.
If you use OpenMP, you can try to parallelize your loop with it.
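A minimal sketch of what that could look like for the original double loop (my illustration, not code from this answer; it keeps the O(n^2) work and only spreads it across threads, and needs to be compiled with -fopenmp):
#include <iostream>

int main(){
    long long n, a, r = 0;
    std::cin >> n >> a;
    // Each thread accumulates into a private copy of r; OpenMP adds them up at the end.
    #pragma omp parallel for reduction(+:r)
    for(long long i = 1; i <= n; i++){
        for(long long j = i; j <= n; j++){
            if(j - i <= a)
                r++;
        }
    }
    std::cout << r;
}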

C++ - Code Optimization

I have a problem:
You are given a sequence, in the form of a string with characters ‘0’, ‘1’, and ‘?’ only. Suppose there are k ‘?’s. Then there are 2^k ways to replace each ‘?’ by a ‘0’ or a ‘1’, giving 2^k different 0-1 sequences (0-1 sequences are sequences with only zeroes and ones).
For each 0-1 sequence, define its number of inversions as the minimum number of adjacent swaps required to sort the sequence in non-decreasing order. In this problem, the sequence is sorted in non-decreasing order precisely when all the zeroes occur before all the ones. For example, the sequence 11010 has 5 inversions. We can sort it by the following moves: 11010 → 11001 → 10101 → 01101 → 01011 → 00111.
Find the sum of the number of inversions of the 2^k sequences, modulo 1000000007 (10^9+7).
For example:
Input: ??01
-> Output: 5
Input: ?0?
-> Output: 3
Here's my code:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <string.h>
#include <math.h>
using namespace std;
void ProcessSequences(char *input)
{
int c = 0;
/* Count the number of '?' in input sequence
* 1??0 -> 2
*/
for(int i=0;i<strlen(input);i++)
{
if(*(input+i) == '?')
{
c++;
}
}
/* Get all possible combination of '?'
* 1??0
* -> ??
* -> 00, 01, 10, 11
*/
int seqLength = pow(2,c);
// Initialize 2D array of integer
int **sequencelist, **allSequences;
sequencelist = new int*[seqLength];
allSequences = new int*[seqLength];
for(int i=0; i<seqLength; i++){
sequencelist[i] = new int[c];
allSequences[i] = new int[500000];
}
//end initialize
for(int count = 0; count < seqLength; count++)
{
int n = 0;
for(int offset = c-1; offset >= 0; offset--)
{
sequencelist[count][n] = ((count & (1 << offset)) >> offset);
// cout << sequencelist[count][n];
n++;
}
// cout << std::endl;
}
/* Change '?' in former sequence into all possible bits
* 1??0
* ?? -> 00, 01, 10, 11
* -> 1000, 1010, 1100, 1110
*/
for(int d = 0; d<seqLength; d++)
{
int seqCount = 0;
for(int e = 0; e<strlen(input); e++)
{
if(*(input+e) == '1')
{
allSequences[d][e] = 1;
}
else if(*(input+e) == '0')
{
allSequences[d][e] = 0;
}
else
{
allSequences[d][e] = sequencelist[d][seqCount];
seqCount++;
}
}
}
/*
* Sort each sequences to increasing mode
*
*/
// cout<<endl;
int totalNum[seqLength];
for(int i=0; i<seqLength; i++){
int num = 0;
for(int j=0; j<strlen(input); j++){
if(j==strlen(input)-1){
break;
}
if(allSequences[i][j] > allSequences[i][j+1]){
int temp = allSequences[i][j];
allSequences[i][j] = allSequences[i][j+1];
allSequences[i][j+1] = temp;
num++;
j = -1;
}//endif
}//endfor
totalNum[i] = num;
}//endfor
/*
* Sum of all Num of Inversions
*/
int sum = 0;
for(int i=0;i<seqLength;i++){
sum = sum + totalNum[i];
}
// cout<<"Output: "<<endl;
int out = sum%1000000007;
cout<< out <<endl;
} //end of ProcessSequences method
int main()
{
// Get Input
char seq[500000];
// cout << "Input: "<<endl;
cin >> seq;
char *p = &seq[0];
ProcessSequences(p);
return 0;
}
The results were right for small inputs, but for bigger inputs I exceeded the CPU time limit of 1 second, and I also exceeded the memory limit. How can I make it faster and use memory more optimally? What algorithm and what better data structure should I use? Thank you.
Dynamic programming is the way to go. Imagine You are adding the last character to all sequences.
If it is 1 then You get XXXXXX1. Number of swaps is obviously the same as it was for every sequence so far.
If it is 0 then You need to know the number of ones already in every sequence. The number of swaps would increase by the number of ones in every sequence.
If it is ? You just add the two previous cases together.
You need to calculate how many sequences there are, for every length and for every number of ones (the number of ones in a sequence can not be greater than the length of the sequence, naturally). You start with length 1, which is trivial, and continue with longer lengths. You can get really big numbers, so You should calculate modulo 1000000007 all the time. The program is not in C++, but should be easy to rewrite (the array should be initialized to 0, int is 32-bit, long is 64-bit).
long Mod(long x)
{
return x % 1000000007;
}
long Calc(string s)
{
int len = s.Length;
long[,] nums = new long[len + 1, len + 1];
long sum = 0;
nums[0, 0] = 1;
for (int i = 0; i < len; ++i)
{
if(s[i] == '?')
{
sum = Mod(sum * 2);
}
for (int j = 0; j <= i; ++j)
{
if (s[i] == '0' || s[i] == '?')
{
nums[i + 1, j] = Mod(nums[i + 1, j] + nums[i, j]);
sum = Mod(sum + j * nums[i, j]);
}
if (s[i] == '1' || s[i] == '?')
{
nums[i + 1, j + 1] = nums[i, j];
}
}
}
return sum;
}
Optimization
The code above is written to be as clear as possible and to show the dynamic programming approach. You do not actually need the [len+1, len+1] array. You calculate column i+1 from column i and never go back, so two columns are enough - old and new. If You dig more into it, You find out that row j of the new column depends only on rows j and j-1 of the old column. So You can go with one column if You update the values in the right direction (and do not overwrite values You would still need).
The code above uses 64-bit integers. You really need that only in j * nums[i, j]. The nums array contains numbers less than 1000000007, and a 32-bit integer is enough for those. Even 2*1000000007 can fit into a 32-bit signed int, and we can make use of that.
We can optimize the code by nesting the loop inside the conditions instead of the conditions inside the loop. Maybe it is even the more natural approach; the only downside is repeating the code.
The % operator, like any division, is quite expensive. j * nums[i, j] is typically far smaller than the capacity of a 64-bit integer, so we do not have to do the modulo in every step. Just watch the actual value and apply it when needed. The Mod(nums[i + 1, j] + nums[i, j]) can also be optimized, as nums[i + 1, j] + nums[i, j] is always smaller than 2*1000000007.
And finally the optimized code. I switched to C++; I realized there are differences in what int and long mean, so let me make it explicit:
long CalcOpt(string s)
{
long len = s.length();
vector<long> nums(len + 1);
long long sum = 0;
nums[0] = 1;
const long mod = 1000000007;
for (long i = 0; i < len; ++i)
{
if (s[i] == '1')
{
for (long j = i + 1; j > 0; --j)
{
nums[j] = nums[j - 1];
}
nums[0] = 0;
}
else if (s[i] == '0')
{
for (long j = 1; j <= i; ++j)
{
sum += (long long)j * nums[j];
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
}
}
else
{
sum *= 2;
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
for (long j = i + 1; j > 0; --j)
{
sum += (long long)j * nums[j];
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
long add = nums[j] + nums[j - 1];
if (add >= mod) { add -= mod; }
nums[j] = add;
}
}
}
return (long)(sum % mod);
}
Simplification
Time limit still exceeded? There is probably a better way to do it. You can either
go back to the beginning and find a mathematically different way to calculate the result,
or simplify the actual solution using math.
I went the second way. What we are doing in the loop is in fact convolution of two sequences, for example:
0, 0, 0, 1, 4, 6, 4, 1, 0, 0,... and 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,...
0*0 + 0*1 + 0*2 + 1*3 + 4*4 + 6*5 + 4*6 + 1*7 + 0*8...= 80
The first sequence is symmetric and the second is linear. In this case, the sum of the convolution can be calculated from the sum of the first sequence, which is 16 (numSum), and the number from the second sequence corresponding to the center of the first sequence, which is 5 (numMult). numSum*numMult = 16*5 = 80. We can replace the whole loop with one multiplication if we are able to update those two numbers in each step, which fortunately seems to be the case.
If s[i] == '0' then numSum does not change and numMult does not change.
If s[i] == '1' then numSum does not change, only numMult increments by 1, as we shift the whole sequence by one position.
If s[i] == '?' we add the original and the shifted sequence together. numSum is multiplied by 2 and numMult increments by 0.5.
The 0.5 is a bit of a problem, as it is not a whole number. But we know that the result will be a whole number. Fortunately, in modular arithmetic the inverse of two (= 1/2) exists as a whole number in this case. It is h = (mod+1)/2. As a reminder, the inverse of 2 is a number h such that h*2 = 1 modulo mod. Implementation-wise it is easier to multiply numMult by 2 and divide numSum by 2, but that is just a detail; we would need the 0.5 anyway. The code:
long CalcOptSimpl(string s)
{
long len = s.length();
long long sum = 0;
const long mod = 1000000007;
long numSum = (mod + 1) / 2;
long long numMult = 0;
for (long i = 0; i < len; ++i)
{
if (s[i] == '1')
{
numMult += 2;
}
else if (s[i] == '0')
{
sum += numSum * numMult;
if (sum > std::numeric_limits<long long>::max() / 4) { sum %= mod; }
}
else
{
sum = sum * 2 + numSum * numMult;
if (sum > std::numeric_limits<long long>::max() / 4) { sum %= mod; }
numSum = (numSum * 2) % mod;
numMult++;
}
}
return (long)(sum % mod);
}
I am pretty sure there exists some simple way to arrive at this code, yet I am still unable to see it. But sometimes the path is the goal :-)
If a sequence has N zeros with indexes zero[0], zero[1], ... zero[N - 1], the number of inversions for it would be (zero[0] + zero[1] + ... + zero[N - 1]) - (N - 1) * N / 2. (you should be able to prove it)
For example, 11010 has two zeros with indexes 2 and 4, so the number of inversions would be 2 + 4 - 1 * 2 / 2 = 5.
For all 2^k sequences, you can calculate the sum of two parts separately and then add them up.
1) The first part is zero[0] + zero[1] + ... + zero[N - 1]. Each 0 in the given sequence contributes index * 2^k and each ? contributes index * 2^(k-1).
2) The second part is (N - 1) * N / 2. You can calculate this using dynamic programming (maybe you should google and learn that first). In short, use f[i][j] to represent the number of sequences with j zeros using the first i characters of the given sequence.
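A compact sketch of this two-part approach (my own code, not from this answer; it rolls the f[i][j] table into a one-dimensional array, and the update is kept simple for clarity rather than tuned for the largest inputs):
#include <iostream>
#include <string>
#include <vector>
using namespace std;

const long long MOD = 1000000007;

long long sumOfInversions(const string& s)
{
    long long n = s.size(), k = 0;
    for (char c : s) if (c == '?') ++k;

    vector<long long> pow2(k + 1, 1);
    for (long long i = 1; i <= k; ++i) pow2[i] = pow2[i - 1] * 2 % MOD;

    // Part 1: sum of the zero indexes over all 2^k completions.
    long long part1 = 0;
    for (long long i = 0; i < n; ++i) {
        if (s[i] == '0')      part1 = (part1 + i * pow2[k]) % MOD;
        else if (s[i] == '?') part1 = (part1 + i * pow2[k - 1]) % MOD;
    }

    // Part 2: sum of N*(N-1)/2 over all completions, where N is that completion's zero count.
    // f[j] = number of completions of the prefix seen so far that contain exactly j zeros.
    vector<long long> f(n + 1, 0);
    f[0] = 1;
    for (long long i = 0; i < n; ++i) {
        if (s[i] == '1') continue;                           // a '1' never adds a zero
        for (long long j = i + 1; j >= 1; --j) {
            if (s[i] == '0') f[j] = f[j - 1];                // forced zero: shift the counts up
            else             f[j] = (f[j] + f[j - 1]) % MOD; // '?': either a zero or a one
        }
        if (s[i] == '0') f[0] = 0;
    }
    long long part2 = 0;
    for (long long j = 2; j <= n; ++j)
        part2 = (part2 + f[j] * (j * (j - 1) / 2 % MOD)) % MOD;

    return ((part1 - part2) % MOD + MOD) % MOD;
}

int main()
{
    cout << sumOfInversions("??01") << "\n"; // 5
    cout << sumOfInversions("?0?") << "\n";  // 3
}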

Computing the number of times fib(n) is called FOR EACH n

I want to compute the number of times fib(n) is called FOR EACH n. I have written the code as below:
#include <stdio.h>
#define N 10
int count[N + 1]; // count[n] keeps track of the number of times each fib(n) is called
int fib(int n) {
count[n]++;
if(n <= 1)
return n;
else
return fib(n - 1) + fib(n - 2);
}
int main() {
for(int i = 0; i <= N; i++) {
count[i] = 0; // initialize count to 0
}
fib(N);
// print values of count[]
for(int i = 0; i <= N; i++) {
printf("count[%d] = %d", i, count[i]);
}
}
I have tried printing the array count[] to get the result, and the result resembles the Fibonacci numbers except for count[0]:
count[0] = 34 count[1] = 55 count[2] = 34 count[3] = 21 count[4] = 13
count[5] = 8 count[6] = 5 count[7] = 3 count[8] = 2 count[9] = 1
count[10] = 1
Is there a way to mathematically show this result, maybe a recursive formula? Also, why doesn't count[0], or rather fib(0), continue the Fibonacci sequence? Thanks.
Because fib(1) is called once for every call to fib(2) and every call to fib(3), so count[1] = count[2] + count[3], but fib(0) is only called from fib(2), so count[0] = count[2]. fib(1) doesn't contribute calls to fib(0) because it's a terminus (base case).
As for a mathematical formula (using the fib() defined in the question):
count[0] = fib(N - 1)
count[n] = fib(N - n + 1) for n >= 1
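A small check of that formula (my own sketch, not from this answer): fill the counts backwards from the observation that every call to fib(n) comes from a call to fib(n+1) or fib(n+2).
#include <stdio.h>
#define N 10

int main(void) {
    int pred[N + 3] = {0}; // pred[n] = predicted number of times fib(n) is called
    pred[N] = 1;           // the single top-level call fib(N)
    for (int n = N - 1; n >= 1; n--)
        pred[n] = pred[n + 1] + pred[n + 2]; // callers of fib(n) are fib(n+1) and fib(n+2)
    pred[0] = pred[2];     // fib(0) is only reached from fib(2); fib(1) is a base case
    for (int n = 0; n <= N; n++)
        printf("pred[%d] = %d\n", n, pred[n]); // matches the count[] values printed above
    return 0;
}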
As for calculation
call(n)=call(n-1)+call(n-2)+1
call(1)=1
call(0)=1
Hope this makes things clear.
n | calls
---+--------
0 | 1
1 | 1
2 | 3
3 | 5 f(3)= f(2)[= f(1)+ f(0)]+ f(1)
4 | 9
.
n | 2*fib(n+1) - 1

Calculate Nth multiset combination (with repetition) based only on index

How can I calculate the Nth combo based only on its index?
There should be (n+k-1)!/(k!(n-1)!) combinations with repetitions.
with n=2, k=5 you get:
0|{0,0,0,0,0}
1|{0,0,0,0,1}
2|{0,0,0,1,1}
3|{0,0,1,1,1}
4|{0,1,1,1,1}
5|{1,1,1,1,1}
So black_magic_function(3) should produce {0,0,1,1,1}.
This will be going into a GPU shader, so I want each work-group/thread to be able to figure out its own subset of permutations without having to store the sequence globally.
with n=3, k=5 you get:
i=0, {0,0,0,0,0}
i=1, {0,0,0,0,1}
i=2, {0,0,0,0,2}
i=3, {0,0,0,1,1}
i=4, {0,0,0,1,2}
i=5, {0,0,0,2,2}
i=6, {0,0,1,1,1}
i=7, {0,0,1,1,2}
i=8, {0,0,1,2,2}
i=9, {0,0,2,2,2}
i=10, {0,1,1,1,1}
i=11, {0,1,1,1,2}
i=12, {0,1,1,2,2}
i=13, {0,1,2,2,2}
i=14, {0,2,2,2,2}
i=15, {1,1,1,1,1}
i=16, {1,1,1,1,2}
i=17, {1,1,1,2,2}
i=18, {1,1,2,2,2}
i=19, {1,2,2,2,2}
i=20, {2,2,2,2,2}
The algorithm for generating it can be seen as MBnext_multicombination at http://www.martinbroadhurst.com/combinatorial-algorithms.html
Update:
So I thought I'd replace the binomial coefficient in Pascal's triangle with (n+k-1)!/(k!(n-1)!) to see how it looks.
(* Mathematica code to display pascal and other triangle *)
t1 = Table[Binomial[n, k], {n, 0, 8}, {k, 0, n}];
t2 = Table[(n + k - 1)!/(k! (n - 1)!), {n, 0, 8}, {k, 0, n}];
(*display*)
{Row[#, "\t"]} & /@ t1 // Grid
{Row[#, "\t"]} & /@ t2 // Grid
T1:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
T2:
Indeterminate
1 1
1 2 3
1 3 6 10
1 4 10 20 35
1 5 15 35 70 126
1 6 21 56 126 252 462
1 7 28 84 210 462 924 1716
1 8 36 120 330 792 1716 3432 6435
Comparing with the n=3,k=5 console output at the start of this post: the third diagonal {3,6,10,15,21,28,36} gives the index of each roll-over point {0,0,0,1,1} -> {0,0,1,1,1} -> {0,1,1,1,1}, etc. And the diagonal to the left of it seems to show how many values are contained in the previous block (diagonal[2][i] == diagonal[3][i] - diagonal[3][i-1])). And if you read the 5th row of the pyramid horizontally you get the max amount of combinations for increasing values of N in (n+k-1)!/(k!(n-1)!) where K=5.
There is probably a way to use this information to determine the exact combo for an arbitrary index, without enumerating the whole set, but i'm not sure if i need to go that far. The original problem was just to decompose the full combo space into equal subsets, that can be generated locally, and worked on in parallel by the GPU. So the triangle above gives us the starting index of every block, of which the combo can be trivially derived, and all its successive elements incrementally enumerated. It also gives us the block size, and how many total combinations we have. So now it becomes a packing problem of how to fit unevenly sized blocks into groups of equal work load across X amount of threads.
See the example at:
https://en.wikipedia.org/wiki/Combinatorial_number_system#Finding_the_k-combination_for_a_given_number
Just replace the binomial coefficient with (n+k-1)!/(k!(n-1)!).
Assuming n=3,k=5, let's say we want to calculate the 19th combination (id=19).
id=0, {0,0,0,0,0}
id=1, {0,0,0,0,1}
id=2, {0,0,0,0,2}
...
id=16, {1,1,1,1,2}
id=17, {1,1,1,2,2}
id=18, {1,1,2,2,2}
id=19, {1,2,2,2,2}
id=20, {2,2,2,2,2}
The result we're looking for is {1,2,2,2,2}.
Examining our 'T2' triangle: n=3,k=5 points to 21, being the 5th number (top to bottom) of the third diagonal (left to right).
Indeterminate
1 1
1 2 3
1 3 6 10
1 4 10 20 35
1 5 15 35 70 126
1 6 21 56 126 252 462
1 7 28 84 210 462 924 1716
1 8 36 120 330 792 1716 3432 6435
We need to find the largest number in this row (horizontally, not diagonally) that does not exceed our id=19 value. So moving left from 21 we arrive at 6 (this operation is performed by the largest function below). Since 6 is the 2nd number in this row it corresponds to n==2 (or g[2,5] == 6 from the code below).
Now that we've found the 5th number in the combination, we move up a floor in the pyramid, so k-1=4. We also subtract the 6 we encountered below from id, so id=19-6=13. Repeating the entire process we find 5 (n==2 again) to be the largest number less than 13 in this row.
Next: 13-5=8, Largest is 4 in this row (n==2 yet again).
Next: 8-4=4, Largest is 3 in this row (n==2 one more time).
Next: 4-3=1, Largest is 1 in this row (n==1)
So collecting the indices at each stage we get {1,2,2,2,2}
The following Mathematica code does the job:
g[n_, k_] := (n + k - 1)!/(k! (n - 1)!)
largest[i_, nn_, kk_] := With[
{x = g[nn, kk]},
If[x > i, largest[i, nn-1, kk], {nn,x}]
]
id2combo[id_, n_, 0] := {}
id2combo[id_, n_, k_] := Module[
{val, offset},
{val, offset} = largest[id, n, k];
Append[id2combo[id-offset, n, k-1], val]
]
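In case it helps, here is a rough C++ translation of the Mathematica id2combo above (my own port; the multiplicative g() and the g(0, k) = 0 convention it relies on are my assumptions, checked only against the id = 19 example):
#include <iostream>
#include <vector>

// g(n, k) = (n+k-1)! / (k! (n-1)!), computed multiplicatively to avoid big factorials.
// For n == 0 it returns 0, which conveniently terminates the search below.
unsigned long long g(unsigned long long n, unsigned long long k)
{
    unsigned long long r = 1;
    for (unsigned long long i = 0; i < k; ++i)
        r = r * (n + i) / (i + 1); // always divides exactly in this order
    return r;
}

// Decode the id-th multicombination of k values drawn from {0, ..., n-1}.
std::vector<unsigned long long> id2combo(unsigned long long id, unsigned long long n, unsigned long long k)
{
    std::vector<unsigned long long> combo(k);
    for (; k > 0; --k)
    {
        unsigned long long val = n;
        while (g(val, k) > id) // "largest": the biggest val whose block start does not exceed id
            --val;
        id -= g(val, k);
        combo[k - 1] = val;    // this value goes into the k-th position
    }
    return combo;
}

int main()
{
    for (unsigned long long v : id2combo(19, 3, 5))
        std::cout << v << ' '; // expected: 1 2 2 2 2, matching the worked example above
    std::cout << '\n';
}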
Update:
The order in which the combinations were being generated by MBnext_multicombination wasn't matching id2combo, so I don't think they were lexicographic. The function below generates them in the same order as id2combo and matches the order of Mathematica's Sort[] function on a list of lists.
void next_combo(unsigned int *ar, unsigned int n, unsigned int k)
{
unsigned int i, lowest_i;
for (i=lowest_i=0; i < k; ++i)
lowest_i = (ar[i] < ar[lowest_i]) ? i : lowest_i;
++ar[lowest_i];
i = (ar[lowest_i] >= n)
? 0 // 0 -> all combinations have been exhausted, reset to first combination.
: lowest_i+1; // _ -> base incremented. digits to the right of it are now zero.
for (; i<k; ++i)
ar[i] = 0;
}
Here is a combinatorial number system implementation which handles combinations with and without repetition (i.e multiset), and can optionally produce lexicographic ordering.
/// Combinatorial number system encoder/decoder
/// https://en.wikipedia.org/wiki/Combinatorial_number_system
struct CNS(
/// Type for packed representation
P,
/// Type for one position in unpacked representation
U,
/// Number of positions in unpacked representation
size_t N,
/// Cardinality (maximum value plus one) of one position in unpacked representation
U unpackedCard,
/// Produce lexicographic ordering?
bool lexicographic,
/// Are repetitions representable? (multiset support)
bool multiset,
)
{
static:
/// Cardinality (maximum value plus one) of the packed representation
static if (multiset)
enum P packedCard = multisetCoefficient(unpackedCard, N);
else
enum P packedCard = binomialCoefficient(unpackedCard, N);
alias Index = P;
private P summand(U value, Index i)
{
static if (lexicographic)
{
value = cast(U)(unpackedCard-1 - value);
i = cast(Index)(N-1 - i);
}
static if (multiset)
value += i;
return binomialCoefficient(value, i + 1);
}
P pack(U[N] values)
{
P packed = 0;
foreach (Index i, value; values)
{
static if (!multiset)
assert(i == 0 || value > values[i-1]);
else
assert(i == 0 || value >= values[i-1]);
packed += summand(value, i);
}
static if (lexicographic)
packed = packedCard-1 - packed;
return packed;
}
U[N] unpack(P packed)
{
static if (lexicographic)
packed = packedCard-1 - packed;
void unpackOne(Index i, ref U r)
{
bool checkValue(U value, U nextValue)
{
if (summand(nextValue, i) > packed)
{
r = value;
packed -= summand(value, i);
return true;
}
return false;
}
// TODO optimize: (rolling product / binary search / precomputed tables)
// TODO optimize: don't check below N-i
static if (lexicographic)
{
foreach_reverse (U value; 0 .. unpackedCard)
if (checkValue(value, cast(U)(value - 1)))
break;
}
else
{
foreach (U value; 0 .. unpackedCard)
if (checkValue(value, cast(U)(value + 1)))
break;
}
}
U[N] values;
static if (lexicographic)
foreach (Index i, ref r; values)
unpackOne(i, r);
else
foreach_reverse (Index i, ref r; values)
unpackOne(i, r);
return values;
}
}
Full code: https://gist.github.com/CyberShadow/67da819b78c5fd16d266a1a3b4154203
I have done some preliminary analysis of the problem. Before I talk about the inefficient solution I found, let me give you a link to a paper I wrote on how to translate the k-indexes (or combination) to the rank or lexicographic index of the combinations associated with the binomial coefficient:
http://tablizingthebinomialcoeff.wordpress.com/
I started out the same way in trying to solve this problem. I came up with the following code that uses one loop for each value of k in the formula (n+k-1)!/k!(n-1)! when k = 5. As written, this code will generate all combinations for the case of n choose 5:
private static void GetCombos(int nElements)
{
// This code shows how to generate all the k-indexes or combinations for any number of elements when k = 5.
int k1, k2, k3, k4, k5;
int n = nElements;
int i = 0;
for (k5 = 0; k5 < n; k5++)
{
for (k4 = k5; k4 < n; k4++)
{
for (k3 = k4; k3 < n; k3++)
{
for (k2 = k3; k2 < n; k2++)
{
for (k1 = k2; k1 < n; k1++)
{
Console.WriteLine("i = " + i.ToString() + ", " + k5.ToString() + " " + k4.ToString() +
" " + k3.ToString() + " " + k2.ToString() + " " + k1.ToString() + " ");
i++;
}
}
}
}
}
}
The output from this method is:
i = 0, 0 0 0 0 0
i = 1, 0 0 0 0 1
i = 2, 0 0 0 0 2
i = 3, 0 0 0 1 1
i = 4, 0 0 0 1 2
i = 5, 0 0 0 2 2
i = 6, 0 0 1 1 1
i = 7, 0 0 1 1 2
i = 8, 0 0 1 2 2
i = 9, 0 0 2 2 2
i = 10, 0 1 1 1 1
i = 11, 0 1 1 1 2
i = 12, 0 1 1 2 2
i = 13, 0 1 2 2 2
i = 14, 0 2 2 2 2
i = 15, 1 1 1 1 1
i = 16, 1 1 1 1 2
i = 17, 1 1 1 2 2
i = 18, 1 1 2 2 2
i = 19, 1 2 2 2 2
i = 20, 2 2 2 2 2
This is the same values as you gave in your edited answer. I also have tried it with 4 choose 5 as well, and it looks like it generates the correct combinations as well.
I wrote this in C#, but you should be able to use it with other languages like C/C++, Java, or Python without too many edits.
One idea for a somewhat inefficient solution is to modify GetCombos to accept k as an input as well. Since k is limited to 6, it would then be possible to put in a test for k. So the code to generate all possible combinations for an n choose k case would then look like this:
private static void GetCombos(int k, int nElements)
{
// This code shows how to generate all the k-indexes or combinations for any n choose k, where k <= 6.
//
int k1, k2, k3, k4, k5, k6;
int n = nElements;
int i = 0;
if (k == 6)
{
for (k6 = 0; k6 < n; k6++)
{
for (k5 = k6; k5 < n; k5++)
{
for (k4 = k5; k4 < n; k4++)
{
for (k3 = k4; k3 < n; k3++)
{
for (k2 = k3; k2 < n; k2++)
{
for (k1 = k2; k1 < n; k1++)
{
Console.WriteLine("i = " + i.ToString() + ", " + k6.ToString() + " " + k5.ToString() + " " + k4.ToString() +
" " + k3.ToString() + " " + k2.ToString() + " " + k1.ToString() + " ");
i++;
}
}
}
}
}
}
}
else if (k == 5)
{
for (k5 = 0; k5 < n; k5++)
{
for (k4 = k5; k4 < n; k4++)
{
for (k3 = k4; k3 < n; k3++)
{
for (k2 = k3; k2 < n; k2++)
{
for (k1 = k2; k1 < n; k1++)
{
Console.WriteLine("i = " + i.ToString() + ", " + k5.ToString() + " " + k4.ToString() +
" " + k3.ToString() + " " + k2.ToString() + " " + k1.ToString() + " ");
i++;
}
}
}
}
}
}
else if (k == 4)
{
// One less loop than k = 5.
}
else if (k == 3)
{
// One less loop than k = 4.
}
else if (k == 2)
{
// One less loop than k = 3.
}
else
{
// k = 1 - error?
}
}
So, we now have a method that will generate all the combinations of interest. But the problem is to obtain a specific combination from the lexicographic order or rank of where that combination lies within the set. This can be accomplished by a simple count, returning the proper combination when the count hits the specified value. To accommodate this, an extra parameter that represents the rank needs to be added to the method. A new function to do this looks like this:
private static int[] GetComboOfRank(int k, int nElements, int Rank)
{
// Gets the combination for the rank using the formula (n+k-1)!/k!(n-1)! where k <= 6.
int k1, k2, k3, k4, k5, k6;
int n = nElements;
int i = 0;
int[] ReturnArray = new int[k];
if (k == 6)
{
for (k6 = 0; k6 < n; k6++)
{
for (k5 = k6; k5 < n; k5++)
{
for (k4 = k5; k4 < n; k4++)
{
for (k3 = k4; k3 < n; k3++)
{
for (k2 = k3; k2 < n; k2++)
{
for (k1 = k2; k1 < n; k1++)
{
if (i == Rank)
{
ReturnArray[0] = k1;
ReturnArray[1] = k2;
ReturnArray[2] = k3;
ReturnArray[3] = k4;
ReturnArray[4] = k5;
ReturnArray[5] = k6;
return ReturnArray;
}
i++;
}
}
}
}
}
}
}
else if (k == 5)
{
for (k5 = 0; k5 < n; k5++)
{
for (k4 = k5; k4 < n; k4++)
{
for (k3 = k4; k3 < n; k3++)
{
for (k2 = k3; k2 < n; k2++)
{
for (k1 = k2; k1 < n; k1++)
{
if (i == Rank)
{
ReturnArray[0] = k1;
ReturnArray[1] = k2;
ReturnArray[2] = k3;
ReturnArray[3] = k4;
ReturnArray[4] = k5;
return ReturnArray;
}
i++;
}
}
}
}
}
}
else if (k == 4)
{
// Same code as in the other cases, but with one less loop than k = 5.
}
else if (k == 3)
{
// Same code as in the other cases, but with one less loop than k = 4.
}
else if (k == 2)
{
// Same code as in the other cases, but with one less loop than k = 3.
}
else
{
// k = 1 - error?
}
// Should not ever get here. If we do - it is some sort of error.
throw new Exception("GetComboOfRank - did not find rank");
}
ReturnArray returns the combination associated with the rank. So, this code should work for you. However, it will be much slower than what could be achieved if a table lookup was done. The problem with 300 choose 6 is that:
300 choose 6 = 305! / (6!(299!)) = 305*304*303*302*301*300 / 6! = 1,064,089,721,800
That is probably way too much data to store in memory. So, if you could get n down to 20, through preprocessing then you would be looking at a total of:
20 choose 6 = 25! / (6!(19!)) = 25*24*23*22*21*20 / 6! = 177,100
20 choose 5 = 24! / (5!(19!)) = 24*23*22*21*20 / 5! = 42,504
20 choose 4 = 23! / (4!(19!)) = 23*22*21*20 / 4! = 8,855
20 choose 3 = 22! / (3!(19!)) = 22*21*20 / 3! = 1,540
20 choose 2 = 21! / (2!(19!)) = 21*20 / 2! = 210
=======
230,209
If one byte is used for each value of the combination, then the total number of bytes used to store a table (via a jagged array or perhaps 5 separate tables) in memory could be calculated as:
177,100 * 6 = 1,062,600
42,504 * 5 = 212,520
8,855 * 4 = 35,420
1,540 * 3 = 4,620
210 * 2 = 420
=========
1,315,580
It depends on the target machine and how much memory is available, but 1,315,580 bytes is not that much memory when many machines today have gigabytes of memory available.

Splitting Vector into blocks - Strange results

I've been given a MATLAB function that takes in a 1D vector and two sizes, the function then splits the data into blocks and finally stores them inside a 2D vector. I have been writing a C++ version of this function, but the (incorrect) results of my C++ function do not match the (correct) results of the MATLAB function.
Matlab function:
function f = block(v, N, M)
% This function separates the vector
% into blocks. Each block has size N.
% and consecutive blocks differ in
% their starting positions by M
%
% Typically
% N = 30 msec (600 samples)
% M = 10 msec (200 samples)
n = length(v);
maxblockstart = n - N + 1;
lastblockstart = maxblockstart - mod(maxblockstart-1 , M);
% Remove the semicolon to see the number of blocks
% numblocks = (lastblockstart-1)/M + 1
numblocks = (lastblockstart-1)/M + 1;
%f = zeros(numblocks,N);
for i = 1:numblocks
for j = 1:N
f(i,j) = ((i-1)*M+j);
end
end
For the purpose of this example, I'm just outputting the results of ((i-1)*M+j) and in MatLab I get these results (example):
1 201 401 601 .. 1001 1201 .. 1401 .. 1601 .. 1801
And here is my C++ function:
vector<iniMatrix> Audio::subBlocks(vector<float>& theData, int N, int M)
{
// This method splits the vector into blocks
// Each block has size N.
// and consecutive blocks differ
int n = theData.size();
int maxblockstart = n - N+1;
int lastblockstart = maxblockstart - mod(maxblockstart-1, M);
int numblocks = (lastblockstart-1)/M + 1;
vector<float> subBlock;
vector<iniMatrix> block;
for(int i=1; (i < numblocks); i++)
{
for(int j=1; (j < N); j++)
{
cout << ((i-1)*M+j);
}
}
return block;
}
The result I get from this:
1 2 3 4 .. 7 8 9 .. 13 14 15 etc..
P.S.
iniMatrix is just a typedef for a vector of floats.
Another note, the variables:
n
maxblockstart
lastblockstart
numblocks
All have the same values in the MATLAB program and in the C++ one, so I think it has something to do with the for loops.
Anyone have any suggestions?
Let me see if I've understood your desired algorithm correctly.
Say
n = 10
N = 3
M = 2
This should yield
0, 2, 4, 6 right? (since a block starting at 8 won't fit: 8 + N - 1 = 10 is outside the range 0 <= x < n).
So, maxblockstart should be 7 = 10 - 3, that is n - N
lastblockstart should be 6 = 7 - 7 % 2, that is maxblockstart - maxblockstart % M
and numblocks should be 4 = 6/2 + 1, that is lastblockstart/M + 1
Modifying your code as follows, seems to yield the right results (only worked this out on paper, haven't tried compiling or executing.....):
vector<iniMatrix> Audio::subBlocks(vector<float>& theData, int N, int M)
{
int n = theData.size();
int maxblockstart = n - N;
int lastblockstart = maxblockstart - (maxblockstart % M);
int numblocks = (lastblockstart)/M + 1;
vector<iniMatrix> block;
for(int i=0; (i < numblocks); i++)
{
vector<float> subBlock;
for(int j=0; (j < N); j++)
{
subBlock.push_back(theData[i*M+j]); //cout << (i*M+j);
}
block.push_back(subBlock);
}
return block;
}
Give it a try...
Note that comparing results against MATLAB can be confusing, since C++ indexing is zero-based. Thus try the following:
1) Change the line cout << (i*M+j); to cout << theData[i*M+j];
2) Try the following test:
vector<float> test;
for(int i=0; i<=10000; i++)
test.push_back(i);
Audio::subBlocks(test, 1023, 200);
I'm not sure, but where do you write the values of the 1D vector into the output?
Maybe you need to change this:
for i = 1:numblocks
for j = 1:N
f(i,j) = v((i-1)*M+j);
end
end