Optimization. How to speed up the given C++ code? - c++

This is my code. The task is to count pairs (i, j) with 1 <= i <= j <= n and j - i <= a, where 1 <= n <= 1000000 and 0 <= a <= 1000000.
#include <iostream>
using namespace std;
int main(){
    int n, a, r = 0;
    cin >> n >> a;
    for(int i = 1; i <= n; i++){
        int j = i;
        for(j; j <= n; j++){
            if(j - i <= a){
                r++;
            }
        }
    }
    cout << r;
}
Instead of loops, I changed it to a simple check of the variables, which greatly accelerated the code. There is no need to enumerate all the (i, j) pairs.
My final, optimized code is:
#include <iostream>
using namespace std;
int main(){
    unsigned long long n, a, r = 0;
    cin >> n >> a;
    if(a == 0){
        r = n;                                      // only the pairs with i == j
    } else if(n <= a){
        r = (n * (n + 1)) / 2;                      // every pair i <= j qualifies
    } else {
        r = (n - a) * (a + 1) + (a * (a + 1)) / 2;  // full band plus the final triangle
    }
    cout << r;
}

After accounting for positive values, negative values, and zero, your doubly nested for loop can be simplified into this:
if (n < 1)
{
r = 0;
}
else if (a == 0)
{
r = n;
}
else if (a < 0)
{
r = 0;
}
else if (n <= a)
{
r = (n * (n + 1)) / 2;
}
else
{
r = (n-a)*(a + 1) + (a * (a + 1)) / 2;
}
Recall that the sum of the integers from 1..N is:
N*(N+1)
-------
2
If n <= a (positive numbers), r is incremented n times in the inner loop on the first iteration of the outer loop. Then n-1 times, then n-2 times... all the way down to 1.
For cases where n > a, there are n-a additions of a+1, followed by a decreasing summation from a down to 1.
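As a quick sanity check (a sketch added here, not part of the original answer), the closed form can be compared against the brute force for small values of n and a:
#include <iostream>
using namespace std;

// Brute-force count of pairs 1 <= i <= j <= n with j - i <= a,
// used only to verify the closed form on small inputs.
unsigned long long bruteForce(unsigned long long n, unsigned long long a) {
    unsigned long long r = 0;
    for (unsigned long long i = 1; i <= n; i++)
        for (unsigned long long j = i; j <= n; j++)
            if (j - i <= a) r++;
    return r;
}

unsigned long long closedForm(unsigned long long n, unsigned long long a) {
    if (n <= a) return n * (n + 1) / 2;          // every pair i <= j qualifies
    return (n - a) * (a + 1) + a * (a + 1) / 2;  // full band plus the final triangle
}

int main() {
    for (unsigned long long n = 1; n <= 50; n++)
        for (unsigned long long a = 0; a <= 50; a++)
            if (bruteForce(n, a) != closedForm(n, a))
                cout << "mismatch at n=" << n << ", a=" << a << '\n';
    cout << "check finished\n";
}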

This strikes me as something to speed up by doing a bit of math, not by massaging the code.
Basically, we can think of the loops as defining a square matrix of the values of i and j. So let's assume n = 9, and a = 3. I'll draw in a + for each place we increment r, a blank for the values we don't generate, and a 0 for the places we generate values, but don't increment r.
i\j 1 2 3 4 5 6 7 8 9
1   + + + + 0 0 0 0 0
2     + + + + 0 0 0 0
3       + + + + 0 0 0
4         + + + + 0 0
5           + + + + 0
6             + + + +
7               + + +
8                 + +
9                   +
So, ignoring the last a rows (i.e., for the first n-a rows), in each row we have a band a + 1 elements wide where we do an increment. Then at the end, we have a triangle, where we're basically summing a + a-1 + a-2 ... 0.
So, the first piece is (a+1) * (n-a) and the second piece is a * (a+1) / 2. Add those together, and we get the final answer. For n = 9 and a = 3 that is 4 * 6 + 3 * 4 / 2 = 24 + 6 = 30, matching the 30 plus signs in the picture.

Seems like
for(j; j <= n; j++){
if(j-i<=a){
r++;
}
}
could be replaced by
r += f(i,n,a);
Where f() is some simple expression involving those 3 values, probably including the equivalent of min(..,..)
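One concrete candidate for f() (a sketch; the use of min here is my reading of the "simple expression", not something spelled out in the answer): for a fixed i, the inner loop counts every j from i up to min(n, i + a).
#include <algorithm>

// Contribution of the inner loop for a fixed i: j runs from i to min(n, i + a).
long long f(long long i, long long n, long long a) {
    return std::min(n, i + a) - i + 1;
}

// Usage: for (long long i = 1; i <= n; i++) r += f(i, n, a);
// This still leaves an O(n) outer loop; the closed forms above collapse it entirely.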

If you want to speed up your code beyond tuning the algorithm, you can also try a parallel API.
A parallel computing API such as OpenMP lets you take advantage of your CPU resources.
If you use OpenMP, you can try to parallelize your loop.
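A minimal sketch of what that could look like (my illustration, not code from the answer; compile with -fopenmp): the outer loop is split across threads and the per-thread counts are combined with a reduction.
#include <iostream>

int main() {
    long long n, a, r = 0;
    std::cin >> n >> a;
    // Each thread handles a chunk of the rows; reduction(+:r) sums the partial counts.
    #pragma omp parallel for reduction(+:r)
    for (long long i = 1; i <= n; i++) {
        for (long long j = i; j <= n; j++) {
            if (j - i <= a) r++;
        }
    }
    std::cout << r << '\n';
    return 0;
}
Even parallelized, this is still brute force; the closed-form answers above remain far faster.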

Related

What is the big-O of this for loop...?

The first loop runs in O(log n) time, but the second loop's runtime depends on the counter of the first loop. If we examine it more closely, it should run like (1 + 2 + 4 + 8 + 16 + ... + N). I just couldn't find a reasonable answer for this series...
for (int i = 1; i < n; i = i * 2)
{
for (int j = 1; j < i; j++)
{
//const time
}
}
Just like you said. If N is a power of two, then 1 + 2 + 4 + 8 + 16 + ... + N is exactly 2*N - 1 (the sum of a geometric series). This is O(N).
It is like:
1 + 2 + 4 + 8 + 16 + ... + N
= 2^(log2(N) + 1) - 1
= 2N - 1
= O(N)
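As a quick empirical check (a sketch added here), you can count how many times the constant-time body executes and see that it grows linearly with n:
#include <iostream>

int main() {
    // Count executions of the inner (constant-time) body for doubling values of n.
    for (long long n = 2; n <= (1LL << 20); n *= 2) {
        long long ops = 0;
        for (long long i = 1; i < n; i *= 2)
            for (long long j = 1; j < i; j++)
                ops++;
        std::cout << "n = " << n << ", inner-body runs = " << ops << '\n';
    }
    return 0;
}
The printed counts stay within a small constant factor of n, confirming the O(N) bound.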

How to find the difference of all consecutive sub-sequences?

I need an efficient algorithm that can find the sum of the differences of all consecutive sub-sequences, but I don't know how to do it.
For example, all consecutive sub-sequences for 12345:
12 (Dif = 1)
23 (Dif = 1)
34 (Dif = 1)
45 (Dif = 1)
123 (Dif = 2)
234 (Dif = 2)
345 (Dif = 2)
1234 (Dif = 3)
2345 (Dif = 3)
12345 (Dif = 4)
Sum of the difference = 20
Count of sequence elements: at least 2 and at most 300000.
Each element: at least 1 and at most 10^7.
Time limit: 1 s.
I wrote the code, but it's too slow:
#include <bits/stdc++.h>
using namespace std;
int main() {
cin.tie(0);
iostream::sync_with_stdio(false);
int count;
cin >> count;
int elem;
vector<int> vec;
int sum = 0;
for (int i = 0; i < count; i++) {
cin >> elem;
if (vec.size() > 0) {
sum += abs(vec.back() - elem);
}
vec.push_back(elem);
if (vec.size() > 2) {
sum += abs(*max_element(vec.begin(), vec.end()) - *min_element(vec.begin(), vec.end()));
}
for (int z = 3; z < count; z++) {
if (vec.size() > z) {
sum += abs(*max_element(vec.begin() + i - z + 1, vec.end()) - *min_element(vec.begin() + i - z + 1, vec.end()));
}
}
}
cout << sum;
return 0;
}
I found that the count of sub-sequences can be found by the triangular number formula (where n is the length of the sequence):
count = 1/2 * n * (n - 1);
For n = 300000, the count of sub-sequences is about 45 billion.
How can I do it faster? I need a better algorithm.
My first thoughts were to build a tree in order to remember sub-answers (i.e. dynamic programming) and just combine the answers together. However, each higher branch isn't, strictly speaking, the sum of the nodes below it. [Example diagram omitted.]
I noticed, however, that the nodes were predictable. [Diagrams omitted.] When expanded to 6 consecutive nodes, the pattern, summarily expressed, is
SUM( i * (n - i) ) with i = [1 .. n), where n >= 2
This of course runs in O(N) time and doesn't require anything other than add + multiply.
However, it bothered me that perhaps this summation formula could be reduced to a simple closed form. So I looked up the standard summation identities and worked it through:
SUM( i * (n - i) ) for i = 1 .. n-1  =  n * n(n-1)/2 - (n-1)n(2n-1)/6  =  (n^3 - n) / 6
Which means that (n^3 - n) / 6 can be evaluated in O(1) time. I tested it for the first 6 values and it gave the right answers...
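A quick sanity check (a sketch added here; it assumes the input is the example-style sequence 1, 2, ..., n, where the difference of a sub-sequence is simply last minus first):
#include <iostream>

int main() {
    // Compare the brute-force sum of differences over all consecutive
    // sub-sequences of 1..n against the closed form (n^3 - n) / 6.
    for (long long n = 2; n <= 100; n++) {
        long long brute = 0;
        for (long long i = 1; i <= n; i++)
            for (long long j = i + 1; j <= n; j++)
                brute += j - i;               // difference of the sub-sequence i..j
        long long formula = (n * n * n - n) / 6;
        if (brute != formula)
            std::cout << "mismatch at n=" << n << '\n';
    }
    std::cout << "check finished\n";
    return 0;
}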

C++ - Code Optimization

I have a problem:
You are given a sequence, in the form of a string with characters ‘0’, ‘1’, and ‘?’ only. Suppose there are k ‘?’s. Then there are 2^k ways to replace each ‘?’ by a ‘0’ or a ‘1’, giving 2^k different 0-1 sequences (0-1 sequences are sequences with only zeroes and ones).
For each 0-1 sequence, define its number of inversions as the minimum number of adjacent swaps required to sort the sequence in non-decreasing order. In this problem, the sequence is sorted in non-decreasing order precisely when all the zeroes occur before all the ones. For example, the sequence 11010 has 5 inversions. We can sort it by the following moves: 11010 → 11001 → 10101 → 01101 → 01011 → 00111.
Find the sum of the number of inversions of the 2^k sequences, modulo 1000000007 (10^9+7).
For example:
Input: ??01
-> Output: 5
Input: ?0?
-> Output: 3
Here's my code:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <string.h>
#include <math.h>
using namespace std;
void ProcessSequences(char *input)
{
int c = 0;
/* Count the number of '?' in input sequence
* 1??0 -> 2
*/
for(int i=0;i<strlen(input);i++)
{
if(*(input+i) == '?')
{
c++;
}
}
/* Get all possible combination of '?'
* 1??0
* -> ??
* -> 00, 01, 10, 11
*/
int seqLength = pow(2,c);
// Initialize 2D array of integer
int **sequencelist, **allSequences;
sequencelist = new int*[seqLength];
allSequences = new int*[seqLength];
for(int i=0; i<seqLength; i++){
sequencelist[i] = new int[c];
allSequences[i] = new int[500000];
}
//end initialize
for(int count = 0; count < seqLength; count++)
{
int n = 0;
for(int offset = c-1; offset >= 0; offset--)
{
sequencelist[count][n] = ((count & (1 << offset)) >> offset);
// cout << sequencelist[count][n];
n++;
}
// cout << std::endl;
}
/* Change '?' in former sequence into all possible bits
* 1??0
* ?? -> 00, 01, 10, 11
* -> 1000, 1010, 1100, 1110
*/
for(int d = 0; d<seqLength; d++)
{
int seqCount = 0;
for(int e = 0; e<strlen(input); e++)
{
if(*(input+e) == '1')
{
allSequences[d][e] = 1;
}
else if(*(input+e) == '0')
{
allSequences[d][e] = 0;
}
else
{
allSequences[d][e] = sequencelist[d][seqCount];
seqCount++;
}
}
}
/*
* Sort each sequences to increasing mode
*
*/
// cout<<endl;
int totalNum[seqLength];
for(int i=0; i<seqLength; i++){
int num = 0;
for(int j=0; j<strlen(input); j++){
if(j==strlen(input)-1){
break;
}
if(allSequences[i][j] > allSequences[i][j+1]){
int temp = allSequences[i][j];
allSequences[i][j] = allSequences[i][j+1];
allSequences[i][j+1] = temp;
num++;
j = -1;
}//endif
}//endfor
totalNum[i] = num;
}//endfor
/*
* Sum of all Num of Inversions
*/
int sum = 0;
for(int i=0;i<seqLength;i++){
sum = sum + totalNum[i];
}
// cout<<"Output: "<<endl;
int out = sum%1000000007;
cout<< out <<endl;
} //end of ProcessSequences method
int main()
{
// Get Input
char seq[500000];
// cout << "Input: "<<endl;
cin >> seq;
char *p = &seq[0];
ProcessSequences(p);
return 0;
}
The results were right for small inputs, but for bigger inputs I exceeded the CPU time limit of 1 second, and I also exceeded the memory limit. How can I make it faster and use memory optimally? What algorithm and what data structures should I use? Thank you.
Dynamic programming is the way to go. Imagine you are adding the last character to all sequences.
If it is 1, you get XXXXXX1. The number of swaps is obviously the same as it was for every sequence so far.
If it is 0, you need to know the number of ones already in every sequence. The number of swaps increases by that amount for every sequence.
If it is ?, you just add the two previous cases together.
You need to calculate how many sequences there are, for every length and for every number of ones (the number of ones in a sequence naturally cannot exceed its length). You start with length 1, which is trivial, and continue with longer prefixes. The numbers can get really big, so you should reduce modulo 1000000007 all the time. The program below is not in C++ (it is C#), but it should be easy to rewrite (the array is initialized to 0, int is 32-bit, long is 64-bit).
long Mod(long x)
{
return x % 1000000007;
}
long Calc(string s)
{
int len = s.Length;
long[,] nums = new long[len + 1, len + 1];
long sum = 0;
nums[0, 0] = 1;
for (int i = 0; i < len; ++i)
{
if(s[i] == '?')
{
sum = Mod(sum * 2);
}
for (int j = 0; j <= i; ++j)
{
if (s[i] == '0' || s[i] == '?')
{
nums[i + 1, j] = Mod(nums[i + 1, j] + nums[i, j]);
sum = Mod(sum + j * nums[i, j]);
}
if (s[i] == '1' || s[i] == '?')
{
nums[i + 1, j + 1] = nums[i, j];
}
}
}
return sum;
}
Optimization
The code above is written to be as clear as possible and to show the dynamic programming approach. You do not actually need an array [len+1, len+1]. You calculate column i+1 from column i and never go back, so two columns are enough - old and new. If you dig into it more, you find out that row j of the new column depends only on rows j and j-1 of the old column. So you can go with one column if you update the values in the right direction (and do not overwrite values you would still need).
The code above uses 64-bit integers. You really need that only in j * nums[i, j]. The nums array contains numbers less than 1000000007, and a 32-bit integer is enough. Even 2*1000000007 fits into a 32-bit signed int, and we can make use of that.
We can optimize the code by nesting the loop inside the conditions instead of putting the conditions inside the loop. It may even be the more natural approach; the only downside is the repeated code.
The % operator, like any division, is quite expensive. j * nums[i, j] is typically far smaller than the capacity of a 64-bit integer, so we do not have to take the modulo at every step; just watch the actual value and apply it when needed. The Mod(nums[i + 1, j] + nums[i, j]) can also be optimized, as nums[i + 1, j] + nums[i, j] is always smaller than 2*1000000007.
And finally the optimized code. I switched to C++; since int and long can mean different things there, let me make the types explicit:
#include <string>
#include <vector>
#include <limits>
using namespace std;

long CalcOpt(string s)
{
long len = s.length();
vector<long> nums(len + 1);
long long sum = 0;
nums[0] = 1;
const long mod = 1000000007;
for (long i = 0; i < len; ++i)
{
if (s[i] == '1')
{
for (long j = i + 1; j > 0; --j)
{
nums[j] = nums[j - 1];
}
nums[0] = 0;
}
else if (s[i] == '0')
{
for (long j = 1; j <= i; ++j)
{
sum += (long long)j * nums[j];
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
}
}
else
{
sum *= 2;
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
for (long j = i + 1; j > 0; --j)
{
sum += (long long)j * nums[j];
if (sum > std::numeric_limits<long long>::max() / 2) { sum %= mod; }
long add = nums[j] + nums[j - 1];
if (add >= mod) { add -= mod; }
nums[j] = add;
}
}
}
return (long)(sum % mod);
}
Simplification
Time limit still exceeded? Then there is probably a better way to do it. You can either
get back to the beginning and find a mathematically different way to calculate the result,
or simplify the actual solution using math.
I went the second way. What we are doing in the loop is in fact a convolution of two sequences, for example:
0, 0, 0, 1, 4, 6, 4, 1, 0, 0,... and 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,...
0*0 + 0*1 + 0*2 + 1*3 + 4*4 + 6*5 + 4*6 + 1*7 + 0*8...= 80
The first sequence is symmetric and the second is linear. In this case, the sum of the convolution can be calculated from the sum of the first sequence, which is 16 (numSum), and the value in the second sequence corresponding to the center of the first sequence, which is 5 (numMult). numSum*numMult = 16*5 = 80. We can replace the whole loop with one multiplication if we are able to update those two numbers at each step, which fortunately seems to be the case.
If s[i] == '0' then numSum does not change and numMult does not change.
If s[i] == '1' then numSum does not change, only numMult increments by 1, as we shift the whole sequence by one position.
If s[i] == '?' we add the original and the shifted sequence together. numSum is multiplied by 2 and numMult increments by 0.5.
The 0.5 is a bit of a problem, as it is not a whole number. But we know that the result will be a whole number. Fortunately, in modular arithmetic the inverse of two (= 1/2) exists as a whole number in this case; it is h = (mod+1)/2. As a reminder, the inverse of 2 is a number h such that h*2 = 1 modulo mod. Implementation-wise it is easier to multiply numMult by 2 and divide numSum by 2, but that is just a detail; we would need the 0.5 anyway. The code:
long CalcOptSimpl(string s)
{
long len = s.length();
long long sum = 0;
const long mod = 1000000007;
long numSum = (mod + 1) / 2;
long long numMult = 0;
for (long i = 0; i < len; ++i)
{
if (s[i] == '1')
{
numMult += 2;
}
else if (s[i] == '0')
{
sum += numSum * numMult;
if (sum > std::numeric_limits<long long>::max() / 4) { sum %= mod; }
}
else
{
sum = sum * 2 + numSum * numMult;
if (sum > std::numeric_limits<long long>::max() / 4) { sum %= mod; }
numSum = (numSum * 2) % mod;
numMult++;
}
}
return (long)(sum % mod);
}
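For completeness, a minimal driver for trying the function out (my addition; it assumes the CalcOptSimpl definition above, together with its required headers such as <string> and <limits>, is in the same file):
#include <iostream>
#include <string>

int main() {
    std::string s;
    std::cin >> s;
    std::cout << CalcOptSimpl(s) << std::endl;  // "??01" -> 5, "?0?" -> 3
    return 0;
}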
I am pretty sure there exists some simpler way to arrive at this code, yet I am still unable to see it. But sometimes the path is the goal :-)
If a sequence has N zeros with indexes zero[0], zero[1], ... zero[N - 1], the number of inversions for it would be (zero[0] + zero[1] + ... + zero[N - 1]) - (N - 1) * N / 2. (you should be able to prove it)
For example, 11010 has two zeros with indexes 2 and 4, so the number of inversions would be 2 + 4 - 1 * 2 / 2 = 5.
For all 2^k sequences, you can calculate the sum of two parts separately and then add them up.
1) The first part is zero[0] + zero[1] + ... + zero[N - 1]. Each 0 in the given sequence contributes index * 2^k and each ? contributes index * 2^(k-1).
2) The second part is (N - 1) * N / 2. You can calculate this using dynamic programming (maybe you should google and learn about that first). In short, use f[i][j] to represent the number of sequences with j zeros among the first i characters of the given sequence.
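A rough sketch of this approach (my interpretation of the answer, using an O(n^2) DP for the second part, so it is only meant for modest input sizes; the variable names are mine):
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

const long long MOD = 1000000007;

// part1: sum of zero indexes over all 2^k completions.
// part2: sum of N*(N-1)/2 over all completions, where N is the number of zeros.
// Total inversions = part1 - part2 (mod 1e9+7).
long long sumInversions(const string& s) {
    int n = s.size();
    vector<long long> pow2(n + 1, 1);
    for (int t = 1; t <= n; t++) pow2[t] = pow2[t - 1] * 2 % MOD;
    int k = 0;
    for (char c : s) if (c == '?') k++;

    // Part 1: each '0' at index idx contributes idx * 2^k,
    // each '?' contributes idx * 2^(k-1) (it is a zero in half of the completions).
    long long part1 = 0;
    for (int idx = 0; idx < n; idx++) {
        if (s[idx] == '0') part1 = (part1 + (long long)idx * pow2[k]) % MOD;
        else if (s[idx] == '?') part1 = (part1 + (long long)idx * pow2[k - 1]) % MOD;
    }

    // Part 2: f[j] = number of completions of the processed prefix with exactly j zeros.
    vector<long long> f(n + 1, 0), g(n + 1, 0);
    f[0] = 1;
    for (int i = 0; i < n; i++) {
        fill(g.begin(), g.end(), 0);
        for (int j = 0; j <= i; j++) {
            if (f[j] == 0) continue;
            if (s[i] == '0' || s[i] == '?') g[j + 1] = (g[j + 1] + f[j]) % MOD;  // place a zero
            if (s[i] == '1' || s[i] == '?') g[j] = (g[j] + f[j]) % MOD;          // place a one
        }
        f.swap(g);
    }
    long long part2 = 0;
    for (long long j = 0; j <= n; j++)
        part2 = (part2 + f[j] * (j * (j - 1) / 2 % MOD)) % MOD;

    return ((part1 - part2) % MOD + MOD) % MOD;
}

int main() {
    cout << sumInversions("??01") << '\n';  // expected 5
    cout << sumInversions("?0?") << '\n';   // expected 3
    return 0;
}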

Computing the number of times fib(n) is called FOR EACH n

I want to compute the number of times fib(n) is called FOR EACH n. I have written the code as below:
#include <stdio.h>
#define N 10
int count[N + 1]; // count[n] keeps track of the number of times each fib(n) is called
int fib(int n) {
count[n]++;
if(n <= 1)
return n;
else
return fib(n - 1) + fib(n - 2);
}
int main() {
for(int i = 0; i <= N; i++) {
count[i] = 0; // initialize count to 0
}
fib(N);
// print values of count[]
for(int i = 0; i <= N; i++) {
printf("count[%d] = %d", i, count[i]);
}
}
I have tried printing the array count[] to get the result; the result resembles the Fibonacci numbers, except for count[0]:
count[0] = 34 count[1] = 55 count[2] = 34 count[3] = 21 count[4] = 13
count[5] = 8 count[6] = 5 count[7] = 3 count[8] = 2 count[9] = 1
count[10] = 1
Is there a way to show this result mathematically, maybe with a recursive formula? Also, why doesn't count[0], or rather the count for fib(0), continue the Fibonacci sequence? Thanks.
Because fib(1) is called by every call to fib(2) and every call to fib(3), but fib(0) is only called by fib(2). fib(1) doesn't contribute further calls because it's a terminus.
As for a mathematical formula:
if n == 0: count[n] = fib(N - 1)
else: count[n] = fib(N - (n - 1))
As for the total number of calls made when evaluating fib(n):
call(n) = call(n-1) + call(n-2) + 1
call(1) = 1
call(0) = 1
Hope this makes things clear.
n | calls
---+-------
0 |   1
1 |   1
2 |   3
3 |   5      f(3) = f(2) [= f(1) + f(0)] + f(1)
4 |   9
5 |  15
...
In general, evaluating fib(n) takes 2*fib(n+1) - 1 calls (with fib(1) = fib(2) = 1).
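To see the closed form in action (a sketch extending the question's code; fibval below is an assumed helper, not part of the original), the instrumented counts can be compared against plain Fibonacci values:
#include <stdio.h>
#define N 10

int count[N + 1];   /* count[n] = number of times fib(n) is called */

int fib(int n) {
    count[n]++;
    if (n <= 1) return n;
    return fib(n - 1) + fib(n - 2);
}

/* Iterative Fibonacci with fibval(1) = fibval(2) = 1, used only for the check. */
int fibval(int n) {
    int a = 0, b = 1;
    for (int i = 0; i < n; i++) { int t = a + b; a = b; b = t; }
    return a;
}

int main(void) {
    fib(N);
    /* Expected: count[0] == fibval(N - 1), and count[n] == fibval(N - n + 1) for n >= 1. */
    for (int n = 0; n <= N; n++) {
        int expected = (n == 0) ? fibval(N - 1) : fibval(N - n + 1);
        printf("count[%d] = %d, expected %d\n", n, count[n], expected);
    }
    return 0;
}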

Most efficient way to calculate lexicographic index

Can anybody find any potentially more efficient algorithms for accomplishing the following task?:
For any given permutation of the integers 0 thru 7, return the index which describes the permutation lexicographically (indexed from 0, not 1).
For example,
The array 0 1 2 3 4 5 6 7 should return an index of 0.
The array 0 1 2 3 4 5 7 6 should return an index of 1.
The array 0 1 2 3 4 6 5 7 should return an index of 2.
The array 1 0 2 3 4 5 6 7 should return an index of 5040 (that's 7!, or factorial(7), since all 5040 permutations beginning with 0 come before it).
The array 7 6 5 4 3 2 1 0 should return an index of 40319 (that's 8!-1). This is the maximum possible return value.
My current code looks like this:
int lexic_ix(int* A){
int value = 0;
for(int i=0 ; i<7 ; i++){
int x = A[i];
for(int j=0 ; j<i ; j++)
if(A[j]<A[i]) x--;
value += x*factorial(7-i); // actual unrolled version doesn't have a function call
}
return value;
}
I'm wondering if there's any way I can reduce the number of operations by removing that inner loop, or if I can reduce conditional branching in any way (other than unrolling - my current code is actually an unrolled version of the above), or if there are any clever bitwise hacks or filthy C tricks to help.
I already tried replacing
if(A[j]<A[i]) x--;
with
x -= (A[j]<A[i]);
and I also tried
x = A[j]<A[i] ? x-1 : x;
Both replacements actually led to worse performance.
And before anyone says it - YES this is a huge performance bottleneck: currently about 61% of the program's runtime is spent in this function, and NO, I don't want to have a table of precomputed values.
Aside from those, any suggestions are welcome.
Don't know if this helps, but here's another solution:
int lexic_ix(int* A, int n){ //n = last index = number of digits - 1
    int value = 0;
    int x = 0;
    for(int i=0 ; i<n ; i++){
        int diff = (A[i] - x); //pb1
        if(diff > 0)
        {
            for(int j=0 ; j<i ; j++)//pb2
            {
                if(A[j]<A[i] && A[j] > x)
                {
                    if(A[j]==x+1)
                    {
                        x++;
                    }
                    diff--;
                }
            }
            value += diff;
        }
        else
        {
            x++;
        }
        value *= n - i;
    }
    return value;
}
I couldn't get rid of the inner loop, so the complexity is O(n^2) in the worst case, but O(n) in the best case, versus your solution, which is O(n^2) in all cases.
Alternatively, you can replace the inner loop with the following to remove some worst cases, at the expense of another check in the inner loop:
int j=0;
while(diff>1 && j<i)
{
if(A[j]<A[i])
{
if(A[j]==x+1)
{
x++;
}
diff--;
}
j++;
}
Explanation:
(or rather "how I ended up with that code"; I think it is not that different from yours, but it may give you ideas, maybe)
(for less confusion I used characters instead of digits, and only four of them)
abcd 0 = ((0 * 3 + 0) * 2 + 0) * 1 + 0
abdc 1 = ((0 * 3 + 0) * 2 + 1) * 1 + 0
acbd 2 = ((0 * 3 + 1) * 2 + 0) * 1 + 0
acdb 3 = ((0 * 3 + 1) * 2 + 1) * 1 + 0
adbc 4 = ((0 * 3 + 2) * 2 + 0) * 1 + 0
adcb 5 = ((0 * 3 + 2) * 2 + 1) * 1 + 0 //pb1
bacd 6 = ((1 * 3 + 0) * 2 + 0) * 1 + 0
badc 7 = ((1 * 3 + 0) * 2 + 1) * 1 + 0
bcad 8 = ((1 * 3 + 1) * 2 + 0) * 1 + 0 //First reflexion
bcda 9 = ((1 * 3 + 1) * 2 + 1) * 1 + 0
bdac 10 = ((1 * 3 + 2) * 2 + 0) * 1 + 0
bdca 11 = ((1 * 3 + 2) * 2 + 1) * 1 + 0
cabd 12 = ((2 * 3 + 0) * 2 + 0) * 1 + 0
cadb 13 = ((2 * 3 + 0) * 2 + 1) * 1 + 0
cbad 14 = ((2 * 3 + 1) * 2 + 0) * 1 + 0
cbda 15 = ((2 * 3 + 1) * 2 + 1) * 1 + 0 //pb2
cdab 16 = ((2 * 3 + 2) * 2 + 0) * 1 + 0
cdba 17 = ((2 * 3 + 2) * 2 + 1) * 1 + 0
[...]
dcba 23 = ((3 * 3 + 2) * 2 + 1) * 1 + 0
First "reflexion" :
An entropy point of view. abcd have the fewest "entropy". If a character is in a place it "shouldn't" be, it creates entropy, and the earlier the entropy is the greatest it becomes.
For bcad for example, lexicographic index is 8 = ((1 * 3 + 1) * 2 + 0) * 1 + 0 and can be calculated that way :
value = 0;
value += max(b - a, 0); // = 1; (a "should be" in the first place [to create the less possible entropy] but instead it is b)
value *= 3 - 0; //last index - current index
value += max(c - b, 0); // = 1; (b "should be" in the second place but instead it is c)
value *= 3 - 1;
value += max(a - c, 0); // = 0; (a "should have been" put earlier, so it does not create entropy to put it there)
value *= 3 - 2;
value += max(d - d, 0); // = 0;
Note that the last operation will always do nothing; that's why the loop only runs while i < n (the last index is skipped).
First problem (pb1) :
For adcb, for example, the first logic doesn't work (it leads to a lexicographic index of ((0 * 3 + 2) * 2 + 0) * 1 = 4) because max(c - d, 0) = 0, yet it does create entropy to put c before b. I added x because of that; it represents the first digit/character that isn't placed yet. With x, diff cannot be negative.
For adcb, lexicographic index is 5 = ((0 * 3 + 2) * 2 + 1) * 1 + 0 and can be calculated that way :
value = 0; x=0;
diff = a - a; // = 0; (a is in the right place)
diff == 0 => x++; //x=b now and we don't modify value
value *= 3 - 0; //last index - current index
diff = d - b; // = 2; (b "should be" there (it's x) but instead it is d)
diff > 0 => value += diff; //we add diff to value and we don't modify x
diff = c - b; // = 1; (b "should be" there but instead it is c) This is where it differs from the first reflexion
diff > 0 => value += diff;
value *= 3 - 2;
Second problem (pb2) :
For cbda, for example, lexicographic index is 15 = ((2 * 3 + 1) * 2 + 1) * 1 + 0, but the first reflexion gives : ((2 * 3 + 0) * 2 + 1) * 1 + 0 = 13 and the solution to pb1 gives ((2 * 3 + 1) * 2 + 3) * 1 + 0 = 17. The solution to pb1 doesn't work because the two last characters to place are d and a, so d - a "means" 1 instead of 3. I had to count the characters placed before that comes before the character in place, but after x, so I had to add an inner loop.
Putting it all together :
I then realised that pb1 was just a particular case of pb2, and that if you remove x, and you simply take diff = A[i], we end up with the unnested version of your solution (with factorial calculated little by little, and my diff corresponding to your x).
So, basically, my "contribution" (I think) is to add a variable, x, which can avoid doing the inner loop when diff equals 0 or 1, at the expense of checking if you have to increment x and doing it if so.
I also checked if you have to increment x in the inner loop (if(A[j]==x+1)) because if you take for example badce, x will be b at the end because a comes after b, and you will enter the inner loop one more time, encountering c. If you check x in the inner loop, when you encounter d you have no choice but doing the inner loop, but x will update to c, and when you encounter c you will not enter the inner loop. You can remove this check without breaking the program
With the alternative version and the check in the inner loop it makes 4 different versions. The alternative one with the check is the one in which you enter the less the inner loop, so in terms of "theoretical complexity" it is the best, but in terms of performance/number of operations, I don't know.
Hope all of this helps (since the question is rather old, and I didn't read all the answers in details). If not, I still had fun doing it. Sorry for the long post. Also I'm new on Stack Overflow (as a member), and not a native speaker, so please be nice, and don't hesitate to let me know if I did something wrong.
Linear traversal of memory already in cache really doesn't take much time at all. Don't worry about it. You won't be traversing enough distance before factorial() overflows.
Move the 8 out as a parameter.
int factorial ( int input )
{
return input ? input * factorial (input - 1) : 1;
}
int lexic_ix ( int* arr, int N )
{
int output = 0;
int fact = factorial (N);
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
output += order * (fact /= N - i);
}
return output;
}
int main()
{
int arr [ ] = { 11, 10, 9, 8, 7 , 6 , 5 , 4 , 3 , 2 , 1 , 0 };
const int length = 12;
for ( int i = 0; i < length; ++i )
std::cout << lexic_ix ( arr + i, length - i ) << std::endl;
}
Say, for an M-digit sequence permutation, from your code you can get the lexicographic serial-number (SN) formula, which is something like A[M-1]*(M-1)! + A[M-2]*(M-2)! + ... + A[0]*0!, where A[j] ranges from 0 to j. You can calculate SN from A[0]*0!, then A[1]*1!, ..., then A[M-1]*(M-1)!, and add these together (assuming your integer type does not overflow), so you do not need to calculate factorials recursively and repeatedly. The SN value ranges from 0 to M!-1 (because Sum(j*j!, j = 0..M-1) = M! - 1).
If you are not calculating factorials recursively, I cannot think of anything that could make any big improvement.
Sorry for posting the code a little bit late; I just did some research and found this:
http://swortham.blogspot.com.au/2011/10/how-much-faster-is-multiplication-than.html
According to this author, integer multiplication can be 40 times faster than integer division. The difference is not as dramatic for floating-point numbers, but here it is pure integer arithmetic.
#include <iostream>
#include <memory>   // std::make_unique requires C++14

int lexic_ix ( int arr[], int N )
{
// if this function will be called repeatedly, consider pass in this pointer as parameter
std::unique_ptr<int[]> coeff_arr = std::make_unique<int[]>(N);
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
coeff_arr[i] = order; // save this into coeff_arr for later multiplication
}
//
// There are 2 points about the following code:
// 1). most modern processors have built-in multiplier, \
// and multiplication is much faster than division
// 2). In your code, you are only computing the maximum permutation serial number;
// if you put in a random sequence, say, when length is 10, you put in
// a random sequence, say, {3, 7, 2, 9, 0, 1, 5, 8, 4, 6}; if you look into
// the coeff_arr[] in debugger, you can see that coeff_arr[] is:
// {3, 6, 2, 6, 0, 0, 1, 2, 0, 0}, the last number will always be zero anyway.
// so, you will have good chance to reduce many multiplications.
// I did not do any performance profiling, you could have a go, and it will be
// much appreciated if you could give some feedback about the result.
//
long fac = 1;
long sn = 0;
for (int i = 1; i < N; ++i) // start from 1, because coeff_arr[N-1] is always 0
{
fac *= i;
if (coeff_arr[N - 1 - i])
sn += coeff_arr[N - 1 - i] * fac;
}
return sn;
}
int main()
{
int arr [ ] = { 3, 7, 2, 9, 0, 1, 5, 8, 4, 6 }; // try this and check coeff_arr
const int length = 10;
std::cout << lexic_ix(arr, length ) << std::endl;
return 0;
}
This is the whole profiling code. I only ran the test on Linux; the code was compiled with G++ 8.4 and the '-std=c++11 -O3' compiler options. To be fair, I slightly rewrote your code to pre-calculate N! and pass it into the function, but it seems this does not help much.
The performance profiling for N = 9 (362,880 permutations) is:
Time durations are: 34, 30, 25 milliseconds
Time durations are: 34, 30, 25 milliseconds
Time durations are: 33, 30, 25 milliseconds
The performance profiling for N=10 (3,628,800 permutations) is:
Time durations are: 345, 335, 275 milliseconds
Time durations are: 348, 334, 275 milliseconds
Time durations are: 345, 335, 275 milliseconds
The first number is your original function, the second is the rewritten function that gets N! passed in, and the last number is my result. The permutation generation function is very primitive and runs slowly, but as long as it generates all permutations as the test dataset, that is all right. By the way, these tests were run on a quad-core 3.1 GHz, 4 GB desktop running Ubuntu 14.04.
EDIT: I forgot that the first timed loop may need to grow the lexi_numbers vector, so I put an untimed call before the timing. After this, the times are 333, 334, 275.
EDIT: Another factor that could influence the performance: I am using long integers in my code. If I change those two 'long's to 'int's, the running times become 334, 333, 264.
#include <iostream>
#include <vector>
#include <chrono>
using namespace std::chrono;
int factorial(int input)
{
return input ? input * factorial(input - 1) : 1;
}
int lexic_ix(int* arr, int N)
{
int output = 0;
int fact = factorial(N);
for (int i = 0; i < N - 1; i++)
{
int order = arr[i];
for (int j = 0; j < i; j++)
order -= arr[j] < arr[i];
output += order * (fact /= N - i);
}
return output;
}
int lexic_ix1(int* arr, int N, int N_fac)
{
int output = 0;
int fact = N_fac;
for (int i = 0; i < N - 1; i++)
{
int order = arr[i];
for (int j = 0; j < i; j++)
order -= arr[j] < arr[i];
output += order * (fact /= N - i);
}
return output;
}
int lexic_ix2( int arr[], int N , int coeff_arr[])
{
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
coeff_arr[i] = order;
}
long fac = 1;
long sn = 0;
for (int i = 1; i < N; ++i)
{
fac *= i;
if (coeff_arr[N - 1 - i])
sn += coeff_arr[N - 1 - i] * fac;
}
return sn;
}
std::vector<std::vector<int>> gen_permutation(const std::vector<int>& permu_base)
{
if (permu_base.size() == 1)
return std::vector<std::vector<int>>(1, std::vector<int>(1, permu_base[0]));
std::vector<std::vector<int>> results;
for (int i = 0; i < permu_base.size(); ++i)
{
int cur_int = permu_base[i];
std::vector<int> cur_subseq = permu_base;
cur_subseq.erase(cur_subseq.begin() + i);
std::vector<std::vector<int>> temp = gen_permutation(cur_subseq);
for (auto x : temp)
{
x.insert(x.begin(), cur_int);
results.push_back(x);
}
}
return results;
}
int main()
{
#define N 10
std::vector<int> arr;
int buff_arr[N];
const int length = N;
int N_fac = factorial(N);
for(int i=0; i<N; ++i)
arr.push_back(N-i-1); // for N=10, arr is {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
std::vector<std::vector<int>> all_permus = gen_permutation(arr);
std::vector<int> lexi_numbers;
// This call is not timed, only to expand the lexi_numbers vector
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix2(&x[0], length, buff_arr));
lexi_numbers.clear();
auto t0 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix(&x[0], length));
auto t1 = high_resolution_clock::now();
lexi_numbers.clear();
auto t2 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix1(&x[0], length, N_fac));
auto t3 = high_resolution_clock::now();
lexi_numbers.clear();
auto t4 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix2(&x[0], length, buff_arr));
auto t5 = high_resolution_clock::now();
std::cout << std::endl << "Time durations are: " << duration_cast<milliseconds> \
(t1 -t0).count() << ", " << duration_cast<milliseconds>(t3 - t2).count() << ", " \
<< duration_cast<milliseconds>(t5 - t4).count() <<" milliseconds" << std::endl;
return 0;
}