Complexity of a function with 1 loop - c++

Can anyone tell me what the complexity of the function below is, and how to calculate it?
I am suspecting that it's O(log(n)) or O(sqrt(N)).
My reasoning was based on trying examples such as n=4, n=8 and n=16, where the loop appears to take about log(n) iterations. But that isn't conclusive, since sqrt(n) gives similar values at those sizes; I would have to try much bigger values of n to tell the two apart, and I am not sure how to approach that.
I had this function in the exam today.
void f(int n) {
    int i = 1;
    int j = 1;
    while (j <= n) {
        i += 1;
        j += i;
    }
}

The sequence j runs through is 1, 3, 6, 10, 15, 21, ... - the triangular numbers, whose k-th term is k*(k+1)/2.
Expanded, this is (k^2 + k)/2. We can ignore the constant factor (/2) and the lower-order term (+k), which leaves us with k^2.
So j grows quadratically in the number of iterations, and the loop will stop after the inverse of that growth:
The time complexity is O(sqrt(n))
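To make that inverse step concrete: after k iterations the loop has added 2 + 3 + ... + (k+1) to the initial j = 1, so

    j = (k+1)*(k+2)/2

and the loop exits at the smallest k for which (k+1)*(k+2)/2 > n. Solving that quadratic gives k ≈ sqrt(2n), which is O(sqrt(n)) iterations.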

For what it's worth, I wrote a small program that attempts to illustrate whether this is O(log(N)) or O(sqrt(N)) by actually counting how many iterations of your loop execute. This seemed a reasonable approximation given that the body of the loop is largely negligible (simply incrementing two integer variables).
#include <stdio.h>
#include <math.h>

int f(int n)
{
    int i = 1;
    int j = 1;
    int count = 0;
    while (j <= n) {
        i += 1;
        j += i;
        count++;
    }
    return count;
}

int main()
{
    for (int ii = 0; ii < 10; ii++) {
        int count = pow(10, ii);
        int rc = f(count);
        const char *fmt = "N=%d^%-2d -> %d, log(N)=%.2f, sqrt(N)=%.2f\n";
        printf(fmt, 10, ii, rc, log(count), sqrt(count));
    }
    return 0;
}
Running this code results in the following output:
N=10^0 -> 1, log(N)=0.00, sqrt(N)=1.00
N=10^1 -> 4, log(N)=2.30, sqrt(N)=3.16
N=10^2 -> 13, log(N)=4.61, sqrt(N)=10.00
N=10^3 -> 44, log(N)=6.91, sqrt(N)=31.62
N=10^4 -> 140, log(N)=9.21, sqrt(N)=100.00
N=10^5 -> 446, log(N)=11.51, sqrt(N)=316.23
N=10^6 -> 1413, log(N)=13.82, sqrt(N)=1000.00
N=10^7 -> 4471, log(N)=16.12, sqrt(N)=3162.28
N=10^8 -> 14141, log(N)=18.42, sqrt(N)=10000.00
N=10^9 -> 44720, log(N)=20.72, sqrt(N)=31622.78
So, for example, you can see that when N=10^9, the number of iterations is 44720, which is much greater than log(N) (20.72) but quite close to sqrt(N) (31622.78).
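Note how closely this matches the triangular-number argument from the answer above: the loop runs until roughly k^2/2 exceeds N, so the count should be about sqrt(2N) ≈ 1.41*sqrt(N). Indeed, sqrt(2 * 10^9) ≈ 44721, and the measured count for N=10^9 is 44720.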

It depends on your loop condition. In other words, the time complexity is O(log n).
How many statements are executed, relative to the input size n? Often, but NOT always, we can get an idea from the number of times a loop iterates. The loop body executes for i = 2^0 + 2^1 + 2^2 + ... + 2^n, and this sequence has O(log n) values.
Check the book "Introduction to Algorithms" for more details.

Sum of infinite array fails one test case

Problem Statement:
You are given an array “A” of N integers, and a new array “B” is defined as
the concatenation of array “A” an infinite number of times. For example, if
the given array “A” is [1,2,3], then the infinite array “B” is
[1,2,3,1,2,3,1,2,3,.......]. You are given Q queries; each query consists of
two integers “L“ and “R”. Your task is to find the sum of the subarray from
index “L” to “R” (both inclusive) in the infinite array “B” for each query.
vector<int> sumInRanges(vector<int> &arr, int n, vector<vector<long long>> &queries, int q) {
    vector<int> ans;
    for (int i = 0; i < q; i++) {
        int l = queries[i][0];
        int r = queries[i][1];
        int sum = 0;
        for (int j = l - 1; j < r; j++) {
            sum += arr[j % n];
        }
        ans.push_back(sum);
    }
    return ans;
}
One test case is failing. Could someone suggest the edit required?
Good, I've found a link to your actual problem. Take a look at the note:
Sum Of Infinite Array
Note :
The value of the sum can be very large, return the answer as modulus 10^9+7.
....
Constraints :
1 <= T <= 100
1 <= N <= 10^4
1 <= A[i] <= 10^9
1 <= Q <= 10^4
1 <= L <= R <= 10^18
Time Limit: 1sec
So basically your code has a problem with integer overflow.
Also, your implementation is too simple: you have to leverage the fact that this infinite array is periodic, otherwise your code will never meet the time requirement (note that L and R go up to 10^18). You do not have to add up every index individually; you can skip whole periods and account for them with a single multiplication (modulo 10^9+7).
Your solution takes time proportional to r - l because it tries every index in the range.
But this is unnecessary, as there are n identical periods that you can sum in a single go. So the running time can be made proportional to the length of A instead. (Find the multiple of the length just above or on l and the multiple just below r.)
E.g. to sum from 10 to 27 inclusive, use
1231231231|231231231231231231|23123123... = 1231231231|23+5x123+1|23123123...
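Here is a sketch of that approach in C++ (my own illustrative code, not from the original judge; the names and the driver are made up). It builds prefix sums of A once, answers a query as the difference of two "sum of the first x elements" values, counts complete periods with one multiplication, and keeps everything in 64-bit with results reduced modulo 10^9+7:

#include <cstdio>
#include <vector>
using namespace std;

const long long MOD = 1000000007;

// Sum of the first x elements of the infinite array B, modulo MOD:
// floor(x/n) complete copies of A plus a partial prefix of length x % n.
long long sumFirst(long long x, long long n, const vector<long long> &prefix) {
    long long full = (x / n) % MOD;   // number of complete periods, reduced mod MOD
    return (full * prefix[n] + prefix[x % n]) % MOD;
}

// Sum of B[l..r] (1-indexed, inclusive) as a difference of two prefix sums.
long long sumInRange(long long l, long long r, long long n,
                     const vector<long long> &prefix) {
    return ((sumFirst(r, n, prefix) - sumFirst(l - 1, n, prefix)) % MOD + MOD) % MOD;
}

int main() {
    vector<int> a = {1, 2, 3};
    long long n = a.size();
    vector<long long> prefix(n + 1, 0);   // prefix[i] = (a[0]+...+a[i-1]) % MOD
    for (int i = 0; i < n; ++i) prefix[i + 1] = (prefix[i] + a[i]) % MOD;
    printf("%lld\n", sumInRange(11, 28, n, prefix));  // 18 elements = 6 periods -> 36
    return 0;
}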

What's the time-complexity function [ T(n) ] for these loops?

j = n;
while (j >= 1) {
    i = j;
    while (i <= n) { cout << "Printed"; i *= 2; }
    j /= 2;
}
My goal is to find T(n) (the function that gives the exact number of executions), whose order I expected to be n·log(n), but I need the exact function, one that works at least for n = 1 through n = 10.
I tried to guess the function and finally ended up with T(n) = floor((n-1)·log(n)) + n,
which is correct only for n = 1 and n = 2.
I should mention that I found that (inaccurate) function after converting the original code to the for loops below:
for (j = 1; j <= n; j *= 2) {
    for (i = j; i <= n; i *= 2) {
        cout << "Printed";
    }
}
Finally, I'd appreciate your help finding the exact T(n). Thanks in advance. 🙏
In the following, log(x) means the floor of the base-2 logarithm of x.
1.)
The inner loop executes 1 + log(N) - log(j) times for a given j, and the outer loop executes 1 + log(N) times, with j = 1, 2, 4, ..., N. So the overall count is

    T(N) = sum over j = 1, 2, 4, ..., N of (1 + log(N) - log(j))
         = (1 + log(N))^2 - (log(1) + log(2) + log(4) + ... + log(N))
         = log(N)^2 + 2*log(N) + 1 - (0 + 1 + 2 + ... + log(N))
         = log(N)^2 + 2*log(N) + 1 - log(N)*(log(N) + 1)/2
         = log(N)^2/2 + 3*log(N)/2 + 1

2.) Same here, just with j visiting the values in reverse order, so the count is identical.
I know it is not a proof, but maybe it is easier to follow than the math: godbolt - play with n; it always returns 0.
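Since only the link is given, here is a reconstruction of that kind of check (my own code, assuming log means floor of log base 2 as above): it counts the actual inner-loop executions and compares them against the closed form, which can be written exactly as (L+1)*(L+2)/2 for L = log(n):

#include <cstdio>

// floor(log2(x)) for x >= 1
static int ilog2(int x) { int l = 0; while (x >>= 1) ++l; return l; }

int main() {
    for (int n = 1; n <= 100000; ++n) {
        long long count = 0;
        for (int j = n; j >= 1; j /= 2)            // outer loop from the question
            for (long long i = j; i <= n; i *= 2)  // inner loop from the question
                ++count;
        long long L = ilog2(n);
        long long formula = (L + 1) * (L + 2) / 2; // = L^2/2 + 3L/2 + 1
        if (count != formula)
            printf("mismatch at n=%d: count=%lld, formula=%lld\n", n, count, formula);
    }
    printf("done\n");  // if only "done" is printed, the formula is exact
    return 0;
}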
The outer loop and the inner loop each run O(log₂ N) times.
So the total time is
O(log₂ N * log₂ N) = O((log N)^2)
which matches the log(N)^2/2 + 3*log(N)/2 + 1 count derived above.

What is the complexity of this program

I have solved a question on HackerEarth.
The question is
Phineas is building a castle in his backyard to impress Isabella (strange, isn't it?). He has got everything delivered and ready. Even the ground floor has been finished. Now it is time to make the upper part. This is where things become interesting. As Ferb is sleeping in the house after a long day painting the fence (and you folks helped him, didn't ya!), Phineas has to do all the work himself. He is good at this, and all he wants you to do is operate the mini crane to lift the stones. Stones for the wall have been cut and are ready, waiting for you to lift them up.
Now that we don't have Ferb to operate the mini crane, at which he is an expert, we have to do the job as quickly as possible. We are given the maximum lifting capacity of the crane and the weight of each stone. Since it's a mini crane, we cannot place more than 2 stones (of any possible size) on it at a time, or it will disturb the balance of the crane. We need to find out in how many turns we can deliver the stones to Phineas, who is building the castle.
INPUT: The first line of input gives T, the number of test cases. For each test case, the first line gives M, the maximum lifting capacity of the crane. The first integer N of the next line gives the number of stones, followed by N numbers specifying the weight of each individual stone X.
OUTPUT: For each test case, print the minimum number of turns the crane is operated for all stones to be lifted.
CONSTRAINTS:
1 <= T <= 50
1 <= M <= 1000
1 <= N <= 1000
Sample Input
1
50
3 28 22 48
Sample Output
2
Explanation
In first turn, 28 and 22 will be lifted together. In second turn 48 will be lifted.
Discard the stones with weight > max capacity of crane.
Now I have solved this question, and my source code is:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <vector>
using namespace std;

int main(void) {
    int T = 0;
    scanf("%d", &T);
    while (T--) {
        int i = 0, M = 0, N = 0, max = 0, res = 0, index = 0, j = 0, temp = 0;
        vector<int> v1;
        scanf("%d", &M);
        scanf("%d", &N);
        for (i = 0; i < N; ++i) {
            scanf("%d", &temp);
            if (temp <= M)               // discard stones heavier than the crane's capacity
                v1.push_back(temp);
        }
        for (i = 0; i < (int)v1.size(); ++i) {
            max = 0;
            index = 0;
            if (v1[i] != -1) {           // -1 marks stones already lifted
                for (j = i + 1; j < (int)v1.size(); ++j) {
                    if (v1[j] != -1) {
                        temp = v1[i] + v1[j];
                        if (temp > max && temp <= M) {   // heaviest partner that still fits
                            max = temp;
                            index = j;
                        }
                    }
                }
                ++res;                   // one turn for stone i (and its partner, if any)
                v1[i] = -1;
                v1[index] = -1;
            }
        }
        printf("%d\n", res);
    }
    return 0;
}
Now here are my questions:
I want to know the average-case time complexity of this code. Also, I think the worst-case complexity of this code would be O(N^2).
Is this a brute-force approach or a dynamic programming approach?
Is there any better approach than this?
This is a simplified version of the Knapsack problem.
While the Knapsack problem is a typical dynamic programming question, this simplified version does not require dynamic programming. The complexity of your solution is indeed O(n^2); the approach is better described as greedy, since you try to find an optimal pair for each stone, if one exists. The complexity can be further reduced to O(n log n) if you sort the stones first and work on the sorted vector, as sketched below.
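A sketch of that O(n log n) version (my code, not part of the original answer): sort the stones that fit, then walk two pointers inward, pairing the lightest remaining stone with the heaviest one whenever they fit together. This is the classic two-per-trip greedy, and the sort dominates the running time.

#include <algorithm>
#include <vector>

// stones: weights already filtered to those <= M (as in the original code).
// Returns the minimum number of crane turns, with at most two stones per turn.
int minTurns(std::vector<int> stones, int M) {
    std::sort(stones.begin(), stones.end());          // O(n log n)
    int lo = 0, hi = (int)stones.size() - 1, turns = 0;
    while (lo <= hi) {
        if (lo < hi && stones[lo] + stones[hi] <= M)
            ++lo;      // the lightest stone rides along with the heaviest
        --hi;          // the heaviest stone goes up either way
        ++turns;
    }
    return turns;
}

On the sample input (M = 50, stones 28 22 48) this returns 2, matching the expected output.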

Algorithm analysis: Am I analyzing these algorithms correctly? How to approach problems like these [closed]

1)
x = 25;
for (int i = 0; i < myArray.length; i++)
{
    if (myArray[i] == x)
        System.out.println("found!");
}
I think this one is O(n).
2)
for (int r = 0; r < 10000; r++)
    for (int c = 0; c < 10000; c++)
        if (c % r == 0)
            System.out.println("blah!");
I think this one is O(1), because for any input n, it will run 10000 * 10000 times. Not sure if this is right.
3)
a = 0;
for (int i = 0; i < k; i++)
{
    for (int j = 0; j < i; j++)
        a++;
}
I think this one is O(i * k). I don't really know how to approach problems like this where the inner loop is affected by variables being incremented in the outer loop. Some key insights here would be much appreciated. The outer loop runs k times, and the inner loop runs 1 + 2 + 3 + ... + k times. So that sum should be (k/2) * (k+1), which would be order of k^2. So would it actually be O(k^3)? That seems too large. Again, don't know how to approach this.
4)
int key = 0; //key may be any value
int first = 0;
int last = intArray.length - 1;
int mid = 0;
boolean found = false;

while ((!found) && (first <= last))
{
    mid = (first + last) / 2;
    if (key == intArray[mid])
        found = true;
    if (key < intArray[mid])
        last = mid - 1;
    if (key > intArray[mid])
        first = mid + 1;
}
This one, I think is O(log n). But, I came to this conclusion because I believe it is a binary search and I know from reading that the runtime is O(log n). I think it's because you divide the input size by 2 for each iteration of the loop. But, I don't know if this is the correct reasoning or how to approach similar algorithms that I haven't seen and be able to deduce that they run in logarithmic time in a more verifiable or formal way.
5)
int currentMinIndex = 0;
for (int front = 0; front < intArray.length; front++)
{
    currentMinIndex = front;
    for (int i = front; i < intArray.length; i++)
    {
        if (intArray[i] < intArray[currentMinIndex])
        {
            currentMinIndex = i;
        }
    }
    int tmp = intArray[front];
    intArray[front] = intArray[currentMinIndex];
    intArray[currentMinIndex] = tmp;
}
I am confused about this one. The outer loop runs n times, and the inner for loop runs n + (n-1) + (n-2) + ... + 1 times in total. So is that O(n^3)??
More or less, yes.
1 is correct - it seems you are searching for a specific element in what I assume is an unsorted collection. If so, the worst case is that the element is at the very end of the list, hence O(n).
2 is correct, though a bit strange. It is O(1) because the bounds are constants rather than variables: there is no input for the running time to grow with.
3 is O(k^2). The exact count is about k*(k+1)/2; drop the constant factor and the lower-order term and you get O(k^2). (You don't multiply by k again - the sum 1 + 2 + ... + k already counts every execution of the inner loop body.)
4 looks a lot like a binary search algorithm on a sorted collection. O(log n) is correct. It is logarithmic because at each iteration you are essentially halving the number of possible positions where the element you are looking for could be.
5 is a selection sort (find the minimum of the unsorted part, swap it to the front), which is O(n^2) for reasons similar to 3.
O() doesn't mean anything by itself: you need to specify whether you are counting the worst-case O or the average-case O. Some sorting algorithms, for example, are O(n log n) on average but O(n^2) in the worst case.
Basically, you need to count the overall number of iterations of the innermost loop and take the biggest component of the result, without any constant (for example, if you have k*(k+1)/2 = 1/2 k^2 + 1/2 k, the biggest component is 1/2 k^2, therefore you are O(k^2)).
For example, your item 4) is in O(log(n)) because, if you work on an array of size n, then you will run one iteration on this array, and the next one will be on an array of size n/2, then n/4, ..., until the size reaches 1. So it is log(n) iterations.
Your question is mostly about the definition of O().
When someone says "this algorithm is O(log(n))", you have to read:
When the input parameter n becomes very big, the number of operations performed by the algorithm grows at most like log(n).
Now, this means two things:
You have to have at least one input parameter n. There is no point in talking about O() without one (as in your case 2).
You need to define the operations that you are counting. These can be additions, comparisons between two elements, the number of allocated bytes, the number of function calls - but you have to decide. Usually you take the operation that is most costly to you, or the one that will become costly if done too many times.
So, keeping this in mind, back to your problems:
1) n is myArray.length, and the number of operations you are counting is ==. In that case the answer is exactly n, which is O(n).
2) You can't specify an n.
3) The n can only be k, and the number of operations you count is ++. You have exactly k*(k+1)/2 of them, which is O(k^2), as you say.
4) This time n is the length of your array again, and the operation you count is ==. In this case, the number of operations depends on the data; usually we talk about the "worst-case scenario", meaning that of all the possible outcomes, we look at the one that takes the most time. At best, the algorithm takes one comparison. For the worst case, let's take an example: if the array is [1,2,3,4,5,6,7,8,9] and you are looking for 4, intArray[mid] will become, successively, 5, 2, 3 and then 4, so you would have done the comparison 4 times. In general, each iteration halves the range that remains, so for an array of size n the maximum number of iterations is about log2(n) = ln(n)/ln(2) (you can check), which gives complexity O(ln(n)).
In any case, I think you are confused because you don't exactly know what O(n) means. I hope this is a start.
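If you want to check the logarithmic behaviour of 4) empirically instead of taking it on faith, you can instrument the search and measure the worst case over all keys. A small self-contained sketch of mine (in C++ rather than the Java of the question); the worst case should grow by about 1 each time n doubles:

#include <cstdio>
#include <vector>

// Binary search that reports how many probes (loop iterations) it made.
int probes(const std::vector<int> &a, int key) {
    int first = 0, last = (int)a.size() - 1, count = 0;
    while (first <= last) {
        int mid = (first + last) / 2;
        ++count;
        if (a[mid] == key) break;
        if (key < a[mid]) last = mid - 1; else first = mid + 1;
    }
    return count;
}

int main() {
    for (int n = 1; n <= (1 << 12); n *= 2) {
        std::vector<int> a(n);
        for (int i = 0; i < n; ++i) a[i] = i;   // sorted input
        int worst = 0;
        for (int key = 0; key < n; ++key) {     // try every key, keep the worst case
            int p = probes(a, key);
            if (p > worst) worst = p;
        }
        printf("n=%5d worst probes=%d\n", n, worst);
    }
    return 0;
}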

Porting optimized Sieve of Eratosthenes from Python to C++

Some time ago I used the (blazing fast) primesieve in python that I found here: Fastest way to list all primes below N
To be precise, this implementation:
def primes2(n):
    """ Input n>=6, Returns a list of primes, 2 <= p < n """
    n, correction = n-n%6+6, 2-(n%6>1)
    sieve = [True] * (n/3)
    for i in xrange(1,int(n**0.5)/3+1):
        if sieve[i]:
            k=3*i+1|1
            sieve[ k*k/3 ::2*k] = [False] * ((n/6-k*k/6-1)/k+1)
            sieve[k*(k-2*(i&1)+4)/3::2*k] = [False] * ((n/6-k*(k-2*(i&1)+4)/6-1)/k+1)
    return [2,3] + [3*i+1|1 for i in xrange(1,n/3-correction) if sieve[i]]
Now I can somewhat grasp the idea of the optimization of automatically skipping multiples of 2, 3 and so on, but when it comes to porting this algorithm to C++ I get stuck (I have a good understanding of Python and a reasonable/bad understanding of C++, but good enough for rock 'n' roll).
What I currently have rolled myself is this (isqrt() is just a simple integer square root function):
template <class T>
void primesbelow(T N, std::vector<T> &primes) {
    T sievemax = (N - 3 + (1 - (N % 2))) / 2;
    T i;
    T sievemaxroot = isqrt(sievemax) + 1;

    boost::dynamic_bitset<> sieve(sievemax);
    sieve.set();

    primes.push_back(2);

    for (i = 0; i <= sievemaxroot; i++) {
        if (sieve[i]) {
            primes.push_back(2 * i + 3);
            for (T j = 3 * i + 3; j < sievemax; j += 2 * i + 3) sieve[j] = 0; // filter multiples
        }
    }

    for (; i < sievemax; i++) {
        if (sieve[i]) primes.push_back(2 * i + 3);
    }
}
This implementation is decent and automatically skips multiples of 2, but if I could port the Python implementation I think it could be much faster (30%-50% or so).
To compare the results (in the hope this question will be successfully answered), the current execution time with N=100000000, g++ -O3 on a Q6600 Ubuntu 10.10 is 1230ms.
Now I would love some help, either with understanding what the above Python implementation does, or with a port of it (though that would not be as helpful for learning).
EDIT
Some extra information about what I find difficult.
I have trouble with the techniques used, like the correction variable, and in general with how it all comes together. A link to a site explaining different Sieve of Eratosthenes optimizations (apart from the simple sites that say "well, you just skip multiples of 2, 3 and 5" and then slam you with a 1000-line C file) would be awesome.
I don't think I would have issues with a 100% direct and literal port, but since after all this is for learning that would be utterly useless.
EDIT
After looking at the code in the original numpy version, it actually is pretty easy to implement, and with some thinking not too hard to understand. This is the C++ version I came up with. I'm posting it here in full to help further readers in case they need a pretty efficient primesieve that is not two million lines of code. This primesieve finds all primes under 100000000 in about 415 ms on the same machine as above. That's a 3x speedup, better than I expected!
#include <vector>
#include <boost/dynamic_bitset.hpp>

// http://vault.embedded.com/98/9802fe2.htm - integer square root
unsigned short isqrt(unsigned long a) {
    unsigned long rem = 0;
    unsigned long root = 0;

    for (short i = 0; i < 16; i++) {
        root <<= 1;
        rem = ((rem << 2) + (a >> 30));
        a <<= 2;
        root++;

        if (root <= rem) {
            rem -= root;
            root++;
        } else root--;
    }

    return static_cast<unsigned short>(root >> 1);
}

// https://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python/3035188#3035188
// https://stackoverflow.com/questions/5293238/porting-optimized-sieve-of-eratosthenes-from-python-to-c/5293492
template <class T>
void primesbelow(T N, std::vector<T> &primes) {
    T i, j, k, l, sievemax, sievemaxroot;

    sievemax = N / 3;
    if ((N % 6) == 2) sievemax++;

    sievemaxroot = isqrt(N) / 3;

    boost::dynamic_bitset<> sieve(sievemax);
    sieve.set();

    primes.push_back(2);
    primes.push_back(3);

    for (i = 1; i <= sievemaxroot; i++) {
        if (sieve[i]) {
            k = (3 * i + 1) | 1;
            l = (4 * k - 2 * k * (i & 1)) / 3;
            for (j = k * k / 3; j < sievemax; j += 2 * k) {
                sieve[j] = 0;
                sieve[j + l] = 0;
            }
            primes.push_back(k);
        }
    }

    for (i = sievemaxroot + 1; i < sievemax; i++) {
        if (sieve[i]) primes.push_back((3 * i + 1) | 1);
    }
}
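A minimal driver for the above, in case someone wants to try it (my own harness, assuming the isqrt and primesbelow definitions from the listing are in the same file; π(10^8) = 5761455 is a convenient value to check against):

#include <cstdio>

int main() {
    std::vector<unsigned long> primes;
    primesbelow<unsigned long>(100000000UL, primes);
    printf("%zu primes below 100000000\n", primes.size());  // expect 5761455
    return 0;
}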
I'll try to explain as much as I can. The sieve array has an unusual indexing scheme; it stores a bit for each number that is congruent to 1 or 5 mod 6. Thus, a number 6*k + 1 will be stored in position 2*k and k*6 + 5 will be stored in position 2*k + 1. The 3*i+1|1 operation is the inverse of that: it takes numbers of the form 2*n and converts them into 6*n + 1, and takes 2*n + 1 and converts it into 6*n + 5 (the +1|1 thing converts 0 to 1 and 3 to 5). The main loop iterates k through all numbers with that property, starting with 5 (when i is 1); i is the corresponding index into sieve for the number k. The first slice update to sieve then clears all bits in the sieve with indexes of the form k*k/3 + 2*m*k (for m a natural number); the corresponding numbers for those indexes start at k^2 and increase by 6*k at each step. The second slice update starts at index k*(k-2*(i&1)+4)/3 (number k * (k+4) for k congruent to 1 mod 6 and k * (k+2) otherwise) and similarly increases the number by 6*k at each step.
Here's another attempt at an explanation: let candidates be the set of all numbers that are at least 5 and are congruent to either 1 or 5 mod 6. If you multiply two elements in that set, you get another element in the set. Let succ(k) for some k in candidates be the next element (in numerical order) in candidates that is larger than k. In that case, the inner loop of the sieve is basically (using normal indexing for sieve):
for k in candidates:
    for (l = k; ; l += 6) sieve[k * l] = False
    for (l = succ(k); ; l += 6) sieve[k * l] = False
Because of the limitations on which elements are stored in sieve, that is the same as:
for k in candidates:
    for l in candidates where l >= k:
        sieve[k * l] = False
which will remove all multiples of k in candidates (other than k itself) from the sieve at some point (either when the current k was used as l earlier or when it is used as k now).
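The index-to-number mapping itself is easy to check in isolation. A toy snippet (mine, not part of the port):

#include <cstdio>

int main() {
    // sieve index i <-> the number it represents: (3*i + 1) | 1
    //   i = 2*k   -> 6*k + 1
    //   i = 2*k+1 -> 6*k + 5
    for (int i = 1; i <= 8; ++i)
        printf("index %d -> number %d\n", i, (3 * i + 1) | 1);
    // prints 5, 7, 11, 13, 17, 19, 23, 25: exactly the numbers
    // congruent to 1 or 5 mod 6, starting at 5
    return 0;
}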
Piggy-backing onto Howard Hinnant's response: Howard, you don't have to test numbers in the set of all natural numbers not divisible by 2, 3 or 5 for primality, per se. You simply multiply each number in the array (except 1, which self-eliminates) by itself and by every subsequent number in the array. These overlapping products will give you all the non-primes in the array, up to whatever point you extend the deterministic multiplicative process. Thus the first non-prime in the array will be 7 squared, or 49. The second, 7 times 11, or 77, etc. A full explanation here: http://www.primesdemystified.com
As an aside, you can "approximate" prime numbers. Call the approximate prime P. Here are a few formulas:
P = 2*k+1 // not divisible by 2
P = 6*k + {1, 5} // not divisible 2, 3
P = 30*k + {1, 7, 11, 13, 17, 19, 23, 29} // not divisible by 2, 3, 5
The property of the set of numbers produced by these formulas is that P may not be prime, but all primes are in the set P. I.e. if you only test numbers in the set P for primality, you won't miss any.
You can reformulate these formulas to:
P = X*k + {-i, -j, -k, k, j, i}
if that is more convenient for you.
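For instance, the 30k + {1, 7, 11, 13, 17, 19, 23, 29} wheel can be written out directly. A small sketch (my code) that generates the candidates and marks which ones happen to be prime, showing that the set contains every prime above 5 but also composites like 49 = 7*7:

#include <cstdio>

int main() {
    const int offsets[] = {1, 7, 11, 13, 17, 19, 23, 29};
    for (int k = 0; k < 4; ++k) {
        for (int off : offsets) {
            int p = 30 * k + off;
            if (p < 7) continue;           // skip the candidate 1
            bool prime = true;             // naive trial division, just for the demo
            for (int d = 2; d * d <= p; ++d)
                if (p % d == 0) { prime = false; break; }
            printf("%3d %s\n", p, prime ? "prime" : "composite");
        }
    }
    return 0;
}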
Here is some code that uses this technique with a formula for P not divisible by 2, 3, 5, 7.
This link may represent the extent to which this technique can be practically leveraged.