Algorithmic complexity of the following code snippet - C++

We all know that a code snippet like
for(int i=0;i<n;i++) { /* do something */ }
has a complexity of O(n), and we can derive it by summing the constant cost c1 of the body over all iterations:
sum from i=0 to n-1 of c1 = c1 * n
What if the code is:
for(int i=0;i<n;i+=2) { /* do something */ }
I'm not only interested in the big-O notation but also in the exact growth-rate function.
Can anyone help me? Thanks in advance.

The /* do something */ body will be executed ⌈n/2⌉ times (n/2 rounded up).
Hence the complexity is O(n/2) = O(n), the same as for the previous loop.
In practice the second loop will obviously take less time, since it executes only half as many statements. But as n grows, the running time grows linearly for both loops.
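If you want the exact counts rather than just the bound, here is a minimal sketch (the counter names are mine, purely for illustration) that counts the iterations of both loops:

#include <iostream>

int main() {
    for (int n : {7, 8, 100}) {
        int every = 0, every_other = 0;
        for (int i = 0; i < n; i++)    every++;        // n iterations
        for (int i = 0; i < n; i += 2) every_other++;  // ceil(n/2) iterations
        std::cout << "n=" << n << ": " << every
                  << " vs " << every_other << "\n";
    }
}

For n = 7 it prints 7 vs 4, i.e. n versus ⌈n/2⌉; so the exact growth function of the second loop is c·⌈n/2⌉ plus constant overhead, for some per-iteration cost c.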

for(int i=0;i<n;i++) { /* do something */ }
Its complexity is unknown as long as we don't know what /* do something */ does. If the loop body contains no inner loops that depend on n, the complexity is O(n).
for(int i=0;i<n;i+=2) { /* do something */ }
Same as above: this has complexity O(n) too. Note that O(n/2) = O(n).

When we use Big-O notation, we assume n is a positive integer.
The loop counter i can grow in 2 ways:
1 - Linearly
2 - Exponentially
i grows exponentially if we multiply it by an integer > 1 each iteration. Consider the code snippet
for(i=1;i<n;i=i*2){
}
Here i increases by larger and larger steps: 2, 4, 8, 16, 32, ...
i grows linearly if we increment it by an integer > 0 each iteration (the code that you gave grows linearly).
For linear growth of the counter, the time complexity is O(n).
For exponential growth of the counter, the time complexity is O(log n).
Lets consider another example
for(i=0;i<n*10;i++){ // has complexity O(n)
}
for(i=0;i<n*n;i++){ // has complexity O(n*n)
}
This is because 10 is a constant, while n is not.
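To see the O(log n) behavior of the doubling loop concretely, here is a minimal sketch (mine, purely for illustration) that counts its iterations:

#include <cmath>
#include <iostream>

int main() {
    for (int n : {16, 1024, 1000000}) {
        int count = 0;
        for (int i = 1; i < n; i *= 2)
            count++;  // i takes the values 1, 2, 4, 8, ...
        std::cout << "n=" << n << ": " << count
                  << " iterations, log2(n)=" << std::log2(n) << "\n";
    }
}

For n = 1024 it prints 10 iterations, matching log2(1024) = 10.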

Related

Proof for the time complexity in big O of a inserting unique numbers in a list

I was writing a simple loop in C++ and was wondering what the time complexity would be.
My intuition tells me that it is O(n*log(n)), but I couldn't come up with a proof for the n*log(n).
std::vector<int> nums{1,2,3,4,5,1,1,1,1,3,3,3,3};
std::vector<int> unique_nums;
// get all unique nums
for(auto n: nums)
{
if(std::find(unique_nums.begin(), unique_nums.end(), n) == unique_nums.end())
{
unique_nums.push_back(n);
}
}
This is my current understanding: std::find is a linear-time search through the vector, but the size of the unique_nums vector grows at an unknown rate.
Questions are:
Am I correct in thinking that the time complexity is O(n*log(n))?
If that is true, is there a proof or formula that I could apply to see that?
Worst case is when the input has only unique numbers. In that case, the equivalent is:
for(auto n: nums)
{
std::find(unique_nums.begin(), unique_nums.end(), n); // we already know that it is not found
// because input is unique numbers
unique_nums.push_back(n); // and we have to push_back anyhow
}
std::find is linear in the size of unique_nums and push_back is amortized constant. The sum 1 + 2 + ... + n equals n(n+1)/2, whose leading term is n^2, hence we can say that the complexity of the loop is O(nums.size()*nums.size()).
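As an aside, here is a hedged sketch (the function name is mine) of how the worst case can be brought down to O(n·log n): track the values seen so far in a std::set, whose insert and lookup cost O(log n) each.

#include <set>
#include <vector>

// Collect the unique values of nums in order of first appearance.
// Each element costs one O(log n) set insertion, so the total is
// O(n log n) even when all inputs are unique.
std::vector<int> unique_values(const std::vector<int>& nums) {
    std::set<int> seen;
    std::vector<int> result;
    for (int v : nums)
        if (seen.insert(v).second)  // .second is true on first occurrence
            result.push_back(v);
    return result;
}

Using std::unordered_set instead would make the expected total cost O(n).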

What is the time complexity of linked list traversal using recursion? [duplicate]

I have gone through Google and Stack Overflow search, but nowhere was I able to find a clear and straightforward explanation of how to calculate time complexity.
What do I know already?
Say for code as simple as the one below:
char h = 'y'; // This will be executed 1 time
int abc = 0; // This will be executed 1 time
Say for a loop like the one below:
for (int i = 0; i < N; i++) {
Console.Write("Hello, World!!");
}
int i = 0; will be executed only once.
The time is actually counted for the assignment i = 0, not for the declaration.
i < N; will be executed N+1 times.
i++ will be executed N times.
So the number of operations required by this loop is {1 + (N+1) + N} = 2N + 2. (But this still may be wrong, as I am not confident about my understanding.)
OK, so these small basic calculations I think I know, but in most cases I have seen the time complexity given as O(N), O(n^2), O(log n), O(n!), and many others.
How to find time complexity of an algorithm
You add up how many machine instructions it will execute as a function of the size of its input, and then simplify the expression to its largest term (when N is very large), dropping any constant factor.
For example, lets see how we simplify 2N + 2 machine instructions to describe this as just O(N).
Why do we remove the two 2s ?
We are interested in the performance of the algorithm as N becomes large.
Consider the two terms 2N and 2.
What is the relative influence of these two terms as N becomes large? Suppose N is a million.
Then the first term is 2 million and the second term is only 2.
For this reason, we drop all but the largest terms for large N.
So, now we have gone from 2N + 2 to 2N.
Traditionally, we are only interested in performance up to constant factors.
This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.
So 2N becomes just N.
This is an excellent article: Time complexity of algorithm
The below answer is copied from above (in case the excellent link goes bust)
The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:
statement;
Is constant. The running time of the statement will not change in relation to N.
for ( i = 0; i < N; i++ )
statement;
Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
for ( i = 0; i < N; i++ ) {
for ( j = 0; j < N; j++ )
statement;
}
Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases fourfold.
while ( low <= high ) {
mid = ( low + high ) / 2;
if ( target < list[mid] )
high = mid - 1;
else if ( target > list[mid] )
low = mid + 1;
else break;
}
Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
void quicksort(int list[], int left, int right)
{
    if (left >= right) return;  // base case: sub-array of size 0 or 1
    int pivot = partition(list, left, right);
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}
Is N * log (N). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.
In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O(<type>) where <type> is the measure. The quicksort algorithm would be described as O(N * log(N)).
Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)
Taken from here - Introduction to Time Complexity of an Algorithm
1. Introduction
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
2. Big O notation
The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.
For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3). More on that later.
A few more examples:
1 = O(n)
n = O(n^2)
log(n) = O(n)
2n + 1 = O(n)
3. O(1) constant time:
An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
Examples:
array: accessing any element
fixed-size stack: push and pop methods
fixed-size queue: enqueue and dequeue methods
4. O(n) linear time
An algorithm is said to run in linear time if its time execution is directly proportional to the input size, i.e. time grows linearly as input size increases.
Consider the following examples. Below I am linearly searching for an element, and this has a time complexity of O(n).
int find = 66;
var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
for (int i = 0; i < numbers.Length; i++)
{
if(find == numbers[i])
{
return;
}
}
More Examples:
Array: Linear Search, Traversing, Find minimum etc
ArrayList: contains method
Queue: contains method
5. O(log n) logarithmic time:
An algorithm is said to run in logarithmic time if its time execution is proportional to the logarithm of the input size.
Example: Binary Search
Recall the "twenty questions" game - the task is to guess the value of a hidden number in an interval. Each time you make a guess, you are told whether your guess is too high or too low. Twenty questions game implies a strategy that uses your guess number to halve the interval size. This is an example of the general problem-solving method known as binary search.
6. O(n^2) quadratic time
An algorithm is said to run in quadratic time if its time execution is proportional to the square of the input size.
Examples:
Bubble Sort
Selection Sort
Insertion Sort
7. Some useful links
Big-O Misconceptions
Determining The Complexity Of Algorithm
Big O Cheat Sheet
Several examples of loops:
O(n): The time complexity of a loop is O(n) if the loop variable is incremented or decremented by a constant amount. For example, the following loops have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
// some O(1) expressions
}
O(n^c): The time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity:
for (int i = 1; i <=n; i += c) {
for (int j = 1; j <=n; j += c) {
// some O(1) expressions
}
}
for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}
For example, selection sort and insertion sort have O(n2) time complexity.
O(log n): The time complexity of a loop is O(log n) if the loop variable is divided or multiplied by a constant amount each iteration.
for (int i = 1; i <=n; i *= c) {
// some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
// some O(1) expressions
}
For example, binary search has O(log n) time complexity.
O(log log n): The time complexity of a loop is O(log log n) if the loop variable is increased or decreased exponentially (e.g., raised to a constant power or reduced to a constant root) each iteration.
// Here c is a constant greater than 1
for (int i = 2; i <=n; i = pow(i, c)) {
// some O(1) expressions
}
//Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) {
// some O(1) expressions
}
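To see where the double logarithm comes from in the first loop: after k iterations, i takes the values 2, 2^c, 2^(c^2), ..., 2^(c^k). The loop stops when 2^(c^k) > n, i.e. when c^k > log2(n), i.e. when k > log_c(log2(n)). So the loop runs O(log log n) times.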
One example of time complexity analysis
int fun(int n)
{
for (int i = 1; i <= n; i++)
{
for (int j = 1; j < n; j += i)
{
// Some O(1) task
}
}
}
Analysis:
For i = 1, the inner loop is executed n times.
For i = 2, the inner loop is executed approximately n/2 times.
For i = 3, the inner loop is executed approximately n/3 times.
For i = 4, the inner loop is executed approximately n/4 times.
…………………………………………………….
For i = n, the inner loop is executed approximately n/n times.
So the total time complexity of the above algorithm is (n + n/2 + n/3 + … + n/n), which becomes n * (1/1 + 1/2 + 1/3 + … + 1/n)
The important fact about the series (1/1 + 1/2 + 1/3 + … + 1/n) is that it is the harmonic series, which grows as Θ(log n). So the time complexity of the above code is O(n·log n).
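If you want to check this empirically, here is a small sketch (the counter is mine) that counts the inner-loop executions and compares them with n·ln(n), the predicted order of growth:

#include <cmath>
#include <iostream>

int main() {
    for (int n : {100, 1000, 10000}) {
        long long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j < n; j += i)
                count++;  // one unit of the O(1) task
        std::cout << "n=" << n << ": " << count
                  << " vs n*ln(n)=" << n * std::log(n) << "\n";
    }
}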
Time complexity with examples
1 - Basic operations (arithmetic, comparisons, accessing array’s elements, assignment): The running time is always constant O(1)
Example:
read(x) // O(1)
a = 10; // O(1)
a = 1,000,000,000,000,000,000 // O(1)
2 - If then else statement: Only taking the maximum running time from two or more possible statements.
Example:
age = read(x) // (1+1) = 2
if age < 17 then begin // 1
status = "Not allowed!"; // 1
end else begin
status = "Welcome! Please come in"; // 1
visitors = visitors + 1; // 1+1 = 2
end;
So, the complexity of the above pseudocode is T(n) = 2 + 1 + max(1, 1+2) = 6. Thus, its big-O is still constant: T(n) = O(1).
3 - Looping (for, while, repeat): Running time for this statement is the number of loops multiplied by the number of operations inside that looping.
Example:
total = 0; // 1
for i = 1 to n do begin // (1+1)*n = 2n
total = total + i; // (1+1)*n = 2n
end;
writeln(total); // 1
So, its complexity is T(n) = 1 + 2n + 2n + 1 = 4n + 2. Thus, T(n) = O(n).
4 - Nested loop (looping inside looping): Since there is at least one loop inside the main loop, the running time is O(n^2), O(n^3), and so on.
Example:
for i = 1 to n do begin // (1+1)*n = 2n
for j = 1 to n do begin // (1+1)n*n = 2n^2
x = x + 1; // (1+1)n*n = 2n^2
print(x); // (n*n) = n^2
end;
end;
Common running time
There are some common running times when analyzing an algorithm:
O(1) – Constant time
Constant time means the running time is constant, it’s not affected by the input size.
O(n) – Linear time
When an algorithm accepts n input size, it would perform n operations as well.
O(log n) – Logarithmic time
An algorithm with running time O(log n) is much faster than one with O(n). Commonly, such an algorithm divides the problem into subproblems of the same size. Example: binary search algorithm, binary conversion algorithm.
O(n log n) – Linearithmic time
This running time is often found in "divide & conquer algorithms" which divide the problem into sub problems recursively and then merge them in n time. Example: Merge Sort algorithm.
O(n^2) – Quadratic time
Look at the Bubble Sort algorithm!
O(n^3) – Cubic time
It follows the same principle as O(n^2).
O(2^n) – Exponential time
It is very slow as the input gets larger: if n = 1,000,000, T(n) would be 2^1,000,000. Brute-force algorithms have this running time.
O(n!) – Factorial time
The slowest!!! Example: Travelling salesman problem (TSP)
It is taken from this article. It is very well explained and you should give it a read.
When you're analyzing code, you have to analyze it line by line, counting every operation and recognizing its time complexity. In the end, you have to sum it all to get the whole picture.
For example, you can have one simple loop with linear complexity, but later in that same program you can have a triple loop that has cubic complexity, so your program will have cubic complexity. Function order of growth comes into play right here.
Let's look at what are possibilities for time complexity of an algorithm, you can see order of growth I mentioned above:
Constant time has an order of growth 1, for example: a = b + c.
Logarithmic time has an order of growth log N. It usually occurs when you're dividing something in half (binary search, trees, and even loops), or multiplying something in same way.
Linear. The order of growth is N, for example
int p = 0;
for (int i = 1; i < N; i++)
p = p + 2;
Linearithmic. The order of growth is N·log N. It usually occurs in divide-and-conquer algorithms.
Cubic. The order of growth is N^3. A classic example is a triple loop where you check all triplets:
int x = 0;
for (int i = 0; i < N; i++)
for (int j = 0; j < N; j++)
for (int k = 0; k < N; k++)
x = x + 2;
Exponential. The order of growth is 2^N. It usually occurs when you do exhaustive search, for example, checking subsets of some set.
Loosely speaking, time complexity is a way of summarising how the number of operations or run-time of an algorithm grows as the input size increases.
Like most things in life, a cocktail party can help us understand.
O(N)
When you arrive at the party, you have to shake everyone's hand (do an operation on every item). As the number of attendees N increases, the time/work it will take you to shake everyone's hand increases as O(N).
Why O(N) and not cN?
There's variation in the amount of time it takes to shake hands with people. You could average this out and capture it in a constant c. But the fundamental operation here --- shaking hands with everyone --- would always be proportional to O(N), no matter what c was. When debating whether we should go to a cocktail party, we're often more interested in the fact that we'll have to meet everyone than in the minute details of what those meetings look like.
O(N^2)
The host of the cocktail party wants you to play a silly game where everyone meets everyone else. Therefore, you must meet N-1 other people and, because the next person has already met you, they must meet N-2 people, and so on. The sum of this series is N^2/2 - N/2. As the number of attendees grows, the N^2 term gets big fast, so we just drop everything else.
O(N^3)
You have to meet everyone else and, during each meeting, you must talk about everyone else in the room.
O(1)
The host wants to announce something. They ding a wineglass and speak loudly. Everyone hears them. It turns out it doesn't matter how many attendees there are, this operation always takes the same amount of time.
O(log N)
The host has laid everyone out at the table in alphabetical order. Where is Dan? You reason that he must be somewhere between Adam and Mandy (certainly not between Mandy and Zach!). Given that, is he between George and Mandy? No. He must be between Adam and Fred, and between Cindy and Fred. And so on... we can efficiently locate Dan by looking at half the set and then half of that set. Ultimately, we look at O(log_2 N) individuals.
O(N log N)
You could find where to sit down at the table using the algorithm above. If a large number of people came to the table, one at a time, and all did this, that would take O(N log N) time. This turns out to be how long it takes to sort any collection of items when they must be compared.
Best/Worst Case
You arrive at the party and need to find Inigo - how long will it take? It depends on when you arrive. If everyone is milling around you've hit the worst-case: it will take O(N) time. However, if everyone is sitting down at the table, it will take only O(log N) time. Or maybe you can leverage the host's wineglass-shouting power and it will take only O(1) time.
Assuming the host is unavailable, we can say that the Inigo-finding algorithm has a lower-bound of O(log N) and an upper-bound of O(N), depending on the state of the party when you arrive.
Space & Communication
The same ideas can be applied to understanding how algorithms use space or communication.
Knuth has written a nice paper about the former entitled "The Complexity of Songs".
Theorem 2: There exist arbitrarily long songs of complexity O(1).
PROOF (due to Casey and the Sunshine Band): Consider the songs S_k defined by (15), but with
V_k = 'That's the way,' U 'I like it,' U
U = 'uh huh,' 'uh huh'
for all k.
For the mathematically-minded people: The master theorem is another useful thing to know when studying complexity.
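For reference, one common statement of it: for recurrences of the form T(n) = a·T(n/b) + f(n) with a >= 1 and b > 1,
if f(n) = O(n^(log_b(a) - ε)) for some ε > 0, then T(n) = Θ(n^(log_b(a)));
if f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) · log n);
if f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and a·f(n/b) <= c·f(n) for some c < 1 and all large n, then T(n) = Θ(f(n)).
For example, merge sort's recurrence T(n) = 2T(n/2) + Θ(n) falls into the second case, giving Θ(n·log n).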
O(n) is the big-O notation used for writing the time complexity of an algorithm. When you add up the number of executions in an algorithm, you'll get an expression such as 2N + 2. In this expression, N is the dominating term (the term that has the largest effect on the expression as its value increases or decreases). Now O(N) is the time complexity, where N is the dominating term.
Example
for i = 1 to n
    j = 0;
    while (j <= n)
        j = j + 1;
Here the inner loop executes n + 1 times per iteration of the outer loop, and the outer loop executes n times, so the total number of executions for the whole algorithm is n(n + 1) = n^2 + n.
Here n^2 is the dominating term, so the time complexity for this algorithm is O(n^2).
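A quick C++ sketch (the counter is mine, added for illustration) confirming that count:

#include <iostream>

int main() {
    int n = 100;
    long long count = 0;
    for (int i = 1; i <= n; i++) {
        int j = 0;
        while (j <= n) {  // j runs 0..n: n + 1 passes
            j = j + 1;
            count++;
        }
    }
    std::cout << count << "\n";  // prints n * (n + 1) = 10100
}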
Other answers concentrate on the big-O-notation and practical examples. I want to answer the question by emphasizing the theoretical view. The explanation below is necessarily lacking in details; an excellent source to learn computational complexity theory is Introduction to the Theory of Computation by Michael Sipser.
Turing Machines
The most widespread model to investigate any question about computation is a Turing machine. A Turing machine has a one dimensional tape consisting of symbols which is used as a memory device. It has a tapehead which is used to write and read from the tape. It has a transition table determining the machine's behaviour, which is a fixed hardware component that is decided when the machine is created. A Turing machine works at discrete time steps doing the following:
It reads the symbol under the tapehead.
Depending on the symbol and its internal state, which can only take finitely many values, it reads three values s, σ, and X from its transition table, where s is an internal state, σ is a symbol, and X is either Right or Left.
It changes its internal state to s.
It changes the symbol it has read to σ.
It moves the tapehead one step according to the direction in X.
Turing machines are powerful models of computation. They can do everything that your digital computer can do. They were introduced before the advent of modern digital computers by the father of theoretical computer science, the mathematician Alan Turing.
Time Complexity
It is hard to define the time complexity of a single problem like "Does white have a winning strategy in chess?" because there is a machine which runs for a single step giving the correct answer: either the machine which says directly 'No' or the one which says directly 'Yes'. To make the notion work, we instead define the time complexity of a family of problems L, each of which has a size, usually the length of the problem description. Then we take a Turing machine M which correctly solves every problem in that family. When M is given a problem of this family of size n, it solves it in finitely many steps. Let us call f(n) the longest possible time it takes M to solve problems of size n. Then we say that the time complexity of L is O(f(n)), which means that there is a Turing machine which will solve an instance of it of size n in at most C·f(n) time, where C is a constant independent of n.
Isn't it dependent on the machines? Can digital computers do it faster?
Yes! Some problems can be solved faster by other models of computation; for example, two-tape Turing machines solve some problems faster than those with a single tape. This is why theoreticians prefer to use robust complexity classes such as NL, P, NP, PSPACE, EXPTIME, etc. For example, P is the class of decision problems whose time complexity is O(p(n)), where p is a polynomial. The class P does not change even if you add ten thousand tapes to your Turing machine, or use other types of theoretical models such as random-access machines.
A Difference in Theory and Practice
It is usually assumed that the time complexity of integer addition is O(1). This assumption makes sense in practice because computers use a fixed number of bits to store numbers in many applications. There is no reason to assume such a thing in theory, so the time complexity of addition is O(k), where k is the number of bits needed to represent the integer.
Finding The Time Complexity of a Class of Problems
The straightforward way to show that the time complexity of a problem is O(f(n)) is to construct a Turing machine which solves it in O(f(n)) time. Creating Turing machines for complex problems is not trivial; one needs some familiarity with them. A transition table for a Turing machine is rarely given explicitly; machines are usually described at a high level. It becomes easier to see how long a machine will take to halt as one gets familiar with them.
Showing that a problem is not of O(f(n)) time complexity is another story... Even though there are some results like the time hierarchy theorem, there are many open problems here. For example, whether all problems in NP are in P, i.e. solvable in polynomial time, is one of the seven Millennium Prize Problems in mathematics; its solver will be awarded a million dollars.

What is the time complexity of traversing a 2d array

What is the time complexity of traversing (rows, columns) a two-dimensional array?
int check(int array[9][9])
{
int num=0;
for (int i = 0; i < 9; i++) {
for (int j = 0; j < 9; j++) {
if (array [i][j] == 0) {
num++;
}
}
}
return num;
}
I think each for loop takes the square root of n, so the nested loops together take O(n) to traverse all elements, where I am defining n as the total size of the input (in this case the 81 elements of the array). Is that correct?
As you define n to be the total size of the input, yes the running time of the algorithm you propose will be O(n): you are performing one single operation on each element of the input, for n total operations.
Where the confusion is arising from this question is that by convention, multi-dimensional arrays are not referred to by their total size but rather by each of their dimensions separately. So rather than viewing array as being of size n (81) it would be considered to be an array of size p x q (9 x 9). That would give you a running time of O(pq). Or, if we limit it to square arrays with both dimensions r, O(r^2).
All are correct, which is why it's important to give a clear definition of your variables up front when talking about time complexity. Otherwise, when you use n to mean the total size when most people would assume that n would be a single dimension, you are inviting a lot of confusion.
The time complexity will be O(n*m), where n is the number of rows (the 1st dimension) and m is the maximum size of each inner array (the 2nd dimension).
For any algorithm of the form
for (1..n) {
for (1..m) {
doSomething();
}
}
the average, best and worst case time complexity is O(n * m). In your case, if n = m, it becomes O(n^2).
The time complexity is O(N), which means its time complexity is linear.
Let's look at the concept of time complexity. When we state a time complexity in Big-O notation, we are describing how the graph of N versus running time must look in the worst case of execution.
For the given nested loops, the size of the data is 9*9 = 81. No matter what operation you perform inside the inner for loop, the loops will not execute more than 9*9 = 81 times. If the size of the array were [10][10], the loops would execute no more than 100 times.
If you plot the execution time of the code against the number of inputs, the graph will be linear.
The time complexity is derived from how many times your code looks up an element in the data structure to produce the result. It does not matter whether it is a 1-D, 2-D or n-D array. If you access each element no more than once in an n-D array, the complexity is linear, O(N), where N = N1 * N2 * ... * Nn.
Let's understand this by taking real world example of two different hotels having N rooms each. You need to search your friend in the hotel.
In the first scenario, say the first hotel has 100 rooms on a single (ground) floor; you need to visit up to 100 rooms in the worst case to find your friend, so the complexity is linear, i.e. O(N) with N = 100.
In the second scenario the hotel has 4 floors with 25 rooms each. In the worst case you have to visit 25*4 = 100 rooms (ignoring the time to move between floors), hence the complexity is again linear.
A 2-D array arr[i][j] can also be traversed with a single loop, where the loop runs (i × j) times.
Taking n = (i × j), the time complexity for traversing a 2-D array is O(n).
Thanks to coder2design.com
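To make the single-loop traversal concrete, here is a sketch (the function name is mine) of the question's 9x9 case, using the standard row-major mapping from a flat index to (row, column):

// One loop over all 81 cells; still one visit per element, so O(n)
// with n defined as the total number of elements.
int check_flat(int array[9][9]) {
    int num = 0;
    for (int k = 0; k < 9 * 9; k++) {
        int i = k / 9;  // row
        int j = k % 9;  // column
        if (array[i][j] == 0)
            num++;
    }
    return num;
}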

What is the algorithmic complexity of the code below

Is the Big-O for the following code O(n) or O(log n)?
for (int i = 1; i < n; i*=2)
sum++;
It looks like O(n), or am I missing something completely?
It is O(log n), since i is doubled each time. Overall you need to iterate k times until 2^k >= n, which happens when k = log2(n) (since 2^(log2(n)) = n).
Simple example: Assume n = 100 - then:
iter1: i = 1
iter2: i = 2
iter3: i = 4
iter4: i = 8
iter5: i = 16
iter6: i = 32
iter7: i = 64
iter8: i = 128 > 100
It is easy to see that one iteration is added whenever n doubles, which is logarithmic behavior, whereas linear behavior adds iterations for each constant increase of n.
P.S. (EDIT): mathematically speaking, the algorithm is indeed O(n), since big-O notation gives an asymptotic upper bound and your algorithm runs asymptotically "faster" than O(n). So it is indeed O(n), but that is not a tight bound (it is not Theta(n)), and I doubt that is actually what you are looking for.
The complexity is O(log n) because the loop runs about log2(n) times.
O(log(n)), as you only loop ~log2(n) times
No, the complexity is not linear. Try to play through a few scenarios: how many iterations does this loop do for n = 2, n = 4, n = 16, n = 1024? How about for n = 1024 * 1024? Maybe this will help you get the correct answer.
The for-loop condition is checked lg(n) + 1 times and the body runs lg(n) times. So the complexity is O(lg n), which is the same as O(log n), since logarithms of different bases differ only by a constant factor.
If n==8, the following is how the code will run:
i=1
i=2
i=4
i=8 --Exit condition
It is O(log(n)).
Look at the statement sum++;
It executes O(log(n)) times.

complexity analysis of algorithm

Here is code which fills a two-dimensional array with randomly generated numbers in the range [1, 19] without duplication. My question is: how do I determine its complexity?
For example, I can see that its running time is at least O(n^2) because of its inner and outer loops, but what about the goto statement?
Here is my code:
#include <iostream>
#include <set>
#include <cstdlib>
using namespace std;
int main()
{
int min=1;
int max=19;
int a[3][3];
set<int>b;
for (int i=0; i<3; i++)
{
for (int j=0; j<3; j++)
{
loop:
int m = min + rand() % (max - min + 1); // +1 so that max itself can be drawn
if (b.find(m)==b.end())
{
a[i][j]=m;
b.insert(m);
}
else
goto loop;
}
}
for (int i=0; i<3; i++)
{
for (int j=0; j<3; j++)
cout<< a[i][j]<<" ";
cout<<endl;
}
return 0;
}
I would say that the complexity of the algorithm is c*O(n^2), where c is some constant, because when it finds a duplicated element inside the loops it repeats the random-number generation, which takes some constant time. Am I right?
As the likelihood of getting a working number decreases, the number of goto-loops increases.
For a uniform random number generator, the expected number of retries grows with how many numbers have already been used; it definitely doesn't just add a constant to your complexity.
If n is the number of elements in a, then on average it'll scale with O(n²) (or, if n is the number of rows of the square matrix a, O(n⁴)).
A much simpler implementation would be to use a Fisher-Yates shuffle.
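A hedged sketch of that idea (variable names are mine; std::shuffle implements a Fisher-Yates shuffle): shuffle the 19 candidate values once and take the first nine, which removes the retry loop entirely and makes the running time deterministic.

#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::vector<int> values(19);                 // candidates 1..19
    std::iota(values.begin(), values.end(), 1);
    std::mt19937 rng(std::random_device{}());
    std::shuffle(values.begin(), values.end(), rng);  // Fisher-Yates

    int a[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            a[i][j] = values[i * 3 + j];         // first 9 shuffled values

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            std::cout << a[i][j] << " ";
        std::cout << std::endl;
    }
    return 0;
}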
It's O(infinity). The O notation gives an upper bound. Because of your use of rand() in a loop, there's no guarantee that you will make progress. Therefore, no upper bound exists.
[edit]
Ok, people also want other complexities than the conventional, worst-case complexity.
The worst-case complexity is obtained by assuming that the RNG generates an infinite series of ones; this means that even the first loop iteration doesn't finish. Therefore there's no finite upper bound on the run time: O(infinity).
The best-case complexity is obtained by assuming that the RNG generates sequential numbers. That means the cost of each iteration is O(log N) (set::find), and there are O(N)*O(N) iterations, so the upper bound is O(N^2 log N).
The average-case complexity is harder. Assuming that max = k*N*N for some k > 1, the RNG will successfully pick an "unused" number in O(1) expected time. Even after N*N numbers have been chosen, there are still (k-1)*N*N unused numbers, so the chance p of picking an unused number is p >= ((k-1)*N*N)/(k*N*N) = (k-1)/k. That means we can expect to pick an unused number in k/(k-1) attempts, which is independent of N and therefore O(1). set::find still dominates the cost of each iteration, at O(log N). We still have the same number of iterations, so we get the same upper bound of O(N^2 log N).
The goto loops until a random number is drawn that is not already in the set.
If the distribution of random numbers is uniform, "retry ... until" is linear on average with respect to the amplitude of the range.
But this linear factor multiplies the complexity of set::find, which is O(log n) (set::insert happens just once per element).
The two outer for loops have constant bounds (their timing doesn't depend on the data), hence they just multiply the time but don't increase the complexity.
"Complexity" is not about how much absolute time (or space) your program takes. It is about how much the time (or space) increases when you increase the size of your program's input data.
(BTW O for time and O for space may be different.)
Time Complexity
Assuming n is number of elements in the matrix, you have to ask yourself what happens when you add a single element to your matrix (i.e. when n becomes n+1):
You need to iterate over the new element, which is O(1). We are talking about one additional iteration here, so the double loop does not matter.
You have another iteration for printing, which is also O(1), assuming cout<< is O(1).
You have to find the element which is O(log(n)) - the std::set is typically implemented as a red-black tree.
You have to retry the find (via goto) potentially several times. Depending on rand(), min, max and the width of int, the number of retries may be O(1) (i.e. it does not increase as the number of elements grows) or it may be worse than that.
You have to insert the element which is O(log(n)).
Assuming the "best" rnd, you are looking at the following increase for one element...
(O(1) + O(1)) * (O(log(n)) * O(1) + O(log(n)) = O(1) * O(log(n)) = O(log(n))
...so for n elements, your complexity is:
(O(n) + O(n)) * (O(log(n)) * O(1) + O(log(n)) = O(n) * O(log(n)) = O(n * log(n))
Assuming "bad" rnd of O(n), you are looking at...
(O(n) + O(n)) * (O(log(n)) * O(n) + O(log(n)) = O(n) * O(n * log(n)) = O(n^2 * log(n))
Space Complexity
Your matrix is O(n) and std::set is O(n) so you are O(n) here overall.