This question is for revision purposes, from a past exam paper. I just want to know if I am on the right track.
1. int i = 1;
2. while (i <= n) {
3.     for (int j = 1; j < 10; j++)
4.         sum++;
5.     i++;
6. }
7. for (int j = 1; j <= n; j++)
8.     for (int k = 1; k <= n; k = k*2)
9.         sum++;
1.) How many times is statement 4 executed? A. O(n) B. O(n^2) C. O(log n) D. O(n log n) E. none of the above
Here I chose A
2.) How many times is statement 9 executed? A. O(n) B. O(n^2) C. O(log n) D. O(n log n) E. none of the above
Because of line 8 (k=k*2) I chose C
3.) What is the running time of the entire code fragment? A. O(n) B. O(n^2) C. O(log n) D. O(n log n)
Since O(n) + O(log n) = O(n), I chose A.
Your answer 1 is correct: statement 4 is inside a loop controlled only by n (the inner loop contributes only a constant factor).
Answer 2 is incorrect. It would be O(log n) if line 7 did not exist, but because line 7 forces lines 8 and 9 to run n times, the answer is O(n log n).
Answer 3 uses the correct reasoning but suffers from the fact that answer 2 was wrong. O(n) + O(n log n) simplifies to O(n log n).
So the answers are A, D and D.
I don't know how the questions were formulated, but if the wording is as you say, your examiner didn't use the precise definition of big O (at least when expecting those "right" answers), since big O classes include the smaller ones. So something that executes as a function of n in f(n) = 10n, which is linear, is also in O(n), O(n^2) and O(n log n).
If one asks for the "smallest" possible class, your answers would be:
Statement 4 is executed 9n times (the inner loop runs for j = 1..9), so A.
Statement 9 is executed n*log2(n) times, so D.
The whole fragment executes the sum of both, 9n + n*log2(n) (in your calculation you lost a factor of n: it is O(n) + O(n log n), not O(n) + O(log n)), so D would be right.
So if multiple answers were possible and the question only asked in which classes the execution counts lie, the right answers would be:
1.) A, B, D
2.) B, D
3.) B, D
Ans 1: A, i.e. O(n), as statement 4 would be executed 9n times.
Ans 2: D, i.e. O(n log(n)), as statement 9 would be executed n*log(n) times.
Ans 3: D, as the overall complexity, O(n) + O(n log(n)), simplifies to O(n log(n)).
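To sanity-check these counts empirically (my own sketch, not part of the original answers), one can instrument the fragment with counters and compare against 9n and n*(floor(log2(n)) + 1):

#include <iostream>

int main() {
    for (long n : {8L, 64L, 1024L}) {
        long count4 = 0, count9 = 0, sum = 0;
        long i = 1;
        while (i <= n) {
            for (int j = 1; j < 10; j++) { sum++; count4++; } // statement 4
            i++;
        }
        for (long j = 1; j <= n; j++)
            for (long k = 1; k <= n; k = k * 2) { sum++; count9++; } // statement 9
        // statement 4 runs 9n times (j = 1..9); statement 9 runs
        // n * (floor(log2(n)) + 1) times -- Theta(n) and Theta(n log n).
        std::cout << "n=" << n << "  stmt4=" << count4
                  << "  stmt9=" << count9 << "\n";
    }
}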
I want to optimize this loop. Its time complexity is O(n^2). I want something like O(n) or O(log(n)).
for (int i = 1; i <= n; i++) {
    for (int j = i+1; j <= n; j++) {
        if (a[i] != a[j] && a[a[i]] == a[a[j]]) {
            x = 1;
            break;
        }
    }
}
The a[i] satisfy 1 <= a[i] <= n.
This is what I would try:
Let us call B the image of a[], i.e. the set {a[i]}: B = {b[k]; k = 1..K, such that there exists an i with a[i] = b[k]}.
For each value b[k], k = 1..K, determine the set Ck = {i; a[i] = b[k]}.
Determining B and the Ck can be done in linear time.
Then let us examine the sets Ck one by one:
If Card(Ck) = 1: k++
If Card(Ck) > 1: if two elements of Ck are elements of B, then x = 1; else k++
I will use a table (std::vector<bool>) to memorize whether an element of 1..N belongs to B or not.
I hope I haven't made a mistake; I have no time to write a program just now. I could do it later, but I guess you will be able to do it easily.
Note: I discovered after sending this answer that @Mike Borkland proposed something similar already in a comment...
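A minimal sketch of this idea in C++ (my addition, not part of the original answer), assuming a 1-based array a[1..n] with values in [1, n]; it uses the std::vector<bool> tables mentioned above and runs in O(n):

#include <vector>

// Returns 1 if there exist i, j with a[i] != a[j] and a[a[i]] == a[a[j]].
int hasPair(const std::vector<int>& a, int n) {   // a[0] is unused
    std::vector<bool> inB(n + 1, false);          // inB[v]: v belongs to the image B
    for (int i = 1; i <= n; i++)
        inB[a[i]] = true;

    std::vector<bool> seen(n + 1, false);         // seen[v]: some m in B with a[m] == v was met
    for (int m = 1; m <= n; m++) {
        if (!inB[m]) continue;                    // consider only indices that are image values
        if (seen[a[m]]) return 1;                 // two distinct m, p in B with a[m] == a[p]
        seen[a[m]] = true;
    }
    return 0;
}

Both passes are linear, matching the claim that determining B and the Ck can be done in linear time.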
Since sometimes you need to see a solution to learn, I'm providing you with a small function that does the job you want. I hope it helps.
#include <string.h>

#define MIN 1
#define MAX 100000 // 10^5

int seek(int *arr, int arr_size)
{
    if (arr_size > MAX || arr_size < MIN)
        return 0;

    // Values are expected in [1, arr_size]; size the helper arrays so that
    // index arr_size is valid.
    unsigned char seen[arr_size + 1];    // seen[v]: v has occurred as arr[arr[i] - 1]
    unsigned char indices[arr_size + 1]; // indices[v]: v has occurred as arr[i]
    memset(seen, 0, arr_size + 1);
    memset(indices, 0, arr_size + 1);

    for (int i = 0; i < arr_size; i++)
    {
        if (arr[i] < MIN || arr[i] > arr_size)   // guard against out-of-range values
            return 0;
        // arr is 0-indexed but its values are 1-based, hence the -1 when a
        // value is used as an index.
        if (!indices[arr[i]] && seen[arr[arr[i] - 1]])
            return 1;
        seen[arr[arr[i] - 1]] = 1;
        indices[arr[i]] = 1;
    }
    return 0;
}
Ok, how and why does this work? First, let's take a look at the problem the original algorithm is trying to solve; they say a well-stated problem is half of the solution. The problem is to find out whether, in a given integer array A of size n whose elements are bounded between one and n ([1, n]), there exist two elements x and y in A such that x != y and A[x] = A[y] (the array at index x and at index y, respectively). Furthermore, we are looking for an algorithm with good time complexity, so that for n = 10000 the implementation runs within one second.
To begin with, let's analyze the problem. In the worst case, the array needs to be scanned completely at least once to decide whether such a pair of elements exists, so we can't do better than O(n). But how would you do that? One possible way is to scan the array and record, in another array B (of size n), whether a given index has appeared; likewise, record in another array C whether the number that A holds at the index given by the scanned element has appeared. If, while scanning, the current element has not yet appeared as an index but has appeared as such a number, then return yes. I have to say that this is a "classical trick" of using hash-table-like data structures.
The original tasks were: i) to reduce the time complexity (from O(n^2)), and ii) to make sure the implementation runs within a second for an array of size 10000. The proposed algorithm runs in O(n) time and space complexity. I tested with random arrays and it seems the implementation does its job much faster than required.
Edit: My original answer wasn't very useful, thanks for pointing that out. After checking the comments, I figured the code could help a bit.
Edit 2: I also added the explanation on how it works so it might be useful. I hope it helps :)
I want to optimize this loop. Its time complexity is O(n^2). I want something like O(n) or O(log(n)).
Well, the easiest thing is to sort the array first. That's O(n log(n)), and a linear scan looking for two equal adjacent elements is O(n), so the dominant complexity is unchanged at O(n log(n)).
You know how to use std::sort, right? And you know the complexity is O(n log(n))?
And you can figure out how to call std::adjacent_find, and you can see that the complexity must be linear?
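As an illustration of those two building blocks (my own sketch, not the answerer's code; it reports whether the array contains two equal elements, which is what the sorted adjacent scan detects):

#include <algorithm>
#include <vector>

// O(n log n) sort followed by an O(n) adjacent scan.
bool hasDuplicate(std::vector<int> v) {   // taken by value: sorts a copy
    std::sort(v.begin(), v.end());
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}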
The best possible complexity is linear time. This only allows us to make a constant number of linear traversals of the array. That means that if we need a lookup to determine, for each element, whether we saw that value before, the lookup needs to be constant time.
Do you know any data structures with constant time insertion and lookups? If so, can you write a simple one-pass loop?
Hint: std::unordered_set is the general solution for constant-time membership tests, and Damien's suggestion of std::vector<bool> is potentially more efficient for your particular case.
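A one-pass sketch along the lines of this hint (my illustration; the function name and the 1-based layout are my own assumptions):

#include <unordered_set>
#include <vector>

// Do two values x != y exist among the elements of a with a[x] == a[y]?
// a is 1-based (a[0] unused), values in [1, n]. Expected O(n) with hashing.
bool hasPair(const std::vector<int>& a) {
    std::unordered_set<int> elems;   // values that already occurred as some a[i]
    std::unordered_set<int> images;  // the corresponding a[value] entries
    for (std::size_t i = 1; i < a.size(); ++i) {
        int v = a[i];
        if (!elems.count(v) && images.count(a[v]))
            return true;             // some earlier u != v had a[u] == a[v]
        elems.insert(v);
        images.insert(a[v]);
    }
    return false;
}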
I am just starting with algorithms and I am trying to find out the running time in terms of 'n' for the while loop below.
int k = 1;
while (k < n-k) {
    k += k;
}
Here n > 2. I understand that the value of k doubles every time and that the loop stops once k reaches n/2 or more, but I am having difficulty expressing this in terms of n.
It's worth listing the important points:
k doubles on every loop iteration
the loop condition can be rewritten as: while (2*k < n)*
The essential question is: how many times do I have to double k until 2k is equal to or greater than n?
This is fairly easy; it is exactly how logarithms work. Take the number 2, for example. How many times do I have to double it to reach, let's say, 1000? The answer is log2(1000), rounded up.
Essentially, your algorithm makes log2(n) - 1 steps, which means that it runs in logarithmic time complexity.
*As François Andrieux correctly stated in his comment, while mathematically this statement is true, it is not always the case in programming, due to the representation limits of data types. For a large k, the expression 2*k might overflow and invalidate the whole expression, while with the same input the expression k < n-k will behave correctly.
Replace

while (k < n-k)
    k += k;

with the equivalent

while (2*k < n)
    k *= 2;

The latter is clearly O(log(n)): it makes log2(n) - 1 steps.
The expression k < n-k simplifies to k < n/2, so the time complexity should be O(log(n)) with base 2:

k = 1 -> 2 -> 4 -> 8 -> ... (m iterations)
2^(m-1) < n/2
m - 1 < log2(n/2)
m ~ log2(n)
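A quick empirical check of this derivation (my own snippet):

#include <cmath>
#include <iostream>

int main() {
    for (int n : {10, 100, 1000, 1000000}) {
        int k = 1, m = 0;
        while (k < n - k) { k += k; m++; }
        // m stays within 1 of log2(n) - 1
        std::cout << "n=" << n << "  iterations=" << m
                  << "  log2(n)=" << std::log2(n) << "\n";
    }
}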
As a beginner programmer, I've always had trouble working out the complexity of even simple code. The code in question is:
k = 1;
while (k <= n) {
    cout << k << endl;
    k = k * 2;
}
At first, I thought the complexity was O(log n) due to the k = k*2 line. As a test I ran the code and kept track of how many times it looped relative to the size of n; the count stayed relatively low even for large n. I am also fairly sure it is not O(n), because that would have taken much longer to run, but I could be wrong there, which is why I'm asking.
Thanks!
It is O(log n).
Each Iteration, k doubles - which means that in (log n) iterations it will be equal or greater than n.
In your example k doesn't increase by 1 (k++); it doubles every time, so the loop finishes in log(n) iterations. Remember that logarithms are the inverse operation of exponentiation. Logarithms appear when things are repeatedly halved or doubled, such as k in your example.
As you suggested, the provided example would be O(log n) due to the fact that k is being multiplied by a constant regardless of the size of n. This behavior can also be observed by comparing the necessary traversals of two very simple test cases.
For instance, if n = 10, it's easy to demonstrate that the program iterates through the loop 4 times (k = 1, 2, 4, 8).
Yet if you double the value of n so that n = 20, the program will only require one more traversal, whereas you would expect a program that is O(n) to require roughly twice as many traversals as the original test case.
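A quick check of those two cases (my own snippet, counting traversals of the posted loop):

#include <iostream>

int main() {
    for (int n : {10, 20}) {
        int count = 0;
        for (int k = 1; k <= n; k *= 2)
            count++;
        // prints 4 for n = 10 (k = 1, 2, 4, 8) and 5 for n = 20 (k = 1, ..., 16)
        std::cout << "n=" << n << "  traversals=" << count << "\n";
    }
}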
Example: 1~9

          1
        /   \
       2     3
      / \   / \
     4   5 6   7
    / \
   8   9

The depth of the tree (or just follow 1, 2, 4, 8, ...) is always ⌊log2(n)⌋ + 1, so the complexity is O(log n).
Given an unsorted array, replace every element with the first larger number that appears after it; assign -1 if no such number exists.
E.g. 3 1 2 5 9 4 8 should be converted to
5 2 5 9 -1 8 -1
Is there an O(nlogn) or O(n) approach?
Following is a way to do it in O(nlogn):

int newarr[n];
MinHeap heap;               // min-heap of indices, ordered by arr[index]
heap.push(0);
for (int i = 1; i < n; i++) {
    while (!heap.isEmpty() && arr[heap.top()] < arr[i]) {
        k = heap.pop();
        newarr[k] = arr[i]; // arr[i] is the first larger element after index k
    }
    heap.push(i);           // push the index, not the value
}
// Indices still in the heap have no larger element after them
while (!heap.isEmpty()) {
    k = heap.pop();
    newarr[k] = -1;
}
Time complexity: the code performs at most n inserts and n deletes on the heap, and each insert or delete takes O(logn), hence the total is O(nlogn).
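For reference, here is one way this sketch could be turned into runnable C++ using std::priority_queue (my addition, not the original answerer's code):

#include <iostream>
#include <queue>
#include <vector>

// Replace each element with the first larger element to its right, or -1.
// O(n log n): every index is pushed and popped at most once.
std::vector<int> nextLarger(const std::vector<int>& arr) {
    int n = arr.size();
    std::vector<int> out(n, -1);
    auto cmp = [&](int i, int j) { return arr[i] > arr[j]; }; // min-heap by value
    std::priority_queue<int, std::vector<int>, decltype(cmp)> heap(cmp);
    for (int i = 0; i < n; i++) {
        while (!heap.empty() && arr[heap.top()] < arr[i]) {
            out[heap.top()] = arr[i];
            heap.pop();
        }
        heap.push(i);
    }
    return out; // indices still in the heap keep -1
}

int main() {
    for (int v : nextLarger({3, 1, 2, 5, 9, 4, 8}))
        std::cout << v << ' ';   // prints: 5 2 5 9 -1 8 -1
    std::cout << "\n";
}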
Here is a sketch of an n log(n) solution:

copy your array into copy: O(n)
sort copy: O(n log(n))
for each i in input: (n loops)
    perform a dichotomic search to find i in copy: O(log(n))
    replace i in input: O(1)
=> the loop is in O(n log(n))

There are several places where it could be optimized, but I seriously doubt there is an asymptotically better (e.g. O(n)) algorithm. The reason is that if, instead of replacing each number by the value of the next larger number, you wrote down the position of that number, you would have a sorted linked list in your array, and sorting is known to be at least O(n log(n)). However, I agree that this is not a real proof, and I might be wrong.
void print(int num)
{
    for (int i = 2; i < sqrt(num); i++) // VS for (int i = 2; i < num/2; i++)
    {
        if (num % i == 0)
        {
            cout << "not prime\n";
            exit(0);
        }
    }
    cout << "prime\n";
}
I know that these algorithms are slow for finding primes, but I hope to learn about Big O using these examples.
I'm assuming that the algorithm that goes from i=2 to i<sqrt(num) is the faster of the two.
Can someone explain the running time of both algorithms in terms of the input num using Big O notation?
As only constant-time statements are inside the if-statement, the total time complexity is determined by the for-loop.

for (int i = 2; i < sqrt(num); i++)

This means it will run sqrt(num) - 2 times, so the total complexity is O(sqrt(num)).
Similarly, you will realize that if the for-loop changes to:

for (int i = 2; i < num/2; i++)

it will run num/2 - 2 times, and thus the total complexity will be O(num).
If you run this, you will actually go through the loop sqrt(num) - 2 times, i.e. from i == 2 up to i == sqrt(num), increasing i by 1 at each step.
Thus, in terms of the size of num, this algorithm's running time is O(sqrt(num)).
As stated in other answers, the cost of the algorithm that iterates from 2 to sqrt(n) is O(sqrt n) and the cost of the algorithm that iterates from 2 to n/2 is O(n). However, these bounds apply for the worst case, and the worst case happens when n is prime.
On average, both algorithms run in O(1) expected time: half of the numbers are even, so their total cost is 2*(n/2); a third of the numbers are multiples of 3, so their cost is 3*(n/3); a quarter of the numbers are multiples of 4, so their cost is 4*(n/4); and so on.
First we have to specify our task: we want to find a function

f(N) = number_of_steps

where N is the num argument passed to the function. From this point forward we assume that every statement that does not depend on the size of the input data takes a constant number C of computational steps.
We add up the individual number of steps of the function:

f(N) = (steps of the for loop) + C

How many times will the for loop be executed? sqrt(N) - 2 times, so:

f(N) = sqrt(N) - 2 + C = sqrt(num) - 2 + C
O(f(num)) = O(sqrt(num))
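For completeness, a runnable variant of the sqrt version (my addition; it uses i * i <= num rather than i < sqrt(num), which avoids repeated sqrt calls and also catches perfect squares such as 4, which the strict < bound misses):

#include <iostream>

// O(sqrt(num)) primality test.
bool isPrime(int num) {
    if (num < 2) return false;
    for (int i = 2; i * i <= num; i++)  // i <= sqrt(num), without calling sqrt
        if (num % i == 0)
            return false;
    return true;
}

int main() {
    std::cout << (isPrime(97) ? "prime" : "not prime") << "\n";
}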