for (int p = t; p > 0; p >>= 1) {
    for (int i = 0; i < n - p; ++i) {
        if ((i & p) != 0) {
            sort2(a, i, i + p);
        }
    }
    for (int q = t; q > p; q >>= 1) {
        for (int i = 0; i < n - q; ++i) {
            if ((i & p) != 0) {
                sort2(a, i + p, i + q);
            }
        }
    }
}
Here n is some positive integer and t is greater than n/2, but not equal to n.
As per my understanding, the inner for loop runs (n-p) times, but I could not figure out the outer for loop.
I tried finding it as below:
If t=64 and n=100, then p starts at 64, which is 1000000 in base 2.
I understand that each iteration drops one binary digit, so the outer loop executes a total of 7 times in this case, but I couldn't work out the general running time.
Also, my understanding is that the 3rd for loop, i.e.
for(int q = t; q > p; q >>= 1)
doesn't execute at all, because the condition q > p is not satisfied when p = q = t.
Is this correct? I am just starting out with algorithms.
For this, the complexity would be O(log(t) * (n-t) * log(t)),
excluding the complexity of the sort2 function called inside the loop.
Explanation:
The outer loop runs log(t)+1 times (each iteration right-shifts p by one bit and continues while p > 0), so for t = 64 the values of p are [64, 32, 16, 8, 4, 2, 1].
Per outer iteration, the inner work is the greater of O(n-p) (the first inner loop) and O((n-t)*log(t)) (the second inner loop, which is itself a nested pair of loops).
Execution: the second inner loop is nested inside the outermost loop, so its O((n-t)*log(t)) cost is multiplied by the outer loop's log(t) iterations, which gives the bound above.
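If you want to sanity-check that count empirically, here is a small sketch (my own instrumentation, not part of the original code) that replaces each sort2 call with a counter; n = 100 and t = 64 are just the sample values from the question:

#include <iostream>

int main() {
    int n = 100, t = 64;      // sample values from the question
    long long calls = 0;
    for (int p = t; p > 0; p >>= 1) {
        for (int i = 0; i < n - p; ++i) {
            if ((i & p) != 0) {
                ++calls;      // stands in for sort2(a, i, i + p)
            }
        }
        // q > p fails only while p == t; once p has been halved, this loop does run
        for (int q = t; q > p; q >>= 1) {
            for (int i = 0; i < n - q; ++i) {
                if ((i & p) != 0) {
                    ++calls;  // stands in for sort2(a, i + p, i + q)
                }
            }
        }
    }
    std::cout << "sort2 calls: " << calls << "\n";
}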
I'm doing problems on leetcode and was able to solve this one, but I'm not exactly sure what the Big O notation for my solution is. Here is the problem:
Given an array of integers 'nums' sorted in non-decreasing order, find the starting and ending position of a given target value.
If target is not found in the array, return [-1, -1].
You must write an algorithm with O(log n) runtime complexity.
Example 1:
Input: nums = [5,7,7,8,8,10], target = 8
Output: [3,4]
Example 2:
Input: nums = [5,7,7,8,8,10], target = 6
Output: [-1,-1]
Example 3:
Input: nums = [], target = 0
Output: [-1,-1]
My code:
class Solution {
public:
    vector<int> searchRange(vector<int>& nums, int target) {
        int l = 0, m, h = nums.size() - 1;
        vector<int> ans;
        ans.push_back(-1);
        ans.push_back(-1);
        while (l <= h) {
            m = (l + h) / 2;
            if (nums[m] == target) {
                l = m - 1;
                h = m + 1;
                ans.at(0) = m;
                ans.at(1) = m;
                do {
                    if (l >= 0 and nums[l] == target) {
                        ans.at(0) = l;
                        l--;
                    }
                    else {
                        l = -99;
                    }
                    if (h <= nums.size() - 1 and nums[h] == target) {
                        ans.at(1) = h;
                        h++;
                    }
                    else {
                        h = nums.size();
                    }
                } while (l >= 0 or h < nums.size());
                return ans;
            }
            else if (nums[m] < target) {
                l = m + 1;
            }
            else {
                h = m - 1;
            }
        }
        return ans;
    }
};
My Thoughts:
I used a binary search to locate the first instance of the target value, so I know it's at least O(log N), but what confuses me is my inner do-while loop within the outer while loop. In class I was told the Big O of an algorithm is, for instance, O(N^2) when a for loop is nested within another for loop, because for every iteration of the outer loop the inner loop executes N times (assuming N is the terminating condition for both loops). However, in this case the inner do-while loop begins executing for only one outer-loop iteration, and only if the target value is actually in 'nums'. Using the same logic from class, this leaves me unsure how the inner do-while loop affects the Big O: if it's O(N*N) when a nested for loop runs N times for every outer-loop iteration, what would it be for my solution, where the inner do-while loop begins executing either for a single outer-loop iteration or not at all? O(logN * 1) = O(logN) seems like a viable answer until I consider that the worst-case runtime of the inner loop would be O(N) if 'nums' consisted of N elements that were all the target value. I'd imagine this would make the Big O be O(N * logN * 1) = O(N * logN), which would make my solution invalid, but I'm not very confident in that answer. Any help is greatly appreciated, thanks!
Your code's complexity is O(log(N)) + O(N), as you can see by rearranging it as shown below. It's not the code structure that determines the time complexity; it's how the program counter moves.
vector<int> searchRange(vector<int>& nums, int target) {
    int l = 0, m, h = nums.size() - 1;
    vector<int> ans;
    ans.push_back(-1);
    ans.push_back(-1);
    bool found = false;
    // takes O(log n) time
    while (l <= h) {
        m = (l + h) / 2;
        if (nums[m] == target) {
            found = true;
            break;
        }
        else if (nums[m] < target) {
            l = m + 1;
        }
        else {
            h = m - 1;
        }
    }
    if (found) {
        // takes O(n) time in the worst case
        l = m - 1;
        h = m + 1;
        ans.at(0) = m;
        ans.at(1) = m;
        do {
            if (l >= 0 and nums[l] == target) {
                ans.at(0) = l;
                l--;
            }
            else {
                l = -99;
            }
            if (h <= nums.size() - 1 and nums[h] == target) {
                ans.at(1) = h;
                h++;
            }
            else {
                h = nums.size();
            }
        } while (l >= 0 or h < nums.size());
    }
    return ans;
}
This worst case occurs when every entry equals the target, since then you have to traverse the full array.
You can easily get the answer in O(log(n)) if you use std::equal_range (https://en.cppreference.com/w/cpp/algorithm/equal_range); it uses lower_bound and upper_bound, both of which are O(log(n)).
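A minimal sketch of that approach (my code, not the asker's), assuming the same searchRange signature as above:

#include <algorithm>
#include <vector>
using namespace std;

vector<int> searchRange(vector<int>& nums, int target) {
    // equal_range performs two binary searches internally: O(log n) overall
    auto range = equal_range(nums.begin(), nums.end(), target);
    if (range.first == range.second) {
        return {-1, -1};   // target not present
    }
    int first = range.first - nums.begin();
    int last = range.second - nums.begin() - 1;
    return {first, last};
}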
#include <bits/stdc++.h>
using namespace std;

int main() {
    int a[5] = {1, 2, 3, 4, 5};
    int k;
    cin >> k;
    int i, j, ct;
    i = 0;
    j = 4;
    ct = 0;
    while (i < j) {
        if (a[i] + a[j] > k) {
            --j;
        }
        else if (a[i] + a[j] < k) {
            ++i;
        }
        else {
            ct++;
        }
    }
    cout << ct;
}
I am trying to print ct, the total number of pairs in a given sorted array whose sum equals k, where k is a given input. But the problem is that the value of i changes once and the value of j stays the same. Why? As a result, i < j is always true, the loop runs forever, and no value of ct ever comes out. Where is the problem in this code?
There are many issues in your code, but the reason for it being stuck is pretty simple. You have three cases
larger
smaller
equal
If you reach the equal case, then you do not change i or j, so it will stay in the equal case forever.
Let's say k=5:
In the while loop (i < j), i.e. (0 < 4):
First case (1+5>k): true
j = 3
the while loop (0<3):
Second case (1+4 = 5)
ct = 1
the while loop (0<3):
Third case (1+4 = 5)
ct = 2
the while loop (0<3):
Fourth case (1+4 = 5)
ct = 3
the while loop (0<3):
Fifth case (1+4 = 5)
ct = 4
So you end up with an infinite loop, because neither i nor j is ever updated:
the else branch runs forever since the loop condition still holds.
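To make the loop progress, the equal case has to move at least one of the pointers. A minimal fix (a sketch that assumes distinct elements, as in the {1, 2, 3, 4, 5} array above; duplicates would need extra handling) is to move both:

#include <iostream>
using namespace std;

int main() {
    int a[5] = {1, 2, 3, 4, 5};
    int k;
    cin >> k;
    int i = 0, j = 4, ct = 0;
    while (i < j) {
        if (a[i] + a[j] > k) {
            --j;
        }
        else if (a[i] + a[j] < k) {
            ++i;
        }
        else {
            ct++;
            ++i;   // advance both pointers so the loop terminates
            --j;
        }
    }
    cout << ct;
}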
//outer for loop runs at most n times
for (int w = 1; w < n; w++) {
// inner for loop at most log(73550/n) times
for (int y = w; y < 73550; y = y * 2) {
x = x + w;
}
k = k * w;
}
I am really confused about whether the second loop adds to the big-O time complexity, since it has a fixed maximum number of iterations. Would the big O be O(n), O(nlog(1/n)), or neither?
int p = 0;
int q = 0;
//runs at most 18n^2 times
while (p < 18 * n * n) {
if (p % 2 == 0) {
q++;
}
p++;
}
// after this loop: p = 18n^2, q = 9n^2
//runs at most log(9n^2) times
for (int r = 1; r < q; r = r * 3) {
q++;
}
return p * q;
The time complexity of sequential code like this is just the larger of the individual complexities, right? So it will be O(n^2)?
//runs at most n(4n-1) times
for (int k = 2; k <= 2*n*(4*n-1); k += 2) {
j++;
}
Even with the -1, the time complexity will be O(n^2), right?
First case: O(n), since as you said, there's a constant bound on the number of loop iterations. It's not typical to have very large constants on loop bounds, so this doesn't tend to be a big deal in most natural algorithms. If 73550 was actually a non-constant variable, but independent of n, we could give it a name (e.g. m), and say that the complexity is O(n*log(m)).
Second case: Yes, O(n^2), for the reason you gave.
Third case: Yes, O(n^2). First, big-O only provides an upper bound, so the -1 only makes it easier to guarantee the bound. Second, even if you mean Ө(n^2), it still holds, because n(4n-1) = 4n^2-n, which is asymptotically at least k*n^2 for some constant k; in this case, any k less than 4 works.
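To illustrate the first case concretely, here is a small counting sketch (my addition; m is just a name for the constant 73550 treated as an independent variable, and n is a sample size) that compares the number of inner iterations against n * log2(m):

#include <cmath>
#include <iostream>

int main() {
    long long n = 1000, m = 73550;               // sample sizes for the sketch
    long long x = 0, iterations = 0;
    for (long long w = 1; w < n; w++) {          // runs at most n times
        for (long long y = w; y < m; y *= 2) {   // runs roughly log2(m) times
            x += w;
            ++iterations;
        }
    }
    std::cout << "inner iterations: " << iterations
              << ", n * log2(m) = " << n * std::log2(m) << "\n";
}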
I have worked out an O(n^2) solution to the problem. I was wondering about a better solution to this. (This is not a homework/interview problem but something I do out of my own interest, hence sharing here):
If a=1, b=2, c=3,….z=26. Given a string, find all possible codes that string
can generate. example: "1123" shall give:
aabc //a = 1, a = 1, b = 2, c = 3
kbc // since k is 11, b = 2, c= 3
alc // a = 1, l = 12, c = 3
aaw // a= 1, a =1, w= 23
kw // k = 11, w = 23
Here is my code to the problem:
void alpha(int* a, int sz, vector<vector<int>>& strings) {
    for (int i = sz - 1; i >= 0; i--) {
        if (i == sz - 1) {
            vector<int> t;
            t.push_back(a[i]);
            strings.push_back(t);
        } else {
            int k = strings.size();
            for (int j = 0; j < k; j++) {
                vector<int> t = strings[j];
                strings[j].insert(strings[j].begin(), a[i]);
                if (t[0] < 10) {
                    int n = a[i] * 10 + t[0];
                    if (n <= 26) {
                        t[0] = n;
                        strings.push_back(t);
                    }
                }
            }
        }
    }
}
Essentially the vector strings will hold the sets of numbers.
This would run in O(n^2) time. I am racking my brain for at least an O(n log n) solution.
Intuitively, a tree should help here, but I'm not getting anywhere beyond that.
Generally, your problem complexity is more like 2^n, not n^2, since your k can increase with every iteration.
This is an alternative recursive solution (note: recursion is bad for very long codes). I didn't focus on optimization, since I'm not up to date with C++X, but I think the recursive solution could be optimized with some moves.
Recursion also makes the complexity a bit more obvious compared to the iterative solution.
#include <deque>
#include <vector>

// Add the front element to each trailing code sequence. Create a new sequence if none exists.
void update_helper(int front, std::vector<std::deque<int>>& intermediate)
{
if (intermediate.empty())
{
intermediate.push_back(std::deque<int>());
}
for (size_t i = 0; i < intermediate.size(); i++)
{
intermediate[i].push_front(front);
}
}
std::vector<std::deque<int>> decode(int digits[], int count)
{
if (count <= 0)
{
return std::vector<std::deque<int>>();
}
std::vector<std::deque<int>> result1 = decode(digits + 1, count - 1);
update_helper(*digits, result1);
if (count > 1 && (digits[0] * 10 + digits[1]) <= 26)
{
std::vector<std::deque<int>> result2 = decode(digits + 2, count - 2);
update_helper(digits[0] * 10 + digits[1], result2);
result1.insert(result1.end(), result2.begin(), result2.end());
}
return result1;
}
Call:
std::vector<std::deque<int>> strings = decode(codes, size);
Edit:
Regarding the complexity of the original code, I'll try to show what would happen in the worst case scenario, where the code sequence consists only of 1 and 2 values.
void alpha(int* a, int sz, vector<vector<int>>& strings)
{
for (int i = sz - 1;
i >= 0;
i--)
{
if (i == sz - 1)
{
vector<int> t;
t.push_back(a[i]);
strings.push_back(t); // strings.size+1
} // if summary: O(1), ignoring capacity change, strings.size+1
else
{
int k = strings.size();
for (int j = 0; j < k; j++)
{
vector<int> t = strings[j]; // O(strings[j].size) vector copy operation
strings[j].insert(strings[j].begin(), a[i]); // strings[j].size+1
// note: strings[j].insert treated as O(1) because other containers could do better than vector
if (t[0] < 10)
{
int n = a[i] * 10 + t[0];
if (n <= 26)
{
t[0] = n;
strings.push_back(t); // strings.size+1
// O(1), ignoring capacity change and copy operation
} // if summary: O(1), strings.size+1
} // if summary: O(1), ignoring capacity change, strings.size+1
} // for summary: O(k * strings[j].size), strings.size+k, strings[j].size+1
} // else summary: O(k * strings[j].size), strings.size+k, strings[j].size+1
} // for summary: O(sum[i from 1 to sz] of (k * strings[j].size))
// k (same as string.size) doubles each iteration => k ends near 2^sz
// string[j].size increases by 1 each iteration
// k * strings[j].size increases by ?? each iteration (its getting huge)
}
Maybe I made a mistake somewhere, and if we want to play nice we can treat a vector copy as O(1) instead of O(n) in order to reduce the complexity, but the hard fact remains that in the worst case the outer vector's size doubles with each pass of the inner loop (a new element is added at least every 2nd inner iteration, considering the exact structure of the if conditions), and the inner loop's iteration count depends on that growing size, which makes the whole story at least O(2^n).
Edit2:
I figured out the complexity of the result itself (the best hypothetical algorithm still needs to create every element of the result, so the result's size is a lower bound on what any algorithm can achieve).
It actually follows the Fibonacci numbers:
For worst case input (like only 1s) of size N+2 you have:
size N has k(N) elements
size N+1 has k(N+1) elements
size N+2 is the combination of codes starting with a followed by the combinations from size N+1 (a takes one element of the source) and the codes starting with k, followed by the combinations from size N (k takes two elements of the source)
size N+2 has k(N) + k(N+1) elements
Starting with size 1 => 1 (a) and size 2 => 2 (aa or k)
Result: still exponential growth ;)
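To see that growth without materializing the decodings, here is a tiny counting sketch (my addition, based on the recurrence above), assuming a worst-case input consisting only of 1s:

#include <iostream>

int main() {
    const int max_len = 20;
    // count(1) = 1 ("a"), count(2) = 2 ("aa" or "k"), count(N+2) = count(N+1) + count(N)
    unsigned long long prev = 1, cur = 2;
    std::cout << "length 1: 1 decodings\nlength 2: 2 decodings\n";
    for (int len = 3; len <= max_len; ++len) {
        unsigned long long next = cur + prev;   // last digit alone, or last two digits combined
        prev = cur;
        cur = next;
        std::cout << "length " << len << ": " << cur << " decodings\n";
    }
}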
Edit3:
Worked out a dynamic programming solution, somewhat similar to your approach with reverse iteration over the code array and kind of optimized in its vector usage, based on the properties explained in Edit2.
The inner loop (update_helper) is still dominated by the count of results (worst case Fibonacci) and a few outer loop iterations will have a decent count of sub-results, but at least the sub-results are reduced to a pointer to some intermediate node, so copying should be pretty efficient. As a little bonus, I switched the result from numbers to characters.
Another edit: updated code with range 0 - 25 as 'a' - 'z', fixed some errors that led to wrong results.
#include <iostream>
#include <set>
#include <vector>

struct const_node
{
const_node(char content, const_node* next)
: next(next), content(content)
{
}
const_node* const next;
const char content;
};
// put front in front of each existing sub-result
void update_helper(int front, std::vector<const_node*>& intermediate)
{
for (size_t i = 0; i < intermediate.size(); i++)
{
intermediate[i] = new const_node(front + 'a', intermediate[i]);
}
if (intermediate.empty())
{
intermediate.push_back(new const_node(front + 'a', NULL));
}
}
std::vector<const_node*> decode_it(int digits[9], size_t count)
{
int current = 0;
std::vector<const_node*> intermediates[3];
for (size_t i = 0; i < count; i++)
{
current = (current + 1) % 3;
int prev = (current + 2) % 3; // -1
int prevprev = (current + 1) % 3; // -2
size_t index = count - i - 1; // invert direction
// copy from prev
intermediates[current] = intermediates[prev];
// update current (part 1)
update_helper(digits[index], intermediates[current]);
if (index + 1 < count && digits[index] &&
digits[index] * 10 + digits[index + 1] < 26)
{
// update prevprev
update_helper(digits[index] * 10 + digits[index + 1], intermediates[prevprev]);
// add to current (part 2)
intermediates[current].insert(intermediates[current].end(), intermediates[prevprev].begin(), intermediates[prevprev].end());
}
}
return intermediates[current];
}
void cleanupDelete(std::vector<const_node*>& nodes);
int main()
{
int code[] = { 1, 2, 3, 1, 2, 3, 1, 2, 3 };
int size = sizeof(code) / sizeof(int);
std::vector<const_node*> result = decode_it(code, size);
// output
for (size_t i = 0; i < result.size(); i++)
{
std::cout.width(3);
std::cout.flags(std::ios::right);
std::cout << i << ": ";
const_node* item = result[i];
while (item)
{
std::cout << item->content;
item = item->next;
}
std::cout << std::endl;
}
cleanupDelete(result);
}
void fillCleanup(const_node* n, std::set<const_node*>& all_nodes)
{
if (n)
{
all_nodes.insert(n);
fillCleanup(n->next, all_nodes);
}
}
void cleanupDelete(std::vector<const_node*>& nodes)
{
// this is like multiple inverse trees, hard to delete correctly, since multiple next pointers refer to the same target
std::set<const_node*> all_nodes;
for (auto var : nodes)
{
fillCleanup(var, all_nodes);
}
nodes.clear();
for (auto var : all_nodes)
{
delete var;
}
all_nodes.clear();
}
A drawback of the dynamically reused structure is the cleanup, since you have to be careful to delete each node only once.
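If the manual cleanup feels error-prone, one alternative (a sketch of my own, not part of the answer above) is to let std::shared_ptr own the shared tails, so each node is freed automatically once the last result referring to it goes away:

#include <memory>
#include <vector>

// Same idea as const_node, but the shared tail is reference-counted,
// so no explicit cleanupDelete pass is needed.
struct shared_node
{
    shared_node(char content, std::shared_ptr<const shared_node> next)
        : next(std::move(next)), content(content)
    {
    }
    const std::shared_ptr<const shared_node> next;
    const char content;
};

// put front in front of each existing sub-result
void update_helper(int front, std::vector<std::shared_ptr<const shared_node>>& intermediate)
{
    for (size_t i = 0; i < intermediate.size(); i++)
    {
        intermediate[i] = std::make_shared<const shared_node>(front + 'a', intermediate[i]);
    }
    if (intermediate.empty())
    {
        intermediate.push_back(std::make_shared<const shared_node>(front + 'a', nullptr));
    }
}

decode_it and the output loop would then work on std::vector<std::shared_ptr<const shared_node>> instead of raw pointers, and cleanupDelete disappears entirely (the trade-off is reference-counting overhead, and very long chains are destroyed recursively).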
In the following algorithm for merge-sort, within the 3rd definition, first while loop there is:
a[k++] = (a[j] < b[i]) ? a[j++] : b[i++].
I understand that the RHS is a conditional expression: if the first operand (the condition) is true, then the second operand is evaluated, and if it is false, the third operand is evaluated.
Which elements do a[k++], a[j++] and b[i++] correspond to?
From my understanding, it should mean that in each successive iteration of the while loop, the indices are incremented,
i.e. beginning with the initialised values (i=1, j=m+1, k=1) in the first iteration, the next iteration would have (i=2, j=m+2, k=2), and so on.
Here is the entire algorithm:
# split in half
m = n / 2
# recursive sorts
sort a[1..m]
sort a[m+1..n]
# merge sorted sub-arrays using temp array
b = copy of a[1..m]
i = 1, j = m+1, k = 1
while i <= m and j <= n,
a[k++] = (a[j] < b[i]) ? a[j++] : b[i++]
→ invariant: a[1..k] in final position
while i <= m,
a[k++] = b[i++]
→ invariant: a[1..k] in final position
a[k] takes the kth element of the array a.
k++ increases the value of k, but returns the previous value.
Thus, a[k++] returns a[k] with the side-effect of increasing k after returning the value of a[k]. a[k++] = 4 is equivalent to:
a[k] = 4
k = k + 1
On the other hand, ++k would increase k before returning it, so a[++k] = 4 would be
k = k + 1
a[k] = 4
The increment and decrement operators work the same in array subscripts as they do in other locations. The postfix version increments the variable and returns its original value, and the prefix version increments the variable and returns its new value.
int i = 0;
do {
if (i++) { std::cout << "i > 0" << std::endl; }
} while (i < 10);
// Checks "i"'s original value.
// First check fails, because i was 0 before incrementing.
// Outputs line 9 times.
// -----
int i = 0;
do {
if (++i) { std::cout << "i > 0" << std::endl; }
} while (i < 10);
// Checks "i"'s incremented value.
// First check succeeds, because i is incremented before being read.
// Outputs line 10 times.
Similarly, if we have this:
int arr[5] = { 1, 2, 3, 4, 5 };
int i = 0;
do {
std::cout << arr[i++] << std::endl;
} while (i < 5);
The variable's original value will be used as the index, and the output will be:
1
2
3
4
5
However, if we have this:
int arr[5] = { 1, 2, 3, 4, 5 };
int i = 0;
do {
std::cout << arr[++i] << std::endl;
} while (i < 4); // note: with i < 5 the final iteration would read arr[5], past the end of the array
The variable's incremented value is used as the index, and the output will be:
2
3
4
5
Considering this, we can take your example line, a[k++] = (a[j] < b[i]) ? a[j++] : b[i++], and read it as meaning this:
Assign value to a[k], then increment k.
Value is conditionally determined based on:
(a[j] < b[i])
If true, value is:
Read a[j], then increment j.
If false, value is:
Read b[i], then increment i.
It can be a useful time-saver if you know how to use it properly, but it can also make things harder to parse if used improperly.
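For completeness, here is the merge step written out in C++ (my own sketch, using 0-based indexing rather than the pseudocode's 1-based indexing) with exactly that post-increment idiom:

#include <vector>

// Merge the sorted halves a[0..m) and a[m..n) in place, using a copy b of the left half.
void merge_halves(std::vector<int>& a, int m) {
    int n = a.size();
    std::vector<int> b(a.begin(), a.begin() + m);    // b = copy of a[0..m)
    int i = 0, j = m, k = 0;
    while (i < m && j < n)
        a[k++] = (a[j] < b[i]) ? a[j++] : b[i++];    // write to a[k], then advance k and either j or i
    while (i < m)
        a[k++] = b[i++];                             // copy whatever is left of the left half
}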