What is the time complexity of this code, which checks whether a number is a power of 2?
Is it O(1)?
bool isPowerOfTwo(int x) {
    // x ensures x is nonzero; !(x & (x - 1)) checks that x has exactly one set bit
    return (x && !(x & (x - 1)));
}
LeetCode 231
Informally it's O(1) because the code takes a bounded amount of time to run. It's not constant time, since the runtime does depend on the input (for example, if x is 0, the && short-circuits and the function returns more quickly), but it is bounded.
More formally, it's ambiguous, because O(n) is defined for functions parameterized by arbitrarily large n, and here int is limited to 2^31 or 2^63. Often in complexity calculations of real programs this can be ignored (because an array of size 2^31 is very large), but here it's easy to think of numbers outside the range your function accepts.
In practice, complexity theorists commonly generalize your problem in one of two ways:
Either assume int contains Theta(log n) bits and that arithmetic operations on Theta(log n)-bit words take O(1) time (that is, the size of memory cells and registers grows as the input size increases). That's sometimes called the "word model".
Or assume that only operations on individual bits take O(1) time, so arithmetic on a k-bit number costs O(k). That's sometimes called the "bit model".
In the word model, the function is O(1). In the bit model, the function is O(log n).
Note that if you replace int with a big-int type, then your function will certainly be O(log n).
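To make the bit-model view concrete, here is a hedged sketch (the function name isPowerOfTwoBitwise is made up for illustration) that performs the same check by visiting every bit explicitly, which makes the cost proportional to the number of bits visible:

#include <cstdint>

// Bit-model illustration: each of the 64 bit positions is examined once, so
// the work is proportional to the number of bits in the word. (Hypothetical
// helper; the one-liner above does this in a handful of word operations.)
bool isPowerOfTwoBitwise(std::uint64_t x) {
    int setBits = 0;
    for (int i = 0; i < 64; ++i)
        setBits += (x >> i) & 1u;   // count set bits, one bit per step
    return setBits == 1;            // a power of two has exactly one set bit
}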
Yes, it is O(1), but the time taken by bitwiseAnd(10^9, 1) and bitwiseAnd(10, 1) is not the same even though both are O(1). There are 4 basic operations involved in your expression, which we count as unit operations in terms of the computation they perform. In reality, each of these basic operations itself costs 32 or 64 bit-level operations, since 32 or 64 bits are used to represent a number in most cases. So this O(1) time complexity really means that the worst case is about 32 or 64 operations at the machine level, and since 32 and 64 are both very small values and these operations are performed at the machine level, we do not think much about the time these unit steps take.
Yes, the code's time complexity is O(1), because the running time is constant and does not depend on the size of the input.
Related
I have written a small algorithm that changes a digit of a given number to zero. This is done in base 10, and the digits are indexed from least significant to most significant.
Example
unsigned int num = 1234;
removeDigit(0, &num); // num == 1230

void removeDigit(unsigned int digit, unsigned int *num) {
    unsigned int th = 1;
    for (; digit > 0; digit--) th *= 10;      // th = 10^digit
    while (*num / th % 10 != 0) *num -= th;   // subtract th until that digit is 0
}
I was thinking about the runtime of the algorithm. I'm pretty sure it's "technically" O(1), but I also see why you could say it is O(n).
The fact that a 32-bit unsigned int cannot have more than 10 digits effectively makes it O(1), but you could still argue that for an arbitrary data type with an unbounded number of digits, this algorithm is O(n).
What would you say? If someone were to ask me what the runtime of this is, what should I say?
What would you say? If someone were to ask me what the runtime of this is, what should I say?
When you give some time/space asymptotic bounds, typically it is because the algorithm can be used on arbitrarily large inputs.
In this case, your possible inputs are so constrained that stating the absolute maximum number of operations/cycles/time taken in the worst case can be a much more useful metric in some domains.
Also, given that this is a very simple algorithm, you could simply provide the exact equations for the number of operations/cycles/time, which is far more precise than giving either asymptotic bounds or absolute maximums.
The for loop is executed digit times. The while loop is executed nd times, where nd denotes the value of the designated digit. So in full generality, the asymptotic complexity is
O(digit + nd).
This formula is valid for any number length and any numeration base, but assumes that the arithmetic operations are done in constant time, which is somewhat unrealistic.
In any base, it is clear that nd = O(1), as this number is bounded by the base. So in the worst case,
O(digit).
Now for fixed-length arithmetic, digit is bounded, so that digit = O(1). As a consequence, the worst-case complexity is also
O(1).
Note that there is no incompatibility between the complexity being O(digit + nd) and the worst-case complexity being O(1). This is justified by digit and nd both being O(1).
Other answers and comments have missed the fact that removeDigit(1000000000,p) will do a billion iterations of the first loop... So this is certainly not O(1). Sometimes in this kind of analysis we make use of the fact that machine words are of constant size, but asymptotic analysis ceases to be useful when we treat the largest possible integer value as a constant.
So... unless otherwise specified, the N in O(N) or similar refers to the size of the input. The size of an integer x would usually be considered to be log2 x when it matters, so in terms of that usual convention, your function takes O(2^n) time, i.e., exponential.
But when we use asymptotic analysis in real life we try to convey useful information when we make a statement of complexity. In this case the most useful thing you can say is that removeDigit(digit,p) takes O(digit) time.
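For illustration, here is a hedged sketch (the name removeDigitFast is made up) of how the inner while loop could be collapsed into a single subtraction, leaving only the O(digit) power-of-ten loop:

// Hypothetical variant of removeDigit: compute the digit's value once and
// subtract it in one step, so the only remaining cost is the O(digit) loop
// that builds the power of ten.
void removeDigitFast(unsigned int digit, unsigned int *num) {
    unsigned int th = 1;
    for (; digit > 0; digit--) th *= 10;  // th = 10^digit
    unsigned int d = *num / th % 10;      // value of the targeted digit
    *num -= d * th;                       // zero that digit in one subtraction
}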
I want to know about the differences in performance of a bitmask as compared to a bitset. I know copying a bitmask takes O(1), as it is basically represented by just an integer; is the same true for a bitset, where each value is represented by 1 bit, making it the same size as a bitmask? Or will copying a bitset take O(N) time?
I'm trying to measure the usefulness of bitmasking, specifically in the context of competitive programming.
Thanks!
Copying a bitmask isn't constant-time. It's O(n) in the number of bits, just like any other operation that has to touch every element of a structure once.
Generally speaking, a C++ bitset object should behave comparably to a hand-rolled integer bitmask. For instance, operations on a bitset<32> should perform identically to the equivalent bitwise operations on a uint32_t.
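As a rough sketch of that comparison (the function names are just illustrative), the same 32-bit intersection expressed both ways involves the same fixed amount of work:

#include <bitset>
#include <cstdint>

// The same 32-bit mask intersection as a plain integer and as a std::bitset;
// for a fixed width like 32 bits, both touch a constant number of machine words.
std::uint32_t intersectInt(std::uint32_t a, std::uint32_t b) {
    return a & b;
}

std::bitset<32> intersectBitset(const std::bitset<32>& a, const std::bitset<32>& b) {
    return a & b;
}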
When you say that something is O(N), you are talking about its asymptotic complexity. "Asymptotic" is an important word here. It means you are saying that the actual complexity of the thing approaches some linear function of N as N increases without bound.
So, it's important to know what N is. In the case of a bit-mapped set, it's probably the number of unique elements that can be in (or not in) the set. But what is N when you are talking about a data structure that fits in an int? How can N increase without bound in that case?
It doesn't make any sense to talk about the asymptotic complexity of a thing if the thing doesn't scale. An int does not scale. An int is just an int. It doesn't make any sense to say that an operation on an int is O(1), or O(anything else) for that matter.
I'm learning O-notation, and I thought that 1 is O(1) because, since 1 is considered a constant, its Big-O would be 1. However, I'm reading that it can be O(n) as well. How is this possible? Would it be because, if n = 1, then it would be the same?
Yes, a function that is O(1) is also O(n) -- and O(n^2), and O(e^n), and so on. (And mathematically, you can think of 1 as a function that always has the same value.)
If you look at the formal definition of Big-O notation, you'll see (roughly stated) that a function f(x) is O(g(x)) if some constant multiple of g(x) eventually exceeds f(x) as x goes to infinity. Quoting the linked article:
Let f and g be two functions defined on some subset of the real numbers. One writes

f(x) = O(g(x)) as x → ∞

if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that

|f(x)| ≤ M |g(x)| for all x ≥ x0.
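To answer the question directly with this definition: take f(x) = 1 and g(x) = x. Choosing M = 1 and x0 = 1 gives |f(x)| = 1 ≤ 1 · |x| = M |g(x)| for all x ≥ 1, so the constant function 1 is indeed O(n).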
However, we rarely say that an O(1) function or algorithm is O(n), since saying it's O(n) is misleading and doesn't convey as much information. For example, we say that the Quicksort algorithm is O(n log n), and Bubblesort is O(n^2). It's strictly true that Quicksort is also O(n^2), but there's no point in saying so -- largely because a lot of people aren't familiar with the exact mathematical meaning of Big-O notation.
There's another notation called Big Theta (Θ) which applies tighter bounds.
Actually, Big O notation shows how the program's complexity (time, memory, etc.) depends on the problem size.
O(1) means that the program's complexity is independent of the problem size, e.g. accessing an array element: no matter which index you select, the access time is independent of the index.
O(n) means that the program's complexity depends linearly on the problem size, e.g. linearly searching for an element in an array, where you need to traverse most of the elements. In the worst case, if the element is not present, you will traverse the complete array.
If we increase the size of the array, the time taken grows as well: traversing 100 elements takes more time than traversing only 10.
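As a small sketch of that linear-search example (the function name is just illustrative):

#include <cstddef>

// Worst case: the target is absent and every one of the n elements is
// inspected, so the time grows linearly with n.
int linearSearch(const int* arr, std::size_t n, int target) {
    for (std::size_t i = 0; i < n; ++i)
        if (arr[i] == target) return static_cast<int>(i);  // found at index i
    return -1;                                              // not found
}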
I hope this helps you a bit.
Big O notation is a mathematical way to describe the behaviour of a function near a point or towards infinity. It is a tool used in computer science to analyse the complexity of an algorithm, which helps you decide whether your algorithm suits your situation and can do its work in a reasonable time.
But that does not answer your question. Asking what happens if n is equal to 1 doesn't really make sense in Big O notation. As the name says, it's a notation, not a way to compute a value. Big O notation describes the behaviour of the algorithm as the input approaches infinity, in order to tell which part of the algorithm is the most significant. For example, if an algorithm's behaviour can be represented by the function 2x^2 + 3x, Big O notation says to take each part of this function, evaluate it as x goes to infinity, and keep the part that is most significant. Evaluating 2x^2 and 3x, we see that 2x^2 grows much faster than 3x. If we also drop the coefficients (which are not the variable part of the terms), we are left with two complexities, O(x^2) and O(x), i.e. O(n^2) and O(n), and we know the most significant one is O(n^2).
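To make that concrete: at x = 1000, the 2x^2 term contributes 2,000,000 while 3x contributes only 3,000, and the gap keeps widening as x grows, which is why only the quadratic term matters asymptotically.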
It's the same thing inside a piece of code: if you have two complexities, O(1) and O(n), then O(n) will be your algorithm's complexity.
And if an O(n) step processes only one element, its behaviour is equivalent to O(1), but that doesn't mean your algorithm has O(1) complexity.
I have this code and want to know its time complexity:
int N, M; // let N and M be any two numbers
while (N != M && N > 0 && M > 0) {
    if (N > M) N -= M;
    else M -= N;
}
I don't know how to analyze this, since the values of M and N decrease in unusual ways. Is there a standard way to approach this?
This code is a naive implementation of the Euclidean algorithm. At each iteration, you're subtracting the smaller number from the bigger one, so you can divide the algorithm into "phases." Each phase consists of subtracting copies of the smaller number out of the bigger one until the bigger number drops below the smaller number. (This is connected to a procedure the Ancient Greeks knew about, called anthyphairesis.) A modern version of this would just be to mod the bigger number by the smaller number, which is known to terminate within O(log min{M, N}) steps (this is the modern Euclidean algorithm) and to spend O(1) time on each step, assuming the numbers are represented as machine integers.
In this case, we know that there will be O(log min{M, N}) phases, but each phase won't take a constant amount of time. Using the geometric intuition behind anthyphairesis, it's possible to construct pairs of numbers where an individual phase takes a very long time to terminate, so the best bound that I'm aware of is to say that the runtime is O(N + M).
In short: this code is inefficient compared to a modern implementation, which runs in logarithmic time. It's hard to get a good upper bound on the runtime, but realistically speaking it doesn't matter since you would probably just rewrite the code to be more efficient anyway. :-)
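For reference, here is a minimal sketch of the modern, mod-based version (assuming nonnegative int inputs), in which each remainder step collapses a whole phase of subtractions:

// Modern Euclidean algorithm: O(log min{M, N}) iterations, because the
// remainder operation does a whole "phase" of subtractions at once.
int gcd(int n, int m) {
    while (m != 0) {
        int r = n % m;  // one mod replaces a whole run of subtractions
        n = m;
        m = r;
    }
    return n;
}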
Hope this helps!
In C++ if I do a logical OR (or AND) on two bitsets, for example:
bitset<1000000> b1, b2;
//some stuff
b1 |= b2;
Does this happen in O(n) or O(1) time? Why?
Also, can this be accomplished using an array of bools in O(1) time?
Thanks.
It has to happen in O(N) time since there is a finite number of bits that can be processed in any given chunk of time by a given processor platform. In other words, the larger the bit-set, the longer the amount of time each operation will take, and the increase will be linear with respect to the number of bits in the bitset.
You also end up with the same problem using an array of bool. While each individual operation itself will take O(1) time, the total amount of time for N objects will be O(N).
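A hedged sketch of what that looks like for an array of bool (the function name is illustrative): every flag is visited once, so the work is linear in N.

#include <cstddef>

// OR the source flags into the destination: one step per flag, O(N) total.
void orInto(bool* dst, const bool* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = dst[i] || src[i];
}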
It's impossible to perform a logical operation (e.g. OR or AND) on arbitrary arrays of flags in unit time. True Big-Oh analysis deals with runtime as the size of the data tends to infinity, and a Core i7 is never going to OR together a billion bits in the same time it takes to OR together two bits.
I think it needs to be made clear that Big O is a bound, an asymptotic bound, that states the order of magnitude of the cost of a computation. So think about how an array works: if you can do the operation in one computation or so, or in some known amount that is very small and much less than N, then it is constant. If you need to iterate in some manner, and in this case all the bits need to be checked because there is no shortcut for a bitwise OR, then N bits need to be processed, and therefore it's O(n). [It's actually a tighter bound than that, but we're dealing with just Big O.] The array itself stores N bits.
In fact, few things are really O(1). Index lookups at a known address using a pointer can be O(1) (if you already know what you are looking up). But if you have M things that each need to be looked up in constant time, then it is O(M) * O(1) = O(M).
This is a consequence of how modern computers work: most things are processed sequentially (multi-core helps, but doesn't come close to affecting Big O notation yet). There is, of course, the ability of the computer to process whole words in parallel, but even that is just a constant factor: O(n/64) is still O(n).
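As a sketch of that constant-factor point (names are illustrative), a word-at-a-time OR handles 64 bits per step but is still linear in the total number of bits:

#include <cstddef>
#include <cstdint>

// OR 64 bits per iteration: a constant factor of ~64 faster than bit-by-bit,
// but still O(n) in the total number of bits.
void orWords(std::uint64_t* dst, const std::uint64_t* src, std::size_t nWords) {
    for (std::size_t i = 0; i < nWords; ++i)
        dst[i] |= src[i];
}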