int n = 0, sum = 0, x;
cin >> x;
while (x != -999)   // -999 is the sentinel that ends input
{
    n++;
    sum += x;
    cin >> x;
}
int mean = sum / n;   // integer division; divides by zero if no input precedes -999
I understand how to find the complexity of an algorithm. My problem is that I'm not sure this can be solved, since the running time depends on the input.
For the worst case, I think the input never equals -999, so the worst-case complexity is infinite.
Is this the right way to go about this? Thanks in advance!!
The time taken to execute this algorithm scales linearly with the number of inputs. If there are infinite inputs (never -999) then it takes infinite time. But it's still O(n).
You're right that it's not possible to solve this. In this case I would say it's the best-case scenario if your input ever fulfills the condition. In most cases the loop will go on forever; even if you account for variable overflow, there is little chance of x ever equaling -999 (and by overflow I mean the case when the user inputs a very large number that is then wrapped around to a negative one).
For the sake of complexity theory, the running time of this algorithm is O(k), where k stands for the number of times the loop is executed. It is reasonable to assume k can be unbounded.
The problem statement is here:
https://leetcode.com/problems/four-divisors/
I need more optimized code; the code below exceeds the time limit. Please suggest some edits to make it faster.
My solution:
class Solution {
public:
    int sumFourDivisors(vector<int>& nums) {
        vector<int> ans;
        int x = 0;
        for (int i = 0; i < nums.size(); i++) {
            int j = 1;
            while (j != nums[i] + 1) {
                if (nums[i] % j == 0) {
                    ans.push_back(j);
                }
                j++;
            }
            if (ans.size() == 4) {
                x = accumulate(ans.begin(), ans.end(), x);
            }
            ans.clear();
        }
        return x;
    }
};
The trick to questions like this one is to find ways to do much less work than the obvious brute-force approach. For each number nums[i] in nums, you're doing nums[i] modulo operations. There are ways that you could cut that number down significantly. When you're trying to speed up code that does the same thing repeatedly, you've got two options: 1) speed up each iteration; 2) reduce the number of iterations needed. If you can't make much headway with one approach, try the other.
Since the point of doing problems like this is to get better at problem solving, I don't think telling you the answer to the problem is the right thing to do. Even so, it's not giving too much away to give an example of what I'm talking about.
Let's say one of the numbers in nums is 24. Right now, your program calculates nums[i]%j for all j from 1 to 24. But you know 1 and nums[i] are always divisors, so once you find that 24 % 2 == 0 and 24 % 3 == 0, you've got four divisors already. By the time you get to 24 % 4 == 0 you've already got 5 divisors, so you know that you can skip 24 because it has more than 4 divisors. Bailing out as soon as you can saves a lot of wasted work.
So, use what you know to reduce the amount of work that your code does. There are several other ways to do that in this problem, and in fact an optimal solution won't even need the explicit check above. Even for large numbers, the number of mod operations needed to check each number will be much smaller than the number itself.
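To make the bail-out idea concrete, here is a minimal sketch of the early-exit counting described in the example above (the helper name is mine, and n >= 2 is assumed):

int countDivisorsCapped(int n) {
    int count = 2;                       // 1 and n are always divisors
    for (int j = 2; j < n; ++j) {
        if (n % j == 0 && ++count > 4)
            break;                       // more than four: no need to go on
    }
    return count;                        // exact if <= 4, otherwise >= 5
}

Note that this alone still walks all the way to n for primes; the further reductions hinted at above shrink that part as well.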
Both of these algorithms give the same output, but the first takes nearly double the time (>0.67 s) compared to the second (0.36 s). How is this possible? Can you tell me the time complexity of both algorithms? If they're the same, why do the times differ?
1st algorithm:
for (int i = 0; i < n; i++) {
    cin >> p[i];
    if (i > 0) {
        if (p[i-1] > p[i]) {
            cout << p[i] << " ";
        }
        else {
            cout << "-1" << " ";
        }
    }
}
2nd algorithm:
for (int i = 0; i < n; i++) {
    cin >> p[i];
}
for (int i = 0; i < n-1; i++) {
    if (p[i] > p[i+1]) {
        cout << p[i] << " ";
    }
    else {
        cout << "-1" << " ";
    }
}
Time complexity in a modern processor can be an almost-useless performance statistic.
In this case we have one algorithm that goes from 0 to n-1, which is O(N), and a second that goes from 0 to n-1 twice; the constant drops out, so it's still O(N). The first algorithm has an extra if statement that is false exactly once, and a decent compiler will obliterate it. We wind up with the same amount of input, the same amount of output, the same number of array accesses (more or less), and the same number of if (a > b) comparisons.
What the second has that the first doesn't is determinism. In the second, the first loop reads all of the input, so one loop determines everything for the other. That means the CPU can see exactly what is going to happen ahead of time: it has all of the numbers, so it knows exactly which way every branch of the if will go, can predict with 100% accuracy, load up the caches, and fill the pipelines so everything is ready without missing a beat.
Algorithm 1 can't do that, because the next input is not known until the next iteration of the loop. Unless the input pattern is predictable, the CPU's guess about which way if (p[i-1] > p[i]) goes will be wrong a lot of the time.
Additional reading: Why is it faster to process a sorted array than an unsorted array?
I have an inequality in which we have to find the maximum value that x can take given the value of b. Both x and b can take only nonnegative integer values. The inequality is:
x^4 + x^3 + x^2 + x + 1 ≤ b
I have written the following (apparently dumb) code to solve it:
#include <iostream>
#include <climits>
using namespace std;

int main()
{
    unsigned long long b, x = 0;
    cout << "hey bro, value of b:";
    cin >> b;
    while (x++ < b)
        if (x*x*x*x + x*x*x + x*x + x + 1 > b)
            break;
    if (b == 0)
        cout << "Sorry, no value of x satisfies the inequality" << endl;
    else
        cout << "max value of x:" << x - 1 << endl;
    return 0;
}
The above code works fine up to b = LONG_MAX, but for b = LLONG_MAX or b = ULLONG_MAX it starts taking forever. How can I fix this so that it works for values up to b = ULLONG_MAX?
If the inequality holds for x = m, then it also holds for every integer < m. If it doesn't hold for m, then it doesn't hold for any integer > m. What algorithm does this suggest?
If you want to spoil yourself: that monotonicity is exactly what binary search needs.
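For completeness, here is a minimal sketch of the approach the hint points at (the structure and names are mine; the 65535 cap is borrowed from the overflow analysis in the answers below):

#include <iostream>

// Binary search for the largest x with x^4+x^3+x^2+x+1 <= b.
// x never exceeds 65535, above which the polynomial overflows
// unsigned long long (see the overflow discussion below).
int main() {
    unsigned long long b;
    std::cin >> b;

    auto poly = [](unsigned long long x) {
        return x*x*x*x + x*x*x + x*x + x + 1;
    };

    unsigned long long lo = 0, hi = 65535, best = 0;
    bool found = false;
    while (lo <= hi) {
        unsigned long long mid = lo + (hi - lo) / 2;
        if (poly(mid) <= b) {        // mid works: try bigger
            best = mid;
            found = true;
            lo = mid + 1;
        } else {                     // mid too big: try smaller
            if (mid == 0) break;     // avoid unsigned wrap-around
            hi = mid - 1;
        }
    }

    if (found)
        std::cout << "max value of x: " << best << '\n';
    else
        std::cout << "no value of x satisfies the inequality\n";
    return 0;
}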
This is not just an optimization issue (for optimization, see IVlad's answer). It is also a correctness issue. With very large values, the expression causes integer overflow: to put it simply, it wraps around past ULLONG_MAX back toward zero, and your loop carries on having not detected this. You need to build overflow detection into your code.
A really simple observation solves your problem in O(1) time.
Find k = sqrt(sqrt(b))
If k satisfies your inequality, k is your answer. If it does not, k-1 is your answer.
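A minimal sketch of that observation, with two guards that are my own additions: floating-point rounding can leave k off by one, and k has to stay at or below 65535 so the polynomial itself doesn't overflow (see the other answers); b >= 1 is assumed.

#include <cmath>

// O(1) fourth-root idea: k = floor(b^(1/4)), then nudge k to
// correct for floating-point rounding.
unsigned long long maxX(unsigned long long b) {   // assumes b >= 1
    auto poly = [](unsigned long long x) {
        return x*x*x*x + x*x*x + x*x + x + 1;
    };
    unsigned long long k =
        (unsigned long long)std::sqrt(std::sqrt((double)b));
    if (k > 65535) k = 65535;                  // overflow cap
    while (k > 0 && poly(k) > b) --k;          // rounding put k too high
    while (k < 65535 && poly(k + 1) <= b) ++k; // or too low
    return k;
}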
Old answer (the real problem here is not the large number of iterations but integer overflow; please read from the 'Update' part onward; I keep this part here as a history of false assumptions):
These values are very big. When your program checks each value from 0 to LLONG_MAX, it has to do about 9*10^18 iterations, doesn't it? For ULLONG_MAX we have about 18*10^18. Try modifying the program to see the actual processing speed:
while (x++ < b)
{
    if (x % 1000000 == 0)
        cout << " current x: " << x << endl;
    if (x*x*x*x + x*x*x + x*x + x + 1 > b)
        break;
}
So, you need to optimize this algorithm (i.e. reduce the number of iterations): since your function is monotonic, you can use the binary search algorithm (see the bisection method too for clarification).
There is also a possible problem with integer overflow: the function x*x*x*x will be calculated incorrectly for big values of x. Just imagine that your type is unsigned char (1 byte). When your program calculates 250*250*250*250, you expect 3906250000, but in fact you get 3906250000 % 256 (i.e. 16). So, if x is too big, it is possible that your function returns a value < b (which looks strange, and in theory it can break your optimized algorithm). The good news is that you will not see this problem if you do every check correctly. But for more complex functions you would also need long math (for example, GMP or another bignum implementation).
Update: How to avoid overflow risks?
We need to find the maximal allowed value of x (let's call it xmax). A value x is allowed if x*x*x*x+x*x*x+x*x+x+1 <= ULLONG_MAX. So the answer to the initial question (about x*x*x*x+x*x*x+x*x+x+1 <= b) is not bigger than xmax. Let's find xmax: just solve the equation x*x*x*x+x*x*x+x*x+x+1 = ULLONG_MAX in any system (for example WolframAlpha); the answer is about 65535.75, so xmax == 65535. If we only check x from 0 to xmax, we will not have overflow problems. These are also the initial bounds for the binary search algorithm.
It also means that we do not really need binary search here, because it is enough to check just 65536 values. If even x == 65535 satisfies the inequality, we can stop and return 65535: x == 65536 cannot be the answer, since the polynomial there already exceeds ULLONG_MAX >= b.
If we need a cross-platform solution without hardcoding xmax, we can use any bigint implementation (GMP or something simpler) or implement multiplication and the other operations more carefully. Example: if we need to multiply x and y, we can calculate z = ULLONG_MAX / x and compare z with y. If z < y, we can't multiply x and y without overflow.
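As a small sketch, the check described above:

#include <climits>

// Returns true if x * y would overflow unsigned long long,
// using the division trick described above.
bool mulWouldOverflow(unsigned long long x, unsigned long long y) {
    return x != 0 && y > ULLONG_MAX / x;
}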
You could try finding an upper limit and working down from there.
// Find the position of the most significant bit of b.
// (Guard the shift count: shifting by >= the type's width is undefined.)
int topBitPosition = 0;
while (topBitPosition < 64 && (b >> topBitPosition))
    topBitPosition++;

// Start from an over-estimate of b^(1/4): 2^ceil(topBitPosition/4),
// capped at 65535 so the polynomial below cannot overflow.
unsigned long long x = 1ULL << ((topBitPosition + 3) / 4);
if (x > 65535)
    x = 65535;

// Work down from there until the inequality holds (assumes b >= 1;
// b == 0 has no valid x and should be handled separately).
while (x > 0 && x*x*x*x + x*x*x + x*x + x + 1 > b)
    x--;

cout << "max value of x:" << x << endl;
Don't let x exceed 65535. If 65535 satisfies the inequality, 65536 will not.
Quick answer:
First of all, you are starting from x=0 and then increasing it, which is not the best approach since you are looking for the maximum value, not the first one.
So instead I would start from an upper bound, which can be
x = b^(1/4) (rounded down)
then decrease from that value, and as soon as you find an x whose polynomial value is ≤ b, you are done.
You can even think of it this way:
for y = b down to 1
    solve(x^4 + x^3 + x^2 + x + 1 = y)
    if it has an integer solution, return that solution
This is a super quick answer; I hope I didn't write too many stupid things. Sorry, I don't yet know how to write math here.
Here's a slightly more optimized version:
#include <iostream>

int main()
{
    std::cout << "Sorry, no value of x satisfies the inequality" << std::endl;
    return 0;
}
Why? Because x^4+x^3+x^2+x+1 is unbounded as x approaches positive infinity, so if you want the inequality to hold for every x, there is no b that works. Computer Science is a subset of math.
void print(int num)
{
    for (int i = 2; i < sqrt(num); i++)  // VS for (int i = 2; i < num/2; i++)
    {
        if (num % i == 0)
        {
            cout << "not prime\n";
            exit(0);
        }
    }
    cout << "prime\n";
}
I know that these algorithms are slow ways of finding primes, but I hope to learn about Big O using these examples.
I'm assuming that the version that goes from i=2 to i<sqrt(num) is the faster one.
Can someone explain the running time of both algorithms in terms of the input num using big-O notation?
As only constant-time statements are inside the if-statement, the total time complexity is determined by the for-loop.
for(int i=2; i<sqrt(num); i++)
This means it will run sqrt(num)-2 times, so the total complexity is O(sqrt(num)).
Similarly, you will realize that if the for-loop changes to:
for(int i=2; i<num/2; i++)
it will run num/2-2 times, and thus the total complexity will be O(num).
If you run this, you will actually go through the loop sqrt(num)-2 times, i.e. for i==2 up to i==sqrt(num), increasing i by 1 each time.
Thus, in terms of num, this algorithm's running time is O(sqrt(num)).
As stated in other answers, the cost of the algorithm that iterates from 2 to sqrt(n) is O(sqrt(n)), and the cost of the algorithm that iterates from 2 to n/2 is O(n). However, these bounds apply to the worst case, and the worst case happens when n is prime.
On average, though, both algorithms run in O(1) expected time: half of the numbers are even, so their total cost is 2*(n/2). A third of the numbers are multiples of 3, so their cost is 3*(n/3). A quarter of the numbers are multiples of 4, so their cost is 4*(n/4)...
First we have to specify our task. What we want is to find a function
f(N) = number_of_steps
where N is the num argument passed to the function. From this point forward we assume that every statement that doesn't depend on the size of the input takes a constant number C of computational steps.
We add up the individual number of steps of the function:
f(N) = (steps of the for-loop) + C
Now, how many times is the for-loop executed? sqrt(N)-2 times, so:
f(N) = sqrt(N) - 2 + C = sqrt(num) - 2 + C
O(f(num)) = O(sqrt(num))
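As an aside, an idiomatic variant of the same O(sqrt(num)) loop keeps everything in integer arithmetic; it is a sketch rather than the asker's code, and it also fixes a subtle point: i < sqrt(num) never tests i == sqrt(num), so perfect squares such as num == 4 would be reported prime.

#include <iostream>
#include <cstdlib>

// Same O(sqrt(num)) idea: i*i <= num is equivalent to
// i <= sqrt(num), avoids calling sqrt() every iteration,
// and does test i == sqrt(num) itself.
void print(int num)
{
    for (int i = 2; (long long)i * i <= num; i++)
    {
        if (num % i == 0)
        {
            std::cout << "not prime\n";
            std::exit(0);
        }
    }
    std::cout << "prime\n";
}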
#include <iostream>
using namespace std;

int main() {
    int a, b, hcf = 0, i = 1;
    cout << "Enter Value :";
    cin >> a;
    cout << "Enter value :";
    cin >> b;
    while (i <= a || i <= b) {
        if (a % i == 0 && b % i == 0)
            hcf = i;
        ++i;
    }
    cout << "HCF = " << hcf << endl;  // the original never printed the result
    return 0;
}
Or should I use the remainder method?
Are you finding the HCF at all? It looks like you are trying to reverse a number.
Unless the numbers involved are really small, Euclid's algorithm is likely to be a lot faster. This one is linear in the value of the numbers (with two divisions per iteration, and division is one of the slowest types of instruction). Euclid's is actually fairly non-trivial to analyze (Knuth Vol. 2 has several pages on it), but the bottom line is that it's generally quite a bit faster.
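For reference, a minimal sketch of Euclid's algorithm (naming is mine):

// Euclid's algorithm: one division per step; the smaller value at
// least halves every two steps, so it runs in O(log(min(a, b))).
unsigned hcf(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;   // hcf(a, 0) == a
}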
If you want to use a variation on the one you're using now, I'd start with i equal to the smaller of the two inputs, and work your way down. This way, the first time you find a common factor, you have your answer so you can break out of the loop.
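And a sketch of that downward variation (again, naming is mine; a and b are assumed to be >= 1):

// Start at the smaller input and stop at the first common factor:
// the first hit from above is necessarily the greatest.
unsigned hcfDown(unsigned a, unsigned b) {
    for (unsigned i = (a < b ? a : b); i >= 1; --i)
        if (a % i == 0 && b % i == 0)
            return i;
    return 1;   // unreachable for a, b >= 1, since i == 1 divides both
}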