Is this a good method to find the HCF? - C++

#include <iostream>
using namespace std;
int main(){
    int a, b, hcf = 0, i = 1;
    cout << "Enter value: ";
    cin >> a;
    cout << "Enter value: ";
    cin >> b;
    while (i <= a || i <= b) {
        if (a % i == 0 && b % i == 0) hcf = i;
        ++i;
    }
    cout << "HCF = " << hcf << '\n'; // without this line the result is never printed
    return 0;
}
or Remainder Method ?

Are you finding the HCF at all? It looks like you are trying to reverse a number.

Unless the numbers involved are really small, Euclid's algorithm is likely to be a lot faster. Yours is linear in the magnitude of the numbers (with two divisions per iteration, and division is one of the slowest instructions). Euclid's is actually fairly non-trivial to analyze -- Knuth Vol. 2 has several pages on it -- but the bottom line is that it's generally quite a bit faster.
If you want to use a variation on the one you're using now, I'd start with i equal to the smaller of the two inputs and work your way down. That way, the first time you find a common factor you have your answer, so you can break out of the loop.

Related

Go through the array from left to right and collect as many numbers as possible

CSES problem (https://cses.fi/problemset/task/2216/).
You are given an array that contains each number between 1…n exactly once. Your task is to collect the numbers from 1 to n in increasing order.
On each round, you go through the array from left to right and collect as many numbers as possible. What will be the total number of rounds?
Constraints: 1≤n≤2⋅10^5
This is my code on c++:
int n, res = 0;
cin >> n;
int arr[n];
set<int, greater<int>> lastEl;
for (int i = 0; i < n; i++) {
    cin >> arr[i];
    auto it = lastEl.lower_bound(arr[i]);
    if (it == lastEl.end()) res++;
    else lastEl.erase(*it);
    lastEl.insert(arr[i]);
}
cout << res;
I go through the array once. If the element arr[i] is smaller than all the previous ones, I "open" a new sequence and save the element as the last element of this sequence. I store the last elements of the already opened sequences in a set. If arr[i] is smaller than some of the previous elements, I take an already existing sequence with the largest last element (but less than arr[i]) and replace the last element of that sequence with arr[i].
Alas, it works on only two of the three given tests, and for the third one the output is much less than it should be. What am I doing wrong?
Let me explain my thought process in detail so that it will be easier for you the next time you face this type of problem.
First of all, a mistake I often make when faced with this kind of problem is the urge to simulate the process. What do I mean by "simulating the process" mentioned in the problem statement? The problem says that on each round you collect as many numbers as possible, in increasing order, while moving from left to right. So you start with 1, find it, and see that the next number, 2, is not beyond it, i.e., 2 cannot be in the same round as 1 and form an increasing sequence. So we need another round for 2. Then we find that 2 and 3 can both be collected in the same round, as we're moving from left to right and taking numbers in increasing order. But we cannot take 4, because it appears before 2. Finally, 4 and 5 need yet another round. That makes a total of three rounds.
Now, the problem becomes very easy to solve if you simulate the process in this way. In the first round, you look for numbers that form an increasing sequence starting with 1. You remove these numbers before starting the second round. You continue this way until you've exhausted all the numbers.
But simulating this process will result in a time complexity that won't pass the constraints mentioned in the problem statement. So, we need to figure out another way that gives the same output without simulating the whole process.
Notice that the position of numbers is crucial here. Why do we need another round for 2? Because it comes before 1. We don't need another round for 3 because it comes after 2. Similarly, we need another round for 4 because it comes before 2.
So, when considering each number, we only need to be concerned with the position of the number that comes before it in the order. When considering 2, we look at the position of 1. Does 1 come before or after 2? If it comes before, we don't need another round; but if it comes after, we'll need an extra round. For each number, we check this condition and increment the round count if necessary. This way, we can figure out the total number of rounds without simulating the whole process.
#include <iostream>
#include <vector>
using namespace std;

int main(int argc, char const *argv[])
{
    int n;
    cin >> n;
    vector<int> v(n + 1), pos(n + 1);
    for (int i = 1; i <= n; ++i) {
        cin >> v[i];
        pos[v[i]] = i;
    }
    int total_rounds = 1; // we'll always need at least one round because the input sequence is never empty
    for (int i = 2; i <= n; ++i) {
        if (pos[i] < pos[i - 1]) total_rounds++;
    }
    cout << total_rounds << '\n';
    return 0;
}
Next time you're faced with this type of problem, pause for a while and try to resist the urge to simulate the process in code. Almost certainly there will be some clever observation that lets you reach an optimal solution.

Worst case for this code?

n = 0;
sum = 0;
cin >> x;
while (x != -999)
{
    n++;
    sum += x;
    cin >> x;
}
mean = sum / n;
I understand how to find complexity of an algorithm.
My problem is that I'm not sure if this can be solved since it relies on input.
For the worst case, I think that the input never equals -999 so the worst case complexity is infinity.
Is this the right way to go about this? Thanks in advance!!
The time taken to execute this algorithm scales linearly with the number of inputs. If there are infinitely many inputs (never -999), then it takes infinite time. But it's still O(n), where n is the number of inputs.
You're right that this can't be pinned down, since it depends on the input. I would say it's the best-case scenario if your input ever fulfills the condition. In most cases the loop will go on forever; even accounting for variable overflow, there is little chance of x ever equaling -999 (by overflow I mean the case when the user inputs a very large number that is then parsed into a negative one).
For the sake of complexity theory, the running time of this algorithm is O(k), where k stands for the number of times the loop executes. And it is reasonable to assume k can be unbounded.

C++ Program hangs when calling a function

I have written code which checks whether an integer is prime or not; whenever I call that function, the command line just hangs.
I am using MinGW on Windows 7.
#include <iostream>
#include <cmath>
using namespace std;

bool checkPrime(int n);

int main()
{
    int t, tempstart;
    cout << checkPrime(6);
    return 0;
}

bool checkPrime(int n)
{
    bool check = true;
    for (int i = 0; i < (int)sqrt(n); i++)
    {
        if (n % i == 0)
        {
            check = false;
            cout << "asas";
            break;
        }
    }
    return check;
}
It should not hang up, at least not for n = 6.
1. Try this: instead of if(n%i==0), write if((n%i)==0). The extra brackets shouldn't change anything, since % already binds tighter than ==, but they make the intent explicit.
2. As mentioned, i should start from 2. n%0 is a division by zero; your code may be raising a hidden exception, and that is what is wrong.
3. Have you tried to debug it? Set a breakpoint inside the for loop, step through it, and see why it is not stopping when i==2,3.
Tips for speeding it up a little (just a few thousand times for bigger n):
1. i = 2,3,5,7,9,11,13,...
As mentioned in the comments, handle i=2 separately (by (n&1)==0 instead of (n%2)==0) and test only odd divisors for the rest:
for (nn=sqrt(n),i=3;i<=nn;i+=2) ...
2. Instead of sqrt(n), you can use a number with half the bits of n as the bound.
3. Do not compute sqrt inside the for statement (some compilers do not pre-compute it).
4. Use a sieve of Eratosthenes.
Create one or more arrays which will eliminate a few divisions for you; do not forget to make the array size a common multiple of the dividers it covers. This is what can speed things up considerably, because a single array access can eliminate many loop cycles, but it needs the arrays to be initialized prior to their use.
For example, the array
BYTE sieve[4106301>>1]; // only odd numbers matter, so the size is halved; each bit holds one sieve value, so it really covers 4106301*8 numbers
can hold sieves for the dividers:
3,5,7,11,13,17,19,23,29,31,37,41,43,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137
5. Divide by primes.
You can store the first M primes in an array (for example, the first 100 primes) and divide by them, then use
for (nn=sqrt(n),i=last_prime+2;i<=nn;i++) ...
You can also remember all primes computed at runtime in a list and use them.
6. You can combine all of the above.
7. There are many other ways to improve the performance of is_prime(n), so study them if you need more speed.
According to operator precedence in C++, % has higher priority than ==, so n%i==0 already parses as (n%i)==0 -- the extra parentheses are optional, though they make the intent clearer.
Good luck!

C++ Coprimes Problem. Optimize code

Hi, I want to optimize the following code. It counts the numbers in a given range [a, b] that are coprime to n. But I want to make it run faster... any ideas?
#include <iostream>
using namespace std;
int GCD(int a, int b)
{
    while (1)
    {
        a = a % b;
        if (a == 0)
            return b;
        b = b % a;
        if (b == 0)
            return a;
    }
}

int main(void) {
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) {
        int n, a, b;
        cin >> n >> a >> b;
        int c = 0;
        for (int j = a; j <= b; j++) {
            if (GCD(j, n) == 1) c++;
        }
        cout << c << endl;
    }
    return 0;
}
This smells like homework, so only a hint.
You don't need to calculate a GCD here. If you can factorize n (even in the crudest way, trying to divide by every odd number smaller than 2^16), then you can just count the numbers that happen not to be divisible by any factor of n.
Note that a 32-bit number has at most 9 distinct prime factors (we don't need to remember how many times a given prime appears in the factorization).
How to do that? Count the non-coprimes using the inclusion–exclusion principle. You will have at most 511 non-empty subsets of primes to check, and for every subset you need to calculate how many multiples lie in the range, which is constant time per subset.
Anyway, my code works in no time now:
liori:~/gg% time ./moje <<< "1 1003917915 1 1003917915"
328458240
./moje <<< "1 1003917915 1 1003917915" 0,00s user 0,00s system 0% cpu 0,002 total
On a single-core computer it's not going to get much faster than it currently is, so you would need to utilize multiple cores or even multiple computers: parallelize and distribute.
Since each pair of numbers you want to calculate the GCD for isn't linked to any other pair, you can easily modify your program to utilize multiple cores by using threads.
If this still isn't fast enough, you'd better start thinking about distributed computing, assigning the work to many computers. This is a bit trickier but should improve performance the most if the search space is large.
Consider giving it a try with doubles. It is said that division with doubles is faster on typical Intel chips. Integer division is one of the slowest instructions out there. This is a chicken-and-egg problem: nobody uses it because it's slow, and Intel doesn't make it faster because nobody uses it.

Is there any trick to handle very very large inputs in C++?

A class went to a school trip. And, as usually, all N kids have got their backpacks stuffed with candy. But soon quarrels started all over the place, as some of the kids had more candies than others. Soon, the teacher realized that he has to step in: "Everybody, listen! Put all the candies you have on this table here!"
Soon, there was quite a large heap of candies on the teacher's table. "Now, I will divide the candies into N equal heaps and everyone will get one of them." announced the teacher.
"Wait, is this really possible?" wondered some of the smarter kids.
Problem specification
You are given the number of candies each child brought. Find out whether the teacher can divide the candies into N exactly equal heaps. (For the purpose of this task, all candies are of the same type.)
Input specification
The first line of the input file contains an integer T specifying the number of test cases. Each test case is preceded by a blank line.
Each test case looks as follows: The first line contains N : the number of children. Each of the next N lines contains the number of candies one child brought.
Output specification
For each of the test cases output a single line with a single word "YES" if the candies can be distributed equally, or "NO" otherwise.
Example
Input:
2
5
5
2
7
3
8
6
7
11
2
7
3
4
Output:
YES
NO
The problem is simple, but the catch is that the SPOJ judges use very, very large inputs. I have used unsigned long long as the datatype, yet it shows wc..
Here's my code:
#include <iostream>
using namespace std;

int main()
{
    unsigned long long c = 0, n, k, j, testcases, sum = 0, i;
    char b[10000][10];
    cin >> testcases;
    while (testcases-- > 0)
    {
        sum = 0;
        cin >> n;
        j = n;
        while (j-- > 0)
        {
            cin >> k;
            sum += k;
        }
        if (sum % n == 0)
        {
            b[c][0]='Y'; b[c][1]='E'; b[c][2]='S'; b[c][3]='\0';
            c++;
        }
        else
        {
            b[c][0]='N'; b[c][1]='O'; b[c][2]='\0';
            c++;
        }
    }
    for (i = 0; i < c; i++)
        cout << "\n" << b[i];
    return 0;
}
Easy. Don't add up the number of candies. Instead, keep a count of kids (CK), a count of candies per kid (CCK), and a count of extra candies (CEC). When you read a new line: CK += 1; CEC += newCandies; if (CEC >= CK) { CCK += CEC / CK; CEC %= CK; }. At the end, the split is even exactly when CEC == 0.
Does a line like this not concern you?
b[c][0]='Y';b[c][1]='E';b[c][2]='S';b[c][3]='\0';
Would it not be simpler to write:
strcpy(b[c], "YES");
You can do this question without summing all the candies. Just calculate the remainder of each child's heap when divided by N (which will be smaller than N) and keep the running total reduced modulo N. This way, the number never grows large enough to overflow.
I won't write out a solution since this is a contest problem, but if you're stuck I can give some more hints.
If you have input that is larger than unsigned long long, then they probably want you to implement custom functions for arbitrary-precision arithmetic (or the problem can be solved without using large integers). If the input fits the largest native integer type but your algorithm requires a larger one, it's most likely time to think about a different algorithm. :)
If you're reading from cin, you can only read values that fit into some integer type, and it's possible that the sum would overflow.
However, you don't have to add the numbers up. You can add up the remainders (from dividing by N) and then see if the sum of the remainders is itself divisible by N.