Minimum cuts with each cut passing through 2 points on circle - c++

There are N persons and we wish to give exactly one piece of cake to each person. Bob has his own way of maximizing the number of pieces: with each cut, he tries to produce the maximum possible number of smaller pieces. The cake is circular and each cut follows a straight line that passes through the circle twice. There are no half-cuts or semi-cuts.
What is the minimum number of cuts he should make so that every person gets at least one piece of the cake?
(With this kind of distribution, not every person will get the same size of piece, and he is not bothered by that.)
Example: for N=3 the answer is 2.
Note: "passes through the circle twice" means that the cut does not stop partway. It starts at one point on the circle and ends at another point. The cut does not have to pass through the center.
Here is my code that I tried :
#include <iostream>
using namespace std;
typedef unsigned long long int ulld;

int main() {
    ulld n;
    cin >> n;
    ulld steps = 0;
    ulld currentAmount = 1;
    while (currentAmount < n) steps++, currentAmount <<= 1;  // double the piece count with every cut
    cout << steps << endl;
}
N can go up to 10^12, so I want an O(log n) approach.

The number of pieces f(k) that can be made with k cuts is a somewhat famous problem (the lazy caterer's sequence), whose solution is f(k) = k*(k+1)/2 + 1. You could have found that sequence yourself by working small examples and using the search function on OEIS. Solving f(k) >= n for the smallest k, we get k = ceil((sqrt(8*n - 7) - 1)/2).
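A minimal sketch of that computation (doing the square root in doubles and adding a small integer correction to guard against rounding at the boundary, which is cheap since n is at most 10^12) might look like this:

#include <cmath>
#include <iostream>

int main() {
    unsigned long long n;
    std::cin >> n;

    // Smallest k with k*(k+1)/2 + 1 >= n, i.e. k = ceil((sqrt(8n - 7) - 1) / 2).
    unsigned long long k = (unsigned long long)std::ceil((std::sqrt(8.0 * (double)n - 7.0) - 1.0) / 2.0);

    // Correct for any floating-point rounding near the boundary.
    while (k > 0 && (k - 1) * k / 2 + 1 >= n) --k;
    while (k * (k + 1) / 2 + 1 < n) ++k;

    std::cout << k << '\n';
    return 0;
}

For N = 3 this prints 2, matching the example.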

Related

Go through the array from left to right and collect as many numbers as possible

CSES problem (https://cses.fi/problemset/task/2216/).
You are given an array that contains each number between 1…n exactly once. Your task is to collect the numbers from 1 to n in increasing order.
On each round, you go through the array from left to right and collect as many numbers as possible. What will be the total number of rounds?
Constraints: 1≤n≤2⋅10^5
This is my code in C++:
#include <iostream>
#include <set>
#include <vector>
using namespace std;

int main() {
    int n, res = 0;
    cin >> n;
    vector<int> arr(n);
    set<int, greater<int>> lastEl;              // last elements of the opened sequences
    for (int i = 0; i < n; i++) {
        cin >> arr[i];
        auto it = lastEl.lower_bound(arr[i]);   // largest stored element that is <= arr[i]
        if (it == lastEl.end()) res++;          // no such element: open a new sequence
        else lastEl.erase(*it);                 // extend that sequence
        lastEl.insert(arr[i]);
    }
    cout << res;
}
I go through the array once. If the element arr[i] is smaller than all the previous ones, then I "open" a new sequence and save the element as the last element of this sequence. I store the last elements of the already opened sequences in a set. Otherwise, I take the already existing sequence whose last element is the largest one that is still less than arr[i], and replace the last element of that sequence with arr[i].
Alas, it works on only two of the three given tests, and for the third one the output is much less than it should be. What am I doing wrong?
Let me explain my thought process in detail so that it will be easier for you next time you face the same type of problem.
First of all, a mistake I often made when faced with this kind of problem is the urge to simulate the process. What do I mean by "simulating the process" mentioned in the problem statement? Consider an array such as 4 2 1 5 3. A round collects, from left to right, as many numbers as possible in increasing order. So you start with 1, find it, and see that the next number, 2, is not beyond it, i.e., 2 cannot be in the same round as 1 and form an increasing sequence. So we need another round for 2. Now we find that 2 and 3 can both be collected in the same round, as we're moving from left to right and taking numbers in increasing order. But we cannot take 4, because it appears before 3. Finally, for 4 and 5 we need another round. That makes a total of three rounds.
Now, the problem becomes very easy to solve if you simulate the process in this way. In the first round, you look for numbers that form an increasing sequence starting with 1. You remove these numbers before starting the second round. You continue this way until you've exhausted all the numbers.
But simulating this process will result in a time complexity that won't pass the constraints mentioned in the problem statement. So, we need to figure out another way that gives the same output without simulating the whole process.
Notice that the positions of the numbers are crucial here. Why do we need another round for 2? Because it comes before 1. We don't need another round for 3 because it comes after 2. Similarly, we need another round for 4 because it comes before 3.
So, when considering each number, we only need to be concerned with the position of the number that comes just before it in value. When considering 2, we look at the position of 1: does 1 come before or after 2? If it comes before, we don't need another round; but if it comes after, we'll need an extra round. For each number, we check this condition and increment the round count if necessary. This way, we can figure out the total number of rounds without simulating the whole process.
#include <iostream>
#include <vector>
using namespace std;

int main(int argc, char const *argv[])
{
    int n;
    cin >> n;
    vector<int> v(n + 1), pos(n + 1);
    for (int i = 1; i <= n; ++i) {
        cin >> v[i];
        pos[v[i]] = i;
    }
    int total_rounds = 1; // we'll always need at least one round because the input sequence will never be empty
    for (int i = 2; i <= n; ++i) {
        if (pos[i] < pos[i - 1]) total_rounds++;
    }
    cout << total_rounds << '\n';
    return 0;
}
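(As a quick check: for the array 4 2 1 5 3 used in the walkthrough above, pos[1..5] = 3, 2, 5, 1, 4; the condition pos[i] < pos[i-1] holds for i = 2 and i = 4, so the program prints 3.)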
Next time you're faced with this type of problem, pause for a while and try to control your urge to simulate the process in code. Almost certainly, there will be some clever observation that will allow you to achieve an optimal solution.

Can Anyone reduce the Complexity of My Code. Problem E of Codeforces Round113 Div.2

Link to The Problem: https://codeforces.com/problemset/problem/166/E
Problem Statement:
*You are given a tetrahedron. Let's mark its vertices with letters A, B, C, and D correspondingly.
An ant is standing in the vertex D of the tetrahedron. The ant is quite active and he wouldn't stay idle. At each moment of time, he makes a step from one vertex to another one along some edge of the tetrahedron. The ant just can't stand on one place.
You do not have to do much to solve the problem: your task is to count the number of ways in which the ant can go from the initial vertex D to itself in exactly n steps. In other words, you are asked to find out the number of different cyclic paths with the length of n from vertex D to itself. As the number can be quite large, you should print it modulo 1000000007 (10^9 + 7).*
Input:
The first line contains the only integer n (1 ≤ n ≤ 10^7) — the required length of the cyclic path.
Output:
Print the only integer — the required number of ways modulo 1000000007 (10^9 + 7).
Example: Input n=2 , Output: 3
Input n=4, Output: 21
My Approach to Problem:
I have written a recursive function that takes two inputs, n and the present vertex, and then I travel and explore all possible combinations.
#include <iostream>
using namespace std;
#define mod 10000000
#define ll long long

ll count_moves = 0;

void count(ll n, int present)
{
    if (n == 0 and present == 0) count_moves += 1, count_moves %= mod; // base condition
    else if (n > 1) { // generating all possible combinations
        count(n - 1, (present + 1) % 4);
        count(n - 1, (present + 2) % 4);
        count(n - 1, (present + 3) % 4);
    }
    else if (n == 1 and present) count(n - 1, 0);
}

int main()
{
    ll n; cin >> n;
    if (n == 1) {
        cout << "0"; return 0;
    }
    count(n, 0);
    cout << count_moves % mod;
}
But the problem is that I am getting Time Limit Exceeded, since the time complexity of my code is very high. Can anyone please suggest how I can optimize/memoize my code to reduce its complexity?
**Edit 1:** Some people are commenting about the macros and the division; that is not the issue. The range of n is 10^7 and the complexity of my code is exponential, so my actual doubt is how to decrease it to linear time, i.e., O(n).
Any time you run into the time limit with a recursive solution, you have to understand that the recursion is likely the problem.
The best solution is to not use recursion at all.
Look at the result you have:
3
6
21
60
183
546
1641
4920
⋮
While it might be hard to find a pattern in the first couple of terms, it gets easier later on.
Each term is roughly 3 times larger than the previous one; more precisely, a(n) = 3·a(n−1) + 3·(−1)^n.
Now you could just write a for loop for it:
for (int i = 0; i < n - 1; i++)
{
    count_moves = count_moves * 3 + std::pow(-1, i) * 3;
}
or to get rid of pow():
for (int i = 0; i < n - 1; i++)
{
    count_moves = count_moves * 3 + (i % 2 * 2 - 1) * -3;
}
Furthermore, you could even build that into a general term formula to get rid of the for loop:
a(n) = (3^n + 3·(−1)^n) / 4
or in code:
count_moves = (pow(3, n) + (n % 2 * 2 - 1) * -3) / 4;
However, you can't get rid of the pow() this time without writing a loop for it, and for this problem you also need the result modulo 10^9 + 7, which a floating-point pow() can't give you for large n.
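For what it's worth, the closed form can still be used under the modulus if the floating-point pow() is replaced by modular fast exponentiation and the division by 4 by a multiplication with 4's modular inverse (250000002 for the prime 10^9 + 7). A minimal sketch of that idea:

#include <iostream>

int main() {
    const long long MOD = 1000000007LL;
    long long n;
    std::cin >> n;

    // Fast exponentiation: 3^n mod MOD in O(log n) multiplications.
    long long p = 1, base = 3 % MOD, e = n;
    while (e > 0) {
        if (e & 1) p = p * base % MOD;
        base = base * base % MOD;
        e >>= 1;
    }

    // a(n) = (3^n + 3*(-1)^n) / 4; the division by 4 becomes a multiplication
    // by its modular inverse, 250000002, which is valid for this prime modulus.
    const long long INV4 = 250000002LL;
    long long numer = (p + (n % 2 == 0 ? 3 : MOD - 3)) % MOD;
    std::cout << numer * INV4 % MOD << '\n';
    return 0;
}

For n = 2 and n = 4 this prints 3 and 21, matching the examples.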
I believe one of your issues is that you are recalculating things.
Take n = 4 for example: count(3, x) is called once for each x in {1, 2, 3}, and further down the recursion the same (n, present) pairs get computed multiple times.
However, if you made a map keyed by the (n, present) pair, you could save the values and only calculate each one once.
This will take more space. The map will hold about 4*(n-1) entries when you are done. That is still probably too large for n up to 10^7?
Another thing you can do is multithread. Each call to count can start its own thread. You then need to be careful to be thread-safe when changing the global count and the state of the map, if you decide to use it.
Edit:
Calculate count(n, x) once for each n in [1, n-1] and x in [0, 3]; then count(n, 0) = a*count(n-1, 1) + b*count(n-1, 2) + c*count(n-1, 3).
If you can figure out the pattern for what a, b, c are for a given n, or maybe even the a, b, c for the n-1 case, then you may be able to solve this problem easily.
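Building on that idea: by symmetry the counts for the three vertices A, B, C are always equal (this observation is mine, not stated above), so the whole state collapses to two numbers and an O(n) loop with modular arithmetic is enough. A minimal sketch:

#include <iostream>
using namespace std;

int main() {
    const long long MOD = 1000000007LL;
    long long n;
    cin >> n;

    // atD     = number of walks of the current length that end at D
    // atOther = number that end at one fixed vertex among A, B, C
    //           (by symmetry all three are equal)
    long long atD = 1, atOther = 0;                    // length 0: the ant sits at D
    for (long long i = 0; i < n; ++i) {
        long long nd = (3 * atOther) % MOD;            // reach D from A, B or C
        long long no = (atD + 2 * atOther) % MOD;      // reach A from D, B or C
        atD = nd;
        atOther = no;
    }
    cout << atD << '\n';
    return 0;
}

For n = 2 this prints 3 and for n = 4 it prints 21, matching the samples.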

Get minimum number of shots required so that goals over shots is the percentage given

I am having some difficulty understanding why an extremely simple program I've written in C++ keeps looping. I'll describe the problem at hand first, just to check whether my solution idea is incorrect, and then I'll show the code:
The shooting efficiency of a soccer player is the percentage of
goals scored over all the shots on goal taken in all his professional career. It is a rational number between 0 and 100,
rounded to one decimal place. For example, a player who
made 7 shots on goal and scored 3 goals has a shooting
efficiency of 42.9.
Given the shooting efficiency of a player, we want to know which
is the minimum amount of shots on goal needed to get that
number (which must be greater than 0).
What I thought is that if p is the percentage given, then in order to get the minimum number of shots n, the relationship n*p <= n must be satisfied, since n*p would be the number of goals scored out of a total of n shots.
I've coded the following program:
#include <iostream>
using namespace std;

int main() {
    float efficiency;
    cin >> efficiency;
    int i = 1;
    float tries = i * efficiency;
    while (tries > i) {
        i++;
        tries = i * efficiency;
    }
    cout << i << endl;
    return 0;
}
This program never terminates; it keeps looping inside the while. Any suggestions on what might be wrong would be really appreciated.
You recompute tries as i*efficiency after incrementing i, and efficiency is a percentage such as 42.9, not a fraction. That means tries grows efficiency times as fast as i, so whenever efficiency is greater than 1 the condition tries > i stays true forever and the loop never ends.
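One possible way to repair the logic (a sketch under the stated assumption that the percentage is rounded to one decimal place; the loop structure and variable names are mine) is to try shot counts n = 1, 2, 3, ... and, for each, check whether the goal count nearest to n*efficiency/100 rounds back to the given percentage:

#include <cmath>
#include <iostream>

int main() {
    double efficiency;                                  // e.g. 42.9, one decimal place
    std::cin >> efficiency;

    long long target = std::llround(efficiency * 10);  // work in tenths of a percent
    for (long long n = 1;; ++n) {
        long long g = std::llround(efficiency * n / 100.0);     // candidate goal count
        if (g >= 0 && g <= n && std::llround(1000.0 * g / n) == target) {
            std::cout << n << '\n';                     // smallest number of shots found
            break;
        }
    }
    return 0;
}

With 42.9 as input this stops at n = 7, matching the example in the statement.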

Finding the sequence so that the event is finished at the earliest

This is a problem from an informatics olympiad that I have been trying to solve for some time. It is important to me because it contains an underlying fundamental pattern that I see in a lot of problems.
There are N citizens at an event; each has to program on a single computer, eat chocolates and then eat doughnuts. The time the i-th citizen takes for each task is given as input. Each citizen has to finish the tasks in order, i.e., first program, then eat chocolate, and then eat doughnuts. Any number of people can eat chocolates or doughnuts at the same time, but since there is only one computer, only one person can program at a time. Once a person is done programming, they move on to chocolates and the next person starts programming. The task is to find the order in which the citizens should be sent to program so that the event ends in minimum time; this time is the output.
I worked this problem using the approach:
If I start with the i-th citizen, then, if t(n-1) is the time for the remaining n-1 citizens, t(n) = max(n_i[0] + n_i[1] + n_i[2], n_i[0] + t(n-1)). E.g.:
18 7 6
23 10 27
20 9 14
then the finish times are 18+7+6, 18+23+10+27 and 18+23+20+9+14, whose max is 84; but if you start with the 23 row (order 23, 20, 18), the time would be 74, which is less.
I implemented this approach in the code below. However, its complexity is O(n!). I can see repeated subproblems, so I could use a DP approach, but the problem is that I would need to store the time value for each sublist from i to j together with every possible starting citizen k in that range, and so on. This storage would again be complex and require close to n! entries. How do I solve this problem, and similar problems?
Here is my program for this approach:
#include <iostream>
#include <vector>
#include <climits>

// Try every citizen as the next one to program: O(N!) overall.
int min_time_sequence(std::vector<std::vector<int> > Info, int N)
{
    if (N == 0) return 0;
    if (N == 1)
        return Info[0][0] + Info[0][1] + Info[0][2];

    int best = INT_MAX;
    for (int i = 0; i < N; ++i)
    {
        // prepare the list without citizen i
        std::vector<std::vector<int> > tmp = Info;
        tmp.erase(tmp.begin() + i);
        int rest = min_time_sequence(tmp, N - 1);       // time for the remaining citizens
        int v1 = Info[i][0] + rest;                     // they all wait for i's programming
        int v2 = Info[i][0] + Info[i][1] + Info[i][2];  // i's own finishing time
        int larger = v1 > v2 ? v1 : v2;
        if (larger < best) best = larger;
    }
    return best;
}

int main()
{
    int N;
    std::cin >> N;
    std::vector<std::vector<int> > Info(N, std::vector<int>(3));
    for (int i = 0; i < N; ++i)
        std::cin >> Info[i][0] >> Info[i][1] >> Info[i][2];

    int mx = 0;
    if (N > 0)
        mx = min_time_sequence(Info, N);
    std::cout << mx << std::endl;
    return 0;
}
Since you asked for general techniques, you might want to look at greedy algorithms, that is, algorithms that repeatedly make the best next selection. In this case, that might be to send to the computer next the remaining person who will take the longest time after programming (the chocolate time plus the doughnut time), so he or she starts eating as early as possible and no one who starts later needs more time after leaving the computer.
If such an algorithm were optimal, the program could simply sort the list by that eating time, in decreasing order, which takes O(N log N) time.
You would, however, be expected to prove that your solution is valid. One way to do that is known as “Greedy Stays Ahead.” That is an inductive proof where you show that the solution your greedy algorithm produces is at least as optimal (by some measure equivalent to optimality at the final step) at its first step, then that it is also as good at its second step, the step after that, and so on. Hint: you might try measuring what is the worst-case scenario for how much time the event could need after each person starts programming. At the final step, when the last person gets to start programming, this is equivalent to optimality.
Another method to prove an algorithm is optimal is “Proof by Exchange.” This is a form of proof by contradiction in which you hypothesize that some different solution is optimal, then you show that exchanging a part of that solution with a part of your solution could improve the supposedly-optimal solution. That contradicts the premise that it was ever optimal—which proves that no other solution is better than this. So: assume the optimal order is different, meaning the last person who finishes started after someone else who took less time. What happens if you switch the positions of those two people?
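For concreteness, here is how the adjacent-swap comparison works out, using my own notation (a_i, b_i, c_i for the i-th person's programming, chocolate, and doughnut times, and P for the total programming time of everyone scheduled earlier); only the two people being swapped can change their finish times:

order (i, j):  max(P + a_i + b_i + c_i,  P + a_i + a_j + b_j + c_j)
order (j, i):  max(P + a_j + b_j + c_j,  P + a_j + a_i + b_i + c_i)

If b_i + c_i >= b_j + c_j, both arguments of the first maximum are at most P + a_i + a_j + b_i + c_i, which appears in the second maximum, so putting i (the longer eater) first is never worse.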
Greedy solutions are not always best, so in cases where they are not, you would want to look at other techniques, such as symmetry-breaking and pruning the search tree early.
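If that greedy does turn out to be optimal (which, as noted, you would still be expected to prove), a minimal sketch of the O(N log N) version could look like this; it sorts by decreasing eating time and then just accumulates the programming prefix:

#include <algorithm>
#include <array>
#include <iostream>
#include <vector>

int main()
{
    int N;
    std::cin >> N;
    std::vector<std::array<long long, 3>> t(N);   // programming, chocolate, doughnut times
    for (auto &row : t)
        std::cin >> row[0] >> row[1] >> row[2];

    // Longest (chocolate + doughnut) time goes to the computer first.
    std::sort(t.begin(), t.end(),
              [](const std::array<long long, 3> &a, const std::array<long long, 3> &b) {
                  return a[1] + a[2] > b[1] + b[2];
              });

    long long clockTime = 0, finish = 0;
    for (const auto &row : t) {
        clockTime += row[0];                                   // this person leaves the computer
        finish = std::max(finish, clockTime + row[1] + row[2]);
    }
    std::cout << finish << std::endl;
    return 0;
}

On the 3-citizen example in the question this prints 74.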

How can I find the number of consecutive sequences of various lengths that satisfy a particular property?

I am given an array A[] of N elements, which are positive integers. I have to find the number of consecutive sequences (subarrays) of lengths 1, 2, 3, ..., N that satisfy a particular property.
I have built an interval tree with O(n log n) complexity. Now I want to count the number of sequences that satisfy a certain property.
All the properties required for the problem are related to the sum of the sequences.
Note that an array has N*(N+1)/2 such sequences. How can I iterate over all of them in O(n log n) or O(n)?
Let k be the moving left index, going from 0 to N-1. For each k, the algorithm essentially looks for the minimum R that satisfies the condition (let's call it I); then every other subarray with L = k is also satisfied for R >= I (this is your short circuit). After you find I, simply output the results for (L = k, R >= I). This of course assumes that all the numbers in your set are >= 0.
To find I, for every k, begin at element k + (N-k)/2. Figure out whether the subarray defined by (L = k, R = k + (N-k)/2) satisfies your condition. If it does, decrement R until the condition is NOT met; then R + 1 is your minimum (you could print these results as you go, but in that case they would essentially be printed backwards). If (L = k, R = k + (N-k)/2) does not satisfy your condition, INCREMENT R until it does, and that becomes your minimum for that L = k. This cuts the search space for each L = k roughly in half, and as k increases toward N, the search space keeps shrinking.
// This declaration won't work unless N is either a constant or a MACRO defined above.
unsigned int myVals[N];
unsigned int Ndiv2 = N / 2;
unsigned int R;
for (unsigned int k = 0; k < N; k++) {
    if (TRUE == TESTVALS(myVals, k, Ndiv2)) {       // the midpoint subarray passes
        for (R = Ndiv2; R >= k; R--) {              // walk down to find the minimum R
            if (FALSE == TESTVALS(myVals, k, R)) {
                R++;                                // step back to the last passing R
                break;
            }
        }
    } else {                                        // the midpoint subarray fails
        for (R = Ndiv2; R < N; R++) {               // walk up until the condition holds
            if (TRUE == TESTVALS(myVals, k, R)) {
                break;
            }
        }
    }
    // PRINT ALL PAIRS for L = k, from R to N-1
    if ((k & 0x00000001) == 0) Ndiv2++;
} // END --> for (unsigned int k = 0; k < N; k++)
The complexity of the algorithm above is O(N^2): for each k in [0, N) (i.e., N iterations/tests) there are no more than about N/2 values that need testing. Big O notation isn't concerned with the factor of N/2, nor with the fact that the range truly gets smaller as k grows; it only captures the gross magnitude. So: up to N tests for each of N values, hence O(N^2).
There is an alternative approach which would be FASTER: whenever you move within the secondary (inner) for loops, move half the remaining distance instead of one step (i.e., binary search). This gets you to O(N log N): for each k in [0, N) (all of which still have to be tested), you run this half-distance search to find your minimum R in O(log N) time. As an example, say you have a 1000-element array; when k = 0, you essentially begin the search for the minimum R at index 500. If the test passes, instead of linearly moving downward from 500 to 0, you test 250. If the actual minimum R for k = 0 is 300, the tests to find it would look as follows:
R=500
R=250
R=375
R=312
R=280
R=296
R=304
R=300
While this is oversimplified, you are most likely going to have to fine-tune it and test 301 as well as 299 to make sure you're in the sweet spot. Another note: be careful when halving the step size when you have to move in the same direction more than once in a row.
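Since the question never states the actual property, here is an illustration under an assumed property of my own choosing, "the subarray sum is at least S". With positive elements the monotonicity that the approach above relies on holds, and the minimal R for each L can even be found with a two-pointer scan (a plain O(N) alternative to the half-distance search):

#include <iostream>
#include <vector>

int main() {
    int N;
    long long S;                    // assumed threshold: count subarrays with sum >= S
    std::cin >> N >> S;
    std::vector<long long> a(N);
    for (auto &x : a) std::cin >> x;

    // Elements are positive, so for a fixed L the sum over [L, R] only grows with R.
    // Therefore the minimal valid R never moves left as L moves right.
    long long count = 0, sum = 0;   // sum holds a[L] + ... + a[R-1]
    int R = 0;
    for (int L = 0; L < N; ++L) {
        while (R < N && sum < S) sum += a[R++];
        if (sum >= S) count += N - R + 1;   // end indices R-1, R, ..., N-1 all work
        sum -= a[L];
    }
    std::cout << count << '\n';
    return 0;
}

For example, with the array 1 2 3 and S = 3 this counts 4 subarrays ([1,2], [1,2,3], [2,3], [3]).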
#user1907531: First of all, if you are participating in an online contest of such importance at the national level, you should refrain from these cheap tricks to get ahead of other deserving participants. Second, a cheater like you is always a cheater, and all of this undermines the hard work of those who set the questions and of the competitors who, unlike you, play fair. Third, when #trumetlicks asked why you hadn't tagged the question as homework, you told another lie. And finally, I don't know how so many people could answer this question without knowing its origin/website/source; this surely can't have been given by a teacher as homework in any Indian school. To tell everyone: this cheater asked for the complete solution to a running collegiate contest in India six hours before the contest ended, surely got a lot of direct help, and on top of that invited hundreds of others to cheat from the answers given here. So, good luck to all these cheaters.