I am solving the LeetCode problem Search in Rotated Sorted Array in order to learn binary search better. The problem statement is:
There is an integer array nums sorted in ascending order (with distinct values). Prior to being passed to your function, nums is possibly rotated at an unknown pivot index. For example, [0,1,2,4,5,6,7] might be rotated at pivot index 3 and become [4,5,6,7,0,1,2]. Given the array nums after the possible rotation and an integer target, return the index of target if it is in nums, or -1 if it is not in nums.
With some online help, I came up with the solution below, which I mostly understand:
class Solution {
public:
    int search(vector<int>& nums, int target) {
        int l = 0, r = nums.size() - 1;
        while (l < r) { // 1st loop; how is BS applicable here, since array is NOT sorted?
            int m = l + (r - l) / 2;
            if (nums[m] > nums[r]) l = m + 1;
            else r = m;
        }
        // cout << "Lowest at: " << r << "\n";
        if (nums[r] == target) return r; // target == lowest number
        int start, end;
        if (target <= nums[nums.size() - 1]) {
            start = r;
            end = nums.size() - 1;
        } else {
            start = 0;
            end = r;
        }
        l = start, r = end;
        while (l < r) {
            int m = l + (r - l) / 2;
            if (nums[m] == target) return m;
            if (nums[m] > target) r = m;
            else l = m + 1;
        }
        return nums[l] == target ? l : -1;
    }
};
My question: In the first while loop, are we searching over something like a parabola, trying to find its lowest point, unlike the sorted array in traditional binary search? Are we finding the minimum of a convex function? I understand how the values of l, m and r change and lead to the right answer, but I do not fully follow how we can be guaranteed that, when nums[m] > nums[r], the lowest value must lie to the right of m.
You actually skipped something important by “getting help”.
Once, when I was struggling to integrate something tricky for Calculus Ⅰ, I went for help, and the advisor said, “Oh, I know how to do this” and solved it. I learned nothing from him. It took me another week of going over it (and other problems) myself to understand it well enough that I could do it myself.
The purpose of these assignments is to solve the problem yourself. Even if your solution is faulty, you will have learned more than by simply reading and understanding the basics of one example problem someone else has solved.
In this particular case...
Since you already have a solution, let’s take a look at it: Notice that it contains two binary search loops. Why?
As you observed at the beginning, the offset shift makes the array discontinuous (not convex). However, the subarrays on either side of the discontinuity remain monotonic.
Take a moment to convince yourself that this is true.
Knowing this, what would be a good way to find and determine which of the two subarrays to search?
Hints:
A binary search, as n ⟶ ∞, is O(log n)
O(log n) ≡ O(2 log n)
I should also point out that the prompt gives an arithmetic progression with a common difference of 1 as its example, but the prompt itself imposes no such restriction. All it says is that you start with a strictly increasing sequence (no duplicate values). You could have [19 74 512 513 3 7 12] as input.
Does the supplied solution handle this possibility?
Why or why not?
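One way to check is simply to run it. Here is a minimal test harness (the main function and the test values are mine; the search body is the one from the question, copied as a free function):

#include <iostream>
#include <vector>
using namespace std;

// The search routine from the question, copied as a free function.
int search(vector<int>& nums, int target) {
    int l = 0, r = nums.size() - 1;
    while (l < r) {                       // find the index of the smallest element
        int m = l + (r - l) / 2;
        if (nums[m] > nums[r]) l = m + 1;
        else r = m;
    }
    if (nums[r] == target) return r;
    int start, end;
    if (target <= nums[nums.size() - 1]) { start = r; end = nums.size() - 1; }
    else                                 { start = 0; end = r; }
    l = start, r = end;
    while (l < r) {
        int m = l + (r - l) / 2;
        if (nums[m] == target) return m;
        if (nums[m] > target) r = m;
        else l = m + 1;
    }
    return nums[l] == target ? l : -1;
}

int main() {
    // A rotation of a strictly increasing sequence that is NOT an arithmetic progression.
    vector<int> nums = {19, 74, 512, 513, 3, 7, 12};
    for (int target : {19, 74, 512, 513, 3, 7, 12, 100})
        cout << "search(" << target << ") = " << search(nums, target) << "\n";
}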
Here is a recursive function that traverses a map of strings (multimap<string, string> graph). It checks itr->second (s_tmp); if s_tmp is equal to the desired string (Exp), it prints itr->first and the function is executed again for that itr->first.
string findOriginalExp(string Exp){
    cout << "*****findOriginalExp Function*****" << endl;
    string str;
    if (graph.empty()) {
        str = "map is empty";
    } else {
        for (auto itr = graph.begin(); itr != graph.end(); itr++) {
            string s_tmp = itr->second;
            string f_tmp = itr->first;
            string nll = "null";
            //s_tmp.compare(Exp) == 0
            if (s_tmp == Exp) {
                if (f_tmp.compare(nll) == 0) {
                    cout << Exp << " :is original experience.";
                    return Exp;
                } else {
                    return findOriginalExp(itr->first);
                }
            } else {
                str = "No element is equal to Exp.";
            }
        }
    }
    return str;
}
There seem to be no rules for stopping, and it looks completely random to me. How is the time complexity of this function calculated?
I am not going to analyse your function, but instead try to answer in a more general way. It seems like you are looking for a simple expression such as O(n) or O(n^2) for the complexity of your function. However, complexity is not always that simple to estimate.
In your case it strongly depends on the contents of graph and on what the user passes as the parameter.
As an analogy consider this function:
int foo(int x){
    if (x == 0) return x;
    if (x == 42) return foo(42);
    if (x > 0) return foo(x-1);
    return foo(x/2);
}
In the worst case it never returns to the caller. If we ignore x >= 42, then the worst-case complexity is O(n), with n = |x|. This alone isn't that useful information for the user. What I really need to know as a user is:
Don't ever call it with x >= 42.
O(1) if x==0
O(x) if x>0
O(log|x|) if x < 0 (each call halves x toward zero)
Now try to make similar considerations for your function. The easy case is when Exp is not in graph; in that case there is no recursion. I am almost sure that for the "right" input your function can be made to never return. Find out what those cases are and document them. In between, you have cases that return after a finite number of steps. If you have no clue at all how to get at them analytically, you can always set up a benchmark and measure. Measuring the runtime for input sizes 10, 50, 100, 1000, ... should be sufficient to distinguish between linear, quadratic and logarithmic dependence.
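If you go the measuring route, a rough sketch of such a benchmark might look like this (the chain-shaped test data, the buildChain helper and the sizes are my own hypothetical choices, not part of your code; comment out the cout calls inside findOriginalExp while timing, or the I/O will dominate):

#include <chrono>
#include <iostream>
#include <map>
#include <string>
using namespace std;

// Assumed to be compiled together with the code from the question:
extern multimap<string, string> graph;
string findOriginalExp(string Exp);

// Hypothetical helper: builds the chain null -> e0 -> e1 -> ... -> e(n-1),
// so findOriginalExp("e(n-1)") has to follow n links back to the root.
void buildChain(int n) {
    graph.clear();
    graph.insert({"null", "e0"});
    for (int i = 1; i < n; ++i)
        graph.insert({"e" + to_string(i - 1), "e" + to_string(i)});
}

int main() {
    for (int n : {10, 50, 100, 1000}) {
        buildChain(n);
        auto start = chrono::steady_clock::now();
        findOriginalExp("e" + to_string(n - 1));
        auto stop = chrono::steady_clock::now();
        cout << "n = " << n << ": "
             << chrono::duration_cast<chrono::microseconds>(stop - start).count()
             << " us\n";
    }
}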
PS: Just a tip: Don't forget what the code is actually supposed to do and what time complexity is needed to solve that problem (often it is easier to discuss that in an abstract way rather than diving too deep into code). In the silly example above the whole function can be replaced by its equivalent int foo(int){ return 0; } which obviously has constant complexity and does not need to be any more complex than that.
This function takes a directed graph and a vertex in that graph, and chases the edges going into it backwards to find a vertex with no edge pointing into it. The operation of finding the vertex "behind" any given vertex takes O(n) string comparisons, where n is the number of k/v pairs in the graph (this is the for loop). It does this m times, where m is the length of the path it must follow (which it does through the recursion). Therefore, it performs O(m * n) string comparisons, where n is the number of k/v pairs and m is the length of the path.
Note that there's generally no such thing as "the" time complexity for just some function you see written in code. You have to define which variables you want to describe the time in terms of, and also the operations with which you want to measure the time. E.g. if we want to write this purely in terms of n, the number of k/v pairs, you run into a problem, because if the graph contains a suitably placed cycle, the function doesn't terminate! If you further constrain the graph to be acyclic, then the maximum length of any path is constrained by m < n, and then you can also get that this function does O(n^2) string comparisons for an acyclic graph with n edges.
You should approximate the control flow of the recursive calls by using a recurrence relation. It's been about 30 years since I took college classes in Discrete Math, but generally you write something like pseudocode, just enough to see how many calls there are. In some cases just counting how many recursive calls appear in the most expensive case is useful, but you generally need to plug one expansion back into the recurrence and from that derive a polynomial or power relationship.
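To make that concrete for the function above, here is one possible recurrence (my own sketch, under the same assumptions as the previous answer: each call scans all n k/v pairs and then makes at most one recursive call, along a path of length m):

T(m) = T(m-1) + c*n,  with T(0) = c0
T(m) = c*n + c*n + ... + c*n  (m terms) + c0 = c*n*m + c0 = O(m*n)

which agrees with the O(m * n) string comparisons derived above.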
I am given
struct point
{
    int x;
    int y;
};
and the table of points:
point tab[MAX];
The program should return the minimal distance between the centers of gravity of any possible pair of subsets from tab. A subset can be of any size (of course >= 1 and < MAX).
I am obliged to write this program using recursion.
So my function will be of int type, because I have to return an int.
I globally set a variable min (because while doing the recursion I have to compare some values with this min):
int min = 0;
My function should, for sure, take the number of elements I have added, the sum of the Y coordinates, and the sum of the X coordinates:
int return_min_distance(int sY, int sX, int number, bool iftaken[])
I would be glad for any further help.
I thought about another table of bools, which I pass as a parameter, to determine whether or not I have taken a value from the table. Still, my problem is how to implement this; I do not know how to even start.
I think you need a function that can iterate through all subsets of the table, starting with either nothing or an existing iterator. The code then gets easy:
int min_distance = MAXINT;
SubsetIterator si1(0, tab);
while (si1.hasNext())
{
    SubsetIterator si2(&si1, tab);
    while (si2.hasNext())
    {
        int d = subsetDistance(tab, si1.subset(), si2.subset());
        if (d < min_distance)
        {
            min_distance = d;
        }
    }
}
The SubsetIterators can be simple base-2 numbers of MAX bits, where a 1 bit indicates membership in the subset. Yes, it's an O(N^2) algorithm in N, the number of subsets, but I think it has to be.
The trick is incorporating recursion. Sorry, I just don't see how it helps here. If I can think of a way to use it, I'll edit my answer.
Update: I thought about this some more, and while I still can't see a use for recursion, I found a way to make the subset processing easier. Rather than run through the entire table for every distance computation, the SubsetIterators could store precomputed sums of the x and y values for easy distance computation. Then, on every iteration, you subtract the values that are leaving the subset and add the values that are joining. A simple bit-and operation can reveal these. To be even more efficient, you could use Gray coding instead of ordinary binary counting for the membership bitmap. This would guarantee that at each iteration exactly one value enters and/or leaves the subset. Minimal work.
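For illustration only, here is a minimal brute-force sketch of the bitmask idea (my own, not the SubsetIterator class sketched above, and it uses loops plus a double return value rather than the recursion and int return the assignment asks for): each mask encodes one subset, a set bit marks membership, and the coordinate sums are computed once per subset so the pairwise centroid distances are cheap. The full set is skipped because subsets must have size < MAX.

#include <algorithm>
#include <cfloat>
#include <cmath>
#include <vector>

struct point { int x; int y; };

// Brute-force sketch: every bitmask in [1, 2^n - 2] encodes one non-empty proper subset.
// Only feasible for small n, since the number of subsets grows as 2^n.
double minCentroidDistance(const std::vector<point>& tab) {
    const int n = static_cast<int>(tab.size());
    const int total = 1 << n;          // number of bitmasks
    const int last  = total - 1;       // mask of the full set (excluded: size must be < MAX)

    std::vector<double> cx(total), cy(total);
    for (int mask = 1; mask < last; ++mask) {
        long sx = 0, sy = 0;
        int cnt = 0;
        for (int i = 0; i < n; ++i)
            if (mask & (1 << i)) { sx += tab[i].x; sy += tab[i].y; ++cnt; }
        cx[mask] = static_cast<double>(sx) / cnt;   // center of gravity of this subset
        cy[mask] = static_cast<double>(sy) / cnt;
    }

    double best = DBL_MAX;
    for (int a = 1; a < last; ++a)
        for (int b = a + 1; b < last; ++b) {
            const double dx = cx[a] - cx[b], dy = cy[a] - cy[b];
            best = std::min(best, std::sqrt(dx * dx + dy * dy));
        }
    return best;
}

Note that there are 2^MAX - 2 admissible subsets, so the pairwise comparison is only practical for small MAX.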
This is the only question on my final review that I'm still uncertain about. I've figured all of the other 74 out, but this one is completely stumping me. I think it has something to do with finding C and k, but I don't remember how to do this or what it even means... and I may not even be on the right track there.
The question I'm encountering is "What is the minimum acceptable value for N such that the definition for O(f(N)) is satisfied for member function Heap::Insert(int v)?"
The code for Heap::Insert(int v) is as follows:
void Insert(int v)
{
    if (IsFull()) return;
    int p = ++count;
    while (H[p/2] > v) {
        H[p] = H[p/2];
        p /= 2;
    }
    H[p] = v;
}
The possible answers given are: 32, 64, 128, 256.
I'm completely stumped and have to take this exam in the morning. Help would be immensely appreciated.
I admit the question is quite obscure, but I will try to give a reasonable explanation.
If we call f(N) the time complexity of the operation executed by your code, as a function of the number of elements in the heap, the professor wanted you to remember that f(N) = O(log(N)) for a binary heap insert, that is, O(h), where h is the height of the heap and we assume it to be complete (remember how a heap works and that it can be represented as a binary tree). Thus, you have to try those four values of Nmin and find the smallest one that satisfies the definition, i.e. the one for which
f(N) <= k*log(N)
for every N >= Nmin and for at least one constant k. I would give you the details for calculating f(N), if only your code did what the professor or you expected it to do.
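For reference, since the question mentions not remembering what C and k mean: the definition being applied is the standard one (the asker's C is the k used above),

f(N) = O(g(N))  if and only if  there exist constants k > 0 and Nmin such that
f(N) <= k * g(N)  for every N >= Nmin,

and here g(N) = log(N), the expected cost of a binary heap insert.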
Note: I'd really love a LaTeX render over Stack Overflow questions! Like on Math
I am asking for your ideas regarding this problem:
I have one array A, with N elements of type double (or alternatively integer). I would like to find an algorithm with complexity less than O(N^2) to find:
max A[i] - A[j]
For 1 < j <= i < n. Please notice that there is no abs(). I thought of:
dynamic programming
dichotomic method, divide and conquer
some treatment after a sort keeping track of indices
Would you have some comments or ideas? Could you point me to some good references to practice with, or to otherwise make progress at solving such algorithm questions?
Make three sweeps through the array. First sweep from j=2 up, filling an auxiliary array a with the minimal element so far. Then sweep from the top, i=n-1, down, filling (also from the top down) another auxiliary array, b, with the maximal element so far (from the top). Now sweep over both auxiliary arrays, looking for the maximal difference b[i]-a[i].
That will be the answer. O(n) in total. You could say it's a dynamic programming algorithm.
edit: As an optimization, you can eliminate the third sweep and the second array, and find the answer in the second sweep by maintaining two loop variables, max-so-far-from-the-top and max-difference.
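A minimal sketch of the three-sweep version (my own transcription of the description above, using 0-based indexing over the whole array for simplicity and keeping the names a and b from the answer):

#include <algorithm>
#include <vector>

// Three sweeps: prefix minima, suffix maxima, then the best difference b[i] - a[i].
double maxDifference(const std::vector<double>& A) {
    const int n = static_cast<int>(A.size());   // assumes n >= 1
    std::vector<double> a(n), b(n);

    a[0] = A[0];                                 // a[i] = min of A[0..i]
    for (int i = 1; i < n; ++i) a[i] = std::min(a[i - 1], A[i]);

    b[n - 1] = A[n - 1];                         // b[i] = max of A[i..n-1]
    for (int i = n - 2; i >= 0; --i) b[i] = std::max(b[i + 1], A[i]);

    double best = b[0] - a[0];                   // best A[i] - A[j] with j <= i
    for (int i = 1; i < n; ++i) best = std::max(best, b[i] - a[i]);
    return best;
}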
As for "pointers" about how to solve such problems in general, you usually try some general methods just like you wrote - divide and conquer, memoization/dynamic programming, etc. First of all look closely at your problem and concepts involved. Here, it's maximum/minimum. Take these concepts apart and see how these parts combine in the context of the problem, possibly changing order in which they're calculated. Another one is looking for hidden order/symmetries in your problem.
Specifically, fixing an arbitrary inner point k along the list, this problem is reduced to finding the difference between the minimal element among all js such that 1<j<=k, and the maximal element among is: k<=i<n. You see divide-and-conquer here, as well as taking apart the concepts of max/min (i.e. their progressive calculation), and the interaction between the parts. The hidden order is revealed (k goes along the array), and memoization helps save the interim results for max/min values.
The fixing of an arbitrary point k could be seen as solving a smaller sub-problem first ("for a given k..."), and then seeing whether there is anything special about it so that the fixing can be dropped, that is, generalized, abstracted over.
There is a technique of trying to formulate and solve a bigger problem first, such that the original problem is a part of this bigger one. Here, we think of finding all the differences for each k, and then finding the maximal one among them.
The double use of interim results (used both in the comparison for a specific point k, and in calculating the next interim result, each in its own direction) usually means considerable savings. So:
divide-and-conquer
memoization / dynamic programming
hidden order / symmetries
taking concepts apart - seeing how the parts combine
double use - find parts with double use and memoize them
solving a bigger problem
trying arbitrary sub-problem and abstracting over it
This should be possible in a single iteration. max(a[i] - a[j]) for 1 < j <= i should be the same as max[i=2..n](a[i] - min[j=2..i](a[j])), right? So you'd have to keep track of the smallest a[j] while iterating over the array, looking for the largest a[i] - min(a[j]). That way you only have one iteration and j will be less than or equal to i.
You just need to go over the array, find the max and min, and then get the difference, so the worst case is linear time. If the array is sorted, you can find the diff in constant time, or am I missing something?
A Java implementation that runs in linear time:
public class MaxDiference {
    public static void main(String[] args) {
        System.out.println(betweenTwoElements(2, 3, 10, 6, 4, 8, 1));
    }

    private static int betweenTwoElements(int... nums) {
        int maxDifference = nums[1] - nums[0];
        int minElement = nums[0];
        for (int i = 1; i < nums.length; i++) {
            if (nums[i] - minElement > maxDifference) {
                maxDifference = nums[i] - minElement;
            }
            if (nums[i] < minElement) {
                minElement = nums[i];
            }
        }
        return maxDifference;
    }
}
Guys, I'm working on a class called LINT (large int) for learning purposes, and everything went OK till now. I'm stuck on implementing operator/(const LINT&). The problem is that when I want to divide a LINT by a LINT I get into a recursive function invocation, i.e.:
//unfinished
LINT_rep LINT_rep::divide_(const LINT_rep& bottom) const
{
    typedef LINT_rep::Iterator iter;
    iter topBeg = begin();
    iter topEnd = end();
    iter bottomBeg = bottom.begin();
    iter bottomEnd = bottom.end();
    LINT_rep topTmp; //for storing smallest number (dividend) which can be used to divide by divisor
    while (topBeg != topEnd)
    {
        topTmp.insert_(*topBeg); //Number not large enough: add another digit
        if (topTmp >= bottom)
        {   //ok number >= we can divide
            LINT_rep topShelf = topTmp / bottom; //HERE I'M RUNNING INTO TROUBLE
        }
        else
        {
        }
        ++topBeg;
    }
    return LINT_rep("-1"); //DUMMY
}
What I'm trying to do is to implement this as if I were dividing those numbers by hand, so, for example, with 1589 as the dividend and 27 as the divisor, I would go like so:
check if the first digit is >= the divisor and, if so, divide
if not, add another digit to the first digit and check again whether it is >= the divisor
At some point it will be bigger (in the simplified scenario), and then I have to divide, but at that point I'm running into a recursive call and I have no idea how to break it.
One note: as a tmp I have to use a LINT instead of, for example, an int, because those numbers may not fit into an int.
So generally what I'm asking is: is there any other way to do division? Or maybe there is a flaw in my logic (quite possible).
Thank you.
When doing your part (1) you can't divide; you have to repeatedly subtract, or guess a multiple to subtract, just like when you do it by hand. You can 'guess' more effectively by setting upper and lower bounds for the multiple required and doing a binary chop through the range.
I've done a similar thing myself; it's a handy exercise to practice operator overloading. I can supply a snippet of code if you like, although it uses arrays and half-baked exceptions so I hesitate to offer it up before the expert readers of this site.
First, please don't work on such a class. Use CGAL's big int; there was also some Boost bigint submission, I think, and there are about three or four other popular implementations.
Second, the division algorithm is described here: http://en.wikipedia.org/wiki/Long_division
[EDIT] Correct way to do it:
Digit k of the result (C):
If the first digit (from the left) of A, call it A[nA-1], is smaller than B[nB-1], write zero into C[k] and do k-- (move to the next digit).
Otherwise, you seek the maximum digit C[k] such that C[k]*B*10^k <= A. That is done in a loop. (Actually, the previous sentence is a special case of this one.) But it is not yet finished: you do A -= C[k]*B*10^k (the subtracted part was zero in the other case). Only then,
k-- (next digit). Loop until k == 0.
No need for recursion. Just two nested loops.
One loop for k (a digit of the result), one loop for finding each digit, and one loop (near it) for subtracting (the -= operator).
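To illustrate that loop structure without recursion, here is a minimal self-contained sketch (my own, operating on plain decimal-digit strings rather than your LINT_rep, and finding each quotient digit by repeated subtraction; the inner while could equally be a binary chop over the digits 0..9, as the first answer suggests):

#include <iostream>
#include <string>

// Compare two non-negative decimal strings (no leading zeros except "0").
static bool lessThan(const std::string& a, const std::string& b) {
    if (a.size() != b.size()) return a.size() < b.size();
    return a < b;
}

// a - b for decimal strings, assuming a >= b.
static std::string subtract(std::string a, const std::string& b) {
    int borrow = 0;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    for (; i >= 0; --i, --j) {
        int d = (a[i] - '0') - borrow - (j >= 0 ? b[j] - '0' : 0);
        borrow = d < 0;
        if (d < 0) d += 10;
        a[i] = char('0' + d);
    }
    size_t nz = a.find_first_not_of('0');
    return nz == std::string::npos ? "0" : a.substr(nz);
}

// Schoolbook long division: bring down one digit of the dividend at a time,
// then repeatedly subtract the divisor to find the current quotient digit.
static std::string divide(const std::string& top, const std::string& bottom) {
    std::string quotient, rem = "0";
    for (char c : top) {
        rem = (rem == "0" ? std::string() : rem) + c;   // "bring down" the next digit
        int digit = 0;
        while (!lessThan(rem, bottom)) {                // repeated subtraction, at most 9 times
            rem = subtract(rem, bottom);
            ++digit;
        }
        quotient += char('0' + digit);
    }
    size_t nz = quotient.find_first_not_of('0');
    return nz == std::string::npos ? "0" : quotient.substr(nz);
}

int main() {
    std::cout << divide("1589", "27") << "\n";   // prints 58
}

A LINT_rep version would do the same thing with its own digit storage and operator-=, so operator/ never has to call itself.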