Okay so I need some help with a homework problem I have in my data structures class. We are to code a recursive function for the following.
g(2x) = 2g(x) / (1 + g^2(x)), for -1 <= x <= 1. For |x| < E (epsilon), with E = 10^-6, use g(x) = x + x^3/6. Code it for dx = 10^-1.
I honestly don't know how to code this recursively. Once we have running code we are then to write out the time complexity in big-O notation, but I'm still stuck on step one. Any help with explanations would be greatly appreciated, as I know a problem along these lines will be on the final.
You can use a while loop controlled by epsilon (or implement it through a recursive call). Once the argument is close enough to zero (within 10^-6), the series approximation applies directly, and you stop.
And the complexity of such a program is loosely linked to its cyclomatic complexity (that is to say, the minimal number of nested loops or recursive calls you need to express your algorithm). In your case, the complexity is O(n) (if you only need one loop) or O(n^2) (if you need two nested loops).
Note that dx is the step size of your approximation.
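To make that concrete, here is a minimal Python sketch of the recursive approach (my reading of the assignment: halve the argument until |x| < E, use the series there, then undo the halving with the doubling identity on the way back up):

EPS = 1e-6   # E in the assignment

def g(x):
    # Base case: for |x| < E the series approximation is considered accurate enough.
    if abs(x) < EPS:
        return x + x**3 / 6
    # Otherwise halve the argument and apply the identity
    # g(2y) = 2*g(y) / (1 + g(y)^2) with y = x/2.
    half = g(x / 2)
    return 2 * half / (1 + half * half)

# Tabulate g over -1 <= x <= 1 with step dx = 0.1.
dx = 0.1
x = -1.0
while x <= 1.0 + 1e-9:
    print(f"g({x:+.1f}) = {g(x):.6f}")
    x += dx

Each call halves x, so the recursion depth for x = 1 is about log2(1/E), roughly 20 levels.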
I'm working on finding the maximum value of a list (by using maximum), and I know that this function needs to process the entire list to reach its goal, which obviously gets slower as the list gets bigger. Unfortunately, my list is huge (hundreds of millions of elements).
Is this the only way, or are there faster ways to do this? I found that Haskell is fast, but at this point (with it getting slower), I'm wondering whether there is any other option for finding the maximum.
I know that this function needs to process the entire list to reach its goal, which obviously gets slower as the list gets bigger.
Finding a maximum is O(n) in Haskell (and presumably every other language as well). Unless you have some concrete benchmarks and code this seems totally expected.
Since you need to "look" at each value and pick the highest, O(n) is the best solution (without any other information that could be used). If you have to perform the operation multiple times, then you might want to sort your list (ascending or descending) and use the function head or last, which have a time complexity of O(1); Haskell's sort is a mergesort with O(n log n) worst-case and average-case performance.
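The thread is about Haskell, but the trade-off is language-independent; here is a small Python sketch of the same idea (the data here is just for illustration):

data = [5, 3, 9, 1, 7]

# One-off maximum: O(n) - every element has to be looked at once.
print(max(data))       # 9

# If you need the maximum (or minimum) repeatedly, pay O(n log n) once to
# sort, then each query is O(1).
data.sort()            # ascending
print(data[-1])        # maximum in O(1)
print(data[0])         # minimum in O(1)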
Algorithm in Bottom up approach
a[0] = 0, a[1] = 1
integer fibo(n)
    if a[n] == null
        a[n] = fibo(n-1) + fibo(n-2)
    return a[n]
How does this algorithm have a time complexity of O(N)?
For n = 5 it makes 8 calls.
Passes of the Fibonacci series in the bottom-up approach:
fibo(5) makes 8 calls going from the top down, and in my view another 8 calls returning from the bottom back to the top, so the total is 8 + 8 = 16. So how the time complexity is O(N) is unclear to me.
I found many similar questions answered here, but none of them relates to what I'm asking.
Some of these are:
Time Complexity of Fibonacci Series
Time Complexity of Fibonacci Algorithm
Any help would be appreciated. Thanks.
There are a couple of quick things to mention before answering your question about the time complexity. The reason for this is that the time complexity at least partially depends on these answers.
First, there seems to be a bug in your program: you have an array 'a' for the base conditions (Fibonacci numbers 0 and 1) and some array 'm' which is set in the fibo function but never used again. More importantly, when you reach n=1 or n=0, you return the value of m[n], which is entirely unknown. So, I'm going to assume the algorithm is rewritten as follows:
a[0] = 0, a[1] = 1
integer fibo(n)
    if a[n] == null
        a[n] = fibo(n-1) + fibo(n-2)
    return a[n]
Okay, second problem. Let's assume that a is always defined as at least n+1 integers; there needs to be enough room for the incoming data. This is important because C++ will let you overwrite values at the (n+1)th index. It's out of bounds and wrong, but C++ doesn't give you those sorts of protections. It is up to you as the programmer to verify boundary conditions like that. (I'm assuming C++ because this is tagged with c++. The code looks more like Python, which has its wrap-around indices, which are problematic on their own.)
Third, let's assume that you don't start with a new array 'a' for each run of the algorithm. This is important because if a stores already-calculated values then you will save time on calculation by not having to re-evaluate those values. That time savings is a great thing even if it won't affect how I calculate time complexity.
Great. Let's get started with your question. Let's use the image below to answer it. When you start the algorithm at n you are going to make two recursive calls for fibo(n-1) and fibo(n-2) BUT they do not happen simultaneously. Instead the first call for fibo(n-1) takes place and must be 100% complete before the second call for fibo(n-2) begins. That call is represented by the green line from n-1 on the nth line to the n-1th line.
Now, those green lines apply to each recursion down the line until you reach the fibo(1) call. That call terminates early because a[n] is NOT null. Finally the second call for fibo(0) is executed and it also terminates early because a[n] is not null. Okay, so much for the first set of recursive calls.
As each recursive call returns, the second call (represented by the orange broken line) is made, but a[n] is no longer null, so that call terminates early and the call returns up to the next layer.
So, let's count the number of calls. From n to 1 is n-1 recursive calls. At the end there is one additional call to fibo(0) so that is n recursive calls. Then on the way up there are n-2 additional calls which terminate early. So, altogether we have 2n-2 calls which is O(n).
Of course, if you call fibo(k) and then fibo(k+x) you will only need to do the first 2x calls because everything from fibo(k) down is already known. It is a considerable savings after the initial investment. Any questions?
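To make the counting concrete, here is a runnable Python version of the rewritten algorithm with a call counter (using a dict as the cache is my own simplification, since the pseudocode leaves the array handling open):

calls = 0   # how many times fibo() is entered in total

def fibo(n, a):
    global calls
    calls += 1
    if n not in a:                       # the "a[n] == null" check
        a[n] = fibo(n - 1, a) + fibo(n - 2, a)
    return a[n]

memo = {0: 0, 1: 1}                      # a[0] = 0, a[1] = 1
print(fibo(5, memo))                     # 5
print(calls)                             # 9: the top-level call plus the 2n-2 = 8 recursive calls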
Regarding O(2n) = O(n), that is a good follow-up. Big-O complexity rules say that we are interested in the order of magnitude when comparing efficiency. So, suppose that you were looking at n = 1000. O(n) = 1000 and O(2n) = 2000, but O(n^2) = 1,000,000. O(n) is more or less the same as O(2n), but if you compare either with O(n^2), that is a huge difference. Similarly, O(n+1) = 1001 isn't much different from O(n). So, in general we say that the leading term, the most important value in the equation, is what matters. We aren't really interested in extra terms, and we aren't really interested in specific coefficients, because they don't really affect the outcome.
If you still have questions, see this site for some additional information.
https://justin.abrah.ms/computer-science/big-o-notation-explained.html
int func(int n) {
    if (n == 1)
        return 0;
    else
        return sqrt(n);
}
Where sqrt(n) is a C math.h library function.
O(1)
O(lg n)
O(lg lg n)
O(n)
I think that the running time entirely depends on the sqrt(n). However, I don't know how this function is actually implemented.
P.S. The general approach towards finding the square root of a number that I know of is using Newton's method. If I am not wrong, the time complexity using Newton's method turns out to be O(lg n). So should the answer be O(lg n)?
P.P.S. Got this question in a recent test that I appeared for.
I am going to give a somewhat more general answer, without assuming a constant-size int.
The answer is Theta(log n).
We know Newton-Raphson is Theta(log n) - that excludes Theta(n) (assuming sqrt() is as efficient as it can be).
However, a general number n requires log_2(n) bits to encode - and you need to read all of them in order to get an accurate sqrt(). This excludes Theta(1) and Theta(log(log(n))).
From the above, we know that the complexity of the function is Theta(log(n)).
As a side note, since O(log(n)) is a subset of O(n), it is also a valid answer, though not a tight one. For more information about big Theta and big O and their differences, you might want to have a look at this thread.
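To make the Newton-Raphson reference concrete, here is the textbook integer-square-root iteration as a Python sketch (only an illustration; this is not necessarily how any particular math.h sqrt is implemented, which usually ends up as a hardware instruction):

def isqrt_newton(n):
    # Returns floor(sqrt(n)) for a non-negative integer n.
    if n < 2:
        return n
    x = n                         # initial guess
    while True:
        y = (x + n // x) // 2     # Newton step for f(x) = x^2 - n
        if y >= x:                # converged: x is floor(sqrt(n))
            return x
        x = y

print(isqrt_newton(10**12))       # 1000000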
This depends on the implementation of sqrt and also on what kind of time complexity you are interested.
I would say you can consider it to be "constant", so O(1), in this sense: if you put in a random int, it will on average take the same amount of time. (Reason: numbers with many digits are much more common.)
But have a look here. Another possible answer is O(M(n)), where M(n) is the complexity of a multiplication and n is the number of digits in your integer.
This looks like a textbook question and is perhaps meant to be a trap. The teacher perhaps wants to check whether you can distinguish between computing sqrt for a list of numbers (which would be O(n)) and for a single number (which would be O(1)).
Be aware that the "correct" answer often also depends on the context in which it is asked.
Let n = 2^m.
Given T(n) = 2T(sqrt(n)) + 1,
    T(2^m) = 2T(2^(m/2)) + 1.
Let T(2^m) = S(m); then
    S(m) = 2S(m/2) + 1.
Using the master theorem,
    S(m) = Theta(m) = Theta(log(n)).
Hence, the time complexity is Theta(log(n)).
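A quick numeric sanity check of that solution, computing the recurrence directly with unit cost per call (a rough Python sketch under that assumption):

import math

def work(n):
    # T(n) = 2*T(sqrt(n)) + 1, stopping once n drops below 2.
    if n < 2:
        return 0
    return 1 + 2 * work(math.sqrt(n))

for n in (2**8, 2**16, 2**32, 2**64):
    print(n, work(n), 2 * math.log2(n))   # work(n) tracks 2*log2(n) - 1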
I've read that C++ uses introsort (introspective sort) for its built-in std::sort where it starts off with quicksort and switches to heapsort when you hit the depth limit.
I've also read that the depth limit is supposed to be 2*log(2,N).
Is this value purely experimental? Or is there some mathematical theory behind it?
If you have an interval (range or array), the number of times you'll have to split the interval in half before you end up with an empty (or one-element) interval is log(2,N); that's just a mathematical fact, which you can work out easily if you want. If all goes perfectly well with quicksort, it should recurse log(2,N) times, for the same reason (and at each recursion level, it has to process all values of the interval, which leads to O(N*log(2,N)) complexity for the overall algorithm). The problem is that quicksort could require many more recursions (if it keeps getting "unlucky" with picking pivot values, which means that it doesn't split the interval in half, but in an imbalanced way instead). At worst, quicksort could end up recursing N times, which is definitely not acceptable for a production-quality implementation.
Switching to heap-sort at 2*log(2,N) is just a good heuristic in general, to detect an excessively deep recursion.
Technically, you could base this on the empirical performance of heap-sort versus quick-sort, to figure out which limit is best. But such tests are highly dependent on the application (what are you sorting? how are you comparing elements? how cheap are the element swaps? etc.). So, most one-size-fits-all implementations, like std::sort, just pick a reasonable limit like 2*log(2,N).
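For illustration, here is a rough Python sketch of the introsort idea (not the actual std::sort code, and with a deliberately naive pivot choice):

import heapq
import math

def introsort(a):
    # Quicksort, but fall back to heapsort once the depth exceeds 2*log2(N).
    max_depth = 2 * int(math.log2(len(a))) if a else 0
    _sort(a, 0, len(a), max_depth)

def _sort(a, lo, hi, depth):
    if hi - lo <= 1:
        return
    if depth == 0:
        # Depth limit hit: heapsort the slice to guarantee O(N log N).
        a[lo:hi] = _heapsort(a[lo:hi])
        return
    p = _partition(a, lo, hi)
    _sort(a, lo, p, depth - 1)
    _sort(a, p + 1, hi, depth - 1)

def _partition(a, lo, hi):
    # Lomuto partition with the last element as pivot (real implementations
    # choose pivots more carefully, e.g. median-of-three).
    pivot = a[hi - 1]
    i = lo
    for j in range(lo, hi - 1):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi - 1] = a[hi - 1], a[i]
    return i

def _heapsort(xs):
    heapq.heapify(xs)
    return [heapq.heappop(xs) for _ in range(len(xs))]

data = [5, 2, 9, 1, 5, 6]
introsort(data)
print(data)   # [1, 2, 5, 5, 6, 9]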
What @Mikael Persson said regarding why the depth limit is 2*log(2,N) is partly correct. It is not just a good heuristic, or a reasonable limit.
In fact, as your second question suggests, there is an important mathematical reason for this: in tilde notation (search for tilde notation), quicksort makes on average ~2*N*ln(N) comparisons, which is about 1.39*N*log(2,N). In big-O notation, this is equivalent to O(N*log(2,N)).
That is why introsort switches to heapsort (which has asymptotic O(N*log(2,N)) complexity) when the depth of the recursion exceeds 2*log(2,N). You can think of this as something that does not usually happen and most probably means that something went wrong with the pivot picking, so quicksort alone would lead to O(N^2) complexity.
You can find a short mathematical proof of the average number of compares quicksort does here (slide 21).
As we know, trees are recursive data structures, so we use recursion when writing tree procedures, such as the delete method of a BST, etc.
The advantage of recursion is that our procedures become very small (for example, the code for an inorder traversal is only 4 or 5 lines), whereas a non-recursive procedure would be lengthy, though not as hard to understand as the recursive one. That is why I hate recursion and prefer to write non-recursive procedures, which I have done for binary search trees and AVL trees.
Now please elaborate: is preferring non-recursive procedures over recursive ones a good or a bad thing?
Recursion is a tool like any other. You don't have to use every tool that's available but you should at least understand it.
Recursion makes a certain class of problems very easy and elegant to solve and your "hatred" of it is irrational at best. It's just a different way of doing things.
The "canonical" recursive function (factorial) is shown below in both recursive and iterative forms and, in my opinion, the recursive form more clearly reflects the mathematical definition of f(1) = 1, f(n) = n*f(n-1) for n>1.
Iterative:

def fact(n):
    r = n
    while n > 1:
        n = n - 1
        r = r * n
    return r

Recursive:

def fact(n):
    if n == 1:
        return 1
    return n * fact(n-1)
Pretty much the only place I would prefer an iterative solution to a recursive one (for solutions that are really well suited for recursion) is when the growth in stack size may lead to problems (the above factorial function may well be one of those since stack growth depends on n but it may also be optimised to an iterative solution by the compiler). But this stack overflow rarely happens since:
Most stacks can be configured where necessary.
Recursion (especially tail-end recursion where the recursive call is the last thing that happens in the function) can usually be optimised to an iterative solution by an intelligent compiler.
Most algorithms I use in recursive situations (such as balanced trees and so on, as you mention) tend to be O(log N), and stack use doesn't grow that fast with increased data. For example, you can process a 16-way tree storing two billion entries with only eight levels of stack (16^8 =~ 4.3 billion).
You should read about Tail Recursion. In general, if the compiler manages to apply tail-call optimisation to a procedure, it is quite efficient; if not, then not so much.
Also, an important issue is the maximum recursion depth of your compiler - usually it's limited by the stack size. The downside here is that there's no graceful way to handle a stack overflow.
Recursion is elegant, but prone to stack overflow. Use tail recursion whenever possible to give the compiler the chance to convert it to an iterative solution.
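For example, the factorial above can be rewritten so the recursive call is in tail position by threading an accumulator, which is exactly the shape a tail-call-optimising compiler can turn into a loop (note that CPython itself does not perform this optimisation):

def fact_tail(n, acc=1):
    # Nothing is left to do after the recursive call: it is in tail position.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_loop(n):
    # What a tail-call-optimising compiler effectively turns fact_tail into.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

assert fact_tail(5) == fact_loop(5) == 120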
It's definitely your decision which tool you want to use - but keep in mind that most algorithms dealing with tree-like data structures are usually implemented recursively. As it's common practice, your code will be easier to read and less surprising for others.
Recursion is a tool. Sometimes using the "tool of recursion" makes the code easier to read, although not necessarily easier to comprehend.
In general, recursive solutions tend to be good candidates where a "divide and conquer" approach to solving a specific problem is natural.
Typically, recursion is a good fit where you can look at a problem and say "aha, if I knew the answer for a simpler variant of this problem, I could use that solution to generate the answer I want" and "the simplest possible problem is P and its solution is S". Then the code to solve the problem as a whole boils down to looking at the input data, simplifying it, recursively generating a (simpler) answer, and then going from the simpler answer to the answer as a whole.
If we consider the problem of counting the levels of a tree, the answer is that the height of the tree is 1 more than the height of the "tallest/deepest" of the children and the height of a leaf is 1. Something like the following code. The problem can be solved iteratively, but you'd, essentially, re-implement the call stack in your own data structures.
def tree_height(tree):
    if tree.is_leaf():
        return 1
    childmax = 0
    for child in tree.children():
        childmax = max(childmax, tree_height(child))
    return childmax + 1
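For comparison, here is what the iterative version looks like once you manage the stack yourself (a sketch assuming the same hypothetical is_leaf()/children() interface as above):

def tree_height_iterative(tree):
    # Re-implements the call stack explicitly: each entry is a (node, depth) pair.
    best = 0
    stack = [(tree, 1)]
    while stack:
        node, depth = stack.pop()
        if node.is_leaf():
            best = max(best, depth)
        else:
            for child in node.children():
                stack.append((child, depth + 1))
    return best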
It's also worth considering that tail-call optimisation can make some recursive functions run in constant stack space.