Why is the time complexity of the following code O(n^2)? - c++

void level_order_recursive(struct node *t, int h)   // 'h' is the height of my binary tree
{                                                   // 't' is the address of the root node
    for (int i = 0; i <= h; i++)
    {
        print_level(t, i);
    }
}
Every time print_level() is called, I think the recursive function is called 2^i times. So 2^0 + 2^1 + 2^2 + ... + 2^h should give a time complexity of O(2^n). Where am I going wrong?
void print_level(struct node *t, int i)
{
    if (i == 0)
        cout << t->data << " ";
    else
    {
        if (t->left != NULL)
            print_level(t->left, i - 1);    // recursive call
        if (t->right != NULL)
            print_level(t->right, i - 1);   // recursive call
    }
}

You are confusing h and n. h is the height of the tree; n is apparently the number of elements in the tree. So print_level takes worst case O(2^i), but 2^i is also bounded by n, the total number of nodes.
The worst case happens when you have a degenerate tree, where each node has only one child. In that case you have n nodes, but the height of the tree is also h = n. Each call to print_level then takes i steps, and summing i from 1 to h = n gives O(n^2).
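To see why the exponential-looking sum from the question is not exponential in n, consider a perfect binary tree:

2^0 + 2^1 + ... + 2^h = 2^(h+1) - 1 = n

Summing the level sizes just counts every node once, so the sum is O(n), not O(2^n): the exponent is h, which is about log2(n), not n.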

You always start at the root of the tree t and increase the level (i) by one each time until you reach the height of the tree h.
You said it is a binary tree, but you did not mention any property, e.g. balanced or so. So I assume it can be an unbalanced binary tree, and thus the height of the tree in the worst case can be h = n, where n is the number of nodes (that is a completely unbalanced tree that actually looks like a list).
So this means that level_order_recursive loops n times. I.e. the worst case is that the tree has n levels.
print_level receives the root node and the level to print, and it calls itself recursively until it reaches that level and prints it out. I.e. it loops i times (each recursive call decreases i by one).
So you have 1 + 2 + 3 + ... + h iterations. And since h = n, you get 1 + 2 + 3 + ... + n steps. This is (n * (n+1))/2 (Gaussian sum formula), which is in O(n^2).
If you can ensure that the tree is balanced, then you would improve the worst case, because the height would be h = ld(n), where ld denotes the binary logarithm.

Based on this or that (pages 3 and 4), the binary search algorithm, which resembles our case, has a time complexity of T(n) = T(n/2) + c.
Except that here both the left and right subtrees are visited, hence the 2T(n/2) in the formula below, since this is a traversal algorithm rather than a search one.
Here, I will comply with the question and use 'h' instead of 'n'.
Using recurrence relation, you get the following proof:
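One plausible reconstruction of that derivation, unrolling the recurrence (with n the number of nodes in the subtree being traversed):

T(n) = 2T(n/2) + c
     = 4T(n/4) + 3c
     = ...
     = 2^k * T(n/2^k) + (2^k - 1) * c

With n/2^k = 1, i.e. k = log2(n), this gives T(n) = n*T(1) + (n - 1)*c = O(n) for one pass over the tree. level_order_recursive repeats such a pass once per level, which yields O(n * h): O(n log n) for a balanced tree and O(n^2) in the degenerate worst case.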

In the worst case the time complexity will be O(n^2), but it cannot be 2^n: the per-level costs add up to O(n) + O(n-1) + O(n-2) + ... + O(1), which is at worst O(n^2).

what is the time complexity for this recursive function?

void solve(string op, int n, int zeros, int ones)
{
    if (n == 0) {
        cout << op << " ";
        return;
    }
    string op1 = op;
    op1.push_back('1');
    solve(op1, n - 1, zeros, ones + 1);
    if (ones > zeros) {
        string op2 = op;
        op2.push_back('0');
        solve(op2, n - 1, zeros + 1, ones);
        return;
    }
}
What is the time complexity of the solve function? Is it O(2^N)? Can someone please explain how to approach finding complexities for recursive functions?
link to question: https://www.geeksforgeeks.org/print-n-bit-binary-numbers-1s-0s-prefixes/
So, we want to estimate the worst case here. The worst case would be if the condition (ones > zeros) always evaluated to true. Is that possible? Yes, if ones - zeros >= n. Since I don't know the context of your task, I may assume it.
Let T(n) be the complexity of your function. Your function calls itself with (n-1) two times. That means,
T(n) = T(n-1) + T(n-1) + c
where c is a constant for everything else you do, like appending '1' or '0', evaluating the condition etc., which does not depend on n.
So,
T(n) = 2T(n-1) + c =
     = 2(2T(n-2) + c) + c = 4T(n-2) + 3c =
     = 4(2T(n-3) + c) + 3c = 8T(n-3) + 7c =
     = ...
     = 2^k * T(n-k) + (2^k - 1) * c
So when (n - k) == 0, i.e. k == n, we are done, and T(0) = z for some base cost z. What z is is not obvious: in the current implementation we output a string, which is O(n); if we would just count strings, it would be O(1). So if the printing is essential, the final complexity is O(n * 2^n); if not, then O(2^n).
That means,
T(n) = z * 2^n + (2^n - 1) * c = O(z * 2^n)
[UPDATE1]
After realizing the real problem, which was not clear from the code snippet but became clear after reading the info at the provided link, I would say that the complexity is different.
The calculation above is still true under the assumptions made.
Now, to this problem. We want to find all sequences of 1 and 0 of length n where each prefix contains 1's not less than the number of 0's in this prefix.
The algorithm provides the solution to this problem. As you can notice, at each recursive call the algorithm adds either 0 or 1 to the resulting sequence. That means the number of recursive calls is exactly the number of symbols in all the resulting strings.
We know that the length of each resulting string is n. So, we need to figure out the number of strings. Let's take a look what your program finds for different n's:
n | number of strings
-----------------------
1 | 1
2 | 2
3 | 3
4 | 6
5 | 10
6 | 20
7 | 35
8 | 70
So, if you look carefully, you will recognize that these are the binomial coefficients C(n, n/2). For large n this central binomial coefficient can be estimated as ~2^n / sqrt(pi * n/2), so the number of strings is O(2^n / sqrt(n)). If we also consider the printing at the end of the recursion, we need to multiply by n, the length of each outputted string, which gives O(n * 2^n / sqrt(n)) = O(sqrt(n) * 2^n); loosely speaking, we end up with O(2^n) times a polynomial factor.
To be clean, we need to prove that our assumption is correct. Hopefully you can do it by induction or other methods.
[UPDATE2]
Your implementation suffers from copying the string at each call. This slows down the execution by a factor of n. You can avoid it by passing a reference to the string and removing the added character after each recursive call, like this:
void solve(string &op, int n, int zeros, int ones)
{
    if (n == 0) {
        cout << op << " ";
        return;
    }
    op.push_back('1');
    solve(op, n - 1, zeros, ones + 1);
    op.pop_back();
    if (ones > zeros) {
        op.push_back('0');
        solve(op, n - 1, zeros + 1, ones);
        op.pop_back();
        return;
    }
}
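For completeness, a minimal driver for the modified signature (assuming the solve above is in scope; the initial call starts from an empty prefix with both counters at zero, matching the linked problem):

#include <iostream>
#include <string>
using namespace std;

// void solve(string &op, int n, int zeros, int ones) as defined above

int main()
{
    string op;           // empty prefix to start from
    solve(op, 3, 0, 0);  // prints: 111 110 101
    cout << endl;
    return 0;
}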
If I understand you correctly, the time complexity of the function is O(N * 2^N). I used a recursion tree to analyze it.

Is there a way to reduce the time complexity of the program?

Assume there are n prisoners standing in a circle. The first prisoner has a knife with which he kills the second prisoner and passes on the knife to the third person who kills the fourth prisoner and passes the knife to the fifth prisoner.
This cycle is repeated till only one prisoner is left. Note that the prisoners are standing in a circle, thus the first prisoner is next to the nth prisoner. Return the index of the last standing prisoner.
I tried implementing the solution using a circular linked list. Here's my code
The structure of the circular linked list is:
struct Node
{
    int Data;
    Node *Next;
};
Node *Head = NULL;
Here are my deleteByAddress() and main() functions:
inline void deleteByAddress(Node *delNode)
{
    Node *n = Head;
    if (Head == delNode)
    {
        while (n->Next != Head)
        {
            n = n->Next;
        }
        n->Next = Head->Next;
        delete Head;   // was free(Head); use delete consistently for nodes created with new
        Head = n->Next;
        return;
    }
    while (n->Next != delNode)
    {
        n = n->Next;
    }
    n->Next = delNode->Next;
    delete delNode;
}
int main(void)
{
    for (int i = 1; i <= 100; i++)
        insertAtEnd(i);
    Node *n = Head;
    while (Head->Next != Head)
    {
        deleteByAddress(n->Next);
        n = n->Next;
    }
    cout << Head->Data;
    return 0;
}
The above code works perfectly and produces the desired output for n = 100, which is 73.
Is there any way we can reduce the time complexity or use a more efficient data structure to implement the same question.
This is known as the Josephus problem. As the Wikipedia page shows and others have noted, there is a formula for when k is 2. The general recurrence is
// zero-based Josephus
function g(n, k){
    if (n == 1)
        return 0
    return (g(n - 1, k) + k) % n
}

console.log(g(100, 2) + 1)
This can easily be solved with O(1) complexity using the following:
last = (num - pow(2, int(log(num)/log(2)))) * 2 + 1
For example, for num = 100:
last = (100 - pow(2, int(log(100)/log(2)))) * 2 + 1 = 73
And if you have a log2() function, you may replace the slightly ugly log(num)/log(2), which simply takes the logarithm with base 2.
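In C++, the same closed-form computation looks roughly like this (a sketch; std::log2 comes from <cmath>, the variable names are mine):

#include <cmath>
#include <iostream>

int main()
{
    int num = 100;
    // highest power of two <= num
    int p = 1 << static_cast<int>(std::log2(num));
    int last = (num - p) * 2 + 1;     // 1-based survivor index, per the formula above
    std::cout << last << std::endl;   // prints 73
    return 0;
}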
Use one loop. At every iteration, you can grab the current node's next, set current's next to the next one's next, and then delete the next one.
This assumes all the data is set up beforehand and ignores the rewriting of the next pointer when you hit the bounds.
The trick to reducing time complexity is to come up with a cleverer algorithm than brute-force simulation.
Here, as so often, the key is to solve the math. The first loop, for example, kills everybody with i % 2 = 1 (assuming 0-based indexing), the second everybody with i % 4 = (n+1) % 2 * 2 or so, etc. I'd be looking for a closed form to directly compute the survivor. It will likely boil down to a few bit manipulations, yielding an O(log n) algorithm that is almost instant in practice, because it runs completely in CPU registers with not even L1 cache accesses.
For such simple processing, the list manipulation and memory allocation are going to dominate the computation. You could instead use a single array where you keep the index of the first alive prisoner and each element holds the index of the next alive one.
That said, you could indeed search for a formula that avoids doing the loops... for example, if the number of prisoners is even, then after the first "loop" you end up with half of the prisoners and the knife back in the hands of the first one. This means that the index of the surviving prisoner when n is even is
f(n) = 2 * f(n / 2) # when n is even
In case n is odd, things are a bit more complex: after the first loop you will end up with (n + 1)/2 prisoners, but with the knife in the hands of the last one, so some modular arithmetic is needed to "adjust" the result of the recursive call f((n + 1)/2); see the sketch below.
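One way to fill in that adjustment (my derivation, 0-based indexing: relabelling the survivors of the first round gives f(2m) = 2*f(m) for the even case and f(2m+1) = 2*f(m) + 2 for the odd case, which matches the simulation for small n):

#include <iostream>

// 0-based index of the surviving prisoner for step size k = 2
int f(int n)
{
    if (n == 1)
        return 0;
    if (n % 2 == 0)
        return 2 * f(n / 2);      // even case
    return 2 * f(n / 2) + 2;      // odd case; n/2 == (n-1)/2 for odd n
}

int main()
{
    std::cout << f(100) + 1 << std::endl;   // prints 73 (1-based)
    return 0;
}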
The method to reduce time complexity is, as in most cases where a challenge fails for out-of-time reasons, not to simulate but to use math instead. With luck it turns into a one-liner.
The algorithm can be sped up very much if you change to the following:
Note that for a total number of prisoners which is a power of two, index 0 will always survive.
For other cases:
determine the highest power of two which is lower than or equal to the number of prisoners
determine R, the remainder when reducing the number of prisoners by that power of two
the prisoner who survives in the end will be the one who gets the knife after that number (R) of prisoners has been killed
Let's try to find out which prisoner that is.
Case of 5 prisoners (1 more than 2^2, R=1):

          01234
Deaths 1:  x x
Deaths 2: x   x
last:       O

Case of 6 (R=2):

          012345
Deaths 1:  x x x
Deaths 2: x x       (index 4 kills index 0 after index 2 was killed by index 0)
last:         O

Case of 7 (R=3):

          0123456
Deaths 1: xx x x    (index 6 kills index 0 after index 5 was killed by index 4)
Deaths 2:   x x     (index 6 kills index 2 after index 4 was killed by index 2)
last:           O
Case of 8 is the next power of two, so index 0 survives.
In the end, the final survivor is always the one at index 2*R.
Hence, instead of simulating, you just need to determine R.
That is possible in a time complexity of, at worst, the order of the logarithm to base 2 of the total number.

how to find a recurrence relation from algorithm

I'm trying to understand recurrence relations. I've found a way to determine the maximum element in an array of integers through recursion. Below is the function. The first time it is called, n is the size of the array.
int ArrayMax(int array[], int n) {
    if (n == 1)
        return array[0];
    int result = ArrayMax(array, n - 1);
    if (array[n-1] > result)
        return array[n-1];
    else
        return result;
}
Now I want to understand the recurrence relation and how to get to big-O notation from there. I know that T(n) = aT(n/b) + f(n), but I don't see how to get what a and b should be.
a is "how many recursive calls there are", and b is "how many pieces you split the data into", intuitively. Note that the parameter inside the recursive call doesn't have to be n divided by something, in general it's any function of n that describes how the magnitude of your data has been changed.
For example binary search does one recursive call at each layer, splits the data into 2, and does constant work at each layer, so it has T(n) = T(n/2) + c. Merge sort splits the data in two each time (the split taking work proportional to n) and recurses on both subarrays - so you get T(n) = 2T(n/2) + cn.
In your example, you'd have T(n) = T(n-1) + c, as you're making one recursive call and "splitting the data" by reducing its size by 1 each time.
To get the big O notation from this, you just make substitutions or expand. With your example it's easy:
T(n) = T(n-1) + c = T(n-2) + 2c = T(n-3) + 3c = ... = T(0) + nc
If you assume T(0) = c0, some "base constant", then you get T(n) = nc + c0, which means the work done is in O(n).
The binary search example is similar, but you've got to make a substitution - try letting n = 2^m, and see where you can get with it. Finally, deriving the big O notation of eg. T(n) = T(sqrt(n)) + c is a really cool exercise.
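For instance, with the binary search recurrence T(n) = T(n/2) + c, let n = 2^m and define S(m) = T(2^m). Then

S(m) = S(m-1) + c = S(m-2) + 2c = ... = S(0) + mc

so T(n) = O(m) = O(log n), the familiar binary search bound.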
Edit: There are other ways to solve recurrence relations - the Master Theorem is a standard method. But the proof isn't particularly nice and the above method works for every recurrence I've ever applied it to. And... well, it's just more fun than plugging values into a formula.
In your case the recurrence relation is:
T(n) = T(n-1) + constant
And the Master theorem says:
T(n) = aT(n/b) + f(n), where a >= 1 and b > 1
Here the Master theorem cannot be applied, because it requires b > 1, and in your case b = 1.
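The substitution method from the other answer still works, though:

T(n) = T(n-1) + c = T(n-2) + 2c = ... = T(0) + n*c

which is O(n).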

What's time complexity of this algorithm for finding all Path Sum?

Path Sum Given a binary tree and a sum, find all root-to-leaf paths where each path's sum equals the given sum.
For example: sum = 11.
    5
   / \
  4   8
 /   / \
2  -2   1
The answer is :
[
[5, 4, 2],
[5, 8, -2]
]
Personally I think the time complexity = O(2^n), where n is the number of nodes of the given binary tree.
Thank you Vikram Bhat and David Grayson; the tight time complexity = O(n log n), where n is the number of nodes in the given binary tree.
The algorithm checks each node once, which causes O(n).
"vector<int> one_result(subList);" will copy the entire path from subList to one_result each time, which causes O(log n), because the height is O(log n).
So finally, the time complexity = O(n * log n) = O(n log n).
The idea of this solution is DFS[C++].
/**
 * Definition for binary tree
 * struct TreeNode {
 *     int val;
 *     TreeNode *left;
 *     TreeNode *right;
 *     TreeNode(int x) : val(x), left(NULL), right(NULL) {}
 * };
 */
#include <vector>
using namespace std;

class Solution {
public:
    vector<vector<int> > pathSum(TreeNode *root, int sum) {
        vector<vector<int>> list;
        // Input validation.
        if (root == NULL) return list;
        vector<int> subList;
        int tmp_sum = 0;
        helper(root, sum, tmp_sum, list, subList);
        return list;
    }

    void helper(TreeNode *root, int sum, int tmp_sum,
                vector<vector<int>> &list, vector<int> &subList) {
        // Base case.
        if (root == NULL) return;
        if (root->left == NULL && root->right == NULL) {
            // Have a try.
            tmp_sum += root->val;
            subList.push_back(root->val);
            if (tmp_sum == sum) {
                vector<int> one_result(subList);
                list.push_back(one_result);
            }
            // Roll back.
            tmp_sum -= root->val;
            subList.pop_back();
            return;
        }
        // Have a try.
        tmp_sum += root->val;
        subList.push_back(root->val);
        // Do recursion.
        helper(root->left, sum, tmp_sum, list, subList);
        helper(root->right, sum, tmp_sum, list, subList);
        // Roll back.
        tmp_sum -= root->val;
        subList.pop_back();
    }
};
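For reference, a possible driver for the example tree from the question (the TreeNode definition is spelled out from the header comment; it must precede the Solution class when assembling a single file, and cleanup of the nodes is omitted for brevity):

#include <iostream>

struct TreeNode {
    int val;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int x) : val(x), left(NULL), right(NULL) {}
};

int main()
{
    // Build:     5
    //           / \
    //          4   8
    //         /   / \
    //        2  -2   1
    TreeNode *root = new TreeNode(5);
    root->left = new TreeNode(4);
    root->right = new TreeNode(8);
    root->left->left = new TreeNode(2);
    root->right->left = new TreeNode(-2);
    root->right->right = new TreeNode(1);

    Solution s;
    vector<vector<int> > paths = s.pathSum(root, 11);   // expected: [5 4 2] and [5 8 -2]
    for (size_t i = 0; i < paths.size(); ++i) {
        for (size_t j = 0; j < paths[i].size(); ++j)
            cout << paths[i][j] << ' ';
        cout << endl;
    }
    return 0;
}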
Though it seems the time complexity is O(N), if you need to print all paths then it is O(N*logN). Suppose you have a complete binary tree; then there are N/2 paths in total, and each path has logN nodes, so the total is O(N*logN) in the worst case.
Your algorithm looks correct, and the complexity should be O(n) because your helper function will run once for each node, and n is the number of nodes.
Update: Actually, it would be O(N*log(N)), because each time the helper function runs it might copy a path consisting of O(log(N)) nodes into the result list, and it will run O(N) times.
TIME COMPLEXITY
The time complexity of the algorithm is O(N^2), where ‘N’ is the total number of nodes in the tree. This is due to the fact that we traverse each node once (which will take O(N)), and for every leaf node we might have to store its path which will take O(N).
We can calculate a tighter time complexity of O(NlogN) from the space complexity discussion below.
SPACE COMPLEXITY
If we ignore the space required for all paths list, the space complexity of the above algorithm will be O(N) in the worst case. This space will be used to store the recursion stack. The worst-case will happen when the given tree is a linked list (i.e., every node has only one child).
How can we estimate the space used for the all paths list? Take the example of the following balanced tree:
1
/ \
2 3
/ \ / \
4 5 6 7
Here we have seven nodes (i.e., N = 7). Since, for binary trees, there exists only one path to reach any leaf node, we can easily say that total root-to-leaf paths in a binary tree can’t be more than the number of leaves. As we know that there can’t be more than N/2 leaves in a binary tree, therefore the maximum number of elements in all paths list will be O(N/2) = O(N). Now, each of these paths can have many nodes in them. For a balanced binary tree (like above), each leaf node will be at maximum depth. As we know that the depth (or height) of a balanced binary tree is O(logN) we can say that, at the most, each path can have logN nodes in it. This means that the total size of the all paths list will be O(N*logN). If the tree is not balanced, we will still have the same worst-case space complexity.
From the above discussion, we can conclude that the overall space complexity of our algorithm is O(N*logN).
Also from the above discussion, since for each leaf node, in the worst case, we have to copy log(N) nodes to store its path, therefore the time complexity of our algorithm will also be O(N*logN).
The worst case time complexity is not O(nlogn), but O(n^2).
to visit every node, we need O(n) time
to generate all paths, we have to add the nodes to the path for every valid path.
So the time taken is the sum of len(path) over all stored paths. To estimate an upper bound of the sum: the number of paths is bounded by n, and the length of each path is also bounded by n, so O(n^2) is an upper bound. Both worst cases can be reached at the same time if the top half of the tree is a linear chain and the bottom half is a complete binary tree, like this:
   1
   1
   1
   1
   1
  1 1
 1 1 1 1
The number of paths is n/4, and the length of each path is n/2 + log(n/2) ~ n/2, so the total work is about (n/4) * (n/2), which is on the order of n^2.

How Recursion Works Inside a For Loop

I am new to recursion and trying to understand this code snippet. I'm studying for an exam, and this is a "reviewer" I found from Stanford's CIS Education Library (from Binary Trees by Nick Parlante).
I understand the concept, but when we're recursing INSIDE THE LOOP, it all blows! Please help me. Thank you.
countTrees() Solution (C/C++)
/*
 For the key values 1...numKeys, how many structurally unique
 binary search trees are possible that store those keys.
 Strategy: consider that each value could be the root.
 Recursively find the size of the left and right subtrees.
*/
int countTrees(int numKeys) {
    if (numKeys <= 1) {
        return(1);
    }
    // there will be one value at the root, with whatever remains
    // on the left and right each forming their own subtrees.
    // Iterate through all the values that could be the root...
    int sum = 0;
    int left, right, root;
    for (root = 1; root <= numKeys; root++) {
        left = countTrees(root - 1);
        right = countTrees(numKeys - root);
        // number of possible trees with this root == left*right
        sum += left * right;
    }
    return(sum);
}
Imagine the loop being put "on pause" while you go into the function call.
Just because the function happens to be a recursive call, it works the same as any function you call within a loop.
The new recursive call starts its for loop and again, pauses while calling the functions again, and so on.
For recursion, it's helpful to picture the call stack structure in your mind.
If a recursion sits inside a loop, the structure resembles (almost) an N-ary tree.
The loop controls horizontally how many branches are generated, while the recursion decides the height of the tree.
The tree is generated along one specific branch until it reaches a leaf (base condition), then it expands horizontally to obtain other leaves, returns to the previous height, and repeats.
I find this perspective generally a good way of thinking.
Look at it this way: there are 3 possible cases for the initial call:
numKeys = 0
numKeys = 1
numKeys > 1
The 0 and 1 cases are simple: the function simply returns 1 and you're done. For numKeys = 2, you end up with:
sum = 0
loop(root = 1 -> 2)
    root = 1:
        left  = countTrees(1 - 1) -> countTrees(0) -> 1
        right = countTrees(2 - 1) -> countTrees(1) -> 1
        sum = sum + 1*1 = 0 + 1 = 1
    root = 2:
        left  = countTrees(2 - 1) -> countTrees(1) -> 1
        right = countTrees(2 - 2) -> countTrees(0) -> 1
        sum = sum + 1*1 = 1 + 1 = 2
output: 2
for numKeys = 3:
sum = 0
loop(root = 1 -> 3):
    root = 1:
        left  = countTrees(1 - 1) -> countTrees(0) -> 1
        right = countTrees(3 - 1) -> countTrees(2) -> 2
        sum = sum + 1*2 = 0 + 2 = 2
    root = 2:
        left  = countTrees(2 - 1) -> countTrees(1) -> 1
        right = countTrees(3 - 2) -> countTrees(1) -> 1
        sum = sum + 1*1 = 2 + 1 = 3
    root = 3:
        left  = countTrees(3 - 1) -> countTrees(2) -> 2
        right = countTrees(3 - 3) -> countTrees(0) -> 1
        sum = sum + 2*1 = 3 + 2 = 5
output: 5
and so on. This function's runtime grows very quickly: a call with n keys makes 2n recursive calls of its own, so the total number of calls (like the result itself, the Catalan numbers) grows exponentially rather than polynomially.
Just remember that all the local variables, such as numKeys, sum, left, right, root, live in stack memory. When you go to the n-th depth of the recursive function, there will be n copies of these local variables. When one depth finishes executing, its copy of the variables is popped off the stack.
In this way, you can see that the next-level depth does NOT affect the current-level depth's local variables (UNLESS you are using references, but we are NOT in this particular problem).
For this particular problem, the time complexity deserves careful attention. Here are my solutions:
/* Q: For the key values 1...n, how many structurally unique binary search
trees (BST) are possible that store those keys.
Strategy: consider that each value could be the root. Recursively
find the size of the left and right subtrees.
http://stackoverflow.com/questions/4795527/
how-recursion-works-inside-a-for-loop */
/* A: It seems that it's the Catalan numbers:
http://en.wikipedia.org/wiki/Catalan_number */
#include <iostream>
#include <vector>
using namespace std;

// Time Complexity: ~O(2^n)
int CountBST(int n)
{
    if (n <= 1)
        return 1;
    int c = 0;
    for (int i = 0; i < n; ++i)
    {
        int lc = CountBST(i);
        int rc = CountBST(n-1-i);
        c += lc*rc;
    }
    return c;
}

// Time Complexity: O(n^2)
int CountBST_DP(int n)
{
    vector<int> v(n+1, 0);
    v[0] = 1;
    for (int k = 1; k <= n; ++k)
    {
        for (int i = 0; i < k; ++i)
            v[k] += v[i]*v[k-1-i];
    }
    return v[n];
}
/* Catalan numbers:
          C(2n, n)
   f(n) = --------
           n + 1

            2*(2n+1)
   f(n+1) = -------- * f(n)
              n+2

   Time Complexity: O(n)
   Space Complexity: O(n) - but can be easily reduced to O(1). */
int CountBST_Math(int n)
{
    vector<int> v(n+1, 0);
    v[0] = 1;
    for (int k = 0; k < n; ++k)
        v[k+1] = v[k]*2*(2*k+1)/(k+2);
    return v[n];
}

int main()
{
    for (int n = 1; n <= 10; ++n)
        cout << CountBST(n) << '\t' << CountBST_DP(n)
             << '\t' << CountBST_Math(n) << endl;
    return 0;
}
/* Output:
1 1 1
2 2 2
5 5 5
14 14 14
42 42 42
132 132 132
429 429 429
1430 1430 1430
4862 4862 4862
16796 16796 16796
*/
You can think of it from the base case, working upward.
So, for the base case you have 1 (or fewer) nodes. There is only 1 structurally unique tree possible with 1 node: the node itself. So, if numKeys is less than or equal to 1, just return 1.
Now suppose you have more than 1 key. Well, then one of those keys is the root, some items are in the left branch and some items are in the right branch.
How big are those left and right branches? Well it depends on what is the root element. Since you need to consider the total amount of possible trees, we have to consider all configurations (all possible root values) -- so we iterate over all possible values.
For each iteration i, we know that i is at the root, i - 1 nodes are on the left branch and numKeys - i nodes are on the right branch. But, of course, we already have a function that counts the total number of tree configurations given the number of nodes: it's the function we're writing. So, recursively call the function to get the number of possible tree configurations of the left and right subtrees. The total number of trees possible with i at the root is then the product of those two numbers (for each configuration of the left subtree, every possible right subtree can occur).
After you sum it all up, you're done.
So, if you kind of lay it out, there's nothing special about calling the function recursively from within a loop: it's just a tool that we need for our algorithm. I would also recommend (as Grammin did) running this through a debugger and seeing what is going on at each step.
Each call has its own variable space, as one would expect. The complexity comes from the fact that the execution of the function is "interrupted" in order to execute -again- the same function.
This code:
for (root=1; root<=numKeys; root++) {
    left = countTrees(root - 1);
    right = countTrees(numKeys - root);
    // number of possible trees with this root == left*right
    sum += left*right;
}
Could be rewritten this way in plain C:
root = 1;
Loop:
    if ( !( root <= numKeys ) ) {
        goto EndLoop;
    }
    left = countTrees( root - 1 );
    right = countTrees( numKeys - root );
    sum += left * right;
    ++root;
    goto Loop;
EndLoop:
    // more things...
It is actually translated by the compiler to something like that, but in assembly. As you can see, the loop is controlled by a pair of variables, numKeys and root, and their values are not modified by the execution of another instance of the same procedure. When the callee returns, the caller resumes execution with the same values for all variables that it had before the recursive call.
IMO, the key element here is to understand function call frames, the call stack, and how they work together.
In your example, you have a bunch of local variables which are initialised but not finalised in the first call. It's important to observe those local variables to understand the whole idea. At each call, the local variables are updated and finally returned in a backwards manner (most likely stored in a register before each function call frame is popped off the stack), until everything is added to the initial call's sum variable.
The important distinction here is where to return. If you need an accumulated sum value, as in your example, you cannot return from inside the loop, which would cause an early return/exit. However, if you depend on a value being in a certain state, then you can check whether this state is hit inside the for loop and return immediately, without going all the way up.