I need to write a program that counts the number of nodes at a given level of a binary
tree.
I mean something like < numberofnodes(int level){} >
I tried writing it without any success because I don't know how to get down to a certain level
and then count the nodes there.
Do it with a recursive function which only descends to a certain level.
Well, there are many ways you can do this. The simplest, given your requirement, is to keep a one-dimensional array that tracks the number of nodes you add or remove at each level.
However, if you are given just the binary tree, you WILL have to traverse down to that level and count the nodes; I do not see any alternative.
To get to a certain level, you typically keep a variable, say 'current_depth', that tracks the level you are currently on. Once you reach the level of interest, increment your count as each node on that level is visited (an in-order traversal would suffice). Hope this helps.
I'm assuming that your binary tree is not necessarily complete (i.e., not every node has two or zero children or this becomes trivial). I'm also assuming that you are supposed to count just the nodes at a certain level, not at that level or deeper.
There are many ways to do what you're asked, but you can think of this as a graph search problem - you are given a starting node (the root of the tree), a way to traverse edges (the child links), and a criteria - a certain distance from the root.
At this point you probably learned graph search algorithms - which algorithm sounds like a natural fit for your task?
In general terms:
Recursion.
In each step of the recursion, you need to know which level you are on, and therefore how much farther down the tree you need to go from where you are now.
Recursion parts:
What are your base cases? Under what conditions do you say "Okay, time to stop the recursion"?
How can you count something in a recursion without any global count?
I think the easiest is to simply follow the recursive nature of the tree with an accumulator to track the current level (or maybe the number of levels remaining to be descended).
The non-recursive branch of that function is when you hit the level in question. At that point you simply return 1 (because you found one node at that level).
The recursive branch simply sums the number of nodes at that level returned by the left and right recursive invocations.
Here is the pseudo-code; it assumes the root is at level 0:
int count(x, currentLevel, desiredLevel)
    if currentLevel == desiredLevel
        return 1
    left = 0
    right = 0
    if x.left != null
        left = count(x.left, currentLevel + 1, desiredLevel)
    if x.right != null
        right = count(x.right, currentLevel + 1, desiredLevel)
    return left + right
So to get the number of nodes at level 3 you call
count(root, 0, 3)
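A runnable version of that pseudo-code might look like the following (a minimal Python sketch; the Node class and the example tree are invented for illustration):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def count_at_level(node, current_level, desired_level):
    """Count the nodes at desired_level, where the root is at level 0."""
    if current_level == desired_level:
        return 1
    left = count_at_level(node.left, current_level + 1, desired_level) if node.left else 0
    right = count_at_level(node.right, current_level + 1, desired_level) if node.right else 0
    return left + right

# A small example tree:   1        level 0
#                        / \
#                       2   3      level 1
#                      / \   \
#                     4   5   6    level 2
root = Node(1, Node(2, Node(4), Node(5)), Node(3, None, Node(6)))
print(count_at_level(root, 0, 2))  # -> 3
```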
In the DECREASE-KEY operation of Fibonacci Heap, whenever a node is cut from its parent and added to the root list, its mark attribute is set to FALSE. However, in the EXTRACT-MIN operation, the children of the min-node are added to the root list but their mark attributes aren't set to FALSE. Why is there such inconsistency?
Moreover, in the linking operation where a node is made the child of another node, the mark attribute of the new child is set to FALSE. The EXTRACT-MIN operation performs this linking operation multiple times. But in the amortized analysis of EXTRACT-MIN operation described in the CLRS book, the authors claim that the number of marked nodes doesn't change in EXTRACT-MIN operation. They use m(H) to denote the number of marked nodes both before and after EXTRACT-MIN operation. I am quoting the exact line from the book:
The potential before extracting the minimum node is t(H)+2m(H), and
the potential afterward is at most (D(n)+1)+2m(H).
Here D(n) is the maximum degree of any node in an n-node Fibonacci Heap, t(H) is the number of trees in the Fibonacci Heap and m(H) is the number of marked nodes in the Fibonacci Heap.
Isn't this calculation wrong?
Let's take a step back - why do we need mark bits in a Fibonacci heap in the first place?
The idea behind a Fibonacci heap is to make the DECREASE-KEY operation as fast as possible. To that end, the basic strategy of DECREASE-KEY is to take the element whose key is being decreased and to cut it from its parent if the heap property is violated. Unfortunately, if we do that, then we lose the exponential connection between the order of a tree and the number of nodes in the tree. That's a problem because the COALESCE step of an EXTRACT-MIN operation links trees based on their orders and assumes that each tree's order says something about how many nodes it contains. With that connection broken, all the nice runtime bounds we want go away.
As a compromise, we introduce mark bits. If a node loses a child, it gets marked to indicate "something was lost here." Once a marked node loses a second child, it gets cut from its parent. Over time, if you do a huge number of cuts in a single tree, eventually the cuts propagate up to the root of the tree, decreasing the tree's order. That makes the tree behave differently during a COALESCE step.
With that said, the important detail here is that mark bits are only relevant for non-root nodes. If a node is a root of a tree and it loses a child, this immediately changes its order in a way that COALESCE will notice. Therefore, you can essentially ignore the mark bit of any tree root, since it never comes up. It's probably a wise idea to clear a node's mark bit before moving it up to the root list just so you don't have to clear it later, but that's more of an implementation detail than anything else.
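To make the mark-bit mechanics concrete, here is a minimal sketch in Python of the cut / cascading-cut logic used by DECREASE-KEY. The Node class and function names are invented; a real Fibonacci heap would also maintain the min pointer, node degrees, and a circular root list:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []
        self.mark = False  # "has this node lost a child since it became a child?"

def cut(node, root_list):
    """Move node from its parent to the root list, clearing its mark."""
    parent = node.parent
    parent.children.remove(node)
    node.parent = None
    node.mark = False            # roots are never marked
    root_list.append(node)

def cascading_cut(node, root_list):
    """Called on a node that just lost a child: mark it, or cut it if already marked."""
    parent = node.parent
    if parent is None:
        return                   # mark bits are irrelevant for roots
    if not node.mark:
        node.mark = True         # first loss: just remember it happened
    else:
        cut(node, root_list)     # second loss: cut, and recurse upward
        cascading_cut(parent, root_list)
```

Note how cut() clears the mark the moment a node reaches the root list, matching the point above that a root's mark bit never matters.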
I'm working on the Minimax algorithm to build a gomoku game. My problem with Minimax is this: when several child nodes have the same evaluation value, which one is actually propagated to the parent node, or is one chosen at random?
Example tree from Wiki
So as you can see from the tree above, at ply 3 (a Min level) there are two child nodes that both have the value 6. Which node is actually propagated to the parent node?
Updated question
Why are the leaves separated into groups of two or three, each corresponding to a different parent node?
Which node is actually propagated to the parent node?
In a word, "neither".
You evaluate the nodes, and take the maximum value. You add values, not nodes, so if the best value is shared by multiple nodes there's no need for you to pick between the nodes - you'd get the same result either way. You just take that best value.
Well, presumably you implemented the algorithm, so you can tell. And since you didn't post your code, we can't.
In general, if your evaluation function can't differentiate between moves, then there's no big problem with choosing either move at random. In fact, there are many games where this is a common event. Symmetry tends to produce situations in which two moves are equally good. And it's clear that in such situations it doesn't matter which move you choose.
As for trees, note that many trees do not order their children, and the minimax tree is no exception. There is no such thing as a "first child" and a "second child".
I was thinking about this and was encountering a lot of bugs while trying to do this. Is it possible?
I believe you are asking whether you can do the following update:
Given the update A B C, you want to add C to all elements from A to B.
The problem is that an update done naively on the segment tree would take O(N log N) time, where N is the number of elements. However, the key idea of a segment tree is that you want to support range queries, and normally you are not interested in all O(N^2) possible ranges but in a much smaller subset of them.
You can speed up range updates using lazy propagation, which generally means that when you do an update, you do not update all of the nodes in the segment tree: you update down to some point, but do not continue further down the tree, since it is not needed.
Say that you have pushed an update down to a node K which is responsible for the range [10; 30], for example. Later, you do a "get info" query on [20; 40]. Obviously you will have to visit node K, but you are not interested in its whole range, only in [20; 30], which is actually its right child.
What you have to do is "push" the update of node K to its left child and then its right child and continue as needed.
In general this means that when doing an update, you only descend until you find the node(s) whose ranges exactly cover the interval you are updating, and go no further. This yields O(log N) time per update.
Then, when querying, you propagate the saved updates down the tree whenever you reach a node that has a pending update stored for later. This also yields O(log N) time per query.
Some good material: Lazy propagation on segment trees
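As a concrete sketch of the technique, here is a minimal Python segment tree supporting range-add updates and range-sum queries with lazy propagation (half-open index ranges; names are my own):

```python
class LazySegmentTree:
    """Range add / range sum with lazy propagation; O(log N) per operation."""
    def __init__(self, n):
        self.n = n
        self.sums = [0] * (4 * n)
        self.lazy = [0] * (4 * n)

    def _apply(self, node, val, length):
        # Record a pending +val over the whole range this node covers.
        self.lazy[node] += val
        self.sums[node] += val * length

    def _push(self, node, l, mid, r):
        # Push node's pending update one level down, then clear it.
        if self.lazy[node]:
            self._apply(2 * node, self.lazy[node], mid - l)
            self._apply(2 * node + 1, self.lazy[node], r - mid)
            self.lazy[node] = 0

    def update(self, lo, hi, val, node=1, l=0, r=None):
        """Add val to every element in the half-open range [lo, hi)."""
        if r is None:
            r = self.n
        if hi <= l or r <= lo:
            return
        if lo <= l and r <= hi:          # fully covered: stop descending here
            self._apply(node, val, r - l)
            return
        mid = (l + r) // 2
        self._push(node, l, mid, r)
        self.update(lo, hi, val, 2 * node, l, mid)
        self.update(lo, hi, val, 2 * node + 1, mid, r)
        self.sums[node] = self.sums[2 * node] + self.sums[2 * node + 1]

    def query(self, lo, hi, node=1, l=0, r=None):
        """Sum of the elements in [lo, hi)."""
        if r is None:
            r = self.n
        if hi <= l or r <= lo:
            return 0
        if lo <= l and r <= hi:
            return self.sums[node]
        mid = (l + r) // 2
        self._push(node, l, mid, r)
        return (self.query(lo, hi, 2 * node, l, mid) +
                self.query(lo, hi, 2 * node + 1, mid, r))
```

The "stop descending" branch in update() is exactly the lazy trick described above: the pending addition is stored at the covering node and only pushed to its children when a later operation actually needs to pass through them.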
So when balancing a KD tree you're supposed to find the median and then put all the elements that are less on the left subtree and those greater on the right. But what happens if you have multiple elements with the same value as the median? Do they go in the left subtree, the right or do you discard them?
I ask because I've tried several approaches, and each affects the results of my nearest-neighbor search algorithm. There are some cases where all the elements in a given section of the tree have exactly the same value, and I don't know how to split them up then.
It does not really matter where you put them. Preferably, keep your tree balanced. So place as many on the left as needed to keep the optimal balance!
If your current search radius touches the median, you will have to check the other side anyway; that is all you need to handle tied objects on the other side. This is usually cheaper than some complex scheme for attaching equal elements to a particular side.
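One simple way to get that "as many on the left as needed" behavior is to split at the middle index of the sorted point list rather than comparing against the median value; ties then fall on whichever side keeps the tree balanced. A Python sketch (representation invented, nested tuples):

```python
def build_kdtree(points, depth=0):
    """Build a kd-tree as nested tuples (point, left, right).

    Splitting at the middle *index* of the sorted list, instead of
    partitioning by the median *value*, sends duplicates to whichever
    side keeps the tree balanced."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

# Even when every point ties on the split axis, the split stays balanced:
pts = [(5, i) for i in range(7)]
tree = build_kdtree(pts)
```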
When doing a search style algorithm, it is often a good idea to put elements equal to your median on both sides of the median.
One method is to put elements equal to the median on the same side they were on before you did your partition. Another is to alternate: put the first one on the left, the second on the right, and so on.
Another solution is to have a clumping data structure that just "counts" things that are equal instead of storing each one individually. (if they have extra state, then you can store that extra state instead of just a count)
I don't know which is appropriate in your situation.
That depends on your purpose.
For problems such as exact matching or range search, allowing repetitions of the same value on both sides complicates the query, and repetitions of the same value in both subtrees add to the time complexity.
A solution is storing all of the medians (the values that are equal to the value of median) on the node, neither left nor right. Most variants of kd-trees store the medians on the internal nodes. If they happen to be many, you may consider utilizing another (k-1)d tree for the medians.
Given a binary search tree and a number, find if there is a path from root to a leaf such that all numbers on the path added up to be the given number.
I know how to do it by recursively. But, I prefer an iterative solution.
If we walk from the root to a leaf each time, there will be redundant work because some paths overlap.
What if the tree is not a binary search tree?
Thanks
Basically this problem can be solved using Dynamic Programming on tree to avoid those overlapping paths.
The basic idea is to keep track of the possible path sums from each leaf up to a given node in a table f[node]. If we implement it as a 2-dimensional boolean array, it is something like f[node][len], which indicates whether there is a path from a leaf to node with sum equal to len. We can also use a vector<int> to store the values of each f[node] instead of a boolean array. Whatever representation you use, the way you combine the different f values is straightforward, in the form of
f[node] is the union of f[node->left] + len_left[node] and f[node->right] + len_right[node].
This is the binary-tree case, but it is really easy to extend to non-binary trees.
If there is anything unclear, please feel free to comment.
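As a sketch of this idea in Python (using a set per node in place of the boolean array; the tuple-based tree representation is invented): compute, bottom-up, the set of possible leaf-to-node sums at every node, and the answer is whether the target appears in the root's set.

```python
def leaf_sums(node):
    """Return the set of sums over every path from a leaf up to node.

    node is (value, left, right), where left/right are subtrees or None."""
    value, left, right = node
    if left is None and right is None:
        return {value}
    sums = set()
    if left is not None:
        sums |= {value + s for s in leaf_sums(left)}
    if right is not None:
        sums |= {value + s for s in leaf_sums(right)}
    return sums

#        5
#       / \
#      4   8            root-to-leaf sums: 5+4+11 = 20 and 5+8+4 = 17
#     /     \
#    11      4
tree = (5, (4, (11, None, None), None), (8, None, (4, None, None)))
print(sorted(leaf_sums(tree)))  # -> [17, 20]
```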
Anything you can do recursively, you can also do iteratively. However, if you are not having performance issues with the recursive solution, I would leave it as is. The iterative version would more likely than not be more difficult to code and read.
However if you insist, you can transform your recursive solution into an iterative one by using a stack. Every time you make a recursive call, push the state variables in your current function call onto the stack. When you are done with a call, pop off the variables.
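Following that recipe, an iterative root-to-leaf path-sum check might look like this in Python (a sketch; the stack entries carry the state a recursive call would have, and the tuple tree representation is invented):

```python
def has_path_sum(root, target):
    """True if some root-to-leaf path sums to target. Iterative, explicit stack.

    root is (value, left, right), where left/right are subtrees or None."""
    if root is None:
        return False
    stack = [(root, 0)]                  # (node, sum of its ancestors so far)
    while stack:
        (value, left, right), above = stack.pop()
        total = above + value
        if left is None and right is None and total == target:
            return True                  # reached a leaf with the right sum
        if left is not None:
            stack.append((left, total))
        if right is not None:
            stack.append((right, total))
    return False

tree = (5, (4, (11, None, None), None), (8, None, (4, None, None)))
print(has_path_sum(tree, 20))  # -> True (path 5 + 4 + 11)
```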
For a BST:
Node current, final = (initialize);
List nodesInPath;
nodesInPath.add(current);
while (current != final) {
    if (current has no children) return noSolution;
    if (current < final) {
        // choose the right child if there is one, otherwise no solution
        current = current.right;
    } else if (current > final) {
        // choose the left child if there is one, otherwise no solution
        current = current.left;
    }
    nodesInPath.add(current);
}
// check the sum over nodesInPath
However, for a non-BST you should apply a dynamic-programming solution, as derekhh suggests, if you don't want to recalculate the same paths over and over. I think you can store the total sum between each processed node and the root, computing these totals as you expand the nodes, and then use breadth-first search so that the same paths are not traversed again and the previously computed totals are reused. The algorithm that comes to mind is a little complex; sorry, I don't have enough time to write it out.