Traversing and Printing a Binary Tree Level by Level - c++

I am trying to traverse a binary tree built with input data from the keyboard. Data is inserted into the binary tree successfully. I have a switch statement, where 'case 4' should traverse (and print) the binary tree level by level. However, I get an EXC_BAD_ACCESS error. I would be more than happy if someone could help me out with this one.
(RootPtr is the top (level 0) node of the binary tree, defined globally; TreeDepth() is the function that calculates the depth of the tree, stored in the global Depth, with the root node having depth 0; and GetNode is basically an initializer function (using malloc) for TreePtr pointers.)
Thank you all in advance.
Here is the relevant code:
This is the struct definition;
typedef struct treeItem
{
    int data;
    struct treeItem *left;
    struct treeItem *right;
} Tree, *TreePtr;
This is the switch case where I call Level by Level traversing function(s);
case 4:
    TreePtr temp;
    GetNode(&temp);
    temp = RootPtr;
    printLevelOrder(temp);
    printf("\n\n");
    break;
These are the functions used for traversing the tree level by level;
void printGivenLevel(TreePtr TemPtr, int level)
{
    if (items == 0)
        return;
    else
    {
        if (level == 0)
        {
            printf(" %d", (*TemPtr).data); // the line where I get the ERROR
        }
        else
        {
            printGivenLevel((*TemPtr).left, (level - 1));
            printGivenLevel((*TemPtr).right, (level - 1));
        }
    }
}
void printLevelOrder(TreePtr TemPtr)
{
    TreeDepth();
    if (items == 0)
        printf("\nTree is empty.\n");
    else
    {
        printf("Traverse level by level:");
        for (int i = 0; i <= Depth; i++)
        {
            printGivenLevel(TemPtr, i);
        }
    }
}

It's an off-by-one error. In your for loop:
for (int i=0; i<=Depth; i++)
You're executing this loop Depth + 1 times, which means you're trying to access one more level than actually exists. In particular, in the final call of printGivenLevel, at the point in the recursion where level == 1, you're already at the bottom of the tree. You then recurse one more time, but the pointers you pass into that next recursion level are garbage pointers: they aren't guaranteed to point to memory you're allowed to access, or memory that even exists. So when you try to dereference them, you get the error.
One more thing: this implementation is pretty inefficient, since it traverses the tree once per level. It's better to do a breadth-first search, as kiss-o-matic mentioned. That way you traverse the tree only once, which is much faster (although it does use more memory).
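For illustration, here is a minimal sketch of that breadth-first version, assuming the same treeItem struct from the question and that std::queue is acceptable (this is my sketch, not code from the question):

#include <cstdio>
#include <queue>

// Same node type as in the question.
typedef struct treeItem
{
    int data;
    struct treeItem *left;
    struct treeItem *right;
} Tree, *TreePtr;

// Level-order traversal with a queue: every node is visited exactly once.
void printLevelOrderBFS(TreePtr root)
{
    if (root == NULL)
    {
        printf("\nTree is empty.\n");
        return;
    }
    std::queue<TreePtr> pending;
    pending.push(root);
    while (!pending.empty())
    {
        TreePtr node = pending.front();
        pending.pop();
        printf(" %d", node->data);
        if (node->left != NULL)  pending.push(node->left);
        if (node->right != NULL) pending.push(node->right);
    }
    printf("\n");
}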

Related

Stack overflow? Interesting behaviour during very deep recursion

While I was working on my assignment on BSTs, linked lists and AVL trees, I noticed... well, exactly what the title says.
I believe it is somehow related to stack overflow, but I could not figure out why it is happening.
The plots (not reproduced here) showed: creation of the BST and linked list; searching for all elements in the linked list and BST; and, probably most interesting, a comparison of the heights of the BST and the AVL tree (based on an array of unique random integers).
On every graph, something interesting begins around 33k elements.
Optimization O2 in MS Visual Studio 2019 Community.
The search function of the linked list is not recursive.
Memory for each "link" was allocated with the "new" operator.
The X axis ends at 40k elements because at about 43k a stack overflow error happens.
Do you know why this happens? Actually, I'm curious what exactly is going on. Looking forward to your answers! Stay healthy.
Here is some related code. Although it is not exactly the same, I can assure you it works the same way, and some of the actual code was based on it.
struct tree {
    tree() {
        info = 0;
        left = NULL;
        right = NULL;
    }
    int info;
    struct tree *left;
    struct tree *right;
};
struct tree *insert(struct tree*& root, int x) {
    if (!root) {
        root = new tree;
        root->info = x;
        root->left = NULL;
        root->right = NULL;
        return root;
    }
    if (root->info > x)
        root->left = insert(root->left, x);
    else if (root->info < x)
        root->right = insert(root->right, x);
    return root;
}
struct tree *search(struct tree*& root, int x) {
    struct tree *ptr = root;
    while (ptr) {
        if (x > ptr->info)
            ptr = ptr->right;
        else if (x < ptr->info)
            ptr = ptr->left;
        else
            return ptr;
    }
    return NULL;
}
int bstHeight(tree*& tr) {
    if (tr == NULL) {
        return -1;
    }
    int lefth = bstHeight(tr->left);
    int righth = bstHeight(tr->right);
    if (lefth > righth) {
        return lefth + 1;
    } else {
        return righth + 1;
    }
}
The AVL tree is built from the BST: the BST is read in order into an array, and the elements of that array are then inserted into the tree object by bisection.
The spikes in time could be, and I am nearly sure they are, caused by using up some CPU cache (L2, for example), so some of the data had to be fetched from slower memory.
The answer is thanks to @David_Schwartz.
The spike in the height of the BST is actually my own fault. For the "array of unique random integers" I used an array of already-sorted unique items and then mixed them up by swapping elements chosen with rand(). I had totally forgotten how devastating that can be when larger random values are expected: rand() never returns values above RAND_MAX (32767 in MSVC), so for arrays bigger than that the shuffle is heavily biased and the BST ends up badly unbalanced, which matches the ~33k threshold in the graphs.
Thanks @rici for pointing it out.
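For completeness, a sketch of a shuffle that is not limited by RAND_MAX (my illustration, not the assignment code), using the standard <random> facilities:

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // Unique integers 0..n-1, shuffled with a Mersenne Twister instead of
    // rand(), so the result is well mixed even for n far above 32767.
    std::vector<int> values(100000);
    std::iota(values.begin(), values.end(), 0);
    std::mt19937 rng(std::random_device{}());
    std::shuffle(values.begin(), values.end(), rng);
    return 0;
}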

Print ancestors of all nodes in a Binary Tree. Can we do it in less than O(n^2) time complexity?

This is the standard algorithm for printing the ancestors of a particular node in a binary tree, and it takes O(n) time.
bool print(Node *node, int target) {
    if (node == NULL)
        return false;
    if (node->data == target)
        return true;
    if (print(node->left, target) || print(node->right, target)) {
        cout << node->data;
        return true;
    }
    return false;
}
The question is: if we need to print the ancestors of all nodes, and also store the ancestors in an array for each node, what is the time complexity? Can we do better than O(n^2), i.e. without looping through each node to find its ancestors? If so, how?
If you want an array corresponding to each node in the tree, then you cannot do better than O(N^2) worst case, because the total size of all the arrays is worst case O(N^2) (in the case that every node in the tree has at most one child). If you expect the trees to be somewhat balanced, this reduces to O(N log N).
You can achieve O(N) construction by sharing data, using a linked list instead of an array for each node. In effect, that's equivalent to computing a parent link for each node, because the linked list is simply a traversal of parent links. But you cannot avoid the cost of printing, because you will print O(N log N) items on average and O(N^2) in the worst case when you print out all the ancestor chains.
The parent link construction algorithm is basically the same as the algorithm you present: recursively walk the tree setting the parent links of the children to the current node. To print the ancestor chains for each node, you can use the parent links while you walk the tree.
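A minimal sketch of that idea (the names here are mine, not from the question): one O(N) walk records each node's parent, and each ancestor chain is then just a walk along parent links.

#include <iostream>
#include <unordered_map>

struct Node {
    int data;
    Node *left = nullptr;
    Node *right = nullptr;
};

// One O(N) traversal that records the parent of every node.
void buildParents(Node *node, Node *parent,
                  std::unordered_map<Node*, Node*> &parentOf) {
    if (!node) return;
    parentOf[node] = parent;
    buildParents(node->left, node, parentOf);
    buildParents(node->right, node, parentOf);
}

// Print every node followed by its ancestors; the printing itself is
// O(N log N) on average and O(N^2) in the worst case, as discussed above.
void printAllAncestors(Node *node,
                       const std::unordered_map<Node*, Node*> &parentOf) {
    if (!node) return;
    std::cout << node->data << ":";
    for (Node *p = parentOf.at(node); p != nullptr; p = parentOf.at(p))
        std::cout << " " << p->data;
    std::cout << "\n";
    printAllAncestors(node->left, parentOf);
    printAllAncestors(node->right, parentOf);
}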
It can be done in O(n*h), where h is the tree's height, by implementing a DFS that keeps track of the currently open nodes.
Simple C++-like pseudocode could look something like this:
void PrintAll(const Node& node) {
    std::unordered_set<Node> open;  // empty hash set
    PrintAll(node, &open);
}

void PrintAll(const Node& node, std::unordered_set<Node>* open) {
    if (node == null)
        return;
    for (const Node& ancestor : *open)
        cout << ancestor << "," << node;
    open->insert(node);
    PrintAll(node.left, open);
    PrintAll(node.right, open);
    open->erase(node);
}
Caveat: here we do not print (node, node) pairs (i.e. each node as an ancestor of itself). If we want that, it can easily be fixed by adding node to open before the printing loop.
Also, you can make the unordered_set store only the data, rather than the entire node.
The following code will do the job :
Language used: Java
// Algorithm for printing the ancestors of a Node.
static boolean findAncestors(Node root, Node a)
{
    if (root == null)
    {
        return false;
    }
    else if (root == a)
    {
        System.out.print("Printing ancestors of node " + a.data + " : " + a.data);
        return true;
    }
    else
    {
        boolean var = false;
        var = findAncestors(root.left, a);
        if (var)
        {
            System.out.print(", " + root.data);
            return true;
        }
        var = findAncestors(root.right, a);
        if (var)
        {
            System.out.print(", " + root.data);
            return true;
        }
    }
    return false;
}

Recursive Binary Tree Traversal Code Goes to Infinity

I am trying to traverse a binary tree built with input data from the keyboard. Data is inserted into the binary tree successfully. I have a switch statement, where 'case 2' should traverse (and print) the binary tree with the inorder, preorder, and postorder traversal algorithms, using recursion, respectively. However, when 'case 2' is called, only the first value that should be printed by the inorder traversal appears on screen, and it is printed over and over (seemingly infinitely), so I have to stop the program manually. I would be more than happy if someone could help me out with this one.
(RootPtr is the top (level 0) node of the binary tree, defined globally; and GetNode is basically an initializer function (using malloc) for TreePtr pointers.)
Thank you all in advance.
This is the struct definition;
typedef struct treeItem
{
    int data;
    struct treeItem *left;
    struct treeItem *right;
} Tree, *TreePtr;
These three are the traversing functions called respectively;
void inorder (TreePtr TemPtr)
{
    while (TemPtr != NULL)
    {
        inorder((*TemPtr).left);
        printf(" %d ", (*TemPtr).data);
        inorder((*TemPtr).right);
    }
    printf("\n");
}
void preorder (TreePtr TemPtr)
{
    while (TemPtr != NULL)
    {
        printf(" %d ", (*TemPtr).data);
        preorder((*TemPtr).left);
        preorder((*TemPtr).right);
    }
    printf("\n");
}
void postorder (TreePtr TemPtr)
{
    while (TemPtr != NULL)
    {
        postorder((*TemPtr).left);
        postorder((*TemPtr).right);
        printf(" %d", (*TemPtr).data);
    }
    printf("\n");
}
This one is the related 'case' of the switch statement;
case 2:
    TreePtr LocPtr;
    GetNode(&LocPtr);
    LocPtr = RootPtr;
    printf("\n");
    printf("Inorder traversal:");
    inorder(LocPtr);
    printf("Preorder traversal:");
    preorder(LocPtr);
    printf("Postorder traversal:");
    postorder(LocPtr);
    printf("\n");
    break;
There shouldn't be a while loop inside your traversal functions. The recursion already visits all the nodes.
void inorder (TreePtr TemPtr)
{
    if (TemPtr != NULL) {
        inorder((*TemPtr).left);
        printf(" %d ", (*TemPtr).data);
        inorder((*TemPtr).right);
    }
    printf("\n");
}
If you think about it, your TemPtr parameter never changes to NULL while you iterate in the loop, so the function gets stuck in an infinite loop.
Explanation on tree traversal:
In your main, case 2, you call inorder with the root of the tree as parameter. Then we need to traverse the tree.
inorder(LocPtr);
In-order traversal is:
go to left child ... visit current node ... go to right child
We need two things in a recursive function/method: a base case, and a recursive call.
Here, our base case is if (TemPtr != NULL). When this condition is false, TemPtr is NULL, which means we have walked past a leaf, so we don't go any further down to its children (which would cause errors).
But if TemPtr is not NULL, it means we are currently at a valid node. We must therefore follow the definition of in-order traversal (as stated previously).
We visit the left child:
inorder((*TemPtr).left); // equivalent to inorder(TemPtr->left);
we visit the current node:
printf(" %d ", (*TemPtr).data); // equivalent to printf (" %d ", TemPtr->data);
and we visit the right child:
inorder((*TemPtr).right); // equivalent to inorder(TemPtr->right);
When a call to inorder finishes, its caller continues where it left off. This process continues until inorder(LocPtr) finishes; at that point you're back in main, and the whole tree has been traversed in order.
The easiest way to visualize this is to draw the calls on a sheet of paper. The function calls stack on top of each other (main at the bottom) and are removed from the stack once they finish.

Is it possible to make efficient pointer-based binary heap implementations?

Is it even possible to implement a binary heap using pointers rather than an array? I have searched around the internet (including SO) and no answer can be found.
The main problem here is: how do you keep track of the last pointer? When you insert X into the heap, you place X at the last position and then bubble it up. Now, where does the last pointer point?
And what happens when you want to remove the root? You exchange the root with the last element and then bubble the new root down. Now, how do you know what the new "last element" is, which you'll need when you remove the root again?
Solution 1: Maintain a pointer to the last node
In this approach a pointer to the last node is maintained, and parent pointers are required.
When inserting, starting at the last node navigate to the node below which a new last node will be inserted. Insert the new node and remember it as the last node. Move it up the heap as needed.
When removing, starting at the last node, navigate to the second-to-last node. Remove the original last node and remember the new last node just found. Move the original last node into the place of the deleted node and then move it up or down the heap as needed.
It is possible to navigate to the mentioned nodes in O(log(n)) time and O(1) space. Here is a description of the algorithms; the full code is available below.
For insert: If the last node is a left child, proceed with inserting the new node as the right child of the parent. Otherwise... Start at the last node. Move up as long as the current node is a right child. If the root was not reached, move to the sibling node at the right (which necessarily exists). Then (whether or not the root was reached), move down to the left as long as possible. Proceed by inserting the new node as the left child of the current node.
For remove: If the last node is the root, proceed by removing the root. Otherwise... Start at the last node. Move up as long as the current node is a left child. If the root was not reached, move to the sibling left node (which necessarily exists). Then (whether or not the root was reached), move down to the right as long as possible. We have arrived at the second-to-last node.
However, there are some things to be careful about:
When removing, there are two special cases: when the last node is being removed (unlink the node and change the last node pointer), and when the second-to-last node is being removed (not really special but the possibility must be considered when replacing the deleted node with the last node).
When moving nodes up or down the heap, if the move affects the last node, the last-node pointer must be corrected.
Long ago I made an implementation of this. In case it helps someone, here is the code. Algorithmically it should be correct (it has also been subjected to stress testing with verification), but of course there is no warranty.
Solution 2: Reach the last node from the root
This solution requires maintaining the node count (but not parent pointers or the last node). The last (or second-to-last) node is found by navigating from the root towards it.
Assume the nodes are numbered starting from 1, as per the typical notation for binary heaps. Pick any valid node number and represent it in binary. Ignore the first (most significant) 1 bit. The remaining bits define the path from the root to that node; zero means left and one means right.
For example, to reach node 11 (=1011b), start at the root then go left (0), right (1), right (1).
This algorithm can be used in insert to find where to place the new node (follow the path for node node_count + 1), and in remove to find the second-to-last node (follow the path for node node_count - 1).
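A rough sketch of this navigation (my own illustration; the node and member names are assumptions, not from the answer above):

#include <cstddef>

struct HeapNode {
    HeapNode *left = nullptr;
    HeapNode *right = nullptr;
    int value = 0;
};

// Return the node with the given 1-based index (root = 1), following the
// binary representation of the index: after the leading 1 bit, a 0 bit
// means "go left" and a 1 bit means "go right".
HeapNode *nodeAt(HeapNode *root, std::size_t index) {
    std::size_t high = 1;                      // highest set bit of index
    while (high <= index / 2) high <<= 1;
    HeapNode *cur = root;
    for (std::size_t mask = high >> 1; mask != 0 && cur != nullptr; mask >>= 1)
        cur = (index & mask) ? cur->right : cur->left;
    return cur;
}

For example, nodeAt(root, 11) follows left, right, right, matching the example above. For remove, nodeAt(root, node_count - 1) yields the second-to-last node directly; for insert, one would follow the path for node_count + 1 but stop one bit short, at the parent under which the new node is attached.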
This approach is used in libuv for timer management; see their implementation of the binary heap.
Usefulness of Pointer-based Binary Heaps
Many answers here and even literature say that an array-based implementation of a binary heap is strictly superior. However I contest that because there are situations where the use of an array is undesirable, typically because the upper size of the array is not known in advance and on-demand reallocations of an array are not deemed acceptable, for example due to latency or possibility of allocation failure.
The fact that libuv (a widely used event loop library) uses a binary heap with pointers only further speaks for this.
It is worth noting that the Linux kernel uses (pointer-based) red-black trees as a priority queue in a few cases, for example for CPU scheduling and timer management (for the same purpose as in libuv). I find it likely that changing these to use a pointer-based binary heap will improve performance.
Hybrid Approach
It is possible to combine Solution 1 and Solution 2 into a hybrid approach which dynamically picks whichever of the two algorithms (for finding the last or second-to-last node) has the lower cost, measured in the number of edges that need to be traversed. Assume we want to navigate to node number N, and highest_bit(X) means the 0-based index of the highest-order bit in X (0 means the LSB).
The cost of navigating from the root (Solution 2) is highest_bit(N).
The cost of navigating from the previous node which is on the same level (Solution 1) is: 2 * (1 + highest_bit((N-1) xor N)).
Note that in the case of a level change the second equation will yield a wrong (too large) result, but in that case traversal from the root is more efficient anyway (for which the estimate is correct) and will be chosen, so there is no need for special handling.
Some CPUs have an instruction for highest_bit, allowing a very efficient implementation of these estimates. An alternative approach is to maintain the highest bit as a bit mask and do these calculations with bit masks instead of bit indices (consider, for example, that 1 followed by N zeroes, squared, is equal to 1 followed by 2N zeroes).
In my testing it has turned out that Solution 1 is on average faster than Solution 2, and the hybrid approach appeared to have about the same average performance as Solution 2. Therefore the hybrid approach is only useful if one needs to minimize the worst-case time, which is (twice) better in Solution 2; Solution 1 will, in the worst case, traverse the entire height of the tree up and then down again.
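For illustration, a sketch of how the two cost estimates above could be compared (the highest_bit helper here is a portable stand-in for the CPU instruction mentioned):

#include <cstddef>

// 0-based index of the highest set bit of x (x must be non-zero).
static int highest_bit(std::size_t x) {
    int b = -1;
    while (x != 0) { x >>= 1; ++b; }
    return b;
}

// Decide whether to reach node number N (1-based) from the root (Solution 2)
// or from the previous node on the same level (Solution 1), using the edge
// counts given above.
bool navigate_from_root(std::size_t N) {
    int cost_root = highest_bit(N);                       // Solution 2
    int cost_prev = 2 * (1 + highest_bit((N - 1) ^ N));   // Solution 1
    return cost_root <= cost_prev;
}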
Code for Solution 1
Note that the traversal code in insert is slightly different from the algorithm described above but still correct.
struct Node {
    Node *parent;
    Node *link[2];
};

struct Heap {
    Node *root;
    Node *last;
};

void init (Heap *h)
{
    h->root = NULL;
    h->last = NULL;
}
void insert (Heap *h, Node *node)
{
    // If the heap is empty, insert root node.
    if (h->root == NULL) {
        h->root = node;
        h->last = node;
        node->parent = NULL;
        node->link[0] = NULL;
        node->link[1] = NULL;
        return;
    }
    // We will be finding the node to insert below.
    // Start with the current last node and move up as long as the
    // parent exists and the current node is its right child.
    Node *cur = h->last;
    while (cur->parent != NULL && cur == cur->parent->link[1]) {
        cur = cur->parent;
    }
    if (cur->parent != NULL) {
        if (cur->parent->link[1] != NULL) {
            // The parent has a right child. Attach the new node to
            // the leftmost node of the parent's right subtree.
            cur = cur->parent->link[1];
            while (cur->link[0] != NULL) {
                cur = cur->link[0];
            }
        } else {
            // The parent has no right child. This can only happen when
            // the last node is a right child. The new node can become
            // the right child.
            cur = cur->parent;
        }
    } else {
        // We have reached the root. The new node will be at a new level,
        // the left child of the current leftmost node.
        while (cur->link[0] != NULL) {
            cur = cur->link[0];
        }
    }
    // This is the node below which we will insert. It has either no
    // children or only a left child.
    assert(cur->link[1] == NULL);
    // Insert the new node, which becomes the new last node.
    h->last = node;
    cur->link[cur->link[0] != NULL] = node;
    node->parent = cur;
    node->link[0] = NULL;
    node->link[1] = NULL;
    // Restore the heap property.
    while (node->parent != NULL && value(node->parent) > value(node)) {
        move_one_up(h, node);
    }
}
void remove (Heap *h, Node *node)
{
    // If this is the only node left, remove it.
    if (node->parent == NULL && node->link[0] == NULL && node->link[1] == NULL) {
        h->root = NULL;
        h->last = NULL;
        return;
    }
    // Locate the node before the last node.
    Node *cur = h->last;
    while (cur->parent != NULL && cur == cur->parent->link[0]) {
        cur = cur->parent;
    }
    if (cur->parent != NULL) {
        assert(cur->parent->link[0] != NULL);
        cur = cur->parent->link[0];
    }
    while (cur->link[1] != NULL) {
        cur = cur->link[1];
    }
    // Disconnect the last node.
    assert(h->last->parent != NULL);
    h->last->parent->link[h->last == h->last->parent->link[1]] = NULL;
    if (node == h->last) {
        // Deleting last, set new last.
        h->last = cur;
    } else {
        // Not deleting last, move last to node's place.
        Node *srcnode = h->last;
        replace_node(h, node, srcnode);
        // Set new last unless node=cur; in this case it stays the same.
        if (node != cur) {
            h->last = cur;
        }
        // Restore the heap property.
        if (srcnode->parent != NULL && value(srcnode) < value(srcnode->parent)) {
            do {
                move_one_up(h, srcnode);
            } while (srcnode->parent != NULL && value(srcnode) < value(srcnode->parent));
        } else {
            while (srcnode->link[0] != NULL || srcnode->link[1] != NULL) {
                bool side = srcnode->link[1] != NULL && value(srcnode->link[0]) >= value(srcnode->link[1]);
                if (value(srcnode) > value(srcnode->link[side])) {
                    move_one_up(h, srcnode->link[side]);
                } else {
                    break;
                }
            }
        }
    }
}
Two other functions are used: move_one_up moves a node one step up in the heap, and replace_node moves an existing node (srcnode) into the place held by the node being deleted. Both work only by adjusting the links to and from other nodes; there is no actual moving of data involved. These functions should not be hard to implement, and the mentioned link includes my implementations.
The pointer-based implementation of a binary heap is quite difficult compared to the array-based implementation, but it is fun to code. The basic idea is that of a binary tree, and the biggest challenge is keeping it left-filled: finding the exact location where you must insert a new node.
For that, you need binary traversal. Suppose our heap size is 6. Take the size + 1 and convert it to bits: the binary representation of 7 is "111". Always omit the first bit, which leaves "11". Read it from left to right: the first bit is '1', so go to the right child of the root node. The remaining string is "1", and as only one bit is left, this single bit tells you where to insert the new node; since it is '1', the new node becomes the right child of the current node. So the overall process is: convert (heap size + 1) to bits, omit the first bit, and for each remaining bit, go to the right child of the current node if it is '1' and to the left child if it is '0'.
After inserting the new node, you bubble it up the heap, which is why you need the parent pointer. So you go once down the tree and once up the tree, and the insertion operation takes O(log N).
As for deletion, the challenge is again finding the last node. I hope you are familiar with deletion in a heap, where we swap the root with the last node and heapify. To find the last node, we use the same technique as for finding the insertion point, but with a little twist: use the binary representation of HeapSize itself, not HeapSize + 1. This takes you to the last node, so deletion also costs O(log N).
I'm having trouble posting the source code here, but you can refer to my blog for it. The code includes heap sort too, which is very simple: we just keep deleting the root node. Refer to my blog for an explanation with figures, but I guess this explanation will do.
I hope my answer has helped you. If it did, let me know...! ☺
For those saying this is a useless exercise, there are a couple of (admittedly rare) use cases where a pointer-based solution is better. If the maximum size of the heap is unknown, an array implementation will need to stop and copy into fresh storage when the array fills. In a system (e.g. embedded) with fixed response-time constraints, and/or where free memory exists but not in a big enough contiguous block, this may not be acceptable. The pointer tree lets you allocate incrementally in small, fixed-size chunks, so it doesn't have these problems.
To answer the OP's question, parent pointers and/or elaborate tracking aren't necessary to determine where to insert the next node or find the current last one. You only need the bits in the binary representation of the heap's size to determine the left and right child pointers to follow.
Edit: Just saw @Vamsi Sangam's explanation of this algorithm. Nonetheless, here's a demo in code:
#include <stdio.h>
#include <stdlib.h>

typedef struct node_s {
    struct node_s *lft, *rgt;
    int data;
} NODE;

typedef struct heap_s {
    NODE *root;
    size_t size;
} HEAP;

// Add a new node at the last position of a complete binary tree.
void add(HEAP *heap, NODE *node) {
    size_t mask = 0;
    size_t size = ++heap->size;
    // Initialize the mask to the high-order 1 of the size.
    for (size_t x = size; x; x &= x - 1) mask = x;
    NODE **pp = &heap->root;
    // Advance pp right or left depending on size bits.
    while (mask >>= 1) pp = (size & mask) ? &(*pp)->rgt : &(*pp)->lft;
    *pp = node;
}

void print(NODE *p, int indent) {
    if (!p) return;
    for (int i = 0; i < indent; i++) printf(" ");
    printf("%d\n", p->data);
    print(p->lft, indent + 1);
    print(p->rgt, indent + 1);
}

int main(void) {
    HEAP h[1] = { { NULL, 0 } };
    for (int i = 0; i < 16; i++) {
        NODE *p = malloc(sizeof *p);
        p->lft = p->rgt = NULL;
        p->data = i;
        add(h, p);
    }
    print(h->root, 0);
}
As you'd hope, it prints:
0
 1
  3
   7
    15
   8
  4
   9
   10
 2
  5
   11
   12
  6
   13
   14
Sift-down can use the same kind of iteration. It's also possible to implement the sift-up without parent pointers using either recursion or an explicit stack to "save" the nodes in the path from root to the node to be sifted.
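For instance, a minimal sift-down sketch for the NODE type in the demo above could swap the data fields while walking down (this is my addition, not part of the original demo, and assumes a min-heap):

// Restore the min-heap property downward from p by swapping data values,
// not by relinking nodes. Assumes the NODE type from the demo above.
void sift_down(NODE *p) {
    while (p) {
        NODE *smallest = p;
        if (p->lft && p->lft->data < smallest->data) smallest = p->lft;
        if (p->rgt && p->rgt->data < smallest->data) smallest = p->rgt;
        if (smallest == p) break;          // heap property holds here
        int tmp = p->data;                 // swap payloads with the smaller child
        p->data = smallest->data;
        smallest->data = tmp;
        p = smallest;                      // continue from that child
    }
}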
A binary heap is a complete binary tree obeying the heap property. That's all. The fact that it can be stored using an array is just nice and convenient. But sure, you can implement it using a linked structure. It's a fun exercise! As such, it is mostly useful as an exercise or in more advanced data structures (meldable, addressable priority queues, for example), as it is quite a bit more involved than the array version. For example, think about the siftup/siftdown procedures and all the edge cutting/sewing you'll need to get right. Anyway, it's not too hard, and once again, good fun!
There are a number of comments pointing out that by a strict definition it is possible to implement a binary heap as a tree and still call it a binary heap.
Here is the problem -- there is never a reason to do so, since using an array is better in every way.
If you search for information on how to work with a heap using pointers, you are not going to find any -- no one bothers, since there is no reason to implement a binary heap this way.
If you search on trees instead, you will find lots of helpful material. That was the point of my original answer. Nothing stops people from doing it this way, but there is never a reason to.
You say: I have to do so, I've got a legacy system and I have pointers to nodes that I need to put in a heap.
Make an array of those pointers and work with them in that array as you would in a standard array-based heap; when you need the contents, dereference them. This will work better than any other way of implementing your system.
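For example, a sketch of that approach with the standard heap algorithms (the Node type and key field here are stand-ins for whatever the legacy system actually provides):

#include <algorithm>
#include <cstdio>
#include <vector>

struct Node { int key; };  // stand-in for the legacy node type

int main() {
    // Gather the existing node pointers into a contiguous array...
    std::vector<Node*> heap = { new Node{3}, new Node{1}, new Node{2} };

    // ...and treat that array as a standard binary heap, dereferencing
    // the pointers only inside the comparator.
    auto greater = [](const Node *a, const Node *b) { return a->key > b->key; };
    std::make_heap(heap.begin(), heap.end(), greater);   // min-heap of Node*

    std::pop_heap(heap.begin(), heap.end(), greater);    // smallest moves to back()
    Node *smallest = heap.back();
    heap.pop_back();
    std::printf("%d\n", smallest->key);                  // prints 1
    delete smallest;
    for (Node *n : heap) delete n;
    return 0;
}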
I can think of no other reason to implement a heap using pointers.
Original Answer:
If you implement it with pointers then it is a tree. A heap is a heap because of how you can calculate the location of the children as a location in the array (2 * node index +1 and 2 * node index + 2).
So no, you can't implement it with pointers, if you do you've implemented a tree.
Implementing trees is well documented if you search you will find your answers.
I have searched around the internet (including SO) and no answer can be found.
Funny, because I found an answer on SO within moments of googling it. (Same Google search led me here.)
Basically:
The node should have pointers to its parent, left child, and right child.
You need to keep pointers to:
the root of the tree (root) (duh)
the last node inserted (lastNode)
the leftmost node of the lowest level (leftmostNode)
the rightmost node of the next-to-lowest level (rightmostNode)
Now, let the node to be inserted be nodeToInsert. Insertion algorithm in pseudocode:
void insertNode(Data data) {
    Node* parentNode;
    Node* nodeToInsert = new Node(data);
    if (root == NULL) { // empty tree
        parentNode = NULL;
        root = nodeToInsert;
        leftmostNode = root;
        rightmostNode = NULL;
    } else if (lastNode.parent == rightmostNode && lastNode.isRightChild()) {
        // level full
        parentNode = leftmostNode;
        leftmostNode = nodeToInsert;
        parentNode->leftChild = nodeToInsert;
        rightmostNode = lastNode;
    } else if (lastNode.isLeftChild()) {
        parentNode = lastNode->parent;
        parentNode->rightChild = nodeToInsert;
    } else if (lastNode.isRightChild()) {
        parentNode = lastNode->parent->parent->rightChild;
        parentNode->leftChild = nodeToInsert;
    }
    nodeToInsert->parent = parentNode;
    lastNode = nodeToInsert;
    heapifyUp(nodeToInsert);
}
Pseudocode for deletion:
Data deleteNode() {
    if (root == NULL) throw new EmptyHeapException();
    Data result = root->data;
    if (lastNode == root) { // the root is the only node
        free(root);
        root = NULL;
    } else {
        Node* newRoot = lastNode;
        if (lastNode == leftmostNode) {
            newRoot->parent->leftChild = NULL;
            lastNode = rightmostNode;
            rightmostNode = rightmostNode->parent;
        } else if (lastNode.isRightChild()) {
            newRoot->parent->rightChild = NULL;
            lastNode = newRoot->parent->leftChild;
        } else if (lastNode.isLeftChild()) {
            newRoot->parent->leftChild = NULL;
            lastNode = newRoot->parent->parent->leftChild->rightChild;
        }
        newRoot->leftChild = root->leftChild;
        newRoot->rightChild = root->rightChild;
        newRoot->parent = NULL;
        free(root);
        root = newRoot;
        heapifyDown(root);
    }
    return result;
}
heapifyUp() and heapifyDown() shouldn’t be too hard, though of course you’ll have to make sure those functions don’t make leftmostNode, rightmostNode, or lastNode point at the wrong place.
TL;DR Just use a goddamn array.

Calculate height of a tree

I am trying to calculate the height of a tree. I am doing it with the code written below.
#include <iostream.h>

struct tree
{
    int data;
    struct tree *left;
    struct tree *right;
};
typedef struct tree tree;

class Tree
{
private:
    int n;
    int data;
    int l, r;
public:
    tree *Root;
    Tree(int x)
    {
        n = x;
        l = 0;
        r = 0;
        Root = NULL;
    }
    void create();
    int height(tree *Height);
};

void Tree::create()
{
    // Creating the tree structure
}

int Tree::height(tree *Height)
{
    if (Height->left == NULL && Height->right == NULL)
    {
        return 0;
    }
    else
    {
        l = height(Height->left);
        r = height(Height->right);
        if (l > r)
        {
            l = l + 1;
            return l;
        }
        else
        {
            r = r + 1;
            return r;
        }
    }
}

int main()
{
    Tree A(10);  // Initializing a 10-node Tree object
    A.create();  // Creating a 10-node tree
    cout << "The height of tree" << A.height(A.Root);
}
It gives me the correct result.
But in some posts I found while googling, it was suggested to do a postorder traversal and use this height method to calculate the height. Is there any specific reason?
But isn't a postorder traversal precisely what you are doing? Assuming left and right are both non-null, you first do height(left), then height(right), and then some processing in the current node. That's postorder traversal according to me.
But I would write it like this:
int Tree::height(tree *node) {
    if (!node) return -1;
    return 1 + max(height(node->left), height(node->right));
}
Edit: depending on how you define tree height, the base case (for an empty tree) should be 0 or -1.
The code will fail in trees where at least one of the nodes has only one child:
// code snippet (space condensed for brevity)
int Tree::height(tree *Height) {
    if (Height->left == NULL && Height->right == NULL) { return 0; }
    else {
        l = height(Height->left);
        r = height(Height->right);
        //...
If the tree has two nodes (the root and either a left or right child), calling the method on the root will not satisfy the first condition (since at least one of the subtrees is non-empty), so it will recurse on both children. One of them is null, but the recursive call will still dereference that null pointer to evaluate the if.
A correct solution is the one posted by Hans here. At any rate, you have to choose what your method's invariants are: either you allow calls where the argument is null and handle that gracefully, or you require the argument to be non-null and guarantee that the method is never called with null pointers.
The first choice is safer if you do not control all entry points (the method is public, as in your code), since you cannot guarantee that external code will not pass null pointers. The second choice (changing the signature to a reference and making the method a member of the tree class) could be cleaner (or not) if you can control all entry points.
The height of the tree doesn't change with the traversal. It remains constant. It's the sequence of the nodes that change depending on the traversal.
Definitions from wikipedia.
Preorder (depth-first):
Visit the root.
Traverse the left subtree.
Traverse the right subtree.
Inorder (symmetrical):
Traverse the left subtree.
Visit the root.
Traverse the right subtree.
Postorder:
Traverse the left subtree.
Traverse the right subtree.
Visit the root.
"Visit" in the definitions means "calculate height of node". Which in your case is either zero (both left and right are null) or 1 + combined height of children.
In your implementation the traversal order doesn't matter; it would give the same results. Can't really tell you anything more than that without a link to the source stating that postorder is preferable.
Here is an answer:
int Help::heightTree(node *nodeptr)
{
    if (!nodeptr)
        return 0;
    else
    {
        return 1 + max(heightTree(nodeptr->left), heightTree(nodeptr->right));
    }
}