I’ve been studying binary trees lately and noticed that different resources define height and depth differently. This is really confusing me.
What I understand so far
Height = the number of edges on the longest path from a node down to a leaf
Depth = the number of edges from a node up to the root
//
//        A          (A) level = 1, height = 3, depth = 0
//       / \
//      B   C        (C) level = 2, height = 0, depth = 1
//     / \
//    D   E          (E) level = 3, height = 1, depth = 2
//       / \
//      F   G        (F, G) level = 4, height = 0, depth = 3
//
I got these definitions from online tutorials, but I’m still confused about one thing. When there’s only the root node by itself, what should its height be? Some sources say it’s 1, others say 0. Can someone explain which definition is correct and why?
You've got the definitions right. This height ambiguity trips up a lot of developers - myself included when I started working with tree algorithms. The edge-based definition (single root has height 0) is by far the more common one in CS literature and standard algorithm texts, and it's consistent: both height and depth count edges, just in opposite directions. The node-based definition (single root has height 1) does show up, for example in some competitive programming material - different communities developed their own conventions over time. Edge-based tends to work better for most tree operations - recursive algorithms, balance properties, that sort of thing. AVL trees are a good example: balance factors are simpler to reason about when you count edges. Stick with edge-based unless you're working with existing code that does it differently.
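For what it's worth, here's a minimal sketch of the edge-based version (the Node class and method names are just illustrative, not from any particular library). A null child contributes -1, so a lone node ends up with height 0, and the AVL-style balance factor falls out directly:

// Minimal sketch of the edge-based convention (illustrative names).
class Node {
    int value;
    Node left, right;
    Node(int value) { this.value = value; }
}

class TreeHeight {
    // Height counts edges: empty subtree is -1, a lone node is 0.
    static int height(Node node) {
        if (node == null) return -1;
        return 1 + Math.max(height(node.left), height(node.right));
    }

    // AVL-style balance factor follows directly from edge counting.
    static int balanceFactor(Node node) {
        return height(node.left) - height(node.right);
    }

    public static void main(String[] args) {
        Node root = new Node(1);          // a single root node
        System.out.println(height(root)); // prints 0 under the edge-based convention
    }
}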
Yeah, this drove me nuts when I was learning data structures too. Your definitions look good, though. The root height thing is mostly historical - different textbooks started counting at different points and everyone stuck with their choice. I use height = 0 for a single root since it extends naturally to the empty tree (height -1). Most interview questions expect the edge-counting convention anyway, so you'll avoid confusion later.
The confusion about root node height comes down to counting nodes versus counting edges. Both conventions are valid - it just depends on context.
Most CS textbooks and academic work define height as the number of edges on the longest downward path, so a single root has height 0. That's consistent with depth, which also counts edges from the root.
But some programming contexts and algorithms count nodes instead, giving a single root height 1. The key is staying consistent within your system or codebase.
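To make the difference concrete, here's a sketch of both counting styles side by side (assuming the same illustrative Node class as the sketch in the earlier reply; only the base case changes):

// Both conventions side by side.
class HeightConventions {
    // Edge-counting: empty tree -> -1, single node -> 0
    static int heightEdges(Node node) {
        if (node == null) return -1;
        return 1 + Math.max(heightEdges(node.left), heightEdges(node.right));
    }

    // Node-counting: empty tree -> 0, single node -> 1
    static int heightNodes(Node node) {
        if (node == null) return 0;
        return 1 + Math.max(heightNodes(node.left), heightNodes(node.right));
    }
    // For any non-empty tree, heightNodes(root) == heightEdges(root) + 1.
}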
I’ve seen this cause bugs when teammates assumed different conventions. Always check the docs or existing code comments to see which one’s being used. When you’re writing your own code, document which definition you’re following.