I’m struggling with Big O notation fundamentals and need help understanding two key concepts.
First Issue: Determining the constant value
When we have the inequality:
h(n) <= k·m(n) for all n >= n₀
I get confused about finding the constant k. Let me use an example:
Proving that 3n² = O(n³)
Here h(n) = 3n² and m(n) = n³, so:
3n² <= k × n³
To find k, I can rearrange this as k >= 3n²/n³ = 3/n. For large n, this approaches 0, so even k = 1 seems to work (once n >= 3).
But I’m confused about the process. How do I know which function to divide by? Is there a systematic approach?
Second Issue: What is n₀ and why do we need it?
I understand that n₀ is some threshold value where n >= n₀ must hold. But I don’t get its practical purpose.
Can’t we just find the constant k and be done? Why do we need this n₀ condition? When does it actually matter in proofs?
Any clear explanation would help me understand these concepts better.
k and n₀ work together to make your proof valid. When proving something is O(f(n)), you’re just looking for values that work - not the tightest possible bounds. For 3n² = O(n³), just pick k = 3 and n₀ = 1. Now you get 3n² ≤ 3n³ for all n ≥ 1. Way simpler than hunting for k = 1.

Here’s the thing: n₀ lets you ignore the small cases where your inequality breaks, but the bound still has to hold for every n past the threshold. Try proving n³ = O(n²) - it’s false, yet n³ ≤ k × n² does hold whenever n ≤ k, so checking only small values can fool you. The “for all n ≥ n₀” condition forces you to think about what happens as n gets big, not get stuck on edge cases like n = 0.5.

My approach? Simplify the inequality first, then pick reasonable values for both constants. Big O is about upper bounds for large inputs. You don’t need perfection - any valid (k, n₀) pair that proves your point works fine.
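You can sanity-check both claims numerically. Here’s a minimal Python sketch (the helper name `holds` is made up for illustration): it confirms 3n² ≤ 3n³ for sampled n ≥ 1, and shows that for n³ = O(n²) even a huge fixed k eventually fails.

```python
# Sketch: numerically check whether h(n) <= k*m(n) for sampled n >= n0.
# (A check over samples is evidence, not a proof - the proof is the algebra.)
def holds(h, m, k, n0, samples=range(1, 10_001)):
    return all(h(n) <= k * m(n) for n in samples if n >= n0)

# 3n² = O(n³) with k = 3, n0 = 1: holds on every sample.
assert holds(lambda n: 3 * n**2, lambda n: n**3, k=3, n0=1)

# n³ = O(n²) is false: n³ <= k*n² only survives while n <= k,
# so any fixed k (even 1000) fails once n is large enough.
assert not holds(lambda n: n**3, lambda n: n**2, k=1000, n0=1)
```

The second assertion is the whole point of the "for all n ≥ n₀" condition: k = 1000 looks fine up to n = 1000, then breaks forever after.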
Honestly, the easiest way I learned this was to stop looking for “optimal” values. For your example, just set k=3 and n₀=1. Then 3n² ≤ 3n³ works fine when n≥1. We need n₀ because weird stuff happens with small values - if n=0.1, then n³ < n², so the bound 3n² ≤ 3n³ fails. The threshold just means “ignore the small messy cases; we only care how the functions behave for large n.”
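To see the small-n breakage concretely (a quick sketch; n = 0.1 is just the value from the example above):

```python
n = 0.1
# For n < 1, n**3 < n**2, so the direction of the bound flips:
assert n**3 < n**2                     # 0.001 < 0.01
assert not (3 * n**2 <= 3 * n**3)      # the O(n³) bound fails here
# From n >= 1 onward the inequality holds, which is why n₀ = 1 suffices:
assert all(3 * m**2 <= 3 * m**3 for m in range(1, 100))
```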
The constant k is essentially the worst-case multiplier that ensures your inequality holds for larger inputs. It’s akin to finding a ‘ceiling’ that your function won’t exceed when compared to the benchmark function. For a systematic approach, divide h(n) by m(n) and find a k that bounds that ratio for all n ≥ n₀. In your case, 3n²/n³ = 3/n diminishes as n increases, so any positive constant will suffice if you pair it with a large enough n₀ - k=1 works, but only from n₀=3 onward (since 3/n ≤ 1 requires n ≥ 3), while k=3 works from n₀=1.

Regarding n₀, this threshold is crucial because Big O focuses on behavior as n tends towards infinity. Small values can introduce anomalies that invalidate your inequality. For instance, when proving n² = O(n³), the inequality n² ≤ 1×n³ fails for 0 < n < 1 (at n=0.5, 0.25 > 0.125) but holds for every n≥1, so n₀=1 is the natural choice. Without this, your inequality would have to remain valid for all values, complicating your analysis by needing to account for edge cases that say nothing about performance on larger datasets.
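The divide-and-bound recipe can be sketched numerically (the function name `ratio` is made up for illustration): compute h(n)/m(n), watch where it settles, and pick any k at or above it together with a matching n₀.

```python
def ratio(n):
    # h(n)/m(n) = 3n² / n³ = 3/n
    return 3 * n**2 / n**3

# The ratio shrinks as n grows, so it eventually drops below any k > 0:
assert ratio(1) == 3.0
assert ratio(3) == 1.0      # from n₀ = 3 onward, even k = 1 works
assert ratio(300) == 0.01

# Pairing k = 1 with n₀ = 3 gives a valid witness for 3n² = O(n³):
assert all(3 * n**2 <= 1 * n**3 for n in range(3, 1000))
```

Note the trade-off: a smaller k forces a larger n₀, and vice versa. Any pair on that curve proves the same Big O statement.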