Introduction to Big O Notation
When we talk about the "efficiency" of an algorithm, what exactly do we mean? How do we quantify it? This is where Big O Notation comes in. Big O Notation is a mathematical way to describe how the running time or space requirements of an algorithm grow as the input size grows.
It's a way to talk about the scalability of an algorithm – how well it performs as the amount of data it processes increases.
Why Do We Need Big O Notation?
Consider two algorithms that solve the same problem, for example, sorting a list of numbers.
- Algorithm A might take 100 * n operations.
- Algorithm B might take n * n operations (or n^2).
Which one is better?
- If n = 10 (a small input), Algorithm A takes 1,000 operations and Algorithm B takes 100. Algorithm B is faster.
- If n = 1000 (a large input), Algorithm A takes 100,000 operations and Algorithm B takes 1,000,000. Algorithm A is now significantly faster, as the sketch below confirms.
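Here is a minimal Python sketch that reproduces this comparison. The functions cost_a and cost_b are hypothetical stand-ins for the two algorithms' operation counts, not real implementations:

```python
def cost_a(n):
    return 100 * n   # Algorithm A: 100 * n operations

def cost_b(n):
    return n * n     # Algorithm B: n^2 operations

# A and B break even at n = 100; past that, B falls further and further behind.
for n in (10, 100, 1000):
    print(f"n={n:>4}: A={cost_a(n):>9,}  B={cost_b(n):>9,}")
```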
Big O Notation helps us abstract away the constant factors and lower-order terms to focus on the dominant term that dictates the growth rate for large inputs. It allows us to compare algorithms in a standardized, high-level way, regardless of the specific hardware or programming language.
What Does Big O Notation Represent?
Big O Notation represents an upper bound on an algorithm's growth rate. In practice it is most often used to describe the worst-case scenario: the maximum amount of time or space an algorithm will take.
Formally, we say that an algorithm is O(g(n)) if there exist positive constants c and n0 such that for all n >= n0, the running time f(n) is less than or equal to c * g(n).
(i.e., f(n) <= c * g(n) for all n >= n0)
Where:
- f(n) is the actual running time (or space) of the algorithm.
- n is the size of the input.
- g(n) is a function that expresses the growth rate (e.g., n, n^2, log n).
- c is a positive constant.
- n0 is a threshold input size after which the inequality holds true.
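To make the definition concrete, take a hypothetical cost function f(n) = 2n + 3. It is O(n), with witnesses c = 3 and n0 = 3, because 2n + 3 <= 3n whenever n >= 3. The Python sketch below spot-checks this over a finite range (the inequality itself holds for all n >= 3 by simple algebra):

```python
def f(n):            # hypothetical actual running time: f(n) = 2n + 3
    return 2 * n + 3

def g(n):            # candidate growth rate: g(n) = n, i.e. the claim that f is O(n)
    return n

c, n0 = 3, 3         # witness constants from the definition

# Spot-check f(n) <= c * g(n) for n0 <= n < 10_000.
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) <= 3 * g(n) for every tested n >= 3")
```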
In simpler terms, Big O describes how the runtime or space requirement scales with the input size n when n becomes very large. We're interested in the "big picture" of how performance degrades as the problem gets bigger.
Ignoring Constants and Lower-Order Terms
When analyzing complexity, Big O Notation simplifies expressions by:
- Dropping lower-order terms: For example, if an algorithm takes 3n^2 + 5n + 10 operations, then as n gets very large, n^2 dominates 5n and 10. So we only care about the n^2 term.
- Ignoring constant coefficients: If an algorithm takes 3n^2 operations, the 3 is a constant, so for Big O we simplify this to O(n^2). Constants depend on the specific implementation, hardware, and so on, while the fundamental growth rate (n^2) is what we want to capture.
So, an algorithm with f(n) = 3n^2 + 5n + 10 operations would be described as O(n^2).
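You can watch the lower-order terms fade numerically: as n grows, f(n)/n^2 approaches the leading coefficient 3, so 5n + 10 becomes irrelevant. One valid pair of witnesses for O(n^2) is c = 4 and n0 = 7, since 3n^2 + 5n + 10 <= 4n^2 once n >= 7. A quick Python check:

```python
def f(n):
    return 3 * n**2 + 5 * n + 10

# The ratio f(n) / n^2 tends to the leading coefficient 3 as n grows.
for n in (10, 100, 1_000, 10_000):
    print(f"n={n:>6}: f(n)/n^2 = {f(n) / n**2:.4f}")

# Spot-check the witnesses c = 4, n0 = 7 for f(n) = O(n^2).
assert all(f(n) <= 4 * n**2 for n in range(7, 10_000))
```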
Common Big O Notations (from fastest to slowest)
Here are the most common growth rates you'll encounter, ordered by their efficiency for large n (a few of them are sketched in code after the list):
- O(1) - Constant Time: The time/space taken does not depend on the input size n. E.g., accessing an array element by index.
- O(log n) - Logarithmic Time: The time/space grows very slowly with n. E.g., binary search.
- O(n) - Linear Time: The time/space grows proportionally to n. E.g., iterating through an array.
- O(n log n) - Linearithmic Time: A very common complexity for efficient sorting algorithms. E.g., Merge Sort, and Quick Sort on average.
- O(n^2) - Quadratic Time: The time/space grows with the square of n. E.g., nested loops, Bubble Sort.
- O(2^n) - Exponential Time: The time/space doubles with each unit increase in n. Impractical for anything but small n. E.g., solving the Traveling Salesperson Problem by brute force.
- O(n!) - Factorial Time: Grows even faster than exponential; practical only for tiny n. E.g., generating all permutations of a list.
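To connect these classes to real code, here are toy Python implementations of four of them. They are illustrative sketches, not library routines; binary_search assumes its input is already sorted:

```python
def first_element(items):
    # O(1): a single index access, regardless of len(items).
    return items[0]

def binary_search(sorted_items, target):
    # O(log n): each iteration halves the remaining search range.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def total(items):
    # O(n): visits each element exactly once.
    result = 0
    for x in items:
        result += x
    return result

def has_duplicate(items):
    # O(n^2): nested loops compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```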
Understanding Big O Notation is crucial for evaluating and comparing algorithms. It's the primary tool for predicting how an algorithm will perform under varying input sizes, allowing developers to make informed decisions about algorithm selection and optimization.

