Analyzing Time Complexity
Now that you understand what Big O Notation is, the next crucial step is learning how to analyze the time complexity of a given piece of code or an algorithm. This process involves counting the number of operations an algorithm performs as a function of its input size, n, and then simplifying that function to its Big O classification.
Remember, when analyzing time complexity, we primarily focus on:
- Dominant Operations: The operations that contribute most significantly to the total runtime as n grows.
- Worst-Case Scenario: We usually analyze worst-case performance, as it gives us an upper bound on how long the algorithm might take.
- Scalability: How the runtime changes as the input size n increases.
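
To make the counting process concrete, here is a small illustrative function (hypothetical, not drawn from any library) annotated with an approximate operation count; loop-management overhead is omitted for simplicity:

```javascript
function doubleValues(arr) {
  const result = [];                     // 1 operation
  for (let i = 0; i < arr.length; i++) { // loop body runs n times
    result.push(arr[i] * 2);             // n operations
  }
  return result;                         // 1 operation
}
// Roughly n + 2 operations in total; dropping constants and lower-order
// terms gives O(n).
```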
Common Code Constructs and Their Time Complexity
Let's look at how different code constructs translate into Big O time complexities.
1. O(1) - Constant Time
Operations that take a fixed amount of time, regardless of the input size.
```javascript
function accessElement(arr, index) {
  return arr[index]; // Accessing an array element by index
}

function add(a, b) {
  return a + b; // Simple arithmetic operation
}
```
- Reasoning: These operations involve a fixed number of steps. Whether the array has 10 elements or 1 million, arr[index] takes the same amount of time.
2. O(n) - Linear Time
Operations where the time taken grows proportionally to the input size n. This is typical for single loops that iterate over the entire input.
```javascript
function sumArray(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) { // Loop runs n times (where n = arr.length)
    sum += arr[i];
  }
  return sum;
}

function findElement(arr, target) {
  for (let i = 0; i < arr.length; i++) { // In the worst case, the loop runs n times
    if (arr[i] === target) {
      return true;
    }
  }
  return false;
}
```
- Reasoning: The for loop executes n times. Inside the loop, each operation is O(1). So, n * O(1) = O(n).
3. O(n^2) - Quadratic Time
Operations where the time taken grows proportionally to the square of the input size n. This is characteristic of nested loops where each loop iterates over the input.
```javascript
function printAllPairs(arr) {
  for (let i = 0; i < arr.length; i++) {   // Outer loop runs n times
    for (let j = 0; j < arr.length; j++) { // Inner loop runs n times for each outer iteration
      console.log(arr[i], arr[j]);
    }
  }
}
```
- Reasoning: The outer loop runs n times. For each iteration of the outer loop, the inner loop also runs n times. This results in n * n = n^2 operations.
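
A variation worth recognizing: even when the inner loop starts at i + 1 (so it does less work each time), the total is n(n-1)/2 iterations, which still simplifies to O(n^2). A minimal sketch:

```javascript
// The inner loop performs n-1, n-2, ..., 1 iterations across the outer
// loop: n(n-1)/2 total. Dropping constants leaves O(n^2).
function printUniquePairs(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
```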
4. O(n^3) - Cubic Time (and higher powers)
Similar to quadratic, but with three nested loops.
```javascript
function printAllTriplets(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      for (let k = 0; k < arr.length; k++) {
        console.log(arr[i], arr[j], arr[k]);
      }
    }
  }
}
```
- Reasoning: n * n * n = n^3 operations. Generalizing, k nested loops result in O(n^k).
5. O(log n) - Logarithmic Time
Operations where the input size is reduced by a constant factor (usually halved) in each step.
```javascript
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    let mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) {
      return mid;
    } else if (sortedArr[mid] < target) {
      low = mid + 1;  // Halving the search space
    } else {
      high = mid - 1; // Halving the search space
    }
  }
  return -1;
}
```
- Reasoning: In each iteration of the while loop, the search space is effectively halved. This is the definition of logarithmic growth.
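
As a quick illustration using the binarySearch above: an array of one million elements is resolved in at most about log2(1,000,000) ≈ 20 iterations.

```javascript
// Build a sorted array of one million integers: 0, 1, ..., 999999.
const sorted = Array.from({ length: 1000000 }, (_, i) => i);

console.log(binarySearch(sorted, 765432)); // 765432, found in ~20 steps
console.log(binarySearch(sorted, -5));     // -1, ruled out in ~20 steps
```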
6. O(n log n) - Linearithmic Time
Often seen in efficient sorting algorithms. It typically combines linear and logarithmic work: either O(n) work repeated across log n levels of dividing the problem, or an O(log n) operation performed n times.
```javascript
// Conceptual example (not a full implementation)
function mergeSort(arr) {
  if (arr.length <= 1) {
    return arr;
  }
  const mid = Math.floor(arr.length / 2);
  const left = arr.slice(0, mid);
  const right = arr.slice(mid);
  // The recursive calls create a call tree of depth log n.
  // At each level of the call tree, the total work for merging is O(n).
  return merge(mergeSort(left), mergeSort(right));
}
// The 'merge' function (defined separately) takes O(n) time.
// Thus, total time is O(n log n).
```
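
For completeness, here is one minimal sketch of the merge helper referenced above (an assumption about its shape, not the only way to write it); it visits each element once, so it runs in O(n):

```javascript
// Merge two sorted arrays by repeatedly taking the smaller front element.
// Each element is visited exactly once, so merging is O(n).
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  // Append whatever remains in either half.
  return result.concat(left.slice(i)).concat(right.slice(j));
}
```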
- Reasoning: Algorithms like Merge Sort divide the array (log n levels of recursion) and then perform linear-time work (merging) at each level.
Rules for Calculating Time Complexity
When combining different complexities, follow these rules:
- Rule of Sum (Sequencing): If your algorithm performs operation A and then operation B (sequentially), its total time complexity is the sum of their complexities.
  - If A is O(f(n)) and B is O(g(n)), the total is O(f(n) + g(n)).
  - Always keep only the dominant term. E.g., an O(n) loop followed by an O(n^2) loop gives O(n + n^2) = O(n^2).

```javascript
function exampleSum(arr) {
  // O(n) part
  for (let i = 0; i < arr.length; i++) { /* ... */ }

  // O(1) part
  console.log("Done");

  // O(n^2) part
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) { /* ... */ }
  }

  // Total: O(n + 1 + n^2) = O(n^2)
}
```
- Rule of Product (Nesting): If operation A is performed for each step of operation B (nesting), its total time complexity is the product of their complexities.
  - If A is O(f(n)) and B is O(g(n)), the total is O(f(n) * g(n)).
  - Example: a loop running n times with an O(log n) operation inside. Total: O(n log n).

```javascript
function searchInEachRow(matrix, target) {
  for (let i = 0; i < matrix.length; i++) { // Outer loop: n iterations (n rows)
    // Assume binarySearch is O(log m), where m is the number of columns in a row.
    // If m == n, the inner operation is O(log n).
    binarySearch(matrix[i], target); // Inner operation: O(log n)
  }
  // Total: O(n * log n)
}
```
- Ignoring Constants: Any constant factor is dropped. O(2n) is O(n), and O(500) is O(1).
- Worst-Case Focus: Unless specified otherwise, complexity analysis refers to the worst-case scenario (e.g., searching for an element that is not present in a list).
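
As a final, hypothetical sketch combining these rules (the function and its names are purely illustrative):

```javascript
function analyzeExample(arr) {
  // Step 1: a single loop -> O(n)
  let max = -Infinity;
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] > max) max = arr[i];
  }

  // Step 2: nested loops -> Rule of Product gives O(n * n) = O(n^2)
  let pairCount = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      pairCount++;
    }
  }

  // Rule of Sum across the two steps: O(n) + O(n^2) = O(n^2),
  // keeping only the dominant term and ignoring constants.
  return { max, pairCount };
}
```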
By systematically applying these rules and recognizing common patterns, you can effectively analyze the time complexity of most algorithms. This skill is foundational for designing efficient software and passing technical interviews.

