Introduction
In this lecture, we extend the concept of algorithm analysis by learning how to mathematically derive running time. You’ll understand how input size affects efficiency, how to form summations from loops, and how to classify algorithms by order of growth. Several worked examples—from constant to cubic time—show how theoretical analysis translates into practical performance.
Table of Contents
- What to Analyze and Why
- Cases to Consider: Best, Average, Worst
- Mathematical Background for Analysis
  - Logarithms in Algorithm Analysis
  - Summation Formulas and Loop Mapping
- Steps for Mathematical Analysis of Iterative Algorithms
- Measuring Input Size and Basic Operations
- Worked Examples of Algorithm Complexity
  - Example 1 – Swapping Variables (O(1))
  - Example 2 – Division Operation
  - Example 3 – Binary Search (O(log n))
  - Example 4 – Linear and Quadratic Loops
  - Example 5 – Selection Sort & Bubble Sort
  - Example 6 – Matrix Multiplication (Cubic)
- Basic Efficiency Classes
- How to Apply Complexity Theorems
- Advantages and Limitations of Asymptotic Analysis
- Conclusion
What to Analyze and Why
Algorithm efficiency depends not only on input size n, but also on input structure and ordering. For example, searching for a number in an array performs differently depending on whether the element is at the beginning, middle, or not present at all. Thus, we must consider different input cases.
Cases to Consider – Best, Average, Worst
- Best Case, C_best(n): the minimum number of basic operations over all inputs of size n.
- Worst Case, C_worst(n): the maximum number of basic operations over all inputs of size n.
- Average Case, C_avg(n): the expected number of basic operations over all inputs of size n, computed under an assumed probability distribution of inputs.
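The three cases can be made concrete with sequential search. The sketch below (not from the lecture; the function name is illustrative) counts key comparisons, which is exactly the quantity C(n) defined above: 1 comparison in the best case (key first), n in the worst case (key absent).

```c
#include <stddef.h>

/* Sequential search that returns the number of key comparisons made.
   Best case: key at index 0 -> 1 comparison.
   Worst case: key absent -> n comparisons. */
size_t linear_search_comparisons(const int *a, size_t n, int key) {
    size_t comparisons = 0;
    for (size_t i = 0; i < n; i++) {
        comparisons++;              /* basic operation: key comparison */
        if (a[i] == key) break;
    }
    return comparisons;
}
```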
Mathematical Background for Analysis
Logarithms in Algorithm Analysis
Logarithms appear in divide-and-conquer or halving algorithms (e.g., Binary Search).
- log₁₀(1000) = 3, since 10³ = 1000
- log₂(8) = 3, since 2³ = 8
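The connection between halving and log₂ can be verified directly. As a small sketch (the function name is an assumption, not lecture code), counting how many times n can be halved before reaching 1 yields ⌊log₂ n⌋:

```c
/* Count how many times n can be halved before reaching 1.
   The result equals floor(log2(n)), which is why algorithms that
   repeatedly halve their input run in O(log n). */
unsigned halvings(unsigned n) {
    unsigned count = 0;
    while (n > 1) {
        n /= 2;
        count++;
    }
    return count;
}
```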
Summation Formulas and Loop Mapping
Each iterative loop maps to a summation:
- Loop index → summation index
- Start and end values → lower and upper limits
- Nested loops → nested summations

Common summations:

∑ᵢ₌₁ⁿ 1 = n,  ∑ᵢ₌₁ⁿ i = n(n+1)/2,  ∑ᵢ₌₁ⁿ i² = n(n+1)(2n+1)/6
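The closed-form formulas can be checked by brute force. This sketch (helper names are illustrative, not part of the lecture) computes the sums with explicit loops so they can be compared against the formulas:

```c
/* Brute-force versions of the summations that loops map onto:
   sum_i(n)  computes 1 + 2 + ... + n        (formula: n(n+1)/2)
   sum_i2(n) computes 1 + 4 + ... + n*n      (formula: n(n+1)(2n+1)/6) */
unsigned long sum_i(unsigned long n) {
    unsigned long s = 0;
    for (unsigned long i = 1; i <= n; i++) s += i;
    return s;
}

unsigned long sum_i2(unsigned long n) {
    unsigned long s = 0;
    for (unsigned long i = 1; i <= n; i++) s += i * i;
    return s;
}
```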
Steps for Mathematical Analysis of Iterative Algorithms
- Decide on the parameter n (input size).
- Identify the basic operation.
- Determine best, average, and worst cases.
- Set up summations for C(n).
- Simplify using known formulas.
- Determine the order of growth (O, Θ, Ω).
Measuring Input Size and Basic Operations
| Problem | Input Size | Basic Operation |
|---|---|---|
| Searching a list | Number of items (n) | Key comparison |
| Matrix multiplication | Matrix dimensions | Multiplication |
| Graph traversal | Vertices / Edges | Visiting nodes |
| Polynomial evaluation | Polynomial degree | Multiplication |
Worked Examples of Algorithm Complexity
Example 1 – Swapping Variables (O(1))

```c
temp = x;
x = y;
y = temp;
```

Total operations = 3, a constant independent of input size, so T(n) = O(1).
Example 2 – Division Operation
Input size: the number of digits, n. Basic operation: division (/).
The division is executed once per digit, so T(n) = Θ(n).
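A concrete instance of this pattern is counting the decimal digits of a number by repeated division. This sketch is illustrative (the function name is an assumption): the loop performs one division per digit, so its running time is Θ(n) in the digit count n.

```c
/* Count decimal digits by repeated division.
   One division per digit -> T(n) = Theta(n), where n = digit count. */
int digit_count(unsigned long x) {
    int digits = 1;
    while (x >= 10) {
        x /= 10;                    /* basic operation: division */
        digits++;
    }
    return digits;
}
```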
Example 3 – Binary Search (O(log n))
At each iteration, the search range is halved:
n → n/2 → n/4 → … → 1
The number of iterations is about log₂ n, so T(n) = O(log n).
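A standard iterative implementation makes the halving visible. The sketch below assumes a sorted ascending array; the loop body shrinks `hi − lo` by at least half each iteration, which is where the O(log n) bound comes from.

```c
/* Binary search on a sorted ascending array.
   Each iteration halves the remaining range [lo, hi], so the loop
   runs at most about log2(n) + 1 times.
   Returns the index of key, or -1 if key is absent. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}
```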
Example 4 – Linear and Quadratic Loops
Linear, O(n):

```c
for (i = 0; i < n; i++) count++;
```

Quadratic, O(n²): two nested loops execute n × n = n² iterations.
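The quadratic case can be verified by counting iterations directly. In this sketch (the function name is illustrative), the counter ends at exactly n², matching the nested-summation analysis ∑ᵢ∑ⱼ 1 = n²:

```c
/* Count the iterations of two nested loops of n each.
   The basic operation executes n * n times -> O(n^2). */
unsigned long quadratic_count(unsigned n) {
    unsigned long count = 0;
    for (unsigned i = 0; i < n; i++)
        for (unsigned j = 0; j < n; j++)
            count++;                /* basic operation */
    return count;
}
```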
Example 5 – Selection Sort & Bubble Sort
Both use nested loops for pairwise comparisons.
- Selection Sort: T(n) = n(n−1)/2 = O(n²)
- Bubble Sort: T(n) ≈ n²/2 = O(n²)
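Instrumenting selection sort shows where n(n−1)/2 comes from: the inner loop makes (n−1) + (n−2) + … + 1 comparisons. The sketch below (an illustrative version, with the comparison counter added for analysis) returns that count alongside sorting the array.

```c
/* Selection sort with a key-comparison counter.
   Inner-loop comparisons: (n-1) + (n-2) + ... + 1 = n(n-1)/2 -> O(n^2). */
unsigned long selection_sort(int *a, int n) {
    unsigned long comparisons = 0;
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++;          /* basic operation: key comparison */
            if (a[j] < a[min]) min = j;
        }
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
    }
    return comparisons;
}
```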
Example 6 – Matrix Multiplication (Cubic)
Three nested loops →
n×n×n = n³ operations → T(n)=O(n³).
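The classic triple-loop implementation makes the cubic count explicit. This sketch uses flattened row-major n × n matrices (a layout choice of this example, not specified in the lecture); the innermost multiplication runs n × n × n times.

```c
/* Classic matrix multiplication C = A * B for n x n matrices stored
   row-major in flat arrays. The innermost multiply-add executes
   n * n * n times -> T(n) = O(n^3). */
void matmul(const double *A, const double *B, double *C, int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];  /* basic operation */
            C[i * n + j] = sum;
        }
}
```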
Basic Efficiency Classes
| Class | Notation | Example |
|---|---|---|
| Constant | O(1) | Swap function |
| Logarithmic | O(log n) | Binary Search |
| Linear | O(n) | Simple Loop |
| Quadratic | O(n²) | Bubble / Selection Sort |
| Cubic | O(n³) | Matrix Multiplication |
How to Apply Complexity Theorems
- Theorem 1: If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
- Theorem 2: If f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)), then f₁(n) × f₂(n) ∈ O(g₁(n) × g₂(n)).

Example: O(n) + O(1) = O(n); O(n) × O(log n) = O(n log n).
Advantages and Limitations of Asymptotic Analysis
Advantages:
- Hardware-independent performance comparison.
- Predicts scalability for large inputs.
- Simple mathematical estimation.
Limitations:
- Ignores constant factors (1000n log n and 2n log n are treated the same).
- Can mislead when small inputs or cache effects dominate.
- Not a precise measure of real runtime differences.
Conclusion
This lecture demonstrated how to mathematically analyze iterative algorithms using summations and growth rates. By identifying the input size and the basic operation, and applying asymptotic notation, we can classify algorithms from O(1) to O(n³). These skills are essential for evaluating and improving algorithm efficiency in any computing domain.

Lecture #07