Mathematical Analysis of Iterative Algorithms and Complexity Examples

Introduction

In this lecture, we extend the concept of algorithm analysis by learning how to mathematically derive running time. You’ll understand how input size affects efficiency, how to form summations from loops, and how to classify algorithms by order of growth. Several worked examples—from constant to cubic time—show how theoretical analysis translates into practical performance.

Table of Contents

  1. What to Analyze and Why

  2. Cases to Consider: Best, Average, Worst

  3. Mathematical Background for Analysis

    • Logarithms in Algorithm Analysis

    • Summation Formulas and Loop Mapping

  4. Steps for Mathematical Analysis of Iterative Algorithms

  5. Measuring Input Size and Basic Operations

  6. Worked Examples of Algorithm Complexity

    • Example 1 – Swapping Variables (O(1))

    • Example 2 – Division Operation

    • Example 3 – Binary Search (O(log n))

    • Example 4 – Linear and Quadratic Loops

    • Example 5 – Selection Sort & Bubble Sort

    • Example 6 – Matrix Multiplication (Cubic)

  7. Basic Efficiency Classes

  8. How to Apply Complexity Theorems

  9. Advantages and Limitations of Asymptotic Analysis

  10. Conclusion

What to Analyze and Why

Algorithm efficiency depends not only on input size n, but also on input structure and ordering. For example, searching for a number in an array performs differently depending on whether the element is at the beginning, middle, or not present at all. Thus, we must consider different input cases.

Cases to Consider – Best, Average, Worst

  • Best Case (C₍best₎(n)): the minimum number of basic operations for any input of size n.

  • Worst Case (C₍worst₎(n)): the maximum number of basic operations for any input of size n.

  • Average Case (C₍avg₎(n)): the expected number of operations over all inputs of size n, computed under an assumed probability distribution of inputs.

Mathematical Background for Analysis

Logarithms in Algorithm Analysis

Logarithms appear in divide-and-conquer or halving algorithms (e.g., Binary Search).

  • log₁₀(1000) = 3, since 10³ = 1000

  • log₂(8) = 3, since 2³ = 8

Summation Formulas and Loop Mapping

Each iterative loop maps to a summation:

  • Loop index → summation index

  • Start & end values → lower & upper limits

  • Nested loops → nested summations

Common summations:

∑ᵢ₌₁ⁿ 1 = n,  ∑ᵢ₌₁ⁿ i = n(n+1)/2,  ∑ᵢ₌₁ⁿ i² = n(n+1)(2n+1)/6

Steps for Mathematical Analysis of Iterative Algorithms

  1. Decide the parameter n (input size).

  2. Identify the basic operation.

  3. Determine best, average, and worst cases.

  4. Set up summations for C(n).

  5. Simplify using known formulas.

  6. Determine the order of growth (O, Θ, Ω).

Measuring Input Size and Basic Operations

Problem                  Input Size             Basic Operation
Searching a list         Number of items (n)    Key comparison
Matrix multiplication    Matrix dimensions      Multiplication
Graph traversal          Vertices / edges       Visiting a node
Polynomial evaluation    Polynomial degree      Multiplication

Worked Examples of Algorithm Complexity

Example 1 – Swapping Variables (O(1))

temp = x;
x = y;
y = temp;

Total operations = 3 → T(n) = O(1).

Example 2 – Division Operation

Input size: the number of digits n; basic operation: division (/).
The division executes once per digit → T(n) = Θ(n).

Example 3 – Binary Search (O(log n))

At each iteration, input is halved:

N → N/2 → N/4 → … → 1


Number of iterations ≈ log₂ N → T(n) = O(log n).

Example 4 – Linear and Quadratic Loops

Linear (O(n)):

for (i = 0; i < n; i++) count++;

Quadratic (O(n²)): two nested loops → n × n = n² iterations.

Example 5 – Selection Sort & Bubble Sort

Both use nested loops for pairwise comparisons.

  • Selection Sort:

    T(n) = n(n−1)/2 = O(n²)

  • Bubble Sort:

    T(n) ≈ n²/2 = O(n²)

Example 6 – Matrix Multiplication (Cubic)

Three nested loops → n × n × n = n³ operations → T(n) = O(n³).

Basic Efficiency Classes

Class         Notation    Example
Constant      O(1)        Swap function
Logarithmic   O(log n)    Binary search
Linear        O(n)        Simple loop
Quadratic     O(n²)       Bubble / selection sort
Cubic         O(n³)       Matrix multiplication

How to Apply Complexity Theorems

  • Theorem 1: If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).

  • Theorem 2: If f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)), then f₁(n) · f₂(n) ∈ O(g₁(n) · g₂(n)).

Example: O(n)+O(1)=O(n); O(n)×O(log n)=O(n log n).

Advantages and Limitations of Asymptotic Analysis

Advantages:

  • Hardware-independent performance comparison.

  • Predicts scalability for large inputs.

  • Simple mathematical estimation.

Limitations:

  • Ignores constant factors (1000n log n vs 2n log n treated as same).

  • May mislead when small inputs or cache effects matter.

  • Not a perfect measure for real runtime differences.

Conclusion

This lecture demonstrated how to mathematically analyze iterative algorithms using summations and growth rates. By identifying input size, primitive operations, and using asymptotic notation, we can classify algorithms from O(1) to O(n³). These skills are essential for evaluating and improving algorithm efficiency in any computing domain.
