1. Mathematical Induction

Principle of Mathematical Induction:

Let P be a property of positive integers such that:

  1. Basis step: P(1) is true, and

  2. Inductive step: if P(n) is true then P(n + 1) is also true.

Then P(n) is true for all positive integers.

The premise P(n) in the inductive step is called the induction hypothesis.

The validity of the Principle of Mathematical Induction is obvious. The basis step states that P(1) is true. Then the inductive step implies that P(2) is also true. By the inductive step again we see that P(3) is true, and so on. Consequently the property is true for all positive integers.

In the basis step we may replace 1 with some other integer m. Then the conclusion is that the property is true for every integer n greater than or equal to m.

Example: Prove that the sum of the first n odd positive integers is n², i.e., 1 + 3 + 5 + · · · + (2n − 1) = n².

Proof: Let S(n) = 1 + 3 + · · · + (2n − 1). Basis step: S(1) = 1 = 1². Inductive step: assume S(k) = k² for some k ≥ 1; then S(k + 1) = S(k) + (2k + 1) = k² + 2k + 1 = (k + 1)². By induction, the identity holds for every positive integer n.
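
The identity is also easy to spot-check numerically. A minimal Go sketch (the helper name sumFirstOdd is ours, purely for illustration):

    package main

    import "fmt"

    // sumFirstOdd returns 1 + 3 + ... + (2n - 1), the sum of the first n odd positive integers.
    func sumFirstOdd(n int) int {
        sum := 0
        for i := 1; i <= n; i++ {
            sum += 2*i - 1
        }
        return sum
    }

    func main() {
        // Compare the running sum against n² for a few small values of n.
        for n := 1; n <= 10; n++ {
            fmt.Printf("n=%2d  sum=%3d  n²=%3d\n", n, sumFirstOdd(n), n*n)
        }
    }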

2. Recursiveness

A definition in which the object being defined occurs in the definition itself is called a recursive definition.

For instance, consider the Fibonacci sequence

0, 1, 1, 2, 3, 5, 8, 13, . . .

It can be defined as the sequence whose first two terms are F₀ = 0 and F₁ = 1, and in which each subsequent term is the sum of the two previous ones:

Fₙ = Fₙ₋₁ + Fₙ₋₂ (for n ≥ 2).
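
This recursive definition translates directly into code. A minimal Go sketch (exponential-time, shown only to mirror the definition, not as an efficient implementation):

    package fibonacci

    // Fib mirrors the recursive definition: F(0) = 0, F(1) = 1,
    // and F(n) = F(n-1) + F(n-2) for n >= 2.
    func Fib(n int) int {
        if n < 2 { // basis: F(0) = 0 and F(1) = 1
            return n
        }
        return Fib(n-1) + Fib(n-2) // recursive step
    }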

Other examples:

Factorial:

  1. 0! = 1

  2. n! = n · (n − 1)! (n ≥ 1)

Power:

  1. a⁰ = 1

  2. aⁿ = aⁿ⁻¹ · a (n ≥ 1)

In all these examples we have:

  1. A basis, where the function is explicitly evaluated for one or more values of its argument.

  2. A recursive step, stating how to compute the function from its previous values.
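
In Go, the basis becomes a base case and the recursive step becomes a recursive call. A sketch for the factorial and power definitions above (the function names are ours):

    package recdef

    // Factorial follows the definition: 0! = 1 and n! = n * (n-1)! for n >= 1.
    func Factorial(n int) int {
        if n == 0 { // basis
            return 1
        }
        return n * Factorial(n-1) // recursive step
    }

    // Power follows the definition: a^0 = 1 and a^n = a^(n-1) * a for n >= 1.
    func Power(a, n int) int {
        if n == 0 { // basis
            return 1
        }
        return Power(a, n-1) * a // recursive step
    }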

3. Divide and Conquer

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems.

These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recursion:

  • Divide the problem into a number of subproblems that are smaller instances of the same problem.

  • Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

  • Combine the solutions to the subproblems into the solution for the original problem.

    MERGE-SORT(A, p, r)
      if p < r
        q = ⌊(p + r) / 2⌋
        MERGE-SORT(A, p, q)
        MERGE-SORT(A, q + 1, r)
        MERGE(A, p, q, r)

    MERGE(A, p, q, r)
      n1 = q - p + 1
      n2 = r - q
      let L[1..n1 + 1] and R[1..n2 + 1] be new arrays
      for i = 1 to n1
          L[i] = A[p + i - 1]
      for j = 1 to n2
          R[j] = A[q + j]
      L[n1 + 1] = ∞
      R[n2 + 1] = ∞
      i = 1
      j = 1
      for k = p to r
          if L[i] ≤ R[j]
              A[k] = L[i]
              i = i + 1
          else A[k] = R[j]
              j = j + 1
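
For comparison with the Go code later in these notes, here is a sketch of the same algorithm in Go. It uses 0-based, half-open ranges [p, r) and explicit copying instead of the 1-based indices and ∞ sentinels of the pseudocode above; it is an illustration, not the book's exact formulation:

    package mergesort

    // MergeSort sorts the half-open range a[p:r] by divide and conquer.
    func MergeSort(a []int, p, r int) {
        if r-p <= 1 { // zero or one element: already sorted
            return
        }
        q := (p + r) / 2   // divide
        MergeSort(a, p, q) // conquer the left half
        MergeSort(a, q, r) // conquer the right half
        merge(a, p, q, r)  // combine
    }

    // merge combines the sorted ranges a[p:q] and a[q:r] into one sorted range a[p:r].
    func merge(a []int, p, q, r int) {
        left := append([]int(nil), a[p:q]...)  // copy of the left half
        right := append([]int(nil), a[q:r]...) // copy of the right half
        i, j, k := 0, 0, p
        for i < len(left) && j < len(right) {
            if left[i] <= right[j] {
                a[k] = left[i]
                i++
            } else {
                a[k] = right[j]
                j++
            }
            k++
        }
        for i < len(left) { // copy any leftovers (no sentinels needed)
            a[k] = left[i]
            i++
            k++
        }
        for j < len(right) {
            a[k] = right[j]
            j++
            k++
        }
    }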

3.1. Analyzing divide-and-conquer algorithms

When an algorithm contains a recursive call to itself, we can often describe its running time by a recurrence equation or recurrence, which describes the overall running time on a problem of size n in terms of the running time on smaller inputs. We can then use mathematical tools to solve the recurrence and provide bounds on the performance of the algorithm.

A recurrence for the running time of a divide-and-conquer algorithm falls out from the three steps of the basic paradigm.

  • We let T(n) be the running time on a problem of size n. If the problem size is small enough, say n ≤ c for some constant c, the straightforward solution takes constant time, which we write as Θ(1).

  • Suppose that our division of the problem yields a subproblems, each of which is 1/b the size of the original.

    For merge sort, both a and b are 2, but we shall see many divide-and-conquer algorithms in which a ≠ b.

    It takes time T(n/b) to solve one subproblem of size n/b, and so it takes time a·T(n/b) to solve a of them.

  • If we take D(n) time to divide the problem into subproblems and C(n) time to combine the solutions to the subproblems into the solution to the original problem, we get the recurrence

    T(n) =  Θ(1)                       if n ≤ c,
            a·T(n/b) + D(n) + C(n)     otherwise.

3.2. The master method

The master method provides a “cookbook” method for solving recurrences of the form

T(n) = a·T(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.

For merge sort, we see that, roughly:

T(n) = 2T(n/2) + n

Replacing n with n/2 we have T(n/2) = 2T(n/4) + n/2, hence:

T(n) = 2T(n/2) + n = 2(2T(n/4) + n/2) + n = 4T(n/4) + 2n

Repeating k times we get:

T(n) = 2ᵏT(n/2ᵏ) + k·n

So for k = log₂n we have:

T(n) = n·T(1) + n·log₂n = Θ(n lg n)
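
The unrolling can be checked numerically. A small Go sketch (assuming n is a power of two and taking T(1) = 1) evaluates the recurrence directly and compares it with the closed form n·T(1) + n·log₂n:

    package main

    import (
        "fmt"
        "math"
    )

    // T evaluates the merge-sort recurrence T(n) = 2T(n/2) + n with T(1) = 1,
    // assuming n is a power of two so the halving is exact.
    func T(n int) int {
        if n == 1 {
            return 1
        }
        return 2*T(n/2) + n
    }

    func main() {
        for n := 1; n <= 1024; n *= 2 {
            closed := n + n*int(math.Log2(float64(n))) // n·T(1) + n·log₂n
            fmt.Printf("n=%4d  recurrence=%6d  closed form=%6d\n", n, T(n), closed)
        }
    }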

3.3. The maximum-subarray problem

// 53. Maximum Subarray
// Medium
//
// Given an integer array nums, find the contiguous subarray (containing at least one number) which
//   has the largest sum and return its sum.
//
// A subarray is a contiguous part of an array.
//
// Example 1:
//
// Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
// Output: 6
// Explanation: [4,-1,2,1] has the largest sum = 6.
//
// Example 2:
//
// Input: nums = [1]
// Output: 1
//
// Example 3:
//
// Input: nums = [5,4,-1,7,8]
// Output: 23
//
//
//
// Constraints:
//
//     1 <= nums.length <= 10^5
//     -10^4 <= nums[i] <= 10^4
//
//
//
// Follow up: If you have figured out the O(n) solution, try coding another solution
//    using the divide and conquer approach, which is more subtle.

package maxSubArray

import (
	"math"
)

// divide-and-conquer: the maximum subarray of nums[low:high] either lies entirely
// in the left half nums[low:mid], lies entirely in the right half nums[mid:high],
// or crosses the midpoint; the answer is the best of the three.
func maxSubArray(nums []int) int {
	// findMaxCrossSubArray returns the largest sum of a subarray that crosses the
	// split point, i.e. one containing both nums[mid-1] and nums[mid].
	var findMaxCrossSubArray func(nums []int, low, mid, high int) int
	findMaxCrossSubArray = func(nums []int, low, mid, high int) int {
		leftMax := math.MinInt
		sum := 0
		for i := mid - 1; i >= low; i-- {
			sum += nums[i]
			if leftMax < sum {
				leftMax = sum
			}
		}

		sum = 0
		rightMax := math.MinInt
		for i := mid; i < high; i++ {
			sum += nums[i]
			if rightMax < sum {
				rightMax = sum
			}
		}

		return leftMax + rightMax
	}

	// findMaxSubArray returns the largest subarray sum within the half-open range nums[low:high].
	var findMaxSubArray func(nums []int, low, high int) int
	findMaxSubArray = func(nums []int, low, high int) int {
		if high-low <= 1 { // bottom-out, base-case, only one number, O(1)
			return nums[low]
		}

		mid := (low + high) / 2

		left := findMaxSubArray(nums, low, mid)             // T(n/2)
		cross := findMaxCrossSubArray(nums, low, mid, high) // O(n), n = high - low
		right := findMaxSubArray(nums, mid, high)           // T(n/2)
		// fmt.Println(left, cross, right)
		if left >= right && left >= cross { // O(1)
			return left
		} else if right >= left && right >= cross {
			return right
		}
		return cross
	}

	// T(n) = O(1) + 2T(n/2) + O(n) + O(1) = 2T(n/2) + O(n) => O(nlgn), n > 1
	// T(n) = O(1), n == 1
	return findMaxSubArray(nums, 0, len(nums))
}

// brute-force O(n^2)
// func maxSubArray(nums []int) int {
// 	max := nums[0]
// 	for i := 0; i < len(nums); i++ {
// 		sum := 0
// 		for j := i; j < len(nums); j++ {
// 			sum += nums[j]
// 			if max < sum {
// 				max = sum
// 			}
// 		}
// 	}
// 	return max
// }
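
// The O(n) solution mentioned in the follow-up is Kadane's algorithm: scan once,
// keeping the best sum of a subarray that ends at the current index. A sketch,
// commented out like the brute force above so the package still compiles:
// func maxSubArray(nums []int) int {
// 	best := nums[0]   // best sum of a subarray ending at the current index
// 	answer := nums[0] // best sum seen so far
// 	for i := 1; i < len(nums); i++ {
// 		if best < 0 {
// 			best = nums[i] // starting fresh at i beats extending a negative prefix
// 		} else {
// 			best += nums[i] // extend the best subarray ending at i-1
// 		}
// 		if answer < best {
// 			answer = best
// 		}
// 	}
// 	return answer
// }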

4. References