
Growth of Functions

Functions are incarnations of algorithms. As these functions execute, they consume computational resources. Functions are often called procedures in computing, and the algorithm-function relationship gives rise to what is called procedural abstraction. Any procedure derived from a function/algorithm is independent and solves the intended problem in its own way. Several procedures with the same name could actually implement different algorithms, yet all arrive at the same solution. In this case, each of the procedures is called a procedural abstraction.

An example is a square root function. The square root of a number can be calculated in a variety of ways (one of these approaches is sketched after the list below):

  1. Newton’s method or using a binomial expansion of power 1/2
  2. Using fixed points
  3. Using the Egyptian Method or starting with a guess and improving it…
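
To make the guess-and-improve idea concrete, here is a minimal sketch of Newton's method in Python (the function name, starting guess, and tolerance are illustrative assumptions, not part of the text above):

  def newton_sqrt(x, tolerance=1e-9):
      """Approximate the square root of a non-negative x by repeatedly improving a guess."""
      guess = max(x, 1.0)                       # any positive starting guess works
      while abs(guess * guess - x) > tolerance:
          guess = (guess + x / guess) / 2.0     # Newton's update: average guess and x/guess
      return guess

  print(newton_sqrt(2))                         # roughly 1.4142135...

Any other procedure from the list above, say one based on fixed points, would return the same value; this is the procedural abstraction described earlier.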

All these algorithms arrive at the square root of a number, so we have a choice. And if we have a choice, then we need a scale of preference, and opportunity costs, to determine which of the algorithms to use. A closer look at the concept of a scale of preference reveals that we might choose the algorithm that uses less time to compute the result we are seeking. The opportunity cost, in turn, gives us a feel for what we lose if we choose one algorithm over the others. Ultimately, we need a proper understanding of the implications of using whichever algorithm we choose. How can we understand what we lose or gain from an algorithm?

The procedures can be formally analyzed to understand their consumption of resources. It is this formal understanding that helps us classify algorithms, choose which is better in which situation, communicate our ideas about algorithms, and know what and where to tweak for performance.

The run times of algorithms are investigated and expressed formally using a notation called asymptotic notation. We could call these investigations the Calculus of Algorithms or Algorithmic Calculus.

Note that differential and integral calculus are also tools that can be applied to understand the rates of change of various algorithms and to compute the total amount of resources an algorithm consumes.

The time taken by an algorithm to perform each step, summed over every step executed before it returns a result, is called the running time of the algorithm.

The running time is often expressed as a mathematical function. Therefore, we could apply differential calculus to the resulting run-time functions of various algorithms to determine their growth rates.

Similarly, the application of integral calculus as a tool for determining the total run time, total cost, average cost, total space, etc. of an algorithm is quite feasible.

Suppose there is a loop that runs n times and whose x-th pass uses x steps, each costing a constant c. The total cost of such a loop can be approximated by the integral of x from 0 to n.

That is, c × ∫₀ⁿ x dx = c × ½ × n²

Where c is a constant.
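
To see the approximation at work, here is a minimal sketch in Python, assuming a cost model in which the x-th pass of the loop performs x unit-cost steps; the exact sum c × n(n + 1)/2 and the integral estimate c × ½ × n² agree up to lower-order terms:

  def loop_cost(n, c=1):
      """Total cost of a loop whose x-th pass uses x steps, each costing c."""
      total = 0
      for x in range(1, n + 1):
          total += c * x                  # the x-th pass costs c × x
      return total

  n = 1000
  print(loop_cost(n))                     # exact sum: 500500
  print(0.5 * n ** 2)                     # integral estimate: 500000.0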

Asymptotic Notation

See the mathematical concepts of limits, sequences, and series for a deeper understanding of the analysis of algorithms.

Asymptotes

In Mathematics, an asymptote refers to a line or curve that a given curve approaches arbitrarily closely but never touches. Vertical asymptotes arise when dealing with rational functions, at the points where the denominator becomes zero. Vertical asymptotes can tell us more about the domain of a function.

Horizontal asymptotes are values, curves, or lines that another function approaches very closely without actually reaching them. That is, as the input of the function grows large without bound, the value the function approaches tends to be its horizontal asymptote. This is the same as the limit of the function as its input approaches infinity.

limₓ → ∞ f(x) = L

Interestingly, L could actually be a function. That is:

limₓ → ∞ f(x) = g(x)
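
For instance (an illustrative example, not from the text above): f(x) = 1/x has the horizontal asymptote y = 0, since limₓ → ∞ 1/x = 0, while f(x) = x + 1/x approaches the line g(x) = x, because the difference f(x) − g(x) = 1/x vanishes as x grows without bound.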

Asymptotic Notation

Now, in computer science, asymptotic notations are used to express the rate at which algorithms grow, by expressing the functions that the run times of these algorithms approach as their inputs grow large without bound.

Suppose f(n) is the run time of an algorithm with input size n, and suppose there is a set of some other functions, collectively called g(n). The asymptotic notation of the run time of f tells us how f approaches g as n becomes larger and larger, in the same way that an ordinary mathematical function approaches its horizontal asymptote.
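
As a concrete illustration (the numbers are chosen only for this example): if f(n) = 3n² + 5n and g(n) = n², then f(n)/g(n) = 3 + 5/n, which approaches 3 as n grows without bound. The run time f therefore grows like n², which the notation below records as f(n) = Θ(n²).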

  1. Θ - notation

    Used to express the growth of an algorithm A(n) relative to some other function F(n), such that given three positive constants c₁, c₂, and n₀, for all inputs n ≥ n₀, A(n) is bounded below by c₁ × F(n) and above by c₂ × F(n). Therefore, A(n) is sandwiched between c₁ × F(n) and c₂ × F(n).

    Expressed as: 0 ≤ c₁ F(n) ≤ A(n) ≤ c₂ F(n) for all n ≥ n₀

    However, when communicating such information, we often say:

    A(n) = Θ(F(n))

    Although we actually mean A(n) ∈ Θ(F(n))

    The sandwich or pinching theorem of limits justifies our notation.

    According to the squeeze or sandwich theorem of limits (see: Squeeze Theorem), it is concluded that F(n) is an asymptotically tight bound for A(n). (A numerical sketch illustrating these bounds appears after this list.)

  2. O - notation

    From Θ-notation, which has both an upper and a lower bound, it follows that there are situations where we only have either an upper bound or a lower bound, not both. The case with no lower bound, that is, the asymptotically tight bound minus its lower half, is expressed using O-notation. This is called an asymptotic upper bound.

    Expressed as: 0 ≤ A(n) ≤ c F(n) for all n ≥ n₀

    It might be compelling to realize that the set of algorithms whose run time is Θ(n) is a subset of the set whose run time is O(n).

    That is,

    Θ(F(n)) ⊆ O(F(n))

    Often, when stating the run times of algorithms, we omit constants and lower-order terms and take just the leading term. A notation of F(n) = Θ(n²) might actually mean that F(n) = a n² + b n + c.

    The fact that

    Θ(n) ⊆ O(n),

    explains this elimination of the minor terms and the leading coefficient.

    Similarly, an algorithm with a growth rate of a × n + b is linear, yet its run time can also be expressed as A(n) = O(n²), since an upper bound need not be tight.

    1. Note

      The upper bound hints at the worst case, which is what we are often interested in knowing. So O-notation tells us about the worst case without getting us engaged in sophisticated calculations.

      For example, the insertion sort algorithm has two major loops, one nested inside the other. The outer loop runs about n times, and the inner loop runs at most n times per pass of the outer loop, so the upper bound is n × n, which is n². This is also the actual worst-case run time of the insertion sort algorithm (a counting sketch appears after this list).

      In effect, all that these run times express are bounds on the consumption of resources, not the exact run time of every instance of the algorithm in question.

  3. Ω-notation

    And finally, the lower bound. If there is a function F(n) and a constant c such that for all n ≥ n₀ an algorithm's run time A(n) is ≥ c F(n), then F(n) is a lower bound for that algorithm, A(n).

    Expressed as: 0 ≤ c F(n) ≤ A(n) for all n ≥ n₀
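
The following minimal Python sketch, not part of the text above, ties the three notations to the insertion sort example: it counts the basic steps insertion sort performs and checks them against illustrative constants (c₁ = 0.25, c₂ = 1, a lower-bound constant of 0.5, and n₀ = 2 are assumptions chosen for this demonstration, not derived here).

  def insertion_sort_steps(items):
      """Sort a copy of items and return the number of basic steps performed."""
      a = list(items)
      steps = 0
      for i in range(1, len(a)):                # outer loop: runs n - 1 times
          key = a[i]
          j = i - 1
          while j >= 0 and a[j] > key:          # inner loop: up to i shifts in the worst case
              a[j + 1] = a[j]
              j -= 1
              steps += 1
          a[j + 1] = key
          steps += 1
      return steps

  c1, c2, c_low = 0.25, 1.0, 0.5                # illustrative constants, valid for n ≥ n₀ = 2
  for n in [10, 100, 1000]:
      worst = insertion_sort_steps(range(n, 0, -1))   # reverse-sorted input: worst case
      best = insertion_sort_steps(range(n))           # already-sorted input: best case
      assert c1 * n**2 <= worst <= c2 * n**2          # Θ(n²): the worst case is sandwiched
      assert worst <= c2 * n**2                       # O(n²): asymptotic upper bound
      assert best >= c_low * n                        # Ω(n): asymptotic lower bound
      print(n, best, worst)

The sorted input shows why the lower bound over all inputs is only linear, while the reverse-sorted input realizes the quadratic upper bound.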

Theorem

A function is bounded if and only if it is bounded from below and from above.

For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

Therefore, the run time of insertion sort is Ω(n) from below and O(n²) from above.

Opposite of Asymptotically tight bounds

In number-line arithmetic, < is known as strictly less than. In asymptotic analysis, the bounds written with ≤ above may be asymptotically tight. Bounds that are strictly not tight, the analogues of strictly less than and strictly greater than, are expressed using the little notations:

  1. little oh: o(g(n))
  2. little omega: ω(g(n))
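
For instance (an illustrative example): 2n = o(n²), because for every positive constant c we eventually have 2n < c × n², equivalently limₙ → ∞ 2n / n² = 0; but 2n² is not o(n²), since the ratio 2n² / n² stays at 2. Symmetrically, n² = ω(n), while n² is not ω(n²).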

Mathematical functions and concepts

  1. Monotonicity
  2. Floors and Ceilings
  3. Modular Arithmetic: 0 ≤ a mod n < n
  4. Polynomials: p(n) = ∑ᵢ₌₀ⁿ aᵢ nⁱ
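
A small Python sketch, with values chosen only for illustration, showing the mod identity and a polynomial evaluated from its coefficients:

  import math

  a, n = 17, 5
  assert 0 <= a % n < n                       # modular arithmetic: 0 ≤ a mod n < n
  print(math.floor(2.7), math.ceil(2.7))      # floors and ceilings: prints 2 3

  coefficients = [4, 0, 3, 2]                 # p(n) = 4 + 3n² + 2n³, i.e. aᵢ for i = 0..3
  def p(n):
      return sum(a_i * n**i for i, a_i in enumerate(coefficients))
  print(p(10))                                # 4 + 300 + 2000 = 2304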