
Algorithms language Chapter 2: Algorithm Complexity (Part-1)

Chapter 2: Algorithm Complexity


Section 2.1: Big-Theta notation

Unlike Big-O notation, which represents only an upper bound on the running time of an algorithm, Big-Theta is a
tight bound: both an upper and a lower bound. A tight bound is more precise, but also more difficult to compute.

The Big-Theta notation is symmetric: f(x) = Ө(g(x)) <=> g(x) = Ө(f(x))

An intuitive way to grasp it is that f(x) = Ө(g(x)) means that the graphs of f(x) and g(x) grow at the same rate, or
that the graphs 'behave' similarly for big enough values of x.

The full mathematical expression of the Big-Theta notation is as follows:

Ө(f(x)) = {g: N0 -> R | there exist constants c1, c2, n0 > 0 such that c1 <= abs(g(n) / f(n)) <= c2 for every n > n0}, where abs is the absolute value.

An example

If an algorithm takes 42n^2 + 25n + 4 operations to finish for an input of size n, we say that it is O(n^2), but it is also O(n^3)
and O(n^100). However, it is Ө(n^2), and it is not Ө(n^3), Ө(n^4), etc. An algorithm that is Ө(f(n)) is also O(f(n)), but
not vice versa!
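
As a quick numerical illustration of this example (a minimal Python sketch, not part of the original text; the function name operations is made up), the ratio of 42n^2 + 25n + 4 to n^2 settles near the constant 42 as n grows, while the ratio to n^3 tends to 0, which is exactly why n^2 is a tight bound and n^3 is only an upper bound:

def operations(n):
    # Hypothetical cost function from the example above (name made up for this sketch).
    return 42 * n**2 + 25 * n + 4

for n in (10, 100, 1000, 10000, 100000):
    ratio_n2 = operations(n) / n**2   # settles near the constant 42 -> n^2 is a tight bound
    ratio_n3 = operations(n) / n**3   # tends to 0 -> n^3 is only an upper bound
    print(f"n={n:>6}  f(n)/n^2 = {ratio_n2:.4f}  f(n)/n^3 = {ratio_n3:.6f}")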

Formal mathematical definition

Ө(g(x)) is a set of functions.

Ө(g(x)) = {f(x) such that there exist positive constants c1, c2, N such that 0 <= c1*g(x) <= f(x)
<= c2*g(x) for all x > N}

Because Ө(g(x)) is a set, we could write f(x) ∈ Ө(g(x)) to indicate that f(x) is a member of Ө(g(x)). Instead, we
will usually write f(x) = Ө(g(x)) to express the same notion - that's the common way.
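
For instance (a worked check using the running example, not part of the original text), take f(x) = 42x^2 + 25x + 4 and g(x) = x^2. For every x >= 1 we have

0 <= 42x^2 <= 42x^2 + 25x + 4 <= 42x^2 + 25x^2 + 4x^2 = 71x^2,

so the constants c1 = 42, c2 = 71 and N = 1 satisfy the definition, and f(x) = Ө(x^2).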

Whenever Ө(g(x)) appears in a formula, we interpret it as standing for some anonymous function that we do not
care to name. For example the equation T(n) = T(n/2) + Ө(n), means T(n) = T(n/2) + f(n) where f(n) is a
function in the set Ө(n).
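
As a side note (not from the original text), this particular recurrence can be resolved explicitly: unrolling T(n) = T(n/2) + Ө(n) gives an anonymous cost of at most c*(n + n/2 + n/4 + ...) <= 2*c*n for some constant c, and at least c'*n from the first term alone, so T(n) = Ө(n).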

Let f and g be two functions defined on some subset of the real numbers. We write f(x) = Ө(g(x)) as
x -> infinity if and only if there are positive constants K and L and a real number x0 such that the following holds:

K*|g(x)| <= f(x) <= L*|g(x)| for all x >= x0.

This definition is equivalent to:

f(x) = O(g(x)) and f(x) = Ω(g(x))

A method that uses limits

If limit(x -> infinity) f(x)/g(x) = c ∈ (0, ∞), i.e. the limit exists and is a finite positive constant, then f(x) = Ө(g(x)).
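
As an illustration of the limit method (a sketch using the sympy library, which is not mentioned in the original text), the limit of the running example 42x^2 + 25x + 4 divided by x^2 is the finite positive constant 42, confirming the Ө(x^2) bound:

import sympy

x = sympy.symbols('x')
f = 42 * x**2 + 25 * x + 4

# Limit of f(x)/x^2 as x -> infinity is 42 (finite and positive), so f(x) = Theta(x^2).
print(sympy.limit(f / x**2, x, sympy.oo))   # 42

# Limit of f(x)/x^3 as x -> infinity is 0, so x^3 is an upper bound but not a tight bound.
print(sympy.limit(f / x**3, x, sympy.oo))   # 0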

Common Complexity Classes

Name         Notation       n = 10        n = 100
Constant      Ө(1)            1             1
Logarithmic   Ө(log(n))       3             7
Linear        Ө(n)            10            100
Linearithmic  Ө(n*log(n))     30            700
Quadratic     Ө(n^2)          100           10 000
Exponential   Ө(2^n)          1 024         1.267650e+30
Factorial     Ө(n!)           3 628 800     9.332622e+157
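
The table values can be reproduced approximately with a short script (a sketch, not part of the original post; it assumes base-2 logarithms, which matches the rounded values shown above):

import math

# Growth functions for the complexity classes in the table (logarithms taken base 2).
classes = {
    "Constant":     lambda n: 1,
    "Logarithmic":  lambda n: math.log2(n),
    "Linear":       lambda n: n,
    "Linearithmic": lambda n: n * math.log2(n),
    "Quadratic":    lambda n: n ** 2,
    "Exponential":  lambda n: 2 ** n,
    "Factorial":    lambda n: math.factorial(n),
}

for name, growth in classes.items():
    print(f"{name:<13} n=10: {growth(10):.7g}   n=100: {growth(100):.7g}")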
