Asymptotic Time Complexity and Big-O Notation
(漸近的計算量と O 記法)
Data Structures and Algorithms
3rd lecture, October 6, 2022
https://www.sw.it.aoyama.ac.jp/2022/DA/lecture3.html
Martin J. Dürst
© 2009-2022 Martin J. Dürst, Aoyama Gakuin University
Today's Schedule
 Summary from last lecture, last week's homework
 Comparing execution times: From concrete to abstract
 Classification of Functions by Asymptotic Growth
 Big-O notation
Moodle Registration, ...
 If your name on your student card is in Kanji, change it to Kanji in
Moodle
 If your name on your student card is in wide Latin letters
(全角英字), change it to wide Latin letters in Moodle
 Make sure your name in Moodle uses the same Kanji as on your student
card
 If you are interested in lab work (ラボワーク), please contact me after this lecture
Summary of Last Lecture
 There are four main ways to describe algorithms: natural
language text, diagrams, pseudocode, programs
 Each description has advantages and disadvantages
 Pseudocode is close to structured programming, but
ignores unnecessary details
 In this course, we will use Ruby as "executable
pseudocode"
 The main criterion to evaluate and compare algorithms is time complexity
as a function of the number of (input) data items
Comparing Execution Times: From Concrete to Abstract
Very concrete
 Measure actual execution time
 Count operation steps
 Estimate worst case number of steps
 Think about asymptotic behavior
Very abstract
Last Lecture's Homework 1: Example for
Asymptotic Growth of Number of Steps
Number of steps (counting additions and divisions):

n (number of data items):   1    8   64   512   4'096   32'768   262'144
linear search:              1    8   64   512   4'096   32'768   262'144
binary search:              1   10   19    28      37       46        55
Observations on Homework 1
RANGE_TOP = 1_000_000_000
makes sure that we (almost) never
find an item.
 Approximating the number of steps by a formula:
 Linear search: n
 Binary search: 3 log_{2} n + 1
 Most important term for increasing n:
 Linear search: n
 Binary search: 3 log_{2} n
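As an illustration, here is a minimal step-counting sketch of binary search in Ruby. The function name binary_search_steps and the choice of which operations to count are assumptions (last lecture's program is not reproduced here), but with this bookkeeping the count matches 3 log_{2} n + 1:

def binary_search_steps(array, key)
  steps = 1                      # initialization
  low, high = 0, array.length - 1
  while low <= high
    steps += 3                   # one addition, one division, one comparison
    mid = (low + high) / 2
    return steps if array[mid] == key
    if array[mid] < key
      low = mid + 1
    else
      high = mid - 1
    end
  end
  steps
end

puts binary_search_steps((1..4096).to_a, 0)   # => 37, i.e. 3 * 12 + 1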
How to Derive Steps from (Pseudo)Code
 Identify basic operations (arithmetic operations, assignments,
comparisons,...)
 Count or calculate number of times each operation is executed
 For branches, count the longest (worst) branch
 For loops, include the loop logic, and multiply by highest (worst) number
of times the loop is executed
 For functions, include some steps for function overhead and multiply by
highest (worst) number of times the function is called
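To illustrate these rules, here is a hypothetical count for a simple summation function (the name sum and the per-operation costs are choices made for this example; other bookkeepings are equally valid):

def sum(array)         # function overhead: a small constant, say 2 steps
  total = 0            # 1 assignment
  array.each do |x|    # loop logic: about 1 step per iteration, n iterations
    total += x         # 1 addition + 1 assignment per iteration
  end
  total                # 1 step to return the result
end
# Total: about 2 + 1 + n·(1+2) + 1 = 3n + 4 steps, growing linearly with n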
Why Worst Case?
Thinking in Terms of Asymptotic Growth
 The execution time of an algorithm and the number of executed steps
depend on the size of the input
(the number of data items in the input)
 We can express this dependency as a function
f(n)
(where n is the size of the input)
 Rules for comparing functions:
 Concentrate on what happens when n increases (gets really
big)
→ Ignore special cases for small n
→ Ignore constant (time) differences (example: initialization time)
 Concentrate on the essence of the algorithm
→ Ignore hardware differences and implementation differences
→ Ignore constant factors
⇒ Independent of hardware, implementation details, step counting
details
⇒ Simple expression of essential differences between algorithms
Last Lecture's Homework 2: Example for
Asymptotic Growth of Number of Steps
Fill in the following table
(use engineering notation (e.g. 1.5E+20) if the numbers get very big;
round liberally, the magnitude of the number is more important than the exact
value)
n             1       10       100      1'000       10'000        100'000
5n            5       50       500      5'000       50'000        500'000
n^{1.2}       1       15.8     251.2    3'981       63'096        1'000'000
n^{2}         1       100      10'000   1'000'000   100'000'000   1e+10
n log_{2} n   0       33.2     664.4    9'966       132'877       1'660'964
1.01^{n}      1.01    1.1046   2.7      20'959      1.636e+43     1.372e+432
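The table can be recomputed with a short Ruby sketch (output formatting simplified). Note that 1.01**n exceeds the range of Ruby's Float around n ≈ 70'000, so its magnitude is derived via logarithms instead:

ns = [1, 10, 100, 1_000, 10_000, 100_000]
ns.each do |n|
  n_log_n = (n * Math.log2(n)).round(1)   # 0 for n = 1
  e = n * Math.log10(1.01)                # 1.01**n equals 10**e
  pow = format("%.3fe+%d", 10 ** (e - e.floor), e.floor)
  puts [n, 5 * n, n ** 1.2, n ** 2, n_log_n, pow].join("\t")
end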
Solution to Homework 3: Compare Function Growth
Which function of each pair (left/right column) grows larger if n
increases?
left          right          answer
100n          n^{2}          n^{2} (right)
1.1^{n}       n^{20}         1.1^{n} (left)
5 log_{2} n   10 log_{4} n   same growth (10 log_{4} n = 5 log_{2} n)
20^{n}        n!             n! (right)
100·2^{n}     2.1^{n}        2.1^{n} (right)
Using Ruby to Compare Function Growth
 Start irb (Interactive Ruby, a 'command prompt' for Ruby)
 Write a loop: (start..end).each { |n| comparison }
 Example of comparison: puts n, 1.1**n, n**20
 Change the start and end values until appropriate
 If necessary, convert integers to floating point numbers for easier comparison
 Define the factorial function:
def fac(n) n < 2 ? 1 : n * fac(n - 1) end
Caution: Use only when you understand which function will eventually grow larger
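Putting the pieces above together, a complete, runnable form of the loop could look as follows (the range 1..50 is an arbitrary starting choice; n**20 is converted to Float so both columns print in the same notation):

(1..50).each do |n|
  puts [n, 1.1 ** n, (n ** 20).to_f].join("\t")
end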
Classification of Functions by Asymptotic Growth
Various growth classes with example functions:
 Linear growth: n, 2n+15, 100n-40, 0.001n, ...
 Quadratic growth: n^{2}, 500n^{2}+30n+3000, ...
 Cubic growth: n^{3}, 5n^{3}+7n^{2}+80, ...
 Logarithmic growth: ln n, log_{2}n, 5 log_{10}n^{2}+30, ...
 Exponential growth: 1.1^{n}, 2^{n}, 2^{0.5n}+1000n^{15}, ...
 ...
Big-O Notation: Set of Functions
Big-O notation expresses the order of growth of a function (e.g. the time complexity of an algorithm).
O(g): Set of functions with lower or same order of growth as function g
Example: O(n^{2}) is the set of functions that grow more slowly than, or as fast as, n^{2}
Usage examples: 3n^{1.5} ∈ O(n^{2}), 15n^{2} ∈ O(n^{2}), 2.7n^{3} ∉ O(n^{2})
Exact Definition of O
Iff we can find values c > 0 and n_{0} ≥ 0 so that for all n ≥ n_{0}, f(n)≤c·g(n), then f(n)∈O(g(n)).
(Iff: if and only if)
∃c>0: ∃n_{0}≥0: ∀n≥n_{0}: f(n)≤c·g(n) ⇔ f(n)∈O(g(n))
 g(n) is an asymptotic upper bound of
f(n)
 In some references (books, ...):
 f(n)∈O(g(n)) is written f(n) = O(g(n))
 In this case, O(g(n)) is always on the right side
 However, f(n)∈O(g(n)) is more precise and easier to understand
 Role of c: Ignore constant-factor differences (e.g. one computer or programming language being twice as fast as another)
 Role of n_{0}: Ignore initialization costs and behavior for small values of n
Example Algorithms
 The number of steps in linear search is: an +
b
⇒ Linear search has time complexity O(n)
(linear search is O(n), linear search has linear time
complexity)
 The number of steps in binary search is:
d log_{2} n + e
⇒ Binary search has time complexity O(log n)
 Because O(log n) ⊊ O(n),
binary search is faster
Comparing the Execution Time of Algorithms
(from last lecture)
Possible questions:
 How many seconds faster is binary search when compared to linear
search?
 How many times faster is binary search when compared to linear
search?
Problem: These questions do not have a single answer.
When we compare algorithms, we want a simple answer.
The simple and general answer uses big-O notation:
Linear search is O(n), binary search is O(log n).
Binary search is faster than linear search (for inputs of significant
size)
Additional Examples for O
 Linear growth:
n∈O(n); 2n+15∈O(n); 100n-40∈O(n); 5 log_{10}n+30∈O(n), ...
O(1)⊂O(n); O(log n)⊂O(n); O(20n)=O(4n+13), ...
 Quadratic growth:
n^{2}∈O(n^{2}); 500n^{2}+30n+3000∈O(n^{2}), ...
O(n)⊂O(n^{2}); O(n^{3})⊄O(n^{2}), ...
 Cubic growth:
n^{3}∈O(n^{3}); 5n^{3}+7n^{2}+80∈O(n^{3}), ...
 Logarithmic growth:
ln n∈O(log n); log_{2}n∈O(log n); 5 log_{10}n^{2}+30∈O(log n), ...
Confirming the Order of a Function
 Method 1: Use the definition
Find appropriate values for n_{0} and c, and check the definition
 Method 2: Use the limit of a function, lim_{n→∞}(f(n)/g(n)):
 If the limit is 0: O(f(n))⊊O(g(n)), f(n)∈O(g(n))
 If the limit is d, 0 < d < ∞: O(f(n))=O(g(n)), f(n)∈O(g(n))
 If the limit is ∞: O(g(n))⊊O(f(n)), f(n)∉O(g(n))
 Method 3: Simplification
Method 1: Use the Definition
We want to check that 2n+15∈O(n)
The definition of big-O is:
∃c>0: ∃n_{0}≥0: ∀n≥n_{0}: f(n)≤c·g(n) ⇔ f(n)∈O(g(n))
We have to find values c and n_{0} so that ∀n≥n_{0}: f(n)≤c·g(n)
Example 1: n_{0} = 5, c = 3
∀n≥5: 2n+15≤3n ⇒ false, either n_{0} or c (or both) are not big enough
Example 2: n_{0} = 10, c = 4
∀n≥10: 2n+15≤4n ⇒ true, therefore 2n+15∈O(n)
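The algebra above is the real check, but a quick numerical spot-check in irb can refute unsuitable pairs. The helper name holds? is made up for this sketch, and it only tests a finite range, so true means "no counterexample found", not a proof:

def holds?(c, n0, upto = 100_000)
  # check f(n) <= c·g(n) for f(n) = 2n+15, g(n) = n on n0..upto
  (n0..upto).all? { |n| 2 * n + 15 <= c * n }
end

p holds?(3, 5)    # => false (Example 1: fails already at n = 5)
p holds?(4, 10)   # => true  (Example 2: no counterexample up to 100'000)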
Method 2: Use the Limit of a Function
We want to check which of 3n^{1.5}, 15n^{2}, and 2.7n^{3} are ∈ O(n^{2})
lim_{n→∞}(3n^{1.5}/n^{2}) = 0 ⇒ O(3n^{1.5})⊊O(n^{2}), 3n^{1.5}∈O(n^{2})
lim_{n→∞}(15n^{2}/n^{2}) = 15 ⇒ O(15n^{2})=O(n^{2}), 15n^{2}∈O(n^{2})
lim_{n→∞}(2.7n^{3}/n^{2}) = ∞ ⇒ O(n^{2})⊊O(2.7n^{3}), 2.7n^{3}∉O(n^{2})
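The three limits can be made plausible numerically by evaluating the ratios for growing n (a heuristic illustration, not a proof):

[1e2, 1e4, 1e6, 1e8].each do |n|
  puts "n = %.0e: %g  %g  %g" %
       [n, 3 * n ** 1.5 / n ** 2, 15 * n ** 2 / n ** 2, 2.7 * n ** 3 / n ** 2]
end
# the first ratio tends to 0, the second stays at 15, the third grows without bound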
Method 3: Simplification of Big-O Notation
 Big-O notation should be as simple as possible
 Examples (all functions except constant ones are assumed to be increasing):
 Constant functions: O(1)
 Linear functions: O(n)
 Quadratic functions: O(n^{2})
 Cubic functions: O(n^{3})
 Logarithmic functions: O(log n)
 For polynomials, all terms except the term with the biggest exponent can
be ignored
 For logarithms, the base is left out (irrelevant)
 Simplification can be applied early (e.g. when counting steps)
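As a worked example of these rules (the step count 3n^{2} + 5n + 20 is made up for illustration): first drop the lower-order terms 5n and 20, leaving 3n^{2}; then drop the constant factor 3, giving time complexity O(n^{2}).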
Ignoring Lower Terms in Polynomials
Concrete example: 500n^{2}+30n ∈ O(n^{2})
Derivation for the general case: f(n) = dn^{a} + en^{b} ∈ O(n^{a}) [a > b > 0]
Definition of O: f(n) ≤ c·g(n) [n > n_{0}; n_{0}, c > 0]
dn^{a} + en^{b} ≤ cn^{a} [a > 0 ⇒ n^{a} > 0]
d + en^{b}/n^{a} = d + en^{b-a} ≤ c [b-a < 0 ⇒ lim_{n→∞} en^{b-a} = 0]
Some possible values for c and n_{0}:
 n_{0} = 1, c ≥ d+e
 n_{0} = 2, c ≥ d+2^{b-a}e
 n_{0} = 10, c ≥ d+10^{b-a}e
Some possible values for the concrete example (500n^{2}+30n):
 n_{0} = 1, c ≥ 530 → 500n^{2}+30n ≤ 530n^{2} [n≥1]
 n_{0} = 2, c ≥ 515 → 500n^{2}+30n ≤ 515n^{2} [n≥2]
 n_{0} = 10, c ≥ 503 → 500n^{2}+30n ≤ 503n^{2} [n≥10]
In general: a > b > 0 ⇒ O(n^{a} + n^{b}) = O(n^{a})
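The three (n_{0}, c) pairs for the concrete example can be spot-checked in Ruby (again over a finite range only, so this supports but does not replace the derivation):

[[1, 530], [2, 515], [10, 503]].each do |n0, c|
  ok = (n0..100_000).all? { |n| 500 * n ** 2 + 30 * n <= c * n ** 2 }
  puts "n0 = #{n0}, c = #{c}: #{ok}"   # prints true for all three pairs
end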
Ignoring Logarithm Base
How do O(log_{2} n) and O(log_{10} n) differ?
(Hint: log_{b} a = log_{c} a / log_{c} b = log_{c} a · log_{b} c)
log_{10} n = log_{2} n · log_{10} 2 ≅ 0.301 · log_{2} n
O(log_{10} n) = O(0.301... · log_{2} n) = O(log_{2} n)
∀a>1, b>1: O(log_{a} n) = O(log_{b} n) = O(log n)
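The constant ratio between logarithms of different bases is easy to observe in irb (Math.log2 and Math.log10 are standard Ruby methods):

[10, 1_000, 1_000_000].each do |n|
  puts Math.log10(n) / Math.log2(n)   # always ≈ 0.30103 = log_{10} 2
end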
Additional Notations: Ω and Θ
 O(g(n)): Set of functions with lower or same order of growth as g(n)
 Ω(g(n)): Set of functions with larger or same order of growth as g(n)
 Θ(g(n)): Set of functions with same order of growth as g(n)
Examples:
3n^{1.5} ∈ O(n^{2}), 15n^{2} ∈ O(n^{2}), 2.7n^{3} ∉ O(n^{2})
3n^{1.5} ∉ Ω(n^{2}), 15n^{2} ∈ Ω(n^{2}), 2.7n^{3} ∈ Ω(n^{2})
3n^{1.5} ∉ Θ(n^{2}), 15n^{2} ∈ Θ(n^{2}), 2.7n^{3} ∉ Θ(n^{2})
Exact Definitions of Ω and Θ
Definition of Ω:
∃c>0: ∃n_{0}≥0: ∀n≥n_{0}: c·g(n)≤f(n) ⇔ f(n)∈Ω(g(n))
Definition of Θ:
∃c_{1}>0: ∃c_{2}>0: ∃n_{0}≥0: ∀n≥n_{0}: c_{1}·g(n)≤f(n)≤c_{2}·g(n) ⇔ f(n)∈Θ(g(n))
Relationships between O, Ω, and Θ:
f(n)∈Θ(g(n)) ⇔ f(n)∈O(g(n)) ∧ f(n)∈Ω(g(n))
Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
Use of Order Notation
 O: Maximum (worst-case) time complexity of algorithms
 Ω: Minimally needed time complexity to solve a problem
 Θ: Used when expressing the fact that a time complexity is
not only possible, but actually reached
In general as well as in this course, mainly O will be used.
Summary
 To compare the time complexity of algorithms:
 Ignore constant terms (initialization,...)
 Ignore constant factors (differences due to hardware or
implementation)
 Count basic steps executed in the worst case
 Look at asymptotic growth when input size increases
 Asymptotic growth can be expressed with big-O notation
 The time complexity of algorithms can be expressed as O(log
n), O(n), O(n^{2}),
O(2^{n}), ...
Homework
(no need to submit)
Review this lecture's material and the additional handout (Section 2.2, pp. 52-59 of The Design & Analysis of Algorithms by Anany Levitin) every day!
On the Web, find algorithms with time complexities
O(1), O(log n), O(n),
O(n log n),
O(n^{2}), O(n^{3}),
O(2^{n}), O(n!), and so
on.
Glossary
 big-O notation
 O 記法 (O そのものは漸近記号ともいう)
 asymptotic growth
 漸近的 (な) 増加
 approximate
 近似する
 essence
 本質
 constant factor
 一定の係数、定倍数
 eventually
 最終的に
 linear growth
 線形増加
 quadratic growth
 二次増加
 cubic growth
 三次増加
 logarithmic growth
 対数増加
 exponential growth
 指数増加
 Omega (Ω)
 オメガ (大文字)
 capital letter
 大文字
 Theta (Θ)
 シータ (大文字)
 asymptotic upper bound
 漸近的上界
 asymptotic lower bound
 漸近的下界
 appropriate
 適切
 limit
 極限
 polynomial
 多項式
 term
 (式の) 項
 logarithm
 対数
 base
 (対数の) 底