Chapter 3
Chapter Summary
• Algorithms
• Example Algorithms
• Growth of Functions
• Big-O and Other Notation
• Complexity of Algorithms
Section 3.1
Section Summary
• Properties of Algorithms
• Algorithms for Sorting
• Halting Problem
Problems and Algorithms
• In many domains there are key general problems that
ask for output with specific properties when given
valid input.
• The first step is to precisely state the problem, using
the appropriate structures to specify the input and the
desired output.
• We then solve the general problem by specifying the
steps of a procedure that takes a valid input and
produces the desired output. This procedure is called
an algorithm.
Algorithms
Abu Ja'far Mohammed ibn Musa al-Khowarizmi (780-850)
Definition: An algorithm is a finite set of precise
instructions for performing a computation or for solving a
problem.
Example: Describe an algorithm for finding the maximum
value in a finite sequence of integers.
Solution: Perform the following steps:
1. Set the temporary maximum equal to the first integer in the sequence.
2. Compare the next integer in the sequence to the temporary maximum. If it is larger than the temporary maximum, set the temporary maximum equal to this integer.
3. Repeat the previous step if there are more integers. If not, stop.
4. When the algorithm terminates, the temporary maximum is the largest integer in the sequence.
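A minimal Python sketch of these steps (the function name find_max and the sample call are illustrative, not from the text):

def find_max(seq):
    """Return the largest integer in a nonempty finite sequence."""
    temp_max = seq[0]              # step 1: temporary maximum is the first integer
    for x in seq[1:]:              # steps 2-3: compare each remaining integer
        if x > temp_max:
            temp_max = x           # update the temporary maximum
    return temp_max                # step 4: temp_max is the largest integer

print(find_max([1, 7, 3, 9, 2]))   # prints 9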
Specifying Algorithms
• Algorithms can be specified in different ways. Their steps can be
described in English or in pseudocode.
• Pseudocode is an intermediate step between an English language
description of the steps and a coding of these steps using a
programming language.
• The form of pseudocode we use is specified in Appendix 3. It uses
some of the structures found in languages such as C++ and Java.
• Programmers can use the description of an algorithm in pseudocode to
construct a program in a particular language.
• Pseudocode helps us analyze the time required to solve a problem using
an algorithm, independent of the actual programming language used
to implement the algorithm.
Properties of Algorithms
• Input: An algorithm has input values from a specified set.
• Output: From the input values, the algorithm produces the
output values from a specified set. The output values are
the solution.
• Correctness: An algorithm should produce the correct
output values for each set of input values.
• Finiteness: An algorithm should produce the output after a
finite number of steps for any input.
• Effectiveness: It must be possible to perform each step of
the algorithm correctly and in a finite amount of time.
• Generality: The algorithm should work for all problems of
the desired form.
Finding the Maximum Element in a
Finite Sequence
• The algorithm in pseudocode:
procedure max(a_1, a_2, ..., a_n: integers)
max := a_1
for i := 2 to n
    if max < a_i then max := a_i
return max {max is the largest element}
Bubble Sort
• Bubble sort makes multiple passes through a list, interchanging adjacent elements that are out of order; a Python sketch follows the bullets below.
• At the end of the first pass, the largest element has been put into the correct position.
• At the end of the second pass, the 2nd largest element has been put into the correct position.
• In each subsequent pass, an additional element is put into the correct position.
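A minimal Python sketch of bubble sort consistent with this description (the name bubble_sort and the sample list are illustrative, not from the slides):

def bubble_sort(a):
    """Sort the list a in place by repeatedly interchanging out-of-order neighbors."""
    n = len(a)
    for i in range(n - 1):                 # each pass moves the next-largest element into place
        for j in range(n - 1 - i):         # the last i elements are already in position
            if a[j] > a[j + 1]:            # adjacent pair out of order?
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([3, 2, 4, 1, 5]))        # prints [1, 2, 3, 4, 5]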
Insertion Sort
• Insertion sort begins with the 2nd element. It compares the 2nd element with the 1st and puts it before the first if it is not larger.
• Next the 3rd element is put into the correct position among the first 3 elements.
• In each subsequent pass, the (n+1)st element is put into its correct position among the first n+1 elements.
• Linear search is used to find the correct position.
procedure insertion sort(a_1, ..., a_n: real numbers with n ≥ 2)
for j := 2 to n
    i := 1
    while a_j > a_i
        i := i + 1
    m := a_j
    for k := 0 to j − i − 1
        a_(j−k) := a_(j−k−1)
    a_i := m
{Now a_1, ..., a_n is in increasing order}
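A minimal Python sketch of the same procedure, using 0-based list indexing instead of the pseudocode's 1-based indexing (the name insertion_sort is illustrative):

def insertion_sort(a):
    """Sort the list a in place by inserting each element among those before it."""
    for j in range(1, len(a)):
        i = 0
        while a[j] > a[i]:            # linear search for the correct position i
            i += 1
        m = a[j]
        for k in range(j, i, -1):     # shift elements right to make room
            a[k] = a[k - 1]
        a[i] = m                      # insert the saved element at position i
    return a

print(insertion_sort([3, 2, 4, 1, 5]))   # prints [1, 2, 3, 4, 5]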
Insertion Sort
Example: Show all the steps of insertion sort with the
input: 3 2 4 1 5
i. 2 3 4 1 5 (first two positions are interchanged)
ii. 2 3 4 1 5 (third element remains in its position)
iii. 1 2 3 4 5 (fourth element is placed at the beginning)
iv. 1 2 3 4 5 (fifth element remains in its position)
Halting Problem
Example: Can we develop a procedure that takes as
input a computer program along with its input and
determines whether the program will eventually halt
with that input?
• Solution: No. Proof by contradiction.
• Assume that there is such a procedure and call it H(P, I). The procedure H(P, I) takes as input a program P and the input I to P.
• H outputs “halt” if it is the case that P will stop when
run with input I.
• Otherwise, H outputs “loops forever.”
Halting Problem
• Since a program is a string of characters, we can call
H(P,P). Construct a procedure K(P), which works as
follows.
• If H(P,P) outputs "loops forever", then K(P) halts.
• If H(P,P) outputs "halt", then K(P) goes into an infinite loop, printing "ha" on each iteration.
[Figure: construction of K(P). The program K feeds P to H as both the program and its input; if H(P,P) outputs "halt", K(P) loops forever, and if H(P,P) outputs "loops forever", K(P) halts.]
• Now run K on itself, i.e., consider K(K). If H(K,K) outputs "loops forever", then K(K) halts; if H(K,K) outputs "halt", then K(K) loops forever. Either way K does the opposite of what H says it does, a contradiction.
• Therefore no such procedure H can exist.
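A hypothetical Python sketch of the construction used in the proof. The function halts plays the role of the assumed procedure H; no such procedure can actually be written, so its body is only a stub (all names here are illustrative):

def halts(program, program_input):
    """Assumed procedure H: report whether program halts on program_input.
    The proof shows this cannot exist; the stub only marks the assumption."""
    raise NotImplementedError("undecidable in general")

def K(P):
    """Do the opposite of whatever halts(P, P) predicts."""
    if halts(P, P):          # H says P halts when run on itself...
        while True:
            print("ha")      # ...so K(P) loops forever
    else:
        return               # H says P loops forever on itself, so K(P) halts

# Feeding K to itself, K(K), would force halts(K, K) to be wrong either way,
# which is the contradiction in the proof.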
Big-O Notation
Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k. The constants C and k are called witnesses to the relationship f(x) is O(g(x)).
• If one pair of witnesses is found, then there are infinitely many pairs. We can always make the k or the C larger and still maintain the inequality |f(x)| ≤ C|g(x)|.
• Any pair C' and k' where C < C' and k < k' is also a pair of witnesses, since |f(x)| ≤ C|g(x)| ≤ C'|g(x)| whenever x > k' > k.
• You may see "f(x) = O(g(x))" instead of "f(x) is O(g(x))."
• But this is an abuse of the equals sign, since the meaning is that there is an inequality relating the values of f and g for sufficiently large values of x.
• It is OK to write f(x) ∈ O(g(x)), because O(g(x)) represents the set of functions that are O(g(x)).
• Usually, we will drop the absolute value sign since we will
always deal with functions that take on positive values.
Using the Definition of Big-O Notation
Example: Show that f(x) = x² + 2x + 1 is O(x²).
Solution: Since x ≤ x² and 1 ≤ x² when x > 1,
0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x².
• We can take C = 4 and k = 1 as witnesses to show that f(x) is O(x²) (see the graph on the next slide).
• Alternatively, when x > 2, we have 2x ≤ x² and 1 ≤ x². Hence,
0 ≤ x² + 2x + 1 ≤ x² + x² + x² = 3x² when x > 2.
• We can take C = 3 and k = 2 as witnesses instead.
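A quick Python check of the first pair of witnesses, evaluating both sides of the inequality at a few sample points with x > k (purely illustrative):

f = lambda x: x**2 + 2*x + 1
g = lambda x: x**2

C, k = 4, 1                            # witnesses from the solution above
for x in [2, 5, 10, 100]:              # sample points with x > k
    assert abs(f(x)) <= C * abs(g(x))
    print(x, f(x), C * g(x))           # f(x) never exceeds 4*x**2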
Illustration of Big-O Notation
[Figure: f(x) = x² + 2x + 1 is O(x²); the graph of f(x) lies below 4x² for x > 1, illustrating the witnesses C = 4, k = 1.]
• If f(x) is O(g(x)) and h(x) is larger than g(x) for all positive real numbers, then f(x) is O(h(x)).
• Indeed, note that if |f(x)| ≤ C|g(x)| for x > k and if |h(x)| > |g(x)| for all x, then |f(x)| ≤ C|h(x)| for x > k. Hence, f(x) is O(h(x)).
• For many applications, the goal is to select the function g(x) in O(g(x))
as small as possible (up to multiplication by a constant, of course).
Using the Definition of Big-O Notation
Example: Show that 7x² is O(x³).
Solution: When x > 7, 7x² < x³. Take C = 1 and k = 7
as witnesses to establish that 7x² is O(x³).
(Would C = 7 and k = 1 work?)
Example: Show that n² is not O(n).
Solution: Suppose there are constants C and k for
which n² ≤ Cn whenever n > k. Then, by dividing
both sides of n² ≤ Cn by n, we have that n ≤ C must
hold for all n > k. A contradiction!
Big-O Estimates for Polynomials
Example: Let f(x) = a_n x^n + a_(n-1) x^(n-1) + ··· + a_1 x + a_0,
where a_0, a_1, ..., a_n are real numbers with a_n ≠ 0.
Then f(x) is O(x^n).
Proof: Assuming x > 1 and using the triangle inequality (an exercise in Section 1.8):
|f(x)| = |a_n x^n + a_(n-1) x^(n-1) + ··· + a_1 x + a_0|
    ≤ |a_n| x^n + |a_(n-1)| x^(n-1) + ··· + |a_1| x + |a_0|
    = x^n (|a_n| + |a_(n-1)|/x + ··· + |a_1|/x^(n-1) + |a_0|/x^n)
    ≤ x^n (|a_n| + |a_(n-1)| + ··· + |a_1| + |a_0|)
Take C = |a_n| + |a_(n-1)| + ··· + |a_1| + |a_0| and k = 1. Then f(x) is O(x^n).
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate the sum of
the first n positive integers.
Solution: 1 + 2 + ··· + n ≤ n + n + ··· + n = n²
1 + 2 + ··· + n is O(n²), taking C = 1 and k = 1.
Example: Use big-O notation to estimate the factorial
function f(n) = n! = 1 × 2 × ··· × n.
Solution:
n! = 1 × 2 × ··· × n ≤ n × n × ··· × n = n^n
n! is O(n^n), taking C = 1 and k = 1.
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate log n!
Solution: Given that n! ≤ n^n (previous slide),
log(n!) ≤ n·log(n).
Hence, log(n!) is O(n·log(n)), taking C = 1 and k = 1.
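A small Python check of the estimates for n! and log n! at sample values of n (illustrative only; math.factorial and math.log are standard-library functions):

import math

for n in [2, 5, 10, 50]:
    assert math.factorial(n) <= n**n                        # n! <= n^n
    assert math.log(math.factorial(n)) <= n * math.log(n)   # log(n!) <= n log n
    print(n, round(math.log(math.factorial(n)), 2), round(n * math.log(n), 2))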
Display of Growth of Functions
[Figure: graphs comparing the growth of commonly used functions.]
Big-O Estimates for Combinations of Functions
• If f₁(x) is O(g₁(x)) and f₂(x) is O(g₂(x)), then (f₁ + f₂)(x) is O(max(|g₁(x)|, |g₂(x)|)).
• By the definition of big-O notation, there are constants C₁, C₂, k₁, k₂ such that |f₁(x)| ≤ C₁|g₁(x)| when x > k₁ and |f₂(x)| ≤ C₂|g₂(x)| when x > k₂.
• |(f₁ + f₂)(x)| = |f₁(x) + f₂(x)| ≤ |f₁(x)| + |f₂(x)| by the triangle inequality |a + b| ≤ |a| + |b|.
• |f₁(x)| + |f₂(x)| ≤ C₁|g₁(x)| + C₂|g₂(x)|
    ≤ C₁|g(x)| + C₂|g(x)| where g(x) = max(|g₁(x)|, |g₂(x)|)
    = (C₁ + C₂)|g(x)|
    = C|g(x)| where C = C₁ + C₂
• Therefore |(f₁ + f₂)(x)| ≤ C|g(x)| whenever x > k, where k = max(k₁, k₂).
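A small numeric illustration of this estimate; the particular functions f₁(x) = x², f₂(x) = 10x + 5 and the witnesses chosen for them are my own illustrative choices, not from the slides:

f1 = lambda x: x**2           # f1(x) is O(x^2) with C1 = 1, k1 = 0
f2 = lambda x: 10*x + 5       # f2(x) is O(x)   with C2 = 15, k2 = 1
g  = lambda x: x**2           # g(x) = max(x^2, x) = x^2 for x >= 1

C, k = 1 + 15, 1              # C = C1 + C2, k = max(k1, k2)
for x in [2, 10, 100]:        # sample points with x > k
    assert f1(x) + f2(x) <= C * g(x)
print("(f1 + f2)(x) is O(x^2) at the sampled points")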
Big-Omega Notation
Definition: Let f and g be functions from the set of
integers or the set of real numbers to the set of real
numbers. We say that f(x) is Ω(g(x))
if there are constants C and k such that
|f(x)| ≥ C|g(x)| when x > k.
(Ω is the uppercase version of the lowercase Greek letter ω.)
• We say that "f(x) is big-Omega of g(x)."
• Big-O gives an upper bound on the growth of a function,
while Big-Omega gives a lower bound. Big-Omega tells us
that a function grows at least as fast as another.
• f(x) is Ω(g(x)) if and only if g(x) is O(f(x)). This follows
from the definitions.
Big-Omega Notation
Example: Show that f(x) = 8x³ + 5x² + 7 is
Ω(g(x)), where g(x) = x³.
Solution: f(x) = 8x³ + 5x² + 7 ≥ 8x³ for all
positive real numbers x.
• Is it also the case that g(x) = x³ is O(8x³ + 5x² + 7)?
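A quick Python check of these bounds at sample points; the upper-bound constant 20 comes from f(x) ≤ 8x³ + 5x³ + 7x³ = 20x³ for x ≥ 1, and the sample points are my own illustrative choices:

f = lambda x: 8*x**3 + 5*x**2 + 7
g = lambda x: x**3

for x in [1, 2, 10, 100]:
    assert f(x) >= 8 * g(x)    # witnesses C = 8, k = 0: f(x) is Omega(x^3)
    assert f(x) <= 20 * g(x)   # witnesses C = 20, k = 1: f(x) is also O(x^3)
    assert g(x) <= f(x)        # witnesses C = 1, k = 0: g(x) is O(f(x)), answering the question
print("checks passed at the sampled points")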
Big-Theta Notation (Θ is the uppercase version of the lowercase Greek letter θ.)
• Definition: Let f and g be functions from the set of
integers or the set of real numbers to the set of real
numbers. The function f(x) is Θ(g(x)) if
f(x) is O(g(x)) and f(x) is Ω(g(x)).
Matrix Multiplication Algorithm
• The following pseudocode computes the product C = AB of an m × k matrix A and a k × n matrix B directly from the definition.
procedure matrix multiplication(A, B: matrices)
for i := 1 to m
    for j := 1 to n
        c_ij := 0
        for q := 1 to k
            c_ij := c_ij + a_iq · b_qj
return C {C = [c_ij] is the product of A and B}
Complexity of Matrix Multiplication
Example: How many additions of integers and
multiplications of integers are used by the matrix
multiplication algorithm to multiply two n × n
matrices?
Solution: There are n² entries in the product. Finding
each entry requires n multiplications and n − 1
additions. Hence, n³ multiplications and n²(n − 1)
additions are used.
Hence, the complexity of matrix multiplication is
O(n³).
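A minimal Python sketch of the algorithm for two n × n matrices that also counts the integer operations performed (the names and the sample matrices are illustrative):

def matrix_multiply(A, B):
    """Multiply two n x n matrices by the definition, counting operations."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    mults = adds = 0
    for i in range(n):
        for j in range(n):
            for q in range(n):
                C[i][j] += A[i][q] * B[q][j]
                mults += 1
                adds += 1
    return C, mults, adds

C, mults, adds = matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C)             # [[19, 22], [43, 50]]
print(mults, adds)   # n**3 multiplications; n**3 additions counted here, since the
                     # loop also adds into the initial 0, versus n**2*(n-1) on the slide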
Algorithmic Paradigms
• An algorithmic paradigm is a general approach
based on a particular concept for constructing
algorithms to solve a variety of problems.
• Brute-force algorithms solve the problem in the most
straightforward manner, without taking advantage of
any ideas that can make the algorithm more efficient:
sequential search, bubble sort, insertion sort.
• There are many other paradigms: Greedy algorithms,
divide-and-conquer algorithms, dynamic programming,
backtracking, and probabilistic algorithms.