Chapter 3 covers algorithms, their properties, and complexity, emphasizing the importance of specifying problems and solutions through algorithms. It discusses various sorting algorithms, including bubble sort and insertion sort, and introduces the halting problem, which is proven to be undecidable. The chapter also explains the growth of functions using Big-O notation to analyze algorithm efficiency.

Algorithms
Chapter 3
Chapter Summary
• Algorithms
• Example Algorithms
• Growth of Functions
• Big-O and Other Notation
• Complexity of Algorithms
Algorithms
Section 3.1
Section Summary
• Properties of Algorithms
• Algorithms for Sorting
• Halting Problem
Problems and Algorithms
• In many domains there are key general problems that
ask for output with specific properties when given
valid input.
• The first step is to precisely state the problem, using
the appropriate structures to specify the input and the
desired output.
• We then solve the general problem by specifying the
steps of a procedure that takes a valid input and
produces the desired output. This procedure is called
an algorithm.
Algorithms
(Abu Ja'far Mohammed ibn Musa al-Khowarizmi, 780-850)
Definition: An algorithm is a finite set of precise
instructions for performing a computation or for solving a
problem.
Example: Describe an algorithm for finding the maximum
value in a finite sequence of integers.
Solution: Perform the following steps:
1. Set the temporary maximum equal to the first integer in the sequence.
2. Compare the next integer in the sequence to the temporary maximum. If it is larger than the temporary maximum, set the temporary maximum equal to this integer.
3. Repeat the previous step if there are more integers. If not, stop.
4. When the algorithm terminates, the temporary maximum is the largest integer in the sequence.
Specifying Algorithms
• Algorithms can be specified in different ways. Their steps can be
described in English or in pseudocode.
• Pseudocode is an intermediate step between an English language
description of the steps and a coding of these steps using a
programming language.
• The form of pseudocode we use is specified in Appendix 3. It uses
some of the structures found in languages such as C++ and Java.
• Programmers can use the description of an algorithm in pseudocode to
construct a program in a particular language.
• Pseudocode helps us analyze the time required to solve a problem using an algorithm, independent of the actual programming language used to implement the algorithm.
Properties of Algorithms
• Input: An algorithm has input values from a specified set.
• Output: From the input values, the algorithm produces the
output values from a specified set. The output values are
the solution.
• Correctness: An algorithm should produce the correct
output values for each set of input values.
• Finiteness: An algorithm should produce the output after a finite number of steps for any input.
• Effectiveness: It must be possible to perform each step of
the algorithm correctly and in a finite amount of time.
• Generality: The algorithm should work for all problems of
the desired form.
Finding the Maximum Element in a
Finite Sequence
• The algorithm in pseudocode:

procedure max(a1, a2, ..., an: integers)
max := a1
for i := 2 to n
  if max < ai then max := ai
return max {max is the largest element}

Does this algorithm have all the properties listed on the previous slide?
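The pseudocode above can be translated almost line for line into runnable code. The following Python sketch does so; the function name find_max is illustrative, not from the slides:

```python
def find_max(a):
    """Return the largest element of a nonempty sequence of integers."""
    # Set the temporary maximum to the first element.
    max_so_far = a[0]
    # Compare each remaining element to the temporary maximum;
    # update the temporary maximum when a larger element is found.
    for x in a[1:]:
        if max_so_far < x:
            max_so_far = x
    return max_so_far

print(find_max([3, 2, 4, 1, 5]))  # → 5
```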
Example: Sorting Algorithms
• To sort the elements of a list is to put them in increasing order (numerical order, alphabetical order, and so on).
• Sorting is an important problem because:
• A nontrivial percentage of all computing resources are
devoted to sorting different kinds of lists, especially
applications involving large databases of information that
need to be presented in a particular order (e.g., by customer,
part number etc.).
• An amazing number of fundamentally different algorithms have been invented for sorting (more than 100). Their relative advantages and disadvantages have been studied extensively.
• Sorting algorithms are useful to illustrate the basic notions of
computer science.
Bubble Sort
• Bubble sort makes multiple passes through a list. Every pair of adjacent elements found to be out of order is interchanged.

procedure bubblesort(a1, ..., an: real numbers with n ≥ 2)
for i := 1 to n − 1
  for j := 1 to n − i
    if aj > aj+1 then interchange aj and aj+1
{a1, ..., an is now in increasing order}
Bubble Sort
Example: Show the steps of bubble sort with 3 2 4 1 5.
First pass:  3 2 4 1 5 → 2 3 4 1 5 → 2 3 1 4 5 (5 now guaranteed in correct position)
Second pass: 2 3 1 4 5 → 2 1 3 4 5 (4 now in correct position)
Third pass:  2 1 3 4 5 → 1 2 3 4 5 (3 now in correct position)
Fourth pass: 1 2 3 4 5, no interchanges (list is in increasing order)
• At the end of the first pass, the largest element has been put into the correct position.
• At the end of the second pass, the 2nd largest element has been put into the correct position.
• In each subsequent pass, an additional element is put into the correct position.
Insertion Sort
• Insertion sort begins with the 2nd element. It compares the 2nd element with the 1st and puts it before the first if it is not larger.
• Next the 3rd element is put into the correct position among the first 3 elements.
• In each subsequent pass, the (n+1)st element is put into its correct position among the first n+1 elements.
• Linear search is used to find the correct position.

procedure insertion sort(a1, ..., an: real numbers with n ≥ 2)
for j := 2 to n
  i := 1
  while aj > ai
    i := i + 1
  m := aj
  for k := 0 to j − i − 1
    aj−k := aj−k−1
  ai := m
{Now a1, ..., an is in increasing order}
Insertion Sort
Example: Show all the steps of insertion sort with the
input: 3 2 4 1 5
i.   2 3 4 1 5 (first two positions are interchanged)
ii.  2 3 4 1 5 (third element remains in its position)
iii. 1 2 3 4 5 (fourth element is placed at the beginning)
iv.  1 2 3 4 5 (fifth element remains in its position)
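The insertion sort pseudocode carries over to Python as follows; this is a sketch mirroring the slide's linear search and shifting loop (the name insertion_sort is illustrative):

```python
def insertion_sort(a):
    """Sort a list in increasing order, mirroring the slides' pseudocode."""
    a = list(a)  # work on a copy
    for j in range(1, len(a)):
        # Linear search in the sorted prefix a[0..j-1] for the
        # correct position i of element a[j].
        i = 0
        while a[i] < a[j]:
            i += 1
        m = a[j]
        # Shift a[i..j-1] one place to the right, then insert.
        for k in range(j, i, -1):
            a[k] = a[k - 1]
        a[i] = m
    return a

print(insertion_sort([3, 2, 4, 1, 5]))  # → [1, 2, 3, 4, 5]
```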
Halting Problem
Example: Can we develop a procedure that takes as
input a computer program along with its input and
determines whether the program will eventually halt
with that input?
Halting Problem
Example: Can we develop a procedure that takes as
input a computer program along with its input and
determines whether the program will eventually halt
with that input?
• Solution: No. Proof by contradiction.
• Assume that there is such a procedure and call it H. The procedure H takes as input a program P and the input I to P.
• H outputs “halt” if it is the case that P will stop when run with input I.
• Otherwise, H outputs “loops forever.”
Halting Problem
• Since a program is a string of characters, we can call
H(P,P). Construct a procedure K(P), which works as
follows.
• If H(P,P) outputs “loops forever” then K(P) halts.
• If H(P,P) outputs “halt” then K(P) goes into an infinite loop, printing “ha” on each iteration.

(Diagram: the program P is fed to H as both the program and its input. If H(P, P) = “halt”, then K(P) loops forever; if H(P, P) = “loops forever”, then K(P) halts.)
Halting Problem
• Now we call K with K as input, i.e., K(K).
• If the output of H(K,K) is “loops forever” then K(K) halts, a contradiction with the fact that H(K,K) outputs “loops forever” iff K loops forever on input K.
• If the output of H(K,K) is “halts” then K(K) loops forever, a contradiction with the fact that H(K,K) outputs “halts” iff K halts on input K.
• Therefore, there cannot be a procedure that decides whether or not an arbitrary program halts. The halting problem is unsolvable (undecidable).
The Growth of Functions
Section 3.2
Section Summary
• Big-O Notation
• Big-O Estimates for Important Functions
• Big-Omega and Big-Theta Notation

(Portraits: Donald E. Knuth, born 1938; Edmund Landau, 1877-1938; Paul Gustav Heinrich Bachmann, 1837-1920)
The Growth of Functions
• In both computer science and in mathematics, there are
many times when we care about how fast a function grows.
• In computer science, we want to understand how quickly
an algorithm can solve a problem as the size of the input
grows.
• We can compare the efficiency of two different algorithms for
solving the same problem.
• We can also determine whether it is practical to use a
particular algorithm as the input grows.
• Two of the areas of mathematics where questions about the
growth of functions are studied are:
• number theory
• combinatorics
Big-O Notation
Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that
|f(x)| ≤ C|g(x)|
whenever x > k. (Illustration on next slide.)
• This is read as “f(x) is big-O of g(x)” or “g asymptotically dominates f.”
• The constants C and k are called witnesses to the relationship f(x) is O(g(x)). Only one pair of witnesses is needed.
Illustration of Big-O Notation
f(x) is O(g(x))
(Graph: the part of the graph of f(x) that satisfies f(x) ≤ Cg(x), namely x > k, is shown in color.)
Big-O Notation
• If one pair of witnesses is found, then there are infinitely many pairs. We can always make k or C larger and still maintain the inequality |f(x)| ≤ C|g(x)|.
• Any pair C′ and k′ where C < C′ and k < k′ is also a pair of witnesses, since |f(x)| ≤ C|g(x)| ≤ C′|g(x)| whenever x > k′ > k.
• You may see “f(x) = O(g(x))” instead of “f(x) is O(g(x)).”
• But this is an abuse of the equals sign, since the meaning is that there is an inequality relating the values of f and g for sufficiently large values of x.
• It is OK to write f(x) ∈ O(g(x)), because O(g(x)) represents the set of functions that are O(g(x)).
• Usually, we will drop the absolute value signs, since we will always deal with functions that take on positive values.
Using the Definition of Big-O Notation
Example: Show that f(x) = x² + 2x + 1 is O(x²).
Using the Definition of Big-O Notation
Example: Show that f(x) = x² + 2x + 1 is O(x²).
Solution: Since x < x² and 1 < x² when x > 1,
0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x² when x > 1.
• We can take C = 4 and k = 1 as witnesses to show that f(x) is O(x²). (See graph on next slide.)
• Alternatively, when x > 2, we have 2x ≤ x² and 1 ≤ x². Hence, 0 ≤ x² + 2x + 1 ≤ x² + x² + x² = 3x² when x > 2.
• We can take C = 3 and k = 2 as witnesses instead.
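Both witness pairs can be spot-checked numerically. This is only a sanity check over a finite range, not a proof; the inequalities on the slide are what establish the bound for all x:

```python
def f(x):
    return x * x + 2 * x + 1

# Witnesses C = 4, k = 1: f(x) <= 4x^2 holds from x = 1 onward.
assert all(f(x) <= 4 * x * x for x in range(1, 1000))
# Witnesses C = 3, k = 2: f(x) <= 3x^2 holds from x = 2 onward.
assert all(f(x) <= 3 * x * x for x in range(2, 1000))
print("both witness pairs hold on the checked range")
```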
Illustration of Big-O Notation
f(x) = x² + 2x + 1 is O(x²)
(Graph: the part of the graph of f(x) = x² + 2x + 1 that satisfies f(x) ≤ 4x² is shown in blue; x² + 2x + 1 ≤ 4x² for x > 1.)
Big-O Notation
• Both f(x) = x² + 2x + 1 and g(x) = x² are such that f(x) is O(g(x)) and g(x) is O(f(x)). We say that the two functions are of the same order. (More on this later.)
• If f(x) is O(g(x)) and h(x) is larger than g(x) for all positive real numbers, then f(x) is O(h(x)).
• Indeed, note that if |f(x)| ≤ C|g(x)| for x > k and |h(x)| > |g(x)| for all x, then |f(x)| ≤ C|h(x)| if x > k. Hence, f(x) is O(h(x)).
• For many applications, the goal is to select the function g(x) in O(g(x)) as small as possible (up to multiplication by a constant, of course).
Using the Definition of Big-O Notation
Example: Show that 7x² is O(x³).
Example: Show that n² is not O(n).
Using the Definition of Big-O Notation
Example: Show that 7x² is O(x³).
Solution: When x > 7, 7x² < x³. Take C = 1 and k = 7 as witnesses to establish that 7x² is O(x³).
(Would C = 7 and k = 1 work?)
Example: Show that n² is not O(n).
Solution: Suppose there are constants C and k for which n² ≤ Cn whenever n > k. Then, dividing both sides of n² ≤ Cn by n, we have that n ≤ C must hold for all n > k. A contradiction!
Big-O Estimates for Polynomials
Example: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where a₀, a₁, ..., aₙ are real numbers with aₙ ≠ 0. Then f(x) is O(xⁿ).
Big-O Estimates for Polynomials
Example: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where a₀, a₁, ..., aₙ are real numbers with aₙ ≠ 0. Then f(x) is O(xⁿ).
Proof: Using the triangle inequality (an exercise in Section 1.8), and assuming x > 1:
|f(x)| = |aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀|
  ≤ |aₙ|xⁿ + |aₙ₋₁|xⁿ⁻¹ + ⋯ + |a₁|x + |a₀|
  = xⁿ(|aₙ| + |aₙ₋₁|/x + ⋯ + |a₁|/xⁿ⁻¹ + |a₀|/xⁿ)
  ≤ xⁿ(|aₙ| + |aₙ₋₁| + ⋯ + |a₁| + |a₀|)
Take C = |aₙ| + |aₙ₋₁| + ⋯ + |a₁| + |a₀| and k = 1. Then f(x) is O(xⁿ).
• The leading term aₙxⁿ of a polynomial dominates its growth.
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate the sum of the first n positive integers.
Example: Use big-O notation to estimate the factorial function f(n) = n! = 1 × 2 × ⋯ × n.
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate the sum of the first n positive integers.
Solution: 1 + 2 + ⋯ + n ≤ n + n + ⋯ + n = n²
So 1 + 2 + ⋯ + n is O(n²), taking C = 1 and k = 1.
Example: Use big-O notation to estimate the factorial function f(n) = n! = 1 × 2 × ⋯ × n.
Solution: n! = 1 × 2 × ⋯ × n ≤ n × n × ⋯ × n = nⁿ
So n! is O(nⁿ), taking C = 1 and k = 1.
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate log n!
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate log n!
Solution: Given that n! ≤ nⁿ (previous slide), then log(n!) ≤ n·log(n).
Hence, log(n!) is O(n·log(n)), taking C = 1 and k = 1.
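The estimate log(n!) ≤ n·log(n) can be spot-checked numerically over a finite range; this only illustrates the bound, the inequality n! ≤ nⁿ is what proves it:

```python
import math

# Check log(n!) <= n * log(n) for n = 2..200 (witnesses C = 1, k = 1).
# A tiny tolerance absorbs floating-point rounding.
for n in range(2, 201):
    assert math.log(math.factorial(n)) <= n * math.log(n) + 1e-9
print("log(n!) <= n log n holds for n = 2..200")
```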
Display of Growth of Functions
(Graph omitted. Note the difference in the behavior of the functions as n gets larger.)
Logarithms, Powers, and Exponents
• If 1 < c < d, then nᶜ is O(nᵈ), but nᵈ is not O(nᶜ).
  (The relationship between polynomials depends on their degree.)
• If b > 1 and c and d are positive, then (log_b n)ᶜ is O(nᵈ), but nᵈ is not O((log_b n)ᶜ).
  (Every positive power of a logarithm of base > 1 is big-O of every positive power of n, but the reverse never holds.)
• If b > 1 and d > 0, then nᵈ is O(bⁿ), but bⁿ is not O(nᵈ).
  (Every power of n is big-O of every exponential function with base > 1, but the reverse never holds.)
• If 1 < b < c, then bⁿ is O(cⁿ), but cⁿ is not O(bⁿ).
  (The relationship between exponentials depends on their base.)
Combinations of Functions
• If f₁(x) is O(g₁(x)) and f₂(x) is O(g₂(x)), then (f₁ + f₂)(x) is O(max(|g₁(x)|, |g₂(x)|)).
• If f₁(x) and f₂(x) are both O(g(x)), then (f₁ + f₂)(x) is O(g(x)).
• If f₁(x) is O(g₁(x)) and f₂(x) is O(g₂(x)), then (f₁f₂)(x) is O(g₁(x)g₂(x)).
Combinations of Functions
• If f₁(x) is O(g₁(x)) and f₂(x) is O(g₂(x)), then (f₁ + f₂)(x) is O(max(|g₁(x)|, |g₂(x)|)).
• By the definition of big-O notation, there are constants C₁, C₂, k₁, k₂ such that |f₁(x)| ≤ C₁|g₁(x)| when x > k₁ and |f₂(x)| ≤ C₂|g₂(x)| when x > k₂.
• |(f₁ + f₂)(x)| = |f₁(x) + f₂(x)|
    ≤ |f₁(x)| + |f₂(x)|  (by the triangle inequality |a + b| ≤ |a| + |b|)
• |f₁(x)| + |f₂(x)| ≤ C₁|g₁(x)| + C₂|g₂(x)|
    ≤ C₁|g(x)| + C₂|g(x)|  where g(x) = max(|g₁(x)|, |g₂(x)|)
    = (C₁ + C₂)|g(x)|
    = C|g(x)|  where C = C₁ + C₂
• Therefore |(f₁ + f₂)(x)| ≤ C|g(x)| whenever x > k, where k = max(k₁, k₂).
Big-Omega Notation
(Ω is the upper-case version of the lower-case Greek letter ω.)
Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k such that
|f(x)| ≥ C|g(x)| when x > k.
• We say that “f(x) is big-Omega of g(x).”
• Big-O gives an upper bound on the growth of a function, while big-Omega gives a lower bound. Big-Omega tells us that a function grows at least as fast as another.
• f(x) is Ω(g(x)) if and only if g(x) is O(f(x)). This follows from the definitions.
Big-Omega Notation
Example: Show that f(x) = 8x³ + 5x² + 7 is Ω(g(x)), where g(x) = x³.
Big-Omega Notation
Example: Show that f(x) = 8x³ + 5x² + 7 is Ω(g(x)), where g(x) = x³.
Solution: f(x) = 8x³ + 5x² + 7 ≥ 8x³ for all positive real numbers x.
• Is it also the case that g(x) = x³ is O(8x³ + 5x² + 7)?
Big-Theta Notation
(Θ is the upper-case version of the lower-case Greek letter θ.)
• Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. The function f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)).
• We say that “f is big-Theta of g(x),” that “f(x) is of order g(x),” and that “f(x) and g(x) are of the same order.”
• f(x) is Θ(g(x)) if and only if there exist constants C₁, C₂, and k such that C₁g(x) ≤ f(x) ≤ C₂g(x) if x > k. This follows from the definitions of big-O and big-Omega.
Big-Theta Notation
Example: Show that the sum of the first n positive integers is Θ(n²).
Big-Theta Notation
Example: Show that the sum of the first n positive integers is Θ(n²).
Solution: Let f(n) = 1 + 2 + ⋯ + n.
• We have already shown that f(n) is O(n²).
• To show that f(n) is Ω(n²), we need a positive constant C such that f(n) > Cn² for sufficiently large n. Summing only the terms greater than n/2, we obtain the inequality
  1 + 2 + ⋯ + n ≥ ⌈n/2⌉ + (⌈n/2⌉ + 1) + ⋯ + n
    ≥ ⌈n/2⌉ + ⌈n/2⌉ + ⋯ + ⌈n/2⌉
    = (n − ⌈n/2⌉ + 1)⌈n/2⌉
    ≥ (n/2)(n/2) = n²/4
• Taking C = 1/4, f(n) ≥ Cn² for all positive integers n. Hence, f(n) is Ω(n²), and we can conclude that f(n) is Θ(n²).
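The two bounds in this argument, n²/4 ≤ 1 + 2 + ⋯ + n ≤ n², can be spot-checked numerically over a finite range (an illustration, not a proof):

```python
# Verify n^2/4 <= 1 + 2 + ... + n <= n^2 for n = 1..999,
# the witnesses used to show the sum is Theta(n^2).
for n in range(1, 1000):
    s = sum(range(1, n + 1))
    assert n * n / 4 <= s <= n * n
print("n^2/4 <= sum <= n^2 holds for n = 1..999")
```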
Big-Theta Notation
Example: Show that f(x) = 3x² + 8x log x is Θ(x²).
Big-Theta Notation
Example: Show that f(x) = 3x² + 8x log x is Θ(x²).
Solution:
• 3x² + 8x log x ≤ 11x² for x > 1, since 0 ≤ 8x log x ≤ 8x².
• Hence, 3x² + 8x log x is O(x²).
• x² is clearly O(3x² + 8x log x).
• Hence, 3x² + 8x log x is Θ(x²).
Big-Theta Notation
• When f(x) is Θ(g(x)), it must also be the case that g(x) is Θ(f(x)).
• Note that f(x) is Θ(g(x)) if and only if it is the case that f(x) is O(g(x)) and g(x) is O(f(x)).
• Sometimes writers are careless and write as if big-O notation has the same meaning as big-Theta.
Polynomials
Theorem: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where a₀, a₁, ..., aₙ are real numbers with aₙ ≠ 0. Then f(x) is of order xⁿ (or Θ(xⁿ)).
(The proof is an exercise.)
Example:
• The polynomial f(x) = 8x⁵ + 5x² + 10 is of order x⁵ (or Θ(x⁵)).
• The polynomial f(x) = 8x¹⁹⁹ + 7x¹⁰⁰ + x⁹⁹ + 5x² + 25 is of order x¹⁹⁹ (or Θ(x¹⁹⁹)).
Section 3.3
Section Summary
• Time Complexity
• Worst-Case Complexity
• Understanding the Complexity of Algorithms
The Complexity of Algorithms
• Given an algorithm, how efficient is this algorithm for
solving a problem given input of a particular size? To
answer this question, we ask:
• How much time does this algorithm use to solve a problem?
• How much computer memory does this algorithm use to solve
a problem?
• When we analyze the time the algorithm uses to solve the
problem given input of a particular size, we are studying
the time complexity of the algorithm.
• When we analyze the computer memory the algorithm
uses to solve the problem given input of a particular size,
we are studying the space complexity of the algorithm.
The Complexity of Algorithms
• In this course, we focus on time complexity. The space
complexity of algorithms is studied in later courses.
• We will measure time complexity in terms of the number
of operations an algorithm uses and we will use big-0 and
big-Theta notation to estimate the time complexity.
• We can use this analysis to see whether it is practical to use
this algorithm to solve problems with input of a particular
size. We can also compare the efficiency of different
algorithms for solving the same problem.
• We ignore implementation details (including the data
structures used and both the hardware and software
platforms) because it is extremely complicated to consider
them.
Time Complexity
• To analyze the time complexity of algorithms, we determine the number of operations, such as comparisons and arithmetic operations (addition, multiplication, etc.). We can estimate the time a computer may actually use to solve a problem using the amount of time required to do basic operations.
• We ignore minor details, such as the “housekeeping” aspects of the algorithm.
• We will focus on the worst-case time complexity of an algorithm.
This provides an upper bound on the number of operations an
algorithm uses to solve a problem with input of a particular size.
• It is usually much more difficult to determine the average case
time complexity of an algorithm. This is the average number of
operations an algorithm uses to solve a problem over all inputs of
a particular size.
Complexity Analysis of Algorithms
Example: Describe the time complexity of the algorithm for finding the maximum element in a finite sequence.

procedure max(a1, a2, ..., an: integers)
max := a1
for i := 2 to n
  if max < ai then max := ai
return max {max is the largest element}

Solution: Count the number of comparisons.
• The comparison max < ai is made n − 1 times.
• Each time i is incremented, a test is made to see if i ≤ n.
• One last comparison determines that i > n.
• Exactly 2(n − 1) + 1 = 2n − 1 comparisons are made.
Hence, the time complexity of the algorithm is Θ(n).
Worst-Case Complexity of Bubble Sort
Example: What is the worst-case complexity of bubble sort in terms of the number of comparisons made?

procedure bubblesort(a1, ..., an: real numbers with n ≥ 2)
for i := 1 to n − 1
  for j := 1 to n − i
    if aj > aj+1 then interchange aj and aj+1
{a1, ..., an is now in increasing order}

Solution: A sequence of n − 1 passes is made through the list. On the ith pass, n − i comparisons are made:
(n − 1) + (n − 2) + ⋯ + 2 + 1 = n(n − 1)/2
The worst-case complexity of bubble sort is Θ(n²), since n(n − 1)/2 = n²/2 − n/2.
Worst-Case Complexity of Insertion Sort
Example: What is the worst-case complexity of insertion sort in terms of the number of comparisons made?

procedure insertion sort(a1, ..., an: real numbers with n ≥ 2)
for j := 2 to n
  i := 1
  while aj > ai
    i := i + 1
  m := aj
  for k := 0 to j − i − 1
    aj−k := aj−k−1
  ai := m

Solution: The total number of comparisons is:
2 + 3 + ⋯ + n = n(n + 1)/2 − 1
Therefore the complexity is Θ(n²).
Matrix Multiplication Algorithm
• The definition of matrix multiplication can be expressed as an algorithm: C = AB, where C is an m × n matrix that is the product of the m × k matrix A and the k × n matrix B.
• This algorithm carries out matrix multiplication based on its definition.

procedure matrix multiplication(A, B: matrices)
  {A = [aij] is an m × k matrix, B = [bij] is a k × n matrix}
for i := 1 to m
  for j := 1 to n
    cij := 0
    for q := 1 to k
      cij := cij + aiq·bqj
return C {C = [cij] is the product of A and B}
Complexity of Matrix Multiplication
Example: How many additions of integers and multiplications of integers are used by the matrix multiplication algorithm to multiply two n × n matrices?
Solution: There are n² entries in the product. Finding each entry requires n multiplications and n − 1 additions. Hence, n³ multiplications and n²(n − 1) additions are used.
Hence, the complexity of matrix multiplication is O(n³).
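The definition-based algorithm translates directly into code; the triple loop below makes the n³ multiplications explicit (the function name matrix_multiply is illustrative):

```python
def matrix_multiply(A, B):
    """Multiply an m x k matrix A by a k x n matrix B by the definition."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            # Each entry takes k multiplications and k - 1 additions.
            for q in range(k):
                C[i][j] += A[i][q] * B[q][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```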
Algorithmic Paradigms
• An algorithmic paradigm is a general approach based on a particular concept for constructing algorithms to solve a variety of problems.
• Brute-force algorithms solve the problem in the most
straightforward manner, without taking advantage of
any ideas that can make the algorithm more efficient:
sequential search, bubble sort, insertion sort.
• There are many other paradigms: Greedy algorithms,
divide-and-conquer algorithms, dynamic programming,
backtracking, and probabilistic algorithms.
Algorithms

TABLE 1: Commonly Used Terminology for the Complexity of Algorithms.
Complexity            Terminology
Θ(1)                  Constant complexity
Θ(log n)              Logarithmic complexity
Θ(n)                  Linear complexity
Θ(n log n)            Linearithmic complexity
Θ(n^b)                Polynomial complexity
Θ(bⁿ), where b > 1    Exponential complexity
Θ(n!)                 Factorial complexity
Algorithms

TABLE 2: The Computer Time Used by Algorithms.
Problem Size n | log n        | n       | n log n    | n²       | 2ⁿ         | n!
10             | 3×10⁻¹¹ s    | 10⁻¹⁰ s | 3×10⁻¹⁰ s  | 10⁻⁹ s   | 10⁻⁸ s     | 3×10⁻⁷ s
10²            | 7×10⁻¹¹ s    | 10⁻⁹ s  | 7×10⁻⁹ s   | 10⁻⁷ s   | 4×10¹¹ yr  | *
10³            | 1.0×10⁻¹⁰ s  | 10⁻⁸ s  | 1×10⁻⁷ s   | 10⁻⁵ s   | *          | *
10⁴            | 1.3×10⁻¹⁰ s  | 10⁻⁷ s  | 1×10⁻⁶ s   | 10⁻³ s   | *          | *
10⁵            | 1.7×10⁻¹⁰ s  | 10⁻⁶ s  | 2×10⁻⁵ s   | 0.1 s    | *          | *
10⁶            | 2×10⁻¹⁰ s    | 10⁻⁵ s  | 2×10⁻⁴ s   | 0.17 min | *          | *
Times of more than 10¹⁰⁰ years are indicated with an *.
Complexity of Problems
• Tractable Problem: There exists a polynomial-time algorithm to solve this problem. These problems are said to belong to the class P.
• Intractable Problem: There does not exist a polynomial-time algorithm to solve this problem.
• Unsolvable Problem: No algorithm exists to solve this problem, e.g., the halting problem.
• Class NP: A solution can be checked in polynomial time, but no polynomial-time algorithm has been found for finding a solution to problems in this class.
• Class NP-Complete: If you find a polynomial-time algorithm for one member of the class, it can be used to solve all the problems in the class NP.
P Versus NP Problem
(Stephen Cook, born 1939)
• The P versus NP problem asks whether P = NP: are there problems whose solutions can be checked in polynomial time but that cannot be solved in polynomial time?
• Note that just because no one has found a polynomial-time algorithm is different from proving that the problem cannot be solved by a polynomial-time algorithm.
• If a polynomial-time algorithm for any of the problems in the NP-complete class were found, then that algorithm could be used to obtain a polynomial-time algorithm for every problem in the class NP.
• It is generally believed that P ≠ NP, since no one has been able to find a polynomial-time algorithm for any of the problems in the NP-complete class.
• The problem of P versus NP remains one of the most famous unsolved problems in mathematics (including theoretical computer science). The Clay Mathematics Institute has offered a prize of $1,000,000 for a solution.
