ADSA-II-1 (R20) UNIT 3.5, 4
DYNAMIC PROGRAMMING
We will formulate this in terms of solutions that use only the items 1…k. If we have the optimal solutions for a given weight u using the first k-1 items, then the optimal solution for the first k items can be described as follows; there are two main cases:
(1) Case wk > u: as the weight of item k is greater than the weight constraint that we are working with, item k cannot be included in the knapsack, so the value remains S[k-1, u].
(2) Case wk ≤ u: if item k is included, the value will be equal to the value of item k plus the maximum value obtainable from the k-1 previous items subject to a total weight of u - wk, since we must fit wk without exceeding the weight limit u. This is S[k-1, u-wk] + vk. This leaves us with two sub-cases:
(a) if this total is less than the maximum for the first k-1 items, we do not include item k, and the new value is S[k-1, u];
(b) otherwise we include item k, and the new value is S[k-1, u-wk] + vk.
Combining the cases gives the recurrence
S[k, u] = S[k-1, u]                                if wk > u
S[k, u] = max{ S[k-1, u], S[k-1, u-wk] + vk }      otherwise
Algorithm:
Algorithm Knapsack(w[1…n], v[1…n], W)
{
    For u = 0 to W
        S[0,u] = 0
    End for
    For i = 0 to n
        S[i,0] = 0
    End for
    For i = 1 to n
        For u = 1 to W
            If (w[i] <= u) then
                If (v[i] + S[i-1, u-w[i]] > S[i-1, u]) then
                    S[i,u] = v[i] + S[i-1, u-w[i]]
                Else
                    S[i,u] = S[i-1, u]
                End if
            Else
                S[i,u] = S[i-1, u]
            End if
        End for
    End for
    Print S[n,W]
    // Trace back to find the items placed in the knapsack
    i = n; k = W
    While (i > 0 and k > 0)
        If (S[i,k] != S[i-1,k]) then
            Print i
            k = k - w[i]
        End if
        i = i - 1
    End while
}
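The table-filling and trace-back above translate directly into a short Python sketch (the function name knapsack and the 1-indexed padding are our choices, not from the text):

def knapsack(w, v, W):
    """0/1 knapsack: w, v are 1-indexed lists of weights/values (index 0 unused)."""
    n = len(w) - 1
    # S[i][u] = best value using items 1..i with weight limit u
    S = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for u in range(1, W + 1):
            if w[i] <= u and v[i] + S[i - 1][u - w[i]] > S[i - 1][u]:
                S[i][u] = v[i] + S[i - 1][u - w[i]]
            else:
                S[i][u] = S[i - 1][u]
    # Backtrack to find which items were taken
    items, i, k = [], n, W
    while i > 0 and k > 0:
        if S[i][k] != S[i - 1][k]:
            items.append(i)
            k -= w[i]
        i -= 1
    return S[n][W], sorted(items)

# Data from the example that follows: capacity 5, weights (2,3,4,5), values (4,8,9,11)
print(knapsack([0, 2, 3, 4, 5], [0, 4, 8, 9, 11], 5))   # -> (12, [1, 2])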
Example: Let us trace this algorithm through the following example. The capacity of the knapsack is W = 5.

item (i):   1   2   3   4
w[i]    :   2   3   4   5
v[i]    :   4   8   9  11
After the initialization step of the first row and first column, the matrix S will be completed as
follows:
                     Weight (u)
Item (i)    0   1   2   3   4   5
   0        0   0   0   0   0   0
   1        0
   2        0
   3        0
   4        0
For i=1, the computations for inner loop (u = 0,1,2,3,4,5) are as follows:
w[1] = 2, v[1] = 4.
u=0, w[1]>u, S[1,0] = S[0,0] = 0.
u=1, w[1]>u, S[1,1] = S[0,1] = 0.
u=2, w[1] = u and (v[1]+S[0,0]) > S[0,2], so S[1,2] = v[1] + S[0,0] = 4+0 = 4
u=3, w[1] < u and (v[1]+S[0,1]) > S[0,3], so S[1,3] = v[1] + S[0,1] = 4+0 = 4
u=4, w[1] < u and (v[1]+S[0,2]) > S[0,4], so S[1,4] = v[1] + S[0,2] = 4+0 = 4
u=5, w[1] < u and (v[1]+S[0,3]) > S[0,5], so S[1,5] = v[1] + S[0,3] = 4+0 = 4
                     Weight (u)
Item (i)    0   1   2   3   4   5
   0        0   0   0   0   0   0
   1        0   0   4   4   4   4
   2        0
   3        0
   4        0
For i=2, the computations for inner loop (u = 0,1,2,3,4,5) are as follows:
w[2] = 3, v[2] = 8.
u=0, w[2]>u, S[2,0] = S[1,0] = 0.
u=1, w[2]>u, S[2,1] = S[1,1] = 0.
u=2, w[2]>u, S[2,2] = S[1,2] = 4.
u=3, w[2] = u and (v[2]+S[1,0]) > S[1,3], so S[2,3] = v[2] + S[1,0] = 8+0 = 8
u=4, w[2] < u and (v[2]+S[1,1]) > S[1,4], so S[2,4] = v[2] + S[1,1] = 8+0 = 8
u=5, w[2] < u and (v[2]+S[1,2]) > S[1,5], so S[2,5] = v[2] + S[1,2] = 8+4 = 12
                     Weight (u)
Item (i)    0   1   2   3   4   5
   0        0   0   0   0   0   0
   1        0   0   4   4   4   4
   2        0   0   4   8   8  12
   3        0
   4        0
For i=3, the computations for inner loop (u = 0,1,2,3,4,5) are as follows:
w[3] = 4, v[3] = 9.
u=0, w[3] > u, S[3,0] = S[2,0] = 0.
u=1, w[3] > u, S[3,1] = S[2,1] = 0.
u=2, w[3] > u, S[3,2] = S[2,2] = 4.
u=3, w[3] > u, S[3,3] = S[2,3] = 8.
u=4, w[3] = u and (v[3]+S[2,0]) > S[2,4], so S[3,4] = v[3] + S[2,0] = 9+0 = 9
u=5, w[3] < u and (v[3]+S[2,1]) < S[2,5], so S[3,5] = S[2,5] = 12
For i=4, the computations for inner loop (u = 0,1,2,3,4,5) are as follows:
w[4] = 5, v[4] = 11.
u=0, w[4]>u, S[4,0] = S[3,0] = 0.
u=1, w[4]>u, S[4,1] = S[3,1] = 0.
u=2, w[4]>u, S[4,2] = S[3,2] = 4.
u=3, w[4]>u, S[4,3] = S[3,3] = 8.
u=4, w[4]>u, S[4,4] = S[3,4] = 9.
u=5, w[4]=u and (v[4]+S[3,0])<S[3,5], S[4,5] = S[3,5] = 12
                     Weight (u)
Item (i)    0   1   2   3   4   5
   0        0   0   0   0   0   0
   1        0   0   4   4   4   4
   2        0   0   4   8   8  12
   3        0   0   4   8   9  12
   4        0   0   4   8   9  12
We have essentially found the maximum value that can be placed in the knapsack for any weight u, using all of the items up to i. To find the solution for the whole problem, we look at S[n,W]. In this case, the maximum value, S[4,5] from the above table, is 12.
To reconstruct the solution, we note that if S[i,k] ≠ S[i-1,k], this happened because we added item i to the knapsack to increase the value. Thus, we can backtrack from i = n, k = W, adding an item to the list every time we see that S[i,k] ≠ S[i-1,k].
For our example, we start with i = 4 and k = 5. Since S[4,5] = S[3,5] = S[2,5], we know that items 4 and 3 are not in the knapsack. However, when i = 2, S[2,5] ≠ S[1,5], so item 2 is in the knapsack, and we update k to be 5 - 3 = 2. When i = 1, S[1,2] ≠ S[0,2], so we add item 1, and we are done. The final knapsack contains items 1 and 2, for a total weight of 5 and a value of 12.
ALL PAIRS SHORTEST PATHS:
Let Ak(i, j) denote the length of a shortest path from i to j going through no intermediate vertex of index greater than k. Clearly, A0(i, j) = cost(i, j), 1 ≤ i, j ≤ n. We can obtain a recurrence for Ak(i, j) using an argument similar to that used before. A shortest path from i to j going through no vertex higher than k either goes through vertex k or it does not. If it does, Ak(i, j) = Ak-1(i, k) + Ak-1(k, j). If it does not, then no intermediate vertex has index greater than k-1; hence Ak(i, j) = Ak-1(i, j). Combining, we get

Ak(i, j) = min{ Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j) },   k ≥ 1.
Example:
The given directed graph has three vertices 1, 2 and 3 (edges 1→2 of cost 4, 1→3 of cost 11, 2→1 of cost 6, 2→3 of cost 2, 3→1 of cost 3) and has the cost matrix A(0):

          1     2     3
   1      0     4    11
   2      6     0     2
   3      3     ∞     0
The initial matrix A(0) and its values after the three iterations, A(1), A(2) and A(3), are calculated as follows. For the A(2) matrix, where k = 2:

A2(2, 3) = min{ A1(2, 3), A1(2, 2) + A1(2, 3) } = min{ 2, 0+2 } = 2
A2(3, 1) = min{ A1(3, 1), A1(3, 2) + A1(2, 1) } = min{ 3, 7+6 } = 3
A2(3, 2) = min{ A1(3, 2), A1(3, 2) + A1(2, 2) } = min{ 7, 7+0 } = 7

For the A(3) matrix, where k = 3,
A3(3, 2) = min{ A2(3, 2), A2(3, 3) + A2(3, 2) } = min{ 7, 0+7 } = 7

Finally, the values in the A(3) table are the lengths of the shortest paths between every pair of vertices in the given graph.
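A minimal Python sketch of this computation (the function name floyd, 0-based indexing and float('inf') for ∞ are our choices); run on the cost matrix of the example it produces the A(3) values.

def floyd(cost):
    """All-pairs shortest paths; cost is an n x n matrix with float('inf') for missing edges."""
    n = len(cost)
    A = [row[:] for row in cost]          # A starts as A(0) = cost
    for k in range(n):                    # allow vertex k+1 as an intermediate
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

INF = float('inf')
cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
for row in floyd(cost):
    print(row)   # final rows: [0, 4, 6], [5, 0, 2], [3, 7, 0]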
OPTIMAL BINARY SEARCH TREES:
[Figure: two binary search trees, (a) and (b), containing the identifiers do, for, if, int and while.]
In the worst case, searching tree (a) requires four comparisons to find an identifier, whereas tree (b) requires only three.
On the average the two trees need 12/5 and 11/5 comparisons, respectively.
For example, in the case of tree (a), it takes 1, 2, 2, 3 and 4 comparisons, respectively, to find the identifiers for, do, while, int and if. Thus the average number of comparisons is (1+2+2+3+4)/5 = 12/5. This calculation assumes that each identifier is searched for with equal probability and that no unsuccessful searches (i.e., searches for identifiers not in the tree) are made.
In general situations, we can expect different identifiers to be searched
for with different frequencies (or probabilities). In addition, we can expect
unsuccessful searches also to be made.
Let us assume that the given set of identifiers is {a1, a2… an} with a1 < a2
< ….<an .
Let p(i) be the probability with which we search for ai.
Let q(i) be the probability of an unsuccessful search.
Given this data, we wish to construct an optimal binary search tree for {a1,
a2,…, an}.
Algorithm:
Algorithm OBST(p, q, n)
{
    For i := 0 to n-1 do
    {
        w[i,i] := q[i]; r[i,i] := 0; c[i,i] := 0;
        // Optimal tree with one node
        w[i,i+1] := q[i] + q[i+1] + p[i+1];
        r[i,i+1] := i+1;
        c[i,i+1] := q[i] + q[i+1] + p[i+1];
    }
    w[n,n] := q[n]; r[n,n] := 0; c[n,n] := 0;
    For m := 2 to n do               // find optimal trees with m nodes
        For i := 0 to n-m do
        {
            j := i + m;
            w[i,j] := w[i,j-1] + p[j] + q[j];
            k := a value of l with i < l <= j that minimizes c[i,l-1] + c[l,j];
            c[i,j] := w[i,j] + c[i,k-1] + c[k,j];
            r[i,j] := k;
        }
    Write(c[0,n], w[0,n], r[0,n]);
}
Solution:
Initially,
w(0,0) = 2, w(1,1) = 3, w(2,2) = 1, w(3,3) = 1, w(4,4) = 1, and c(i,i) = r(i,i) = 0 for every i.
The quantities are computed with
w(i, j) = p(j) + q(j) + w(i, j-1)
c(i, j) = w(i, j) + min{ c(i, k-1) + c(k, j) } taken over i < k ≤ j, with r(i, j) the minimizing k.
By using the above equation for c(i, j) and the observation for w(i, j), we get:
r(1, 3) = 2
w(2, 4) = p(4) + q(4) + w(2, 3) = 5
c(2, 4) = w(2, 4) + min{ c(2, 2) + c(3, 4), c(2, 3) + c(4, 4) } = 8      (2 < k ≤ 4: k = 3 and k = 4)
r(2, 4) = 3
This process is repeated until w(0, 4), c(0, 4) and r(0, 4) are obtained. The table below shows the results of this computation.
The box in row i and column j shows the values of w(j, j + i), c(j, j + i) and
r(j, j + i) respectively.
The computation is carried out by row from row 0 to row 4.
Row i = 0:  w00=2,  c00=0,  r00=0 | w11=3,  c11=0,  r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
Row i = 1:  w01=8,  c01=8,  r01=1 | w12=7,  c12=7,  r12=2 | w23=3, c23=3, r23=3 | w34=3, c34=3, r34=4
Row i = 2:  w02=12, c02=19, r02=1 | w13=9,  c13=12, r13=2 | w24=5, c24=8, r24=3
Row i = 3:  w03=14, c03=25, r03=2 | w14=11, c14=19, r14=2
Row i = 4:  w04=16, c04=32, r04=2
From the table we see that c(0, 4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4). The root of tree t04 is a2. Hence, the left subtree is t01 and the right subtree is t24. Tree t01 has root a1 and subtrees t00 and t11. Tree t24 has root a3; its left subtree is t22 and its right subtree is t34. Thus, with the data in the table it is possible to reconstruct t04. The following figure shows t04.
[Figure: the optimal tree t04, with root if and children do and int.]
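A small Python sketch of the same computation (the function name obst is our choice; the p and q values below are the ones implied by the w entries above, e.g. w(0,0) = q(0) = 2 and w(0,1) = q(0) + q(1) + p(1) = 8). It reproduces c(0, 4) = 32 with root r(0, 4) = 2.

def obst(p, q, n):
    """Optimal BST tables: p[1..n] success frequencies, q[0..n] failure frequencies."""
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                       # empty tree t(i, i)
    for m in range(1, n + 1):                # trees with m internal nodes
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # choose the root k (i < k <= j) minimizing c[i][k-1] + c[k][j]
            best_k = min(range(i + 1, j + 1), key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][best_k - 1] + c[best_k][j]
            r[i][j] = best_k
    return w, c, r

p = [0, 3, 3, 1, 1]      # p(1..4), index 0 unused
q = [2, 3, 1, 1, 1]      # q(0..4)
w, c, r = obst(p, q, 4)
print(c[0][4], r[0][4])  # -> 32 2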
TRAVELLING SALESPERSON PROBLEM:
Let g(i, S) be the length of a shortest path that starts at vertex i, goes through all the vertices in S, and ends at vertex 1, so that g(i, φ) = Ci1 and g(i, S) = min over j in S of { Cij + g(j, S - {j}) }. For the given graph,
g(2, φ) = C21 = 5
g(3, φ) = C31 = 6
g(4, φ) = C41 = 8
Using the given equation, we compute g(i, S) with |S| = 1, i ≠ 1, 1 ∉ S and i ∉ S:
g(2,{3}) = C23 + g(3, φ) = 15
g(2,{4}) = C24 + g(4, φ) = 18
g(3,{2}) = C32 + g(2, φ) = 18
g(3,{4}) = C34 + g(4, φ) = 20
g(4,{2}) = C42 + g(2, φ) = 13
g(4,{3}) = C43 + g(3, φ) = 15
Next we compute g(i, S) with |S| = 2, i ≠ 1, 1 ∉ S and i ∉ S.
g(2,{3,4}) = min{ C23 + g(3,{4}) , C24 + g(4,{3}) } = 25
g(3,{2,4}) = min{ C32 + g(2,{4}) , C34 + g(4,{2}) } = 25
g(4,{2,3}) = min{ C42 + g(2,{3}) , C43 + g(3,{2}) } = 23
Finally, we obtain
g(1,{2,3,4}) = min{C12+g(2,{3,4}), C13+g(3,{2,4}), C14+g(4,{2,3})}
= min{ 35, 40, 43 }
= 35.
An optimal tour of the given graph has length “35”.
A tour of this length can be constructed if we retain with each g(i,S) the
value of j that minimizes the right-hand side of the given equation.
Let j(i, S) be this value. Then j(1, {2,3,4}) = 2; thus the tour starts from 1 and goes to 2.
The remaining tour can be obtained from g(2, {3,4}). Now j(2, {3,4}) = 4, so the next edge is <2, 4>.
The remaining tour is given by g(4, {3}), and j(4, {3}) = 3.
The optimal tour is 1, 2, 4, 3, 1.
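A Python sketch of the same dynamic-programming computation (the function names g and tour are our choices; the cost matrix below is the one implied by the Cij values used above). It reproduces the optimal length 35 and the tour 1, 2, 4, 3, 1.

from functools import lru_cache

# Cost matrix implied by the Cij values above (1-indexed via a padding row and column)
C = [[0, 0,  0,  0,  0],
     [0, 0, 10, 15, 20],
     [0, 5,  0,  9, 10],
     [0, 6, 13,  0, 12],
     [0, 8,  8,  9,  0]]
n = 4

@lru_cache(maxsize=None)
def g(i, S):
    """Length of a shortest path from i through every vertex of frozenset S, ending at vertex 1."""
    if not S:
        return C[i][1]
    return min(C[i][j] + g(j, S - {j}) for j in S)

def tour():
    best = g(1, frozenset({2, 3, 4}))
    path, i, S = [1], 1, frozenset({2, 3, 4})
    while S:
        j = min(S, key=lambda j: C[i][j] + g(j, S - {j}))   # the j(i, S) of the text
        path.append(j)
        i, S = j, S - {j}
    return best, path + [1]

print(tour())    # -> (35, [1, 2, 4, 3, 1])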
MATRIX CHAIN MULTIPLICATION:
Another Example:
• Suppose we want to compute A1·A2·A3·A4.
• Matrix Multiplication is associative, so we can do the multiplication in several
different orders.
• A1 is 10 by 100 matrix
• A2 is 100 by 5 matrix
• A3 is 5 by 50 matrix
• A4 is 50 by 1 matrix
• A1A2A3A4 is a 10 by 1 matrix
One approach is to check every possible way of parenthesizing the product. The number of alternative parenthesizations, P(n), of a product of n matrices satisfies

P(n) = 1                                      if n = 1
P(n) = Σ (k = 1 to n-1) P(k) · P(n-k)         if n ≥ 2

which grows exponentially with n, so exhaustive checking is impractical; dynamic programming does better.
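A tiny memoized Python check of this recurrence (the function name P is our choice); the first few values are 1, 1, 2, 5, 14, 42, so P(4) = 5 is the number of ways to fully parenthesize the four-matrix product above.

from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of ways to fully parenthesize a product of n matrices."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))

print([P(n) for n in range(1, 7)])   # -> [1, 1, 2, 5, 14, 42]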
Algorithm:
Matrix-Chain-Order(p)
{
    n = length[p] - 1
    for i = 1 to n
        do m[i,i] = 0
    for l = 2 to n                            // l is the chain length
        do for i = 1 to n-l+1
            do j = i+l-1
               m[i,j] = ∞
               for k = i to j-1
                   do q = m[i,k] + m[k+1,j] + p[i-1]·p[k]·p[j]
                      if q < m[i,j]
                          then m[i,j] = q
                               s[i,j] = k
    return m and s
}
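A direct Python transcription of Matrix-Chain-Order, plus a small helper that reads the split table s to print the best parenthesization (both function names are our choices), run on the dimensions from the example above (p = [10, 100, 5, 50, 1]).

import math

def matrix_chain_order(p):
    """p[0..n]: matrix A_i is p[i-1] x p[i]. Returns cost table m and split table s (1-indexed)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):             # try each split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def parens(s, i, j):
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + parens(s, i, k) + parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([10, 100, 5, 50, 1])
print(m[1][4], parens(s, 1, 4))   # -> 1750 (A1(A2(A3A4)))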
BACKTRACKING:
N-QUEENS PROBLEM
[Boards for n = 1, n = 2 and n = 3: the 1 × 1 board has the trivial solution, while the 2 × 2 and 3 × 3 boards have no solution.]
[A 4 × 4 board with one queen placed in each of rows 1-4.]
The 4 × 4 board (and the following state space tree) shows a solution for n = 4.
[Figure: part of the state space tree for the 4-queens problem, starting from the empty placement (Φ) and branching on the column chosen for each queen.]
Algorithm 6.3(a):rn_queens(k,n)
1. for row[k]=1 to n
2. if position_ok(k,n)=true, then
3. if k=n, then
4. print solution
5. else
6. rn_queens(k+1,n)
7. endif
8. endif
9. endfor.
Algorithm 6.3(b):position_ok(k,n)
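A Python sketch of Algorithms 6.3(a) and 6.3(b) together (all names are our choices): position_ok returns true when the queen in column k shares no row and no diagonal with the queens already placed in columns 1…k-1.

def position_ok(row, k):
    """Queen in column k sits at row[k]; check rows and diagonals against columns 1..k-1."""
    for i in range(1, k):
        if row[i] == row[k] or abs(row[i] - row[k]) == abs(i - k):
            return False
    return True

def rn_queens(k, n, row, solutions):
    for r in range(1, n + 1):
        row[k] = r
        if position_ok(row, k):
            if k == n:
                solutions.append(row[1:])
            else:
                rn_queens(k + 1, n, row, solutions)

def n_queens(n):
    row = [0] * (n + 1)          # row[k] = row of the queen in column k (1-indexed)
    solutions = []
    rn_queens(1, n, row, solutions)
    return solutions

print(n_queens(4))   # -> [[2, 4, 1, 3], [3, 1, 4, 2]]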
Time Complexity:
We obtain an upper bound on the running time of the algorithm by bounding the number of times rn_queens(k, n) is called for each k ≤ n. There are n(n-1)…(n-k+2) ways to place queens in the first k-1 columns in distinct rows. Ignoring recursive calls, rn_queens(k, n) executes in Θ(n) time for k < n; for k = n there is at most one placement possible for the queen, and the loop in rn_queens still executes in Θ(n) time. There are n(n-1)…2 ways for the queens to have been placed in the first n-1 columns, so the worst-case running time of rn_queens(n, n) is
n[n(n-1)…2] = n × n!
The algorithm therefore runs in O(n × n!) time.
SUM OF SUBSETS PROBLEM:
The sum of subsets problem is the following: given a set X = {x1, x2, …, xn} of n distinct positive integers and a positive integer S, find all subsets of {x1, x2, …, xn} that sum to S.
For example, if X = {1, 2, 3, 4, 5, 6} and S = 12, there is more than one subset whose sum is 12. The subsets are {1,2,3,6}, {1,2,4,5}, {1,5,6}, {2,4,6} and {3,4,5}.
We will assume a binary state space tree.
The nodes at depth 1 are for including (yes = 1, no = 0) item 1, the nodes at depth 2 are for item 2, and so on. The left branch includes xi and the right branch excludes xi. The nodes contain the sum of the numbers included so far.
Backtracking consists of doing a DFS of the state space tree, checking whether each node is promising and, if a node is non-promising, backtracking to the node's parent.
We call a node non-promising if it cannot lead to a feasible (or optimal) solution; otherwise it is promising.
The state space tree consisting of expanded nodes only is called the pruned state space tree. The following example shows the pruned state space tree for the sum of subsets problem.
Example: X = {3, 5, 6, 7} and S = 15. There are only 15 nodes in the pruned state space tree shown in the following figure; the full state space tree has 31 nodes (2^(4+1) - 1). A trace of the algorithm is given in the table next to the state space tree. Nodes of the tree are shown by circled numbers. Notice that x1 is considered at level 1 of the tree, x2 at level 2, and so on. We have only one solution, the subset {3, 5, 7}; this occurs at node 6 in the following diagram.
[Figure: pruned state space tree for the example. The solution {3, 5, 7} lies on the path that includes 3 (sum 3), includes 5 (sum 8), excludes 6, and includes 7 (sum 15).]
Pruned State Space Tree - Example
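A short Python sketch of the backtracking DFS described above (all names are our choices). A node is pruned when its running sum already exceeds S or when even including all remaining numbers cannot reach S.

def sum_of_subsets(x, S):
    n = len(x)
    remaining = [sum(x[i:]) for i in range(n + 1)]   # total of x[i..n-1]
    solutions = []

    def dfs(i, chosen, total):
        if total == S:
            solutions.append(chosen)
            return
        # prune: overshoot, or not enough weight left to reach S
        if i == n or total > S or total + remaining[i] < S:
            return
        dfs(i + 1, chosen + [x[i]], total + x[i])    # include x[i]
        dfs(i + 1, chosen, total)                    # exclude x[i]

    dfs(0, [], 0)
    return solutions

print(sum_of_subsets([3, 5, 6, 7], 15))              # -> [[3, 5, 7]]
print(sum_of_subsets([1, 2, 3, 4, 5, 6], 12))        # the five subsets listed earlier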
Graph Coloring
[Figure: six colorings, (i)-(vi), of a four-node example graph.]
Note that all these colorings are sort of equivalent. They all share the following
structure:
The same color is used for both node 1 and node 4. For colorings (i) and
(ii), it is R, for colorings (iii) and (iv), it is G, and for colorings (v) and (vi), it
is B.
Nodes 2 and 3 must have distinct colors different from each other and
from the color used for nodes 1 and 4.
By this observation, it follows that two colorings are equivalent if one can
be transformed into another by permuting the k colors.
We will use the following strategy to find all valid colorings of a graph:
Order nodes arbitrarily
Assign the first node a color.
Given a partial assignment of colors (c1,c2,……ci-1) to the first i-1 nodes, try
to find a color for the i-th node in the graph.
If there is no possible color for the i-th node given the previous choices,
backtrack to a previous solution.
[Figure: the backtracking tree of color assignments for nodes 1-4 with colors R, G, B, whose surviving leaves are the six valid colorings.]
The following algorithm solves the graph coloring problem, generating every valid coloring.
C[1..j-1]: a partial coloring for the first j-1 nodes of the graph.
adj[1..n][1..n]: adjacency matrix for an n-node graph.
k: number of colors (labels).
We invoke the algorithm as Coloring(C, j, k, n), where j is the first node (j = 1).
1. If j = n+1, then
2.     print C; return
3. Endif
4. For i = 1 to k
5.     C[j] = i
6.     If Valid(C, j, n) is true, then
7.         Coloring(C, j+1, k, n)
8.     Endif
9. Endfor.
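A Python sketch of the same backtracking procedure (all names are our choices). The example adjacency matrix encodes a four-node graph consistent with the description above (nodes 1 and 4 not adjacent, every other pair adjacent); the exact edge set of the figure is an assumption.

def graph_colorings(adj, n, k):
    """adj: (n+1) x (n+1) 0/1 adjacency matrix, 1-indexed; returns all valid k-colorings."""
    C = [0] * (n + 1)
    found = []

    def valid(j):
        # node j must differ in color from every already-colored adjacent node
        return all(not (adj[j][i] and C[i] == C[j]) for i in range(1, j))

    def coloring(j):
        if j == n + 1:
            found.append(C[1:])
            return
        for color in range(1, k + 1):
            C[j] = color
            if valid(j):
                coloring(j + 1)
        C[j] = 0

    coloring(1)
    return found

adj = [[0] * 5,
       [0, 0, 1, 1, 0],
       [0, 1, 0, 1, 1],
       [0, 1, 1, 0, 1],
       [0, 0, 1, 1, 0]]
print(len(graph_colorings(adj, 4, 3)))   # -> 6 colorings, equivalent up to permuting the colors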
HAMILTONIAN CYCLES:
[Figure: two undirected graphs on vertices A-F and A-E (captioned "Undirected Graphs"), together with a tree illustrating the search for a Hamiltonian cycle (captioned "Illustration of Hamiltonian").]
Example:
Find a Hamiltonian cycle for the directed graph.
Directed Graph and its Adjacency List:
[Figure: a directed graph on vertices 1-7.]
N:
1 → 2
2 → 3 → 4 → 6
3 → 4
4 → 1 → 7
5 → 3
6 → 1 → 4 → 7
7 → 5
Given a directed graph G = (V, E), we store the graph as an adjacency list: for each vertex v ∈ {1, 2, …, n} we store a list of the vertices w such that (v, w) ∈ E. We store a Hamiltonian cycle as A[1..n], where the cycle is:
A[n]→ A[n-1] → … → A[2] → A[1] → A[n]
We also set up an array D, the out-degree of each vertex. The values are
tabulated as:
vertex   1  2  3  4  5  6  7
D        1  3  1  2  1  3  1
Algorithm : Initialization()
1. For i = 1 to n-1
2. mark [i] =0
3. Endfor
4. mark [n] = 1; A[n] = n
5. Hamilton(n-1)
Algorithm : Hamilton(k)
1. If k = 0, then
2.     process(A)
3. Else
4.     For j = 1 to D[A[k+1]]
5.         w = N[A[k+1]][j]
6.         If mark[w] = 0, then
7.             mark[w] = 1; A[k] = w
8.             Hamilton(k-1)
9.             mark[w] = 0
10.        Endif
11.    Endfor
12. Endif.
Algorithm : Process(A)
1. ok = 0
2. For j = 1 to D[A[1]]
3. If N[A[1],j] = A[n], then ok = 1
4. Endfor
5. If ok = 1, then Print(A).
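A Python sketch of the same search on the example graph (all names are our choices). The cycle is built backwards into A exactly as in the pseudocode, and the final check mirrors Process(A): there must be an edge from A[1] back to the starting vertex A[n] = n.

# Adjacency list N from the example (1-indexed; N[v] lists w with edge v -> w)
N = {1: [2], 2: [3, 4, 6], 3: [4], 4: [1, 7], 5: [3], 6: [1, 4, 7], 7: [5]}
n = 7

def hamilton(k, A, mark, cycles):
    if k == 0:
        if A[n] in N[A[1]]:                  # edge from the last vertex placed back to the start
            cycles.append(A[1:])
        return
    for w in N[A[k + 1]]:
        if not mark[w]:
            mark[w] = True
            A[k] = w
            hamilton(k - 1, A, mark, cycles)
            mark[w] = False

def find_cycles():
    A = [0] * (n + 1)
    mark = [False] * (n + 1)
    mark[n] = True
    A[n] = n                                  # start (and end) the cycle at vertex n
    cycles = []
    hamilton(n - 1, A, mark, cycles)
    return cycles

print(find_cycles())   # each result lists A[1..n]; read the cycle as A[n] -> A[n-1] -> ... -> A[1] -> A[n]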
Basic concepts
Some computational problems are so complex that it is difficult to find efficient algorithms for solving them; such problems are not known to have polynomial time algorithms. Other problems can be solved by polynomial time algorithms, with running times such as O(n), O(log n), O(n log n) and O(n^2). Problems with polynomial time algorithms are regarded as tractable, and problems that require non-polynomial time as intractable.
For many such problems it is nevertheless possible to verify a proposed answer quickly. The class of questions for which an answer can be verified in polynomial time is called NP.
NP-complete
To attack the P = NP question the concept of NP-completeness is very
useful.
Informally the NP-complete problems are the "toughest" problems in NP
in the sense that they are the ones most likely not to be in P.
NP-complete problems are a set of problems that any other NP-problem
can be reduced to in polynomial time, but retain the ability to have their
solution verified in polynomial time.
In comparison, NP-hard problems are those at least as hard as NP-
complete problems, meaning all NP-problems can be reduced to them,
but not all NP-hard problems are in NP, meaning not all of them have
solutions verifiable in polynomial time.
No one has been able to develop a polynomial time algorithm for any
problem in the second group.
The theory of NP-completeness which we present here does not provide
a method of obtaining polynomial time algorithms for problems in the
second group. Nor does it say that algorithms of this complexity do not
exist. Instead, what we do is show that many of the problems for which
there are no known polynomial time algorithms are computationally
related.
In fact, we establish two classes of problems. These are given the names
NP-hard and
NP-complete.
A problem that is NP-complete has the property that it can be solved in
polynomial time if and only if all other NP-complete problems can also be
solved in polynomial time.
If an NP-hard problem can be solved in polynomial time, then all NP-
complete problems can be solved in polynomial time.
All NP-complete problems are NP-hard, but some NP-hard problems are
not known to be NP-complete.
Nondeterministic Algorithms:
Up to now the notion of algorithm that we have been using has the
property that the result of every operation is uniquely defined.
Algorithms with this property are termed deterministic algorithms. Such
algorithms agree with the way programs are executed on a computer.
In a theoretical framework we can remove this restriction on the outcome of every operation. We can allow algorithms to contain operations whose outcomes are not uniquely defined but are limited to specified sets of possibilities.
The machine executing such operations is allowed to choose any one of these outcomes, subject to a termination condition to be defined later.
This leads to the concept of a nondeterministic algorithm.
To specify such algorithms, we introduce three new functions:
Choice(S): arbitrarily chooses one of the elements of the set S.
Failure(): signals an unsuccessful completion.
Success(): signals a successful completion.
It is easy to see that there are NP-hard problems that are not NP-complete. Only a decision problem can be NP-complete. However, an optimization problem may be NP-hard. Furthermore, if L1 is a decision problem and L2 an optimization problem, it is quite possible that L1 ∝ L2. One can trivially show that the knapsack decision problem reduces to the knapsack optimization problem. For the clique problem one can easily show that the clique decision problem reduces to the clique optimization problem. In fact, one can also show that these optimization problems reduce to their corresponding decision problems. Yet, optimization problems cannot be NP-complete, whereas decision problems can. There also exist NP-hard decision problems that are not NP-complete. Figure 11.2 shows the relationship among these classes.