
FOUNDATIONS OF OPTIMIZATION: IE6001

Shadow Prices

Napat Rujeerapaiboon
Semester I, AY2019/2020
Example 1 (perturbed)

Assume that p1, the availability of X, is not precisely known.

max x1 + x2 (objective function)
s.t. 2x1 + x2 ≤ p1 (constraint on availability of X)
     x1 + 3x2 ≤ 18 (constraint on availability of Y)
     x1 ≤ 4 (constraint on demand of A)
     x1, x2 ≥ 0 (non-negativity constraints)

In standard form, with slack variables x3, x4, x5:

−min −x1 − x2
s.t. 2x1 + x2 + x3 = p1
     x1 + 3x2 + x4 = 18
     x1 + x5 = 4
     x1, x2, x3, x4, x5 ≥ 0
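Since the perturbed LP has only two decision variables, its optimal value can be sanity-checked by brute force: intersect every pair of constraint boundaries and keep the best feasible vertex. The sketch below is mine, not part of the lecture; the function name `v` and the numeric tolerances are illustrative.

```python
from itertools import combinations

def v(p1, p2=18.0, p3=4.0):
    """Optimal value of  min -x1 - x2  subject to  2x1 + x2 <= p1,
    x1 + 3x2 <= p2, x1 <= p3, x1 >= 0, x2 >= 0, found by enumerating
    all vertices of the feasible region (fine for a two-variable LP)."""
    A = [(2.0, 1.0), (1.0, 3.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
    b = [p1, p2, p3, 0.0, 0.0]
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex
        x1 = (b[i] * a22 - a12 * b[j]) / det
        x2 = (a11 * b[j] - b[i] * a21) / det
        if all(r1 * x1 + r2 * x2 <= rhs + 1e-9 for (r1, r2), rhs in zip(A, b)):
            val = -x1 - x2
            best = val if best is None else min(best, val)
    return best  # None means the LP is infeasible

print(v(11.0))   # -8.0, i.e. max x1 + x2 = 8 at the vertex (x1, x2) = (3, 5)
```

At p1 = 11 the best vertex is (3, 5), matching the optimal value v(11) = −8 shown on the following slides.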
Example 1 (perturbed)

The value function v(p1) expresses the optimal value of the LP as a function of the unknown availability parameter p1:

v(p1) = min −x1 − x2
s.t. 2x1 + x2 + x3 = p1
     x1 + 3x2 + x4 = 18
     x1 + x5 = 4
     x1, x2, x3, x4, x5 ≥ 0
Example 1 (perturbed)

Optimal solution with basic variables x1, x2, x5 and nonbasic variables x3, x4.

Availability of X: p1 = 11
Optimal value: v(11) = −8

[Figure: feasible region in the (x1, x2)-plane with objective contours labelled −9 to +1]
Example 1 (perturbed)

Degenerate optimal solution: I = {1,2,3}, I = {1,2,4}, I = {1,2,5} are all optimal!

Availability of X: p1 = 12⅔
Optimal value: v(12⅔) = −8⅔

[Figure: feasible region in the (x1, x2)-plane with objective contours labelled −9 to +1]
Example 1 (perturbed)

Optimal solution with basic variables x1, x2, x3 and nonbasic variables x4, x5.

Availability of X: p1 > 12⅔
Optimal value: v(p1) = −8⅔

[Figure: feasible region in the (x1, x2)-plane with objective contours labelled −9 to +1]
Example 1 (perturbed)

Degenerate optimal solution: I = {1,2,5}, I = {2,3,5}, I = {2,4,5} are all optimal!

Availability of X: p1 = 6
Optimal value: v(6) = −6

[Figure: feasible region in the (x1, x2)-plane with objective contours labelled −9 to +1]
Example 1 (perturbed)

Optimal solution with basic variables x2, x4, x5 and nonbasic variables x1, x3.

Availability of X: 0 ≤ p1 < 6
Optimal value: 0 ≥ v(p1) > −6

[Figure: feasible region in the (x1, x2)-plane with objective contours labelled −9 to +1]
Example 1 (perturbed)

Note: v(p1) is non-increasing, convex and piecewise linear.

[Figure: graph of v(p1), with kinks at p1 = 6 (v = −6) and p1 = 12⅔ (v = −8⅔); the problem is infeasible for p1 < 0]
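The claimed shape of v(p1) can be checked numerically. The sketch below re-derives v by brute-force vertex enumeration (my own helper, not lecture code) and verifies on a grid that v is non-increasing and that its slopes never decrease (convexity), with kinks at p1 = 6 and p1 = 38/3 = 12⅔.

```python
from itertools import combinations

def v(p1, p2=18.0, p3=4.0):
    """Optimal value of  min -x1 - x2  s.t.  2x1 + x2 <= p1,
    x1 + 3x2 <= p2, x1 <= p3, x >= 0 (brute-force vertex enumeration)."""
    A = [(2.0, 1.0), (1.0, 3.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
    b = [p1, p2, p3, 0.0, 0.0]
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex
        x1 = (b[i] * a22 - a12 * b[j]) / det
        x2 = (a11 * b[j] - b[i] * a21) / det
        if all(r1 * x1 + r2 * x2 <= rhs + 1e-9 for (r1, r2), rhs in zip(A, b)):
            best = -x1 - x2 if best is None else min(best, -x1 - x2)
    return best

step = 0.25
vals = [v(i * step) for i in range(81)]                  # p1 from 0 to 20
slopes = [(y2 - y1) / step for y1, y2 in zip(vals, vals[1:])]
# non-increasing values, non-decreasing slopes (i.e. convex):
assert all(y2 <= y1 + 1e-9 for y1, y2 in zip(vals, vals[1:]))
assert all(s2 >= s1 - 1e-6 for s1, s2 in zip(slopes, slopes[1:]))
print(v(6.0), v(38 / 3))   # -6.0 and roughly -26/3
```

The sampled slopes come out as −1 on [0, 6], −2/5 on [6, 12⅔], and 0 beyond, exactly the piecewise-linear picture above.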
Perturbation

Let p ∈ R^m denote a general RHS and define the value function v(p) : R^m → R by

v(p) = min { c^T x | Ax = p, x ≥ 0 }

Solving the original LP (the reference problem)

min { x0 = c^T x | Ax = b, x ≥ 0 }

thus computes v(b).
Shadow Prices

Suppose we have solved the reference problem

min { x0 = c^T x | Ax = b, x ≥ 0 }

and found an optimal basis matrix B satisfying

x_B = B^{-1} b ≥ 0   (feasibility)

and

r = c_N − N^T (B^{-1})^T c_B ≥ 0   (optimality).
Shadow Prices (cont)

Definition: The vector of shadow prices Π ∈ R^m is defined as

Π = (B^{-1})^T c_B,

where B = B(I) is an optimal basis.

Note that there can be more than one optimal basis
⇒ the shadow prices need not be unique.

The shadow prices give information about the sensitivity of the value function v(p) at p = b.
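For Example 1 with b = (11, 18, 4) and optimal basis I = {1, 2, 5}, the definition can be evaluated directly: Π = (B^{-1})^T c_B is the solution of B^T Π = c_B. A minimal sketch in exact arithmetic (the generic `solve` helper is mine, not lecture code):

```python
from fractions import Fraction as F

# Columns of A = [[2,1,1,0,0], [1,3,0,1,0], [1,0,0,0,1]] for the
# basic variables x1, x2, x5 of Example 1 (with b = (11, 18, 4)):
B = [[F(2), F(1), F(0)],
     [F(1), F(3), F(0)],
     [F(1), F(0), F(1)]]
c_B = [F(-1), F(-1), F(0)]   # costs of x1, x2, x5 in  min -x1 - x2

def solve(M, rhs):
    """Solve M y = rhs by Gauss-Jordan elimination in exact fractions."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for k in range(n):
        piv = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[piv] = M[piv], M[k]                # pivot row swap
        piv_val = M[k][k]
        M[k] = [val / piv_val for val in M[k]]     # normalise pivot row
        for i in range(n):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [a - f * c for a, c in zip(M[i], M[k])]
    return [row[-1] for row in M]

BT = [[B[j][i] for j in range(3)] for i in range(3)]   # transpose of B
Pi = solve(BT, c_B)          # shadow prices: B^T Pi = c_B
print(Pi)                    # [Fraction(-2, 5), Fraction(-1, 5), Fraction(0, 1)]
```

The result Π = (−2/5, −1/5, 0) matches the objective row of the final tableau shown at the end of this deck.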
Local Behaviour of Value Function

Theorem: v(p) = v(b) + Π^T(p − b) for all p ∈ R^m with B^{-1}p ≥ 0.

Proof:
• If B^{-1}p ≥ 0, then B remains the optimal basis for
  min { c^T x | Ax = p, x ≥ 0 },
  since r is not affected by changing b to p.
• Thus, we find
  v(p) = c_B^T B^{-1} p
       = c_B^T B^{-1} b + c_B^T B^{-1} (p − b)
       = v(b) + Π^T(p − b)  ∎
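For Example 1 with p2 = 18 and p3 = 4 fixed, the condition B^{-1}p ≥ 0 for the basis I = {1, 2, 5} holds exactly for p1 ∈ [6, 38/3], so the theorem predicts that v is linear with slope Π1 = −2/5 on that segment. A numerical spot-check (the brute-force solver and tolerances are my own; Π1 = −2/5 is taken from the slides):

```python
from itertools import combinations

def v(p1, p2=18.0, p3=4.0):
    """Optimal value of  min -x1 - x2  s.t.  2x1 + x2 <= p1,
    x1 + 3x2 <= p2, x1 <= p3, x >= 0 (brute-force vertex enumeration)."""
    A = [(2.0, 1.0), (1.0, 3.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
    b = [p1, p2, p3, 0.0, 0.0]
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex
        x1 = (b[i] * a22 - a12 * b[j]) / det
        x2 = (a11 * b[j] - b[i] * a21) / det
        if all(r1 * x1 + r2 * x2 <= rhs + 1e-9 for (r1, r2), rhs in zip(A, b)):
            best = -x1 - x2 if best is None else min(best, -x1 - x2)
    return best

Pi1 = -2.0 / 5.0            # shadow price of the X-constraint (from the slides)
b1, vb = 11.0, v(11.0)      # reference problem: v(b) = -8
for p1 in [6.0, 8.0, 11.0, 12.0, 38 / 3]:
    predicted = vb + Pi1 * (p1 - b1)   # p2, p3 stay at b2 = 18, b3 = 4
    assert abs(v(p1) - predicted) < 1e-6, (p1, v(p1), predicted)
print("v is exactly linear with slope -2/5 on [6, 38/3]")
```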
Global Behaviour of Value Function

Theorem: v(p) ≥ v(b) + Π^T(p − b) for all p ∈ R^m.

Proof:
v(p) = min_{x≥0, Ax=p} c^T x
     = min_{x≥0, Ax=p} c^T x − Π^T(Ax − p)
     ≥ min_{x≥0} c^T x − Π^T(Ax − p)
     = min_{x≥0} (c^T − Π^T A)x + Π^T p
     = Π^T p + min_{x≥0} (c^T − Π^T A)x,

where the last minimum is ≥ 0 — see next slide!
Global Behaviour of Value Function

(c^T − Π^T A) x = ([c_B^T | c_N^T] − c_B^T B^{-1} [B | N]) [x_B; x_N]
               = ([c_B^T | c_N^T] − c_B^T [I | B^{-1}N]) [x_B; x_N]
               = c_B^T x_B − c_B^T x_B + (c_N^T − c_B^T B^{-1}N) x_N
               = r^T x_N
               ≥ 0 (as r ≥ 0, and x_N ≥ 0)
Global Behaviour of Value Function

Thus, we find

v(p) ≥ Π^T p + min_{x≥0} (c^T − Π^T A)x
     ≥ Π^T p
     = Π^T b + Π^T(p − b)
     = c_B^T B^{-1} b + Π^T(p − b)
     = v(b) + Π^T(p − b)  ∎
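The global bound can also be spot-checked numerically, now perturbing all three right-hand sides of Example 1. As before, the brute-force solver and the sample points are my own; Π = (−2/5, −1/5, 0) is taken from the slides.

```python
from itertools import combinations

def v(p1, p2=18.0, p3=4.0):
    """Optimal value of  min -x1 - x2  s.t.  2x1 + x2 <= p1,
    x1 + 3x2 <= p2, x1 <= p3, x >= 0 (brute-force vertex enumeration)."""
    A = [(2.0, 1.0), (1.0, 3.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
    b = [p1, p2, p3, 0.0, 0.0]
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex
        x1 = (b[i] * a22 - a12 * b[j]) / det
        x2 = (a11 * b[j] - b[i] * a21) / det
        if all(r1 * x1 + r2 * x2 <= rhs + 1e-9 for (r1, r2), rhs in zip(A, b)):
            best = -x1 - x2 if best is None else min(best, -x1 - x2)
    return best

Pi = (-2 / 5, -1 / 5, 0.0)          # shadow prices at b (from the slides)
b = (11.0, 18.0, 4.0)
vb = v(*b)                          # v(b) = -8
samples = [(5.0, 18.0, 4.0), (20.0, 18.0, 4.0), (11.0, 10.0, 4.0),
           (11.0, 25.0, 2.0), (3.0, 9.0, 1.0)]
for p in samples:
    bound = vb + sum(pi * (pk - bk) for pi, pk, bk in zip(Pi, p, b))
    assert v(*p) >= bound - 1e-9    # v(p) >= v(b) + Pi^T (p - b)
print("global lower bound verified on all sample points")
```

Some sample points (e.g. (11, 25, 2)) hit the bound with equality, others hold it strictly, which is exactly what the theorem allows.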


Shadow Prices in Example 1

Note: Π1 is the shadow price for the budget of ingredient X.

[Figure: graph of v(p1), with slope −1 on [0, 6] and slope −2/5 on [6, 12⅔]; p1 = b1 = 11 lies on the second segment]

At p1 = b1 = 11, the optimal costs change by Π1 = −2/5 if the availability of X increases by 1.
Interpretation

• Assume the company can buy a "small" additional amount of ingredient X, at price µ1 per unit.
• Is it worthwhile to buy additional units of X?
  – Yes if µ1 + Π1 < 0 (overall cost decreases);
  – No if µ1 + Π1 > 0 (overall cost increases).
⇒ Therefore, −Π1 is the maximum price one should pay for one additional unit of X!
Evaluation of Shadow Prices

Sometimes shadow prices can be read off the final tableau.

Lemma: Suppose row t is initially a "≤-constraint" and a slack variable x_s has been added. Then Π_t = β_s, where β_s is the objective coefficient of x_s in the final (optimal) tableau.

Proof:
• If x_s is nonbasic in the final tableau, then
  β_s = −r_s = −(c_N − N^T B^{-T} c_B)^T e_s = −c_s + Π^T N e_s = 0 + Π^T e_t = Π_t.
• If x_s is basic in the final tableau, then
  β_s = 0 = c_s = e_s^T c_B = e_s^T B^T Π = e_t^T Π = Π_t.  ∎


Evaluation of Shadow Prices (cont)

Sometimes shadow prices can be read off the final tableau.

Lemma: Suppose row t is initially a "≥-constraint" and a surplus variable x_s has been added. Then Π_t = −β_s, where β_s is the objective coefficient of x_s in the final tableau.
Example 1 (revisited)

The final tableau for Example 1 is:

BV | x0 | x1 | x2 |  x3  |  x4  | x5 | RHS
x0 |  1 |  0 |  0 | −2/5 | −1/5 |  0 |  −8
x2 |  0 |  0 |  1 | −1/5 |  2/5 |  0 |   5
x5 |  0 |  0 |  0 | −3/5 |  1/5 |  1 |   1
x1 |  0 |  1 |  0 |  3/5 | −1/5 |  0 |   3

• The constraint on the availability of X was standardised by introducing the slack variable x3.
• The shadow price Π1 for that constraint thus coincides with the coefficient of x3 in the objective row of the above tableau ⇒ Π1 = −2/5.
