Unconstrained optimization methods

Mathematical Optimization is a crucial applied mathematics field used across various sectors such as manufacturing, finance, and engineering, providing cost-effective solutions compared to traditional methods. It involves optimizing an objective function subject to constraints, with variables that can be discrete or continuous, and can be static or dynamic. The document discusses optimization vocabulary, types of problems, methods for finding extrema, and practical applications, illustrating the importance of optimization in real-world scenarios.

Mathematical Optimization in the “Real World”

Mathematical Optimization is a branch of applied mathematics which is useful in many different fields. Here are a few examples:
• Manufacturing
• Marketing
• Production
• Policy Modeling
• Inventory control
• Transportation
• Scheduling
• Networks
• Finance
• Engineering
• Mechanics
• Economics
• Control engineering
Why Mathematical Optimization is Important

• Mathematical Optimization works better than traditional “guess-and-check” methods.
• M. O. is a lot less expensive than building and testing.
• In the modern world, pennies matter, microseconds matter, microns matter.
Optimization Vocabulary
Your basic optimization problem consists of…
• The objective function, f(x), which is the output you’re trying to maximize or minimize.

• Variables, x₁, x₂, x₃ and so on, which are the inputs – things you can control. They are abbreviated xₙ to refer to individuals, or x to refer to them as a group.


• Constraints, which are equations that place limits on how big or small some variables can get. Equality constraints are usually written hₙ(x) and inequality constraints gₙ(x).
Optimization Vocabulary
A football coach is planning practices for his running backs.

• His main goal is to maximize running yards – this will become his objective function.

• He can make his athletes spend practice time in the weight room; running sprints; or
practicing ball protection. The amount of time spent on each is a variable.

• However, there are limits to the total amount of time he has. Also, if he completely sacrifices ball protection he may see running yards go up, but also fumbles, so he may place an upper limit on the number of fumbles he considers acceptable. These are constraints.

Note that the variables influence the objective function and the constraints place limits
on the domain of the variables.
Practice Question 1
Group the following into what might be maximized, minimized or cannot be optimized.

1. When choosing a new phone and plan, you might consider: minutes of talk time per
month; how much is charged for overages; whether extra minutes roll over; amount of
data allowed; cost per month; amount of storage/memory; how many phones are
available; brands/types of available phones; cost of the phone; amount of energy used;
time it takes to download apps or music; whether or not you get signal in your home.
Practice Question 2
2. An airplane designer is trying to build the most fuel-efficient airplane possible.
Write one factor as an objective (“Minimize/maximize _____”) and the rest as
constraints (“_____ ≤ c₁”, or ≥ or =). Delete any non-numerical factors:
speed, fuel consumption, range, noise, weight, cost, ease of use, amount of lift,
amount of drag, sonic boom volume, payload (how much it can carry).
Practice Questions 3-5
For each of the following tasks, write an objective function (“maximize ____”) and at
least two constraints (“subject to _____ ≤ c₁”, or ≥ or =).
3. A student must create a poster project for a class.
4. A shipping company must deliver packages to customers.
5. A grocery store must decide how to organize the store layout.
Optimization Methods
Types of Optimization Problems
• Some problems have constraints and some do not.
• There can be one variable or many.
• Variables can be discrete (for example, only have integer values) or continuous.
• Some problems are static (do not change over time) while some are dynamic
(continual adjustments must be made as changes occur).
• Systems can be deterministic (specific causes produce specific effects) or stochastic
(involve randomness/ probability).
• Equations can be linear (their graphs are lines) or nonlinear (their graphs are curves).
Unconstrained Optimization

Unconstrained optimization problems consider the problem of minimizing or maximizing an objective function that depends on real variables with no restrictions on their values.

Unconstrained optimization problems arise directly in some applications, but they also arise indirectly from reformulations of constrained optimization problems. Often it is practical to replace the constraints of an optimization problem with penalty terms in the objective function and to solve the problem as an unconstrained problem.

Unconstrained…

…but the function can still have minima, and even a (local) maximum.

Problem specification
Suppose we have a cost function (or objective function) f(x), of one variable or of several variables x = (x₁, …, xₙ).
Our aim is to find values of the parameters (decision variables) x that minimize this function.

If we seek a maximum of f(x) (a profit function, say), it is equivalent to seek a minimum of –f(x).

At a high level, algorithms for unconstrained minimization follow this general structure:
• Choose a starting point x₀.
• Beginning at x₀, generate a sequence of iterates with non-increasing function values f(xₖ) until a solution point with sufficient accuracy is found, or until no further progress can be made.
A general structure for unconstrained maximization can be formulated in the same way; we simply generate a sequence of iterates with non-decreasing function values.
To generate the next iterate, the algorithm uses information about the function at the current iterate xₖ and possibly at earlier iterates.
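The iterative structure above can be sketched in code. The fixed-step gradient-descent update, the step size, and the test function below are illustrative assumptions, not anything prescribed by the text:

```python
# A minimal sketch of the general structure: generate iterates with
# non-increasing f until no further progress can be made. The update
# rule (fixed-step gradient descent) is one simple choice among many.

def minimize(f, grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Generate iterates with non-increasing f(x) until progress stalls."""
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)       # use local information (the slope)
        if abs(f(x_new) - f(x)) < tol:   # no further progress can be made
            return x_new
        x = x_new
    return x

# Example: f(x) = (x - 3)^2 has its minimum at x = 3.
f = lambda x: (x - 3) ** 2
grad = lambda x: 2 * (x - 3)
x_star = minimize(f, grad, x0=0.0)
```

Maximization follows by running the same loop on –f(x), as noted above.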
Local vs. Global Extremes
If a point is a maximum or minimum relative to the other points in its “neighborhood”,
then it is considered a local maximum or local minimum.

(Figure: a curve with its local maximum and local minima labeled.)
Local vs. Global Extremes
If a point is a maximum or minimum relative to all the other points on the function,
then it is considered a global maximum or global minimum.

(Figure: a curve with its global minimum labeled but no global maximum.)
Extrema points of the function
Let the function f(x) be defined on the interval [a,b].

x₀ is a local minimum point of f(x) if

f(x) ≥ f(x₀) for all x ∈ D(x₀, δ), a δ-neighborhood of x₀

x₀ is a global minimum point of f(x) if

f(x) ≥ f(x₀) for all x ∈ [a,b]

The local and global maximum points are defined analogously, just with the opposite inequalities.

Since max f(x) = –min(–f(x)), we will mostly consider minima of the function; the theory for maxima is similar.
Types of minima

• Which of the minima is found depends on the starting point.
• Such minima often occur in real applications.
Practice Problem 1
1. Sketch a function with…
a) Two local maxima, one of which is global, one local minimum and no global
minimum
b) No local or global extremes
c) One global minimum and no maxima
d) Two global minima, one local maximum, no global maximum
Unimodal and Multimodal Functions

A unimodal function has only one minimum and the rest of the graph goes up from there; or one maximum and
the rest of the graph goes down.

With unimodal functions, any extreme you find is guaranteed to be the global extreme.
Unimodal and Multimodal Functions
A bimodal function has two local minima or maxima.

Beyond that, trimodal, quadrimodal and then multimodal.

With bimodal and above, you don’t know if an extreme is local or global unless you
know the entire graph.
Practice Problems 2 and 4

2. Draw a trimodal function with no global maximum. On your function label the local and global minima, and
the local maxima.

4. If a smooth function has n extrema with no global minimum, how many local maxima will it have? How
many local minima will it have? How many total local extremes?
Methods of finding extrema

• Analytical method:

Theorem. If the function f(x) has a derivative on the interval [a,b] and has a local minimum or maximum at a point c of this interval, then its derivative at that point c equals zero: f′(c) = 0.

Minimum and maximum points are called extrema points of the function.

Theorem. A continuous function f(x) defined on the interval [a,b] attains its greatest (least) value either at extrema points or at the endpoints of the interval.
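The two theorems together give a recipe: collect the interior critical points where f′(c) = 0, add the endpoints, and compare function values. A small sketch, using the illustrative function f(x) = x³ − 3x on [0, 2]:

```python
# Analytical method: compare f at interior critical points (f'(c) = 0)
# and at the endpoints of [a, b]. Here f(x) = x^3 - 3x on [0, 2].

def f(x):
    return x ** 3 - 3 * x

# f'(x) = 3x^2 - 3 = 0  =>  x = 1 is the only critical point in [0, 2]
candidates = [0.0, 1.0, 2.0]          # endpoints plus critical points
values = {x: f(x) for x in candidates}

x_min = min(values, key=values.get)   # greatest-lower point on [0, 2]
x_max = max(values, key=values.get)   # greatest-upper point on [0, 2]
```

Here the minimum f(1) = −2 occurs at the interior critical point, while the maximum f(2) = 2 occurs at an endpoint, illustrating both cases of the second theorem.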
Some useful derivatives

Exponents: f(x) = Axⁿ ⟹ f′(x) = nAxⁿ⁻¹. Example: f(x) = 3x⁵, f′(x) = 15x⁴.

Linear functions: f(x) = Ax ⟹ f′(x) = A. Example: f(x) = 4x, f′(x) = 4.

Logarithms: f(x) = A ln(x) ⟹ f′(x) = A/x. Example: f(x) = 12 ln(x), f′(x) = 12/x.
A necessary condition for a maximum or a minimum is that the derivative equals zero: f′(x*) = 0 at both kinds of points.

But how can we tell which is which?

While the first derivative measures the slope (the change in the value of the function), the second derivative measures the change in the first derivative (the change in the slope):

• If f″(x*) > 0, the slope is increasing as x increases, so x* is a minimum.
• If f″(x*) < 0, the slope is decreasing as x increases, so x* is a maximum.
After prices for almonds climbed to a record $4 per pound in 2014, farmers across
California began replacing their cheaper crops with the nut, causing a huge increase
in supply. Now, the bubble has popped. Since late 2014, according to The
Washington Post, almond prices have fallen by around 25%.

Let’s suppose that almond prices are currently $3/lb and are falling at the rate of $0.04/lb per week.

Let’s also suppose that your orchard is currently bearing 35 pounds per tree, and that number will increase by 1 lb per week.

How long should you wait to harvest your nuts to maximize your revenues?
We want to maximize revenue, which is price per pound times total pounds sold:

Q = 35 + t    P = 3.00 − 0.04t

Revenue = P · Q = (35 + t)(3 − 0.04t)

R = 105 + 1.6t − 0.04t²

Now, take the derivative with respect to t and set it equal to zero:

R′ = 1.6 − 0.08t = 0

Then solve for t:

t* = 20,  Q = 55,  P = $2.20,  R = $121
We want to minimize cost per mile.

(Figure: cost per mile plotted against speed, from 40 to 72 mph.)
Suppose that you run a trucking company. You have the following expenses:

• Driver salary: $22.50 per hour
• Depreciation: $0.27 per mile
• Fuel: v/140 dollars per mile, where v is speed in miles per hour

What speed should you tell your driver to maintain in order to minimize your cost per mile?
Cost = 0.27 + 22.50/v + v/140

Note that if I divide the driver’s salary ($/hr) by the speed (miles/hr), I get the driver’s salary per mile.

Now, take the derivative with respect to v and set it equal to zero:

C′ = −22.50/v² + 1/140 = 0

Then solve for v:

v = √3150 ≈ 56 mph,  Cost ≈ $1.07 per mile
Remark. Usually in real situations it is hard to find a derivative, or to solve the equation f′(c) = 0, or even to know the expression of the function; we may know only some of its values from observations. Thus we need other methods.

• If it is possible, graph the function and read off those points.

• Grid-search (enumeration) algorithm: construct a net of points xᵢ = a + i·h, i = 0, 1, …, n; find the values of the function at those points, fᵢ = f(xᵢ), and choose the extrema points. Such a method is reliable but not efficient.
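The grid method is short enough to write out in full; the test function and grid size are illustrative:

```python
# Grid ("net of points") method: evaluate f at x_i = a + i*h and keep
# the point with the smallest value. Reliable but costly: n+1 evaluations.

def grid_min(f, a, b, n):
    h = (b - a) / n
    points = [a + i * h for i in range(n + 1)]
    return min(points, key=f)

# Example: minimum of f(x) = (x - 0.7)^2 on [0, 2], with step h = 0.002.
x_best = grid_min(lambda x: (x - 0.7) ** 2, 0.0, 2.0, 1000)
```

The answer is only accurate to the grid spacing h, which is exactly why the bracketing methods below are preferred.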
Unimodal function
We consider the following minimization problem:
min f(x)
s.t. x ∈ [a, b]

where f : ℝ → ℝ is a univariate function and [a, b] is an interval.

Let x* denote a minimizer of f(x) in [a, b]. f is said to be unimodal on [a, b] if f(x) is decreasing for a ≤ x ≤ x* and increasing for x* ≤ x ≤ b.
A function which is increasing for a ≤ x ≤ x* and decreasing for x* ≤ x ≤ b is also called unimodal; of course, in this case x* is a maximizer.
Unconstrained univariate optimization
Assume we can start close to the global minimum

How to determine the minimum (or maximum)?


• Search methods (Dichotomous, Fibonacci, Golden-Section)
• Approximation methods
1. Polynomial interpolation
2. Newton method
• Combination of both (algorithm of Davies, Swann, and Campey)
Search methods
• Start with an interval (“bracket”) [a, b] such that the minimum x* lies inside.
• Evaluate f(x) at two points inside the bracket.
• Reduce the bracket.
• Repeat the process.
• Can be applied to any function; differentiability is not essential.
To choose the two points
Requirements:
(1) The two points chosen should make the interval shrink quickly.
(2) The length of the interval after a given number of iterations should be predictable.

Problems we need to solve:

(1) How to choose the first two points?
(2) How to choose a new test point?

Two methods satisfy these requirements:
• Golden-section (0.618) search
• Fibonacci search
What is the Golden Ratio?
The Golden Ratio, symbolized by ρ (the Greek letter rho), is a famous number related to the Fibonacci numbers. It is irrational and has two possible values: 1.61803… and 0.61803…. We will be using the second value.

Expressed algebraically, it is ρ = (–1 + √5)/2. It has the interesting property that ρ² = 1 – ρ, among others.
The Problem

Recall that we need three points to verify that a minimum lies within an interval. If you just take the midpoint of the interval…

…then you still don’t know whether the actual minimum lies left or right of that midpoint.
The Solution
We divide the interval into three sections rather than two, by choosing two
interior points instead of one.

Although it would seem obvious to divide the segment into equal thirds, with points
at .33 and .67 across the segment, there is a better way.
The Solution

Rather than divide our segment into equal thirds, it’s preferred to divide it according to
the Golden Ratio, at 0.382… and 0.618…

Dividing a segment with these proportions is known as creating a Golden Section.


The reason for its usefulness will be explained soon.
The Solution
After dividing the segment, there are two testable intervals, left and right. Whichever
one has its middle point lower than the two endpoints becomes our new interval for
the minimum.

(Figure: the left interval and the right interval of the divided segment.)
The Next Step

Then we will repeat the procedure again.

(Figure: the new point, plus a point reused from the previous iteration.)

The cool property of the Golden Ratio is that if we use the exact value of ρ, one of the
two interior points can be used again – it’s already there.
Golden section method
Assume the objective function f is unimodal over [a, b].

What is a golden section?

A golden section is a line segment divided into two parts such that the ratio of the shorter part to the longer equals the ratio of the longer part to the whole.

Place two interior points symmetrically in [a, b]:

x₁ = a + ρ(b − a),  x₂ = b − ρ(b − a),

so that x₁ − a = b − x₂ and x₂ − x₁ = (b − a) − (x₁ − a) − (b − x₂). Requiring that one of the interior points can be reused at the next iteration leads to

ρ = (1 − 2ρ)/(1 − ρ),  or  ρ² − 3ρ + 1 = 0.

We are interested in the solution which is < 1:

ρ = (3 − √5)/2 ≈ 0.38

Finally:

x₁ = a + (b − a)ρ
x₂ = b − (b − a)ρ
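Putting the pieces together, a compact golden-section search might look like this (test function and tolerance are illustrative):

```python
# Golden-section search for a unimodal f on [a, b], with
# rho = (3 - sqrt(5))/2 ≈ 0.382 as derived above.
import math

RHO = (3 - math.sqrt(5)) / 2

def golden_section(f, a, b, tol=1e-6):
    x1 = a + RHO * (b - a)
    x2 = b - RHO * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1   # old x1 becomes the new x2 (reused!)
            x1 = a + RHO * (b - a)
            f1 = f(x1)
        else:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2   # old x2 becomes the new x1 (reused!)
            x2 = b - RHO * (b - a)
            f2 = f(x2)
    return (a + b) / 2

x_star = golden_section(lambda x: (x - 1.5) ** 2, 0.0, 4.0)
```

Note that only one new function evaluation is needed per iteration; the other interior point is inherited, which is exactly the reuse property the derivation above was designed to guarantee.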
Golden section examples in real life
Golden section algorithm
Example 1
Example 2
Fibonacci method
We continue to discuss the minimization problem where f(x) is unimodal on [a, b].
In the golden-section method, two function evaluations are made at the first iteration and then only one function evaluation is made at each subsequent iteration. The ratio by which the interval is reduced remains constant at every iteration.
The Fibonacci method differs from the golden-section method in that this reduction ratio is not constant. Additionally, the number of subintervals (iterations) is predetermined, based on the specified tolerance.
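A minimal sketch of Fibonacci search, under the simplifying assumption that f may be re-evaluated at both test points each iteration (a careful implementation reuses one evaluation, as in the golden-section method):

```python
# Fibonacci search: the number of iterations n is fixed in advance, and
# the reduction ratio F(n-k-1)/F(n-k) changes at every step.

def fib(n):
    """fib(0)=1, fib(1)=1, fib(2)=2, ..."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonacci_search(f, a, b, n):
    for k in range(n - 1):
        ratio = fib(n - k - 1) / fib(n - k)   # shrinks toward 1/2
        x1 = b - ratio * (b - a)
        x2 = a + ratio * (b - a)
        if f(x1) < f(x2):
            b = x2                            # minimum lies left
        else:
            a = x1                            # minimum lies right
    return (a + b) / 2

x_star = fibonacci_search(lambda x: (x - 1.5) ** 2, 0.0, 4.0, 20)
```

After n iterations the bracket has length (b − a)/F(n), so n can be chosen up front from the desired tolerance, which is the predetermined-iteration property described above.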
Fibonacci Search
Example 1
Fibonacci method
Fibonacci algorithm
Example 2
Approximation methods
Polynomial interpolation
• Bracket the minimum.
• Fit a quadratic or cubic polynomial which
interpolates f(x) at some points in the interval.
• Jump to the (easily obtained) minimum of the
polynomial.
• Throw away the worst point and repeat the
process.

• Quadratic interpolation using 3 points


• Other methods to interpolate?
– Cubic interpolation
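One step of quadratic interpolation through three points can be written in closed form as the vertex of the fitted parabola; the bracketing points below are illustrative:

```python
# One step of quadratic interpolation: fit a parabola through three
# bracketing points and jump to its vertex (the minimum of the fit).

def parabola_vertex(x1, x2, x3, f1, f2, f3):
    num = f1 * (x2**2 - x3**2) + f2 * (x3**2 - x1**2) + f3 * (x1**2 - x2**2)
    den = f1 * (x2 - x3) + f2 * (x3 - x1) + f3 * (x1 - x2)
    return 0.5 * num / den

f = lambda x: (x - 2) ** 2          # minimum at x = 2
x_new = parabola_vertex(0.0, 1.0, 4.0, f(0.0), f(1.0), f(4.0))
```

Because this f is itself quadratic, a single step lands exactly on the minimum; for a general function one would keep the best three points and repeat, as the bullets above describe.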
Newton method
Newton’s method is used to solve nonlinear equations f(x) = 0. It is based on the tangent line.

Equation of the tangent line:

y − y₀ = f′(x₀)(x − x₀)

Iteration formula:

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)
Newton method

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

Since we are searching for the points where the derivative of the function equals zero, i.e. we are solving the equation

f′(x) = 0,

in the recurrence formula we substitute the derivative f′ in place of the function f:

xₙ₊₁ = xₙ − f′(xₙ)/f″(xₙ)
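The minimization iteration can be sketched directly; the hand-coded derivatives and the starting point are illustrative choices:

```python
# Newton's method for minimization: x_{n+1} = x_n - f'(x_n)/f''(x_n).

def newton_min(df, d2f, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:    # converged: f'(x) is (nearly) zero
            break
    return x

# Example: f(x) = x^4 - 2x^2, f'(x) = 4x^3 - 4x, f''(x) = 12x^2 - 4.
# Starting at x0 = 1.5 converges to the minimum at x = 1.
x_star = newton_min(lambda x: 4 * x**3 - 4 * x,
                    lambda x: 12 * x**2 - 4, x0=1.5)
```

Starting instead near x₀ = −1.5 would converge to the other minimum at x = −1, a concrete instance of the poor global behavior discussed next.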
Newton method
• Global convergence of Newton’s method is poor.
• It often fails if the starting point is too far from the minimum.
• In practice, it must be used with a globalization strategy which reduces the step length until a function decrease is assured.
Extension to n (multivariate) dimensions
• How big can n be?
– Problem sizes can vary from a handful of parameters to many thousands.
• For n = 2, the cost-function surface can be visualized.