Attendance code: 828440
Note on Canvas Quizzes
▶ Some questions will ask about programming constructs
suitable for some task
▶ Always think about the minimal possible solution within the
scope considered in the lecture notes
Tutorial problem 1: SEIR model
dS/dt = −β S I ,
dE/dt = β S I − σ E ,
dI/dt = σ E − γ I ,
dR/dt = γ I .   (1)
Euler method implementation
Code available on Canvas under “Files” in the folder
“example codes” as tutorial3_seir_model_euler.py (in
Python) and tutorial3_seir_model_euler.m (in Matlab).
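The code itself is on Canvas; as a reminder of the structure, here is a minimal Euler-method sketch for system (1). The parameter values, step size, and initial fractions below are illustrative choices, not those of the course file:

```python
# Minimal Euler-method sketch for the SEIR model (1).
# beta, sigma, gamma and the initial state are illustrative values.
beta, sigma, gamma = 0.5, 0.2, 0.1   # infection, incubation, recovery rates
dt, n_steps = 0.1, 1000              # step size and number of steps

S, E, I, R = 0.99, 0.01, 0.0, 0.0    # initial fractions of the population

for _ in range(n_steps):
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    # advance all four variables simultaneously
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR

print(S, E, I, R)
```

Since dS + dE + dI + dR = 0, the Euler update preserves S + E + I + R up to rounding — a useful sanity check.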
Implement automatic peak detection
This is a typical “open-ended” bonus question for coding
homeworks. The solution will not be in the example codes!
Poll: how can we detect peaks in numerical ODE solution?
Poll: which equations should be adjusted to reflect the
decline of immunity level in the population?
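For the first poll, one minimal answer within the scope of the lecture notes: a stored value f[i] is a discrete peak when f[i-1] < f[i] > f[i+1], i.e. the finite-difference slope changes sign from positive to negative. A sketch (not the bonus-question solution):

```python
# Discrete peak detection: an interior point is a peak when the
# finite-difference slope changes sign from positive to negative.
def find_peaks(values):
    peaks = []
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] and values[i] > values[i + 1]:
            peaks.append(i)
    return peaks

# Usage on a toy signal with peaks at indices 2 and 6:
signal = [0, 1, 3, 2, 1, 2, 4, 1, 0]
print(find_peaks(signal))  # → [2, 6]
```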
Solution without arrays
Solution that uses functions
Tutorial problem 2: Finite-difference approximations and
interpolating polynomial
▶ We can shift the enumeration of points for our convenience:
ti−2 → t0 , ti−1 → t1 , ti → t2 , ti+1 → t3 , ti+2 → t4 .
fi−2 → f0 , fi−1 → f1 , fi → f2 , fi+1 → f3 , fi+2 → f4 .
▶ 5 points ⇒ Fourth-order polynomial
P4(t) = f0 + (D+f)0 (t − t0) + (1/2!) (D+²f)0 (t − t0)(t − t1) +
      + (1/3!) (D+³f)0 (t − t0)(t − t1)(t − t2) +
      + (1/4!) (D+⁴f)0 (t − t0)(t − t1)(t − t2)(t − t3)   (2)
Finite-difference approximations and interpolating
polynomial
(D+f)0 = δt⁻¹ (f1 − f0),
(D+²f)0 = δt⁻² (f2 − 2f1 + f0),
(D+³f)0 = δt⁻³ (f3 − 3f2 + 3f1 − f0),
(D+⁴f)0 = δt⁻⁴ (f4 − 4f3 + 6f2 − 4f1 + f0).   (3)
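The stencil coefficients in (3) can be read off numerically: apply the forward difference repeatedly to a “unit impulse” list and the remaining value is the coefficient of that sample. A quick sketch with δt = 1 (so the δt⁻ⁿ prefactors drop out); placing the impulse at f2 is an arbitrary choice:

```python
# Reading off one coefficient of Eq. (3) by repeated forward differencing.
def forward_difference(f):
    # (D+ f)_k = f_{k+1} - f_k, with delta t = 1
    return [f[k + 1] - f[k] for k in range(len(f) - 1)]

f = [0.0, 0.0, 1.0, 0.0, 0.0]   # unit impulse at f2
d = f
for _ in range(4):              # apply D+ four times
    d = forward_difference(d)
print(d)  # → [6.0], the coefficient of f2 in (D+^4 f)_0
```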
Poll: the coefficients of fi are, up to a sign ...
Differentiating the interpolating polynomial
▶ Finite difference approximation to dᵐf(t)/dtᵐ ⇒ dᵐPn(t)/dtᵐ
dP4(t)/dt |t=t2 = (D+f)0 + (1/2!) (D+²f)0 d/dt[(t − t0)(t − t1)] |t=t2 +
      + (1/3!) (D+³f)0 d/dt[(t − t0)(t − t1)(t − t2)] |t=t2 +
      + (1/4!) (D+⁴f)0 d/dt[(t − t0)(t − t1)(t − t2)(t − t3)] |t=t2   (4)
Differentiating the interpolating polynomial
d/dt[(t − t0)(t − t1)] |t=t2 = (t2 − t0) + (t2 − t1) = 3 δt
d/dt[(t − t0)(t − t1)(t − t2)] |t=t2 = (t2 − t0)(t2 − t1) = 2 δt²
d/dt[(t − t0)(t − t1)(t − t2)(t − t3)] |t=t2 = (t2 − t0)(t2 − t1)(t2 − t3) = −2 δt³   (5)
Putting everything together
dP4(t)/dt |t=t2 = (f1 − f0)/δt + (3δt/2!) (f2 − 2f1 + f0)/δt² +
      + (2δt²/3!) (f3 − 3f2 + 3f1 − f0)/δt³ +
      + (−2δt³/4!) (f4 − 4f3 + 6f2 − 4f1 + f0)/δt⁴   (6)

dP4(t)/dt |t=t2 = δt⁻¹ [ (f1 − f0) + (3/2)(f2 − 2f1 + f0) +
      + (1/3)(f3 − 3f2 + 3f1 − f0) − (1/12)(f4 − 4f3 + 6f2 − 4f1 + f0) ]   (7)
Putting everything together
dP4(t)/dt |t=t2 = (1/(12 δt)) (12f1 − 12f0 + 18f2 − 36f1 + 18f0 +
      + 4f3 − 12f2 + 12f1 − 4f0 − f4 + 4f3 − 6f2 + 4f1 − f0)   (8)

dP4(t)/dt |t=t2 = (−f4 + 8f3 − 8f1 + f0) / (12 δt)   (9)
Restoring enumeration
df(t)/dt |t=ti = (−fi+2 + 8fi+1 − 8fi−1 + fi−2) / (12 δt) + O(δt⁴)   (10)
Error of interpolating polynomials
f(t) − Pn(t) = f⁽ⁿ⁺¹⁾(τ)/(n+1)! · (t − t0)(t − t1) . . . (t − tn)   (11)
▶ Important: τ can depend on t and n
Error of interpolating polynomials: an example
▶ Let f (t) = sin (t)
▶ t0 = 0, t1 = π/2
▶ P1(t) = 2t/π
▶ f(t) − P1(t) = −(sin(τ)/2) t (t − π/2).
Comparing the error estimate with exact difference
[Plot for 0 ≤ t ≤ π/2 comparing the exact difference sin(t) − 2t/π
with the error-estimate curve t (t − π/2)/2, both between 0 and about 0.3.]
Comparing the error estimate with exact difference
[Plot for 0 ≤ t ≤ π/2 of the ratio (sin(t) − 2t/π) / (−t (t − π/2)/2),
which varies between roughly 0.45 and 0.80: sin(τ) indeed depends on t.]
Scaling of error for finite difference approximations
dᵏ/dtᵏ (f(t) − Pn(t)) =
= dᵏ/dtᵏ [ f⁽ⁿ⁺¹⁾(τ)/(n+1)! · (t − t0)(t − t1) . . . (t − tn) ] =
= f⁽ⁿ⁺¹⁾(τ)/(n+1)! · dᵏ/dtᵏ [ (t − t0)(t − t1) . . . (t − tn) ] +
+ d/dt [ f⁽ⁿ⁺¹⁾(τ)/(n+1)! ] · dᵏ⁻¹/dtᵏ⁻¹ [ (t − t0)(t − t1) . . . (t − tn) ] + . . .   (12)
▶ Normally, the first term (shown in blue) will be the leading error
▶ Sometimes the first term vanishes, and then the second term (in
red) becomes the leading error
▶ The second (red) term scales as the next power of δt
Example: central finite difference of 2nd order
Interpolating polynomial:
P2(t) = fi−1 + (D+f)0 (t − ti−1) + (1/2) (D+²f)0 (t − ti−1)(t − ti)   (13)
Finite-difference approximation to 2nd derivative:
d²P2(t)/dt² |t=ti = (fi+1 + fi−1 − 2fi)/δt² + O(δt²)   (14)
Example: central finite difference of 2nd order
Error of our finite-difference approximation:
d²/dt² (f(t) − P2(t)) |t=ti =
= d²/dt² [ f⁽³⁾(τ)/3! · (t − ti−1)(t − ti)(t − ti+1) ] |t=ti =
= f⁽³⁾(τ)/3! · d²/dt² [ (t − ti−1)(t − ti)(t − ti+1) ] |t=ti +
+ 2 d/dt [ f⁽³⁾(τ)/3! ] |t=ti · d/dt [ (t − ti−1)(t − ti)(t − ti+1) ] |t=ti +
+ d²/dt² [ f⁽³⁾(τ)/3! ] |t=ti · (t − ti−1)(t − ti)(t − ti+1) |t=ti   (15)
Introduce u = t − ti
d²/dt² [ (t − ti−1)(t − ti)(t − ti+1) ] |t=ti =
= d²/du² [ (u + δt) u (u − δt) ] |u=0 = 0   (16)
Therefore, the next, higher-order term contributes instead!
d/dt [ (t − ti−1)(t − ti)(t − ti+1) ] |t=ti =
= (ti − ti−1)(ti − ti+1) = −δt²   (17)
This scaling is one order higher than we would normally expect!
▶ We use (n + 1) consecutive function values fi to construct an
interpolating polynomial of order n.
▶ The error of the order-n interpolating polynomial scales as δtⁿ⁺¹.
▶ The error of the m-th derivative of the interpolating polynomial
scales as δtⁿ⁺¹⁻ᵐ.
▶ If the number of function values (n + 1) is odd and we
approximate a derivative of even order m at the central point of
the (n + 1) points, the error scales as δtⁿ⁺²⁻ᵐ, that is, one
order higher.
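The last bullet can be checked numerically. A quick sketch for the 3-point second derivative of sin(t) (the evaluation point t = 1 and the step sizes are arbitrary choices): here n + 1 = 3 and m = 2, so the generic estimate would be O(δt), but the central placement gives O(δt²), so halving δt should shrink the error by a factor of about 4:

```python
import math

# Error of the 3-point central second derivative of sin(t).
def second_derivative_error(dt, t=1.0):
    approx = (math.sin(t + dt) - 2 * math.sin(t) + math.sin(t - dt)) / dt**2
    return abs(approx - (-math.sin(t)))   # exact second derivative is -sin(t)

e1 = second_derivative_error(0.1)
e2 = second_derivative_error(0.05)
print(e1 / e2)  # ratio close to 4 confirms O(dt^2) scaling
```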
Poll: how will the error of this finite difference
approximation scale with δt?
▶ Points ti−3 , ti−2 , ti−1 , ti and ti+1
▶ Fourth order derivative at t = ti
Poll: how will the error of this finite difference
approximation scale?
▶ Points ti−3 , ti−2 , ti−1 , ti , ti+1 , ti+2 and ti+3
▶ Second order derivative at t = ti
Higher-order finite difference approximations
f′(ti) = (fi+1 − fi−1) / (2δt) + O(δt²)
f′(ti) = (−fi+2 + 8fi+1 − 8fi−1 + fi−2) / (12δt) + O(δt⁴)
f′(ti) = (fi+3 − 9fi+2 + 45fi+1 − 45fi−1 + 9fi−2 − fi−3) / (60δt) + O(δt⁶)   (18)
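The three stencils in (18) can be compared directly on a smooth test function; a quick sketch with f(t) = sin t at t = 1 and δt = 0.1 (both arbitrary choices):

```python
import math

# Compare the three approximations of Eq. (18) for f'(t), with f(t) = sin(t).
t, dt = 1.0, 0.1
f = lambda k: math.sin(t + k * dt)   # plays the role of f_{i+k}
exact = math.cos(t)

d2 = (f(1) - f(-1)) / (2 * dt)
d4 = (-f(2) + 8 * f(1) - 8 * f(-1) + f(-2)) / (12 * dt)
d6 = (f(3) - 9 * f(2) + 45 * f(1) - 45 * f(-1) + 9 * f(-2) - f(-3)) / (60 * dt)

for name, approx in (("O(dt^2)", d2), ("O(dt^4)", d4), ("O(dt^6)", d6)):
    print(name, abs(approx - exact))  # errors drop as the order increases
```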
Will higher-order finite differences always improve
precision?
▶ Quickly changing functions?
▶ Finite precision of floating-point numbers?
▶ Discontinuous functions/derivatives?
▶ Imperfect (noisy) data?
Improvements of the Euler method
Poll: What improvement strategy DOES NOT
necessarily lead to more precise or stable method?
▶ Using higher-order finite difference approximations for df(t)/dt
▶ Using improved integration schemes (integral derivation of the
Euler method)
▶ Special methods for Newtonian equations of motion
▶ Using implicit methods
Milne method: higher-order finite differences
df(t)/dt = F(t, f(t))  ⇒  (f̄i+1 − f̄i−1) / (2δt) = F(ti, f̄i)   (19)
Implicit Euler method
▶ Instead of the forward finite difference for df(t)/dt, use the backward one
▶ Usual Euler method: fi+1 = fi + δt F (ti , fi )
▶ Implicit Euler method: fi+1 = fi + δt F (ti+1 , fi+1 )
▶ The iterative relations differ by terms of order δt², subleading
compared to discretization errors
▶ Usual Euler: straightforward expression for fi+1 in terms of fi
▶ Implicit Euler: solve an equation to express fi+1 in terms of fi
df(t)/dt = −f(t)
fi+1 = fi − δt fi  ⇒  fi+1 = fi (1 − δt)
fi+1 = fi − δt fi+1  ⇒  fi+1 = fi / (1 + δt)   (20)
▶ For the implicit method, iterations are well-behaved for all
δt > 0!
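The contrast between the two update rules in (20) is easy to see with a deliberately large step; a sketch with δt = 3 (arbitrary, chosen so that |1 − δt| > 1):

```python
# Explicit vs implicit Euler for df/dt = -f with a large step dt = 3.
# The exact solution exp(-t) decays; explicit Euler oscillates and blows up,
# while implicit Euler decays monotonically for any dt > 0.
dt, n_steps = 3.0, 20
f_explicit = f_implicit = 1.0

for _ in range(n_steps):
    f_explicit = f_explicit * (1 - dt)   # f_{i+1} = f_i (1 - dt)
    f_implicit = f_implicit / (1 + dt)   # f_{i+1} = f_i / (1 + dt)

print(abs(f_explicit), abs(f_implicit))
```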
Implicit Euler method
d²f(t)/dt² = −f(t)  ⇒
⇒  df(t)/dt = g(t),   dg(t)/dt = −f(t)   (21)
Use backward finite difference to approximate df(t)/dt:
(f̄i+1 − f̄i)/δt = ḡi+1 ,   (ḡi+1 − ḡi)/δt = −f̄i+1   (22)
f̄i+1 = f̄i + δt ḡi+1 ,   ḡi+1 = ḡi − δt f̄i+1   (23)
Implicit Euler method
f̄i+1 − δt ḡi+1 = f̄i ,   δt f̄i+1 + ḡi+1 = ḡi   ⇒

⇒  f̄i+1 = (f̄i + δt ḡi) / (1 + δt²) ,   ḡi+1 = (−δt f̄i + ḡi) / (1 + δt²)   (24)
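For this linear system each implicit step amounts to solving the 2×2 system in (24); a sketch using NumPy (assumed available; the step size and duration are arbitrary choices):

```python
import numpy as np

# Implicit Euler for d^2 f/dt^2 = -f via the linear system of Eq. (24):
# [[1, -dt], [dt, 1]] @ [f_{i+1}, g_{i+1}] = [f_i, g_i] at every step.
dt, n_steps = 0.1, 100
A = np.array([[1.0, -dt], [dt, 1.0]])
state = np.array([1.0, 0.0])            # f(0) = 1, g(0) = 0

for _ in range(n_steps):
    state = np.linalg.solve(A, state)   # solve instead of an explicit update

energy = state[0]**2 + state[1]**2      # conserved by the exact dynamics
print(energy)  # < 1: implicit Euler damps the oscillation
```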
▶ Equations for f¯i+1 are typically nonlinear
▶ Use numerical methods to solve, e.g. the Newton-Raphson
method
▶ Implicit methods can do a lot of “magic” where explicit
methods fail miserably
Implicit Euler method
▶ Analytic problems on implicit Euler method in the final exam:
▶ Simple linear equations
▶ A system of at most two linear equations
▶ A quadratic equation in single variable (or other trivially
solvable nonlinear equation)
▶ In case of multiple roots: choose fi+1 that differs from fi by
terms of order δt
▶ Example: apply one step of the implicit Euler method to the ODE
df(t)/dt = t √f , starting with f(0) = f0
▶ Higher-level question: what happens if f0 = 0?
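A sketch of the example above (this worked step is my illustration, not a provided solution): one implicit Euler step from t = 0 to t1 = δt gives f1 = f0 + δt t1 √f1, and substituting x = √f1 turns this into the quadratic x² − δt² x − f0 = 0.

```python
import math

# One implicit Euler step for df/dt = t * sqrt(f) from t = 0 to t1 = dt:
#   f1 = f0 + dt * t1 * sqrt(f1),  with t1 = dt.
# With x = sqrt(f1):  x^2 - dt^2 * x - f0 = 0.
def implicit_step(f0, dt):
    x = (dt**2 + math.sqrt(dt**4 + 4 * f0)) / 2   # non-negative root
    return x**2                                    # f1 = x^2

f0, dt = 1.0, 0.01
f1 = implicit_step(f0, dt)
print(f1)   # stays close to f0, as required of the physical root

# Higher-level question: for f0 = 0 this root gives f1 = dt^4 > 0,
# while x = 0 gives f1 = 0 -- the implicit equation has two solutions.
print(implicit_step(0.0, dt))
```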
Symplectic Euler method (aka Leapfrog/Verlet ...)
d²f(t)/dt² = −f(t)  ⇒
⇒  df(t)/dt = g(t),   dg(t)/dt = −f(t)   (25)
Normal Euler method:
f̄i+1 = f̄i + δt ḡi ,   ḡi+1 = ḡi − δt f̄i   (26)
When we calculate ḡi+1 , f¯i+1 is already calculated!
f̄i+1 = f̄i + δt ḡi ,   ḡi+1 = ḡi − δt f̄i+1   (27)
Symplectic Euler method (aka Leapfrog/Verlet ...)
d²f(t)/dt² = F(t, f(t))  ⇒
⇒  df(t)/dt = g(t),   dg(t)/dt = F(t, f(t))   (28)
Normal Euler method:
f̄i+1 = f̄i + δt ḡi ,   ḡi+1 = ḡi + δt F(ti, f̄i)   (29)
When we calculate ḡi+1 , f¯i+1 is already calculated!
f̄i+1 = f̄i + δt ḡi ,   ḡi+1 = ḡi + δt F(ti+1, f̄i+1)   (30)
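The one-line difference between the normal and symplectic updates has a dramatic effect on long-time behaviour; a sketch for the oscillator F = −f (step size and duration are arbitrary choices):

```python
# Normal vs symplectic Euler for d^2 f/dt^2 = -f.
# The only change: the symplectic update of g uses the already-updated f.
dt, n_steps = 0.1, 1000
f_n, g_n = 1.0, 0.0     # normal Euler
f_s, g_s = 1.0, 0.0     # symplectic Euler

for _ in range(n_steps):
    f_n, g_n = f_n + dt * g_n, g_n - dt * f_n   # uses the old f
    f_s = f_s + dt * g_s
    g_s = g_s - dt * f_s                        # uses the new f

# "energy" f^2 + g^2 is conserved by the exact dynamics
print(f_n**2 + g_n**2, f_s**2 + g_s**2)
```

The normal-Euler energy grows by a factor (1 + δt²) every step, while the symplectic energy stays bounded near its initial value.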
Symplectic Euler method - stability
▶ Symplectic Euler is an area-preserving map in the space of
(g , f )
▶ Jacobian: (there will be no questions about Jacobians in the final exam)
∂(f̄i+1, ḡi+1) / ∂(f̄i, ḡi) = det ( ∂f̄i+1/∂f̄i   ∂ḡi+1/∂f̄i )
                                  ( ∂f̄i+1/∂ḡi   ∂ḡi+1/∂ḡi ) = 1   (31)
▶ This prohibits both inward/outward spiralling behaviour