Diedrich Notes Hs17

This document is a lecture on control systems engineering. It covers topics such as signals and systems, system modeling, linearization, stability analysis, response of first order systems, controllability and observability, transfer functions, and pole placement. The document contains definitions of key concepts, examples of how to model and analyze systems, and problems for students to work through related to these control systems engineering topics.


Control Systems I

Paul Aurel Diederichs

December 8, 2017

Paul Aurel Diederichs Control Systems I HS 2017

Contents

1 Signals and Systems
1.1 Definitions
1.2 Classification of System Characteristics
1.3 Control System Blocks

2 System Modeling
2.1 System Equations
2.2 Definitions
2.2.1 States
2.2.2 Equilibrium
2.3 Linearization Recipe
2.3.1 Identify the system equations (modeling)
2.3.2 Determine an equilibrium State
2.3.3 Linearization via the Jacobian Matrix
2.4 Implementation of Linearized Systems (Deviation Variables)
2.5 Problems
2.5.1 Problem 1
2.5.2 Problem 2
2.5.3 Problem 3
2.6 Solutions
2.6.1 Problem 1
2.6.2 Problem 2
2.6.3 Problem 3

3 General Solution of Linear Time Invariant Systems

4 Stability
4.1 Lyapunov Stability
4.1.1 Lyapunov Asymptotically Stable
4.1.2 Lyapunov Stable
4.1.3 Lyapunov Unstable
4.1.4 Phase Portraits
4.1.5 Determining Lyapunov Stability
4.1.6 Lyapunov's Stability Principle
4.1.7 BIBO Stability
4.2 Problems
4.2.1 Problem 1
4.2.2 Problem 2
4.2.3 Problem 3
4.2.4 Problem 4
4.2.5 Problem 5
4.2.6 Problem 6
4.3 Solutions
4.3.1 Problem 1
4.3.2 Problem 2
4.3.3 Problem 3
4.3.4 Problem 4
4.3.5 Problem 5
4.3.6 Problem 6

5 System Response
5.1 Effects of the Eigenvalues on the Initial-Condition Response

6 Forced Response of First Order Systems
6.1 First Order Systems
6.1.1 Input Signals
6.1.2 Time Constant
6.1.3 Impulse Response
6.1.4 Step Response
6.2 Problems
6.2.1 Problem 1
6.2.2 Problem 2
6.3 Solutions
6.3.1 Problem 1
6.3.2 Problem 2

7 Controllability and Observability
7.1 Controllability
7.2 Observability
7.3 Controllability and Observability of Diagonal Systems
7.3.1 Stabilizability and Detectability

8 Transfer Functions
8.1 Laplace Transformation
8.2 Transfer Functions
8.2.1 General Definition
8.2.2 Derivation
8.2.3 Input u(t) = e^(st)

9 Pole Placement

10 Problems
10.1 Problems
10.1.1 Problem 1
10.1.2 Problem 2
10.1.3 Problem 3
10.1.4 Problem 4
10.1.5 Problem 5
10.1.6 Problem 6
10.1.7 Problem 7

11 Solutions
11.1 Problems
11.1.1 Problem 1
11.1.2 Problem 2
11.1.3 Problem 3
11.1.4 Problem 4
11.1.5 Problem 5
11.1.6 Problem 6
11.1.7 Problem 7

14 Feedback Systems
14.1 Transfer Functions
14.1.1 Open-Loop Gain
14.1.2 Complementary Sensitivity/Closed-Loop Transfer Function
14.1.3 Sensitivity Transfer Function
14.2 Closed-Loop Dynamics
14.3 Proportional Control

15 Root Locus
15.0.1 Why - Importance of the Root Locus
15.0.2 Root Locus Plot
15.0.3 Derivation of Root Locus Plot
15.0.4 Rules - Root Locus
15.0.5 Symmetry
15.0.6 Number of Branches
15.0.7 Starting and Ending Points
15.0.8 Root Locus on Real Axis
15.0.9 Asymptotes of the Root Locus
15.1 Example

16 Frequency Response

17 Bode Plot
17.1 Rules
17.2 Repeated Poles
17.3 Pole at the Origin
17.4 Complex Poles
17.5 Problems
17.5.1 Problem 1
17.5.2 Problem 2
17.5.3 Problem 3
17.5.4 Problem 4
17.6 Solutions
17.6.1 Problem 1
17.6.2 Problem 2
17.6.3 Problem 3
17.6.4 Problem 4

18 Stability Margins
18.1 Poles of the Closed-Loop Transfer Function
18.2 Bode Plot & Horror Point -1
18.2.1 Phase Margin
18.2.2 Gain Margin
18.2.3 Robustness

19 Nyquist Diagram
19.1 Drawing the Nyquist Plot
19.2 Nyquist's Stability Theorem
19.3 Problems
19.3.1 Problem 1
19.3.2 Problem 2
19.3.3 Problem 3
19.3.4 Problem 4
19.4 Solutions
19.4.1 Problem 1
19.4.2 Problem 2
19.4.3 Problem 3
19.4.4 Problem 4

20 System Specifications
20.1 Steady-State Error
20.2 System Type
20.3 Limitations of Proportional Control
20.4 Time Domain Specifications
20.5 Frequency Domain Specifications

21 Dominant Poles Approximation


1 Signals and Systems

1.1 Definitions
Signal: A signal is a mapping from the time domain T (continuous in CS I) to the
signal space W (the real numbers in CS I). It is a function of time that represents
a physical quantity such as a force or a position.
Time Domain: the time domain T lies on the real number line, and therefore
T = R.
Signal Space: the signal space also lies on the real number line, and thus
W = R. This can be extended to vector-valued signals, for which W = R^n for
some integer n.

Figure 1: Mapping from Time to Signal Space

System: A system is an input-output model, which maps an input signal u(t) to an


output signal y(t). In the diagram the Σ is a system, which transforms the input
signal u into an output signal y.
y(t) = (Σu)(t) ∀t ∈ T (1)

Figure 2: System Σ with input signal u and output signal y

The system Σ operates on the signal u to produce the signal y; this does not mean
that y(t) = Σ(t) · u(t).

1.2 Classification of System Characteristics


SISO/MIMO: SISO (single input single output) systems are systems with exactly
one input and one output signal. For MIMO (multiple input multiple output) sys-
tems the dimension of both the input and output signal is greater than 1.

Time invariant/time varying: Time-invariant systems are described by system
equations whose coefficients do not vary with time, i.e., are not functions of time.
For a time-invariant system the following equation must be satisfied:

Σ(u(t − τ)) = y(t − τ)    (2)

Consequently, the output of a time-invariant system remains constant for all times
if the input and states are kept constant. Time-varying systems, on the other hand,
contain coefficients that depend on time.

6
Paul Aurel Diederichs Control Systems I HS 2017

Example of a Time-Invariant and a Time-Varying System

a) Consider y(t) = t · x(t). Start with a delay of the input, xd(t) = x(t + δ):
y1(t) = t · xd(t) = t · x(t + δ)
Now delay the output by δ instead:
y2(t) = y(t + δ) = (t + δ) · x(t + δ)
y1(t) ≠ y2(t), so the system is time-varying.

b) Consider y(t) = 10 · x(t). Start with a delay of the input, xd(t) = x(t + δ):
y1(t) = 10 · xd(t) = 10 · x(t + δ)
Now delay the output by δ instead:
y2(t) = y(t + δ) = 10 · x(t + δ)
y1(t) = y2(t), so the system is time-invariant.
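The two delay checks above can also be run numerically; in the sketch below the sampling grid, the delay δ = 2, and the test input sin(t) are arbitrary choices for illustration, not values from the notes:

```python
import numpy as np

# Time-invariance test on a sampled grid: delaying the input and then applying
# the system must give the same result as applying the system and then
# delaying the output.
t = np.linspace(0, 10, 1001)
delta = 2.0                              # delay
x = lambda tt: np.sin(tt)                # arbitrary test input

# System a: y(t) = t * x(t)  (time-varying coefficient)
y1_a = t * x(t + delta)                  # system applied to delayed input
y2_a = (t + delta) * x(t + delta)        # delayed output y(t + delta)

# System b: y(t) = 10 * x(t)  (constant coefficient)
y1_b = 10 * x(t + delta)
y2_b = 10 * x(t + delta)

print(np.allclose(y1_a, y2_a))           # False: y = t * x(t) is time-varying
print(np.allclose(y1_b, y2_b))           # True:  y = 10 * x(t) is time-invariant
```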
Causal/Acausal: A causal system is a system whose output depends on past and
current inputs but not on future inputs, i.e., the output y(t0) depends only on the
input x(t) for t ≤ t0. Consequently, the relevant time domain for a causal system
is (−∞, t0]. Note that nature, i.e., physical reality, is considered to be a causal
system.

Static/Dynamic: The output of a static system only depends on the current value
of the input signal and is not dependent on the past. Dynamic systems possess a
reservoir, which contains information about the past of the system. Systems which
may be described by differential equations are always dynamic.

Linear/Nonlinear: No term in the system equations may be nonlinear in x(t), u(t), y(t)
for a system to be linear. Nonlinear systems contain nonlinear terms of x(t), u(t), y(t).
Linearity means that the following two conditions must hold:
• f (a · x) = a · f (x)

• f (x1 + x2 ) = f (x1 ) + f (x2 )


Order of a system: The order of a system always equals the number of reservoir
variables, or states. The order of a differential equation equals the order of its
highest-order derivative. Thus, if a system is described by an n-th-order differential
equation, the order of the system is likewise n.

Exam Questions:
1. All signals are scalars. The system d/dt y(t) = (u(t + 1))² is:

☐ Causal
☐ Memoryless / Static
☐ Time-Invariant
☐ Linear


2. All signals are scalars. The system y(t) = t² · u(t − 1) is:

☐ Linear
☐ Time-Invariant
☐ Causal
☐ Memoryless / Static

1.3 Control System Blocks


The block diagram is a graphical representation of the control system in the signal
domain (Laplace domain). Each block contains a transfer function (covered later
in the course). The connections between the blocks represent the paths of the
signal flow.

a) Serial Connection   b) Parallel Connection   c) Feedback Parallel Connection

The associated input-output behaviors are described by:

a) Serial Connection: Σtot = Σ1 · Σ2

b) Parallel Connection: Σtot = Σ1 + Σ2

c) Feedback Parallel Connection: Σtot = Σ1 / (1 + Σ1 · Σ2)


2 System Modeling

2.1 System Equations


Frequently we wish to control complex non-linear systems. In order to design a
suitable controller for such systems, we need to model and simplify the behavior of
the system. We utilize physical and mechanical principles, such as linear and angular
momentum, to derive a mathematical model of the system’s behavior. This model
is then linearized and finally normalized around an equilibrium point (x0 , u0 ) to
obtain a state-space description of the system. The general state-space description
of a possibly non-linear system is:

ż(t) = f(z(t), v(t))
w(t) = g(z(t), v(t))    (3)

where:

• v(t) ∈ R: input of the system

• w(t) ∈ R: output of the system

• z(t) ∈ R^n: state of the system

The linearized state-space description for a SISO system is:


d/dt x(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (4)

where:

• A ∈ R^(n×n): square matrix

• B ∈ R^(n×1): column vector

• C ∈ R^(1×n): row vector

• D ∈ R^(1×1): scalar

• u(t) ∈ R: input of the normalized system

• y(t) ∈ R: output of the normalized system

• x(t) ∈ R^n: state of the normalized system

2.2 Definitions
2.2.1 States
The state of a dynamical system is a collection of variables that completely charac-
terizes the motion of a system for the purpose of predicting future motion. For a
system of planets the state is simply the positions and the velocities of the planets.
The number of states is equivalent to the order of the differential equation, i.e. a
second order differential equation will have two states.


2.2.2 Equilibrium
A system is in an equilibrium if all of the system’s state variables remain con-
stant/stationary and consequently do not change. Therefore, the following condition
must be satisfied:
d/dt z(t) |_(ze, ve) = 0    (5)

It follows that:

f(ze, ve) = 0,   we = g(ze, ve)    (6)

2.3 Linearization Recipe


2.3.1 Identify the system equations (modeling)
Using physical principles obtain a mathematical representation of the system:

ż(t) = f(z(t), v(t))
w(t) = g(z(t), v(t))    (7)

2.3.2 Determine an equilibrium State


Linearization tells us how a system behaves when it is near a certain state and
control. The next step is to determine where we want to linearize the system. We
have to find a state xeq and control ueq where the system will remain stationary and
the state does not change.
ẋ(t) = 0 = f (xeq , ueq ) (8)

2.3.3 Linearization via the Jacobian Matrix


Linearize f and g around xeq and ueq. The goal here is to derive the A, B, C, and D
matrices for the linearized system. The intuition behind linearization is illustrated
in Figure 3: when we linearize a function, we approximate it by the tangent line to
the function. In general, you have to imagine doing this in a higher-dimensional
space.

Figure 3: Graphical Interpretation of Linearization

We use the Jacobian matrix to derive the linear state-space description of a
non-linear system:
   
[ ẋ1(t) ]   [ f0,1(x(t), u(t)) ]
[   ⋮   ] = [        ⋮         ] ,   y(t) = g0(x(t), u(t))    (9)
[ ẋn(t) ]   [ f0,n(x(t), u(t)) ]

with all partial derivatives evaluated at the equilibrium (x = xeq, u = ueq):

A = ∂f0/∂x = [ ∂f0,1/∂x1  ⋯  ∂f0,1/∂xn ]
             [     ⋮       ⋱      ⋮    ]
             [ ∂f0,n/∂x1  ⋯  ∂f0,n/∂xn ]

B = ∂f0/∂u = [ ∂f0,1/∂u ]
             [    ⋮     ]                          (10)
             [ ∂f0,n/∂u ]

C = ∂g0/∂x = [ ∂g0/∂x1  ⋯  ∂g0/∂xn ]

D = ∂g0/∂u
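As a sketch of steps 2.3.1 to 2.3.3, the Jacobians can also be approximated numerically with central finite differences instead of symbolic differentiation. The damped pendulum below and all its parameter values are assumptions for illustration, not an example from the notes:

```python
import numpy as np

# Assumed example: damped pendulum with input torque u,
#   x1' = x2
#   x2' = -(g/l)*sin(x1) - c*x2 + u
# linearized around the downward equilibrium xeq = (0, 0), ueq = 0.
g, l, c = 9.81, 1.0, 0.1

def f(x, u):
    return np.array([x[1], -(g / l) * np.sin(x[0]) - c * x[1] + u])

def jacobians(f, xeq, ueq, eps=1e-6):
    """Approximate A = df/dx and B = df/du at (xeq, ueq) by central differences."""
    n = len(xeq)
    A = np.zeros((n, n))
    B = np.zeros((n, 1))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xeq + dx, ueq) - f(xeq - dx, ueq)) / (2 * eps)
    B[:, 0] = (f(xeq, ueq + eps) - f(xeq, ueq - eps)) / (2 * eps)
    return A, B

xeq, ueq = np.zeros(2), 0.0
assert np.allclose(f(xeq, ueq), 0)    # step 2.3.2: (xeq, ueq) is an equilibrium
A, B = jacobians(f, xeq, ueq)
print(A)    # ≈ [[0, 1], [-g/l, -c]]
print(B)    # ≈ [[0], [1]]
```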

2.4 Implementation of Linearized Systems (Deviation Variables)


We have to keep in mind that the derived linearized model of the system is only
valid when the system operates in a sufficiently small range around the equilibrium
point (xeq , ueq ). This is because the non-linear system was linearized around this
equilibrium point. Consequently, the linearized model is only a good approximation
of the original nonlinear model near the equilibrium point (xeq, ueq). This limitation
has to be considered when controlling a linearized system. One way of doing so is
to introduce the deviation variable δx(t), which describes the deviation of x(t) from
the equilibrium point xeq.

Suppose that we have linearized our system at the equilibrium point (xeq , ueq ). We
know that if we start the system at x(t0 ) = xeq , and apply the constant input
u(t) = ueq then the state of the system will remain fixed at x(t) = xeq for all t. Now
what happens if we start a little bit away from xeq , and we apply a slightly different
input from ueq ? In mathematical terms (where δ are the deviation variables):

x(t) = xeq + δx (t)


u(t) = ueq + δu (t) (11)
y(t) = yeq + δy (t)
To answer this question we calculate the linearized state-space description at (xeq, ueq)
and define a coordinate transformation, δx(t) = x(t) − xeq, thus centering the
coordinate system at the equilibrium:

         [ x1(t) − xeq,1 ]
δx(t) =  [ x2(t) − xeq,2 ]    (12)
         [       ⋮       ]
         [ xn(t) − xeq,n ]

Similarly, we transform u(t) and y(t). We denote these transformations as δu (t) =


u(t) − ueq and δy (t) = y(t) − yeq respectively. Now, the variables x(t) and u(t) are
related by the differential equation:

ẋ(t) = f (x(t), u(t))


Substituting in, using the constant and deviation variables, we get:

δ̇x(t) = f(xeq + δx(t), ueq + δu(t))

This is exact. Now, however, let us do a Taylor expansion of the right-hand side
and neglect all terms of order higher than one:

δ̇x(t) ≈ f(xeq, ueq) + ∂f/∂x |_(xeq,ueq) · δx(t) + ∂f/∂u |_(xeq,ueq) · δu(t)    (13)

where the first term vanishes because f(xeq, ueq) = 0 at the equilibrium.

This differential equation approximately governs the deviation variables δx(t) and
δu(t) (we are neglecting second- and higher-order terms), as long as they remain
small. It is a linear, time-invariant differential equation, since the derivatives of δx
are linear combinations of the δx variables and the deviation inputs δu. Therefore,
the linear state-space description can be written as:

d/dt δx(t) = A δx(t) + B δu(t)
δy(t) = C δx(t) + D δu(t)    (14)

Think of δx(t) as the new state, δu(t) as the new control input, and δy(t) as the
new output; otherwise nothing has changed.
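A minimal numerical sketch of this idea, using an assumed scalar system x' = −x + x² + u linearized at xeq = 0, ueq = 0 (so A = −1, B = 1): the linear model, propagated in the deviation coordinate δx and mapped back via x = xeq + δx, tracks the nonlinear simulation while the deviations stay small.

```python
import numpy as np
from scipy.integrate import solve_ivp

A_lin, B_lin = -1.0, 1.0          # Jacobians of f at (0, 0)
xeq, ueq = 0.0, 0.0

def nonlinear(t, x, u):
    return -x + x**2 + u          # assumed nonlinear plant

def linear_dev(t, dx, du):
    return A_lin * dx + B_lin * du  # Eq. (14) in deviation coordinates

du = 0.01                          # small constant deviation input
x0 = xeq + 0.02                    # start slightly off the equilibrium
T = np.linspace(0, 5, 100)

xn = solve_ivp(nonlinear, (0, 5), [x0], t_eval=T, args=(ueq + du,)).y[0]
dx = solve_ivp(linear_dev, (0, 5), [x0 - xeq], t_eval=T, args=(du,)).y[0]
xl = xeq + dx                      # map back: x(t) = xeq + delta_x(t)

print(np.max(np.abs(xn - xl)))     # stays small while the deviations are small
```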

2.5 Problems
2.5.1 Problem 1


2.5.2 Problem 2

2.5.3 Problem 3


2.6 Solutions
2.6.1 Problem 1
The linearized model only describes deviations from the equilibrium (xeq, ueq, yeq);
therefore ueq must be subtracted from the input, and yeq must be added at the
output.

2.6.2 Problem 2
From the figure, it is easy to see that the slope at the equilibrium point is
approximately 10. Furthermore, the distance between the curves shows that ∂f/∂u
is approximately −5.

2.6.3 Problem 3


3 General Solution of Linear Time Invariant Systems

The solution to the general differential equation (linear-state-space description):

d/dt x(t) = A x(t) + B u(t),   x(0) = x0

is:

x(t) = e^(At) · x0 + ∫₀ᵗ e^(A(t−τ)) · B · u(τ) dτ    (15)

where the first term is the initial-condition response and the integral term is the
forced response. The corresponding output is:

y(t) = C · e^(At) · x0 + C · ∫₀ᵗ e^(A(t−τ)) · B · u(τ) dτ + D · u(t)
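Equation (15) can be evaluated directly with the matrix exponential. The 2×2 system, the step input, and the evaluation time below are assumed for illustration; the closed-form value is compared against a numerical ODE solution:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example, eigenvalues -1, -2
B = np.array([0.0, 1.0])
x0 = np.array([1.0, 0.0])
u = lambda t: 1.0                          # unit step input
t1 = 1.5

# Eq. (15): x(t1) = e^(A t1) x0 + integral of e^(A (t1-tau)) B u(tau) over [0, t1]
forced, _ = quad_vec(lambda tau: expm(A * (t1 - tau)) @ B * u(tau), 0, t1)
x_closed = expm(A * t1) @ x0 + forced

# Reference: integrate x' = A x + B u numerically
x_ode = solve_ivp(lambda t, x: A @ x + B * u(t), (0, t1), x0,
                  rtol=1e-10, atol=1e-12).y[:, -1]

print(np.allclose(x_closed, x_ode, atol=1e-6))   # True
```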

4 Stability

Stability properties determine the system behavior if its initial state is close to but
not at the equilibrium point of interest. When an initial state is in the vicinity of
the equilibrium point, the state may remain close, move to the equilibrium point,
or it may drift away from the equilibrium point.

4.1 Lyapunov Stability


Lyapunov stability analyzes the behavior of a system in the vicinity of an equilibrium
point for u(t) = 0. There are three possible cases:

4.1.1 Lyapunov Asymptotically Stable


A system is said to be Lyapunov asymptotically stable about an equilibrium point
xeq, if the perturbed system converges to the equilibrium point xeq for each initial
state x0 without an input (u = 0). The system then satisfies the following
mathematical condition:

lim_(t→∞) ||x(t) − xeq|| = 0,   u(t) = 0    (16)

4.1.2 Lyapunov Stable

Figure 4: Illustration of the ε-ball and δ-ball in the definition of Lyapunov stability


A system is said to be Lyapunov stable about an equilibrium point xeq, if the
perturbed system stays close to the equilibrium point xeq for each initial state x0
without an input (u = 0). Formally, we say that the system is stable if for every
ε > 0, there exists a δ > 0 such that

||x(t) − xeq|| < ε for all t ≥ 0 whenever ||x0 − xeq|| < δ    (17)

Notice that asymptotic stability requires the equilibrium to be Lyapunov stable! This
is important since condition (16) alone does not imply Lyapunov stability.

4.1.3 Lyapunov Unstable


A system is unstable if it is not stable. More specifically, we say that the perturbed
system moves away from the equilibrium point xeq instead of converging towards it:

lim_(t→∞) ||x(t) − xeq|| = ∞,   u(t) = 0    (18)

4.1.4 Phase Portraits

(Phase portrait sketches: asymptotically stable, stable, unstable)

4.1.5 Determining Lyapunov Stability


Lyapunov stability is ultimately determined by evaluating the eigenvalues λi of the
matrix A.

1. Asymptotically Stable: Re(λi ) < 0 ∀i

2. Stable: Re(λi ) ≤ 0 ∀i

3. Unstable: Re(λi ) > 0 for at least one i
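A small helper implementing this eigenvalue test (the example matrices are assumptions for illustration). Note that the marginal case is simplified here: repeated eigenvalues on the imaginary axis can still produce unbounded responses, which this coarse check ignores.

```python
import numpy as np

def lyapunov_verdict(A, tol=1e-12):
    """Classify the equilibrium of x' = A x by the real parts of eig(A)."""
    re = np.real(np.linalg.eigvals(A))
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "stable (marginal)"

print(lyapunov_verdict(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # asymptotically stable
print(lyapunov_verdict(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # stable (marginal), eigenvalues ±j
print(lyapunov_verdict(np.array([[1.0, 0.0], [0.0, -1.0]])))   # unstable
```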

4.1.6 Lyapunov’s Stability Principle


If the linearization of a nonlinear system around an isolated equilibrium point xeq
is asymptotically stable (unstable) then this equilibrium is an asymptotically stable
(unstable) equilibrium of the nonlinear system as well. In the critical cases (some
eigenvalues have zero real part) no information is obtained by analyzing the lineariza-
tion, i.e., the nonlinear system can be asymptotically stable, stable, or unstable, and


this cannot be decided by analyzing its linear approximation only (in this case draw
the phase portrait).

4.1.7 BIBO Stability


A system is called BIBO-stable if and only if every bounded input |u(t)| < M1
results in a bounded output |y(t)| < M2 for all times 0 ≤ t < ∞. Therefore:

||u(t)|| < M1, x0 = 0  ⟹  ||y(t)|| < M2    (19)

Linear systems satisfy this condition of BIBO stability if the integral of the absolute
value of the impulse response of the system converges to a finite value. Formally,
the condition

∫_(−∞)^(∞) |σ(t)| dt < ∞    (20)

must hold, where σ(t) is the impulse response of the system. Note that this is the
case if all of the eigenvalues of the matrix A have negative real parts. Consequently,
Lyapunov asymptotic stability is equivalent to BIBO stability for linear
systems.
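Condition (20) can be checked numerically for a state-space system, whose impulse response for t ≥ 0 is σ(t) = C e^(At) B (with d = 0). The 2×2 example below is an assumption; its eigenvalues are −1 and −2, so the integral converges:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

# Assumed SISO example with transfer function 1 / ((s+1)(s+2)),
# i.e. impulse response sigma(t) = e^(-t) - e^(-2t) >= 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

sigma = lambda t: C @ expm(A * t) @ B        # impulse response for t >= 0
integral, _ = quad(lambda t: abs(sigma(t)), 0, 50)  # tail beyond 50 is negligible

print(np.all(np.real(np.linalg.eigvals(A)) < 0))    # True: Re(eig) < 0
print(integral)                                     # ≈ 0.5 for this example
```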


4.2 Problems
4.2.1 Problem 1

4.2.2 Problem 2


4.2.3 Problem 3

4.2.4 Problem 4


4.2.5 Problem 5

4.2.6 Problem 6


4.3 Solutions
4.3.1 Problem 1

4.3.2 Problem 2

4.3.3 Problem 3


4.3.4 Problem 4

4.3.5 Problem 5

4.3.6 Problem 6


5 System Response

5.1 Effects of the Eigenvalues on the Initial-Condition Response


Distinct (non-repeating), Real Eigenvalues In this case our initial-condition
response (x(0) = [x0,1 , ..., x0,n ]T , u(t) = 0) will lead to the following output:

y(t) = c1 exp(λ1 · t)x0,1 + c2 exp(λ2 · t)x0,2 + ... + cn exp(λn · t)x0,n (21)


We see that the non-forced system response is a sum of exponential functions, whose
exponents are determined by the eigenvalues of the matrix A. As soon as one of the
eigenvalues λi is positive, y(t) will not converge as t → ∞; instead it will diverge,
y(t) → ∞.

Complex Paired Eigenvalues In this case the initial-condition response will in-
clude a sinusoid and therefore y(t) will be oscillatory:

y(t) = α1 exp(σ1 t) sin(ω1 t + φ1 ) + ... + αn exp(σn t) sin(ωn t + φn ) (22)


where λi = σi + j·ωi; σ is the real part of the eigenvalue and ω is the imaginary
part, while αi is a constant and φi is a phase shift. Again the solution contains an
exponential term, which forms the envelope of the oscillatory term given by the
sinusoid.

Repeated eigenvalues In this case the initial-condition response will be of the


form:
y(t) = c1 exp(λ1 · t)x0,1 + t · c2 exp(λ1 · t)x0,2 + ... (23)
Here we see again the exponential, but scaled with some multiple of the time. If
the eigenvalue repeats itself more than once, then higher powers of t appear in the
initial-condition response.

In essence, the initial-condition response of a system is described by the eigenvalues
of matrix A. Consequently, the Lyapunov stability of a system is determined
by calculating the eigenvalues. The relationship between the locations of the eigenvalues
of matrix A in the complex plane and the corresponding initial-condition
response is further highlighted by Figure 5.

Figure 5: Relationship Between the Eigenvalues of A and the System Response
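The complex-eigenvalue case can also be checked numerically. The matrix below is an assumed example in real form with eigenvalues σ ± jω = −0.2 ± 2j; its initial-condition response is a sinusoid inside the decaying envelope e^(σt), as Eq. (22) predicts:

```python
import numpy as np
from scipy.linalg import expm

sigma_r, omega = -0.2, 2.0                       # assumed eigenvalue real/imag parts
A = np.array([[sigma_r, omega], [-omega, sigma_r]])  # eigenvalues sigma_r ± j*omega
x0 = np.array([1.0, 0.0])

t = np.linspace(0, 10, 500)
# Initial-condition response y(t) = first state of e^(At) x0 = e^(sigma t) cos(omega t)
y = np.array([(expm(A * ti) @ x0)[0] for ti in t])

print(np.any(y < 0) and np.any(y > 0))                  # True: oscillatory (sign changes)
print(np.all(np.abs(y) <= np.exp(sigma_r * t) + 1e-9))  # True: inside decaying envelope
```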


6 Forced Response of First Order Systems

6.1 First Order Systems


A first order system is defined as follows:

d/dt x(t) = a x(t) + b u(t)
y(t) = c x(t) + d u(t)    (24)

where:
• a, b, c, d ∈ R: scalar values
• u(t) ∈ R: input
• y(t) ∈ R: output
• x(t) ∈ R: single state
Thus a first order system is described by a first order differential equation. For a
first order system we can calculate the response of the system to an input/forcing
u(t) by solving the convolution:

y(t) = c · e^(at) · x0 + ∫₀ᵗ c · e^(a(t−τ)) · b · u(τ) dτ + d · u(t)    (25)

6.1.1 Input Signals


Below is a list of the typical input signals, which are used to force a system:

• impulse function: δ(t) ← the Dirac delta function/measure

• step function: h(t) ← the Heaviside function

• ramp function: t · h(t)

• harmonic function: c(t) = cos(ωt) · h(t)

6.1.2 Time Constant


We define the time constant of the system to be τ = −1/a. The time constant τ
characterizes the responsiveness of a first-order system: the larger |a| is (for a
stable system, a < 0), the faster the response of the system to the input. The time
constant is defined as the time it takes for the impulse response to decay to
e^(−1) ≈ 0.37 of its initial value, so yδ(0) · e^(−1) = yδ(τ). For a step response, on
the other hand, it is defined as the time it takes for the system to have reached
(1 − e^(−1)) ≈ 0.63 of the final value.
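A numerical check of the 63 % rule (the pole a = −0.5 and the dense time grid below are arbitrary choices): the step response of d/dt x = a·x + b·u with x0 = 0, c = 1, d = 0 first reaches (1 − e^(−1)) of its final value at t ≈ τ = −1/a.

```python
import numpy as np

a, b = -0.5, 1.0
tau = -1.0 / a                            # time constant
t = np.linspace(0, 20, 20001)
y = (b / -a) * (1.0 - np.exp(a * t))      # step response (x0 = 0, c = 1, d = 0)
y_final = b / -a                          # steady-state value

# First time the response crosses 63% of the final value
t63 = t[np.argmax(y >= (1 - np.exp(-1)) * y_final)]
print(tau, t63)                           # both ≈ 2.0
```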

6.1.3 Impulse Response


The output y(t) of a first-order system is now computed for two of the four input
signals introduced above. The impulse response is derived first. Inserting u(t) = δ(t)
into Equation 25, the impulse response for an arbitrary initial condition
x(0) = x0 ≠ 0 is immediately found to be:

yδ(t) = c · e^(at) · x0 + ∫₀ᵗ c · e^(a(t−τ)) · b · δ(τ) dτ + d · u(t)
      = c · e^(at) · x0 + c · e^(at) · b = c · e^(at) · (x0 + b)    (26)

for d = 0. Note that the derivative of yδ(t) at t = 0 is equal to c · a · (x0 + b);
therefore the tangent to yδ(t) at t = 0 passes through the point (−1/a, 0) for all
initial conditions x0. This property can be useful to estimate the time constant of
a system whose measured impulse response is available. See Figure 6 below.

Figure 6: Impulse Response of a First Order System

6.1.4 Step Response


The step response of the system (Eq. 24), i.e., its output y(t) when its input is u(t) = h(t) and d = 0, is easily found using the general solution:

y_h(t) = c·e^{a·t}·x0 + c·∫_0^t e^{a(t−τ)}·b·h(τ) dτ + d·u(t)
y_h(t) = c·e^{a·t}·x0 + c·b·∫_0^t e^{a(t−τ)} dτ        (27)
y_h(t) = c·e^{a·t}·x0 − (c·b/a)·(1 − e^{a·t})
Similarly to the impulse response, the derivative of y_h(t) at t = 0 can be used to graphically determine the system's time constant τ = −1/a.

Figure 7: Step Response of a First Order System


6.2 Problems
6.2.1 Problem 1


6.2.2 Problem 2


6.3 Solutions
6.3.1 Problem 1

6.3.2 Problem 2


7 Controllability and Observability

7.1 Controllability
An LTI system of the form ẋ(t) = A·x(t) + B·u(t) is said to be controllable if for any given initial state x(t = 0) = x0 there exists a control signal u(t) that takes the state to the origin x(t) = 0 in some finite time t. This is the case if the controllability matrix has full rank:

[ B   A·B   A^2·B   ···   A^{n−1}·B ]        (28)

7.2 Observability
An LTI system of the form ẋ(t) = A·x(t) + B·u(t), y(t) = C·x(t) + D·u(t) is said to be observable if any given initial condition x(t = 0) = x0 can be reconstructed from the input and the output signal only, over a finite time interval [0, t]. This is the case if the observability matrix has full rank:

[ C         ]
[ C·A       ]
[ C·A^2     ]        (29)
[    ⋮      ]
[ C·A^{n−1} ]

7.3 Controllability and Observability of Diagonal Systems


If an LTI system is given in diagonal form:

         [ d1  0   ···  0  ]          [ b̃1 ]
ẋ(t) =  [ 0   d2  ⋱   ⋮  ]  x(t) +  [ b̃2 ]  u(t)
         [ ⋮   ⋱   ⋱   0  ]          [  ⋮ ]        (30)
         [ 0   ···  0   dn ]          [ b̃n ]

y(t) = [ c̃1  c̃2  ···  c̃n ] x(t) + D̃·u(t)

• The system is controllable if b̃i ≠ 0 for all i = 1, ..., n

• The system is observable if c̃i ≠ 0 for all i = 1, ..., n

We see that each row element b̃i of the matrix B̃ determines the influence of the input u(t) on the state xi(t). Therefore, if b̃i ≠ 0 we can drive the state xi(t) to 0 with the right input. Similarly, each column element c̃i of the matrix C̃ determines how the state xi(t) influences the output.
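These rank conditions are easy to check numerically. A minimal sketch (the diagonal system below is an assumed example, with b̃2 = 0 so that the second mode cannot be influenced by the input):

```python
import numpy as np

# Rank tests for controllability and observability on an assumed diagonal
# system. With b2 = 0 the second mode cannot be driven by the input, so the
# controllability matrix loses rank.
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [0.0]])          # b2 = 0 -> not controllable
C = np.array([[1.0, 1.0]])            # all c_i != 0 -> observable

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(ctrb))   # 1 < n: not controllable
print(np.linalg.matrix_rank(obsv))   # 2 = n: observable
```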

7.3.1 Stabilizability and Detectability


Weaker conditions are:

• Stabilizable, if all the unstable modes are controllable (we cannot ”control”
everything, but we can avoid that the unstable modes diverge to infinity)


• Detectable, if all the unstable modes are observable (we do not ”see” everything, but we ”see” what could blow up to infinity)

8 Transfer Functions

8.1 Laplace Transformation


In the past sections we have learned that dynamic systems can be described by
differential equations. These differential equations have to be solved to determine
the state x(t) and the output y(t) at time t. Equation (31) enables us to solve these
differential equations, yet the matrix integration and the matrix exponential render
this approach impractical.
y(t) = C·e^{A·t}·x0 + C·∫_0^t e^{A(t−τ)}·B·u(τ) dτ + D·u(t)        (31)
Instead we introduce the Laplace transformation to transform the differential equa-
tion into an algebraic equation in the frequency domain, marked by the variable
s. With the Laplace transformation we leave the time domain t and consider the
frequency domain s. Ultimately the inverse Laplace transformation enables us to
convert our frequency-domain response back to a time response. The Laplace transformation is defined as follows:
X(s) = ∫_0^∞ x(t)·e^{−st} dt        (32)
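The definition can be illustrated numerically: for the assumed signal x(t) = e^{−t}, the integral evaluates to X(s) = 1/(s + 1), which a simple trapezoidal quadrature over a truncated horizon reproduces:

```python
import numpy as np

# Numerical illustration of the Laplace transform definition: for
# x(t) = e^{-t}, X(s) = 1/(s + 1), so X(2) should be close to 1/3.
s = 2.0
t = np.linspace(0.0, 50.0, 500001)
f = np.exp(-t) * np.exp(-s * t)

# Trapezoidal rule for the (truncated) integral from 0 to 50.
X = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
print(X)   # approximately 1/3
```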

8.2 Transfer Functions


8.2.1 General Definition
The transfer function G(s) describes a system in the frequency domain and is defined as:

G(s) = Y(s) / U(s)        (33)
where Y (s) and U (s) are the Laplace transformations of the output y(t) and input
u(t) respectively. The transfer function allows one to calculate the system’s output
for a given input by considering the equation Y (s) = G(s) · U (s) and subsequently
taking the Laplace inverse of Y (s) to obtain the time response y(t).

L^{−1}(Y(s)) = L^{−1}(G(s)·U(s)) = y(t)        (34)

8.2.2 Derivation
Given an LTI system in state space form we can derive the system’s transfer function
using the Laplace transform.

ẋ(t) = A · x(t) + B · u(t)


y(t) = C · x(t) + D · u(t)


Now we can take the Laplace transform of the linear state-space description (assuming zero initial conditions, x(0) = 0). We obtain:

s · X(s) = A · X(s) + B · U (s)


Y (s) = C · X(s) + D · U (s)

Solving for X(s):

(s · I − A)X(s) = B · U (s)
X(s) = (s · I − A)−1 · B · U (s)
Y (s) = C · X(s) + D · U (s)
Y (s) = C · (s · I − A)−1 · B · U (s) + D · U (s)
⇒ G(s) = Y(s)/U(s) = C·(s·I − A)^{−1}·B + D
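The formula can be sanity-checked numerically (the matrices below are an assumed example whose transfer function is known by hand to be 1/(s² + 3s + 2)):

```python
import numpy as np

# Evaluate G(s) = C (sI - A)^{-1} B + D at a test point s and compare with
# the polynomial transfer function obtained by hand for this example.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # companion form of s^2 + 3s + 2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

s = 1.5 + 0.7j
G_state = (C @ np.linalg.inv(s * np.eye(2) - A) @ B + D)[0, 0]
G_poly = 1.0 / (s**2 + 3 * s + 2)          # expected G(s) for this (A,B,C,D)

print(abs(G_state - G_poly))               # essentially zero
```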

8.2.3 Input u(t) = e^{st}


Exponential signals play an important role in linear systems. Many signals can be represented as an exponential or as a sum of exponentials. A constant signal is simply e^{αt} with α = 0. Damped sine and cosine signals can be represented by:

e^{(σ+jω)t} = e^{σt}·(cos(ωt) + j·sin(ωt))


Many other signals can be represented by linear combinations of exponentials. To investigate how a linear system responds to the exponential input u(t) = e^{st}, we consider the state-space system:

ẋ(t) = A · x(t) + B · u(t)


y(t) = C · x(t) + D · u(t)

Let the input signal be u(t) = e^{st} and assume that s ≠ λi(A), i = 1, ..., n, where λi(A) is the i-th eigenvalue of A. The state is then given by:

x(t) = e^{A·t}·x0 + ∫_0^t e^{A(t−τ)}·B·e^{sτ} dτ = e^{A·t}·x0 + e^{A·t}·∫_0^t e^{(s·I−A)τ}·B dτ

Since s ≠ λi(A) the integral can be evaluated and we get:

x(t) = e^{A·t}·x0 + e^{A·t}·(s·I − A)^{−1}·e^{(s·I−A)τ}·B |_{τ=0}^{τ=t}
     = e^{A·t}·x0 + e^{A·t}·(s·I − A)^{−1}·(e^{(s·I−A)t} − I)·B
     = e^{A·t}·(x0 − (s·I − A)^{−1}·B) + (s·I − A)^{−1}·B·e^{st}

The output is thus:

y(t) = C·x(t) + D·u(t)
     = C·e^{A·t}·(x0 − (s·I − A)^{−1}·B) + (C·(s·I − A)^{−1}·B + D)·e^{st}

If the initial state is chosen as:

x(0) = (s·I − A)^{−1}·B


the output only consists of the pure exponential response and both the state and
the output are proportional to the input:

x(t) = (s·I − A)^{−1}·B·e^{st} = (s·I − A)^{−1}·B·u(t)

y(t) = (C·(s·I − A)^{−1}·B + D)·e^{st} = (C·(s·I − A)^{−1}·B + D)·u(t)

Here the exponential input u(t) = e^{st} acts as an eigenfunction of the system.

The ratio of the output and the input is the transfer function:

G(s) = C·(s·I − A)^{−1}·B + D

Using transfer functions, the response of the system to an exponential input is thus:

y(t) = C·x(t) + D·u(t) = C·e^{A·t}·(x0 − (s·I − A)^{−1}·B) + G(s)·e^{st}

where the first term is the transient response and the second term is the steady-state response.

The time response of a system may be considered in two parts:

• Transient response: this part decays to zero as t → ∞ (it dominates the response in the beginning)

• Steady-state response: the response of the system as t → ∞

Note: If we set s = jω we can determine the response of the system forced by u(t) = sin(ω·t) or u(t) = cos(ω·t).

9 Pole Placement

Consider a control system with dynamics ẋ(t) = A · x(t) + B · u(t), and assume that
we are not satisfied with its behavior (e.g., it is unstable since λi > 0, or maybe
stable but extremely slow). We can change the behavior of the system by choosing
the input u(t) in a clever way. We can choose

u(t) = −K · x(t)

where u(t) ∈ R, x ∈ R^{n×1} and K ∈ R^{1×n}. Thus the state-space description of the system becomes:

ẋ(t) = A·x(t) − B·K·x(t) = (A − B·K)·x(t)
y(t) = C·x(t) + D·u(t)        (35)

Now the eigenvalues of the closed-loop system with feedback u(t) = −K·x(t), which describe the dynamics of the closed-loop response of the system, are defined by

det((A − B·K) − λ·I) = 0        (36)
This allows us to influence the eigenvalues of the closed loop system via K. We can
define K in such a manner that the closed-loop poles/eigenvalues are in predefined
locations (denoted γi , i = 1, ..., n ) in the complex plane.
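A minimal worked sketch (system and desired pole locations assumed): for the double integrator in companion form, the closed-loop characteristic polynomial can be matched by hand, and the resulting K verified numerically:

```python
import numpy as np

# Pole placement on the double integrator (values assumed): with
# A = [[0,1],[0,0]], B = [0,1]^T and u = -K x, the closed loop A - B K has
# characteristic polynomial s^2 + k2 s + k1, so placing the eigenvalues at
# gamma = {-1, -2} (i.e. s^2 + 3s + 2) gives K = [2, 3].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])

eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(eigs.real))   # [-2. -1.]
```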


10 Problems

10.1 Problems
10.1.1 Problem 1

10.1.2 Problem 2


10.1.3 Problem 3

10.1.4 Problem 4


10.1.5 Problem 5

10.1.6 Problem 6


10.1.7 Problem 7

11 Solutions

11.1 Problems
11.1.1 Problem 1

11.1.2 Problem 2


11.1.3 Problem 3

11.1.4 Problem 4

11.1.5 Problem 5


11.1.6 Problem 6

11.1.7 Problem 7


12 Proper Transfer Function

G(s) = N(s)/D(s) = (bm·s^m + b_{m−1}·s^{m−1} + ... + b0) / (s^n + a_{n−1}·s^{n−1} + ... + a0)
In control theory, a proper transfer function is a transfer function in which the de-
gree of the numerator does not exceed the degree of the denominator (n ≥ m). A
proper transfer function describes a causal system.

A strictly proper transfer function is a transfer function where the degree of the
numerator is less than the degree of the denominator (n > m). A strictly proper
transfer function describes a strictly causal system.

An improper transfer function is a transfer function where the degree of the numerator is larger than the degree of the denominator (n < m). An improper transfer function describes an acausal system.
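These three cases reduce to a comparison of polynomial degrees, as in this small helper (a hypothetical function, not part of the notes; coefficient lists are highest power first):

```python
# Classify a transfer function G = N/D by the degrees of its coefficient
# lists: proper (n >= m), strictly proper (n > m), improper (n < m).
def classify(num, den):
    m, n = len(num) - 1, len(den) - 1
    if n > m:
        return "strictly proper"
    if n == m:
        return "proper"
    return "improper"

print(classify([1.0, 2.0], [1.0, 3.0, 2.0]))   # strictly proper
print(classify([1.0, 0.0], [2.0, 1.0]))        # proper
print(classify([1.0, 0.0, 0.0], [1.0, 1.0]))   # improper
```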

12.1 Examples
12.1.1 Proper Transfer Function
The following transfer function:

G(s) = N(s)/D(s) = (s^4 + n1·s^3 + n2·s^2 + n3·s + n4) / (s^4 + d1·s^3 + d2·s^2 + d3·s + d4)

is proper because
deg(N(s)) = 4 ≤ deg(D(s)) = 4

12.1.2 Strictly Proper Transfer Function


The following transfer function:

G(s) = N(s)/D(s) = (s^3 + n1·s^2 + n2·s + n3) / (s^4 + d1·s^3 + d2·s^2 + d3·s + d4)

is strictly proper because

deg(N(s)) = 3 < deg(D(s)) = 4

12.1.3 Improper Transfer Function


The following transfer function:

G(s) = N(s)/D(s) = (s^4 + n1·s^3 + n2·s^2 + n3·s + n4) / (d1·s^3 + d2·s^2 + d3·s + d4)

is improper because
deg(N(s)) = 4 > deg(D(s)) = 3


13 Poles and Zeros of Transfer Functions

As mentioned in the previous section, the transfer function G(s) of a system describes
the relationship between the output Y (s) and input U (s). G(s) is a rational function
and therefore, it can be rewritten as:
G(s) = N(s)/D(s)
     = (bm·s^m + b_{m−1}·s^{m−1} + ... + b0) / (s^n + a_{n−1}·s^{n−1} + ... + a0)        (37)
     = k_rl · Π_{i=1}^{m} (s − zi) / Π_{j=1}^{n} (s − pj)
where zi are the zeros and pj the poles of the transfer function. The locations of the system's poles and zeros in the complex plane help characterize the system's behavior.
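Numerically, the poles and zeros are just polynomial roots. A sketch using an assumed example with one nonminimum phase zero:

```python
import numpy as np

# The poles and zeros of G(s) = N(s)/D(s) are the roots of the numerator and
# denominator polynomials (example coefficients assumed).
num = [1.0, -3.0]        # N(s) = s - 3 -> one nonminimum phase zero at +3
den = [1.0, 6.0, 5.0]    # D(s) = (s + 1)(s + 5) -> stable poles at -1, -5

zeros = np.roots(num)
poles = np.roots(den)
print(zeros)             # [3.]
print(np.sort(poles))    # [-5. -1.]
```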

13.1 Poles
One can observe that the system's poles pj are contained in the spectrum of matrix A (the spectrum of a matrix is the set of its eigenvalues). The converse is not true in general, as there may have been a pole-zero cancellation:

pj ∈ {λ1, ..., λn}   but not necessarily   λi ∈ {p1, ..., pm}

Since the poles are contained in the eigenvalues of matrix A, the system is unstable if one of the poles is in the right half of the complex plane (i.e., has a positive real part). For a system to be stable, all of the poles must be in the left half of the complex plane (i.e., have negative real parts). Further similarities arise with respect to the eigenvalues of matrix A:
• poles with an imaginary part cause the system to oscillate

• the further a pole is from the origin (obviously the pole must be in the left hand
plane) the faster the system’s response converges (|Re(pj )| ∝ Speed)

Figure 8: Speed of Response and Pole Position


13.2 Zeros
The zeros (zi ) of a system’s transfer function have no direct influence on the stability
of the system. Zeros with a positive real part, are called nonminimum phase zeros.
In contrast to zeros with a negative real part, which are called minimum phase zeros.

Nonminimum phase zeros (Re(zi) > 0) produce an undershoot in the step response u(t) = h(t): the system ”lies” by reacting in the opposite direction before recovering. The amount of undershoot grows as the zero approaches the origin.

Minimum phase zeros (Re(zi) < 0) produce an overshoot in the step response u(t) = h(t). The overshoot increases as the zero approaches the origin. Hence z = −0.2 will have a much larger overshoot than z = −2.

(Figures: example step responses with nonminimum phase zeros and with minimum phase zeros.)

13.3 Interpretation of System Zeros


Zeros can be interpreted as a positive or negative derivative response. This is because:

L(df(t)/dt) = s·F(s) − f(0)
This interpretation is highlighted by the following example. Given the system's transfer function G(s), with one nonminimum phase zero at z1 = 3:

G(s) = (3 − s)/((s + 1)(s + 5)) = 3/((s + 1)(s + 5)) − s/((s + 1)(s + 5))

where the first term plays the role of L(f(t)) (scaled by 3) and the second of L(df(t)/dt) (here f(0) = 0).

Taking the inverse Laplace transform of G(s)·U(s), where U(s) = 1/s is the Laplace equivalent of a unit step input h(t):

y(t) = 3·( (1/20)·e^{−5·t} − (1/4)·e^{−t} + 1/5 ) − d/dt( (1/20)·e^{−5·t} − (1/4)·e^{−t} + 1/5 )

The response of the system to the step is shown in Figure 9.


Figure 9: Zero interpreted as Derivative Action

13.4 Pole-Zero Cancellation


G(s) = k_rl · Π_{i=1}^{m} (s − zi) / Π_{j=1}^{n} (s − pj)
Pole-zero cancellation takes place if there exists a common pole and zero, i.e., pj = zi. Recall that the system is only stable if all of the poles are in the left half plane, and these poles are the roots of the polynomial D(s). For this reason, it is important to note that one should not cancel any common poles and zeros of the transfer function before checking the roots of D(s). Specifically, suppose that both of the polynomials N(s) and D(s) have a root at s = a, for some complex (or real) number a. One must not cancel out this common zero and pole in the transfer function before checking whether a is in the left or the right half plane.
The reason for this is that, even though the pole will not show up in the response
to the input, it will still appear as a result of any initial conditions in the system, or
due to additional inputs entering the system (such as disturbances). If the pole and
zero are in the right-hand-plane, the system response might blow up due to these
initial conditions or disturbances, even though the input to the system is bounded.

In the case that an unstable pole is canceled with a zero, the system is not Lyapunov stable, but it can be BIBO stable. Note that such systems are to be avoided in practice because the unavoidable small disturbances will always cause the hidden unstable modes to diverge. This is highlighted by the following example:

Given the following state-space description of a system, determine the transfer function G(s) and evaluate both the Lyapunov and BIBO stability:

[ẋ1(t)]   [ 0   1 ] [x1(t)]   [0]
[ẋ2(t)] = [ 2  −1 ] [x2(t)] + [1] u(t)

y(t) = [ −1   1 ] [x1(t); x2(t)]


The system's transfer function is computed as follows:

G(s) = C·(s·I − A)^{−1}·B + D = (s − 1)/((s − 1)(s + 2)) = 1/(s + 2)

where the common factor (s − 1) cancels (pole-zero cancellation).

The eigenvalues of the system are λ1 = 1 and λ2 = −2. Therefore, the system is
Lyapunov unstable. The system is however BIBO stable. Let’s see why: We said
systems are BIBO stable if they satisfy the following condition:
∫_{−∞}^{∞} |σ(t)| dt < ∞

where σ(t) is the impulse response of the system. In this case σ(t) is equal to

σ(t) = L^{−1}(G(s)·L(δ(t))) = L^{−1}(1/(s + 2)) = e^{−2·t}·h(t)

⇒ ∫_{−∞}^{∞} |e^{−2·t}·h(t)| dt = ∫_0^∞ e^{−2·t} dt = 1/2
Therefore, we can conclude that the system is BIBO stable even though it is Lyapunov unstable. This is all due to the pole-zero cancellation.
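The BIBO integral from the example can be confirmed numerically (trapezoidal quadrature over a truncated horizon):

```python
import numpy as np

# The impulse response after the cancellation is sigma(t) = e^{-2t} h(t);
# its absolute integral is 1/2, confirming BIBO stability of the reduced system.
t = np.linspace(0.0, 20.0, 200001)
sigma = np.exp(-2.0 * t)
integral = np.sum(0.5 * (np.abs(sigma)[1:] + np.abs(sigma)[:-1]) * np.diff(t))
print(integral)   # approximately 0.5
```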

(Figures: the impulse response with (0, 0)^T initial conditions is stable; with (0.01, 0.01)^T initial conditions it is unstable.)


13.5 Initial and Final Value Theorem


The initial value theorem allows one to determine the value of the system's response y(t) as t → 0 for any system, stable or not:

y(t → 0) = lim_{s→∞} s·G(s)·U(s)        (38)

13.5.1 Final Value Theorem


The final value theorem allows one to determine the value of the system's response y(t) as t → ∞ for a stable system:

y(t → ∞) = lim_{s→0} s·G(s)·U(s)        (39)

To be able to apply the final value theorem to a system’s transfer function G(s), the
following conditions must be met:

1. all poles of the transfer function must have negative real parts → system must
be stable

2. at most one pole at the origin.

Therefore, the final value theorem cannot be applied to unstable systems.
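A small numeric illustration (G and U assumed): for G(s) = 1/(s + 2) and a unit step U(s) = 1/s, the theorem predicts y(t → ∞) = 0.5, matching the known step response y(t) = (1 − e^{−2t})/2:

```python
import numpy as np

# Final value theorem: lim_{s->0} s G(s) U(s) for G(s) = 1/(s+2), U(s) = 1/s.
def sGU(s):
    return s * (1.0 / (s + 2.0)) * (1.0 / s)

final_from_theorem = sGU(1e-9)                        # evaluate near s = 0
final_from_time = (1.0 - np.exp(-2.0 * 10.0)) / 2.0   # y(t) at large t
print(final_from_theorem, final_from_time)   # both approximately 0.5
```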

13.6 System Representations


The diagram shown in Figure 10 illustrates how one system is transformed from one
description into the other. All forms are equivalent, as they all contain the same
information. Therefore, all forms are also equivalent in reproducing the system’s
I/O, i.e. input-output behavior.

Figure 10: Commutation Diagram for the Different System Representation Forms


13.6.1 From Transfer Function to State Space Representation


Given any transfer function G(s) with no common poles/zeros (no pole zero cancel-
lation), it is straightforward to find a system in state-space form that realizes that
transfer function. This is known as the controller canonical form. Given the transfer
function

G(s) = N(s)/D(s) = (bm·s^m + ... + b1·s + b0) / (s^n + a_{n−1}·s^{n−1} + ... + a1·s + a0)
The state-space representation of the system is:

     [  0     1     0    ···        0       ]        [ 0 ]
     [  0     0     1    ···        0       ]        [ 0 ]
A =  [  ⋮                ⋱         ⋮       ]    B = [ ⋮ ]        (40)
     [  0    ···   ···    0         1       ]        [ 0 ]
     [ −a0  −a1   ···  −a_{n−2}  −a_{n−1}   ]        [ 1 ]

C = [ b0  ···  bm  0  ···  0 ]        D = [0]
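The construction can be sketched in a few lines (the denominator coefficients below are assumed, chosen so that D(s) = (s+1)(s+2)(s+3)); the eigenvalues of the assembled A must then be the roots of D(s):

```python
import numpy as np

# Controller canonical form for an assumed transfer function with
# D(s) = s^3 + 6 s^2 + 11 s + 6 = (s+1)(s+2)(s+3) and N(s) = b1 s + b0.
a = [6.0, 11.0, 6.0]     # a0, a1, a2
b = [1.0, 2.0]           # b0, b1

n = len(a)
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)     # superdiagonal of ones
A[-1, :] = -np.array(a)        # last row: -a0, -a1, ..., -a_{n-1}
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = np.zeros((1, n)); C[0, :len(b)] = b
D = np.zeros((1, 1))

eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)   # the roots of D(s): [-3. -2. -1.]
```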


13.7 Problems
13.7.1 Problem 1

13.7.2 Problem 2


13.7.3 Problem 3

13.7.4 Problem 4


13.8 Solutions
13.8.1 Problem 1

13.8.2 Problem 2


13.8.3 Problem 3

13.8.4 Problem 4


14 Feedback Systems

Figure 11: Block Diagram of Feedback System

The typical feedback system, depicted in Figure 11, contains a plant P(s), whose input u(t) is the output of the controller C(s). Furthermore, it contains the signals r(t) (reference signal), e(t) (error signal), d(t) (output disturbance signal), n(t) (noise signal) and w(t) (input disturbance signal).

The plant is a model of a real physical system and therefore its dynamics cannot
be altered. We cannot modify or change the transfer function P (s). Therefore, the
ultimate aim of feedback systems is to design a controller C(s) that stabilizes the
entire feedback system. We construct C(s) so that the feedback system has the
desired characteristics, such as a quick impulse response or no undershoot caused
by a non-minimum phase zero.

14.1 Transfer Functions


Since the feedback system is linear, the influence of each input signal can be analyzed separately. In order to analyze the effect of one individual input signal on the output, all other input signals are set to 0.

14.1.1 Open-Loop Gain


The transfer function from signal e(t) → y(t) is a serial connection of the controller and the plant transfer functions, while R(s) = W(s) = D(s) = N(s) = 0. Hence the transfer function is:

Y(s) = P(s)·C(s)·E(s)
L(s) = P(s)·C(s)        (41)
This transfer function is called the open-loop gain. It describes the open-loop or
feedforward behavior of the system, which is the behavior of the system without the
output y(t) being fed back.


14.1.2 Complementary Sensitivity/Closed-Loop Transfer Function


The transfer function from signal r(t) → y(t) is a feedback connection, while D(s) =
N (s) = 0. Hence the transfer function is:

E(s) = R(s) − Y(s)
Y(s) = P(s)·C(s)·E(s)
⇒ Y(s) = P(s)·C(s)·(R(s) − Y(s))
Y(s) = (P(s)·C(s))/(1 + P(s)·C(s)) · R(s) = L(s)/(1 + L(s)) · R(s)        (42)
⇒ T(s) = L(s)/(1 + L(s))

14.1.3 Sensitivity Transfer Function


The sensitivity S(s) is the closed-loop transfer function from d(t) → y(t), defined by:

S(s) = 1/(1 + P(s)·C(s)) = 1/(1 + L(s))        (43)

Obviously, S(s) also is the closed-loop transfer function from r(t) → e(t).
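The sensitivity and complementary sensitivity are algebraically linked: S(s) + T(s) = 1 at every s. A quick check with an assumed loop gain:

```python
# For any loop gain L = P C, S = 1/(1+L) and T = L/(1+L) sum to 1.
def L(s):
    return 10.0 / (s * (s + 3.0))      # assumed example loop gain

for s in (0.5 + 1.0j, 2.0 + 0.0j, -1.0 + 3.0j):
    S = 1.0 / (1.0 + L(s))
    T = L(s) / (1.0 + L(s))
    print(abs(S + T - 1.0))            # essentially 0 at every test point
```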

14.2 Closed-Loop Dynamics


With previously introduced transfer functions it is possible to compactly express the
closed-loop system dynamics as follows:

Y (s) = S(s) · [D(s) + P (s) · W (s)] + T (s) · [R(s) − N (s)] (44)


This equation describes how the individual input signals d, w, r, and n influence the
output signal y. Since the feedback system is linear, the influence of each input signal
can be analyzed separately. The resulting output signal is obtained by superposing
all transfer functions. Working in the frequency domain has the distinct advantage
that for each input signal the resulting output signal contribution is obtained by a
simple multiplication with the corresponding transfer function. Notice that equation
(44) formulates important trade-offs present in all control system design problems.
For example the complementary transfer function T (s) should be approximately 1 to
make sure that the output y closely follows the reference signal r, but on the other
hand T (s) should be close to 0 such that y is not influenced by the noise n.


14.3 Proportional Control

Figure 12: Proportional Controller C(s) = k

The proportional controller C(s) = k produces a control action proportional to the error. Therefore, the larger the error e = r − y, the larger the resulting control action. For such a controller the resulting closed-loop transfer function T(s) is defined as:

T(s) = k·P(s) / (1 + k·P(s))

15 Root Locus

15.0.1 Why - Importance of the Root Locus


To understand what root locus plots are, and why they are important, let us examine the behavior of a plant P(s) in a feedback system with a simple proportional controller. Assume that the system is defined by the transfer function:

P(s) = 1 / (s(s + 3))
We will control this system with a very simple proportional controller, in which the input to the system to be controlled is proportional (with gain k) to the difference between the input, R(s), and the output, Y(s).

(Block diagram: r → e → k → u → P(s) = 1/(s(s+3)) → y, with y fed back to the summing junction.)

Figure 13: Feedback System - Root Locus

The open-loop gain and the closed-loop transfer function of the given system are:

L(s) = P(s)·C(s) = k/(s(s + 3))

T(s) = (P(s)·C(s))/(1 + P(s)·C(s)) = (k·P(s))/(1 + k·P(s)) = (k/(s(s+3)))/(1 + k/(s(s+3))) = k/(s² + 3s + k)
Now we want to examine how the behavior of the closed loop system varies as the
value of k changes. In other words, we want to determine how the value of k affects
the transfer function T (s). The poles of T (s) are dependent on k and therefore,
changing k will affect the whole system. So let us try several values of k. Let us
arbitrarily try k = 1, 10 and 100 so that we have a wide range of k values.


Figure 14: k = 1, 10 and 100

The response with k = 1 was too slow, the response with k = 100 was too oscillatory,
and the response with k = 10 is almost just right, though we may want to adjust k
to get a little bit less overshoot. Clearly this method is rather ”hit-or-miss” and it
may take us a long time to find a suitable value for k.

A more analytical method might involve finding the poles of the closed loop transfer
function. Since the transfer function is second order, we can factor the denominator
using the quadratic equation. The poles of T (s) are at:

−3 ± 9 − 4k
p1,2 =
2
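The pole formula can be cross-checked against a numerical root finder for the three gains tried above:

```python
import numpy as np

# Closed-loop poles of T(s) = k/(s^2 + 3s + k): numpy roots vs. the
# quadratic formula p = (-3 +/- sqrt(9 - 4k))/2.
for k in (1.0, 10.0, 100.0):
    roots = np.sort_complex(np.roots([1.0, 3.0, k]))
    d = np.sqrt(complex(9.0 - 4.0 * k))
    formula = np.sort_complex(np.array([(-3.0 - d) / 2.0, (-3.0 + d) / 2.0]))
    print(k, roots)
```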

15.0.2 Root Locus Plot


A root locus plot shows the path of the roots as k is varied, but does not show the
actual values of k. The root locus plot gives us a graphical way to observe how the
roots move as the gain, k, is varied.

Figure 15: Root Locus for T(s) = k/(s² + 3s + k)


15.0.3 Derivation of Root Locus Plot

(Block diagram: r → e → k → u → L(s) = P(s)·C(s) → y, with unity feedback.)

Figure 16: T(s) = k·L(s)/(1 + k·L(s)), with L(s) = (b0·s^m + ... + bm)/(a0·s^n + ... + an)

The poles of T(s) are defined by:

1 + k·P(s)·C(s) = 1 + k·L(s) = 0   ⇒   k·(N(s)/D(s)) = −1 = e^{jπ}

Since this equation involves a complex quantity s (we can have complex roots), both the magnitude and the phase of the two sides of the equation must be equal. The magnitude condition is expressed as:

|k·N(s)/D(s)| = |−1|   ⇒   k·|N(s)/D(s)| = 1

The phase condition is expressed as:

∠(k·N(s)/D(s)) = ∠(−1)   ⇒   ∠(N(s)/D(s)) = ±r·π,  r = 1, 3, 5, ...

15.0.4 Rules - Root Locus


15.0.5 Symmetry
Since the characteristic equation has real coefficients, any complex roots must occur in conjugate pairs (which are symmetric about the real axis). Since the root locus is just a diagram of the roots of the characteristic equation as k varies, it must also be symmetric about the real axis.

Rule 1: The root locus is symmetric about the real axis.

15.0.6 Number of Branches


Since the root locus is just a diagram of the poles of T(s) as k varies, the number of branches must be equal to the order of the denominator polynomial of T(s) = N(s)/D(s).

54
Paul Aurel Diederichs Control Systems I HS 2017

Rule 2: The number of branches of the root locus is equal to the order of the denominator polynomial of T(s) = N(s)/D(s), and therefore to the order of the characteristic polynomial of 1 + k·L(s).

15.0.7 Starting and Ending Points

Rule 3: The locus starts (when k = 0) at poles of the open loop gain L(s), and
ends (when k → ∞ ) at the zeros of the open loop gain L(s).

15.0.8 Root Locus on Real Axis

Rule 4: The locus exists on real axis to the left of an odd number of poles and
zeros of the open loop gain L(s) on the axis.

15.0.9 Asymptotes of the Root Locus

Rule 5: If npoles − nzeros > 0, where npoles is the number of poles of L(s) and nzeros is the number of zeros of L(s), there are asymptotes of the root locus:

1. The asymptotes intersect the real axis at σ = (Σ pi − Σ zi)/(npoles − nzeros), where pi and zi are respectively the poles and zeros of the open-loop transfer function L(s).

2. The asymptotes radiate out with angles θ = (−180° + 360°·i)/(npoles − nzeros) for i = 1, ..., npoles − nzeros.


15.1 Example
We will draw the root locus for the following open-loop transfer function:
L(s) = 1 / (s(s + 3))

This open-loop transfer function would have the following closed-loop transfer function, where k is the proportional controller gain:

T(s) = k·L(s)/(1 + k·L(s)) = (k/(s(s+3)))/(1 + k/(s(s+3))) = k/(s² + 3s + k)

When drawing the root-locus plot, which depicts all possible poles of T (s) as we
vary k from zero to infinity, we mainly look at the open loop transfer function L(s).
For the open loop transfer function, L(s), we have 2 poles (npoles = 2) at p1 = 0 and
p2 = −3. We have no finite zeros and therefore nzeros = 0. We start off by plotting the starting and ending points of the root locus (Rule 3). The root locus starts (k = 0) at the poles p1 and p2 of the open-loop transfer function L(s). These are shown by an × in the diagram below.

Figure 17: Root Locus for T(s) = k/(s² + 3s + k) and L(s) = k/(s(s + 3))

Next we determine where the root locus exists on the real axis using Rule 4. The root locus exists on the real axis to the left of an odd number of poles and zeros of the

open loop gain L(s) on the axis. Therefore, there is locus between 0 and −3 on the
real axis. This is because left of the open-loop pole p1 , we are left of an odd number
of poles and zeros on the real axis. Whereas, left of the pole p2 , we are left of an even
number of poles and zeros on the real axis. Therefore, there exists no locus left of p2 .

Finally, we determine where the asymptotes, which exist since npoles − nzeros = 2 > 0, intersect the real axis by calculating the center of mass. The center of mass is defined as

σ = (Σ pi − Σ zi)/(npoles − nzeros)

where pi and zi are respectively the poles and zeros of the open-loop transfer function L(s). So in our case

σ = −3/2 = −1.5
Next we calculate the angles at which the asymptotes radiate out. These are determined by:

θ = (−180° + 360°·i)/(npoles − nzeros)   for i = 1, ..., npoles − nzeros

Therefore, in our example

θ = (−180° + 360°·i)/2   for i = 1, 2
θ1 = (−180° + 360°)/2 = 90°
θ2 = (−180° + 360°·2)/2 = 270°
Putting all of this information together we can draw the following root locus:

Note: This plot depicts all possible poles of T (s) as k varies from zero to infinity.
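The asymptote data from the example can be restated in a couple of lines:

```python
# Asymptote centroid and angles for L(s) = 1/(s(s+3)).
poles = [0.0, -3.0]
zeros = []
excess = len(poles) - len(zeros)

sigma = (sum(poles) - sum(zeros)) / excess
angles = [(-180.0 + 360.0 * i) / excess for i in range(1, excess + 1)]
print(sigma, angles)   # -1.5 [90.0, 270.0]
```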

57
Corrected

Question 32 Choose the correct answer. (1 Point)

We are given the transfer function:

(s + 4)
g(s) = .
(s4 9s2 )
Which of the following is the root locus plot of g(s)?
Root Locus Root Locus
15 15

10 10
Imaginary Axis (seconds )

Imaginary Axis (seconds )


-1

-1
5 5

0 0

-5 -5

-10 -10

-15 -15
-10 -8 -6 -4 -2 0 2 4 6 8 10 -10 -8 -6 -4 -2 0 2 4 6 8 10
Real Axis (seconds -1 ) Real Axis (seconds )
-1

A D
Root Locus Root Locus
15 3

10 2
Imaginary Axis (seconds )

Imaginary Axis (seconds-1 )


-1

5 1

0 0

-5 -1

-10 -2

-15 -3
-10 -8 -6 -4 -2 0 2 4 6 8 10 -10 -8 -6 -4 -2 0 2 4 6 8 10
Real Axis (seconds -1 ) Real Axis (seconds -1 )
B E
Root Locus
15

10
Imaginary Axis (seconds-1 )

-5

-10

-15
-10 -8 -6 -4 -2 0 2 4 6 8 10
Real Axis (seconds -1 )
C

16 Frequency Response

The steady-state frequency response of a linear system can be computed from its transfer function by setting s = j·ω, corresponding to a complex exponential:

u(t) = e^{j·ω·t} = cos(ω·t) + j·sin(ω·t)

The resulting steady-state output (t → ∞) has the form:

y(t) = G(j·ω)·e^{j·ω·t} = M·e^{j·(ω·t+ϕ)} = M·cos(ω·t + ϕ) + j·M·sin(ω·t + ϕ)

where M and ϕ are the gain and phase of G(j·ω):

M = |G(j·ω)|        ϕ = ∠G(j·ω) = arctan( Im{G(j·ω)} / Re{G(j·ω)} )

The frequency response G(j · ω) can thus be represented by two curves: the gain
curve and the phase curve. The gain curve gives |G(j · ω)| as a function of frequency
ω, and the phase curve gives ∠G(j · ω) also as a function of frequency ω. One
particularly useful way of drawing these curves is to use a log/log(dB) scale for the
gain plot and a log/linear scale for the phase plot. This type of plot is called a Bode
plot.
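As a concrete sketch (G(s) = a/(s + a) with a = 10 assumed, anticipating the example of the next section), the gain and phase at the corner frequency ω = a are 1/√2 (about −3 dB) and −45°:

```python
import numpy as np

# Gain M and phase phi of G(s) = a/(s + a) at s = j w, evaluated at the
# corner frequency w = a.
a = 10.0
w = a
G = a / (1j * w + a)
M = abs(G)
phi = np.degrees(np.angle(G))
print(M, phi)   # 0.707... -45.0
```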

17 Bode Plot

The decibel scale is defined as:

X_dB = 20·log10(X)        X = 10^{X_dB / 20}
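A round-trip through the two formulas:

```python
import math

# X_dB = 20 log10(X) and back: X = 10^{X_dB / 20}.
X = 0.05
XdB = 20.0 * math.log10(X)
X_back = 10.0 ** (XdB / 20.0)
print(XdB, X_back)   # about -26.02 and 0.05
```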

Bode plots are so popular because they are easy to sketch and interpret. Consider
the rational transfer function G(s):

G(s) = (b1(s)·b2(s)) / (a1(s)·a2(s))

We have:

20·log10 |G(j·ω)| = 20 (log10 |b1 (j · ω)| + log10 |b2 (j · ω)| − log10 |a1 (j · ω)| − log10 |a2 (j · ω)|)

and hence we can compute the gain curve by simply adding and subtracting gains
corresponding to terms in the numerator and denominator. Similarly,

∠G(j · ω) = ∠b1 (j · ω) + ∠b2 (j · ω) − ∠a1 (j · ω) − ∠a2 (j · ω)

the phase curve can be obtained by adding and subtracting the corresponding angle
terms. The Bode plots of a complex system are then obtained by adding the gains
and phases of the respective terms.


17.1 Rules
We will derive the rules for drawing Bode plots from an example. Consider the simple transfer function G(s):

G(s) = a / (s + a)
We have the following magnitude and phase terms:

|G(j·ω)| = |a| / |j·ω + a| = a / √(ω² + a²)        ∠G(j·ω) = ∠a − ∠(j·ω + a)

and hence

20·log|G(j·ω)| = 20·(log a − ½·log(ω² + a²))        ∠G(j·ω) = −arctan(ω/a)

Both the gain curve and the phase curve can be approximated by the following straight lines:

20·log|G(j·ω)| ≈ 0                          if ω < a
20·log|G(j·ω)| ≈ 20·(log a − log ω)         if ω > a

∠G(j·ω) ≈ 0                                 if ω < a/10
∠G(j·ω) ≈ −45° − 45°·(log ω − log a)        if a/10 < ω < 10a
∠G(j·ω) ≈ −90°                              if ω > 10a

These asymptotes are just straight lines on the dB vs. log(ω) plot. The approximate gain curve consists of a horizontal line at 0 dB up to the frequency ω = a, called the breakpoint or corner frequency. Note that the corner frequency is equal to the pole location of the transfer function. After the corner frequency (ω > a), the gain decreases as 20·(log a − log ω) dB; on a log-frequency scale this is a straight line with a slope of −20 dB/decade; that is, the gain decreases by 20 dB for every factor-of-ten increase in frequency. The phase curve is zero up to frequency a/10 and then decreases linearly by 45°/decade up to frequency 10a, at which point it remains constant at −90°.

Figure 18: Bode plots for G(s) = 1/(s + 10)
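The quality of these straight-line approximations can be checked numerically. The sketch below (with the arbitrary choice a = 10) compares the exact gain of G(s) = a/(s + a) with its asymptotes:

```python
import math

a = 10.0  # corner frequency (arbitrary choice for the illustration)

def exact_gain_db(w: float) -> float:
    # 20*log10 |a / (j*w + a)|
    return 20 * math.log10(a / math.sqrt(w * w + a * a))

def asymptote_db(w: float) -> float:
    # horizontal line at 0 dB below the corner, -20 dB/decade above it
    return 0.0 if w < a else 20 * (math.log10(a) - math.log10(w))

print(abs(exact_gain_db(0.1) - asymptote_db(0.1)) < 0.01)        # True
print(abs(exact_gain_db(1000.0) - asymptote_db(1000.0)) < 0.01)  # True
# The worst-case error occurs right at the corner frequency: about -3 dB.
print(round(exact_gain_db(a) - asymptote_db(a), 2))              # -3.01
```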


The analogous operations can be performed for unstable poles, minimum-phase
zeros, and non-minimum-phase zeros. The results are shown in the tables below:

              stable poles      unstable poles
    Gain      −20 dB/dec        −20 dB/dec
    Phase     −90°              +90°

Figure 19: Poles always cause an amplitude gradient of −20 dB/decade.

              minimum-phase zeros     non-minimum-phase zeros
    Gain      +20 dB/dec              +20 dB/dec
    Phase     +90°                    −90°

Figure 20: Zeros always cause an amplitude gradient of +20 dB/decade.

Figure 21: Bode diagram of G(s) = 10·(s + 100)/(s + 1)


17.2 Repeated Poles


For example, let us take a repeated pole at s = −a, with the following transfer
function:

    G(s) = 1/(s + a)^r    →    G(j·ω) = 1/(j·ω + a)^r

where r is an integer representing the number of times the pole is repeated. In
this case the approximate magnitude and phase curves are defined as follows:

    20 · log|G(j·ω)| ≈  −20 · r · log(a)       if ω < a
                        −20 · r · log(ω)       if ω > a

    ∠G(j·ω) ≈  0°                                       if ω < a/10
               −45° · r − 45° · r · (log ω − log a)     if a/10 < ω < 10a
               −90° · r                                 if ω > 10a

Once again the asymptotes are just straight lines. The approximate gain curve is
constant at −20·r·log(a) dB up to the corner frequency ω = a; afterwards the
slope breaks downward by 20·r dB/decade. The approximate phase plot is still
zero up to the frequency a/10 and then decreases linearly by r·45°/decade up to
the frequency 10a, at which point it remains constant at −r·90°.

Figure 22: Bode diagram of G(s) = 1/(s + 10)²
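The −20·r dB/decade slope can be verified directly: well above the corner, moving up one decade lowers the gain of 1/(s + a)^r by 20·r dB. A small sketch with the assumed values a = 10, r = 2:

```python
import math

a, r = 10.0, 2  # corner at 10 rad/s, double pole (illustration values)

def gain_db(w: float) -> float:
    # 20*log10 |1 / (j*w + a)**r|
    return 20 * math.log10(1.0 / abs(complex(0.0, w) + a) ** r)

# Well above the corner, going up one decade costs 20*r = 40 dB:
print(round(gain_db(1e5) - gain_db(1e4), 1))  # -40.0
```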


17.3 Pole at the origin


For example, let us determine the Bode plots for the following transfer
function:

    G(s) = 1/s    →    G(j·ω) = 1/(j·ω)

The transfer function G(s) has one pole at the origin. Therefore, the
approximate magnitude and phase curves are:

    20 · log|G(j·ω)| ≈ −20 · log(ω)        if ω > 0

    ∠G(j·ω) ≈ −90°                         if ω > 0

Figure 23: Bode diagram of G(s) = 1/s


17.4 Complex Poles


An important remaining issue is the case of complex-conjugate pole pairs.
Consider the transfer function of a second-order system in standard form:

    G(s) = ω0² / (s² + 2·ζ·ω0·s + ω0²)

The curves can be approximated with the following piecewise-linear expressions:

    20 · log|G(j·ω)| ≈  0                               if ω ≪ ω0
                        40 · log(ω0) − 40 · log(ω)      if ω ≫ ω0

    ∠G(j·ω) ≈  0°        if ω ≪ ω0
               −180°     if ω ≫ ω0

Note that the asymptotic approximation is poor near ω = ω0 and that the Bode
plot depends strongly on ζ near this frequency. At ω = ω0 the largest gain is
obtained, which is approximately equal to:

    |G(j·ω0)| ≈ 1/(2·ζ)

Therefore, we call ω0 the resonance frequency: the frequency at which the
maximum gain is reached. In the undamped case ζ = 0, the gain at the resonance
frequency ω = ω0 is infinite.

Figure 24: Bode diagram of G(s) = 1/(s² + 2s + 10)

Figure 25: Bode diagram of G(s) = 1/(s² + 10)
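At ω = ω0 the gain can in fact be evaluated exactly: G(j·ω0) = ω0²/(j·2·ζ·ω0²) = 1/(j·2·ζ), so |G(j·ω0)| = 1/(2·ζ) holds with equality. A quick numerical check (ω0 and ζ are illustration values):

```python
w0, zeta = 10.0, 0.05  # natural frequency and damping ratio (illustrative)

def gain(w: float) -> float:
    # |G(j*w)| for G(s) = w0^2 / (s^2 + 2*zeta*w0*s + w0^2)
    s = complex(0.0, w)
    return abs(w0 ** 2 / (s * s + 2 * zeta * w0 * s + w0 ** 2))

print(gain(w0))                 # 10.0 = 1/(2*zeta): a large resonance peak
print(round(gain(w0 / 100), 3)) # 1.0: flat unit gain well below w0
```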


17.5 Problems
17.5.1 Problem 1


17.5.2 Problem 2


17.5.3 Problem 3


17.5.4 Problem 4


17.6 Solutions
17.6.1 Problem 1


17.6.2 Problem 2


17.6.3 Problem 3

17.6.4 Problem 4


18 Stability Margins

18.1 Poles of the Closed-Loop Transfer Function


Let us take another look at the poles of the closed-loop feedback system. The
transfer function of the closed-loop system from the reference signal to the
output signal, r(t) → y(t), is defined as follows:

    T(s) = L(s) / (1 + L(s))
For pi to be a pole of the transfer function T(s), pi must fulfill the
condition

    1 + L(pi) = 0    ⟹    L(pi) = −1

pi is a pole of T(s) if and only if T(pi) → ∞, and for that to hold the
denominator of T(s) must vanish at s = pi. This is the case exactly when the
above condition is fulfilled. Consequently, we need not determine T(s) itself,
which can be computationally tedious, in order to find its poles. Instead, we
can simply check where the open-loop transfer function L(s) equals −1. L(s) is
easy to compute, as it is just the product of P(s) and C(s).

Our ultimate goal is a stable closed-loop system. Therefore, we must ensure
that all poles pi have negative real parts; poles in the right half-plane are
not allowed. Purely imaginary poles must also be avoided, as they cause the
system to oscillate without decay at its resonance frequency. In other words,
we need Re(pi) < 0, i.e., L(s) is only allowed to equal −1 at points s with
Re(s) < 0.
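As a concrete (assumed) example, take L(s) = k/(s + 1). The closed loop is T(s) = k/(s + 1 + k), and its pole p = −(1 + k) indeed satisfies L(p) = −1:

```python
k = 5.0
L = lambda s: k / (s + 1.0)  # example open-loop transfer function

p = -(1.0 + k)  # pole of the closed loop T(s) = L/(1+L) = k/(s + 1 + k)
print(L(p))     # -1.0: the closed-loop pole is exactly where L(s) = -1
```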

18.2 Bode Plot & Horror Point -1


We can use the Bode plot of L(s) as a tool to determine whether T(s) is stable.
The Bode plot of L(s) depicts the frequency response L(s = j·ω): it shows how
the magnitude |L(j·ω)| and the phase ∠L(j·ω) vary with ω.

Since we do not want T(s) to have purely imaginary poles, we must ensure that

    L(j·ω) ≠ −1 = e^(−j·π)

i.e., that we never simultaneously have |L(j·ω∗)| = 1 and ∠L(j·ω∗) = −π = −180°.
We can check whether this condition is fulfilled by looking at the Bode plot of
L(s): we simply check whether the phase is above or below −180° at the
frequency where the magnitude is |L(j·ω∗)| = 1. From this condition we derive
two stability margins: the phase margin and the gain margin.

18.2.1 Phase Margin


1. Find the frequency ω1 where the gain |L(j·ω1)| = 1, i.e., 0 dB. At this
   frequency the output and input amplitudes are identical. On the Bode
   magnitude plot, it is where the curve crosses the 0 dB line.

2. Find the phase ϕ = ∠L(j·ω1) (in degrees) at this same frequency ω1, by now
   looking at the phase plot.

3. Calculate the phase margin: ϕ + 180°.

18.2.2 Gain Margin


1. Find the frequency ω2 where the phase crosses −180° on the Bode phase plot.

2. Find the gain |L(j·ω2)| in dB at this same frequency ω2.

3. Calculate the distance to the 0 dB line: G := 0 dB − |L(j·ω2)| in dB
   (positive when the gain curve lies below 0 dB at ω2).

4. G is in dB, so convert it back to the linear scale: the gain margin is
   10^(G/20) = 1/|L(j·ω2)|. It is the factor by which the loop gain may be
   increased before the closed-loop system becomes unstable.
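Both margin procedures can be carried out numerically. The sketch below does so for the assumed example L(s) = 1/(s·(s + 1)·(s + 2)), finding the two crossover frequencies by bisection; for this L the phase crossover sits at ω = √2 and the gain margin is exactly 6:

```python
import math

# |L(j*w)| for the assumed example L(s) = 1 / (s*(s+1)*(s+2))
def mag(w: float) -> float:
    return abs(1.0 / (1j * w * (1j * w + 1.0) * (1j * w + 2.0)))

def phase_deg(w: float) -> float:
    # unwrapped phase: each denominator factor subtracts its own angle
    return -math.degrees(math.pi / 2 + math.atan(w) + math.atan(w / 2.0))

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection root finder (assumes a sign change on [lo, hi])
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

w1 = bisect(lambda w: mag(w) - 1.0, 0.1, 2.0)          # gain crossover
pm = phase_deg(w1) + 180.0                             # phase margin, degrees
w2 = bisect(lambda w: phase_deg(w) + 180.0, 0.5, 3.0)  # phase crossover
gm = 1.0 / mag(w2)                                     # gain margin, linear

print(round(pm, 1))  # 53.4
print(round(gm, 1))  # 6.0
```

Both margins are positive, so closing the loop around this L(s) gives a stable system with some robustness to spare.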

18.2.3 Robustness
Think of both of these (the phase and gain margin) as safety margins for an
open-loop system L(s) that you would like to turn into a closed-loop system
T(s). If we are walking next to a cliff, we want a positive margin of safety
between us and disaster. This intuition helps keep the sign conventions
straight: positive margins indicate that there is still a safety margin before
instability occurs in the closed loop, while negative margins in an open-loop
system indicate instability if you try to close the loop (we fall off the
cliff). Therefore, for a robust and stable closed-loop system we want both
margins to be positive.


19 Nyquist Diagram

A Nyquist plot is a parametric polar plot of the frequency response of a
transfer function G(j·ω), with radius r = |G(j·ω)| and angle ϕ = ∠G(j·ω).
Typically the Nyquist plot is drawn for the open-loop transfer function
L(j·ω) = P(j·ω) · C(j·ω). The Nyquist plot of L(j·ω) allows us to determine the
stability of the closed-loop system T(s) using Nyquist's stability theorem.

(a) Nyquist D-contour    (b) Nyquist plot

Figure 26: The Nyquist contour (a) encloses the right half-plane, with a small
semicircle around any poles of L(s) on the imaginary axis (illustrated here at
the origin) and an arc at infinity, represented by R → ∞. The Nyquist plot (b)
is the image of the loop transfer function L(s) as s traverses the contour in
the clockwise direction. The solid line corresponds to ω > 0 and the dashed
line to ω < 0. The gain and phase at the frequency ω are g = |L(j·ω)| and
ϕ = ∠L(j·ω). The curve is generated for L(s) = 1.4·e^(−s)/(s + 1)².

19.1 Drawing the Nyquist Plot


For drawing the Nyquist plot of G(s), the following limits are useful:

    lim_{ω→0} |G(j·ω)|,   lim_{ω→0} ∠G(j·ω),   lim_{ω→∞} |G(j·ω)|,   lim_{ω→∞} ∠G(j·ω)

19.2 Nyquist’s Stability Theorem

Consider a closed-loop system with open-loop transfer function L(s) that has P
unstable poles (poles in the right half-plane). Let N be the net number of
clockwise encirclements of −1 by L(s) when s traverses the Nyquist contour in
the clockwise direction. The closed-loop system then has

    Z = N + P                                                        (45)

poles in the right half-plane, where

    P := number of poles of L(s) = C(s) · P(s) in the RHP
    Z := number of poles of the closed-loop system T(s) in the RHP
    N := number of clockwise encirclements of the critical point −1 by the
         Nyquist diagram
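The encirclement count N can be computed numerically by accumulating the phase of L(j·ω) + 1 along the imaginary axis (for strictly proper L the arc at infinity contributes nothing). A sketch with the assumed example L(s) = K/(s + 1)³, for which Z can also be checked by hand from the closed-loop characteristic equation (s + 1)³ + K = 0:

```python
import cmath
import math

def clockwise_encirclements_of_minus1(L, w_max=500.0, n=100_000):
    """Net clockwise encirclements of -1 by L(j*w), w from -w_max to +w_max."""
    total = 0.0
    prev = cmath.phase(L(-1j * w_max) + 1.0)
    for k in range(1, n + 1):
        w = -w_max + 2.0 * w_max * k / n
        cur = cmath.phase(L(1j * w) + 1.0)
        d = cur - prev
        # unwrap jumps across the +/- pi branch cut
        if d > math.pi:
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        total += d
        prev = cur
    # counterclockwise phase accumulation is positive, so negate for clockwise
    return round(-total / (2.0 * math.pi))

# Both examples have no open-loop RHP poles (P = 0), so Z = N:
print(clockwise_encirclements_of_minus1(lambda s: 2.0 / (s + 1) ** 3))   # 0
print(clockwise_encirclements_of_minus1(lambda s: 10.0 / (s + 1) ** 3))  # 2
```

For K = 2 the curve crosses the negative real axis inside −1 (at −0.25), so N = 0 and the closed loop is stable; for K = 10 it crosses at −1.25, giving N = 2 and two unstable closed-loop poles.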


19.3 Problems
19.3.1 Problem 1


19.3.2 Problem 2


19.3.3 Problem 3


19.3.4 Problem 4


19.4 Solutions
19.4.1 Problem 1

19.4.2 Problem 2


19.4.3 Problem 3


19.4.4 Problem 4


20 System Specifications

20.1 Steady-State Error


The static steady-state error is defined as follows:

    e∞ = lim_{t→∞} e(t) = lim_{t→∞} (r(t) − y(t)) = lim_{s→0} s · S(s) · R(s)
       = lim_{s→0} s · R(s) / (1 + L(s))                                  (46)

In other words, it is the difference between the reference signal r(t) and the
closed-loop system's output y(t) as t → ∞. Ultimately, we want the steady-state
error to converge to zero, ensuring that the reference signal and the output
signal coincide. E.g., if we set the reference for our oven to 180 °C, we want
y(t → ∞) = 180 °C and thus lim_{t→∞} e(t) = lim_{t→∞} (r(t) − y(t)) = 0.
Typically we consider the static error for a unit-step reference signal
r(t) = h(t); however, the formula above applies to any reference signal r(t). A
static error can be prevented by designing a controller C(s) with one or more
integrators.
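The limit in (46) can be approximated numerically by inserting a very small s. For an assumed type-0 loop L(s) = 4/((s + 1)(s + 2)) we have L(0) = 2, so the unit-step error should be 1/3:

```python
L = lambda s: 4.0 / ((s + 1.0) * (s + 2.0))  # assumed type-0 loop, L(0) = 2
R = lambda s: 1.0 / s                        # unit step, R(s) = 1/s

s = 1e-9  # approximate the limit s -> 0
e_inf = s * R(s) / (1.0 + L(s))
print(round(e_inf, 4))  # 0.3333 = 1/(1 + k_bode)
```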

20.2 System Type


Consider an open-loop transfer function of the form

    L(s) = C(s) · P(s) = K · (s/z1 + 1)(s/z2 + 1) ··· (s/zm + 1)
                           / [ s^j · (s/p1 + 1)(s/p2 + 1) ··· (s/pn + 1) ]

We call the parameter j the system type. Note that a higher system type
corresponds to a larger number of poles at s = 0. Poles at the origin are
called integrators, because they have the effect of integrating the input
signal. The input signal of the open-loop transfer function is the error signal
e(t), so e(t) is integrated j times.

Figure 27: Feedback system and system type (block diagram: the error
e(t) = r(t) − y(t) drives L(s) = P(s) · C(s), which produces y(t))

The table below shows how the static error e∞ of the closed-loop system varies
with the system type of the open-loop transfer function. If L(s) has no
integrators (type zero), the error for a unit-step reference input r(t) = h(t)
is e∞ = 1/(1 + kbode), where kbode = lim_{s→0} L(s); for a unit-ramp reference
r(t) = t · h(t) the steady-state error is infinite. In contrast, a type-one
open-loop transfer function, with one integrator, has no static error for
r(t) = h(t) and a static error of e∞ = 1/kbode for r(t) = t · h(t).


    e∞           m = 0:            m = 1:            m = 2:
                 r(t) = h(t)       r(t) = t·h(t)     r(t) = ½·t²·h(t)
    Type 0       1/(1 + kbode)     ∞                 ∞
    Type 1       0                 1/kbode           ∞
    Type 2       0                 0                 1/kbode

Figure 28: Static error e∞ of the closed-loop system for reference signals
r(t) = (1/m!) · t^m · h(t)

20.3 Limitations of Proportional Control


Note that a purely proportional controller C(s) = k implemented on a type-zero
plant P(s) does not remove the static error e∞ for a unit-step reference signal
r(t) = h(t). The overall open-loop transfer function of the described system
would be

    L(s) = C(s) · P(s) = k · (s/z1 + 1)(s/z2 + 1) ··· (s/zm + 1)
                           / [ s^0 · (s/p1 + 1)(s/p2 + 1) ··· (s/pn + 1) ]

and therefore the static error for r(t) = h(t), i.e. R(s) = 1/s, would be

    e∞ = lim_{s→0} s · R(s) / (1 + L(s)) = 1/(1 + k)

In contrast, an integral controller C(s) = k/s implemented on the same
type-zero plant P(s) removes the static error e∞ for a unit-step reference
signal r(t) = h(t). The overall open-loop transfer function of the described
system would be

    L(s) = C(s) · P(s) = (k/s) · (s/z1 + 1)(s/z2 + 1) ··· (s/zm + 1)
                               / [ s^0 · (s/p1 + 1)(s/p2 + 1) ··· (s/pn + 1) ]

and therefore the static error for r(t) = h(t), i.e. R(s) = 1/s, would be

    e∞ = lim_{s→0} s · R(s) / (1 + L(s)) = 1/(1 + ∞) = 0
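The contrast between the two controllers can be verified numerically. The plant below is an assumed type-0 example with P(0) = 1/2:

```python
P = lambda s: 1.0 / ((s + 1.0) * (s + 2.0))  # assumed type-0 plant, P(0) = 1/2

def step_error(C, s=1e-9):
    # e_inf = lim_{s->0} s * (1/s) / (1 + C(s)P(s)) for a unit-step reference
    return 1.0 / (1.0 + C(s) * P(s))

k = 10.0
print(round(step_error(lambda s: k), 4))      # 0.1667 = 1/(1 + k*P(0))
print(round(step_error(lambda s: k / s), 6))  # 0.0: the integrator removes it
```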

20.4 Time Domain Specifications


The following specifications are defined in the time domain:

1. T90: the time it takes for the closed-loop step response to reach 90% of its
   final value.

2. Mp: the initial overshoot past the desired reference value.

We can approximate these time-domain specifications using the phase margin and
the crossover frequency ωc. The Bode plot of L(s) therefore also provides
information about the time response of the system. The overshoot Mp can be
approximated from the phase margin ϕ as follows:

    Mp = (71° − ϕ) / 117°                                             (47)

Similarly, the rise time T90 can be approximated from the crossover frequency ωc:

    T90 = 1.7 / ωc                                                    (48)
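These rules of thumb are simple arithmetic; the margin and crossover values below are assumed illustration values:

```python
phase_margin = 53.4  # degrees (assumed example value)
omega_c = 0.446      # gain-crossover frequency in rad/s (assumed example value)

overshoot = (71.0 - phase_margin) / 117.0  # fractional overshoot, eq. (47)
t90 = 1.7 / omega_c                        # 90% rise time in seconds, eq. (48)

print(round(overshoot, 3))  # 0.15, i.e. about 15% overshoot
print(round(t90, 2))        # 3.81 seconds to reach 90% of the final value
```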

Figure 29: Time Domain Specifications

20.5 Frequency Domain Specifications


Be reminded that the closed-loop system dynamics can be written as follows:

    Y(s) = S(s) · [D(s) + P(s) · W(s)] + T(s) · [R(s) − N(s)]

We note that disturbances are introduced into our system through S(s), while
noise is introduced through T(s). Ultimately, it would be extremely desirable to
attenuate both the noise and the disturbances completely. Therefore, we would
like

    T(s) = 0        S(s) = 0

This is impossible, as

    T(s) = L(s) / (1 + L(s))        S(s) = 1 / (1 + L(s))

    ⟹  T(s) = 1 − S(s) = (1 + L(s))/(1 + L(s)) − 1/(1 + L(s))
    ⟹  S(s) = 1 − T(s) = (1 + L(s))/(1 + L(s)) − L(s)/(1 + L(s))

    T(s) = 0  ⟹  S(s) = 1 − T(s) = 1 ≠ 0

We note that

    T(s) + S(s) = 1                                                   (49)
Therefore, it is impossible to attenuate both noise and disturbances completely
at all frequencies. Disturbances (transfer function S(s)) typically appear at
low frequencies (< 10 Hz) and noise (transfer function T(s)) at high frequencies
(> 100 Hz). So at low frequencies we must ensure S(s) ≈ 0, and at high
frequencies T(s) ≈ 0. This leads to the characteristic shape of |L(j·ω)| shown
below, where ωc denotes the crossover frequency: the frequency where
|L(j·ωc)| = 1, i.e., where the magnitude Bode plot crosses the 0 dB line.
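The identity S(s) + T(s) = 1 holds pointwise for any loop transfer function, which is easy to confirm numerically (the L below is an arbitrary example):

```python
L = lambda s: 8.0 / (s * (s + 4.0))  # arbitrary example loop transfer function

s = complex(0.3, 2.0)  # any point in the s-plane (away from poles of L)
T = L(s) / (1.0 + L(s))
S = 1.0 / (1.0 + L(s))
print(abs(S + T - 1.0) < 1e-12)  # True
```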

Figure 30: Frequency Domain Specifications

21 Dominant Pole Approximation

A system of high order (order > 2) can be approximated by a lower-order system
by keeping the dominant (slow) poles and ignoring extremely fast poles. The
dominant poles are usually the poles closest to the imaginary axis and
therefore the poles with the largest real part (i.e., the slowest decay rate).
An exception is made if we have a stable pole and a minimum-phase zero that are
almost equal, zi ≈ pi. In this case we have a near pole/zero cancellation, and
in the approximate transfer function both the pole and the zero can be ignored,
no matter how close they are to the imaginary axis or the origin. For example,
consider the following fourth-order system:

    G(s) = 130 · (s + 0.6) / [ 3 · (s + 0.5)(s + 1 + 5j)(s + 1 − 5j)(s + 2) ]
We want to approximate this transfer function with a second-order system, so we
cancel the near pole/zero pair and drop the fast pole, i.e., we remove the
factors (s + 0.6), (s + 0.5), and (s + 2) from G(s).

We first note that we have a near pole/zero cancellation, since the pole
p1 = −0.5 is approximately equal to the minimum-phase zero z1 = −0.6. The
fastest pole is


p4 = −2; therefore, this pole can also be neglected. We would be tempted to
stop here and conclude that

    Gapprox(s) = 130 / [ 3 · (s + 1 + 5j)(s + 1 − 5j) ]

This is incorrect, as

    Gapprox(0) = 130 / (3 · (1 + 5j)(1 − 5j)) = 5/3 ≠ G(0) = 1

When defining the approximate transfer function of lower order, we must ensure
that the static gain is preserved, which means

    Gapprox(0) = G(0)

Therefore, we add an additional gain parameter k to our approximate transfer
function:

    Gapprox(s) = k · 130 / [ 3 · (s + 1 + 5j)(s + 1 − 5j) ]

We choose k so that the condition Gapprox(0) = G(0) is fulfilled:

    Gapprox(0) = k · 130 / (3 · (1 + 5j)(1 − 5j)) = G(0) = 1    ⟹    k = 3/5
Note that these approximations are not valid if there are unstable poles or
non-minimum-phase zeros in the transfer function. These types of poles and
zeros cannot be cancelled away and need to be kept as factors of G(s).
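The bookkeeping in this example can be reproduced numerically; the sketch below recomputes the gain-matching factor k:

```python
def G(s):
    # original system from the example above
    return (130.0 * (s + 0.6)
            / (3.0 * (s + 0.5) * (s + 1 + 5j) * (s + 1 - 5j) * (s + 2.0)))

def G_approx(s, k=1.0):
    # second-order approximation keeping only the dominant complex pole pair
    return k * 130.0 / (3.0 * (s + 1 + 5j) * (s + 1 - 5j))

k = (G(0) / G_approx(0)).real  # match the static gain: G_approx(0) = G(0)
print(abs(G(0) - 1.0))                     # 0.0: the original static gain is 1
print(round(k, 6))                         # 0.6 = 3/5, as derived above
print(abs(G_approx(0, k) - G(0)) < 1e-9)   # True: static gains now agree
```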
