NE Lecture3 DTMCs
Boris Bellalta
[email protected]
WN Wireless Networking
UPF https://www.upf.edu/web/wnrg
Markov process
●
The next value of the stochastic process depends only on the
current value, not on the past history (the Markov property).
Markov chains
Example:
[Figure: state-transition diagram of a Markov chain with states {0, 1, 5, 10} and the transition probabilities on its arcs]
●
The equilibrium distribution exists if:
– Transition probabilities / rates are time independent (the Markov chain is time-homogeneous)
– All states can be reached from any other state (the Markov chain is irreducible)
– After departing from a state, we will return to it in finite expected time (the Markov chain is positive
recurrent)
– Returns to a state are not restricted to multiples of a fixed period (the Markov chain is aperiodic)
[Figure: state-transition diagram of the example Markov chain with states {0, 1, 5, 10}]
●
Markov chains can be very useful to model real systems
●
It is easy to define the state space and the transition/rate matrices based on the design and
functional requirements of the real system
●
So, the goal is to be able to obtain the equilibrium distribution
●
It can be done from a data set
●
Data set = [a,b,a,b,a,a,b,a,b,b,b,a,a,b,a,a,a,b]; N=18 samples, N-1 transitions
– Values that appear: State space: {a,b}
– Probability of the value at time t+1 given the value at time t, i.e., given we are at state ‘i’
●
From a→b: 6 times → pa,b = 6/10
●
From a→a: 4 times → pa,a = 4/10
●
From b→a: 5 times → pb,a = 5/7
●
From b→b: 2 times → pb,b = 2/7
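As a sketch, the counting above can be reproduced in Python (the data set and state labels are taken from the slide; the variable names are illustrative):

```python
# Estimate the DTMC transition probabilities from the observed sequence.
data = ['a','b','a','b','a','a','b','a','b','b','b','a','a','b','a','a','a','b']

# Count each transition i -> j over the N-1 consecutive pairs.
counts = {}
for i, j in zip(data, data[1:]):
    counts[(i, j)] = counts.get((i, j), 0) + 1

# Normalize each row by the number of departures from state i.
states = sorted(set(data))
P = {i: {j: counts.get((i, j), 0) /
            sum(counts.get((i, k), 0) for k in states)
         for j in states} for i in states}

print(P['a']['b'], P['a']['a'])  # → 0.6 0.4
```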
Solving a Markov chain
●
The equilibrium distribution is:
– π0, π1, π5, π10
[Figure: state-transition diagram of the example Markov chain with states {0, 1, 5, 10}]
●
Weather model (sunny, cloudy, raining, etc.)
●
Number of packets waiting for transmission in a buffer
●
Number of bikes in a bike station
●
People waiting to pay in a supermarket
●
Number of active users in a Wi-Fi network
Discrete and Continuous Markov chains
Equilibrium distribution
●
The stationary probability distribution is also called the equilibrium
distribution.
●
It represents the probability of finding the Markov process in state
‘i’ when we observe it at an arbitrary instant of time.
●
We use πi = P(X=i) to denote the probability that the Markov
chain is in state ‘i’.
Discrete Time Markov chains (DTMC)
Global Balance Equations
Example
●
The traffic load (Mbps) generated by a user changes every second. We
are interested in knowing the probability that the user is generating a
given load at an arbitrary instant of time, and then calculating the
expected traffic load and its variance.
●
To answer this question, we must calculate the equilibrium distribution
of a DTMC (the system changes at specific time instants, every second)
with the following state space and transition probabilities:
– X = {0, 1, 5, 10} Mbps
– P = [0 0.6 0.2 0.2; 0.4 0 0.4 0.2; 0 0.5 0 0.5; 0 0 0.8 0.2]
Example
●
We can describe the stochastic process using a Markov chain:
[Figure: state-transition diagram of the DTMC with states {0, 1, 5, 10}; the state changes every 1 second]
Example: We write the balance equations
●
Global balance equations (the probability flow into each state equals its stationary probability):
– π0 = 0.4·π1
– π1 = 0.6·π0 + 0.5·π5
– π5 = 0.2·π0 + 0.4·π1 + 0.8·π10
– π10 = 0.2·π0 + 0.2·π1 + 0.5·π5 + 0.2·π10
●
Together with the normalization condition π0 + π1 + π5 + π10 = 1, we have a system of equations
to solve
Example: We solve the DTMC (i.e., find the stat. dist)
●
Alternative way (MATLAB): compute the left eigenvector of P associated with eigenvalue 1 and normalize it:
– [V,D] = eigs(P.'); st = V(:,1).'; stat_dist = st./sum(st);
●
Stationary distribution
– π0=0.0948 π1=0.2370 π5=0.3602 π10=0.3081
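A library-free alternative to the MATLAB one-liner is power iteration: repeatedly apply π ← πP until the distribution stops changing. A minimal Python sketch for this example's P:

```python
# Stationary distribution of the example DTMC by power iteration.
P = [
    [0.0, 0.6, 0.2, 0.2],  # from state 0 Mbps
    [0.4, 0.0, 0.4, 0.2],  # from state 1 Mbps
    [0.0, 0.5, 0.0, 0.5],  # from state 5 Mbps
    [0.0, 0.0, 0.8, 0.2],  # from state 10 Mbps
]

pi = [0.25] * 4  # any initial distribution works (chain is irreducible, aperiodic)
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]

print([round(x, 4) for x in pi])  # → [0.0948, 0.237, 0.3602, 0.3081]
```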
Example
●
MATLAB sketch: with the state space X = [0 1 5 10] and the stationary distribution pi_dist (e.g., as returned by a helper function SolvingDTMC(P)), the moments are:
– EX = X*pi_dist'; % expected traffic load E[X]
– EX2 = (X.^2)*pi_dist'; % second moment E[X^2]
– VX = EX2 - EX^2; % variance Var(X)
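The same moment computations, restated as a small Python sketch using the stationary distribution of the example (values rounded to four decimals):

```python
# E[X], E[X^2] and Var(X) from the stationary distribution.
X = [0, 1, 5, 10]                      # traffic loads (Mbps)
pi = [0.0948, 0.2370, 0.3602, 0.3081]  # stationary distribution

EX = sum(x * p for x, p in zip(X, pi))       # expected traffic load
EX2 = sum(x**2 * p for x, p in zip(X, pi))   # second moment
VX = EX2 - EX**2                             # variance

print(round(EX, 2), round(VX, 2))  # → 5.12 13.85
```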
●
In the 1st lecture we had a question: how to select the capacity of a link.
●
For instance, let’s assume that the criterion is the following:
– C = 4 · The expected traffic load
●
Given information of the process representing the load of a link, such as the
state space, and P, which can be obtained by just observing the dynamics of
the process, we can easily answer that question.
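Plugging in the expected load from the example's stationary distribution (E[X] ≈ 5.12 Mbps), the dimensioning rule gives:

```python
# Capacity rule from the slide: C = 4 * expected traffic load.
EX = 5.119          # E[X] in Mbps, from the example's stationary distribution
C = 4 * EX
print(round(C, 2))  # → 20.48
```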
Reversible Markov chains
●
We say a Markov chain is reversible if the following condition (local balance) is satisfied for all pairs of
states:
– πi · pi,j = πj · pj,i, for all i, j
●
In practice, this means transitions between any two connected states are bidirectional.
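A quick way to test the condition numerically (a sketch; the two-state chain below is an illustrative example, not taken from the slides):

```python
# A chain is reversible iff pi_i * p(i,j) == pi_j * p(j,i) for every pair
# of states (the local balance condition).
def is_reversible(P, pi, tol=1e-9):
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

# Two-state example: local balance always holds with two states.
P2 = [[0.7, 0.3],
      [0.6, 0.4]]
pi2 = [2/3, 1/3]  # stationary distribution, from pi = pi*P and normalization
print(is_reversible(P2, pi2))  # → True
```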
●
Example: Two users. Each user can be active or idle. When one user is active, the
other is idle.
[Figure: state diagram with states labelled ‘User A’, ‘idle’, and ‘User B’]
Local Balance Equations
Example
[Figure: state-transition diagram for the two-user example, with states ‘User A’, ‘idle’, and ‘User B’, and transition probabilities 0.8, 0.5, 0.4, 0.2 on its arcs]
●
Transitions between states are only between consecutive states (forward (births) or
backward (deaths)).
– Births increase by 1 the current value of the state space.
– Deaths decrease by 1 the current value of the state space.
●
They are reversible Markov processes → the local balance equations apply
●
We will use them to capture the buffer dynamics in network interfaces:
– The state space of the random variable is the number of packets in the buffer.
– Forward transitions: a packet arrival
– Backward transitions: a packet departure
Example of a birth and death Markovian process
●
Consider a network interface where ‘events’ happen only at predefined time instants.
●
The network interface has a single transmitter, and the maximum buffer size is Q=2 packets.
●
At each predefined time instant, only one event may occur:
– It is an arrival with probability p
– It is a departure with probability q
– Nothing happens otherwise: with probability 1-p-q in the intermediate states, 1-p in state 0 (no departure is possible), and 1-q in state K (the system is full, so arrivals are blocked)
[Figure: birth-death chain with states {0, 1, 2, 3}: forward (arrival) transitions with probability p, backward (departure) transitions with probability q; buffer size Q=2 packets, total system capacity K=3 packets]
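By local balance, πi·p = πi+1·q for this chain, so πi = π0·(p/q)^i. A Python sketch with illustrative values p = 0.3 and q = 0.5 (these numbers are assumptions, not from the slide):

```python
# Solve the K=3 birth-death chain via local balance:
# pi[i] * p = pi[i+1] * q  =>  pi[i] = pi[0] * (p/q)**i
p, q, K = 0.3, 0.5, 3   # illustrative arrival/departure probabilities
rho = p / q

unnorm = [rho**i for i in range(K + 1)]  # pi[i] up to the constant pi[0]
total = sum(unnorm)
pi = [u / total for u in unnorm]         # normalize so probabilities sum to 1

print([round(x, 4) for x in pi])  # → [0.4596, 0.2757, 0.1654, 0.0993]
```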