
Telecommunications B (EE2T21)

Lecture 6 overview:

Bit error probability for digital modulation techniques


* Matched filter
* Bit Error Probability
- AWGN: (non-) matched filter
- colored noise

Bit Error Probability in baseband systems


* unipolar signals
* polar signals
* bipolar signals

EE2T21 Telecommunications B
Dr.ir. Gerard J.M. Janssen
March 2, 2022

Faculty of Electrical Engineering, Mathematics, and Computer Science


Lectures & Work-instructions Telecommunications B
Telecommunication Techniques

Lectures:
Monday 9-5 1st+2nd hour
Wednesday 11-5 3rd+4th hour

Q&A/Working lectures:
Thursday 12-5 5th+6th hour

2
Signal detection quality (1)
The quality of the recovered signal at the receiver is determined by:
- received signal power
- noise plus interference power at the receiver input

[Figure: receiver block diagram]

Carrier-to-Noise ratio at the detector input:

$$\frac{C}{N} = \frac{\text{received signal power}}{\text{noise power}}$$

This determines:
- the SNR after detection for analogue signals
- the bit error probability (BER) for digital signals

3
Signal detection quality (2)

Digital modulation: relation between C/N and the bit error rate (BER): $\mathrm{BER} = f(E_b/N_0)$

$$\frac{C}{N} = \frac{E_b/T_b}{N_0 B_N} = \frac{E_b R_b}{N_0 B_N}, \qquad R_b = \text{bit rate}$$

$E_b/N_0$ should be maximized:

$$\frac{E_b}{N_0} = \frac{P_{EIRP}\, G_{FS}\, G_{AR}}{k\, T_{syst}\, R_b}, \qquad E_b = \text{average bit energy}, \quad N_0 = \text{noise PSD}$$

A larger data rate (with the same detection quality) can be obtained by:
- increasing the EIRP (TX power or TX antenna gain)
- increasing the RX antenna gain
- lowering the receiver system noise
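As a quick numerical illustration of the relation above (not part of the original slides), the sketch below evaluates $E_b/N_0$ and $C/N$ for an assumed link budget; every parameter value here (EIRP, path loss, antenna gain, system noise temperature, bit rate, noise bandwidth) is a hypothetical placeholder.

```python
import math

# Hypothetical link-budget values, converted from dB to linear units
P_EIRP = 10.0**(50.0 / 10.0)     # EIRP of 50 dBW, in watts
G_FS   = 10.0**(-200.0 / 10.0)   # free-space "gain" (a path loss of 200 dB)
G_AR   = 10.0**(35.0 / 10.0)     # receive antenna gain of 35 dB
T_syst = 150.0                   # receiver system noise temperature [K]
R_b    = 2e6                     # bit rate [bit/s]
B_N    = 1.5e6                   # noise bandwidth of the detection filter [Hz]
k      = 1.38e-23                # Boltzmann's constant [J/K]

# Eb/N0 = (P_EIRP * G_FS * G_AR) / (k * T_syst * R_b)
EbN0 = P_EIRP * G_FS * G_AR / (k * T_syst * R_b)

# C/N = Eb*Rb / (N0*B_N) = (Eb/N0) * (Rb/B_N)
CN = EbN0 * R_b / B_N

print(f"Eb/N0 = {10 * math.log10(EbN0):.1f} dB")
print(f"C/N   = {10 * math.log10(CN):.1f} dB")
```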
4
Matched filter
How can we maximize the SNR for the detection of pulse-like signals at the input
of a decision circuit? This will result in the best estimate or reconstruction of the
transmitted message! We can calculate S/N using the method of Chapter 8, which
assumes an "ideal" rectangular filter, but is this the best filter for pulse-like
signals?

S so2 (t )
Problem: optimize h(t), H(f) so that    2 is maximized at t  t0 .
 N out no (t )

The aim is to obtain a sample of a time limited signal (data symbol, radar pulse)
which is above the noise as much as possible at t  t0 .
Note, this filter may distort the signal pulse shape!
5
Derivation matched filter (1)
Signal at $t = t_0$:

$$s_o(t_0) = h(t) * s(t)\big|_{t=t_0} = \int_{-\infty}^{\infty} h(\tau)\, s(t_0 - \tau)\, d\tau = \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{\,j2\pi f t_0}\, df \quad \text{(Parseval)}$$

Noise power at $t = t_0$:

$$\overline{n_o^2(t_0)} = \overline{n_o^2(t)} = \int_{-\infty}^{\infty} |H(f)|^2\, P_n(f)\, df$$

Now we find the SNR as:

$$\left(\frac{S}{N}\right)_{\!out}(t_0) = \frac{\left|\int_{-\infty}^{\infty} H(f)\, S(f)\, e^{\,j2\pi f t_0}\, df\right|^2}{\int_{-\infty}^{\infty} |H(f)|^2\, P_n(f)\, df}$$

Maximize the SNR by choosing the optimum $H(f)$ for the given $S(f)$ and $P_n(f)$: the Matched Filter (MF).
6
Derivation matched filter (2)
Let us write:

$$\left(\frac{S}{N}\right)_{\!out}(t_0) = \frac{\left|\int_{-\infty}^{\infty} H(f)\, S(f)\, e^{\,j2\pi f t_0}\, df\right|^2}{\int_{-\infty}^{\infty} |H(f)|^2\, P_n(f)\, df} = \frac{\left|\int_{-\infty}^{\infty} A(f)\, B(f)\, df\right|^2}{\int_{-\infty}^{\infty} |A(f)|^2\, df}$$

when we choose: $A(f) = H(f)\sqrt{P_n(f)}$ and $B(f) = \dfrac{S(f)\, e^{\,j2\pi f t_0}}{\sqrt{P_n(f)}}$.

Using Schwarz's inequality:

$$\left|\int_{-\infty}^{\infty} A(f)\, B(f)\, df\right|^2 \le \int_{-\infty}^{\infty} |A(f)|^2\, df \cdot \int_{-\infty}^{\infty} |B(f)|^2\, df$$

where equality holds when $A(f) = K\, B^*(f)$.
7
Derivation matched filter (3)
 
 S 2 | S ( f ) |2
Then we find:   (t0 )  | B ( f ) | df  df
 N out 



Pn ( f )
*
and the SNR is maximized when A( f )  K  B ( f ) , so when:

S * ( f )e 2 jft0
H( f )  K 
Pn ( f ) This is the Matched filter.
Discussion: What do we learn from H(f)?
H(f) is small for frequencies with a lot of noise and little signal power and
H(f) is large where there is a lot of signal power: as could be expected!

The given proof is pure mathematical.


The question is whether such a filter can be implemented in practice?
8
Matched filter and white noise
In the case of white noise: $P_n(f) = \dfrac{N_0}{2}$ and $H(f) = \dfrac{2K}{N_0}\, S^*(f)\, e^{-j2\pi f t_0}$.

Taking the inverse Fourier transform:

$$h(t) = \mathcal{F}^{-1}\{H(f)\} = \frac{2K}{N_0}\int_{-\infty}^{\infty} S^*(f)\, e^{-j2\pi f t_0}\, e^{\,j2\pi f t}\, df
= \frac{2K}{N_0}\left[\int_{-\infty}^{\infty} S(f)\, e^{\,j2\pi f (t_0 - t)}\, df\right]^* = \frac{2K}{N_0}\, s^*(t_0 - t)$$

So for a real signal $s(t)$, the impulse response of the matched filter is given by:

$$h(t) = K'\, s(t_0 - t)$$

9
SNR after matched-filtering

The SNR achieved after matched filtering is:

$$\left(\frac{S}{N}\right)_{\!out} = \int_{-\infty}^{\infty} \frac{|S(f)|^2}{N_0/2}\, df = \frac{2}{N_0}\int_{-\infty}^{\infty} s^2(t)\, dt = \frac{2 E_s}{N_0} \quad \text{(Parseval)}$$

It is related to the energy in the pulse:
- independent of the signal shape
- it only depends on the energy in the signal pulse!

Relation with $\mathrm{SNR}_{in}$: we measure the signal power over a period $T$ and determine the noise power in the signal bandwidth $W$:

$$\left(\frac{S}{N}\right)_{\!out} = \frac{2 E_s}{N_0} = \frac{2 E_s / T}{N_0 W}\, T W = 2\left(\frac{S}{N}\right)_{\!in} T W$$

with $T \cdot W$ the time-bandwidth product of the input signal.
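A small simulation sketch (my own illustration, with an arbitrary pulse shape, sample rate and noise level) that samples the output of the filter $h(t) = s(t_0 - t)$ in white Gaussian noise and compares the measured SNR with $2E_s/N_0$.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1e4                          # sample rate [Hz]
T = 0.1                           # pulse duration [s]
t = np.arange(0, T, 1 / fs)
s = np.sin(np.pi * t / T)**2      # arbitrary pulse shape s(t)

N0 = 1e-3                         # one-sided noise PSD [W/Hz]
Es = np.sum(s**2) / fs            # pulse energy Es = integral of s^2(t) dt
sigma_n = np.sqrt(N0 / 2 * fs)    # discrete white noise representing PSD N0/2

# Because h(t) = s(t0 - t), the matched-filter output sample at t0 is the
# correlation of the received signal with s(t): r_o(t0) = integral r(tau) s(tau) dtau
def mf_sample(r):
    return np.sum(r * s) / fs

s_out = mf_sample(s)              # noise-free output sample (= Es)
n_out = np.array([mf_sample(rng.normal(0, sigma_n, t.size)) for _ in range(20000)])

print("simulated SNR:", s_out**2 / np.var(n_out))
print("2*Es/N0      :", 2 * Es / N0)
```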
10
Example: Integrate & Dump filter (1)
For a rectangular pulse: $h(t) = s(t_0 - t)$

- The output pulse is triangular with a width of $2T$, so its shape differs from the input signal pulse, but it has maximum SNR at $t = t_0$.
- For causality: $t_0 \ge t_2$, otherwise there would be a response before $t = t_1$, when the signal does not yet exist! How can we approximate non-causal filters?

$$r_o(t_0) = r(t) * h(t)\big|_{t_0} = \int_{-\infty}^{\infty} r(\tau)\, h(t_0 - \tau)\, d\tau = \frac{1}{T}\int_{t_0 - T}^{t_0} r(\tau)\, d\tau$$

the Integrate & Dump (I&D) filter.
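A minimal discrete-time sketch of this receiver (my own illustration; the 1/T normalisation and all numbers are arbitrary choices): for a rectangular pulse the matched filter is itself a rectangle, i.e. a running integral over the last T seconds, and its response to the pulse is indeed triangular with width 2T and its peak at t = t0 = T.

```python
import numpy as np

fs, T, A = 1000, 0.1, 1.0        # sample rate [Hz], symbol duration [s], pulse amplitude
Ns = int(T * fs)                 # samples per symbol
s = A * np.ones(Ns)              # rectangular pulse s(t), 0 <= t < T

# Matched filter for a rectangular pulse: h(t) = s(t0 - t) is again a rectangle.
# With a 1/T normalisation it is a running average over the last T seconds:
# the Integrate & Dump receiver.
h = np.ones(Ns) / Ns
y = np.convolve(s, h)            # output: triangular, width 2T

print("peak value        :", y.max())                        # ~ A
print("peak sample index :", int(np.argmax(y)), "of", y.size - 1, "(i.e. at t ~ T)")
```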


11
Example: Integrate & Dump filter (2)

12
Matched filtering = correlation processing (1)

As we have seen for the I&D filter:

$$r_o(t_0) = r(t) * h(t)\big|_{t_0} = \int_{-\infty}^{\infty} r(\tau)\, h(t_0 - \tau)\, d\tau$$

Since:

$$h(t) = \begin{cases} s(t_0 - t) & 0 \le t \le T \\ 0 & \text{otherwise} \end{cases}$$

we get:

$$r_o(t_0) = \int_{t_0 - T}^{t_0} r(\tau)\, s(\tau)\, d\tau \quad\Rightarrow\quad \text{the correlation processor is matched!}$$

13
Matched filtering = correlation processing (2)
For PRK (BPSK):

$$s(t) = \begin{cases} A\cos\omega_c t & \text{"1"} \\ -A\cos\omega_c t = A\cos(\omega_c t + \pi) & \text{"0"} \end{cases}$$

We can use $s'(t) = A\cos\omega_c t$ as the template, and choose:

$$h(t) = A\cos\omega_c (t_0 - t), \qquad t_0 - T \le t \le t_0$$

What does $h(t) = s^*(t_0 - t)$ mean for modulated signals?
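The sketch below (my own illustration with arbitrarily chosen carrier frequency, bit time and noise level) implements this correlation receiver for BPSK: each received bit interval is multiplied by the template A cos(wc t), integrated, and the sign of the result decides the bit.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, T, fc, A = 100_000, 1e-3, 10_000, 1.0       # sample rate, bit time, carrier, amplitude
t = np.arange(0, T, 1 / fs)
template = A * np.cos(2 * np.pi * fc * t)       # s'(t) = A cos(wc t)

bits = rng.integers(0, 2, 1000)
tx = np.concatenate([(1 if b else -1) * template for b in bits])   # BPSK: "1" -> +, "0" -> -
rx = tx + rng.normal(0, 3.0, tx.size)                              # add white Gaussian noise

# Correlation processing per bit: integrate r(t) * s'(t) over each bit interval
n = t.size
stats = np.array([np.sum(rx[i*n:(i+1)*n] * template) / fs for i in range(bits.size)])
decisions = (stats > 0).astype(int)             # threshold V_T = 0 for antipodal signals

print("bit errors:", int(np.count_nonzero(decisions != bits)), "out of", bits.size)
```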

14
Bit error probability for digital modulation systems

Two important design criteria for digital communication systems:


- required bandwidth (bandwidth efficiency)
- performance of the system with noise (power efficiency)

In previous lectures we discussed a number of digital modulation


schemes (OOK, BPSK, FSK, QPSK, M-PSK, QAM, etc.)

In the following, we will focus on the detection of these signals


and the performance in terms of bit error probability.

First we will discuss a general framework for decision making on baseband signals:
what is the probability of an erroneous decision?
15
General binary communication system model (1)
$$s(t) = \begin{cases} s_1(t) & 0 \le t \le T, \quad \text{"1", mark} \\ s_2(t) & 0 \le t \le T, \quad \text{"0", space} \end{cases}$$

When $s_1(t) = -s_2(t)$, the signals are antipodal.

For bandpass signals the processing circuit contains a superheterodyne receiver (p. 279, fig. 4.29) with a mixer that produces a baseband signal. All signals are analog until after the decision circuit.
16
General binary communication system model (2)
A binary signal plus noise at the receiver input results in an analog
baseband signal at the input of the sample circuit. The value at sample
time t0 is called the test statistic:

$$r_0 = r_o(t_0) = \begin{cases} r_{01} = \{r_0\,|\,s_1\} = s_{01} + n_{01} & \text{"1"} \\ r_{02} = \{r_0\,|\,s_2\} = s_{02} + n_{02} & \text{"0"} \end{cases}$$

with corresponding conditional probability density functions $f(r_0\,|\,s_i)$ for $r_0$, $i \in \{1, 2\}$. Often these PDFs are Gaussian.

17
General binary communication system model (3)

Assume $s_{01} > V_T$ for $s_1$ and $s_{02} < V_T$ for $s_2$. Then we find the following conditional error probabilities:

$$P(\text{error} \,|\, s_1) = \int_{-\infty}^{V_T} f(r_0 \,|\, s_1)\, dr_0, \qquad P(\text{error} \,|\, s_2) = \int_{V_T}^{\infty} f(r_0 \,|\, s_2)\, dr_0$$

and the total unconditional error probability:

$$P(\text{error}) = P(\text{error} \,|\, s_1)\, P(s_1) + P(\text{error} \,|\, s_2)\, P(s_2)$$

The bit error probability is often indicated as the bit error rate (BER).
18
General binary communication system model (4)
Efficient use of the symbols requires good coding of the source information signal (see eqn. 1-8, p. 17). A measure for the average source information per symbol is the entropy:

$$H = \sum_{j=1}^{2} P_j I_j = -\sum_{j=1}^{2} P_j \log_2 P_j \quad \text{[bits]}$$

The best source statistic (maximum $H$) has equiprobable symbols: $P_1 = P_2 = 0.5$. With a different source statistic, fewer bits per symbol are transmitted (on average). Maximum surprise for the detector means that most uncertainty is removed by a successful detection.

[Figure: entropy H of a binary source versus Prob("1"), with a maximum of 1 bit at Prob("1") = 0.5]
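A small numerical check of the entropy formula (my own example): it evaluates H for a few values of Prob("1") and shows the maximum of 1 bit/symbol at P1 = P2 = 0.5.

```python
import math

def binary_entropy(p1: float) -> float:
    """H = -sum_j P_j log2(P_j) for a binary source with P("1") = p1."""
    if p1 in (0.0, 1.0):
        return 0.0               # a certain symbol carries no information
    p2 = 1.0 - p1
    return -(p1 * math.log2(p1) + p2 * math.log2(p2))

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f'P("1") = {p:.1f}  ->  H = {binary_entropy(p):.3f} bit/symbol')
```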

19
Additive White Gaussian Noise

In the following we assume the noise to be AWGN.

Additive White Gaussian Noise:


- is additive to the signal, i.e. independent of that signal
- has a flat (white) power spectral density $P_n(f) = N_0/2$ (double-sided)
- its amplitude has a Gaussian (Normal) probability density function with a variance $\sigma_0^2 = \overline{n_0^2}$ equal to the noise power in the observed filter bandwidth.

When the noise process at the receiver input is a zero mean WSS
Gaussian process, and the receiver is linear, then the noise at the
output will also be Gaussian. Only the Gaussian statistic has this
characteristic.

20
Bit error probability (1)

Test statistic: $r_0 = s_0 + n_0$ with

$$s_0 = \begin{cases} s_{01} = \overline{r_0} & (s = s_1) \\ s_{02} = \overline{r_0} & (s = s_2) \end{cases}$$

$\Rightarrow$ conditional pdf:

$$f(r_0 \,|\, s_i) = \frac{1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(r_0 - s_{0i})^2}{2\sigma_0^2}}$$

where $\sigma_0^2 = \overline{n_0^2}$ is the noise power within the detection filter bandwidth. Now we find:

$$P_e = P_1 \int_{-\infty}^{V_T} \frac{1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(r_0 - s_{01})^2}{2\sigma_0^2}}\, dr_0 \;+\; P_2 \int_{V_T}^{\infty} \frac{1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(r_0 - s_{02})^2}{2\sigma_0^2}}\, dr_0$$
21
Bit error probability (2)
Using $\lambda = \dfrac{r_0 - s_{0i}}{\sigma_0}$ and $P_1 = P_2 = \dfrac{1}{2}$ we find:

$$P_e = \frac{1}{2}\int_{-\infty}^{\frac{V_T - s_{01}}{\sigma_0}} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{\lambda^2}{2}}\, d\lambda \;+\; \frac{1}{2}\int_{\frac{V_T - s_{02}}{\sigma_0}}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{\lambda^2}{2}}\, d\lambda
= \frac{1}{2}\, Q\!\left(\frac{s_{01} - V_T}{\sigma_0}\right) + \frac{1}{2}\, Q\!\left(\frac{V_T - s_{02}}{\sigma_0}\right)$$

with the Q-function

$$Q(k) = \frac{1}{\sqrt{2\pi}} \int_{k}^{\infty} \exp\!\left(-\frac{\lambda^2}{2}\right) d\lambda$$

and $V_T$ the decision threshold.
22
Q-function (1)
$$Q(k) = \frac{1}{\sqrt{2\pi}} \int_{k}^{\infty} \exp\!\left(-\frac{\lambda^2}{2}\right) d\lambda, \qquad Q(-k) = 1 - Q(k)$$

[Figure: standard normal (Gaussian) pdf; Q(k) is the area of the tail to the right of k]

23
Q-function (2)

For larger values of $k$ the Q-function can be approximated by:

$$Q(k) = \frac{1}{\sqrt{2\pi}} \int_{k}^{\infty} e^{-\frac{\lambda^2}{2}}\, d\lambda \;\approx\; \frac{1}{k\sqrt{2\pi}}\, e^{-k^2/2}$$

which is quite accurate for $k > 3$.

Q-function: Couch pp. 700-701, 722-725, and the cover page in the back.
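For numerical work the Q-function is usually expressed through the complementary error function. The sketch below (my own, using scipy) compares the exact value with the large-argument approximation above; note how close they are once k > 3.

```python
import numpy as np
from scipy.special import erfc

def Q(k):
    """Exact Gaussian tail probability: Q(k) = 0.5 * erfc(k / sqrt(2))."""
    return 0.5 * erfc(k / np.sqrt(2.0))

def Q_approx(k):
    """Large-argument approximation Q(k) ~ exp(-k^2/2) / (k * sqrt(2*pi))."""
    return np.exp(-k**2 / 2.0) / (k * np.sqrt(2.0 * np.pi))

for k in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"k = {k:.0f}:  Q(k) = {Q(k):.3e}   approximation = {Q_approx(k):.3e}")
```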

24
Bit error probability (3)
What is the optimum decision threshold for the general case? For which $V_T$ is $P_e$ minimized?

$$P_e = P_1 \int_{-\infty}^{V_T} \frac{1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(r_0 - s_{01})^2}{2\sigma_0^2}}\, dr_0 \;+\; P_2 \int_{V_T}^{\infty} \frac{1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(r_0 - s_{02})^2}{2\sigma_0^2}}\, dr_0$$

Determine $V_T$ for which $dP_e / dV_T = 0$. Using Leibniz's rule we find:

$$\frac{dP_e}{dV_T} = \frac{P_1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(V_T - s_{01})^2}{2\sigma_0^2}} \;-\; \frac{P_2}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(V_T - s_{02})^2}{2\sigma_0^2}} = 0$$
25
Bit error probability (4)
Manipulation of:

$$\frac{P_1}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(V_T - s_{01})^2}{2\sigma_0^2}} \;-\; \frac{P_2}{\sigma_0\sqrt{2\pi}}\, e^{-\frac{(V_T - s_{02})^2}{2\sigma_0^2}} = 0$$

results in:

$$2\sigma_0^2 \ln\!\left(\frac{P_1}{P_2}\right) = (V_T - s_{01})^2 - (V_T - s_{02})^2 = -2 V_T (s_{01} - s_{02}) + s_{01}^2 - s_{02}^2$$

$$\Rightarrow\quad V_T = \frac{\sigma_0^2}{s_{01} - s_{02}} \ln\!\left(\frac{P_2}{P_1}\right) + \frac{s_{01} + s_{02}}{2}$$

and we find for $P_1 = P_2 = 0.5$: $V_T = \dfrac{s_{01} + s_{02}}{2}$, the optimum threshold!
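A short numerical sketch of this threshold expression (the sample values, noise level and priors below are assumed example values, not from the slides): it compares the optimum threshold for unequal priors with the midpoint threshold and shows that the former indeed gives a lower total error probability.

```python
import numpy as np
from scipy.special import erfc

Q = lambda k: 0.5 * erfc(k / np.sqrt(2.0))

# Assumed example values
s01, s02 = 1.0, 0.0        # noise-free sample values for "1" and "0"
sigma0 = 0.3               # rms noise at the sampler
P1, P2 = 0.8, 0.2          # unequal a-priori symbol probabilities

def Pe(VT):
    """Total error probability for decision threshold VT (slide notation)."""
    return P1 * Q((s01 - VT) / sigma0) + P2 * Q((VT - s02) / sigma0)

VT_mid = (s01 + s02) / 2
VT_opt = sigma0**2 / (s01 - s02) * np.log(P2 / P1) + (s01 + s02) / 2

print(f"midpoint threshold: VT = {VT_mid:.3f}, Pe = {Pe(VT_mid):.4f}")
print(f"optimum threshold : VT = {VT_opt:.3f}, Pe = {Pe(VT_opt):.4f}")
```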
26
Bit error probability (5)
Now we find for the bit error probability:

$$P_e = \frac{1}{2}\, Q\!\left(\frac{s_{01} - V_T}{\sigma_0}\right) + \frac{1}{2}\, Q\!\left(\frac{V_T - s_{02}}{\sigma_0}\right)$$

with $V_T = \dfrac{s_{01} + s_{02}}{2}$ (also for non-matched filters):

$$P_e = Q\!\left(\sqrt{\frac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right) = Q\!\left(\frac{|s_{01} - s_{02}|}{2\sigma_0}\right)$$

This holds in general: for any type of filter!

The BER can be further minimized by maximizing the argument of the Q-function:

$$\frac{[s_{01}(t_0) - s_{02}(t_0)]^2}{\sigma_0^2} = \frac{s_d^2(t_0)}{\sigma_0^2}$$

with $s_d(t_0) \triangleq s_{01}(t_0) - s_{02}(t_0)$ the difference signal.
27
Bit error probability (6)
This requires a "matched filter" (see section 6.8, p. 486-494):

$$h(t) = \frac{2K}{N_0}\,\big[s_1(t_0 - t) - s_2(t_0 - t)\big]$$

This gives a maximum SNR at sample time instant $t_0$. The optimal SNR is now (6.159):

$$\left.\frac{s_d^2(t_0)}{\sigma_0^2}\right|_{opt} = \int_{-\infty}^{\infty} \frac{|S_d(f)|^2}{P_n(f)}\, df = \frac{2}{N_0}\int_{-\infty}^{\infty} |S_d(f)|^2\, df$$

With Parseval's theorem we find:

$$\left.\frac{s_d^2(t_0)}{\sigma_0^2}\right|_{opt} = \frac{2}{N_0}\int_0^T s_d^2(t)\, dt = \frac{2 E_d}{N_0}, \qquad \text{where } E_d = \int_0^T \big[s_1(t) - s_2(t)\big]^2\, dt$$

28
Bit error probability (7)
Now we find for the BER with a matched filter:

$$P_e = Q\!\left(\sqrt{\frac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right) = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right)$$

This is the best we can do with additive white Gaussian noise!

The matched filter for rectangular pulses with duration $T$ is given by:

$$h(t) = \begin{cases} 1 & 0 \le t \le T \\ 0 & \text{other } t \end{cases} \qquad\Longleftrightarrow\qquad H(f) = T\,\frac{\sin \pi f T}{\pi f T} = T\,\mathrm{sinc}\, fT$$

with equivalent noise bandwidth $B_{eq} = \dfrac{1}{2T}$, and:

$$\frac{(s_{01} - s_{02})^2}{4\sigma_0^2} = \frac{s_d^2}{4\sigma_0^2} = \frac{s_d^2}{4 B_{eq} N_0} = \frac{2 T s_d^2}{4 N_0} = \frac{E_d}{2 N_0}$$
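A Monte Carlo sketch of this result (my own; it assumes rectangular antipodal pulses and an arbitrary Eb/N0): the simulated error rate of an I&D receiver is compared with Q(sqrt(Ed/2N0)).

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
Q = lambda k: 0.5 * erfc(k / np.sqrt(2.0))

# Assumed example: rectangular antipodal pulses s1 = +A, s2 = -A over T, I&D receiver
A, T, EbN0_dB = 1.0, 1e-2, 6.0
Eb = A**2 * T                          # bit energy for antipodal signalling
N0 = Eb / 10**(EbN0_dB / 10)

n_bits = 500_000
bits = rng.integers(0, 2, n_bits)
levels = np.where(bits == 1, A, -A)

# I&D test statistic: the signal part is +-A*T, and the integrated white noise
# over T seconds is Gaussian with variance N0*T/2
r0 = levels * T + rng.normal(0, np.sqrt(N0 * T / 2), n_bits)
ber_sim = np.mean((r0 > 0) != (bits == 1))      # decision threshold V_T = 0

Ed = (2 * A)**2 * T                    # difference-signal energy, Ed = 4*Eb
print("simulated BER   :", ber_sim)
print("Q(sqrt(Ed/2N0)) :", Q(np.sqrt(Ed / (2 * N0))))
```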
29
Colored noise (1)
When the noise is non-white, for example due to filtering, we cannot apply the previous theory unless we use a pre-whitening filter:

$$H_{pw}(f) = \frac{1}{\sqrt{P_n(f)}} \quad\Rightarrow\quad h_{pw}(t) = \mathcal{F}^{-1}\{H_{pw}(f)\}$$

and the filtered signal becomes: $\tilde{s}(t) = s(t) * h_{pw}(t)$
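A frequency-domain sketch of the pre-whitening idea (my own illustration; the coloured-noise PSD is chosen arbitrarily): white noise is first shaped by sqrt(Pn(f)) to make it coloured, and the filter 1/sqrt(Pn(f)) then flattens its spectrum again.

```python
import numpy as np

rng = np.random.default_rng(3)

fs, N = 8000, 4096
f = np.fft.rfftfreq(N, 1 / fs)

# Arbitrarily chosen coloured-noise PSD: more noise power at low frequencies
Pn = 1.0 / (1.0 + (f / 500.0)**2)

# Pre-whitening filter H_pw(f) = 1 / sqrt(Pn(f))
H_pw = 1.0 / np.sqrt(Pn)

white = rng.normal(0, 1, N)
coloured = np.fft.irfft(np.fft.rfft(white) * np.sqrt(Pn), N)     # shape the noise
whitened = np.fft.irfft(np.fft.rfft(coloured) * H_pw, N)         # whiten it again

def band_power(x, f_lo, f_hi):
    """Average periodogram power of x in the band [f_lo, f_hi)."""
    X = np.abs(np.fft.rfft(x))**2
    band = (f >= f_lo) & (f < f_hi)
    return X[band].mean()

for name, x in [("coloured", coloured), ("whitened", whitened)]:
    ratio = band_power(x, 0, 400) / band_power(x, 3000, 3400)
    print(f"{name} noise: low-band / high-band power = {ratio:.1f}")
```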


30
Colored noise (2)

Due to the pre-whitening filter, the filtered signal $\tilde{s}(t) = s(t) * h_{pw}(t)$ will be dispersed in time:

- loss of signal power, since part of it will be outside the interval $0 \le t \le T$
- inter-symbol interference (ISI, see also Ch. 3, pp. 176-185)

This problem can be solved by using shorter pulses and thus concentrating the power in time: this requires more bandwidth!

31
Summary: optimal detection of binary signals

In general, for additive white Gaussian noise (AWGN) and an arbitrary type of filter:

$$P_e = Q\!\left(\sqrt{\frac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right) = Q\!\left(\frac{|s_{01} - s_{02}|}{2\sigma_0}\right)$$

where $s_{01}$ and $s_{02}$ are the sampled values without noise, and the optimum decision threshold is $V_T = \dfrac{s_{01} + s_{02}}{2}$ if $P_1 = 1 - P_2 = 0.5$.

For the matched filter:

$$P_e = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right)$$

with $E_d = \int_0^T \big[s_1(t) - s_2(t)\big]^2\, dt$ the "difference symbol" energy.

32
Baseband signaling: review

Some line codes:


- Unipolar NRZ
- Polar NRZ
- Bipolar NRZ

33
Unipolar baseband signal (1)

$$s_1(t) = A, \qquad s_2(t) = 0$$

34
Unipolar baseband signal (2)
1. For a non-matched lowpass filter with $B_{eq} = \dfrac{2}{T}$ (no ISI at sample moment $t = t_0$):

- sample values: $s_{01}(t_0) = A$, $s_{02}(t_0) = 0$ $\;\Rightarrow\; V_{T,opt} = \dfrac{A}{2}$
  (What happens when $B_{eq}$ is increased? When $B_{eq}$ is decreased?)
- noise variance: $\sigma_0^2 = \dfrac{N_0}{2}\cdot 2 B_{eq} = N_0 B_{eq}$

$$\Rightarrow\quad P_e = Q\!\left(\frac{s_{01} - s_{02}}{2\sigma_0}\right) = Q\!\left(\sqrt{\frac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right) = Q\!\left(\sqrt{\frac{A^2}{4 N_0 B_{eq}}}\right)$$

35
Exercise: BER unipolar signal (1)
For a received unipolar signal we find: $A = 2\ \mathrm{V}$ and $\sigma_0^2 = 0.1\ \mathrm{V}^2$. What is the BER?

$$P_e = Q\!\left(\sqrt{\frac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right) = Q\!\left(\sqrt{\frac{A^2}{4\sigma_0^2}}\right) = Q\!\left(\sqrt{\frac{4}{4 \cdot 0.1}}\right) = Q\!\left(\sqrt{10}\right) = Q(3.16) \approx 8 \cdot 10^{-4}$$

36
Exercise: BER unipolar signal (2)

37
Unipolar baseband signal (3)
2. With a matched filter we obtain the maximum achievable SNR at the sample moment $t = t_0$, with

$$P_e = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right) = Q\!\left(\sqrt{\frac{E_b}{N_0}}\right) = Q\!\left(\sqrt{\frac{A^2 T}{2 N_0}}\right)$$

where $E_d = \int_0^T \big[s_1(t) - s_2(t)\big]^2\, dt = A^2 T$.

The average amount of energy transmitted per bit is:

$$E_b = \frac{A^2 T + 0}{2} = \frac{A^2 T}{2} = \frac{E_d}{2} \quad\Rightarrow\quad E_d = 2 E_b$$

and the optimum decision threshold is $V_{T,opt} = \dfrac{A T}{2}$.
38
BER for binary modulations with
matched filter detection

39
Polar baseband signal (1)

$$s_1(t) = A, \qquad s_2(t) = -A$$

40
Polar baseband signal (2)
1. For a non-matched lowpass filter with $B_{eq} = \dfrac{2}{T}$:

- sample values: $s_{01}(t_0) = A$, $s_{02}(t_0) = -A$ $\;\Rightarrow\; V_{T,opt} = \dfrac{s_{01}(t_0) + s_{02}(t_0)}{2} = 0$
  (important for fading channels!)
- noise variance: $\sigma_0^2 = \dfrac{N_0}{2}\cdot 2 B_{eq} = N_0 B_{eq}$

$$\Rightarrow\quad P_e = Q\!\left(\sqrt{\frac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right) = Q\!\left(\sqrt{\frac{A^2}{N_0 B_{eq}}}\right)$$

Note: polar signaling is 2x more power efficient and 4x more PEP (peak envelope power) efficient than the unipolar case!
41
Polar baseband signal (3)

2. With a matched filter we obtain the maximum SNR at the sample moment $t = t_0$.

The difference signal is $s_d = 2A$ $\;\Rightarrow\; E_d = \int_0^T (2A)^2\, dt = 4 A^2 T = 4 E_b$,

and

$$P_e = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right) = Q\!\left(\sqrt{\frac{2 E_b}{N_0}}\right) = Q\!\left(\sqrt{\frac{2 A^2 T}{N_0}}\right)$$

where $E_b = A^2 T = \dfrac{E_d}{4}$ and $V_{T,opt} = 0$.

What is the essential reason for the difference in BER performance between unipolar and polar signaling?
42
Unipolar vs. polar signaling

[Figure: unipolar signal levels 0 and B versus polar signal levels -B/2 and +B/2; both have the same distance B between the two signal points]

Unipolar:

$$E_d = \int_0^T \big[s_1(t) - s_2(t)\big]^2\, dt = B^2 T, \qquad E_b = \frac{B^2 T}{2} = \frac{E_d}{2}, \qquad E_d = 2 E_b$$

$$\Rightarrow\quad P_e = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right) = Q\!\left(\sqrt{\frac{E_b}{N_0}}\right) = Q\!\left(\sqrt{\frac{B^2 T}{2 N_0}}\right)$$

Polar:

$$E_d = \int_0^T \big[s_1(t) - s_2(t)\big]^2\, dt = B^2 T, \qquad E_b = \frac{B^2 T}{4} = \frac{E_d}{4}, \qquad E_d = 4 E_b$$

$$\Rightarrow\quad P_e = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right) = Q\!\left(\sqrt{\frac{2 E_b}{N_0}}\right) = Q\!\left(\sqrt{\frac{B^2 T}{2 N_0}}\right)$$

Antipodal signals! $(s_{01} - s_{02})^2$ is maximized with minimum power.
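The sketch below (my own wrapper around the two matched-filter expressions above) tabulates both BER formulas over a range of Eb/N0, which makes the 3 dB advantage of the antipodal (polar) case explicit.

```python
import numpy as np
from scipy.special import erfc

Q = lambda k: 0.5 * erfc(k / np.sqrt(2.0))

EbN0_dB = np.arange(0, 13, 2)
EbN0 = 10**(EbN0_dB / 10)

Pe_unipolar = Q(np.sqrt(EbN0))         # Pe = Q(sqrt(Eb/N0)),   Ed = 2*Eb
Pe_polar = Q(np.sqrt(2 * EbN0))        # Pe = Q(sqrt(2*Eb/N0)), Ed = 4*Eb

print("Eb/N0 [dB]   unipolar     polar")
for d, pu, pp in zip(EbN0_dB, Pe_unipolar, Pe_polar):
    print(f"   {d:4.1f}     {pu:.3e}   {pp:.3e}")
```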
43
Bipolar baseband signal (1)

$$s_1(t) = \pm A, \qquad s_2(t) = 0, \qquad P_1 = P_2 = 0.5$$

44
Bipolar baseband signal (2)
1. For any non-matched lowpass filter with $B_{eq} = \dfrac{2}{T}$:

- sample values: $s_{01}(t_0) = \pm A$ for "1", $s_{02}(t_0) = 0$ for "0"
  $$\Rightarrow\quad V_{T,opt} = \frac{s_{01a} + s_{02}}{2} = \frac{A}{2} \quad\text{and}\quad \frac{s_{01b} + s_{02}}{2} = -\frac{A}{2}$$
  Now two (optimum) decision levels are required!
- noise variance: $\sigma_0^2 = \dfrac{N_0}{2}\cdot 2 B_{eq} = N_0 B_{eq}$

Now

$$P_e = P(\varepsilon \,|\, {+A})\, P(+A) + P(\varepsilon \,|\, {-A})\, P(-A) + P(\varepsilon \,|\, 0)\, P(0)
= \tfrac{1}{4}\, Q\!\left(\frac{A - V_T}{\sigma_0}\right) + \tfrac{1}{4}\, Q\!\left(\frac{A - V_T}{\sigma_0}\right) + \tfrac{1}{2}\cdot 2\, Q\!\left(\frac{V_T}{\sigma_0}\right)$$

which for $V_T = A/2$ reduces to $\tfrac{3}{2}\, Q\!\left(\dfrac{V_T}{\sigma_0}\right)$.

For each of the terms we use $P_e = Q\!\left(\sqrt{\dfrac{(s_{01} - s_{02})^2}{4\sigma_0^2}}\right)$ with $(s_{01} - s_{02})^2 = A^2 = (2 V_T)^2$. What has been neglected?
45
Bipolar baseband signal (3)
1. Using $V_T = \dfrac{A}{2}$ we find

$$P_e = \frac{3}{2}\, Q\!\left(\frac{A}{2\sigma_0}\right) = \frac{3}{2}\, Q\!\left(\sqrt{\frac{A^2}{4 N_0 B_{eq}}}\right)$$

The same as for unipolar NRZ, but with twice the error probability in the "0"s.

2. With a matched filter we obtain the maximum SNR at the sample moment $t = t_0$.

The difference signal is $s_d = s_{01} - s_{02} = A$ $\;\Rightarrow\; E_d = \int_0^T A^2\, dt = A^2 T = 2 E_b$,

and

$$P_e = \frac{3}{2}\, Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right) = \frac{3}{2}\, Q\!\left(\sqrt{\frac{E_b}{N_0}}\right) = \frac{3}{2}\, Q\!\left(\sqrt{\frac{A^2 T}{2 N_0}}\right)$$

where $E_b = \dfrac{A^2 T}{2} = \dfrac{E_d}{2}$ and $V_{T,opt} = \pm\dfrac{A T}{2}$.
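For completeness, a short numerical sketch (my own) that evaluates the three matched-filter BER expressions derived in these slides side by side; it shows that bipolar signalling carries the 3 dB unipolar penalty plus the extra factor 3/2.

```python
import numpy as np
from scipy.special import erfc

Q = lambda k: 0.5 * erfc(k / np.sqrt(2.0))

EbN0_dB = np.array([4.0, 8.0, 12.0])
EbN0 = 10**(EbN0_dB / 10)

Pe_polar = Q(np.sqrt(2 * EbN0))            # polar (antipodal): Q(sqrt(2 Eb/N0))
Pe_unipolar = Q(np.sqrt(EbN0))             # unipolar:          Q(sqrt(Eb/N0))
Pe_bipolar = 1.5 * Q(np.sqrt(EbN0))        # bipolar:           (3/2) Q(sqrt(Eb/N0))

for row in zip(EbN0_dB, Pe_polar, Pe_unipolar, Pe_bipolar):
    print("Eb/N0 = {:4.1f} dB   polar {:.2e}   unipolar {:.2e}   bipolar {:.2e}".format(*row))
```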
46
