
Chapter 4

Channel Coding

Copyright © 2003, Dr. Dharma P. Agrawal and Dr. Qing-An Zeng. All rights reserved.
Outline
• Introduction
• Block Codes
• Cyclic Codes
• CRC (Cyclic Redundancy Check)
• Convolutional Codes
• Interleaving
• Information Capacity Theorem
• Turbo Codes
• ARQ (Automatic Repeat Request)
  - Stop-and-wait ARQ
  - Go-back-N ARQ
  - Selective-repeat ARQ

Introduction

[Block diagram: transmit and receive chains around the channel.

 Transmit side:  Information to be transmitted → Source coding →
                 Channel coding → Modulation → Transmitter → Channel

 Receive side:   Channel → Receiver → Demodulation → Channel decoding →
                 Source decoding → Information received]
Forward Error Correction (FEC)

• The key idea of FEC is to transmit enough redundant data to allow the
  receiver to recover from errors all by itself, so that no retransmission
  by the sender is required.
• The major categories of FEC codes are:
  - Block codes
  - Cyclic codes
  - Reed-Solomon codes (not covered here)
  - Convolutional codes
  - Turbo codes

Linear Block Codes
• Information is divided into blocks of length k.
• r parity bits (check bits) are added to each block (total length
  n = k + r).
• Code rate R = k/n.
• The decoder looks for the codeword closest to the received vector
  (code vector + error vector).
• Tradeoffs between:
  - Efficiency
  - Reliability
  - Encoding/decoding complexity

Linear Block Codes
• Let the uncoded k data bits be represented by the vector
  m = (m1, m2, …, mk)
  and the corresponding n-bit codeword by the vector
  c = (c1, c2, …, ck, ck+1, …, cn-1, cn).
• Each parity bit is a weighted modulo-2 sum of the data bits, with the
  modulo-2 sum represented by the ⊕ symbol:

  c1   = m1
  c2   = m2
  ...
  ck   = mk
  ck+1 = m1 p1(k+1) ⊕ m2 p2(k+1) ⊕ … ⊕ mk pk(k+1)
  ...
  cn   = m1 p1n ⊕ m2 p2n ⊕ … ⊕ mk pkn
Block Codes: Linear Block Codes
• Linear Block Code
  The codeword C of a linear block code is
  C = m G,
  where m is the k-bit information (message) block and G is the generator
  matrix
  G = [Ik | P]k×n,
  where pi = remainder of [ x^(n-k+i-1) / g(x) ] for i = 1, 2, …, k, and Ik
  is the k × k identity matrix.
• The parity check matrix is
  H = [PT | In-k],
  where PT is the transpose of the matrix P.
Block Codes: Example
Example: Find the linear block code encoder G if the code generator
polynomial is g(x) = 1 + x + x^3 for a (7, 4) code.

We have n = total number of bits = 7, k = number of information bits = 4,
r = number of parity bits = n - k = 3.

              [ 1 0 … 0 | p1 ]
              [ 0 1 … 0 | p2 ]
G = [I | P] = [      …       ]
              [ 0 0 … 1 | pk ]

where pi = remainder of [ x^(n-k+i-1) / g(x) ],  i = 1, 2, …, k.
Block Codes: Example (Continued)

p1 = Rem[ x^3 / (1 + x + x^3) ] = 1 + x       → 1 1 0
p2 = Rem[ x^4 / (1 + x + x^3) ] = x + x^2     → 0 1 1
p3 = Rem[ x^5 / (1 + x + x^3) ] = 1 + x + x^2 → 1 1 1
p4 = Rem[ x^6 / (1 + x + x^3) ] = 1 + x^2     → 1 0 1

    [ 1 0 0 0 1 1 0 ]
G = [ 0 1 0 0 0 1 1 ]
    [ 0 0 1 0 1 1 1 ]
    [ 0 0 0 1 1 0 1 ]
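The same construction is easy to script. Below is a minimal Python sketch (not part of the original slides; function names and structure are illustrative) that builds the systematic generator matrix of this (7, 4) code from g(x) = 1 + x + x^3 and encodes a message with C = mG; it reproduces the four rows of G shown above.

# Minimal sketch (not from the slides): build G = [I | P] for the (7, 4) code
# with g(x) = 1 + x + x^3, then encode a message as C = m G.
# Polynomials are lists of GF(2) coefficients, index i = coefficient of x^i.

def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division."""
    r = dividend[:]
    while len(r) >= len(divisor):
        shift = len(r) - len(divisor)       # align divisor with the top term
        for i, d in enumerate(divisor):
            r[shift + i] ^= d
        while r and r[-1] == 0:             # drop cancelled leading terms
            r.pop()
    return r

def generator_matrix(n, k, g):
    rows = []
    for i in range(1, k + 1):
        x_pow = [0] * (n - k + i - 1) + [1]                  # x^(n-k+i-1)
        p = poly_mod(x_pow, g)
        parity = [(p[j] if j < len(p) else 0) for j in range(n - k)]
        identity = [1 if j == i - 1 else 0 for j in range(k)]
        rows.append(identity + parity)                       # row of [I_k | P]
    return rows

def encode(m, G):
    n = len(G[0])
    return [sum(m[i] & G[i][j] for i in range(len(m))) % 2 for j in range(n)]

g = [1, 1, 0, 1]                            # 1 + x + x^3
G = generator_matrix(7, 4, g)
for row in G:
    print(row)                              # 1000110, 0100011, 0010111, 0001101
print(encode([1, 0, 1, 1], G))              # codeword m·G for this particular G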
Block Codes: Linear Block Codes
[Figure: operations of the generator matrix and the parity check matrix.
 Message vector m × Generator matrix G → Code vector C;
 Code vector C × Parity check matrix HT → Null vector 0.]

The parity check matrix H is used to detect errors in the received code,
using the fact that c · HT = 0 (the null vector).

Let x = c ⊕ e be the received message, where c is the correct code and e is
the error.
Compute the syndrome S = x · HT = (c ⊕ e) · HT = c·HT ⊕ e·HT = e·HT.
If S is 0, the message is correct; otherwise there are errors in it, and the
correct message can be decoded from commonly known error patterns.
Linear Block Codes

• Consider a (7,4) linear block code, given by G as

      [ 1 0 0 0 1 1 1 ]                 [ 1 1 1 0 1 0 0 ]
  G = [ 0 1 0 0 1 1 0 ]    Then,    H = [ 1 1 0 1 0 1 0 ]
      [ 0 0 1 0 1 0 1 ]                 [ 1 0 1 1 0 0 1 ]
      [ 0 0 0 1 0 1 1 ]

• For m = [1 0 1 1], c = mG = [1 0 1 1 0 0 1].

• If there is no error, the received vector x = c, and s = cHT = [0 0 0].

Linear Block Codes
Let c suffer an error such that the received vector is

  x = c ⊕ e
    = [1 0 1 1 0 0 1] ⊕ [0 0 1 0 0 0 0]
    = [1 0 0 1 0 0 1].

Then,
                                 [ 1 1 1 ]
                                 [ 1 1 0 ]
                                 [ 1 0 1 ]
  s = xHT = [1 0 0 1 0 0 1]  ×   [ 0 1 1 ]  =  [1 0 1]  =  eHT
                                 [ 1 0 0 ]
                                 [ 0 1 0 ]
                                 [ 0 0 1 ]

The nonzero syndrome [1 0 1] matches the third row of HT, so the error is in
the third bit.
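As a companion to this example, here is a minimal Python sketch (not from the slides; names are illustrative) of syndrome decoding for the same (7, 4) code: it computes s = xHT and, assuming a single-bit error, flips the bit whose row of HT matches the syndrome.

# Minimal sketch (not from the slides): syndrome decoding for the (7, 4)
# code above, assuming at most one bit error.

# Rows of H^T, one per received-bit position (H = [P^T | I3]).
HT = [[1,1,1], [1,1,0], [1,0,1], [0,1,1], [1,0,0], [0,1,0], [0,0,1]]

def syndrome(x):
    return [sum(x[i] & HT[i][j] for i in range(7)) % 2 for j in range(3)]

def correct_single_error(x):
    s = syndrome(x)
    if s == [0, 0, 0]:
        return x                       # no detectable error
    pos = HT.index(s)                  # syndrome equals the row of H^T at the error position
    return [b ^ (1 if i == pos else 0) for i, b in enumerate(x)]

x = [1, 0, 0, 1, 0, 0, 1]              # codeword [1 0 1 1 0 0 1] with bit 3 flipped
print(syndrome(x))                     # [1, 0, 1]
print(correct_single_error(x))         # [1, 0, 1, 1, 0, 0, 1]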
Cyclic Codes
• A cyclic code is a block code that uses a shift register to perform
  encoding and decoding.
• The codeword with n bits is expressed as
  c(x) = c1 x^(n-1) + c2 x^(n-2) + … + cn
  where each ci is either 1 or 0.
• The codeword is formed as
  c(x) = m(x) x^(n-k) + cp(x)
  where cp(x) is the remainder from dividing m(x) x^(n-k) by the generator
  g(x).
• The received signal is c(x) + e(x), where e(x) is the error.
• To check whether the received signal is error free, the remainder
  (syndrome) from dividing c(x) + e(x) by g(x) is obtained. If this is 0,
  the received signal is considered error free; otherwise the error pattern
  is identified from known error syndromes.
Cyclic Code: Example
Example: Find the codeword c(x) if m(x) = 1 + x + x^2 and g(x) = 1 + x + x^3
for a (7, 4) cyclic code.

We have n = total number of bits = 7, k = number of information bits = 4,
r = number of parity bits = n - k = 3.

  cp(x) = Rem[ m(x) x^(n-k) / g(x) ]
        = Rem[ (x^3 + x^4 + x^5) / (1 + x + x^3) ]
        = x

Then,
  c(x) = m(x) x^(n-k) + cp(x) = x + x^3 + x^4 + x^5
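The division above is straightforward to reproduce in code. The following Python sketch (illustrative, not the authors' code) represents polynomials as sets of exponents and recovers cp(x) = x and c(x) = x + x^3 + x^4 + x^5 for this example.

# Minimal sketch (not from the slides): systematic cyclic encoding for the
# example m(x) = 1 + x + x^2, g(x) = 1 + x + x^3, (7, 4) code.
# A polynomial is a set of exponents whose coefficient is 1.

def poly_rem(a, g):
    """Remainder of a(x) / g(x) over GF(2)."""
    a, dg = set(a), max(g)
    while a and max(a) >= dg:
        shift = max(a) - dg
        a ^= {e + shift for e in g}    # subtract (XOR) the shifted divisor
    return a

m = {0, 1, 2}                          # 1 + x + x^2
g = {0, 1, 3}                          # 1 + x + x^3
n, k = 7, 4

shifted = {e + (n - k) for e in m}     # m(x) * x^(n-k) = x^3 + x^4 + x^5
cp = poly_rem(shifted, g)              # {1}  ->  cp(x) = x
c = shifted | cp                       # c(x) = x + x^3 + x^4 + x^5
print(sorted(cp), sorted(c))           # [1] [1, 3, 4, 5]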
Cyclic Redundancy Check (CRC)
• The Cyclic Redundancy Check (CRC) is an error-detecting code.

• The transmitter appends an extra n-bit sequence, called the Frame Check
  Sequence (FCS), to every frame. The FCS holds redundant information about
  the frame that helps the receiver detect errors in the frame.

• CRC is based on polynomial manipulation using modulo-2 arithmetic. A block
  of input bits is treated as the coefficient set of a polynomial, called
  the message polynomial. The polynomial with fixed, predefined coefficients
  is called the generator polynomial.
Cyclic Redundancy Check (CRC)
• The message polynomial is divided by the generator polynomial; the
  coefficients of the remainder form the bits of the final CRC.
• Define:
  - M : the original frame (k bits) to be transmitted, before adding the
    Frame Check Sequence (FCS).
  - F : the resulting FCS of n bits to be added to M (usually n = 8, 16, 32).
  - T : the concatenation of M and F.
  - P : the predefined CRC generator polynomial, a pattern of n + 1 bits.
• The main idea of the CRC algorithm is that the FCS is generated so that
  the remainder of T/P is zero.

Cyclic Redundancy Check (CRC)
• The CRC creation process is defined as follows:
  - Get the block of raw message.
  - Left-shift the raw message by n bits and then divide it by P.
  - Take the remainder R as the FCS.
  - Append R to the raw message. The result is the frame to be transmitted.
• The CRC is checked using the following process:
  - Receive the frame.
  - Divide it by P.
  - Check the remainder. If the remainder is not zero, there is an error in
    the frame.
(Both steps are sketched below.)
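A bit-level Python sketch of both procedures follows (not from the slides; the 4-bit generator pattern P is a made-up illustration, not one of the standard CRC polynomials listed on the next slide).

# Minimal sketch (not from the slides): CRC generation and checking by
# modulo-2 long division on bit lists (MSB first).

def mod2_div(bits, p):
    """Remainder of bits / p over GF(2)."""
    r = bits[:]
    for i in range(len(bits) - len(p) + 1):
        if r[i] == 1:
            for j in range(len(p)):
                r[i + j] ^= p[j]
    return r[-(len(p) - 1):]                     # last n bits are the remainder

def crc_append(m, p):
    n = len(p) - 1
    fcs = mod2_div(m + [0] * n, p)               # left-shift M by n bits, divide by P
    return m + fcs                               # T = M followed by F

def crc_check(t, p):
    return all(b == 0 for b in mod2_div(t, p))   # remainder of T/P must be 0

P = [1, 0, 1, 1]                                 # hypothetical generator, 1 + x + x^3
M = [1, 1, 0, 1, 0, 1]
T = crc_append(M, P)
print(T, crc_check(T, P))                        # transmitted frame, True
T[2] ^= 1                                        # inject a bit error
print(crc_check(T, P))                           # False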

Common CRC Codes

Code         Generator polynomial g(x)          Parity check bits
CRC-12       1 + x + x^2 + x^3 + x^11 + x^12    12
CRC-16       1 + x^2 + x^15 + x^16              16
CRC-CCITT    1 + x^5 + x^12 + x^16              16

Convolutional Codes

• Encoding is applied to the information stream rather than to information
  blocks.
• The value of a given information symbol also affects the encoding of the
  next M information symbols, i.e., the encoder has memory M.
• Easy implementation using a shift register.
• Assume k inputs and n outputs per encoding step.
• Decoding is mostly performed by the Viterbi algorithm (not covered here).
Convolutional Codes: (n=2, k=1, M=2)
Encoder

[Figure: encoder with input x feeding shift-register stages D1 and D2; the
 two outputs y1 and y2 are multiplexed into the coded output c. Di denotes
 a register stage.]

Input:  1 1 1 0 0 0 …      Output: 11 01 10 01 11 00 …
Input:  1 0 1 0 0 0 …      Output: 11 10 00 10 11 00 …
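For reference, a small Python sketch (not from the slides) of this encoder. The tap equations y1 = x ⊕ D1 ⊕ D2 and y2 = x ⊕ D2 are an assumption, chosen because they reproduce both input/output pairs listed above (the classic octal 7/5 generators).

# Minimal sketch (not from the slides): rate-1/2, memory-2 convolutional
# encoder with assumed taps y1 = x ^ D1 ^ D2 and y2 = x ^ D2.

def conv_encode(bits):
    d1 = d2 = 0                        # shift-register contents, initially zero
    out = []
    for x in bits:
        y1 = x ^ d1 ^ d2
        y2 = x ^ d2
        out.append(f"{y1}{y2}")
        d1, d2 = x, d1                 # shift: D2 <- D1, D1 <- x
    return " ".join(out)

print(conv_encode([1, 1, 1, 0, 0, 0]))   # 11 01 10 01 11 00
print(conv_encode([1, 0, 1, 0, 0, 0]))   # 11 10 00 10 11 00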

State Diagram
[Figure: state diagram of the encoder over the four register states 00, 01,
 10, 11; each transition is labeled output/input.]

  Current state   Input   Output   Next state
      00            0       00        00
      00            1       11        10
      01            0       11        00
      01            1       00        10
      10            0       10        01
      10            1       01        11
      11            0       01        01
      11            1       10        11
Tree Diagram
[Figure: tree diagram for the same encoder. Starting from state 00, each
 successive input bit selects a branch (the first input at the root), and
 each branch is labeled with the corresponding two output bits.]
Trellis
[Figure: trellis diagram for the same encoder. The four states 00, 10, 01,
 11 are drawn as rows, time proceeds to the right, and each branch is
 labeled with its two output bits; a highlighted path traces an example
 input sequence.]
Interleaving
Input Data       a1, a2, a3, a4, a5, a6, a7, a8, a9, …

Interleaving     (write by rows, read by columns)
                 a1,  a2,  a3,  a4
                 a5,  a6,  a7,  a8
                 a9,  a10, a11, a12
                 a13, a14, a15, a16

Transmitting     a1, a5, a9, a13, a2, a6, a10, a14, a3, …
Data

De-Interleaving  (write by columns, read by rows)
                 a1,  a2,  a3,  a4
                 a5,  a6,  a7,  a8
                 a9,  a10, a11, a12
                 a13, a14, a15, a16

Output Data      a1, a2, a3, a4, a5, a6, a7, a8, a9, …
Interleaving (Example)

Burst error

Transmitting     0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, …
Data

De-Interleaving  (write by columns, read by rows)
                 0, 1, 0, 0
                 0, 1, 0, 0
                 0, 1, 0, 0
                 1, 0, 0, 0

Output Data      0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, …

Discrete errors: the burst of four consecutive channel errors is spread out
into isolated single errors after de-interleaving.

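A minimal Python sketch (not from the slides; names are illustrative) of a 4 × 4 block interleaver and de-interleaver; it reproduces the transmitted order shown above and inverts it exactly.

# Minimal sketch (not from the slides): 4 x 4 block interleaver
# (write by rows, read by columns) and the matching de-interleaver.

def interleave(block, rows=4, cols=4):
    """Write `block` row by row, read it out column by column."""
    matrix = [block[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(block, rows=4, cols=4):
    """Inverse operation: write by columns, read by rows."""
    matrix = [[None] * cols for _ in range(rows)]
    i = 0
    for c in range(cols):
        for r in range(rows):
            matrix[r][c] = block[i]
            i += 1
    return [matrix[r][c] for r in range(rows) for c in range(cols)]

data = [f"a{i}" for i in range(1, 17)]
tx = interleave(data)
print(tx[:9])                    # ['a1', 'a5', 'a9', 'a13', 'a2', 'a6', 'a10', 'a14', 'a3']
print(deinterleave(tx) == data)  # True

A burst hitting four consecutive transmitted symbols corrupts one symbol in each of four different rows, which is why the de-interleaved output above shows only isolated errors.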
Information Capacity Theorem
(Shannon Limit)
• The information capacity (or channel capacity) C of a continuous channel
  of bandwidth B Hertz, perturbed by additive white Gaussian noise of power
  spectral density N0/2 and limited in bandwidth to B, is

  C = B log2( 1 + P / (N0 B) )   bits/second

  where P is the average transmitted power, P = Eb Rb (for an ideal system,
  Rb = C), Eb is the transmitted energy per bit, and Rb is the transmission
  rate.
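To make the formula concrete, here is a short Python sketch (not from the slides; the bandwidth, power, and noise-density values are made up for illustration).

# Minimal sketch (not from the slides): evaluate C = B*log2(1 + P/(N0*B)).
import math

def capacity(B_hz, P_watts, N0):
    """Shannon capacity in bits/second."""
    return B_hz * math.log2(1 + P_watts / (N0 * B_hz))

B = 1e6                     # 1 MHz bandwidth
N0 = 1e-9                   # noise power spectral density (W/Hz)
P = 1e-2                    # average transmitted power (W); SNR = P/(N0*B) = 10
print(capacity(B, P, N0))   # ≈ 3.46 Mbit/s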
Shannon Limit
[Figure: spectral efficiency Rb/B (log scale, roughly 0.1 to 20) versus
 Eb/N0 in dB (-1.6 to 30). The capacity boundary Rb = C separates the region
 Rb > C (above) from the region Rb < C (below); the Shannon limit lies at
 Eb/N0 = -1.6 dB.]
Turbo Codes
• A brief history of turbo codes:
  The turbo code concept was first introduced by C. Berrou in 1993. Today,
  turbo codes are considered among the most efficient coding schemes for FEC.
• They are built from known components (simple convolutional or block codes,
  an interleaver, a soft-decision decoder, etc.).
• Performance close to the Shannon limit (Eb/N0 = -1.6 dB as Rb → 0) at
  modest complexity!
• Turbo codes have been proposed for low-power applications such as
  deep-space and satellite communications, as well as for interference-
  limited applications such as third-generation cellular, personal
  communication services, ad hoc and sensor networks.
Turbo Codes: Encoder

[Figure: turbo encoder. The data source output X is sent directly as the
 systematic stream and also feeds Convolutional Encoder 1 (producing Y1)
 and, through an interleaver, Convolutional Encoder 2 (producing Y2); the
 redundancy is Y = (Y1, Y2).]

X: information
Yi: redundancy information
Turbo Codes: Decoder

[Figure: turbo decoder. Convolutional Decoder 1 works on X and Y1 and passes
 its soft output, via an interleaver, to Convolutional Decoder 2, which also
 receives the interleaved X and Y2; the exchanged information is
 de-interleaved and fed back, and the de-interleaved output of Decoder 2 is
 the decoded information X'.]

X': decoded information
Automatic Repeat Request (ARQ)

[Figure: ARQ system. Transmitter side: Source → Transmit Controller →
 Encoder → Modulation → Channel. Receiver side: Demodulation → Decoder →
 Controller → Destination. An Acknowledge path runs from the receiver back
 to the transmitter.]
Stop-And-Wait ARQ (SAW ARQ)
[Figure: timing diagram. The transmitter sends blocks 1, 2, 3, … one at a
 time, waiting for an acknowledgement after each block. Block 3 is received
 in error, so the receiver returns NAK and block 3 is retransmitted; the
 output data are 1, 2, 3 in order.]

ACK: acknowledgement
NAK: negative acknowledgement
Stop-And-Wait ARQ (SAW ARQ)

Throughput:

  S = (1/T) · (k/n) = [ (1 - Pb)^n / (1 + D·Rb/n) ] · (k/n)

where T is the average transmission time in units of a block duration:

  T = (1 + D·Rb/n)·PACK + 2·(1 + D·Rb/n)·PACK·(1 - PACK)
      + 3·(1 + D·Rb/n)·PACK·(1 - PACK)^2 + …

    = (1 + D·Rb/n)·PACK · Σ_{i=1..∞} i·(1 - PACK)^(i-1)

    = (1 + D·Rb/n)·PACK / [1 - (1 - PACK)]^2

    = (1 + D·Rb/n) / PACK

and n = number of bits in a block, k = number of information bits in a
block, D = round-trip delay, Rb = bit rate, Pb = BER of the channel, and
PACK = (1 - Pb)^n.
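The formula is easy to evaluate numerically. A short Python sketch (not from the slides; the parameter values are made up for illustration):

# Minimal sketch (not from the slides): stop-and-wait ARQ throughput
# S = [(1 - Pb)^n / (1 + D*Rb/n)] * (k/n).

def saw_throughput(n, k, D, Rb, Pb):
    p_ack = (1 - Pb) ** n              # probability a block is received correctly
    T = (1 + D * Rb / n) / p_ack       # average transmission time (block durations)
    return (1 / T) * (k / n)

print(saw_throughput(n=1000, k=900, D=10e-3, Rb=1e6, Pb=1e-4))   # ≈ 0.074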
Go-Back-N ARQ (GBN ARQ)
[Figure: timing diagram. Blocks 1-5 are sent back to back; block 3 is
 received in error, so the NAK makes the transmitter go back and resend from
 block 3 (3, 4, 5, 6, 7, …). A later error in block 5 makes it go back again
 and resend from block 5. Output data: 1, 2, 3, 4, 5, … in order.]
Go-Back-N ARQ (GBN ARQ)

Throughput:

  S = (1/T) · (k/n)
    = [ (1 - Pb)^n / ( (1 - Pb)^n + N·(1 - (1 - Pb)^n) ) ] · (k/n)

where

  T = 1·PACK + (N + 1)·PACK·(1 - PACK) + (2N + 1)·PACK·(1 - PACK)^2 + …

    = PACK + PACK·[ (1 - PACK) + (1 - PACK)^2 + (1 - PACK)^3 + … ]
           + PACK·[ N·(1 - PACK) + 2N·(1 - PACK)^2 + 3N·(1 - PACK)^3 + … ]

    = PACK + PACK·[ (1 - PACK)/PACK + N·(1 - PACK)/PACK^2 ]

    = 1 + N·[ 1 - (1 - Pb)^n ] / (1 - Pb)^n
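For comparison with the stop-and-wait sketch above, a short Python sketch (not from the slides; N and the other parameter values are illustrative):

# Minimal sketch (not from the slides): go-back-N ARQ throughput
# S = [(1 - Pb)^n / ((1 - Pb)^n + N*(1 - (1 - Pb)^n))] * (k/n).

def gbn_throughput(n, k, N, Pb):
    p_ack = (1 - Pb) ** n
    return p_ack / (p_ack + N * (1 - p_ack)) * (k / n)

print(gbn_throughput(n=1000, k=900, N=7, Pb=1e-4))   # ≈ 0.52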
Selective-Repeat ARQ (SR ARQ)
[Figure: timing diagram. Blocks 1-5 are sent; block 3 is received in error,
 so only block 3 is retransmitted after the NAK. Later, block 7 is received
 in error and only block 7 is retransmitted. Correctly received blocks are
 buffered and reordered, so the output data are 1, 2, 3, 4, 5, 6, 7, 8, 9
 in order.]
Selective-Repeat ARQ (SR ARQ)

Throughput:

  S = (1/T) · (k/n)
    = (1 - Pb)^n · (k/n)

where

  T = 1·PACK + 2·PACK·(1 - PACK) + 3·PACK·(1 - PACK)^2 + …

    = PACK · Σ_{i=1..∞} i·(1 - PACK)^(i-1)

    = PACK / [1 - (1 - PACK)]^2

    = 1 / (1 - Pb)^n
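Finally, the selective-repeat throughput with the same illustrative parameters as the two sketches above (again, not from the slides):

# Minimal sketch (not from the slides): selective-repeat ARQ throughput
# S = (1 - Pb)^n * (k/n).

def sr_throughput(n, k, Pb):
    return (1 - Pb) ** n * (k / n)

print(sr_throughput(n=1000, k=900, Pb=1e-4))   # ≈ 0.81, the highest of the three schemes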
