
ECE 6640

Digital Communications

Dr. Bradley J. Bazuin


Assistant Professor
Department of Electrical and Computer Engineering
College of Engineering and Applied Sciences
Chapter 7
7. Channel Coding: Part 2
1. Convolutional Encoding.
2. Convolutional Encoder Representation.
3. Formulation of the Convolutional Decoding Problem.
4. Properties of Convolutional Codes.
5. Other Convolutional Decoding Algorithms.

Sklar’s Communications System

Notes and figures are based on or taken from materials in the course textbook:
Bernard Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Prentice Hall PTR, 2001.
Signal Processing Functions

Notes and figures are based on or taken from materials in the course textbook:
Bernard Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Prentice Hall PTR, 2001.
Waveform Coding Structured Sequences

• Structured Sequences:
– Transforming waveforms into “better” waveform representations that contain redundant bits
– Use redundancy for error detection and correction
• Block codes are memoryless
• Convolutional codes have memory!

Convolutional Encodings

• The encoder transforms each input sequence m into a unique codeword sequence U = G(m). Even though the sequence m uniquely defines the sequence U, a key feature of convolutional codes is that a given k-tuple within m does not uniquely define its associated n-tuple within U, since the encoding of each k-tuple is a function not only of that k-tuple but also of the K-1 input k-tuples that precede it.
• Each k-tuple affects not just the codeword generated when it is input, but the next K-1 codewords as well.
– the system has memory

Convolutional Encoder Diagram
• Each message, mi, may be a k-tuple (for k = 1, a single bit)
• K messages are held in the encoder at any time
• For each message input, an n-tuple is generated
• The code rate is k/n
• We will usually be working with k = 1 and n = 2 or 3

Proakis Convolutional Encoder

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Representations

• Several methods are used for representing a convolutional encoder, the most popular being:
– the connection pictorial
– connection vectors or polynomials
– state diagrams
– tree diagrams
– trellis diagrams

Connection Representation

• k = 1, n = 2
• Generator Polynomials
– G1 = 1 + X + X^2
– G2 = 1 + X^2
• To end a message, K-1 “zero” messages are transmitted. This allows the encoder to be flushed.
– the effective code rate is then lower than k/n … the actual rate is (k · m_length) / (n · (m_length + K - 1))
– this is called a zero-tailed encoder (a sketch of this encoder follows)
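A minimal MATLAB sketch of this encoder (Communications Toolbox assumed for poly2trellis and convenc; the message value is the one used on the impulse-response slide):

t = poly2trellis(3, [7 5]);      % K = 3; G1 = 7 octal, G2 = 5 octal
m = [1 0 1];                     % message from the impulse-response slide
u = convenc([m zeros(1,2)], t);  % append K-1 = 2 tail zeros to flush
% u = 1 1 1 0 0 0 1 0 1 1 -> n-tuples 11 10 00 10 11
% effective rate: 3 message bits / 10 code bits = 3/10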
Impulse Response of the Encoder
• allow a single “1” to transition through the K stages
– 100 -> 11
– 010 -> 10
– 001 -> 11
– 000 -> 00
• If the input message were 1 0 1, the shifted impulse responses are
– 1: 11 10 11
– 0:    00 00 00
– 1:       11 10 11
– Bsum: 11 10 00 10 11
– Bsum is the transmitted n-tuple sequence … if a 2-zero tail follows
– The sequence/summation involves superposition, i.e., linear (modulo-2) addition.
• The impulse response of one k-tuple sums with the impulse responses of successive k-tuples!
Convolutional Encoding the Message

• As each k-tuple is input, an n-tuple is output
• This is a rate 1/2 encoding
• The “constraint length” is K = 3, the length of the shift register in k-tuples.
• The effective code rate for m_length = 3 is 3/10

Proakis (3,1), rate 1/3, K=3 Pictorial

• Generator Polynomials
– G1 = 1
– G2 = 1 + X^2
– G3 = 1 + X + X^2

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Polynomial Representation

• Generator Polynomials (also represented in octal)
– G1 = 1 + X + X^2 -> 7 octal
– G2 = 1 + X^2 -> 5 octal
• A binary input is assumed.
• There are two generator polynomials, therefore n = 2
– each polynomial generates one of the elements of the n-tuple output
• Polynomial multiplication can be used to generate the output sequences (a sketch follows)
• m(X)·g1(X) = (1 + X^2)·(1 + X + X^2) = 1 + X + X^3 + X^4
• m(X)·g2(X) = (1 + X^2)·(1 + X^2) = 1 + X^4
• Output: (11, 10, 00, 10, 11) as before
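A base-MATLAB sketch of this polynomial multiplication over GF(2); coefficient vectors are lowest power first, and the variable names are illustrative:

m  = [1 0 1];             % m(X)  = 1 + X^2
g1 = [1 1 1];             % g1(X) = 1 + X + X^2
g2 = [1 0 1];             % g2(X) = 1 + X^2
c1 = mod(conv(m, g1), 2)  % -> 1 1 0 1 1, i.e. 1 + X + X^3 + X^4
c2 = mod(conv(m, g2), 2)  % -> 1 0 0 0 1, i.e. 1 + X^4
% interleaving c1 and c2 gives the n-tuples 11 10 00 10 11, as on the slide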
State Representation
• Using the same encoding:
• Solid lines represent 0 inputs
• Dashed lines represent 1 inputs
• The n-tuple output is shown with the state transition
• It can be verified that two zeros always return the encoder to the same “steady state”
• Note: the two previous k-tuples provide the state; the new k-tuple drives the transitions
Proakis (3,1) K=3
State Diagram

Solid lines are 0 inputs
Dashed lines are 1 inputs

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Tree Diagram

• Input values define where to go next.
• Each set of branch outputs transmits complementary n-tuples
• States can be identified by repeated level operations

Proakis (3,1) K=3
Polynomial and Tree

g1 = [1 0 0]
g2 = [1 0 1]
g3 = [1 1 1]

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Trellis Diagram

• The tree structure repeats itself.
• The tree/state diagrams define a finite number of states.
– the tree has an ever-increasing number of branches to show the complete path of a message
– the state diagram folds back on itself, so observing the complete path is difficult
• Can we identify diagrammatically a figure that shows the state transitions and the entire message path?

• Yes, the Trellis Diagram

Trellis Diagram

• Initial state, state development, continuous pattern observable, “fully engaged” after K-1 inputs, easily trace tail zeros back to the “initial state”.
Proakis (3,1) K=3
Trellis Diagram

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
A more complicated example follows

• Proakis (3,2) K=2 Encoder
– Pictorial
– Polynomial and Tree
– State Diagram
– Trellis

Proakis (3,2) K=2
Pictorial

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Proakis (3,2) K=2
Polynomial and Tree
g1 = [1 0 1 1]
g2 = [1 1 0 1]
g3 = [1 0 1 0]

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Proakis (3,2) K=2
State Diagram

Solid lines are 0 inputs
Dashed lines are 1 inputs

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Proakis (3,2) K=2
Trellis Diagram

John G. Proakis, Digital Communications, 4th ed., McGraw Hill, 2001. ISBN: 0-07-232111-3.
Encoding

• Each approach can be readily implemented in hardware.
• Good codes have been found by computer searches for each value of the constraint length, K.
• That was the easy part; now for decoding.
Decoding Convolutional Codes

• As the codes have memory, we wish to use a decoder that achieves the minimum probability of error … using a condition called maximum likelihood.
ECE 5820 MAP and ML

• There are various estimators for signals combined with random variables.
• In general we are interested in the maximum a-posteriori (MAP) estimator of X for a given observation Y:

$$\max_x P(X = x \mid Y = y)$$

– this requires knowledge of the a-priori probability, so that (by Bayes’ rule)

$$P(X = x \mid Y = y) = \frac{P(Y = y \mid X = x)\,P(X = x)}{P(Y = y)}$$
ECE 5820 MAP and ML

• In some situations, we know $P(Y = y \mid X = x)$ but not the a-priori probabilities.
• In these cases, we form a maximum likelihood (ML) estimate:

$$\max_x P(Y = y \mid X = x)$$
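A small numeric MATLAB sketch contrasting MAP and ML for a binary X; the priors and likelihoods below are illustrative values, not from the slides:

Px   = [0.9 0.1];              % assumed priors P(X=0), P(X=1)
PyGx = [0.9 0.1; 0.2 0.8];     % assumed likelihoods; row = x+1, column = y+1
y    = 1;                      % observe Y = 1
post = PyGx(:, y+1) .* Px.' / (PyGx(:, y+1).' * Px.');  % Bayes posterior
[~, iMAP] = max(post);         % MAP uses the prior -> x = 0 here
[~, iML]  = max(PyGx(:, y+1)); % ML ignores the prior -> x = 1 here
fprintf('MAP: x=%d, ML: x=%d\n', iMAP-1, iML-1);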
ECE 5820 Markov Process

• A sequence or “chain” of subexperiments in which the outcome of a given subexperiment determines which subexperiment is performed next:

$$P(s_0, s_1, \ldots, s_n) = P(s_n \mid s_{n-1})\,P(s_{n-1} \mid s_{n-2}) \cdots P(s_1 \mid s_0)\,P(s_0)$$

• If the output from the previous state in a trellis is known, the next state is based only on the previous state and the new input.
– the decoder can be computed one step at a time to determine the maximum likelihood path.
• Viterbi’s improvement on this concept:
– In a trellis, there is a repetition of states. If two paths arrive at the same state, only the path with the maximum likelihood must be maintained … the “other path” can no longer become the ML path!
Maximum Likelihood Decoding

• If all input message sequences are equally likely, the decoder that achieves the minimum probability of error is the one that compares the conditional probabilities of all possible paths against the received sequence and picks the maximum:

$$P(Z \mid U^{(m')}) = \max_{\text{all } U^{(m)}} P(Z \mid U^{(m)})$$

– where the U^(m) are the possible message paths
• For a memoryless channel we can base the computation on the individual values of the observed path Z:

$$P(Z \mid U^{(m)}) = \prod_{i=1}^{\infty} P(Z_i \mid U_i^{(m)}) = \prod_{i=1}^{\infty} \prod_{j=1}^{n} P(z_{ji} \mid u_{ji}^{(m)})$$

– where Z_i is the ith branch of the received sequence Z, z_ji is the jth code symbol of Z_i, and similarly for U and u.
ML Computed Using Logs

• As the probability is a product of products, computation precision and the final magnitude are of concern.

• By taking the log of the products, a summation may be performed instead of multiplications.
– constants can easily be absorbed
– similar sets of magnitudes can be pre-computed and/or even scaled to more desirable values.
– the precision used for the values can vary as desired for the available bit precision (hard vs. soft values)
Channel Models:
Hard vs. Soft Decisions
• Our previous symbol determinations selected a detected symbol with no other considerations … a hard decision.
• The decision had computed metrics that were used to make the determination and were then discarded.
• What if the relative certainty of the decision were maintained along with the decision?
– if one decision influences another decision, hard decisions keep that certainty from being used.
– maintaining a soft decision may allow overall higher decision accuracy when an interaction exists.
ML in Binary Symmetric Channels

• Bit error probability
– P(0|1) = P(1|0) = p
– P(1|1) = P(0|0) = 1 - p
• Suppose Z and a possible message sequence U^(m) differ in d_m positions (their Hamming distance). Then the ML probability for an L-bit sequence becomes

$$P(Z \mid U^{(m)}) = p^{d_m}\,(1-p)^{L-d_m}$$

• taking the log

$$\log P(Z \mid U^{(m)}) = d_m \log p + (L - d_m)\log(1-p) = -d_m \log\!\left(\frac{1-p}{p}\right) + L\log(1-p)$$
ML in Binary Symmetric Channels

• The ML value for each possible U^(m) is then

$$\log P(Z \mid U^{(m)}) = -d_m \log\!\left(\frac{1-p}{p}\right) + L\log(1-p)$$

– The constant L·log(1-p) is identical for all possible U and can be pre-computed
– The log of the probability ratio is also a constant, so

$$\log P(Z \mid U^{(m)}) = -d_m \cdot A + B$$

• Overall, we are looking for the possible sequence with minimum Hamming distance.
– for hard decisions, we use the Hamming distance
– for soft decisions, we can use the “certainty values” shown in the previous figure!
Viterbi Decoding

• The previous slide suggested that all possible U should be checked to determine the minimum value (or maximum likelihood).
– If we compute the “metrics” for each U as they arrive, the trellis structure can reduce the number of computations that must be performed.
– For a 2^(K-1)-state trellis, only that number of possible U paths needs to be considered.
• Each trellis state has two arriving paths. If we compute path values for each one, only the smallest needs to be maintained.
• The larger can never become smaller as more n-tuples arrive!
• Therefore, only 2^(K-1) surviving paths, vs. 2^L possible paths for U, must be considered!
Viterbi Decoder Trellis

• Decoder trellis with Hamming distances shown for each of the possible paths from “state to state”.

(figure: decoder trellis, with the encoder trellis shown for reference)
Viterbi Example

• m: 1 1 0 1 1
• U: 11 01 01 00 01
• Z: 11 01 01 10 01

(figure: merging paths in the decoder trellis)
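A one-call MATLAB sketch of this example (Communications Toolbox assumed); 'trunc' is used because Z carries no tail bits, and the traceback length 5 is simply the message length here:

t    = poly2trellis(3, [7 5]);
Z    = [1 1  0 1  0 1  1 0  0 1];        % received sequence from the slide
mhat = vitdec(Z, t, 5, 'trunc', 'hard')  % -> 1 1 0 1 1; the single code-bit error is corrected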
Add Compare Select
Viterbi Decoding Implementation
• Section 7.3.5.1, p. 406

(figure: possible state-to-state connections)

Add Compare Select
Viterbi Decoding Implementation
• State metric update based on new branch metric values
– Hard coding uses a bit-difference (Hamming) measure
– Soft coding uses rms distances between actual and expected branch values
– The minimum path value is maintained after comparing incoming paths.
– Paths that are not maintained are eliminated.
• When all remaining paths use the same branch, update the output sequence
• Path history does not have to go back to the beginning anymore … (a sketch of one ACS step follows)
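A minimal hand-rolled MATLAB sketch of one add-compare-select step for the (7,5), K=3 code with hard-decision Hamming branch metrics. The tables are derived from the encoder (state = two most recent input bits, newest first); all names are illustrative:

nextState = [1 3; 1 3; 2 4; 2 4];        % next state (1-based) for inputs 0,1
outs = cat(3, [0 0; 1 1; 1 0; 0 1], ...  % n-tuple output for input 0, per state
              [1 1; 0 0; 0 1; 1 0]);     % n-tuple output for input 1, per state
pm = [0; inf; inf; inf];                 % path metrics: start in state 00
r  = [1 1];                              % one received n-tuple
newPM = inf(4,1);
for s = 1:4
  for b = 1:2                            % candidate input bit 0 or 1
    bm   = sum(r ~= outs(s,:,b));        % Hamming branch metric
    cand = pm(s) + bm;                   % ADD
    ns   = nextState(s,b);
    if cand < newPM(ns), newPM(ns) = cand; end  % COMPARE and SELECT
  end
end
% repeating this per received n-tuple, and recording which branch won at each
% state, yields the survivor paths traced in the example slides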
MATLAB

• See doc comm.ViterbiDecoder
– the MATLAB equivalent to Dr. Bazuin’s trellis simulation structure
– Use:
t = poly2trellis(K, [g1 g2 g3])

t2 = poly2trellis(3, [7 7 5])
t2.outputs
t2.nextStates
MATLAB Simulations

• Communication Objects.
– see ViterbiComm directory for demos
– TCM Modulation
• comm.PSKTCMModulator
• comm.RectangularQAMTCMModulator
• comm.GeneralQAMTCMModulator
– Convolutional Coding
• comm.ConvolutionalEncoder
• comm.ViterbiDecoder (Hard and Soft)

• comm.TurboEncoder – available from MATLAB, no demo

Properties of Convolutional Codes

• Distance Properties
– If an all-zeros sequence is input and there is a bit error, how, and after how long, will the decoder return to the all-zeros path?
– Find the “minimum free distance”, d_f
• The number of code bit errors required before returning
• Note that this is counted in code bits, not time steps and not states moved through
• This determines the error correction capability:

$$t = \left\lfloor \frac{d_f - 1}{2} \right\rfloor$$

– Systematic and non-systematic codes
• For linear block codes, any non-systematic code can be transformed into a systematic code (structure with I and data in columns)
• This is not true for convolutional codes. Convolutional codes focus on free distance; making them systematic would reduce the distance!
Catastrophic Error Propagation

• A necessary and sufficient condition for catastrophic error propagation is that the generator polynomials have a common factor.
– Catastrophic error propagation is when a finite number of code symbol errors can generate an infinite number of decoded data bit errors.
– See Section 7.4.3 and p. 414
Computing Distance Caused by a One

• Split the state diagram to start at the all-zeros state and end at the all-zeros state
– Show state transitions with the following notation
– D: code bit errors for a path
• Define the state equations using the state diagram
– Determine the result with the smallest power of D and interpret
– See Figure 7.17 and p. 411
Computing Distance, Number of Branches,
and Branch Transition caused by a One
• Split the state diagram to start at the all-zeros state and end at the all-zeros state
– Show state transitions with the following notations
– D: code bit errors for a path
– L: one factor for every branch
– N: one factor for every branch taken due to a “1” input
• Define the state equations using the state diagram
– Determine the result with the smallest power of D and interpret
– See Figure 7.18 and p. 412
Computing Distance, Number of Branches,
and Branch Transition caused by a One

Interpretation:
N = 1: one branch transition caused by a 1 input
L = 3: three branches taken (path length counter)
D = 5: five 1-outputs occur (the Hamming distance of the error path)
Performance Bounds
• Upper bound on bit error probability:

$$P_B \le \frac{dT(D,N)}{dN}\bigg|_{N=1,\; D=2\sqrt{p(1-p)}}$$

– for Figure 7.18 and Eq. 7.15 on p. 412:

$$T(D,N) = \frac{D^5 N}{1 - 2DN}$$

$$\frac{dT(D,N)}{dN} = \frac{(1-2DN)\,D^5 + 2DN \cdot D^5}{(1-2DN)^2} = \frac{D^5}{(1-2DN)^2}$$

$$P_B \le \frac{D^5}{(1-2DN)^2}\bigg|_{N=1,\; D=2\sqrt{p(1-p)}} = \frac{\bigl(2\sqrt{p(1-p)}\bigr)^5}{\bigl(1-4\sqrt{p(1-p)}\bigr)^2}$$
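A two-line MATLAB sketch evaluating this hard-decision bound; the crossover probabilities are illustrative values:

p  = [1e-2 1e-3 1e-4];        % assumed example channel error rates
D  = 2*sqrt(p.*(1-p));
PB = D.^5 ./ (1 - 2*D).^2     % upper bound on the bit error probability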
Performance Bounds
• For

$$\frac{E_c}{N_0} = r\,\frac{E_b}{N_0} = \frac{k}{n}\,\frac{E_b}{N_0}$$

• The bound becomes

$$P_B \le Q\!\left(\sqrt{\frac{5E_b}{N_0}}\right)\exp\!\left(\frac{5E_b}{2N_0}\right)\cdot\frac{D^5}{(1-2D)^2}\bigg|_{D=\exp(-E_b/(2N_0))} = \frac{Q\!\left(\sqrt{5E_b/N_0}\right)}{\bigl(1-2\exp(-E_b/(2N_0))\bigr)^2}$$
Coding Gain Bounds
• From Eq. 6.19:

$$G\,[\text{dB}] = \left(\frac{E_b}{N_0}\right)_{\text{uncoded}}[\text{dB}] - \left(\frac{E_b}{N_0}\right)_{\text{coded}}[\text{dB}], \quad \text{for } P_b = \text{same value}$$

• This is bounded by the 10·log10 of the code rate times the minimum free distance:

$$G\,[\text{dB}] \le 10\log_{10}(r \cdot d_f)$$

– e.g., r = 1/2 and d_f = 5 bound the gain at 10·log10(2.5), about 4 dB.
• Coding gains are shown in Tables 7.2 and 7.3, p. 417
Proakis Error Bounds (1)

• The sequence error probability is bounded in terms of the transfer function:

$$P_e < T(Z)\big|_{Z=\Delta}, \qquad \Delta = \sum_{y} \sqrt{p(y \mid 0)\,p(y \mid 1)}$$

$$P_e < \sum_{d=d_{\text{free}}}^{\infty} a_d\,\Delta^d$$

– and for convolutional codes, based on the Chap 7 derivation (7.2):

$$P_b < \frac{1}{k}\,\frac{\partial T(Y,Z)}{\partial Y}\bigg|_{Y=1,\; Z=\Delta}$$

– for soft decisions: $\Delta = \exp(-R_c \cdot \gamma_b)$
– for hard decisions: $\Delta = 2\sqrt{p(1-p)}$

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Proakis Error Bounds (2)

• Additional error bounds and computations
– Hard-decision pairwise error probabilities (pp. 514-515):

for d odd:

$$P_2(d) = \sum_{k=(d+1)/2}^{d} \binom{d}{k} p^k (1-p)^{d-k}$$

for d even:

$$P_2(d) = \frac{1}{2}\binom{d}{d/2}\,p^{d/2}(1-p)^{d/2} + \sum_{k=d/2+1}^{d} \binom{d}{k} p^k (1-p)^{d-k}$$

$$P_e \le \sum_{d=d_{\text{free}}}^{\infty} a_d\,P_2(d)$$

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Soft Decision Viterbi

• The previous example used fractional “soft values”
– See the Viterbi example slides online
• For digital processing hardware: use integer values and map the observed code symbols to the maximum integer values
– For 0, 1 in an 8-level system use 0, 7
– Compute distances as the rms value from the desired received code to the observed received code.
• Note that only 4 values need to be computed to define all branch metric values (a sketch follows)
• Example: see Fig. 7.22; compute distances for (0,0) and (7,7), then for (0,7) and (7,0), and you have all 4!
– Apply computations and comparisons as done before.
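A minimal MATLAB sketch of these soft branch metrics with 3-bit (8-level) quantization; squared Euclidean distance is used here as a monotonically equivalent stand-in for the rms distance, and the received pair is an illustrative value:

r        = [6 1];                    % received pair, quantized to 0..7 (assumed)
expected = [0 0; 0 7; 7 0; 7 7];     % ideal levels for n-tuples 00, 01, 10, 11
bm       = sum((expected - r).^2, 2) % one metric per branch label
% only these four values are needed; every trellis branch reuses one of them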
Other Decoding Methods

Viterbi and the trellis are not the only way …

Section 8.5 of Proakis:


• Sequential Decoding Algorithm, p. 525
• Stack Algorithm, p. 528
• Feedback Decoding, p. 529

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Sequential Decoding

• Existed prior to Viterbi
• Generate a hypothesis about the transmitted sequence
– Compute a metric between the hypothesis and the received signal
– If the metric indicates reasonable agreement, go forward; otherwise go backward, change the hypothesis, and keep trying.
– See Section 7.5 and pp. 422-425.

• Complexity
– Viterbi grows exponentially with constraint length
– Sequential is independent of the constraint length
• Can have buffer memory problems at low SNR (many trials)
Feedback Decoding

• Use a look-ahead approach to determine the minimum “future” Hamming distance
– The look-ahead length, L, covers received code symbols forward in time
– Compare look-ahead paths for minimum Hamming distance and take the tree branch that contains the minimum value
• Section 7.5.3, pp. 427-429

• Called a feedback decoder because detection decisions are required as feedback to compute the next set of code paths to search for a minimum.
References
• http://home.netcom.com/~chip.f/viterbi/tutorial.html
• http://www.eccpage.com/

• B. Sklar, "How I learned to love the trellis," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 87-102, May 2003.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1203212
Practical Considerations (1)

• Convolutional codes are widely used in many practical applications of communication system design.
– The choice of constraint length is dictated by the desired coding gain.
– Viterbi decoding is predominantly used for short constraint lengths (K ≤ 10)
– Sequential decoding is used for long-constraint-length codes, where the complexity of Viterbi decoding becomes prohibitive.

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Practical Considerations (2)
• Two important issues in the implementation of Viterbi decoding are
1. The effect of path memory truncation, which is a desirable feature that ensures a fixed decoding delay.
2. The degree of quantization of the input signal to the Viterbi decoder.
• As a rule of thumb, path memory truncation to about five constraint lengths has been found to result in negligible performance loss.
• In addition to path memory truncation, the computations were performed with eight-level (three-bit) quantized input signals from the demodulator.

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Practical Considerations (3)

• Figure 8.6-2 illustrates the performance obtained by simulation for rate 1/2, constraint-length K = 3, 5, and 7 codes with a memory path length of 32 bits.
• The broken curves are performance results obtained from the upper bound on the bit error rate given by Equation 8.2-12.
• Note that the simulation results are close to the theoretical upper bounds, which indicates that the degradation due to path memory truncation and quantization of the input signal has a minor effect on performance (0.20-0.30 dB).

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
MATLAB

• ViterbiComm
– poly2trellis: define the convolutional code and trellis to be used
– istrellis: check that the trellis structure is valid
– distspec: computes the free distance and the first N components of the weight and distance spectra of a linear convolutional code (a sketch follows)
– comm.ConvolutionalEncoder
– quantiz: a quantization index and a quantized output value, allowing either a hard or soft output value
– comm.ViterbiDecoder: either hard or soft decoding
– bercoding

– Viterbi_Hard.m
– Viterbi_Soft.m
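A short MATLAB sketch with the listed functions (Communications Toolbox assumed), computing the free distance of the K=3 (7,5) code used throughout:

t     = poly2trellis(3, [7 5]);
spect = distspec(t, 4);   % free distance plus first 4 spectrum components
spect.dfree               % expect 5 for this code, matching the d_f = 5 slides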
Supplemental Information

• John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. Chapter 8. ISBN: 978-0-07-295716-6.
Figure 8.1-2

• Convolutional Code (3,1): rate 1/3, n = 3, k = 1, K = 3
• Generator Polynomials
– G1 = 1
– G2 = 1 + D^2
– G3 = 1 + D + D^2

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Polynomial Representation
Example 8.1-1

• Generator Polynomials (also represented in octal)
– G1 = 1
– G2 = 1 + D^2
– G3 = 1 + D + D^2
• A binary input is assumed.
• There are three generator polynomials, therefore n = 3
• Polynomial multiplication can be used to generate the output sequences for u = (1 0 0 1 1 1):
• c1 = u(D)·g1(D) = (1 + D^3 + D^4 + D^5)·(1)
• c2 = u(D)·g2(D) = (1 + D^3 + D^4 + D^5)·(1 + D^2)
• c3 = u(D)·g3(D) = (1 + D^3 + D^4 + D^5)·(1 + D + D^2)

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Polynomial Computation
Example 8.1-1

• EXAMPLE 8.1-1. Let the sequence u = (100111) be the input sequence to the convolutional encoder shown in Figure 8.1-2.

• 6 bits in, plus 2 zero tail bits (to flush the memory)
• 8 × 3 = 24-bit output sequence (a sketch follows)

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
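A MATLAB sketch of this example (Communications Toolbox assumed); the generators 1, 1 + D^2, and 1 + D + D^2 are 4, 5, 7 in octal:

t = poly2trellis(3, [4 5 7]);
u = [1 0 0 1 1 1];
c = convenc([u 0 0], t);   % 2 tail zeros -> length(c) = 8*3 = 24 code bits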
Decoding Convolutional Codes

• As the codes have memory, we wish to use a decoder that achieves the minimum probability of error … using a condition called maximum likelihood.

• But first there is the Transfer Function of a Convolutional Code

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Transfer Function Example (1)

• the textbook outlines a procedure for “tracing” from an “input” back to the “output”
– moving from state a back to state a!

• ignore the trivial path (a to a)
• describe each path transition by its number of output “ones”
• determine the state equations

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Transfer Function Example (2)

• The transfer function for the code is defined as T(Z) = Xe/Xa.
• By solving the state equations given above, we obtain the transfer function in closed form.

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Transfer Function Example (3)

• The transfer function for this code indicates that there is a single path of Hamming distance d = 6 from the all-zero path that merges with the all-zero path at a given node.
– The transfer function defines how many “ones” will be output based on 1 or more errors in decoding.
– If all 0’s are transmitted and a “state error” occurs, there will be 6 ones transmitted before returning to the “base”/correct state, and a cycle consisting of acba will have to be completed.
– Such a path is called a first event error and is used to bound the error probability of convolutional codes
• The transfer function T(Z) introduced above is similar to the weight enumeration function (WEF) A(Z) for block codes introduced in Chapter 7.
Augmenting the Transfer Function (1)
• The transfer function can be used to provide more detailed information than just the distance of the various paths.
– Suppose we introduce a factor Y into all branch transitions caused by the input bit 1. Thus, as each branch is traversed, the cumulative exponent on Y increases by 1 only if that branch transition is due to an input bit 1.
– Furthermore, we introduce a factor of J into each branch of the state diagram so that the exponent of J will serve as a counting variable to indicate the number of branches in any given path from node a to node e.

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Augmenting the Transfer Function (2)
• This form of the transfer function gives the properties of all the paths in the convolutional code.
– That is, the first term in the expansion of T(Y, Z, J) indicates that the distance d = 6 path is of length 3 and that, of its three information bits, one is a 1.
– The second and third terms in the expansion of T(Y, Z, J) indicate that of the two d = 8 terms, one is of length 4 and the second has length 5.

John G. Proakis, Digital Communications, 5th ed., McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
References (Conv. Codes)
• K. Larsen, "Short convolutional codes with maximal free distance for rates 1/2, 1/3, and 1/4 (Corresp.)," IEEE Transactions on Information Theory, vol. 19, no. 3, pp. 371-372, May 1973.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1055014
• E. Paaske, "Short binary convolutional codes with maximal free distance for rates 2/3 and 3/4 (Corresp.)," IEEE Transactions on Information Theory, vol. 20, no. 5, pp. 683-689, Sep 1974.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1055264
• J. Conan, "The Weight Spectra of Some Short Low-Rate Convolutional Codes," IEEE Transactions on Communications, vol. 32, no. 9, pp. 1050-1053, Sep 1984.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1096180
• Jinn-Ja Chang, Der-June Hwang and Mao-Chao Lin, "Some extended results on the search for good convolutional codes," IEEE Transactions on Information Theory, vol. 43, no. 5, pp. 1682-1697, Sep 1997.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=623175
