Channels

Introduction:

In the previous discussion of Hamming Distance and Linear Codes, we simplified the problem of transmission-error analysis by assuming that there is at most a one-bit error in the transmission of a given letter code. Probabilities really only played a role when we considered the table of letter transmission frequencies in our discussion of Huffman Codes.

Symbol : Probability (letter transmission frequencies)
A 0.086   B 0.014   C 0.028   D 0.038   E 0.130   F 0.029   G 0.020   H 0.053   I 0.063
J 0.001   K 0.004   L 0.034   M 0.025   N 0.071   O 0.080   P 0.020   Q 0.001   R 0.068
S 0.061   T 0.105   U 0.025   V 0.009   W 0.015   X 0.002   Y 0.020   Z 0.001


In the following discussion we will want to consider the table above as a row vector. Informally, the reason for this is that, assuming there are channel errors, associated with each letter, say F, there is a row vector of error probabilities, possibly different for different letters:

Received symbol : Probability (given that F was transmitted)
A 0.0015   B 0.0003   C 0.0012   D 0.0017   E 0.0032   F 0.9200   G 0.0022   H 0.0043   I 0.0013
J 0.0011   K 0.0024   L 0.0034   M 0.0025   N 0.0071   O 0.0080   P 0.0020   Q 0.0001   R 0.0068
S 0.0013   T 0.0011   U 0.0024   V 0.0034   W 0.0025   X 0.0071   Y 0.0080   Z 0.0020


where, for example, $P(A \mid F) = 0.0015$ is the probability that $A$ was received given that $F$ was transmitted. $P(F \mid F) = 0.92$ is the most probable.

By considering the 26 error row vectors as a 26 by 26 matrix, we compute $P(A \text{ received})$ by multiplying the "Transmission row vector" by the "Received A" column of the error matrix:

$P(A \text{ received}) = P(A)\,P(A \mid A) + P(B)\,P(A \mid B) + \cdots + P(Z)\,P(A \mid Z)$
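
As a quick numerical sketch of this computation (the three-letter alphabet, input probabilities, and error matrix below are made-up toy values, not the 26-letter tables above):

    # Toy 3-letter sketch (made-up numbers, not the 26-letter tables above).
    # P("A" received) is the dot product of the transmission row vector
    # with the "received A" column of the error matrix.

    letters = ["A", "B", "C"]
    p_sent = [0.5, 0.3, 0.2]            # P(letter transmitted)

    # error[i][j] = P(letters[j] received | letters[i] transmitted); rows sum to 1
    error = [
        [0.90, 0.06, 0.04],             # transmitted A
        [0.05, 0.90, 0.05],             # transmitted B
        [0.02, 0.08, 0.90],             # transmitted C
    ]

    # P(A received) = sum over x of P(x transmitted) * P(A received | x transmitted)
    p_received_A = sum(p_sent[i] * error[i][0] for i in range(len(letters)))
    print(p_received_A)                 # 0.5*0.90 + 0.3*0.05 + 0.2*0.02 = 0.469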

Notation:

We write $P(b \mid a)$ for the probability that $b$ is received given that $a$ was transmitted; each row of the error matrix above lists these probabilities for one transmitted letter $a$.

________________________________________________________________________________________________________________

The Setting

Definition: A Channel consists of:

  1. a finite input alphabet $A = \{a_1, a_2, \ldots, a_r\}$,

  2. a finite output alphabet $B = \{b_1, b_2, \ldots, b_s\}$, and

  3. an $r \times s$ channel matrix $M = (p_{ij})$, where $p_{ij} = P(b_j \text{ received} \mid a_i \text{ transmitted})$.

Note: Each row of $M$ is a probability distribution; that is, $p_{ij} \geq 0$ and $\sum_{j=1}^{s} p_{ij} = 1$ for every $i$.

___________________________________________________________________________________________________

Definition:

A Channel is called Symmetric if each row of its channel matrix $M$ contains the same entries, and each column contains the same entries, perhaps not in the same order.

An Aside: This is confusing terminology. We are saying the Channel is Symmetric, not the matrix (we are not claiming that $M = M^{T}$).
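
A small sketch of this row/column condition, for a channel matrix stored as a list of rows (the helper name is mine, purely illustrative):

    # Sketch: test the "Symmetric Channel" condition above -- every row has
    # the same multiset of entries, and every column has the same multiset
    # of entries, though perhaps in different orders.

    def is_symmetric_channel(matrix):
        rows = [sorted(row) for row in matrix]
        cols = [sorted(col) for col in zip(*matrix)]
        return all(r == rows[0] for r in rows) and all(c == cols[0] for c in cols)

    print(is_symmetric_channel([[0.9, 0.1], [0.1, 0.9]]))   # True
    print(is_symmetric_channel([[0.9, 0.1], [0.2, 0.8]]))   # False: rows differ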

______________________________________________________________________________________

Example - A Binary Symmetric Channel:

Figure: transition diagram of a Binary Symmetric Channel (each bit is received correctly with probability $1-p$ and flipped with probability $p$).

The input and output alphabets are both $\{0,1\}$.

$M = \begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}$
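
One way to picture the Binary Symmetric Channel is to simulate it; the sketch below sends a few bits through a channel with an assumed crossover probability of $0.1$ (the value is illustrative):

    # Sketch: simulate single uses of a Binary Symmetric Channel.
    # The crossover probability 0.1 is an illustrative choice.
    import random

    def bsc(bit, p=0.1):
        """Return the received bit: flipped with probability p, else unchanged."""
        return bit ^ 1 if random.random() < p else bit

    sent = [0, 1, 1, 0, 1]
    received = [bsc(b) for b in sent]
    print(sent)
    print(received)                     # usually equals sent; occasionally a bit flips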

Exercise: Suppose we are transmitting 2 bits in parallel down two independent Binary Symmetric Channels with

channel matrices $\begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}$ and $\begin{pmatrix} 1-q & q \\ q & 1-q \end{pmatrix}$; the input and output alphabets are both $\{00, 01, 10, 11\}$.

What is the channel matrix for this Channel? Is it a Symmetric Channel?
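
A possible starting point for the exercise (not a full solution): because the two channels act independently, $P(y_1 y_2 \text{ received} \mid x_1 x_2 \text{ sent}) = P(y_1 \mid x_1)\, P(y_2 \mid x_2)$, so the $4 \times 4$ matrix can be tabulated directly. The numbers below are illustrative choices of $p$ and $q$:

    # Starting point for the exercise (not a full solution).  Since the two
    # channels act independently, P(y1 y2 | x1 x2) = P(y1 | x1) * P(y2 | x2).
    # M1, M2 use illustrative values p = 0.1 and q = 0.2.

    M1 = [[0.9, 0.1], [0.1, 0.9]]       # first channel
    M2 = [[0.8, 0.2], [0.2, 0.8]]       # second channel

    alphabet = ["00", "01", "10", "11"]
    channel = [[M1[x1][y1] * M2[x2][y2]
                for y1 in (0, 1) for y2 in (0, 1)]
               for x1 in (0, 1) for x2 in (0, 1)]

    for label, row in zip(alphabet, channel):
        print(label, row)               # each row sums to 1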

________________________________________________________________________

The Output Vector:

Again, explicitly: for an input probability vector $p = (p_1, p_2, \ldots, p_r)$ with $\sum_{i=1}^{r} p_i = 1$, the output vector is

$q = pM$, that is, $q_j = \sum_{i=1}^{r} p_i\, p_{ij}$ for $j = 1, \ldots, s$, and $\sum_{j=1}^{s} q_j = 1$.
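
A sketch of the matrix-vector computation $q = pM$, reusing the made-up three-letter channel from the earlier sketch:

    # Sketch of q = p M, using the same made-up three-letter channel as before.

    p = [0.5, 0.3, 0.2]
    M = [[0.90, 0.06, 0.04],
         [0.05, 0.90, 0.05],
         [0.02, 0.08, 0.90]]

    q = [sum(p[i] * M[i][j] for i in range(len(p))) for j in range(len(M[0]))]
    print(q, sum(q))                    # q = [0.469, 0.316, 0.215] (up to rounding); sums to 1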

______________________________________________________________________________________

Derived Matrices:

The Joint Probability Distribution:

$J = (J_{ij})$, where

$J_{ij} = P(a_i \text{ transmitted and } b_j \text{ received}) = p_i\, p_{ij}$

$\sum_{j=1}^{s} J_{ij} = p_i$ (the row sums of $J$ recover the input vector)

$\sum_{i=1}^{r} J_{ij} = q_j$ (the column sums of $J$ give the output vector)

The Posterior Distribution

For a given vector of input probabilities, $p = (p_1, p_2, \ldots, p_r)$:

$Q = (Q_{ji})$, where

$Q_{ji} = P(a_i \text{ transmitted} \mid b_j \text{ received}) = \dfrac{p_i\, p_{ij}}{q_j} = \dfrac{J_{ij}}{q_j}$ (Bayes' rule).
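
A sketch computing both derived matrices from an input vector $p$ and channel matrix $M$ (same made-up three-letter numbers as before; the variable names J, q, and Q are mine):

    # Sketch: the Joint Probability Distribution J[i][j] = p_i * p_ij and the
    # Posterior Distribution Q[j][i] = p_i * p_ij / q_j, for the same made-up
    # three-letter channel.

    p = [0.5, 0.3, 0.2]
    M = [[0.90, 0.06, 0.04],
         [0.05, 0.90, 0.05],
         [0.02, 0.08, 0.90]]
    r = len(p)                          # size of the input alphabet
    s = len(M[0])                       # size of the output alphabet

    J = [[p[i] * M[i][j] for j in range(s)] for i in range(r)]      # joint
    q = [sum(J[i][j] for i in range(r)) for j in range(s)]          # output vector
    Q = [[J[i][j] / q[j] for i in range(r)] for j in range(s)]      # posterior

    print(q)
    print(Q[0])                         # P(a_i transmitted | first symbol received); sums to 1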

_____________________________________________________________________________________

Four Examples:

  1. $A = B$, and $p_{ij} = 1$ if $i = j$, $p_{ij} = 0$ if $i \neq j$ (the channel matrix is the identity matrix).

    No Noise; verify that the output vector equals the input vector ($q = p$) and that $Q_{ii} = 1$ whenever $p_i > 0$.

  2. $A = B$, and $p_{mj} = \dfrac{1}{s}$ for all $m$ (every row of the channel matrix is the same uniform distribution).

    All Noise; verify that $q_j = \dfrac{1}{s}$ for every $j$ and that $Q_{ji} = p_i$, so the output reveals nothing about the input.

  3. The Binary Symmetric Channel: letting $p = (p_0, p_1)$ be the vector of input probabilities, compute the Joint Probability Distribution and the Posterior Distribution (one worked set of numbers appears in the sketch after this list).

  4. Exercise: Again letting $p = (p_0, p_1)$ be the vector of input probabilities, compute the Joint Probability Distribution

    and the Posterior Distribution for a "Binary Biased" Channel, defined by a matrix of the form $\begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}$ with $\alpha \neq \beta$ (so the Channel is not Symmetric).
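
For item 3, here is one worked set of numbers; the input distribution $(0.7, 0.3)$ and crossover probability $0.1$ are illustrative choices, not values fixed by the notes:

    # One worked set of numbers for item 3 (illustrative input distribution
    # and crossover probability, results shown up to floating-point rounding).

    p_in = [0.7, 0.3]                   # P(0 transmitted), P(1 transmitted)
    eps = 0.1
    M = [[1 - eps, eps],
         [eps, 1 - eps]]

    J = [[p_in[i] * M[i][j] for j in (0, 1)] for i in (0, 1)]       # joint
    q = [J[0][j] + J[1][j] for j in (0, 1)]                         # output vector
    Q = [[J[i][j] / q[j] for i in (0, 1)] for j in (0, 1)]          # posterior

    print(J)                            # [[0.63, 0.07], [0.03, 0.27]]
    print(q)                            # [0.66, 0.34]
    print(Q)                            # [[0.954..., 0.045...], [0.205..., 0.794...]]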

_______________________________________________________________________

Decoding Strategies:

Definition: Given a Channel with input alphabet $A$ and output alphabet $B$, a Decoding Strategy is a function $\Delta : B \rightarrow A$, to be read, "Given that $b_j$ is the output, $\Delta(b_j)$ was the input."
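
As a sketch, a Decoding Strategy for the Binary Symmetric Channel can be written as a lookup table from outputs to inputs; with the illustrative numbers used in the sketch above, decoding each output as its most probable input (largest posterior) happens to give the identity map:

    # Sketch: a Decoding Strategy as a lookup table from the output alphabet
    # back to the input alphabet.  For the illustrative Binary Symmetric
    # Channel numbers above, choosing the input with the largest posterior
    # for each output gives the identity map.

    decode = {0: 0, 1: 1}               # Delta(b): "given that b is the output, decode[b] was the input"

    received = [0, 1, 1, 0]
    decoded = [decode[b] for b in received]
    print(decoded)                      # [0, 1, 1, 0]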