Channel capacity




One of the most famous of all results of information theory is Shannon's channel coding theorem: for a given channel there exists a code that will permit error-free transmission across the channel at a rate R, provided R ≤ C, where C is the channel capacity. Equality is achieved only when the SNR is infinite.

As we have already noted, the astonishing part of this theory is the existence of a channel capacity. Shannon's theorem is both tantalising and frustrating. It offers error-free transmission, but it makes no statement as to what code is required. In fact, all we may deduce from the proof of the theorem is that it must be a long one. No one has yet found a code that permits the use of a channel at its capacity. However, Shannon has thrown down the gauntlet, in as much as he has proved that the code exists.

We shall not give a description of how the capacity is calculated. However, an example is instructive. The binary channel (BC) is a channel with a binary input and output. Associated with each output is a probability p that the output is correct, and a probability (1 - p) that it is not. For such a channel, the channel capacity turns out to be:

C = 1 + p \log_2 p + (1 - p) \log_2 (1 - p)

Here p is the bit error probability (see the figure below). If p = 0 then C = 1. If p = 1/2 then C = 0. Thus, if there is equal probability of receiving a 1 or 0, irrespective of the signal sent, the channel is completely unreliable and no message can be sent across it.

 
Figure: The capacity of a binary channel (C against p)
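
To check these limiting values numerically, here is a minimal Python sketch of the capacity expression above (the function name and the sample values of p are illustrative only):

    import math

    def binary_channel_capacity(p):
        # Capacity in bits per channel use for bit error probability p.
        # The end points p = 0 and p = 1 are handled directly, since
        # p*log2(p) -> 0 as p -> 0.
        if p in (0.0, 1.0):
            return 1.0
        return 1.0 + p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p)

    for p in (0.0, 0.1, 0.25, 0.5):
        print(f"p = {p:<4}  C = {binary_channel_capacity(p):.4f}")
    # p = 0 gives C = 1; p = 0.5 gives C = 0, as described above.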

So defined, the channel capacity is a non-dimensional number. We normally quote the capacity as a rate, in bits/second. To do this we relate each output to a change in the signal. For a channel of bandwidth B, we can transmit at most 2B changes per second. Thus, the capacity in bits/second is 2BC. For the binary channel, we have:

C = 2B \left[ 1 + p \log_2 p + (1 - p) \log_2 (1 - p) \right] \mbox{ bits/second}
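
As an illustration with assumed numbers (not taken from these notes): for a channel of bandwidth B = 3000 Hz and bit error probability p = 0.01, the expression above gives C ≈ 0.92 bits per signalling change, so the capacity is roughly 2 × 3000 × 0.92 ≈ 5500 bits/second, out of the 2B = 6000 changes per second available.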

For the binary channel the maximum bit rate W is 2B. We note that C < W, i.e. the capacity is always less than the bit rate. The data rate D, or information rate, describes the rate of transfer of data bits across the channel. In theory, we have:

D \le C

As a matter of practical fact:

D < C

Shannon's channel coding theorem applies to the channel, not to the source. If the source is optimally coded, we can rephrase the channel coding theorem: a source of information with entropy H can be transmitted error-free over a channel provided H ≤ C.

All the modulations described earlier are binary channels. For equal BER, all these schemes have the same capacity. We have noted, however, that QPSK uses only half the bandwidth of PSK for the same bit rate. We might suppose that, for the same bandwidth, QPSK would have twice the capacity. This is not so. We have noted that PSK modulation is far from optimum in terms of bandwidth use; QPSK makes better use of the bandwidth. The increase in bit rate provided by QPSK does not reflect an increase in capacity, merely a better use of bandwidth.

The capacity of the binary channel is much less than that calculated from the Hartley-Shannon law. Why so? The answer is that the Hartley-Shannon law applies to systems whose outputs may take any value. We use binary systems, whose capacity is given by the expression above, because they are technically convenient, not because they are desirable.
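
As a rough numerical comparison, assuming a 3000 Hz channel and a signal-to-noise ratio of 30 dB (illustrative values, not taken from these notes), a short Python sketch:

    import math

    B = 3000.0    # bandwidth in Hz (assumed for illustration)
    snr = 1000.0  # signal-to-noise ratio, i.e. 30 dB (assumed for illustration)

    # Hartley-Shannon law: the channel output may take any value.
    c_continuous = B * math.log2(1.0 + snr)

    # Binary channel: even with p = 0, the capacity cannot exceed the bit rate 2B.
    c_binary_max = 2.0 * B

    print(f"Hartley-Shannon capacity:        {c_continuous:.0f} bits/second")
    print(f"Binary channel capacity (p = 0): {c_binary_max:.0f} bits/second")
    # Roughly 29900 bits/second against 6000 bits/second.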






Saleem Bhatti
Tue Mar 7 14:17:59 GMT 1995