Processing Gain vs. Spreading Gain in CDMA
This page explains the difference between processing gain and spreading gain, terms commonly used in the context of CDMA (Code Division Multiple Access) technology. We’ll also cover the concept of the spreading factor. As you likely know, CDMA transmitters and receivers rely on identical PN (Pseudo-Noise) sequences for data retrieval. The signal is spread in the transmitter and de-spread in the receiver. Unwanted signals remain spread and act like noise. This de-spreading process strengthens the desired signal relative to the undesired ones.
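The spread/de-spread round trip described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full CDMA link: the spreading factor of 8 and the random PN sequence are arbitrary choices for the demo, and bits are represented as +/-1 symbols.

```python
import numpy as np

rng = np.random.default_rng(0)

sf = 8                                    # chips per bit (illustrative choice)
bits = rng.integers(0, 2, 4) * 2 - 1      # data bits as +/-1 symbols
pn = rng.integers(0, 2, sf) * 2 - 1       # PN sequence as +/-1 chips

# Spread: each data bit is multiplied by the full PN sequence.
tx = np.repeat(bits, sf) * np.tile(pn, len(bits))

# De-spread: multiply by the SAME PN sequence, then integrate over each bit.
rx = (tx * np.tile(pn, len(bits))).reshape(len(bits), sf).mean(axis=1)
recovered = np.sign(rx).astype(int)

assert np.array_equal(recovered, bits)    # original bits come back intact
```

De-spreading with a different PN sequence would leave the signal spread, which is exactly why an unwanted user's signal looks like low-level noise after the correlator.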
Processing Gain
In Direct-Sequence Spread Spectrum (DSSS), after the spreading process, bits are referred to as “chips.”
Figure 1: Illustration of Chip Rate in DSSS
As shown in the figure above, Tb represents one bit period and Tc represents one chip period. The chip rate, Rc = 1/Tc, characterizes this spread spectrum transmission system. The processing gain (PG) is defined as the ratio of the information bit duration to the chip duration:

PG = Tb / Tc = Rc / Rb
This is also known as the spreading factor. In simpler terms, it represents the number of chips used to represent a single data bit. More generally, processing gain can be defined as the ratio of the signal-to-noise ratio (SNR) at the output to the SNR at the input of the de-spreading process.
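As a worked example of the ratio defined above, the snippet below computes the processing gain for an IS-95-style system (chip rate 1.2288 Mcps, bit rate 9.6 kbps) and expresses it in dB. The specific rates are illustrative; the article itself does not fix any numbers.

```python
import math

bit_rate = 9_600        # Rb: information bits per second (illustrative)
chip_rate = 1_228_800   # Rc: chips per second (IS-95-style value)

# Processing gain as a ratio, and in dB.
pg = chip_rate / bit_rate
pg_db = 10 * math.log10(pg)

print(pg)               # 128.0 chips per bit
print(round(pg_db, 1))  # 21.1 dB
```

A PG of 128 means each information bit is represented by 128 chips, and the de-spreader improves the SNR of the desired signal by roughly 21 dB relative to signals spread with other codes.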
Spreading Factor vs. Spreading Gain
There’s actually no difference between spreading gain and processing gain. These terms are used interchangeably in the field of CDMA.
As mentioned above, the spreading factor is the ratio of the chip rate at the output to the information bit rate at the input of the spreading block within the CDMA transmitter.
A higher spreading factor is generally desirable: the signal energy is spread over a wider bandwidth relative to the data rate, and more codes can be accommodated within the same frequency channel.
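The trade-off between spreading factor and data rate for a fixed chip rate can be tabulated directly. The WCDMA-style chip rate of 3.84 Mcps is used here for illustration; the bit rates are arbitrary examples.

```python
chip_rate = 3_840_000   # Rc: WCDMA-style chip rate, 3.84 Mcps (illustrative)

# For a fixed chip rate, a lower data rate per code means a higher
# spreading factor: SF = Rc / Rb.
for bit_rate in (960_000, 240_000, 60_000, 15_000):
    sf = chip_rate // bit_rate
    print(f"Rb = {bit_rate:>7} bps -> SF = {sf}")
```

This prints spreading factors of 4, 16, 64, and 256: halving the data rate (per factor of four here) raises the spreading factor by the same factor, which is how low-rate channels gain room for more simultaneous codes.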