In telephony systems, two companding (compression/expansion) laws are used to reduce quantization noise in speech coding. Both are approximations of the logarithmic compression curve:
y = 1 + ln(x)/k
The size of the quantization step increases with the input signal level: small amplitudes are coded with finer granularity (more, smaller steps) than large amplitudes. The signal-to-quantization-noise ratio becomes constant because every signal level is coded with the same relative precision (fig. 1). Since ln(x) tends to minus infinity as x --> 0, this function cannot be used for small signal levels, so an approximation of the curve is necessary there:
Fig. 1 - Signal-to-noise ratio with logarithmic compression.
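To make the constant-SNR property concrete, here is a minimal numerical sketch (Python). Everything in it beyond the curve y = 1 + ln(x)/k is an assumption chosen for illustration: the scaling constant k, the 8-bit uniform quantizer applied to the compressed value, the sine test signal, and the small floor used to avoid ln(0). It compresses, quantizes uniformly, expands back, and measures the SNR for several input amplitudes; the values come out nearly equal, as sketched in fig. 1. The two approximations actually used in telephony are listed below.

import numpy as np

# Numerical check of the constant-SNR property (illustrative values only):
# compress with y = 1 + ln(x)/k, quantize y uniformly, expand back, measure SNR.
k = np.log(87.6)   # assumed scaling constant (chosen here so that y(1/87.6) = 0)
levels = 256       # assumed 8-bit uniform quantizer on the compressed axis

def compress(x):
    """Logarithmic compression y = 1 + ln(x)/k (defined for x > 0 only)."""
    return 1.0 + np.log(x) / k

def expand(y):
    """Inverse (expansion) curve x = exp(k * (y - 1))."""
    return np.exp(k * (y - 1.0))

def snr_db(amplitude, n=100_000):
    """SNR after compress -> uniform quantize -> expand, for a rectified sine."""
    x = amplitude * np.abs(np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False)))
    x = np.clip(x, 1e-6, 1.0)              # sign handled separately in a real codec;
                                           # the small floor avoids ln(0)
    y = compress(x)
    yq = np.round(y * levels) / levels     # uniform quantization of the compressed value
    noise = x - expand(yq)
    return 10 * np.log10(np.sum(x**2) / np.sum(noise**2))

for a in (1.0, 0.1, 0.01):
    print(f"amplitude {a:>5}: SNR = {snr_db(a):.1f} dB")
# The three figures come out nearly equal (~46 dB here), whereas a plain
# uniform quantizer would lose about 20 dB per decade of input amplitude.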
- for the A-law, the curve is approximated by its tangent at the origin, so the quantization is linear for small signals. The slope of this linear part, called the compression rate, has been fixed to 16 and is given by:
C = A/(1 + ln(A)) = 16
which gives A = 87.6;
- for the Mu-law, the function used is quasi-linear for small x and quasi-logarithmic for large x. The compression rate near the origin is given by:
C = Mu/ln(1 + Mu)
which, for the standard value Mu = 255, gives C of about 46 (both slopes are checked numerically in the sketch after this list).
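A second minimal sketch, this time of the two compression curves themselves. The constants A = 87.6 and Mu = 255 are the standard values quoted above; the function names, the normalized input range 0 <= x <= 1, the omission of sign handling, and the finite-difference slope estimate are assumptions made for illustration.

import math

A = 87.6     # A-law parameter (value derived above)
MU = 255.0   # Mu-law parameter (standard value)

def a_law_compress(x):
    """A-law: linear below 1/A (the tangent approximation), logarithmic above."""
    if x < 1.0 / A:
        return A * x / (1.0 + math.log(A))
    return (1.0 + math.log(A * x)) / (1.0 + math.log(A))

def mu_law_compress(x):
    """Mu-law: a single curve, quasi-linear for small x, quasi-logarithmic for large x."""
    return math.log(1.0 + MU * x) / math.log(1.0 + MU)

# Compression rate near the origin, estimated as a finite-difference slope:
eps = 1e-9
print("A-law  slope at origin:", a_law_compress(eps) / eps)   # A/(1+ln A)  = 16
print("Mu-law slope at origin:", mu_law_compress(eps) / eps)  # Mu/ln(1+Mu) = 46 (approx.)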