Let us generate two different noises: a uniform noise and a Gaussian noise.
The uniform noise is a signal composed of a sequence of samples that are independently generated according to a uniform distribution.
In Python one can use the command np.random.uniform(0,1,(M,N))
to generate an M times N matrix of values independently generated
according to a uniform distribution over the interval [0,1].
In order to change the interval from [0,1] to [a,b], one can use the command
a+(b-a)*np.random.uniform(0,1,(M,N))
The Gaussian noise is a signal composed of a sequence of samples that are independently generated according to a Gaussian distribution. In Python one can use the command np.random.normal(0,1,(M,N)) to generate an M times N matrix of samples drawn from a standard Gaussian distribution (zero mean, unit standard deviation).
In order to change the mean from 0 to a value m and the standard deviation from 1 to a value sigma, one can use the command
m+sigma*np.random.normal(0,1,(M,N))
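As a small illustration (a minimal sketch; M, N, m and sigma below are placeholder values), the affine transform above is equivalent to passing the mean and standard deviation directly to np.random.normal:

import numpy as np
M, N = 4, 5          # placeholder matrix size
m, sigma = 2.0, 0.5  # placeholder mean and standard deviation
# Affine transform of standard Gaussian samples...
g1 = m + sigma*np.random.normal(0,1,(M,N))
# ...or, equivalently, parameterize the distribution directly
g2 = np.random.normal(m,sigma,(M,N))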
Let us now generate a uniform noise over the interval [-1.7,1.7] and a centered Gaussian noise with unit standard deviation.
import numpy as np
N = 10000
noise1 = -1.7 + 3.4*np.random.uniform(0,1,(N,)) # Generate a uniform noise over [-1.7,1.7]
noise2 = np.random.normal(0,1,(N,)) # Generate a centered Gaussian noise with unit standard deviation
from IPython.display import Audio # IPython's audio playback widget
print('Noise 1')
Audio(noise1, rate=44100)
print('Noise 2')
Audio(noise2, rate=44100)
Do they sound different?
Let us play them. Do you hear any difference? Not really...
Do they look different?
Let us plot the first 100 samples.
%pylab inline
N = 100
import numpy as np
noise_1 = -1.7 + 3.4*np.random.uniform(0,1,(N,)) # Generate a uniform noise over [-1.7,1.7]
noise_2 = np.random.normal(0,1,(N,))
subplot(2,1,1)
stem(np.arange(noise_1.size),noise_1)
ylabel('noise1')
subplot(2,1,2)
stem(np.arange(noise_2.size),noise_2)
xlabel('Samples')
ylabel('noise2')
show()
Do you see a difference? Yes, a bit (the amplitudes differ), but it is not too evident.
Do they have different power spectral densities?
We compute the power spectral density as the average of the squared magnitude of the DFT / DFS, divided by the signal length... but to compute the average we need to generate several realizations of the noise! Let us generate 5000 realizations of each noise (in matrix form, one realization per row).
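In formulas (using $X_m[k]$ for the DFT / DFS of the $m$-th realization of length $N$, and $M$ for the number of realizations), the estimate computed below is the averaged periodogram

$$\hat{S}[k] = \frac{1}{M} \sum_{m=1}^{M} \frac{\left|X_m[k]\right|^2}{N}.$$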
N = 10000 # Number of samples of the noise
M = 5000 # Number of realizations
nnoise1 = -1.7 + 3.4*np.random.uniform(0,1,(M,N)) # each row is one realization of N samples of a uniform noise
nnoise2 = np.random.normal(0,1,(M,N)) # each row is one realization of N samples of a Gaussian noise
To compute the DFT / DFS of each realization (along each row), we use the Python command scipy.fftpack.fft(x). We then use the command np.mean(x,axis) to average x along the dimension axis.
from scipy import fftpack as f
Noise1 = f.fft(nnoise1,N) # fft of the uniform noise
Noise2 = f.fft(nnoise2,N) # fft of the Gaussian noise
squareMagNoise1 = pow(abs(Noise1),2) # Square magnitude of the DFT/DFS of the uniform noise
squareMagNoise2 = pow(abs(Noise2),2) # Square magnitude of the DFT/DFS of the Gaussian noise
psdNoise1 = np.mean(squareMagNoise1,0)/float(N)
psdNoise2 = np.mean(squareMagNoise2,0)/float(N)
Now let's plot the power spectral densities of the two noises, adjusting the axis limits for readability:
normFrequ = np.arange(1,psdNoise1.size+1,dtype=float)/float(psdNoise1.size)
subplot(2,1,1)
plot(normFrequ,psdNoise1)
ylim([0,1.4])
ylabel('psdNoise1')
subplot(2,1,2)
plot(normFrequ,psdNoise2)
ylim([0,1.4])
ylabel('psdNoise2')
xlabel('Normalized Frequencies')
show()
Once again, not a clear difference, except that the power spectral density of the uniform noise is slightly below 1. Actually, here's a good pen-and-paper exercise for you: compute the theoretical values of the power spectral densities and check that the one of the uniform noise we have defined is indeed slightly below 1, while the one of the Gaussian noise we have defined is exactly 1.
Both noises have the same form of power spectral density because they are sequences of independent and identically distributed samples, corresponding to a correlation function equal to a Kronecker delta. So, same type of correlation, same type of power spectral density! (As seen in the course, it can be proven that the power spectral density is the DFT / DFS of the correlation function.)
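As a hint for the exercise (standard results for zero-mean i.i.d. samples of variance $\sigma^2$): the correlation is $r_x[k] = \sigma^2\,\delta[k]$, so the power spectral density is flat and equal to $\sigma^2$. For a uniform distribution over $[a,b]$ this gives

$$\sigma^2 = \frac{(b-a)^2}{12} = \frac{3.4^2}{12} \approx 0.963 < 1,$$

while the Gaussian noise was generated directly with $\sigma^2 = 1$, in agreement with the plots above.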
Do they have different distributions?
Oh well, we do expect so since one is uniformly generated and the other is Gaussian!
To see how the samples are distributed we bin the values of the samples into equally spaced intervals using the function hist(x,K) from pylab (matplotlib.pyplot.hist), where $x$ is the vector of samples and $K$ denotes the number of intervals (default $K = 10$).
So let us do it!
N = 10000
noise1 = -1.7 +3.4*np.random.uniform(0,1,(N,)) # Generate a uniform noise
noise2 = np.random.normal(0,1,(N,)) # Generate a Gaussian noise
K = 50
#plotting done by using the pylab.hist function hist(noise,K)
subplot(2,1,1)
hist(noise1,K)
ylabel('Noise 1 Recurrences')
subplot(2,1,2)
hist(noise2,K)
ylabel('Noise 2 Recurrences')
xlabel('Values')
show()
As expected, we can see that the values of the first noise signal are uniformly distributed, while the values of the second signal are distributed according to a Gaussian function.
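As a quick sanity check (assuming hist's default bins, which span the range of the data): a uniform noise of $N = 10000$ samples split into $K = 50$ equally spaced bins should produce roughly $N/K = 200$ recurrences per bin, which is what the top histogram shows up to statistical fluctuations.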
Let's now apply the leaky integrator to the Gaussian noise and compute the power spectral density.
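As a reminder (matching the $\lambda$ computed in the code below), the leaky integrator is the first-order recursive filter

$$y[n] = \lambda\, y[n-1] + (1-\lambda)\, x[n], \qquad h[n] = (1-\lambda)\,\lambda^n \quad (n \ge 0),$$

whose infinite impulse response the code truncates to its first $L = 100$ samples.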
# Construct the impulse response of the system of length 100
K = 10 # Sets the leak factor: Lambda = (K-1)/K = 0.9
L = 100 # Length of the filter
Lambda = float(K-1)/float(K)
h = (1-Lambda) * pow(Lambda,np.arange(L))
# Compute the transfer function
from scipy import fftpack as f
H = f.fft(h)
squareMagH = pow(abs(H),2)
# Generate M realizations of a Gaussian white noise of N samples
N = 1000
M = 512
noise2 = np.random.normal(0,1,(M,N))
# Compute the power spectral density of the Gaussian noise
Noise2 = f.fft(noise2)
squareMagNoise2 = pow(abs(Noise2),2)
psdNoise2 = np.mean(squareMagNoise2,0)/float(N) # Psd Gaussian noise
# Convolve h with each row of noise2 (i.e., filter each realization of the noise with h)
yy = np.zeros((M,N-L+1))
from scipy import signal as s
for i in range(M):
    yy[i] = s.convolve(noise2[i],h,'valid')
# Perform the DFT / DFS
# Notice that yy is of length N-L+1 (the 'valid' part of the convolution)
YY = f.fft(yy,N-L+1)
# Take the square of the magnitude
squareMagYY = pow(abs(YY),2)
# and average to get the power spectral density
psdY = np.mean(squareMagYY,0)/float(N-L+1) # normalize by the actual DFT length N-L+1
subplot2grid((3,3), (0,0), colspan=3)
plot(np.arange(1,psdNoise2.size+1,dtype=float)/float(psdNoise2.size),psdNoise2)
ylabel('psdNoise2')
ylim([0,1.5])
subplot2grid((3,3), (1,0), colspan=3)
plot(np.arange(1,squareMagH.size+1,dtype=float)/float(squareMagH.size),squareMagH)
ylabel('squareMagH')
ylim([0,1.5])
subplot2grid((3,3), (2,0), colspan=3)
plot(np.arange(1,psdY.size+1,dtype=float)/float(psdY.size),psdY)
ylabel('psdY')
xlabel('Normalized Frequencies')
ylim([0,1.5])
show()
We can see how the power spectral density of the noise has been shaped by the squared magnitude of the transfer function of the leaky integrator (its DFT / DFS). Play the filtered output and you will hear that the noise has been low-pass filtered.
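This is the standard result for filtering a wide-sense stationary process: if a noise with power spectral density $S_x(e^{j\omega})$ is passed through a filter with transfer function $H(e^{j\omega})$, the output has power spectral density

$$S_y(e^{j\omega}) = |H(e^{j\omega})|^2\, S_x(e^{j\omega}),$$

and since our white Gaussian noise has $S_x = 1$, the middle and bottom plots above are, up to estimation noise, the same curve.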
Fs = 16000
length = 0.5
noise_signal = np.random.normal(0,1,int(length*Fs)) # the number of samples must be an integer
print('Original noise signal')
Audio(noise_signal, rate=Fs)
filtered_noise = np.convolve(noise_signal, h, 'valid')
print('Filtered noise signal')
Audio(filtered_noise, rate=Fs)
We now take a look at uniform quantization, starting with a two-period sawtooth between -1 and 1, quantized with 3 bits. In particular, we plot the original signal against the quantized one, together with the quantization error.
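For reference (a standard midpoint uniform quantizer over $[-1,1]$ with $B$ = numberBits bits and step $\Delta = 2^{1-B}$), each sample is replaced by the midpoint of the quantization cell it falls into:

$$\hat{x} = \Delta \left\lfloor \frac{x}{\Delta} \right\rfloor + \frac{\Delta}{2}, \qquad |x - \hat{x}| \le \frac{\Delta}{2}.$$

The floor-then-rescale operations in the code below implement this, with $1/\Delta = 2^{B-1}$ and half step $\Delta/2 = 2^{-B}$.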
numberBits = 3
# Generate a two period signal between -1 and 1
x = np.linspace(-98,100,100)/100
x = np.append(x,x)
# Map the signal onto 2^numberBits uniform cells of width 2^(1-numberBits)
xQuant = np.floor(x*pow(2,numberBits-1))
# And reconstruct each cell at its midpoint, back between -1 and 1
xQuant = xQuant/float(pow(2,numberBits-1)) + pow(2.,-numberBits)
# Compute the quantization error
errorQuant = x - xQuant
subplot(2,1,1)
plot(np.arange(xQuant.size),xQuant,'.')
plot(np.arange(x.size),x)
subplot(2,1,2)
plot(np.arange(errorQuant.size),errorQuant,'.')
xlabel('Samples')
ylabel('errorQuant')
show()
Of course we can reduce the number of bits, for instance to 2, obtaining
numberBits = 2
# Generate a two period signal between -1 and 1
x = np.linspace(-98,100,100)/100
x = np.append(x,x)
# Map the signal onto 2^numberBits uniform cells of width 2^(1-numberBits)
xQuant = np.floor(x*pow(2,numberBits-1))
# And reconstruct each cell at its midpoint, back between -1 and 1
xQuant = xQuant/float(pow(2,numberBits-1)) + pow(2.,-numberBits)
# Compute the quantization error
errorQuant = x - xQuant
subplot(2,1,1)
plot(np.arange(xQuant.size),xQuant,'.')
plot(np.arange(x.size),x)
subplot(2,1,2)
plot(np.arange(errorQuant.size),errorQuant,'.')
xlabel('Samples')
ylabel('errorQuant')
show()
Let us now see what happens if we quantize a sinusoid
numberBits = 3
import numpy as np
from scipy import constants as c
# Generate a two period sinusoid signal between -1 and 1
n = np.arange(1,101,dtype=float)/float(100)
x = np.sin(2*c.pi*n)
x = np.append(x,x)
# Map the signal onto 2^numberBits uniform cells of width 2^(1-numberBits)
xQuant = np.floor(x*pow(2,numberBits-1))
# And reconstruct each cell at its midpoint, back between -1 and 1
xQuant = xQuant/float(pow(2,numberBits-1)) + pow(2.,-numberBits)
errorQuant = x - xQuant
subplot(2,1,1)
plot(np.arange(xQuant.size),xQuant,'.')
plot(np.arange(x.size),x)
subplot(2,1,2)
plot(np.arange(errorQuant.size),errorQuant,'.')
xlabel('Samples')
ylabel('errorQuant')
show()
Using only 2 bits we get
numberBits = 2
import numpy as np
from scipy import constants as c
# Generate a two period sinusoid signal between -1 and 1
n = np.arange(1,101,dtype=float)/float(100)
x = np.sin(2*c.pi*n)
x = np.append(x,x)
# Map the signal onto 2^numberBits uniform cells of width 2^(1-numberBits)
xQuant = np.floor(x*pow(2,numberBits-1))
# And reconstruct each cell at its midpoint, back between -1 and 1
xQuant = xQuant/float(pow(2,numberBits-1)) + pow(2.,-numberBits)
errorQuant = x - xQuant
subplot(2,1,1)
plot(np.arange(xQuant.size),xQuant,'.')
plot(np.arange(x.size),x)
subplot(2,1,2)
plot(np.arange(errorQuant.size),errorQuant,'.')
xlabel('Samples')
ylabel('errorQuant')
show()
Can we hear the effect of the quantization? Let's try.
numberBits=3
# let's play a pure A
Fs = 16000
f = 440.
length = 0.5
# Generate a 0.5-second, 440 Hz sinusoid between -1 and 1 (220 periods)
x = np.sin(2*c.pi*f/Fs*np.arange(1,int(length*Fs),dtype=float))
# Map the shifted signal x+1 onto 2^numberBits uniform cells
xQuant = np.floor((x+1)*pow(2,numberBits-1))
# And reconstruct each cell at its midpoint, back between -1 and 1
xQuant = xQuant/float(pow(2,numberBits-1)) - (pow(2,numberBits)-1)/float(pow(2,numberBits))
print('Original A signal')
Audio(x, rate=Fs)
print('A signal quantized to 3 bits')
Audio(xQuant, rate=Fs)
Weird, isn't it? Exactly like the first-generation mobile phones... But to get a better idea, let's quantize our jingle!
# Loading the jingle
import scipy.io as sio
mat_contents = sio.loadmat('jingle.mat')
jingle = mat_contents['jingle']
jingle = jingle.reshape(jingle.size,)
# Restrict length of jingle to 1s
Fs = 44100
length = 1.
jingle = jingle[:int(Fs*length)] # the slice index must be an integer
numberBits = 3
# Constrain the signal to be between -1 and 1
maxValue = max(jingle)
minValue = min(jingle)
rangeJingle = maxValue - minValue
scalingFactor = 2/float(rangeJingle)
shiftFactor = -1-(scalingFactor*minValue)
jingleResized = jingle*scalingFactor+shiftFactor
# Map the shifted signal jingleResized+1 onto 2^numberBits uniform cells
jingleQuant = np.floor((jingleResized+1)*pow(2,numberBits-1))
# And reconstruct each cell at its midpoint, back between -1 and 1
jingleQuant = jingleQuant/float(pow(2,numberBits-1)) - (pow(2,numberBits)-1)/float(pow(2,numberBits))
print('Original jingle')
Audio(jingle, rate=Fs) # play back at the jingle's sampling rate of 44100 Hz
print('Quantized jingle')
Audio(jingleQuant, rate=Fs)
Nasty, isn't it? ... Well, try with 2 bits!
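As a closing back-of-the-envelope check (a standard result, assuming a full-scale sinusoid and a uniform quantizer with $B$ bits):

$$\mathrm{SQNR} \approx 6.02\,B + 1.76 \ \mathrm{dB},$$

i.e., roughly 6 dB per bit, so dropping from 3 bits ($\approx 19.8$ dB) to 2 bits ($\approx 13.8$ dB) makes the quantization noise clearly more audible.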