Correlated noise

toc =Introduction= Consider a sequence of random numbers math \eta=(\eta_1,\eta_2,\ldots, \eta_N) math used to simulate the result of a stochastic evolution. In the simplest cases, these numbers are taken to be //independent// and //identically distributed//, which is realistic enough to simulate many physical phenomena. However, the absence of correlations often corresponds to an idealized limit, and one may want to go beyond it.
**Class Session 11: Correlated noise**

In this exercise, we discuss how to generate //N// identically distributed but **correlated** Gaussian random numbers.

=Covariance matrix= For simplicity, we assume that each of the //η_i// is normally distributed, with zero mean and unit variance.

The covariance matrix //C// is the matrix whose entry (//i//,//j//) equals the covariance math C_{i,j}=\langle \eta_i \eta_j\rangle_c=\langle \eta_i \eta_j\rangle math (the connected and plain correlations coincide here because the means vanish). Show that:
 * //C// is a symmetric matrix.
 * Eigenvalues of //C// are positive or zero (the matrix is positive semi-definite).
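Both properties can be checked numerically on a concrete covariance matrix; a minimal sketch (the exponentially decaying entries and the decay length 4 are arbitrary example choices, not part of the exercise):

```python
import numpy

N = 16
# Example covariance with exponentially decaying correlations (decay length 4 is arbitrary)
C = numpy.array([[numpy.exp(-abs(i - j) / 4.) for j in range(N)] for i in range(N)])
print(numpy.allclose(C, C.T))        # symmetry: prints True
w = numpy.linalg.eigvalsh(C)         # eigvalsh assumes a symmetric matrix
print(w.min() >= -1e-12)             # all eigenvalues non-negative (up to roundoff): prints True
```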

Show that there exists a matrix //A//, symmetric and positive semi-definite, verifying //C// = //A//^2. The matrix //A// is a **square root** of the matrix //C//.
 * How can one compute //A// in practice?
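One practical answer, sketched below: diagonalize //C// and take the square root of each eigenvalue (here with numpy.linalg.eigh, which is suited to symmetric matrices; the example covariance is the same arbitrary exponential choice as above):

```python
import numpy

N = 16
# Arbitrary example covariance (symmetric, positive definite)
C = numpy.array([[numpy.exp(-abs(i - j) / 4.) for j in range(N)] for i in range(N)])
# Diagonalize C = V diag(w) V^T, then take the square root of the eigenvalues
w, V = numpy.linalg.eigh(C)
A = V @ numpy.diag(numpy.sqrt(w)) @ V.T
print(numpy.allclose(A @ A, C))      # A A = C, so A is a square root of C: prints True
```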

Show that the **correlated noise** (//η_i//) can be generated in the following way:
 * Draw //N// **independent** numbers //X// = (//X_1//, ..., //X_N//) from a Gaussian distribution of zero mean and unit variance.
 * Compute //η// from: //η// = //A//.//X//.
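To convince oneself that this recipe works, one can compare the empirical covariance of many samples //η// = //A//.//X// with //C//; a sketch (the sample count, seed, and example covariance are arbitrary choices):

```python
import numpy

numpy.random.seed(0)
N, samples = 8, 200000
# Arbitrary example covariance and its symmetric square root A
C = numpy.array([[numpy.exp(-abs(i - j) / 2.) for j in range(N)] for i in range(N)])
w, V = numpy.linalg.eigh(C)
A = V @ numpy.diag(numpy.sqrt(w)) @ V.T
X = numpy.random.randn(samples, N)   # rows of independent unit Gaussians
eta = X @ A.T                        # each row is one correlated sample eta = A X
C_emp = eta.T @ eta / samples        # empirical covariance <eta_i eta_j>
print(numpy.allclose(C_emp, C, atol=0.02))   # expect True for large sample counts
```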

Explain why Gaussianity is important. Use this program to test our conclusions:

code format="python"
import pylab, math, numpy
N = 128
covariance = numpy.zeros((N, N))
A = numpy.zeros((N, N))
for i in range(N):
    for j in range(N):
        x = abs(i - j)
        covariance[i, j] = math.exp(-x / 200.)
w, v = numpy.linalg.eig(covariance)
for i in range(N):
    for j in range(N):
        A[i, j] = sum(math.sqrt(w[k]) * v[i, k] * v[j, k] for k in range(N))
x = numpy.random.randn(N)
eta = numpy.dot(A, x)
pylab.title('Correlated vs non-correlated noise')
pylab.xlabel('i')
pylab.ylabel('eta, x')
pylab.plot(eta, 'b.-')
pylab.plot(x, 'r.-')
pylab.show()
code
 * For reproducible results, fix the seed at the start of the program: numpy.random.seed(2).



=From correlated noise to non-Markovian random walk= We consider a one-dimensional Gaussian random walk of //N// steps, initially located at the origin. Its trajectory {//x_1//, //x_2//, ..., //x_N//} can be written as: math x(i)=\sum_{k=1}^i \eta_k math

If the variables //η_i// are independent, the walk is Markovian, and the process //x//(//i//) becomes a Brownian motion at time //t// = //i// //dt// in the continuum limit where the time step //dt// goes to zero. In particular, the autocorrelation function of the process is math \langle x(i) x(j)\rangle = \text{min}(i,j) math What happens when the noise (//η_i//) is correlated?
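The relation ⟨x(i) x(j)⟩ = min(i, j) for independent unit-variance steps can be verified by direct sampling; a minimal sketch (sample count and seed are arbitrary choices):

```python
import numpy

numpy.random.seed(0)
N, samples = 6, 200000
eta = numpy.random.randn(samples, N)   # independent unit-variance steps
x = numpy.cumsum(eta, axis=1)          # x(i) = eta_1 + ... + eta_i, one walk per row
corr = x.T @ x / samples               # empirical <x(i) x(j)>
expected = numpy.minimum.outer(numpy.arange(1, N + 1), numpy.arange(1, N + 1))
print(numpy.allclose(corr, expected, atol=0.1))   # expect True for large sample counts
```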

Fractional Brownian motion is an important class of non-Markovian processes. The autocorrelation function for these processes is math \langle x(i) x(j)\rangle = \frac{1}{2}\left( i^{2 H} + j^{2 H}- |i-j|^{2 H}\right) math Here //H// is the Hurst exponent (0 < //H// < 1), and for //H// = 1/2 we recover Brownian motion and standard diffusion.
 * Compute the covariance matrix of the increments //η_i// = //x//(//i//) − //x//(//i// − 1) associated with a generic fractional Brownian motion.
 * Looking at the properties of this matrix, explain why fractional Brownian motion is so famous.
 * Plot the behavior of the covariance matrix for //H// < 1/2 and for //H// > 1/2.
 * Generate some paths of fractional Brownian motion in Python.
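As a check for the first item above: writing //d// = |//i// − //j//| and expanding ⟨η_i η_j⟩ = ⟨(//x//(//i//) − //x//(//i// − 1))(//x//(//j//) − //x//(//j// − 1))⟩ with the fBm autocorrelation, the terms //i//^2H and //j//^2H cancel, leaving math C_{i,j} = \frac{1}{2}\left( |d+1|^{2 H} + |d-1|^{2 H} - 2\, d^{2 H}\right) math which reduces to the identity matrix for //H// = 1/2 (independent increments), while for //H// ≠ 1/2 the increments remain correlated at all distances.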

Use this program to test our conclusions:

code format="python"
import pylab, math, numpy
numpy.random.seed(2)
N = 128
H = 3. / 4.
HH = 2 * H
covariance = numpy.zeros((N, N))
A = numpy.zeros((N, N))
for i in range(N):
    for j in range(N):
        d = abs(i - j)
        covariance[i, j] = (abs(d - 1) ** HH + (d + 1) ** HH - 2 * d ** HH) / 2.
w, v = numpy.linalg.eig(covariance)
for i in range(N):
    for j in range(N):
        A[i, j] = sum(math.sqrt(w[k]) * v[i, k] * v[j, k] for k in range(N))
x = numpy.random.randn(N)
eta = numpy.dot(A, x)
xfBm = [sum(eta[0:i]) for i in range(len(eta) + 1)]
xBM = [sum(x[0:i]) for i in range(len(x) + 1)]
pylab.title('fBm (blue) vs BM (red)')
pylab.xlabel('i')
pylab.ylabel('x(i)')
pylab.plot(xfBm, 'b.-')
pylab.plot(xBM, 'r.-')
pylab.show()
code


