Error evaluation for a Markov chain

The homework is due on October 18th (there is no class on October 11th).


**Homework 04: Error evaluation for a Markov chain**


**Don't hesitate to ask questions and make remarks on this wiki page.**

=Introduction=

Monte Carlo calculations are useful only if we can come up with an estimate of the error on the observables, such as an approximate value for the mathematical constant //π//, an estimate of the quark masses or of the velocity of neutrinos, the value of a condensate fraction, an approximate equation of state, etc. Here we study different methods to analyze data produced by a Markov-chain method.

=I. Sampling the position of a particle in a Mexican hat=

A particle is in equilibrium with a two-dimensional external potential: math V(r)= r^4-8 r^2 math where the vector //r//=(//x//,//y//) identifies the position of the particle in the plane. We set the temperature of the system to //T//=1.
 * **1.** Write a Markov chain algorithm for this simple system with a Metropolis acceptance rate:
 * 1) Propose a random move delta = [random.uniform(-1,1),random.uniform(-1,1)] from position math r_0 math to position math r_1 = r_0 + \delta math
 * 2) Accept the move with the Metropolis probability: math \min \big[ 1, \frac{\pi(r_1)}{\pi(r_0)} \big] math where //π// is the Boltzmann weight math \pi(r) \propto e^{-V(r)} . math
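The two steps above can be sketched as follows; this is a minimal illustration, where the function name, the starting position, and the step size delta_max=1 are our own choices, not prescribed by the exercise:

```python
import math
import random

def mexican_hat_metropolis(n_steps, delta_max=1.0, seed=42):
    """Metropolis Markov chain for pi(r) ~ exp(-V(r)) with V(r) = r^4 - 8 r^2, T = 1."""
    random.seed(seed)

    def V(x, y):
        r2 = x * x + y * y
        return r2 * r2 - 8.0 * r2

    x, y = 1.0, 0.0  # arbitrary starting position
    samples = []
    for _ in range(n_steps):
        # 1) propose a uniform random displacement delta
        dx = random.uniform(-delta_max, delta_max)
        dy = random.uniform(-delta_max, delta_max)
        # 2) accept with probability min(1, pi(r1)/pi(r0)) = min(1, exp(V(r0) - V(r1)))
        if random.uniform(0.0, 1.0) < math.exp(min(0.0, V(x, y) - V(x + dx, y + dy))):
            x, y = x + dx, y + dy
        samples.append((x, y))
    return samples
```

Note that the current position is recorded at every step, whether or not the move is accepted; dropping rejected configurations would bias the sampling.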

Consider a generic random variable //ξ//. As discussed during Class session 02 - Errors and fluctuations, for uncorrelated realizations math \{\xi_1, \xi_2, \ldots, \xi_N\} math sampled //e.g.// through direct sampling, the statistical error in the evaluation of the mean of //ξ// can be computed using the Central Limit Theorem: math \langle \xi\rangle=\frac{1}{N}\sum_{i=1}^N \xi_i \pm \frac{\sqrt{\text{Var}(\xi_i)}}{\sqrt{N}} math
 * **2.** Use this method to evaluate the error for two observables in the Mexican hat problem:
> Observable //A//: the average distance from the origin |//r//|.
> Observable //B//: the average horizontal coordinate //x//.
 * **3.** Is the estimation of the statistical error correct? Use the "quality control" method introduced in the TD to discuss this point.
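As a sketch, the Central Limit Theorem error bar above (valid only for uncorrelated data) can be computed like this; the helper name is ours:

```python
import math

def naive_error(data):
    """Mean of the data and its naive CLT error bar, treating the samples as uncorrelated."""
    n = len(data)
    mean = sum(data) / n
    # unbiased sample variance Var(xi)
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return mean, math.sqrt(var / n)
```

For Markov-chain data the samples are correlated, so this estimate will generally be too small; quantifying by how much is the point of parts II and III.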

=II. The bunching algorithm (see SMAC, pages 60-62)=

We test here a basic but powerful tool for error analysis of Markov chains: the bunching algorithm. The basic transformation of this algorithm is the reduction of a sequence of //N// data into a related sequence of //N///2 data. An example of the algorithm is here:

code format="python"
data = [1., 2., 2., 1.5]
new_data = []
while data != []:
    x = data.pop(0)
    y = data.pop(0)
    new_data.append((x + y) / 2.)
print(new_data)
data = new_data[:]  # the "[:]" is essential: data is a copy of new_data
code

The original list [1., 2., 2., 1.5] has been contracted 2 by 2 into [1.5, 1.75]. Apply this bunching transformation to observables //A// and //B//, and treat the successive lists as if they were made of independent random variables, to compute their statistical errors.
 * 1) Plot the statistical errors as a function of the iteration of the bunching transformation. (For instance, take 16 iterations for //N//=2^20 data.)
 * 2) Comment on your results with the help of some plots, and on the difference between observables //A// and //B//.
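The full analysis combines the two ingredients: at each bunching level, compute the naive error bar, then contract the data 2 by 2. A minimal sketch (the function name is ours):

```python
import math

def bunching_errors(data, n_iterations):
    """Naive error bar of the mean after each step of pairwise bunching."""
    errors = []
    for _ in range(n_iterations):
        n = len(data)
        if n < 2:
            break  # not enough data left to bunch further
        mean = sum(data) / n
        var = sum((x - mean) ** 2 for x in data) / (n - 1)
        errors.append(math.sqrt(var / n))
        # contract the data 2 by 2
        data = [(data[2 * k] + data[2 * k + 1]) / 2.0 for k in range(n // 2)]
    return errors
```

For correlated data the apparent error grows with the bunching iteration until the bunched values become effectively independent; the plateau value is the honest error bar.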

=III. The autocorrelation method=

We introduce a second method to analyze our data.

The autocorrelation function is defined as math C_{i,j}=\langle\xi_i\xi_{j}\rangle - \langle\xi_i \rangle \langle \xi_{j}\rangle math Far enough from the initial condition, the system becomes invariant under temporal translations and the autocorrelation function depends only on the distance between data: math C_{i,i+n}= C(n)\,. math Measuring the autocorrelation function allows one to determine the statistical error in our Markov chain data: math \langle\xi\rangle= \frac{1}{N}\sum_{n=0}^N \xi_n \pm \frac{\sqrt{2 \sum_{n=0}^N C(n)}}{\sqrt{N}} \qquad\qquad (1) math
 * 1) Plot the autocorrelation function //C//(//n//) for observables //A// and //B// as a function of //n//. Comment.
 * 2) Show that, for the evaluation of the error, the second method agrees perfectly with the results of the bunching algorithm.
 * 3) Justify the formula (1) above of the autocorrelation method.
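A direct estimator of //C//(//n//) and of the error bar of formula (1) can be sketched as follows; the function names are ours, and in practice the sum over //C//(//n//) would be cut off once the estimate of //C//(//n//) is compatible with zero, rather than extended to all available //n//:

```python
import math

def autocorrelation(data, n_max):
    """Estimate C(n) = <xi_i xi_{i+n}> - <xi_i><xi_{i+n}> for n = 0 .. n_max."""
    N = len(data)
    mean = sum(data) / N
    # average the product of deviations over all pairs at distance n
    return [sum((data[i] - mean) * (data[i + n] - mean) for i in range(N - n)) / (N - n)
            for n in range(n_max + 1)]

def autocorrelation_error(data, n_max):
    """Error bar of the mean from formula (1): sqrt(2 * sum_n C(n)) / sqrt(N)."""
    c = autocorrelation(data, n_max)
    return math.sqrt(2.0 * sum(c)) / math.sqrt(len(data))
```

For uncorrelated data only the //C//(0) term survives on average, and the formula reduces (up to the factor 2, to be discussed in question 3) to the Central Limit Theorem estimate of Part I.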