= DM 1: Hopping models, transfer matrix and jamming =

In this exercise, we study two hopping models for one-dimensional transport. We apply two methods to study these models: the Markov Chain Monte Carlo (MCMC) algorithm and the transfer matrix.
 * The first model describes the **equilibrium** dynamics of a single particle moving on a one-dimensional lattice.
 * The second one describes the **non-equilibrium** transport of particles between two reservoirs, in the absence of an external potential and with exclusion interactions between the particles. Although one-dimensional, this model exhibits a //phase transition//, linked to jamming.

Don't hesitate to ask questions and make remarks on this wiki page.

= Markov Chain Monte Carlo method = We would like to sample a configuration //C// with a given probability //π_C//. A solution to this problem is the Markov Chain Monte Carlo method. We define a dynamics over the ensemble of configurations {//C//} such that, in the long-time limit, each configuration is visited with the correct probability //π_C//. As discussed in the lecture, a Markov chain describes a stochastic evolution which is fully specified by the transition probabilities math p_{C\to C'} math between configurations //C// and //C'//. The probability //π_C//(//t//) of being in configuration //C// at time //t// evolves according to math \pi_C(t+1)=\sum_{C'} \pi_{C'}(t)\, p_{C' \to C} math To ensure that //π_C//(//t//) converges towards the target distribution //π_C//, one must satisfy the global balance condition: math \sum_{C'} \pi_C \, p_{C \to C'} = \sum_{C'} \pi_{C'} \, p_{C' \to C} math which, since the transition probabilities out of //C// sum to one, states that //π_C// is stationary under the dynamics. The **detailed balance** condition math \pi_C \, p_{C \to C'} = \pi_{C'} \, p_{C' \to C} math is more restrictive: it imposes balance separately for each pair of configurations. It implies global balance, and therefore leads to the same probability distribution in the long-time limit.
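As a quick illustration (not part of the exercise), both balance conditions can be checked numerically on a small chain. The 3×3 matrix below is the Metropolis matrix of a particle hopping between three sites with reflecting walls (rejected moves leave the particle in place), for which the target distribution is uniform:

```python
# Minimal numerical check of detailed and global balance for a
# hypothetical 3-site chain with uniform target distribution pi_C = 1/3.
import numpy as np

# Column-stochastic transition matrix: p[i, j] is the probability j -> i.
p = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
pi = np.ones(3) / 3.0

# Detailed balance: pi_C p_{C -> C'} == pi_{C'} p_{C' -> C} for every pair,
# i.e. the matrix of probability flows is symmetric.
print(np.allclose(p * pi, (p * pi).T))   # True
# Global balance follows: pi is unchanged by one step of the dynamics.
print(np.allclose(p.dot(pi), pi))        # True
```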

= Model 1 - hopping in a box: the Markov Chain Monte Carlo method = As an example, we consider a particle on a system of //L// sites with hard walls at sites 0 and //L//-1. At equilibrium, the particle occupies site //i// with probability 1///L//. To sample a configuration according to this probability, a possibility is to implement a dynamics in which the particle jumps from one site to one of its neighboring sites with probabilities verifying detailed balance. Below we implement the corresponding Markov chain with the Metropolis algorithm. As a first test, consider the program below. code format="python"
import random, pylab

random.seed('CFP')
Ntime = 10000
L = 8
k = L // 2                                 # initial position: middle of the box
all_pos = [k]
for itime in range(Ntime):
    l = k + random.randint(0, 1) * 2 - 1   # propose a jump to the left or to the right
    if l in range(L):                      # accept only if the target site is in the box
        k = l
    all_pos.append(k)                      # the position is recorded even when the move is rejected
pylab.hist(all_pos, bins=L, range=(-0.5, L - 0.5), density=True)
pylab.title('Markov Chain')
pylab.xlabel('Site')
pylab.ylabel('Occupation probability')
pylab.show()
code
 * 1) Explain why detailed balance is respected.
 * 2) Check numerically that the above program converges to //π_i// = 1///L//. Plot the histograms for Ntime = 100, 10000, ... to show that the agreement improves with increasing Ntime.
 * 3) Explain the meaning of the lines  l = k + random.randint(0,1)*2 - 1  and  if l in range(L): k = l
 * 4) Replace the previous two lines with  k = (k + random.randint(0,1)*2 - 1) % L . Which boundary condition does this implement? What happens to the occupation probability?
 * 5) Instead of this Markov chain algorithm, can you imagine a simple algorithm which samples the configurations exactly according to the equilibrium law?
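For question 2, one possible way to quantify the convergence (a sketch; the helper box_chain and the chosen Ntime values are our own, not prescribed by the exercise) is to rerun the chain for increasing Ntime and print the largest deviation of the empirical histogram from the flat value 1///L//:

```python
# Sketch: empirical occupation probabilities of the box chain for
# increasing run lengths, compared to the uniform value 1/L.
import random

def box_chain(Ntime, L=8, seed='CFP'):
    rng = random.Random(seed)
    k = L // 2
    counts = [0] * L
    for _ in range(Ntime):
        l = k + rng.randint(0, 1) * 2 - 1    # propose a left/right jump
        if 0 <= l < L:                       # accept it inside the box
            k = l
        counts[k] += 1                       # record the (possibly unchanged) position
    return [c / Ntime for c in counts]

for Ntime in (100, 10000, 1000000):
    hist = box_chain(Ntime)
    print(Ntime, max(abs(h - 1.0 / 8) for h in hist))
```

One expects the deviation to shrink roughly like the inverse square root of the number of samples.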

= Model 1 - hopping in a box: the transfer matrix method =
We now solve Model 1 using the transfer matrix method. The ensemble of all transition probabilities can be collected in the transfer matrix of the system. The function transfer_calc computes the transfer matrix of Model 1. We introduce a new package, numpy, which is very useful for handling matrices, linear algebra operations, etc. code format="python"
import numpy

def transfer_calc(L):
    neighbor = [[min(k + 1, L - 1), max(k - 1, 0)] for k in range(L)]
    transfer = numpy.zeros((L, L))
    for k in range(L):
        transfer[neighbor[k][0], k] = 0.5
        transfer[neighbor[k][1], k] = 0.5
    return transfer
code The transfer matrix controls the time evolution of all possible simulations performed with the Markov chain algorithm. The program below implements the time evolution of Model 1. The matrix-vector product is performed by a numpy function! (Remark: you have to copy-and-paste the function transfer_calc into the program below to get a single file.) code format="python"
import pylab, numpy

Ntime = 20
L = 10
transfer = transfer_calc(L)
position = numpy.zeros((L))
position[L // 2] = 1                     # the particle starts in the middle of the box
for itime in range(Ntime):
    position = numpy.dot(transfer, position)
pylab.title('Transfer matrix')
pylab.xlabel('Site')
pylab.ylabel('Occupation probability')
grid = range(L)
pylab.ylim(0, 0.5)
pylab.plot(grid, position, 'bo')
pylab.show()
code
 * 1) How is the system prepared at time Ntime = 0? Is the initial condition important for the long-time limit?
 * 2) Check the convergence as a function of Ntime (try Ntime = 10, 100, ...). Write explicitly the transfer matrix for L = 3.
 * 3) Compute sum(position). Does it change with Ntime? Why?
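To check the written answer to question 2, one can simply print the matrix returned by transfer_calc for L = 3; the column sums also illustrate question 3. (The snippet repeats transfer_calc so that it runs on its own.)

```python
# Print the transfer matrix for L = 3 and its column sums.
import numpy

def transfer_calc(L):
    neighbor = [[min(k + 1, L - 1), max(k - 1, 0)] for k in range(L)]
    transfer = numpy.zeros((L, L))
    for k in range(L):
        transfer[neighbor[k][0], k] = 0.5
        transfer[neighbor[k][1], k] = 0.5
    return transfer

T = transfer_calc(3)
print(T)
print(T.sum(axis=0))   # each column sums to 1, so sum(position) is conserved
```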

We now study the eigenvalues and eigenvectors of the transfer matrix using numpy. code format="python"
import numpy

L = 4
transfer = transfer_calc(L)
w, v = numpy.linalg.eig(transfer)   # w: eigenvalues, v[:, i]: right eigenvector of w[i]
for i in range(L):
    print(w[i], v[:, i])
code
 * 1) Why is the largest eigenvalue equal to 1? Give the physical meaning of the left and right eigenvectors associated with the eigenvalue 1 (in the program, the columns of //v// are the right eigenvectors and //w// contains the eigenvalues).
 * 2) We call //λ_2// the second largest eigenvalue. Explain why the time scale //Δ// = -1 / log //λ_2// gives the relaxation time towards equilibrium.
 * 3) Study the behavior of //Δ// as a function of //L//, numerically and analytically. Comment.
 * 4) Bonus: Change the transfer matrix algorithm to implement periodic boundary conditions. If you repeat the simulations, equilibrium is not reached. Explain why and propose a solution to fix this problem.
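A possible numerical sketch for question 3 (the chosen sizes and the use of eigvals are our own choice): compute the second-largest eigenvalue modulus for several //L// and the associated time scale //Δ// = -1 / log //λ_2//. How //Δ// scales with //L// is the object of the question.

```python
# Sketch: relaxation time Delta = -1/log(lambda_2) as a function of L,
# from the second-largest eigenvalue modulus of the transfer matrix.
import numpy

def transfer_calc(L):
    neighbor = [[min(k + 1, L - 1), max(k - 1, 0)] for k in range(L)]
    transfer = numpy.zeros((L, L))
    for k in range(L):
        transfer[neighbor[k][0], k] = 0.5
        transfer[neighbor[k][1], k] = 0.5
    return transfer

deltas = {}
for L in (4, 8, 16, 32):
    w = numpy.linalg.eigvals(transfer_calc(L))
    lam2 = sorted(abs(w))[-2]            # second-largest eigenvalue modulus
    deltas[L] = -1.0 / numpy.log(lam2)
    print(L, deltas[L])
```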

= Model 2: the Totally Asymmetric Exclusion Process (TASEP) =
In the previous examples the occupation probability //π_C// was known, and we checked that the dynamics defined by the transition probabilities math p_{C\to C'} math samples the steady-state distribution //π_C// in the long-time limit. In the following example the problem is reversed: the transition probabilities are //given// and we look for the //corresponding// occupation probability //π_C//. The system is described by a one-dimensional lattice of //L// sites with open boundary conditions. Each site is either empty or occupied by a single particle (in general there is more than one particle in the system). Different moves are possible: (i) each particle can jump to its right neighbor, provided the target site is empty (the process is totally asymmetric); (ii) a particle can be injected at the first site when it is empty; (iii) a particle can be ejected from the last site when it is occupied (the system is in contact with two reservoirs). The dynamics is defined as follows:
 * At each step we select at random one of the possible moves.
 * We check whether this move is allowed or not.
 * If the move occurs in the bulk of the system, we accept it (case (i)).
 * If the move is a particle injection, we accept it with probability //α// (case (ii)).
 * If the move is a particle ejection, we accept it with probability //β// (case (iii)).

Below we implement the Markov chain corresponding to this dynamics. code format="python"
import random, pylab

random.seed(2)
L = 100
alpha = .3
beta = .1
Ntime = 100000
state = [0 for k in range(L + 1)]       # state[1], ..., state[L]: site occupations
for iter in range(Ntime):
    k = random.randint(0, L)            # pick one of the L+1 possible moves
    if k == 0:                          # injection at the first site
        if random.random() < alpha:
            state[1] = 1
    elif k == L:                        # ejection from the last site
        if random.random() < beta:
            state[L] = 0
    elif state[k] == 1 and state[k + 1] == 0:   # jump in the bulk
        state[k + 1] = 1
        state[k] = 0
    if iter % 2000 == 0:                # plot a snapshot of the configuration
        yaxis = [i for i in range(L) if state[i + 1] == 1]
        xaxis = [iter for k in range(len(yaxis))]
        pylab.plot(xaxis, yaxis, 'ro')
pylab.xlabel('Number of steps')
pylab.ylabel('System configuration')
pylab.show()
code It can be proven that in this example the detailed balance condition is not fulfilled, but the global balance condition is. The dynamics converges to a non-equilibrium stationary state in which a steady current flows through the system.
 * 1) How many moves are possible?
 * 2) Describe what you observe in the system for the following values of the probabilities //α// and //β//: (i) //α// < //β//, e.g. (//α//,//β//) = (.1,.3); (ii) //β// < //α//, e.g. (//α//,//β//) = (.3,.1).
 * 3) In which phase do you observe jamming?
 * 4) The phase //α// > 1/2 and //β// > 1/2 is the //maximal current phase//. Compare the cases (//α//,//β//) = (.6,.8) and (//α//,//β//) = (.8,.6).
 * 5) How large is the transfer matrix for this model as a function of //L// (the number of sites)?
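Beyond watching configurations, the stationary state can be characterized by its current. The sketch below (the helper tasep_current and its parameters are our own, not part of the exercise) counts accepted ejections per sweep of //L//+1 attempted moves, for a few values of (//α//,//β//):

```python
# Sketch: stationary current of the TASEP measured as the number of
# accepted particle ejections per sweep of L+1 attempted moves.
import random

def tasep_current(alpha, beta, L=100, Ntime=1000000, seed=2):
    rng = random.Random(seed)
    state = [0] * (L + 1)               # state[1], ..., state[L]
    ejected = 0
    for _ in range(Ntime):
        k = rng.randint(0, L)
        if k == 0:                      # injection at the first site
            if rng.random() < alpha:
                state[1] = 1
        elif k == L:                    # ejection from the last site
            if state[L] == 1 and rng.random() < beta:
                state[L] = 0
                ejected += 1
        elif state[k] == 1 and state[k + 1] == 0:   # bulk jump
            state[k + 1] = 1
            state[k] = 0
    return ejected * (L + 1) / Ntime    # ejections per sweep

for ab in ((0.1, 0.3), (0.3, 0.1), (0.8, 0.6)):
    print(ab, tasep_current(*ab))
```

For large //L// one expects the current to approach //α//(1-//α//) in the low-density phase, //β//(1-//β//) in the high-density phase, and 1/4 in the maximal current phase (see the Derrida et al. reference below).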

= References, further reading = If you're interested in the TASEP model, you can read for instance:
 * B. Derrida, E. Domany, D. Mukamel, //An exact solution of the one dimensional asymmetric exclusion model with open boundaries//, J. Stat. Phys. 69, 667 (1992).
 * M. R. Evans, R. Juhász, and L. Santen, //Shock formation in an exclusion process with creation and annihilation//, Phys. Rev. E 68, 026117 (2003).
