Spin systems: Enumeration and Cluster algorithms

=Introduction= In this lecture, we approach the statistical mechanics and computational physics of the Ising model. This archetypal physical system undergoes an order-disorder phase transition. The Ising model shares this transition and many other properties with more complicated models which cannot be analyzed so well.

==Definition==
Consider spins //σ_k// = ±1 on a lattice of //N// sites //k// = 0, ..., //N//-1, say, in two dimensions. Spins prefer to be aligned with each other, and the energy is written as: math E = - J \sum_{\langle k,l \rangle } \sigma_k \sigma_l math where the sum runs over all pairs of neighboring sites, with positive coupling //J//, which we take equal to 1.

The partition function of the Ising model is given by math Z = \sum_{\sigma_0 = \pm 1 \dots \sigma_{N-1} = \pm 1} \exp( - \beta E) = \sum_E \mathcal{N}(E) \exp( - \beta E) math

Note that the first equation expresses the partition function as a sum over the 2^//N// spin configurations, and the second equation as a sum over the ~//N// possible energies, using the density of states math \mathcal{N}(E) math. The analytical and numerical approaches to the Ising model often aim at computing the equation of state.

Physical observables, such as the energy, the free energy, the specific heat, etc., can also be expressed easily. For example, for the internal energy, we have: math \langle E \rangle = \frac{1}{Z} \sum_E E \mathcal{N}(E) \exp( - \beta E) math

and math \langle E^2 \rangle = \frac{1}{Z} \sum_E E^2 \mathcal{N}(E) \exp( - \beta E) math

This gives for the specific heat: math C_V = \partial \langle E \rangle / \partial T = - \beta^2 \partial \langle E \rangle / \partial \beta math and math c_V = C_V/N = \frac{\beta^2}{N} \left( \langle E^2 \rangle - \langle E \rangle^2 \right). math
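As an illustration of these formulas, the sketch below evaluates the thermal averages and c_V directly from a density of states. The dictionary N_of_E quoted here is the known result for the 2×2 periodic lattice (it can be checked with the enumeration of the next section); beta = 1.0 is an arbitrary example value.

```python
from math import exp

# Density of states of the 2x2 periodic Ising lattice (16 configurations):
# two ground states at E = -8, twelve states at E = 0, two at E = +8.
N_of_E = {-8: 2, 0: 12, 8: 2}
N = 4
beta = 1.0  # example temperature
Z = sum(n * exp(-beta * E) for E, n in N_of_E.items())
E_mean = sum(n * E * exp(-beta * E) for E, n in N_of_E.items()) / Z
E2_mean = sum(n * E ** 2 * exp(-beta * E) for E, n in N_of_E.items()) / Z
c_V = beta ** 2 / N * (E2_mean - E_mean ** 2)
print(E_mean, c_V)  # at this low temperature, <E> is close to -8
```

The same few lines work for any lattice size once math \mathcal{N}(E) math is known, which is the point of the enumeration methods below.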

==Description of the lattice==
We must clarify how to represent the lattice sites. In one dimension, this is easy. On //L//×//L//×... lattices, rather than identifying each site through its coordinates (//k_x//, //k_y//, ...), it is better to think of the sites as numbered 0 ... //N//-1, with the geometry defined through a "neighbor" function. In Python, this can easily be done with dictionaries. Programs written in this way are largely independent of the underlying lattice.

Consider the following function, which builds two dictionaries (site_dic and x_y_dic) and a tuple of tuples called nbr (for neighbors). The dictionaries site_dic and x_y_dic translate between (row, column) coordinates and site indices, and nbr allows us to determine which sites of the lattice are the neighbors (right, up, left, down) of the current site. code format="python"
def square_neighbors(L):
    N = L * L
    site_dic = {}
    x_y_dic = {}
    for j in range(N):
        row = j // L
        column = j - row * L
        site_dic[(row, column)] = j
        x_y_dic[j] = (row, column)
    nbr = []
    for j in range(N):
        row, column = x_y_dic[j]
        right_nbr = site_dic[row, (column + 1) % L]
        up_nbr = site_dic[(row + 1) % L, column]
        left_nbr = site_dic[row, (column - 1 + L) % L]
        down_nbr = site_dic[(row - 1 + L) % L, column]
        nbr.append((right_nbr, up_nbr, left_nbr, down_nbr))
    nbr = tuple(nbr)
    return nbr, site_dic, x_y_dic
code
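With the neighbor table in hand, the energy of a configuration is easily computed by summing, for each site, only its right and up neighbors, so that every bond ⟨//k//,//l//⟩ is counted exactly once. The sketch below rebuilds a small neighbor table inline so that it is self-contained; the helper function energy is our addition, not part of the lecture.

```python
L = 2
N = L * L
# compact inline version of the square_neighbors table (right, up, left, down)
nbr = tuple((j // L * L + (j + 1) % L, (j + L) % N,
             j // L * L + (j - 1) % L, (j - L) % N)
            for j in range(N))

def energy(S, nbr):
    # right and up neighbors only: each bond <k,l> is counted exactly once
    return -sum(S[k] * (S[nbr[k][0]] + S[nbr[k][1]]) for k in range(len(S)))

print(energy([1, 1, 1, 1], nbr))   # ground state of the 2x2 lattice: -8
```

Note that on the 2×2 periodic lattice each pair of sites is connected by two bonds, which is why the aligned configuration has energy -8.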

==Plotting configurations==
Here's an idea for plotting configurations ...

code format="python"
from pylab import figure, axis, plot, savefig, show

figure()
x_plus = []
y_plus = []
x_minus = []
y_minus = []
for i in range(N):
    x, y = x_y_dic[i]
    if S[i] == 1:
        x_plus.append(x)
        y_plus.append(y)
    else:
        x_minus.append(x)
        y_minus.append(y)
axis('scaled')
axis([-0.5, L - 0.5, -0.5, L - 0.5])
plot(x_plus, y_plus, 'r+', markersize=10)
plot(x_minus, y_minus, 'bx', markersize=10)
savefig('test2.eps')
show()
code

=Enumerations (Basic algorithms)= The most straightforward approach to the Ising model consists in enumerating all configurations.

==Binary enumeration==
For the 2x2 lattice, this can be done on a piece of paper:



A one-line Python program generates these configurations if we identify a "-" spin with a "0" bit and a "+" spin with a "1" bit in the binary representation of the numbers 0 ... 2^//N//-1: code format="python"
for k in range(16):
    print(bin(k))
code From these configurations, it is best to first determine the density of states and from it all the observables we want. We can then determine the properties at different temperatures without rerunning the expensive enumeration.
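As a sketch, for the 2×2 lattice, the full binary enumeration and the resulting density of states can be written as follows (the inline neighbor table is an assumption, standing in for square_neighbors above; spin //k// of configuration number conf is read off bit //k//):

```python
L = 2
N = L * L
# compact inline neighbor table (right, up, left, down)
nbr = tuple((j // L * L + (j + 1) % L, (j + L) % N,
             j // L * L + (j - 1) % L, (j - L) % N)
            for j in range(N))
N_of_E = {}
for conf in range(2 ** N):
    S = [1 if conf >> k & 1 else -1 for k in range(N)]
    # sum over right and up neighbors only: each bond counted once
    E = -sum(S[k] * (S[nbr[k][0]] + S[nbr[k][1]]) for k in range(N))
    N_of_E[E] = N_of_E.get(E, 0) + 1
print(N_of_E)   # density of states of the 2x2 lattice
```

The result, {-8: 2, 0: 12, 8: 2}, is the density of states used in the specific-heat example above.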

==Gray code enumeration==
Even on the level of basic enumerations, there is room for improvement, because the above sequence, based on the binary representation of the numbers //k// = 0 ... 2^//N//-1, is not unique, and it is not the best one. Consider the following [[Gray-code enumeration]]:

To understand how this enumeration works and why it is interesting, it is best to write out each configuration on a single line:
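To make the single-flip structure concrete, here is a sketch of a Gray-code flip generator (the function gray_flips and its bit-trick implementation are our illustration, not SMAC's pseudocode): successive configurations differ by exactly one spin, so the energy could be updated in O(1) at each step instead of being recomputed from scratch.

```python
def gray_flips(N):
    # stepping from configuration k-1 to k of the binary-reflected Gray
    # code flips the bit at the position of the lowest set bit of k
    for k in range(1, 2 ** N):
        yield (k & -k).bit_length() - 1

N = 4
S = [-1] * N                  # start from the all-minus configuration
visited = {tuple(S)}
for j in gray_flips(N):
    S[j] *= -1                # exactly one spin flip per step
    visited.add(tuple(S))
print(len(visited))           # all 2**N = 16 configurations, each seen once
```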

We notice that the word "enumeration" has two meanings: it can refer to the listing of items (as in our example), or to their counting. The famous Onsager solution of the Ising model, in the formulation of Kac and Ward, is such a counting method (see SMAC sect. 5.1.4).

=Metropolis algorithm= Here is a simple implementation of the Metropolis algorithm (SMAC algorithm 5.7, p. 250). A basic task in statistical physics is to write a local Metropolis algorithm for the Ising model. This program is even simpler than a basic Markov-chain simulation for hard disks, but the Ising model has a less immediate connection with classical mechanics (there is no molecular dynamics algorithm for this model). Its phase transition is better understood than that of hard disks, and the results can be compared with exact solutions in two dimensions, even for finite systems, so that very fine tests of the algorithm on lattices of any size are possible. Analogously to the hard-disk Markov-chain algorithm, we randomly pick a site and attempt to flip the spin at that site.



code format="python"
from random import choice, randint, uniform
from math import exp

# ... (square_neighbors as defined above)
L = 32
N = L * L
S = [choice([-1, 1]) for k in range(N)]
beta = 0.42
nbr, site_dic, x_y_dic = square_neighbors(L)
for i_sweep in range(100):
    for iter in range(N):
        k = randint(0, N - 1)
        h = sum(S[nbr[k][j]] for j in range(4))
        Delta_E = 2.0 * h * S[k]
        Upsilon = exp(-beta * Delta_E)
        if uniform(0.0, 1.0) < Upsilon:
            S[k] = -S[k]
code

The problem with the local Monte Carlo algorithm is its long correlation time. This can be most easily understood by looking at the histogram of the magnetization of the system. At high temperature, it is strongly peaked around zero; at low temperatures, it is bimodal; close to the critical point, it explores a region of width Δ//M// ~ //N// (without going into details).
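A sketch of how such a magnetization histogram can be collected with the local algorithm (L, beta and the number of sweeps are small illustrative choices; the inline neighbor table again stands in for square_neighbors):

```python
from random import choice, randint, uniform
from math import exp

L = 6
N = L * L
# compact inline neighbor table (right, up, left, down)
nbr = tuple((j // L * L + (j + 1) % L, (j + L) % N,
             j // L * L + (j - 1) % L, (j - L) % N)
            for j in range(N))
S = [choice([-1, 1]) for k in range(N)]
beta = 0.42
histo = {}
for sweep in range(1000):
    for step in range(N):
        k = randint(0, N - 1)
        h = sum(S[nbr[k][j]] for j in range(4))
        Delta_E = 2.0 * h * S[k]
        if uniform(0.0, 1.0) < exp(-beta * Delta_E):
            S[k] = -S[k]
    M = sum(S)                     # total magnetization after each sweep
    histo[M] = histo.get(M, 0) + 1
```

Plotting histo at several temperatures makes the narrow-peak, bimodal, and critical regimes directly visible.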

=Cluster algorithms= Single-spin-flip Monte Carlo algorithms are slow close to //T//_c, because the histogram of essential values of the magnetization is wide while the step width of the magnetization is small. To sample faster, we must foster moves which change the magnetization by more than ±2. However, using the single-spin-flip algorithm in parallel, on several sites at a time, only hikes up the rejection rate. Neither can we, so to speak, solidly connect all neighboring spins of the same orientation and flip them all at once: doing so would quickly lead to a perfectly aligned state, from which there would be no escape.

Swendsen and Wang, and later Wolff, proposed cluster algorithms which have been role models for about a generation of research. Starting from an initial site //k//, one adds aligned neighboring sites with probability //p// until the construction stops; the whole cluster is then flipped. This generates the move.

We now present and implement the Wolff cluster algorithm for the Ising model (SMAC algorithm 5.9, p. 257). Note that the Pocket and the Cluster could be taken as sets rather than lists. Note also that k need not be a random element of the Pocket: any element will do.

code format="python"
from random import choice, randint, uniform
from math import exp

# (add the above function for the neighbors...)
L = 32
N = L * L
S = [choice([-1, 1]) for k in range(N)]
beta = 0.4407
p = 1.0 - exp(-2.0 * beta)
nbr, site_dic, x_y_dic = square_neighbors(L)
for iter in range(100):
    k = randint(0, N - 1)
    Pocket = [k]
    Cluster = [k]
    while Pocket != []:
        k = choice(Pocket)
        for l in nbr[k]:
            if S[l] == S[k] and l not in Cluster and uniform(0.0, 1.0) < p:
                Pocket.append(l)
                Cluster.append(l)
        Pocket.remove(k)
    for k in Cluster:
        S[k] = -S[k]
    print(iter, len(Cluster))
code

=References=