Hello, bonjour, kumusta ka, sawasdee khrap! Welcome to the eighth week of Statistical Mechanics: Algorithms and Computations, from the Physics Department of École normale supérieure. This week and the next one, our subject will be the statistical mechanics and computational physics of the Ising model, which has inspired generations of physicists. This very simple model of spins on a lattice with simple pair interactions undergoes an order/disorder transition in dimensions 2 and higher, and, in two dimensions, it can be solved exactly, as first shown by Onsager in 1944. In this week's lecture, after a short introduction, we will treat Monte Carlo algorithms for the Ising model: first, a local Metropolis algorithm, and then a global cluster algorithm. Cluster Monte Carlo algorithms originated here, in the Ising model, and they have since revolutionized computations in many fields of classical and quantum physics. The algorithm we present here, fortunately for us, can be explained in a few minutes, and implemented in just over a dozen lines of Python code. In the tutorial, we will consider the heatbath algorithm, and illustrate one of its surprising features, namely that it couples and that it allows for perfect simulations. The Ising model is a unique meeting point for mathematics, computer science, and many branches of physics. It has a less immediate connection with classical mechanics than the hard-disks model, because there is no molecular dynamics and no kinetic energy, but the Ising model phase transition is much better understood, and in two dimensions there are exact solutions, even for finite systems, so this allows for very fine tests of the algorithms on lattices of any size. So let's get started, with the statistical mechanics and computational physics of the Ising model, and with week 8 of Statistical Mechanics: Algorithms and Computations.
The Ising model describes classical spins sigma that can be +1 or -1 on a lattice, like the two-dimensional square lattice shown here. We consider the ferromagnetic Ising model, where neighboring spins prefer to be aligned. Two neighboring up spins or two neighboring down spins have an energy -1, whereas two spins like this or like that have energy +1. This gives an energy of each configuration like this, where the sum is over all pairs. But attention: we count each pair only once. For concreteness, here is a list of the 16 configurations of the 2 by 2 Ising model in two dimensions. Without periodic boundary conditions, these two configurations have energy +4, these two have an energy -4, and all other configurations have energy 0. For small systems, we can enumerate all configurations explicitly, and a nice little algorithm implementing Gray-code enumeration is shown here. Here is the output of the movie version of enumerate_ising.py, for the 4x4 Ising model with periodic boundary conditions. Each configuration differs from the previous one on a single site only, and we don't have to recalculate the energy from scratch at each step. It is sufficient to compute the field H on the spin that is going to flip, and the new energy is then equal to the old energy plus 2*H*sigma. Keeping track of the energy like this, we can compute the partition function as a function of the temperature, the mean energy, the mean square energy, and the specific heat: the change of the mean energy with temperature. There are two things to notice. First of all, the specific heat, although it is a derivative of the energy with respect to temperature, does not have to be computed by numerical differentiation: it is simply given by a quantity that is proportional to the variance of the energy. Secondly, to compute the thermodynamic quantities, it is not necessary to do the enumeration at all temperatures.
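A minimal sketch of this enumeration, in the spirit of enumerate_ising.py (the function names and the 2x2 free-boundary test case are our choices, not necessarily the course's exact program), with the incremental energy update and the specific heat computed from the energy variance:

```python
import math

def gray_enumerate_ising(L):
    """Density of states N(E) of the L x L Ising model (free boundaries),
    enumerated with the Gray code and an incremental energy update."""
    N = L * L
    nbr = [[] for _ in range(N)]
    for k in range(N):                      # build the neighbor table
        if k % L + 1 < L:
            nbr[k].append(k + 1); nbr[k + 1].append(k)
        if k // L + 1 < L:
            nbr[k].append(k + L); nbr[k + L].append(k)
    n_bonds = sum(len(n) for n in nbr) // 2
    sigma = [-1] * N                        # all spins aligned...
    E = -n_bonds                            # ...every bond contributes -1
    dos = {E: 1}
    for i in range(1, 2 ** N):
        k = (i & -i).bit_length() - 1       # Gray code: flip the spin at the
                                            # lowest set bit of the counter
        h = sum(sigma[j] for j in nbr[k])   # field on the spin to be flipped
        E += 2 * sigma[k] * h               # new energy = old energy + 2*H*sigma
        sigma[k] *= -1
        dos[E] = dos.get(E, 0) + 1
    return dos

def thermo(dos, T):
    """Mean energy and specific heat at temperature T, from the density
    of states; the specific heat is the energy variance divided by T^2."""
    w = {E: n * math.exp(-E / T) for E, n in dos.items()}
    Z = sum(w.values())
    E_mean = sum(E * wE for E, wE in w.items()) / Z
    E2_mean = sum(E * E * wE for E, wE in w.items()) / Z
    return E_mean, (E2_mean - E_mean ** 2) / T ** 2
```

For L = 2 without periodic boundary conditions, this reproduces the count given above: two configurations at energy -4, two at +4, and twelve at 0.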
It's a lot better to compute the number of configurations per energy, the density of states. So, compute the density of states once and for all, like here in enumerate_ising.py, and then use this density of states at all temperatures to compute the thermodynamics, as in thermo_ising.py. So, here, you see the specific heat for small systems, and the curve already indicates the phase transition that is known to take place at a temperature of 2/log(1+sqrt(2)). We can do our enumerations up to system sizes of 4x4, or, with a little bit of effort, 6x6. But our pure Python program is a bit too slow for real number crunching, so let's go ahead and consider the sampling approach. Before doing so, however, please take a moment to download, run, and modify the programs that we discussed so far. There is this nice little program enumerate_ising.py, which uses the Gray-code enumeration of all the configurations. Try to understand what the Gray code does; you may need it later. In the Ising model, we can get very far by counting configurations, even though listing them becomes very difficult. But in general, for large systems and for systems that don't exactly correspond to the Ising energy -sigma_i sigma_j, the sampling approach is more reasonable. Analogously to what we did for hard disks, we select one site and attempt to flip the spin on that site. The move from configuration a to the flipped configuration b must be accepted with the Metropolis acceptance probability min(1, exp(-beta * Delta_E)). This algorithm in Python is the following: markov_ising.py. It is even simpler than the corresponding version for hard disks, markov_disks.py. Output of the graphics version of markov_ising.py is shown here. This program can be checked against the exact enumeration for small lattices, and I can assure you that everything comes out all right. This program also very easily recovers the phase transition that takes place between the ferromagnetic and the paramagnetic phase at a temperature just above 2.
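The local sampling algorithm just described can be sketched as follows (a minimal version in the spirit of markov_ising.py; the helper names, the periodic lattice, and the magnetization measurement are our choices):

```python
import math, random

def square_neighbors(L):
    """Neighbor table of an L x L square lattice with periodic boundaries."""
    N = L * L
    return [[(k + 1) % L + (k // L) * L, (k - 1) % L + (k // L) * L,
             (k + L) % N, (k - L) % N] for k in range(N)]

def markov_ising(L, T, n_steps, seed=0):
    """Local Metropolis Monte Carlo for the 2D Ising model; returns the
    mean absolute magnetization per spin over the run."""
    random.seed(seed)
    N = L * L
    nbr = square_neighbors(L)
    sigma = [random.choice([-1, 1]) for _ in range(N)]
    mag_sum = 0.0
    for step in range(n_steps):
        k = random.randrange(N)                 # select one site at random
        h = sum(sigma[j] for j in nbr[k])       # local field on that site
        delta_E = 2.0 * sigma[k] * h            # energy change of the flip
        if random.random() < math.exp(-delta_E / T):
            sigma[k] *= -1                      # Metropolis acceptance
        mag_sum += abs(sum(sigma)) / N
    return mag_sum / n_steps
```

At a temperature well below 2 this settles into a strongly magnetized, ferromagnetic state, while well above 2 the mean absolute magnetization is small: the program indeed traces the transition.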
You can trace this phase transition by plotting the mean absolute magnetization as a function of the temperature, even though there are more expert ways of locating Tc that you will discover in this week's homework session. You can also simply look at the configurations as a function of the temperature, and you see a very clear qualitative difference between the paramagnetic regime at high temperature and the ferromagnetic regime at lower temperatures. Close to the critical temperature, the local Monte Carlo algorithm slows down increasingly. In a nutshell, this is due to the increase of the correlation length in the system, and to the fact that turning around the spins in a larger correlated region becomes increasingly difficult. This slowing down close to the phase transition is also known in experiments, where it is called critical slowing down. Critical slowing down is overcome by the cluster algorithms that we discuss next. But before going to the next session, please take a moment to download, run, and modify the programs that we just discussed: the program markov_ising.py, with a local Markov-chain Monte Carlo algorithm, and its graphics version, markov_ising_movie.py. The traditional simulation methods for the Ising model, markov_ising.py and heatbath_ising.py, which we will discuss in this week's tutorial, have gradually given way to cluster algorithms that converge much faster. Rather than flipping spins individually at each time step, these algorithms construct larger ensembles of spins, clusters, on the length scale of the system, and flip them in one step. Cluster algorithms are truly elegant, very general, and, as you will see, they turn out to be very easily programmed. Suppose that, starting from a random initial site, rather than flipping the spin on that site, we attempt to construct a cluster of connected up spins, and flip them later.
Clearly, we cannot construct a cluster from all the connected up spins, flip them, then construct a cluster of all the connected down spins, and flip them again. Very soon, we would be flipping between the all-up and the all-down configuration, and the Boltzmann probabilities would be violated. So, from a spin already in the cluster, we should accept a spin outside of the cluster with probability p: sometimes we grow the cluster, and sometimes we don't. So here, in this example, we added all three new spins neighboring the site already present, each of them with probability p. We added them to the cluster and also to a pocket, just to remember that we haven't checked their neighbors yet. At some moment, the pocket is empty, for example for the cluster shown here. You see, and can count for yourself, that across the boundary of the cluster, there are 18 links plus-minus and 14 links plus-plus. For each of these 14 links, the extension of the cluster was rejected because, at some time, a random number was drawn that was larger than p. So then, we can flip this cluster, and arrive from the configuration a at the configuration b. Notice that in the configuration b, there are now 18 links minus-minus and 14 links minus-plus. Now we must take into account the detailed balance condition: the statistical weight of configuration a, times the probability to move from a to b, must be equal to the statistical weight of configuration b, times the probability to move from b to a. The probability to move from a to b is given by the probability to construct this cluster and then to stop the construction at this stage, times the probability to accept the move from a to b. Likewise, the probability to move from b to a is given by the probability of constructing the flipped cluster rather than another cluster and of stopping the construction at that stage, times the probability of accepting the move from b to a.
Now, consider the a priori probability A(a->b) to construct this cluster rather than another one, and to stop the construction of the cluster right here. In these a priori probabilities, there are terms that are the same in a and b, and they drop out of the picture: for example, the probability to add a plus spin to a plus spin already present in a is the same as the probability, in b, of adding a minus spin to a minus spin already there. The game is really played at the boundary: the probability to stop the cluster construction here in a differs from the probability to stop the cluster construction in b. Each site on the boundary of the cluster in a was once a pocket site, and the construction of this cluster came to a halt because none of the edges across the boundary was accepted. Precisely, in this cluster in a, there are 14 edges plus-plus, and the extension was rejected with a probability 1-p for each of them. So the a priori probability from a to b is proportional to (1-p)^14. Analogously, for the cluster in b, the construction also came to a halt because none of the possible minus-minus edges across the boundary was accepted. There are 18 minus-minus edges in b, so the a priori probability from b to a is proportional to (1-p)^18. Now, let us consider the statistical weights of configurations a and b: again, there are terms outside the cluster, but they are the same in a and b. There are terms inside the cluster; they also are the same in a and b. The game is played at the boundary. In configuration a, there are 18 links plus-minus, hence 18 terms +1, and 14 links plus-plus, hence 14 terms -1, in the energy. If we write N1 for the number of plus-minus links and N2 for the number of plus-plus links, the energy across the boundary is N1-N2 in a, and it is N2-N1 in b.
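In formulas, the boundary-counting argument reads as follows (beta is the inverse temperature; in the example above, N1 = 18 and N2 = 14; the notation is ours, written to match the standard derivation):

```latex
% Keeping only the boundary links (N_1 "+-" links and N_2 "++" links
% in configuration a), the statistical weights are
\pi_a \propto e^{-\beta (N_1 - N_2)}, \qquad
\pi_b \propto e^{-\beta (N_2 - N_1)},
% and the a priori construction probabilities, with one rejection per
% possible extension across the boundary, are
\mathcal{A}(a \to b) \propto (1-p)^{N_2}, \qquad
\mathcal{A}(b \to a) \propto (1-p)^{N_1}.
% Detailed balance then fixes the ratio of acceptance probabilities:
\frac{P(a \to b)}{P(b \to a)}
  = \frac{\pi_b \, \mathcal{A}(b \to a)}{\pi_a \, \mathcal{A}(a \to b)}
  = \left[ e^{2\beta} (1-p) \right]^{N_1 - N_2},
% which equals one, with no rejections, for p = 1 - e^{-2\beta}.
```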
Now we have all that it takes to write down the detailed balance condition, like here: we have the statistical weight of configuration a, times the a priori probability to construct this cluster, times the acceptance probability to move from a to b, equals the statistical weight of configuration b, times the construction probability, times the acceptance probability to move from b to a. This gives the Metropolis acceptance rate shown here, and you see we can simplify this acceptance probability and write the ratio as [exp(2 beta)(1-p)]^(N1-N2). You see that this expression again simplifies for the magic value p = 1 - exp(-2 beta). For this magic value, we have no rejections. So with this value of p, we may simply construct a cluster and flip it, then another construction and flip, another construction and flip, and so on. This is the famous cluster algorithm by Wolff, from 1989. In Python, this gives the following 19-line program. In this algorithm, you take a random spin and you put it into the cluster and into the pocket. Then, a complete cycle of the algorithm consists of taking a random element out of the pocket and checking all of its neighbors. For each of the neighbors, you check whether it is not already in the cluster, whether it has the same sign, and whether a random number between 0 and 1 is smaller than p. If these three conditions are satisfied, you add the new spin to the cluster and to the pocket. Once you have checked all the neighbors of this spin, you take it out of the pocket. Once the pocket is empty, you flip the entire cluster. This is all there is to one of the most influential algorithms in all of statistical physics. An example of how this cluster algorithm works is shown here. Close to the transition temperature of the Ising model, enormous clusters are constructed, and they are flipped without thinking twice. The algorithm moves through configuration space with breathtaking speed, and far outpaces the local Monte Carlo algorithm.
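A minimal sketch of this cluster algorithm, in the spirit of cluster_ising.py (the lattice helper, the driver loop, and the magnetization measurement are our additions):

```python
import math, random

def square_neighbors(L):
    """Neighbor table of an L x L square lattice with periodic boundaries."""
    N = L * L
    return [[(k + 1) % L + (k // L) * L, (k - 1) % L + (k // L) * L,
             (k + L) % N, (k - L) % N] for k in range(N)]

def wolff_ising(L, T, n_clusters, seed=0):
    """Wolff cluster Monte Carlo for the 2D Ising model; returns the
    mean absolute magnetization per spin over the run."""
    random.seed(seed)
    N = L * L
    nbr = square_neighbors(L)
    p = 1.0 - math.exp(-2.0 / T)            # the rejection-free value of p
    sigma = [random.choice([-1, 1]) for _ in range(N)]
    mag_sum = 0.0
    for step in range(n_clusters):
        k = random.randrange(N)             # seed spin: into cluster and pocket
        pocket, cluster = [k], {k}
        while pocket:
            site = pocket.pop()             # take an element out of the pocket
            for j in nbr[site]:
                # add a neighbor if it is not yet in the cluster, has the
                # same sign, and a random number is smaller than p
                if sigma[j] == sigma[site] and j not in cluster \
                        and random.random() < p:
                    pocket.append(j)
                    cluster.add(j)
        for site in cluster:                # pocket empty: flip the cluster
            sigma[site] *= -1
        mag_sum += abs(sum(sigma)) / N
    return mag_sum / n_clusters
```

Construction and flip, construction and flip: at low temperature the clusters span almost the whole lattice, and the configuration decorrelates in a handful of steps.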
It is very important to realize that the speedup with respect to markov_ising.py increases with system size: for a small system, the cluster algorithm may be 10 times faster; for larger systems, 100, 1000, 1000000... times faster than the local algorithm. So before continuing, please take a moment to go over the derivation of this cluster algorithm, and then to download, run, and modify cluster_ising.py, to see how these simple ideas are implemented in Python. In conclusion, we have introduced in this lecture the Ising model and its associated Monte Carlo algorithms. What we have presented here, and what we can program in just over a dozen lines today, has taken several decades to develop. Roughly speaking, in Ising model computations, there have been two crucial periods. The first crucial period was the following: once it was realized that Monte Carlo calculations were rather imprecise in the vicinity of the critical point, it took a long time to understand that there were two reasons for this. One was that it is quite complicated to compute observables close to the critical point, because of the critical slowing down, as we discussed. But there is a second reason: the observables of the infinite system are quite different from the observables in the finite system. The extrapolation from the finite system, where we can do our calculations, to the infinite system, which we are really interested in, is, alas, non-trivial. The second crucial period of Monte Carlo calculations came in the 1980s, when it was found that critical slowing down was not an inescapable fate for algorithms, as it is for experiments; this was realized when the first cluster algorithms came out. These cluster algorithms realize a non-physical dynamics, but they satisfy detailed balance, so their stationary distribution is exactly the same as that of the local algorithm.
As we have seen, they go through configuration space at breathtaking speed, without suffering from critical slowing down. We will immerse ourselves further in this fascinating subject in this week's tutorial and homework session, but for the time being, let me thank you once again for your attention. See you again next week, in week 9 of Statistical Mechanics: Algorithms and Computations, for faster-than-the-clock algorithms. And good luck with this week's homework session, homework session 8.